Troubleshooting
Common issues and how to fix them.

Build issues
Rust compilation fails
Symptoms: cargo build errors, missing system dependencies
Fix: Install the Rust toolchain and the system build dependencies for your platform.
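The exact steps depend on your platform. As an illustrative sketch for Debian/Ubuntu (the package names are assumptions; consult the Tauri prerequisites for your distro and Tauri version):

```shell
# Install the Rust toolchain if rustc/cargo are missing
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Typical system libraries a Tauri desktop build needs on Debian/Ubuntu
sudo apt-get update
sudo apt-get install -y build-essential pkg-config libssl-dev \
  libgtk-3-dev libwebkit2gtk-4.1-dev librsvg2-dev
```

Re-run cargo build afterwards; the compiler error usually names the missing library.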
Node.js errors
Symptoms: pnpm install fails, version mismatch
Fix: Ensure Node.js 18+ and pnpm are installed:
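One way to verify the prerequisite is a small POSIX-sh check (the node_major helper below is hypothetical, not part of Pawz):

```shell
# node_major: extract the major version from a `node --version` string,
# e.g. "v18.19.0" -> 18  (hypothetical helper for illustration)
node_major() {
  v="${1#v}"
  printf '%s\n' "${v%%.*}"
}

# Warn if the installed Node.js (if any) is older than 18
installed="$(node --version 2>/dev/null || echo v0.0.0)"
if [ "$(node_major "$installed")" -lt 18 ]; then
  echo "Node.js 18+ required (found: $installed)"
fi
```

With a recent Node.js in place, npm install -g pnpm is one way to get pnpm.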
Provider issues
“No providers configured”
Symptoms: Agent can’t respond, no model available
Fix: Add at least one provider in Settings → Providers.

Provider returns errors
Symptoms: Billing, auth, or rate limit errors
Fix:
- Check your API key is valid
- Check your account has credits
- Add a second provider for automatic fallback
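To tell these apart, the HTTP status code from the provider is the quickest signal. A sketch assuming an OpenAI-style API (the explain_status helper and the endpoint choice are illustrative, not part of Pawz):

```shell
# explain_status: map an HTTP status from a provider API to a likely cause
# (hypothetical helper for illustration)
explain_status() {
  case "$1" in
    200)     echo "key OK" ;;
    401)     echo "invalid API key" ;;
    402|403) echo "billing problem or no credits" ;;
    429)     echo "rate limited - wait or add a fallback provider" ;;
    *)       echo "unexpected status $1" ;;
  esac
}

# Example probe (requires network and $OPENAI_API_KEY):
#   status=$(curl -s -o /dev/null -w '%{http_code}' \
#     -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/v1/models)
#   explain_status "$status"
```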
Ollama not detected
Symptoms: Pawz doesn’t see local Ollama
Fix: Make sure the Ollama service is running (ollama serve) and reachable on its default port, 11434.

Model not found
Symptoms: “Model not found” error
Fix: Pull the model locally (ollama pull <model>) or select a model your provider actually offers.

Memory issues
Embeddings fail
Symptoms: Memory search returns no results, embedding errors
Fix: Make sure a provider that supports embeddings is configured, and check the Pawz logs for the underlying error.

Old memories have no embeddings
Symptoms: Memories created before embedding setup don’t appear in search
Fix: Use the backfill button in Memory Palace to retroactively embed all memories.

Channel issues
Channel won’t connect
Symptoms: Channel status stays “disconnected”
Fix:
- Verify the bot token is correct
- Check network connectivity
- Ensure the bot has proper permissions on the platform
- Check the Pawz logs for error details
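For Telegram specifically, the real getMe endpoint is a quick token sanity check. A sketch (the token_ok helper is hypothetical; other platforms have analogous "who am I" calls):

```shell
# Probe (requires network; TOKEN is your bot token):
#   curl -s "https://api.telegram.org/bot${TOKEN}/getMe"
# A valid token returns JSON like {"ok":true,"result":{...}};
# an invalid one returns {"ok":false,"error_code":401,...}.

# token_ok: classify a getMe response body (hypothetical helper)
token_ok() {
  case "$1" in
    *'"ok":true'*) echo yes ;;
    *)             echo no ;;
  esac
}
```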
Messages blocked
Symptoms: Some messages aren’t getting through
Cause: The prompt injection scanner is blocking messages with critical severity.
Fix: This is working as intended — the messages contained injection patterns. Check the logs to see what was blocked.
Bot not responding in groups
Symptoms: Bot works in DMs but not in group chats
Fix:
- Telegram: Disable Group Privacy in @BotFather
- Discord: Ensure the bot has “Read Message History” permission
- Slack: Invite the bot to the channel with /invite @bot
Docker sandbox
“Docker not available”
Symptoms: Container sandbox fails
Fix: Install Docker and make sure the daemon is running (docker info should succeed).

Container times out
Symptoms: Commands hit the timeout limit
Fix: Increase the timeout in Settings → Advanced → Container Sandbox.

Performance
Slow responses
Causes:
- Large context — use /compact to summarize older messages
- Too many memories — reduce recall_limit in memory settings
- Slow model — switch to a faster model (gpt-4o-mini, gemini-2.0-flash)
- Ollama on CPU — use a smaller model or get a GPU
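For the Ollama-on-CPU case, switching models is a one-liner; the model tag below is only an example (pick any small model from the Ollama library):

```shell
# Pull a smaller model (example tag; requires network and a running Ollama)
ollama pull llama3.2:1b
ollama list   # confirm the model downloaded
```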
Chat behavior
Agent keeps repeating the same failed approach
Symptoms: The agent tries the same broken tool call over and over.
What Pawz does: Failed tool exchanges are automatically deleted from context before each turn, so the agent gets a clean slate on retry. If you’re still seeing loops, the agent may be hitting a genuine capability limitation.
Fix:
- Send a new message with more specific instructions
- Try a different model with stronger reasoning (e.g., Claude Sonnet 4, Gemini 2.5 Pro)
- Start a new session with /new for a completely fresh context
Can I send messages while the agent is working?
Yes. Pawz queues your message automatically. The active agent wraps up at the next tool-call boundary, then processes your queued message. You don’t need to wait for a response before typing.

High memory usage
Fix:
- Reduce max_concurrent_runs in Settings → Engine
- Use smaller models
- Close unused browser profiles
Logs
Check the Tauri logs for detailed error information.
Use /debug in any chat to toggle debug mode for verbose output.
Use /status to see current engine configuration and provider status.
