# Installation
The first build takes a few minutes while Rust compiles dependencies; subsequent builds are incremental and much faster.
## Prerequisites
| Requirement | Version | Install |
|---|---|---|
| Node.js | 18+ | nodejs.org |
| Rust | Latest stable | rustup.rs |
| Platform deps | — | See Tauri prerequisites |
### Linux (Debian/Ubuntu)
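On Debian/Ubuntu, Tauri needs the WebKitGTK and build toolchain packages. The list below follows Tauri v2's documented prerequisites; package names shift between Tauri versions, so verify against the Tauri prerequisites page if anything fails to resolve.

```shell
# System dependencies for building a Tauri app on Debian/Ubuntu
# (Tauri v2 package names; check the Tauri prerequisites page for your version)
sudo apt update
sudo apt install -y \
  libwebkit2gtk-4.1-dev \
  build-essential \
  curl wget file \
  libssl-dev \
  libayatana-appindicator3-dev \
  librsvg2-dev
```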
### macOS
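On macOS the only system-level requirement is the Xcode Command Line Tools, which provide `clang` and the macOS SDK headers the Rust build needs:

```shell
# Installs the compiler toolchain and macOS SDK headers.
# This is a no-op (with a notice) if the tools are already present.
xcode-select --install
```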
### Windows
Install Visual Studio Build Tools with the “Desktop development with C++” workload and the Windows 10/11 SDK.

## Clone and build
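A typical clone-and-run sequence looks like the following. The repository URL is a placeholder (the real OpenPawz repo URL isn't stated here), and the `tauri dev` script assumes the standard Tauri npm wiring; check the project's `package.json` for the actual script names.

```shell
# Placeholder URL — substitute the real OpenPawz repository
git clone https://github.com/your-org/openpawz.git
cd openpawz

npm install          # install JS dependencies
npm run tauri dev    # dev build with hot reload; first run compiles Rust deps
```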
### Production build
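Assuming the default Tauri npm script is wired up (an assumption; confirm in `package.json`), a release build with bundled installers is:

```shell
# Compiles Rust in release mode and produces platform installers
npm run tauri build
```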
The output lands in `src-tauri/target/release/bundle/` as a platform-specific installer (`.dmg`, `.deb`, `.msi`, etc.).
## Verify
After launching, OpenPawz opens to the Today dashboard. Go to Settings → Providers to configure your first AI provider, then create agents and start chatting.

## Recommended: Ollama (local AI)
Ollama powers three systems in OpenPawz: low-cost tool execution, semantic memory, and offline chat. We strongly recommend installing it for zero-cost local inference, though these roles can also be filled by cloud models from any provider. OpenPawz detects Ollama at `localhost:11434`, auto-starts it if needed, and auto-pulls the embedding model on first launch.
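Installing Ollama is a one-liner on Linux and macOS (the install script below is Ollama's official one; Windows users should use the installer from ollama.com). The verification request assumes the default port OpenPawz checks:

```shell
# Install Ollama (official install script, Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the server answers on the address OpenPawz probes
curl -s http://localhost:11434/api/tags
```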
### Worker Model Setup
The worker model (`worker-qwen`) is what makes tool execution free. When your cloud AI agent (Gemini, Claude, GPT) needs to run a tool via the MCP Bridge, it delegates to the local worker instead of making additional API calls.
To set up the worker:
- Go to Settings → Advanced → Ollama
- Click Setup Worker Agent
- This creates a custom `worker-qwen` model with the right system prompt
- The MCP Bridge then uses the `worker-qwen` variant automatically
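After running the setup, you can confirm from the command line that the custom model exists (the model name comes from the steps above):

```shell
# Lists local models; the worker model should appear once setup has run
ollama list | grep worker-qwen
```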
Without Ollama, tool calls go through your paid AI provider. With Ollama, they’re free. See the MCP Bridge guide for the full Architect/Worker architecture.

