
Installation

First build takes a few minutes while Rust compiles dependencies. Subsequent builds are incremental and much faster.

Prerequisites

Requirement      Version         Install
Node.js          18+             nodejs.org
Rust             Latest stable   rustup.rs
Platform deps    -               See Tauri prerequisites

Linux (Debian/Ubuntu)

sudo apt update
sudo apt install libwebkit2gtk-4.1-dev build-essential curl wget file \
  libxdo-dev libssl-dev libayatana-appindicator3-dev librsvg2-dev

macOS

xcode-select --install

Windows

Install Visual Studio Build Tools with the “Desktop development with C++” workload and the Windows 10/11 SDK.
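Before moving on, it can help to confirm the toolchain is actually on PATH. A minimal sketch (works in bash on any of the platforms above; the helper name is ours):

```shell
# check_tools: report which required build tools are available on PATH
check_tools() {
  for tool in node cargo rustc; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "ok: $tool ($("$tool" --version 2>/dev/null | head -n 1))"
    else
      echo "missing: $tool"
    fi
  done
}
check_tools
```

If any line reports `missing`, revisit the matching prerequisite above before building.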

Clone and build

# Clone the repository
git clone https://github.com/OpenPawz/openpawz.git
cd openpawz

# Install frontend dependencies (requires pnpm: npm install -g pnpm)
pnpm install

# Run in development mode (hot-reload)
pnpm tauri dev

Production build

pnpm tauri build
The built app is placed in src-tauri/target/release/bundle/ as a platform-specific installer (.dmg, .deb, .msi, etc.).
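To locate the generated installers after a build, a small helper like this can be used (a sketch; the function name is ours, the path is the Tauri bundle directory):

```shell
# list_bundles: print installer files under the Tauri bundle directory
list_bundles() {
  local dir=src-tauri/target/release/bundle
  if [ -d "$dir" ]; then
    find "$dir" -maxdepth 2 -type f
  else
    echo "no bundles yet - run: pnpm tauri build"
  fi
}
list_bundles
```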

Verify

After launching, OpenPawz opens to the Today dashboard. Go to Settings → Providers to configure your first AI provider, then create agents and start chatting.

Ollama (recommended)

Ollama powers three systems in OpenPawz: low-cost tool execution, semantic memory, and offline chat. We strongly recommend installing it for zero-cost local inference, but these roles can also be filled by cloud models from any provider.
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Worker model — executes MCP tool calls (any model works; local = $0)
ollama pull qwen2.5-coder:7b

# Embedding model — semantic memory + tool discovery (any embedding model works; local = $0)
ollama pull nomic-embed-text

# Chat fallback — offline conversations when no API is available
ollama pull llama3.2:3b
OpenPawz auto-detects Ollama on localhost:11434, auto-starts it if needed, and auto-pulls the embedding model on first launch.
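You can probe the same endpoint yourself to confirm Ollama is reachable. A sketch using Ollama's /api/tags endpoint (it lists pulled models; the helper name is ours):

```shell
# check_ollama: probe the local Ollama API on the port OpenPawz expects
check_ollama() {
  if curl -fsS --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
    echo "Ollama is running on localhost:11434"
  else
    echo "Ollama not reachable - start it with: ollama serve"
  fi
}
check_ollama
```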

Worker Model Setup

The worker model (worker-qwen) is what makes tool execution free. When your cloud AI agent (Gemini, Claude, GPT) needs to run a tool via the MCP Bridge, it delegates to the local worker instead of making additional API calls. To set up the worker:
  1. Go to Settings → Advanced → Ollama
  2. Click Setup Worker Agent
  3. This creates a custom worker-qwen model with the right system prompt
Or from the command line:
ollama pull qwen2.5-coder:7b  # Base model (~4.7 GB)
The one-click setup in Settings creates the optimized worker-qwen variant automatically.
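For reference, the one-click setup is roughly equivalent to building the variant from an Ollama Modelfile. This is an assumption-laden sketch: the SYSTEM prompt below is a placeholder, not OpenPawz's actual worker prompt.

```shell
# Hypothetical equivalent of "Setup Worker Agent": build worker-qwen from a Modelfile.
# The SYSTEM prompt is illustrative only - OpenPawz ships its own.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b
SYSTEM "You are a local worker model. Execute the requested MCP tool calls and return only their results."
EOF

# Build the variant (skipped here if the ollama CLI is not on PATH)
if command -v ollama >/dev/null 2>&1; then
  ollama create worker-qwen -f Modelfile
fi
```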
Without Ollama, tool calls go through your paid AI provider. With Ollama, they’re free. See the MCP Bridge guide for the full Architect/Worker architecture.