Run RockBot locally with Docker Compose on Docker Desktop. This is the simplest way to experiment with the agent — chat with it, explore its memory and skills system, and get a feel for how it works.
This minimal setup runs three containers: RabbitMQ (message bus), the RockBot Agent (the brain), and the Blazor UI (the chat interface). No Kubernetes, no Helm, no MCP servers.
What you can do: Chat, build long-term memory, learn skills, web search, use the dream consolidation service. MCP and A2A also work if you register HTTP endpoints (see Connecting MCP servers and A2A agents below).
What you can't do (without Kubernetes): Run sandboxed Python scripts (requires the scripts-manager pod with K8s API access).
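The actual compose file ships in the repo (see the layout below); as a mental model, a minimal file wiring up these three containers has roughly the following shape. The service names, image, and mappings here are illustrative guesses, not the contents of the real `docker-compose.yml`:

```yaml
services:
  rabbitmq:                        # message bus
    image: rabbitmq:3-management   # management UI on 15672
  agent:                           # the RockBot brain
    env_file: .env                 # LLM + Brave keys
    volumes:
      - agent-data:/data/agent     # soul.md, mcp.json, ...
    depends_on:
      - rabbitmq
  ui:                              # Blazor chat interface
    ports:
      - "8080:8080"
    depends_on:
      - agent

volumes:
  agent-data:
```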
- Docker Desktop installed and running
- An LLM provider — either:
  - An OpenAI-compatible LLM API key — OpenRouter is the easiest option, or
  - A GitHub Copilot license — uses per-request billing via the Copilot SDK (requires a GitHub token with the `copilot` scope)
- A Brave Search API key (free tier available) — https://api.search.brave.com/
The compose template and example `.env` file live in the repo at `deploy/docker-compose/`:

```
deploy/docker-compose/
  docker-compose.yml    # RabbitMQ + Agent + Blazor UI
  .env.example          # Template for your API keys
```
Copy them to a working directory (or run directly from the repo):

```
cp deploy/docker-compose/.env.example deploy/docker-compose/.env
```

Edit the `.env` file. Choose one of two LLM provider options:
Option A — OpenAI-compatible (per-token billing):

```
LLM_API_KEY=sk-or-v1-your-openrouter-key-here
LLM_ENDPOINT=https://openrouter.ai/api/v1
LLM_MODEL_ID=anthropic/claude-haiku-4.5
BRAVE_API_KEY=BSA-your-brave-key-here
```

Option B — GitHub Copilot SDK (per-request billing):

```
LLM_PROVIDER=Copilot
LLM_MODEL_ID=gpt-4.1
GITHUB_TOKEN=ghp_your-token-with-copilot-scope
BRAVE_API_KEY=BSA-your-brave-key-here
```

Do not commit the `.env` file to version control. It contains your API keys.
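Both options feed a standard OpenAI-compatible chat endpoint (Copilot goes through its SDK instead). As a rough illustration of what these variables configure, here is a hypothetical helper that builds such a request; `build_chat_request` is invented for this sketch and is not RockBot's actual client code:

```python
import os

def build_chat_request(user_message: str) -> tuple[str, dict, dict]:
    """Build URL, headers, and body for an OpenAI-compatible chat
    completion call from the same variables used in .env.
    Hypothetical helper; RockBot's real client is internal."""
    endpoint = os.environ.get("LLM_ENDPOINT", "https://openrouter.ai/api/v1")
    api_key = os.environ.get("LLM_API_KEY", "")
    model = os.environ.get("LLM_MODEL_ID", "anthropic/claude-haiku-4.5")

    url = endpoint.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, headers, body

url, headers, body = build_chat_request("Hello, RockBot!")
print(url)  # e.g. https://openrouter.ai/api/v1/chat/completions
```

Any provider in the table further down accepts this same request shape, which is why switching providers is just a matter of editing `.env`.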
```
cd deploy/docker-compose
docker compose up -d
```

Wait about 30 seconds for RabbitMQ to become healthy and the agent to connect. You can follow the agent logs to see when it's ready:

```
docker compose logs -f agent
```

Look for log output indicating the agent has connected to RabbitMQ and is listening for messages.
Open your browser to http://localhost:8080 to access the Blazor chat UI.
Start a conversation — the agent will respond using your configured LLM. Try things like:
- Ask it a question (it will use web search if needed)
- Ask it to remember something ("remember that I prefer dark mode")
- Ask it to recall something ("what do you know about me?")
- Ask it about its skills or capabilities
RabbitMQ Management UI: http://localhost:15672 (login: rockbot / rockbot)
Browse queues, exchanges, and message rates to see the event-driven architecture in action.
View logs:

```
# Agent logs
docker compose logs -f agent

# All services
docker compose logs -f
```

The agent's personality, directives, and behavior are defined by markdown files on the agent-data volume. To customize them:

```
# Find the volume mount path
docker volume inspect <your-directory>_agent-data

# Or copy a file into the running container
docker compose cp my-custom-soul.md agent:/data/agent/soul.md
```

Key files you can customize:
| File | Purpose |
|---|---|
| `soul.md` | Agent identity and personality |
| `directives.md` | Operating instructions and workflow patterns |
| `style.md` | Voice and tone |
| `memory-rules.md` | How the agent forms and manages memories |
The agent hot-reloads these files via FileSystemWatcher — changes take effect within seconds, no restart needed.
Any OpenAI-compatible endpoint works. Some options:

| Provider | Endpoint | Notes |
|---|---|---|
| OpenRouter | `https://openrouter.ai/api/v1` | Recommended — access to many models with one key |
| OpenAI | `https://api.openai.com/v1` | GPT-4o, GPT-4.1, etc. |
| Ollama (local) | `http://host.docker.internal:11434/v1` | Free, runs on your machine. Use `host.docker.internal` to reach the host from Docker |
To use Ollama locally, set in your `.env`:

```
LLM_ENDPOINT=http://host.docker.internal:11434/v1
LLM_API_KEY=ollama
LLM_MODEL_ID=llama3.1
```

If you have a GitHub Copilot license, you can use it as the LLM provider. The Copilot SDK bundles its own CLI binary — no separate installation needed. Models available include GPT-4.1, Claude Sonnet 4, and others depending on your Copilot subscription tier.
```
LLM_PROVIDER=Copilot
LLM_MODEL_ID=gpt-4.1
GITHUB_TOKEN=ghp_your-token-with-copilot-scope
```

Create a token at github.com/settings/tokens with the `copilot` scope.
Each tier (Low/Balanced/High) can use a different provider. Set a per-tier Provider override in your Helm values or environment variables. For example, Copilot for Low, OpenRouter for Balanced and High. See deploy/values.personal.example.yaml for examples.
By default the agent uses BM25 keyword search for memory, skills, and working memory recall. You can optionally add a text embedding model to enable hybrid search (BM25 + cosine similarity), which improves recall for semantically similar but lexically different content.
Any OpenAI-compatible embedding endpoint works. The easiest local option is Ollama:
- Install and start Ollama on your host machine
- Pull an embedding model: `ollama pull nomic-embed-text`
- Uncomment the embedding lines in the agent environment of your `docker-compose.yml` and in your `.env`:

  ```
  EMBEDDING_ENDPOINT=http://host.docker.internal:11434
  EMBEDDING_MODEL=nomic-embed-text
  ```

- Restart the agent: `docker compose restart agent`
The agent logs will confirm: `Embedding model configured: nomic-embed-text @ http://host.docker.internal:11434`. If the embedding config is missing or the endpoint is unreachable, the agent falls back to BM25-only search with no loss of functionality.
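Conceptually, hybrid recall blends the two scores. The sketch below is a toy illustration of the idea, not RockBot's implementation: `keyword_score` is a crude stand-in for BM25, and the 50/50 weighting is an assumption:

```python
import math
from collections import Counter

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    """Crude term-overlap score standing in for BM25."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return float(sum(min(q[t], d[t]) for t in q))

def hybrid_score(query: str, doc: str,
                 q_vec: list[float], d_vec: list[float],
                 alpha: float = 0.5) -> float:
    """Blend keyword and semantic scores; alpha is an assumed weight."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)

# "dark theme" shares one word with the memory, but the embeddings
# (toy vectors here) still pull the semantic score up.
print(hybrid_score("dark theme", "prefers dark mode", [0.9, 0.1], [0.8, 0.2]))
```

This is the benefit the doc describes: a query like "dark theme" can still rank a memory about "prefers dark mode" highly even with little lexical overlap.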
The agent's MCP bridge and A2A client make standard HTTP calls — they work the same in Docker as in Kubernetes. Any SSE-based MCP server or A2A agent reachable over HTTP can be registered.
Edit /data/agent/mcp.json on the volume to add SSE-based MCP servers:
```json
{
  "mcpServers": {
    "my-server": {
      "type": "sse",
      "url": "http://host.docker.internal:3000/"
    }
  }
}
```

Use `host.docker.internal` to reach servers running on your host machine. For servers running as additional Compose services, use the service name as the hostname (e.g., `http://my-mcp-service:8080/`).
The agent watches mcp.json with a FileSystemWatcher and hot-reloads when the file changes — no restart needed.
Note: stdio-based MCP servers (the `"command"` transport) are not supported in this Docker setup — only HTTP/SSE endpoints.
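A small pre-flight check can catch a stray stdio entry before the hot-reload picks it up. This is an illustrative script (the `check_mcp_config` helper is invented here), matching the schema shown above:

```python
import json

def check_mcp_config(text: str) -> list[str]:
    """Return a list of problems found in an mcp.json document.
    Only the 'sse' transport with a 'url' works in this Docker setup."""
    problems = []
    config = json.loads(text)
    for name, server in config.get("mcpServers", {}).items():
        if server.get("type") != "sse":
            problems.append(f"{name}: type must be 'sse', got {server.get('type')!r}")
        if "url" not in server:
            problems.append(f"{name}: missing 'url'")
    return problems

sample = '{"mcpServers": {"my-server": {"type": "sse", "url": "http://host.docker.internal:3000/"}}}'
print(check_mcp_config(sample))  # [] (no problems found)
```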
Edit /data/agent/well-known-agents.json on the volume before starting the agent to pre-register A2A agents:
```json
[
  {
    "agentName": "my-agent",
    "description": "What this agent does",
    "version": "1.0",
    "url": "http://host.docker.internal:5000",
    "skills": [
      {
        "id": "my-skill",
        "name": "My Skill",
        "description": "What this skill does"
      }
    ]
  }
]
```

This file is read once at startup, so changes require a restart (`docker compose restart agent`). To register agents at runtime without restarting, ask the agent to use its `register_agent` tool during a conversation.
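Since a malformed entry costs you a restart, it can be worth sanity-checking the file first. An illustrative checker (invented for this sketch), using the fields from the example above:

```python
import json

REQUIRED_AGENT_FIELDS = {"agentName", "description", "version", "url", "skills"}
REQUIRED_SKILL_FIELDS = {"id", "name", "description"}

def check_agents_file(text: str) -> list[str]:
    """Validate the shape of well-known-agents.json entries."""
    problems = []
    agents = json.loads(text)
    for i, agent in enumerate(agents):
        missing = REQUIRED_AGENT_FIELDS - agent.keys()
        if missing:
            problems.append(f"agent {i}: missing {sorted(missing)}")
        for skill in agent.get("skills", []):
            missing = REQUIRED_SKILL_FIELDS - skill.keys()
            if missing:
                problems.append(f"agent {i}: skill missing {sorted(missing)}")
    return problems
```

Run it against the file before `docker compose restart agent`; an empty list means every entry has the fields the example shows.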
```
# Stop all containers (data is preserved in volumes)
docker compose down

# Stop and remove all data (fresh start)
docker compose down -v
```

This minimal setup gets you chatting with the agent. For the full experience with script execution, MCP tools, and multi-agent coordination, see the Helm deployment guide for Kubernetes.