Setup Guide

Hardware requirements, dependencies, and the creator's reference architecture.

1. Prerequisites

What you need before installing Cerebro:

  • Claude Subscription (Pro or Max)

    Cerebro uses your native Claude Code subscription. Zero extra API cost.

  • Python 3.10+

    With pip installed. Check with python --version.

  • An MCP-Compatible Client

    Claude Code, Cursor, Windsurf, or Claude Desktop

  • 4GB+ RAM, 500MB Disk Space

    More RAM recommended if using local embeddings
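
Before installing, the Python and pip prerequisites can be verified with a quick shell check (`python3` is assumed to be on your PATH; use `python` on Windows):

```shell
# Confirm the interpreter meets the 3.10+ requirement
python3 - <<'EOF'
import sys
assert sys.version_info >= (3, 10), "Cerebro requires Python 3.10+"
print("Python OK:", ".".join(map(str, sys.version_info[:3])))
EOF

# Confirm pip is available for this interpreter
python3 -m pip --version
```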

2. The Creator's Setup

The reference architecture powering Cerebro's development — a distributed home lab with dedicated compute, storage, and networking.

  • Windows 11 PC

    Main dev machine, Python 3.13. Development, Claude Code, daily driver.

  • ASUS GX10

    119GB RAM, NVIDIA GB10 GPU, Ubuntu. Cerebro Server: full desktop app, backend, agents, cognitive loop.

  • NVIDIA DGX Spark

    128GB RAM, GB10 GPU. Distributed embeddings, Ollama LLM, GPU-accelerated FAISS.

  • PROFESSORS-NAS

    16TB Synology NAS. Centralized AI Memory storage; all devices read/write here.

  • Home Server (Pi 5)

    Raspberry Pi 5, Ubuntu 24.04. Pi-hole DNS, Portainer, Jellyfin, Tailscale VPN.

You don't need all of this. Cerebro runs great on a single laptop. This is what happens when you go all-in.

3. Recommended Setups

Pick the tier that matches your setup. Each includes the full workflow — what to install, how to configure, and what your day-to-day looks like.

Minimal

(Laptop only)

Everything runs on a single machine. Memory stored locally. No network, no GPU, no NAS.

What You Need

  • Python 3.10+
  • pip
  • Claude Pro or Max subscription
  • Any MCP client

Setup Steps

1. Install Cerebro

The base install includes all 49 MCP tools with keyword search.

terminal
pip install cerebro-ai

2. Initialize storage

Creates ~/.cerebro with the SQLite database, config, and FAISS index.

terminal
cerebro init

3. Add to your MCP client

Add this to your Claude Desktop, Claude Code, Cursor, or Windsurf config:

claude_desktop_config.json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"]
    }
  }
}

4. Verify

terminal
cerebro status

Day-to-day workflow

Open your MCP client, start chatting. Cerebro automatically saves conversations, extracts facts, and builds your memory. Search with search(), save insights with record_learning(). All data lives in ~/.cerebro on your machine.

Best for: Trying it out, personal projects, single-machine setups

Enthusiast

(Desktop + NAS) Recommended

Add a NAS for centralized, persistent memory accessible from any device on your network. Includes semantic search with embeddings.

What You Need

  • Everything in Minimal
  • NAS (Synology, QNAP, TrueNAS, etc.)
  • 8GB+ RAM (for embeddings)
  • Network connection to the NAS

Setup Steps

1. Install with embeddings

This adds sentence-transformers and FAISS for semantic vector search.

terminal
pip install "cerebro-ai[embeddings]"

2. Mount your NAS

Create a shared folder on your NAS, then mount it on every machine you use:

macOS / Linux
# Mount NAS share (replace with your NAS IP and share name)
sudo mount -t nfs YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas

# Or add to /etc/fstab for auto-mount on boot
# YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas nfs defaults 0 0
Windows
:: Map network drive (replace with your NAS IP and share)
net use Z: \\YOUR_NAS_IP\AI_MEMORY /persistent:yes

3. Initialize with NAS path

terminal
# Point Cerebro at your NAS
CEREBRO_STORAGE_PATH=/mnt/nas/cerebro cerebro init

# Windows (cmd) — quote the assignment so no trailing space sneaks into the path:
# set "CEREBRO_STORAGE_PATH=Z:\cerebro" && cerebro init

4. Configure MCP client with NAS storage

claude_desktop_config.json
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"],
      "env": {
        "CEREBRO_STORAGE_PATH": "/mnt/nas/cerebro"
      }
    }
  }
}

On Windows, use "CEREBRO_STORAGE_PATH": "Z:\\cerebro"

5. Repeat on other machines

Install Cerebro + mount the NAS on each machine you use. They all read/write the same memory database. No sync needed — it's the same files.
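
When bringing up an additional machine, a small guard script can catch an unmounted share before you accidentally start against an empty local path — a minimal sketch, assuming the `/mnt/nas/cerebro` example path from the steps above:

```shell
# Sketch: warn if the shared storage path is missing before using Cerebro
# on a new machine ("/mnt/nas/cerebro" is the example path used above)
STORAGE="${CEREBRO_STORAGE_PATH:-/mnt/nas/cerebro}"
if [ -d "$STORAGE" ]; then
  echo "storage OK: $STORAGE"
else
  echo "warning: $STORAGE not found -- is the NAS share mounted?" >&2
fi
```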

Day-to-day workflow

Work from any machine — your laptop, desktop, or a VM. Cerebro reads and writes to the same NAS-backed memory. Semantic search finds memories by meaning, not just keywords. Switch devices mid-conversation and pick up exactly where you left off.

Tip: Any NAS that supports NFS or SMB works — Synology, QNAP, TrueNAS, or even a Raspberry Pi with an external drive. The key is a shared directory all your machines can access.
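
For the Raspberry Pi route, exporting a directory over NFS takes only a few commands — a sketch assuming a Debian/Ubuntu Pi, an example share path of /srv/ai_memory, and a 192.168.1.0/24 home subnet (adjust both to your setup):

```shell
# On the Pi: install the NFS server and create the shared directory
sudo apt install -y nfs-kernel-server
sudo mkdir -p /srv/ai_memory

# Export it read/write to the local network (adjust the subnet to yours)
echo "/srv/ai_memory 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On each client, mount it like any other NAS share
sudo mount -t nfs PI_IP:/srv/ai_memory /mnt/nas
```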

Best for: Daily use, multiple workstations, persistent memory across devices

Power User

(Full home lab) Creator's setup

Dedicated GPU server running Cerebro Pro, NAS for storage, optional GPU compute node for embeddings. Docker-orchestrated.

What You Need

  • Everything in Enthusiast
  • Dedicated server (GPU recommended)
  • Docker + Docker Compose
  • Cerebro Pro license

Setup Steps

1. Install with GPU support on your server

SSH into your dedicated server and install with GPU acceleration:

terminal (on server)
pip install "cerebro-ai[gpu]"

# Verify GPU is detected
python -c "import torch; print(torch.cuda.is_available())"

2. Mount NAS on the server

Same as the Enthusiast tier — mount your NAS so the server can read/write the shared memory:

terminal (on server)
sudo mount -t nfs YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas

# Initialize Cerebro pointing to NAS
CEREBRO_STORAGE_PATH=/mnt/nas/cerebro cerebro init

3. Deploy with Docker Compose

For always-on deployment with Redis caching and automatic restarts:

docker-compose.yml
services:
  cerebro:
    image: professorlow/cerebro:latest
    environment:
      - CEREBRO_LICENSE_KEY=your-license-key
      - CEREBRO_STORAGE_PATH=/data
      - CEREBRO_EMBEDDING_MODEL=all-MiniLM-L6-v2
    volumes:
      - /mnt/nas/cerebro:/data
    ports:
      - "8420:8420"
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis-data:/data

volumes:
  redis-data:

4. Start the stack

terminal (on server)
docker compose up -d

# Verify it's running
docker compose ps
curl http://localhost:8420/health

5. Point your workstations at the server

On each dev machine, you can either run Cerebro locally (pointed at NAS) or connect to the server's MCP endpoint:

claude_desktop_config.json (local install, NAS storage)
{
  "mcpServers": {
    "cerebro": {
      "command": "cerebro",
      "args": ["serve"],
      "env": {
        "CEREBRO_STORAGE_PATH": "/mnt/nas/cerebro"
      }
    }
  }
}

Optional: Dedicated GPU Compute Node

If you have a second GPU machine (like a DGX Spark), you can offload embedding generation and Ollama LLM inference to it. Install cerebro-ai[gpu] on that node and configure it to write to the same NAS path. This distributes the compute load across your network.
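
As a sketch of that setup, run the following on the compute node — the mount command and paths simply reuse the examples from earlier steps, and nothing here is specific to a DGX Spark:

```shell
# Sketch: prepare a second GPU node to share the embedding/inference load
pip install "cerebro-ai[gpu]"

# Mount the same NAS share so every node works on one memory database
sudo mount -t nfs YOUR_NAS_IP:/volume1/AI_MEMORY /mnt/nas

# Point this node at the shared storage, then initialize
export CEREBRO_STORAGE_PATH=/mnt/nas/cerebro
cerebro init
```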

Day-to-day workflow

Your server runs 24/7. The Cerebro desktop app (Pro) connects to it for the full experience — agents, cognitive loop, autonomous reasoning. Your dev machines run Claude Code with the MCP tools pointed at the same NAS. Everything stays in sync because it's the same storage. GPU-accelerated FAISS makes semantic search instant even with millions of memories.

Best for: Agents, autonomy, multi-device workflows, maximum performance

4. Full Dependency Breakdown

Everything Cerebro installs, broken down by install tier.

Core Dependencies

Package            Version     Purpose
mcp                >=1.25.0    MCP protocol server
anyio              latest      Async I/O framework
numpy              latest      Numerical operations
pydantic           latest      Data validation
python-dateutil    latest      Date/time parsing

Embeddings (recommended)

Package                  Version     Purpose
sentence-transformers    >=5.0.0     Embedding model loading
faiss-cpu                >=1.13.0    Vector similarity search

GPU Acceleration (optional)

Package      Version     Purpose
faiss-gpu    latest      GPU-accelerated FAISS
torch        >=2.0.0     PyTorch for GPU compute

Install Commands

terminal
pip install cerebro-ai                     # Minimal
pip install "cerebro-ai[embeddings]"       # With semantic search (recommended)
pip install "cerebro-ai[gpu]"              # With GPU acceleration

Docker stack: Redis 7 and Python 3.12-slim base image. External services (Ollama, Redis, DGX) are all optional.

5. What Cerebro Does NOT Require

Common assumptions that are wrong — Cerebro is simpler than you think:

  • No API keys

    Uses your Claude subscription directly

  • No cloud account

    100% local — your data never leaves your machine

  • No database server

    Built-in SQLite storage, zero config

  • No Playwright or browser tools

    Memory server only — browser automation is a Cerebro Pro desktop feature

  • No Redis

    Only needed in the Docker stack for caching

Next Steps