Clawdbot’s minimum system requirements are modest because the “Gateway” is mainly a control plane that coordinates messaging channels, skills, and model calls rather than doing heavy compute itself. For a basic, always-on personal assistant that handles a few chats and lightweight skills, you can run Clawdbot on a small Linux VPS, a home server, or even ARM hardware such as a Raspberry Pi-class device. A practical baseline is 1 vCPU, 1 GB RAM, and 2–5 GB of free disk for the installation plus logs and workspace files. You also need reliable outbound internet (to reach messaging platforms and model APIs) and the ability to keep a single process running 24/7. If you plan to use multiple channels or store more history, bumping to 2 GB RAM adds useful headroom, especially when you enable richer skills that spawn subprocesses or maintain local indexes.
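As a quick sanity check before installing, you can compare a host against that baseline. The sketch below is illustrative, not part of Clawdbot: the thresholds mirror the numbers above, and the RAM check reads /proc/meminfo, which only exists on Linux (it returns None elsewhere).

```python
# Sketch: check whether this host meets the suggested baseline
# (1 vCPU, 1 GB RAM, ~5 GB free disk). Thresholds and function name
# are illustrative assumptions, not part of Clawdbot itself.
import os
import shutil

def meets_baseline(path="/", min_cpus=1, min_ram_gb=1.0, min_disk_gb=5.0):
    """Return pass/fail results for the suggested sizing baseline."""
    cpus = os.cpu_count() or 0

    # RAM via /proc/meminfo (Linux only); None if unavailable.
    ram_gb = None
    try:
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    ram_gb = int(line.split()[1]) / 1024 / 1024  # kB -> GB
                    break
    except OSError:
        pass

    free_gb = shutil.disk_usage(path).free / 1024**3
    return {
        "cpu_ok": cpus >= min_cpus,
        "ram_ok": None if ram_gb is None else ram_gb >= min_ram_gb,
        "disk_ok": free_gb >= min_disk_gb,
    }

print(meets_baseline())
```

If any check fails, size the VPS up before installing rather than debugging memory pressure later.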
What matters more than raw CPU is the runtime environment and file system behavior. Clawdbot’s installer is designed to be “hands-off” and can install a dedicated Node runtime under a private prefix (commonly under something like ~/.clawdbot) so you don’t have to manage a system-wide Node version. That means the “requirements” are mostly: a 64-bit OS, standard user permissions, and the ability to open local ports for the Gateway (and optionally a dashboard) on your machine. On Linux, Ubuntu/Debian are the most straightforward. On macOS, it can run as a background service on a Mac mini or laptop that stays on. On Windows, the most predictable approach is WSL2 or Docker Desktop, because scripts and path assumptions tend to be Unix-like. Storage requirements depend on how much you log and how much memory you keep: the agent workspace stores Markdown memory files and other state, and chat transcripts or debug logs can grow faster than you expect if you run at very verbose log levels.
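Because transcripts and debug logs are the usual source of surprise disk usage, it is worth measuring the workspace footprint periodically. A minimal sketch, assuming ~/.clawdbot is where your install keeps its state (adjust the path to your actual prefix):

```python
# Sketch: measure how much disk a Clawdbot workspace/log directory uses.
# ~/.clawdbot is an assumed default location; adjust to your install prefix.
import os

def dir_size_mb(path):
    """Total size in MB of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):  # skip broken symlinks
                total += os.path.getsize(fp)
    return total / 1024**2

workspace = os.path.expanduser("~/.clawdbot")
if os.path.isdir(workspace):
    print(f"{workspace}: {dir_size_mb(workspace):.1f} MB")
```

Running this from cron once a day gives you an early warning before verbose logging eats the 2–5 GB budget.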
If you add retrieval, persistent semantic memory, or document search, the requirements shift from “just run the Gateway” to “run a small stack.” The Gateway can remain light, but your memory layer may need more CPU, RAM, and disk I/O depending on how you implement it. A common pattern is to keep Clawdbot’s local workspace lean (config, logs, and a few Markdown memory files) and put embeddings and similarity search in a dedicated vector store such as Milvus or Zilliz Cloud. That lets you scale memory and retrieval independently: Clawdbot can stay on a 1–2 GB RAM VPS while Milvus runs on a separate host (or you use Zilliz Cloud to avoid managing it). If you do this, plan for network access and credentials: Clawdbot needs outbound access to the vector database endpoint, and you should budget latency for each retrieval call. In short, the minimum is “small server + stable disk + outbound network,” and the moment you introduce search at scale, move the heavy lifting to Milvus/Zilliz Cloud instead of inflating the Gateway box.
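The latency budget mentioned above is simple arithmetic, but writing it down makes the trade-off concrete. The sketch below is a back-of-envelope model with purely illustrative numbers (the per-call figures are assumptions, not measurements of Milvus or any model API):

```python
# Sketch: back-of-envelope latency budget for one assistant reply when
# retrieval happens against a remote vector store (e.g. Milvus/Zilliz Cloud).
# All numbers are illustrative assumptions, not measured values.

def reply_latency_ms(retrieval_calls, per_retrieval_ms, model_call_ms, overhead_ms=50):
    """Total added latency: serial retrieval round-trips + one model call + fixed overhead."""
    return retrieval_calls * per_retrieval_ms + model_call_ms + overhead_ms

# e.g. 2 retrieval round-trips at 80 ms each, plus one 1200 ms model call:
print(reply_latency_ms(2, 80, 1200))  # 1410
```

The takeaway: a few tens of milliseconds per retrieval call is usually invisible next to the model call, so co-locating the vector store with the Gateway rarely pays off; network proximity matters less than keeping the Gateway box small.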
