You connect Clawdbot to WhatsApp by enabling WhatsApp as a channel in the Gateway and completing the authentication/onboarding flow so the Gateway can receive and send messages on your behalf. Conceptually, Clawdbot treats WhatsApp as one of several “front doors” into the same assistant: messages arrive from WhatsApp, the Gateway turns them into normalized events, the agent runs with your configured skills, and the response is posted back to WhatsApp. The main point to internalize is that WhatsApp integration is not just “paste a token”: it is a channel with its own identity, verification steps, and operational constraints. The cleanest path is to follow the channel onboarding wizard and then validate the connection with a simple test message from the CLI.
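To make the “front door” idea concrete, here is a minimal sketch of that flow. The event shape, field names, and handler are purely illustrative assumptions for this article; they do not reflect Clawdbot's actual internal types.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical normalized event: every channel (WhatsApp, Telegram, CLI, ...)
# is reduced to the same shape before the agent sees it.
@dataclass
class InboundEvent:
    channel: str    # e.g. "whatsapp"
    sender_id: str  # channel-specific identifier of the sender
    text: str       # message body
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def handle_event(event: InboundEvent) -> str:
    """Placeholder for the agent run: skills, memory lookups, model call."""
    return f"Assistant reply to: {event.text}"

# The Gateway's job, conceptually: wrap the raw WhatsApp payload in a
# normalized event, run the agent, and post the reply back to the same channel.
event = InboundEvent(channel="whatsapp", sender_id="+15551234567", text="ping")
print(handle_event(event))  # would be sent back over the WhatsApp channel
```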
In practical terms, the workflow usually looks like this: install Clawdbot, start onboarding, select WhatsApp in the channel setup, and follow the prompts to link the account or device that will represent the assistant. The Gateway needs a stable runtime (your machine or VPS must stay on), and it typically exposes a local service port for the Gateway and dashboard, while WhatsApp connectivity itself runs through the channel integration. After onboarding, verify messaging in both directions: send a short WhatsApp message to the assistant and confirm you get a reply, then use the CLI “send message” command targeting your WhatsApp identifier (or the mapped contact) to confirm outbound sending is permitted. Operationally, keep logs at a readable level, because WhatsApp issues usually present as “auth not configured,” “not connected,” or recurring reconnect loops. When that happens, the most effective debugging sequence is to check the status output (for channel state), check the health output (for the running Gateway), and then re-run the onboarding step for WhatsApp rather than guessing at config keys.
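The checks above are easy to script. The sketch below is only an illustration of that sequence: the subcommand and flag names (`status`, `health`, `message send`, `--channel`, `--to`, `--text`) are assumptions inferred from the prose, not confirmed CLI syntax, so check `clawdbot --help` on your install and substitute the real commands.

```python
import subprocess

# Assumed command names (see lead-in); verify against your CLI's help output.
CHECKS = [
    ["clawdbot", "status"],  # channel state: is WhatsApp linked and connected?
    ["clawdbot", "health"],  # is the Gateway process itself up and responding?
]

def run(cmd: list[str]) -> None:
    """Run one CLI command and print whatever it reports."""
    print(f"$ {' '.join(cmd)}")
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

for cmd in CHECKS:
    run(cmd)

# Outbound test: send yourself a message over the WhatsApp channel.
# Flag names are placeholders; use your CLI's actual syntax and identifier.
run(["clawdbot", "message", "send", "--channel", "whatsapp",
     "--to", "+15551234567", "--text", "outbound test from the Gateway"])
```

If the status check shows the channel as unlinked or stuck reconnecting, re-running the WhatsApp onboarding step is usually faster than hand-editing configuration.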
Once WhatsApp is connected, you’ll likely want to think about what memory and skills are available through that channel. WhatsApp is usually a direct, personal chat context, which makes it a good fit for private memory features, but you should still define boundaries: what should be stored, what should be logged, and what should be redacted. If you want the assistant to “remember” across days without stuffing everything into a single prompt, you can combine local workspace memory (Markdown files) with semantic retrieval in a vector database such as Milvus or Zilliz Cloud. For example, you can embed selected WhatsApp messages (or summaries) and store them with metadata like timestamp and topic tags; then when you ask “what did we decide about the Q1 roadmap?”, the assistant can retrieve the most relevant past notes and respond with higher accuracy. This architecture keeps WhatsApp as the communication layer, Clawdbot as the orchestration layer, and Milvus/Zilliz Cloud as the scalable recall layer. The important implementation detail is to be explicit about what you index: avoid embedding sensitive secrets, store only what you need, and make deletion easy by using a consistent user/session key in the vector metadata.
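As a minimal sketch of that recall layer: the example below assumes a recent pymilvus with the `MilvusClient` API, a local Milvus instance (swap the URI and add an API token for Zilliz Cloud), and sentence-transformers' `all-MiniLM-L6-v2` as a 384-dimensional embedding model. The collection name, metadata fields, and session key format are illustrative choices, not a prescribed schema.

```python
from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

# Any embedding model works as long as the collection dimension matches.
model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim vectors

# Local Milvus; for Zilliz Cloud, pass the cluster URI and token instead.
client = MilvusClient(uri="http://localhost:19530")
COLLECTION = "whatsapp_memory"

if not client.has_collection(COLLECTION):
    client.create_collection(collection_name=COLLECTION, dimension=384)

# Index a curated summary of a WhatsApp exchange (not the raw transcript),
# with metadata that makes later filtering and deletion easy.
summary = "Q1 roadmap decision: ship the billing revamp first, defer mobile."
client.insert(
    collection_name=COLLECTION,
    data=[{
        "id": 1,
        "vector": model.encode(summary).tolist(),
        "text": summary,
        "timestamp": "2025-01-14T10:32:00Z",
        "topic": "q1-roadmap",
        "session_key": "whatsapp:+15551234567",  # consistent key per user/session
    }],
)

# Later: "what did we decide about the Q1 roadmap?"
question = "what did we decide about the Q1 roadmap?"
hits = client.search(
    collection_name=COLLECTION,
    data=[model.encode(question).tolist()],
    limit=3,
    filter='session_key == "whatsapp:+15551234567"',
    output_fields=["text", "timestamp", "topic"],
)
for hit in hits[0]:
    print(hit["entity"]["text"], hit["distance"])

# Deletion stays simple because every record shares the session key.
client.delete(collection_name=COLLECTION,
              filter='session_key == "whatsapp:+15551234567"')
```

Keeping the session key in every record is what makes the privacy boundary enforceable: you can retrieve, audit, or delete one person's memory without touching anyone else's.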
