CAMEL and Zilliz Cloud Integration
CAMEL and Zilliz Cloud integrate to build advanced multi-agent RAG systems: CAMEL provides an open-source framework for autonomous, communicative LLM agents, while Zilliz Cloud provides a high-performance vector database for efficient similarity search, retrieval-augmented generation, and collaborative AI tasks.
What is CAMEL
CAMEL is an open-source multi-agent framework dedicated to the study of autonomous and communicative agents. It provides infrastructure for creating customizable agents, constructing multi-agent systems, and enabling practical applications. CAMEL explores scalable techniques for autonomous cooperation among communicative agents and their cognitive processes, offering various types of agents, tasks, prompts, models, datasets, and simulated environments to facilitate research in this field.
By integrating with Zilliz Cloud (fully managed Milvus), CAMEL's agents gain access to a scalable vector database for storing and querying high-dimensional vectors, enabling agents to perform similarity searches, retrieve relevant information through RAG, and enhance their decision-making processes for applications like question-answering systems, recommendation engines, and collaborative AI tasks.
Benefits of the CAMEL + Zilliz Cloud Integration
- Multi-agent RAG with vector search: CAMEL's agents can leverage Zilliz Cloud's vector database to perform similarity searches and retrieve context, enabling multi-agent systems to make informed decisions based on relevant external knowledge.
- Customized and auto RAG modes: The integration supports both customized RAG pipelines with fine-grained control over embedding models and storage, and an AutoRetriever mode that automatically handles collection management and retrieval.
- Role-playing with function calling: CAMEL's RolePlaying feature combined with retrieval functions allows multi-agent conversations where agents can dynamically search and retrieve information from Zilliz Cloud during collaborative task-solving.
- Flexible retriever configuration: Developers can configure VectorRetriever with custom embedding models, similarity thresholds, and top-k settings, all backed by Zilliz Cloud's MilvusStorage for efficient vector operations.
How the Integration Works
CAMEL serves as the multi-agent framework, providing agent creation, role-playing orchestration, and retrieval modules (VectorRetriever and AutoRetriever). It handles document processing through its integrated Unstructured Module, embedding generation via OpenAIEmbedding, and agent-based conversation flows with function calling capabilities.
Zilliz Cloud serves as the vector database layer through MilvusStorage, storing and indexing document embeddings for fast similarity search. It provides the retrieval backend for CAMEL's agents, enabling efficient vector search with configurable similarity thresholds and top-k results.
Together, CAMEL and Zilliz Cloud create a complete multi-agent RAG system: documents are processed, chunked, and embedded into Zilliz Cloud via CAMEL's VectorRetriever. Agents can then retrieve relevant context during conversations — whether in single-agent Q&A, multi-agent role-playing, or complex collaborative tasks — with Zilliz Cloud handling the vector search at scale.
Step-by-Step Guide
1. Install Dependencies
```shell
pip install -U "camel-ai[all]" pymilvus
```

2. Load Data
Download the CAMEL paper as example local data:
```python
import os

import requests

os.makedirs("local_data", exist_ok=True)

url = "https://arxiv.org/pdf/2303.17760.pdf"
response = requests.get(url)
with open("local_data/camel paper.pdf", "wb") as file:
    file.write(response.content)
```

3. Customized RAG — Set Up Embedding, Storage, and Retriever
Set the OpenAI API key and initialize the embedding model, Milvus storage, and vector retriever:
```python
os.environ["OPENAI_API_KEY"] = "Your Key"

from camel.embeddings import OpenAIEmbedding
from camel.retrievers import VectorRetriever
from camel.storages import MilvusStorage

embedding_instance = OpenAIEmbedding()

storage_instance = MilvusStorage(
    vector_dim=embedding_instance.get_output_dim(),
    url_and_api_key=(
        "./milvus_demo.db",  # Your Milvus connection URI
        "",  # Your Milvus token
    ),
    collection_name="camel_paper",
)

vector_retriever = VectorRetriever(
    embedding_model=embedding_instance, storage=storage_instance
)
```

For the `url_and_api_key` argument: using a local file, e.g. `./milvus.db`, as the Milvus connection URI is the most convenient option, as it automatically uses Milvus Lite to store all data in that file. If you have a large amount of data, you can set up a more performant Milvus server on Docker or Kubernetes. To use Zilliz Cloud, the fully managed cloud service for Milvus, set the connection URI and token to the Public Endpoint and API Key of your Zilliz Cloud cluster.

4. Process and Query Documents
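To make the three deployment options concrete, here is a small illustrative helper that builds the `url_and_api_key` pair for each case. The function name `milvus_connection` and its mode strings are our own shorthand, not part of the CAMEL API, and the server/cloud endpoints shown are placeholders:

```python
def milvus_connection(mode: str, endpoint: str = "", api_key: str = "") -> tuple:
    """Build the (uri, token) pair passed as url_and_api_key."""
    if mode == "lite":
        # A local file path makes CAMEL use Milvus Lite automatically.
        return ("./milvus_demo.db", "")
    if mode == "server":
        # A self-hosted Milvus server (Docker/Kubernetes); token is often empty.
        return (endpoint or "http://localhost:19530", api_key)
    if mode == "zilliz":
        # Zilliz Cloud: Public Endpoint as the URI, API Key as the token.
        return (endpoint, api_key)
    raise ValueError(f"unknown mode: {mode}")


# The result can be passed straight to MilvusStorage(url_and_api_key=...)
print(milvus_connection("lite"))
```

Whichever mode you choose, the same tuple shape is accepted by both `MilvusStorage` and `AutoRetriever` below.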
Process the PDF and query for information:
```python
vector_retriever.process(content_input_path="local_data/camel paper.pdf")

retrieved_info = vector_retriever.query(query="What is CAMEL?", top_k=1)
print(retrieved_info)
```

5. Auto RAG with AutoRetriever
Use AutoRetriever for automatic collection management and retrieval:
```python
from camel.retrievers import AutoRetriever
from camel.types import StorageType

auto_retriever = AutoRetriever(
    url_and_api_key=(
        "./milvus_demo.db",  # Your Milvus connection URI
        "",  # Your Milvus token
    ),
    storage_type=StorageType.MILVUS,
    embedding_model=embedding_instance,
)

retrieved_info = auto_retriever.run_vector_retriever(
    query="What is CAMEL-AI",
    content_input_paths=[
        "local_data/camel paper.pdf",
        "https://www.camel-ai.org/",
    ],
    top_k=1,
    return_detailed_info=True,
)
print(retrieved_info)
```

6. Single Agent with Auto RAG
Combine AutoRetriever with a ChatAgent:
```python
from camel.agents import ChatAgent
from camel.messages import BaseMessage
from camel.types import RoleType


def single_agent(query: str) -> str:
    # Set the agent's role
    assistant_sys_msg = BaseMessage(
        role_name="Assistant",
        role_type=RoleType.ASSISTANT,
        meta_dict=None,
        content="""You are a helpful assistant to answer question,
        I will give you the Original Query and Retrieved Context,
        answer the Original Query based on the Retrieved Context,
        if you can't answer the question just say I don't know.""",
    )

    # Retrieve context for the query
    auto_retriever = AutoRetriever(
        url_and_api_key=(
            "./milvus_demo.db",
            "",
        ),
        storage_type=StorageType.MILVUS,
        embedding_model=embedding_instance,
    )
    retrieved_info = auto_retriever.run_vector_retriever(
        query=query,
        content_input_paths=[
            "local_data/camel paper.pdf",
            "https://www.camel-ai.org/",
        ],
        top_k=1,
        return_detailed_info=True,
    )

    # Pass the query plus retrieved context to the agent
    user_msg = BaseMessage.make_user_message(
        role_name="User", content=retrieved_info
    )
    agent = ChatAgent(assistant_sys_msg)
    assistant_response = agent.step(user_msg)
    return assistant_response.msg.content


print(single_agent("What is CAMEL-AI"))
```

7. Role-Playing with Auto RAG
Combine retrieval functions with RolePlaying using Function Calling for multi-agent collaboration:
```python
from camel.configs import ChatGPTConfig
from camel.functions import MATH_FUNCS, RETRIEVAL_FUNCS
from camel.societies import RolePlaying
from camel.types import ModelType

function_list = [*MATH_FUNCS, *RETRIEVAL_FUNCS]
assistant_model_config = ChatGPTConfig(
    tools=function_list,
    temperature=0.0,
)

role_play_session = RolePlaying(
    assistant_role_name="Searcher",
    user_role_name="Professor",
    assistant_agent_kwargs=dict(
        model_type=ModelType.GPT_4O,
        model_config=assistant_model_config,
        tools=function_list,
    ),
    user_agent_kwargs=dict(
        model_type=ModelType.GPT_4O,
    ),
    task_prompt="What are the main termination reasons for the AI Society dataset?",
    with_task_specify=False,
)

# Run the conversation until the task terminates or the turn limit is reached
chat_turn_limit, n = 50, 0
input_msg = role_play_session.init_chat()
while n < chat_turn_limit:
    n += 1
    assistant_response, user_response = role_play_session.step(input_msg)
    if assistant_response.terminated or user_response.terminated:
        break
    print(f"AI User:\n{user_response.msg.content}\n")
    print(f"AI Assistant:\n{assistant_response.msg.content}\n")
    if "CAMEL_TASK_DONE" in user_response.msg.content:
        break
    input_msg = assistant_response.msg
```

Learn More
- Retrieval-Augmented Generation (RAG) with Milvus and Camel — Official Milvus tutorial for building RAG with CAMEL
- CAMEL-AI Documentation — Official CAMEL documentation
- CAMEL-AI GitHub Repository — CAMEL source code and community resources
- Agentic RAG with Claude 3.5 Sonnet, LlamaIndex, and Milvus — Zilliz blog on agentic RAG
- Building an AI Agent for RAG with Milvus and LlamaIndex — Zilliz blog on AI agents for RAG