Dynamiq and Zilliz Cloud Integration
Dynamiq and Zilliz Cloud integrate to power fast, secure, and context-aware AI agents: Dynamiq contributes an enterprise agentic AI platform with a visual workflow builder and a Python SDK, while Zilliz Cloud provides high-performance vector storage for scalable RAG retrieval.
What is Dynamiq
Dynamiq is an enterprise operating platform for agentic AI. It enables teams to build, orchestrate, and deploy production-grade AI agents and RAG workflows without stitching together multiple separate tools. It offers both a visual drag-and-drop builder and an open-source Python SDK, giving developers and business users the flexibility to scale agentic applications in SaaS, private cloud, or fully air-gapped environments.
By integrating with Zilliz Cloud (fully managed Milvus), Dynamiq gains access to high-performance vector storage and retrieval that powers its agentic RAG workflows, combining Zilliz's scalable vector infrastructure with Dynamiq's orchestration layer to enable responsive, context-aware agents operating across enterprise knowledge bases.
Benefits of the Dynamiq + Zilliz Cloud Integration
- Scalable agentic RAG workflows: Dynamiq's orchestration layer combined with Zilliz Cloud's high-performance vector storage enables production-grade AI agents that can retrieve context from large-scale enterprise knowledge bases in real time.
- Dual builder flexibility: Teams can build RAG workflows using Dynamiq's visual drag-and-drop builder or the open-source Python SDK, with Zilliz Cloud handling the vector storage seamlessly in either approach.
- Secure enterprise deployment: The integration supports SaaS, private cloud, and fully air-gapped environments, with full observability and guardrails around LLM calls and vector usage.
- Efficient document indexing and retrieval: Dynamiq's Writer Node ingests and indexes documents into Zilliz Cloud, while the Retriever Node connects agents to Milvus-powered knowledge bases for real-time context retrieval.
How the Integration Works
Dynamiq serves as the orchestration platform, providing the agentic AI framework with visual workflow builder, Python SDK, and LLM agent capabilities. It handles document processing, workflow orchestration, prompt management, and response generation through its modular node-based architecture.
Zilliz Cloud serves as the vector database layer, storing and indexing document embeddings through Dynamiq's MilvusDocumentWriter node and providing fast similarity search through the MilvusDocumentRetriever node. It enables efficient retrieval of relevant context from large knowledge bases.
Together, Dynamiq and Zilliz Cloud create an end-to-end agentic RAG solution: documents are processed, embedded, and stored in Zilliz Cloud through Dynamiq's indexing workflow. When users interact with AI agents, Dynamiq's retrieval workflow queries Zilliz Cloud for relevant document embeddings and passes the context to the LLM to generate detailed, context-aware responses — all without custom code or infrastructure setup.
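At a glance, the two workflows reduce to a pair of node chains. The sketch below is purely illustrative (plain strings, not Dynamiq node objects); the names match the Dynamiq classes used in the step-by-step guide that follows.

```python
# Illustrative sketch of the two workflows described above.
INDEXING_FLOW = [
    "PyPDFConverter",          # PDFs -> documents
    "DocumentSplitter",        # documents -> sentence chunks
    "OpenAIDocumentEmbedder",  # chunks -> embeddings
    "MilvusDocumentWriter",    # embeddings -> Zilliz Cloud / Milvus
]

RETRIEVAL_FLOW = [
    "OpenAITextEmbedder",       # user query -> query embedding
    "MilvusDocumentRetriever",  # query embedding -> top-k documents
    "OpenAI",                   # documents + query -> grounded answer
]

print(" -> ".join(INDEXING_FLOW))
print(" -> ".join(RETRIEVAL_FLOW))
```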
Step-by-Step Guide
1. Install Required Libraries
```shell
pip install dynamiq pymilvus
```

2. Configure the LLM Agent
We will use OpenAI as the LLM in this example. Prepare the API key as an environment variable:
```python
import os

os.environ["OPENAI_API_KEY"] = "sk-***********"
```

3. Document Indexing Flow — Import Libraries and Initialize Workflow
```python
from io import BytesIO

from dynamiq import Workflow
from dynamiq.nodes import InputTransformer
from dynamiq.connections import (
    OpenAI as OpenAIConnection,
    Milvus as MilvusConnection,
    MilvusDeploymentType,
)
from dynamiq.nodes.converters import PyPDFConverter
from dynamiq.nodes.splitters.document import DocumentSplitter
from dynamiq.nodes.embedders import OpenAIDocumentEmbedder
from dynamiq.nodes.writers import MilvusDocumentWriter

# Initialize the workflow
rag_wf = Workflow()
```

4. Define the Indexing Pipeline Nodes
Define the PDF converter, document splitter, embedding, and Milvus vector store nodes:
```python
# PDF Converter
converter = PyPDFConverter(document_creation_mode="one-doc-per-page")
converter_added = rag_wf.flow.add_nodes(converter)

# Document Splitter
document_splitter = DocumentSplitter(
    split_by="sentence",
    split_length=10,
    split_overlap=1,
    input_transformer=InputTransformer(
        selector={
            "documents": f"${[converter.id]}.output.documents",
        },
    ),
).depends_on(converter)
splitter_added = rag_wf.flow.add_nodes(document_splitter)

# Embedding Node
embedder = OpenAIDocumentEmbedder(
    connection=OpenAIConnection(api_key=os.environ["OPENAI_API_KEY"]),
    input_transformer=InputTransformer(
        selector={
            "documents": f"${[document_splitter.id]}.output.documents",
        },
    ),
).depends_on(document_splitter)
document_embedder_added = rag_wf.flow.add_nodes(embedder)

# Milvus Vector Store
vector_store = (
    MilvusDocumentWriter(
        connection=MilvusConnection(
            deployment_type=MilvusDeploymentType.FILE, uri="./milvus.db"
        ),
        index_name="my_milvus_collection",
        dimension=1536,
        create_if_not_exist=True,
        metric_type="COSINE",
    )
    .inputs(documents=embedder.outputs.documents)
    .depends_on(embedder)
)
milvus_writer_added = rag_wf.flow.add_nodes(vector_store)
```

Milvus offers two deployment types. MilvusDeploymentType.FILE is ideal for local prototyping or small-scale data storage: set the uri to a local file path (e.g., ./milvus.db) to leverage Milvus Lite. MilvusDeploymentType.HOST is designed for large-scale data scenarios: you can deploy a Milvus server using Docker or Kubernetes, or use Zilliz Cloud by setting the uri and token to the Public Endpoint and API Key from Zilliz Cloud.

5. Run the Indexing Workflow
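The run step below reads each PDF into an in-memory buffer. As a small variation on that payload-building code, the sketch here wraps it in a helper that uses context managers so file handles are closed promptly (the ./pdf_files path used in this guide is just an example; swap in your own files):

```python
from io import BytesIO

def build_input_data(file_paths):
    # Build the payload expected by the indexing workflow:
    # in-memory file bytes plus per-file metadata.
    files, metadata = [], []
    for path in file_paths:
        with open(path, "rb") as f:  # context manager closes the handle
            files.append(BytesIO(f.read()))
        metadata.append({"filename": path})
    return {"files": files, "metadata": metadata}
```

You could then call `rag_wf.run(input_data=build_input_data([...]))` in place of building the dictionary inline.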
```python
file_paths = ["./pdf_files/WhatisMilvus.pdf"]
input_data = {
    "files": [BytesIO(open(path, "rb").read()) for path in file_paths],
    "metadata": [{"filename": path} for path in file_paths],
}

inserted_data = rag_wf.run(input_data=input_data)
```

6. Document Retrieval Flow — Initialize and Define Nodes
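The retrieval flow below reuses the local Milvus Lite file for simplicity. To target Zilliz Cloud instead, the deployment note above suggests switching the connection to HOST mode; a configuration sketch (the endpoint and key are placeholders you must replace with your cluster's values):

```python
from dynamiq.connections import Milvus as MilvusConnection, MilvusDeploymentType

# HOST deployment points at a Milvus server or Zilliz Cloud instead of a local file.
# Replace the placeholders with your cluster's Public Endpoint and API Key.
zilliz_connection = MilvusConnection(
    deployment_type=MilvusDeploymentType.HOST,
    uri="https://<your-cluster-endpoint>",  # Public Endpoint from Zilliz Cloud
    token="<your-zilliz-api-key>",          # API Key from Zilliz Cloud
)
```

This connection would then be passed to MilvusDocumentWriter and MilvusDocumentRetriever in place of the FILE connection shown in this guide.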
```python
from dynamiq import Workflow
from dynamiq.connections import (
    OpenAI as OpenAIConnection,
    Milvus as MilvusConnection,
    MilvusDeploymentType,
)
from dynamiq.nodes.embedders import OpenAITextEmbedder
from dynamiq.nodes.retrievers import MilvusDocumentRetriever
from dynamiq.nodes.llms import OpenAI
from dynamiq.prompts import Message, Prompt

# Initialize the workflow
retrieval_wf = Workflow()

# OpenAI connection and text embedder
openai_connection = OpenAIConnection(api_key=os.environ["OPENAI_API_KEY"])
embedder = OpenAITextEmbedder(
    connection=openai_connection,
    model="text-embedding-3-small",
)
embedder_added = retrieval_wf.flow.add_nodes(embedder)

# Milvus document retriever
document_retriever = (
    MilvusDocumentRetriever(
        connection=MilvusConnection(
            deployment_type=MilvusDeploymentType.FILE, uri="./milvus.db"
        ),
        index_name="my_milvus_collection",
        dimension=1536,
        top_k=5,
    )
    .inputs(embedding=embedder.outputs.embedding)
    .depends_on(embedder)
)
milvus_retriever_added = retrieval_wf.flow.add_nodes(document_retriever)
```

7. Define the Prompt and Answer Generator
```python
prompt_template = """
Please answer the question based on the provided context.

Question: {{ query }}

Context:
{% for document in documents %}
- {{ document.content }}
{% endfor %}
"""

prompt = Prompt(messages=[Message(content=prompt_template, role="user")])

answer_generator = (
    OpenAI(
        connection=openai_connection,
        model="gpt-4o",
        prompt=prompt,
    )
    .inputs(
        documents=document_retriever.outputs.documents,
        query=embedder.outputs.query,
    )
    .depends_on([document_retriever, embedder])
)
answer_generator_added = retrieval_wf.flow.add_nodes(answer_generator)
```

8. Run the Retrieval Workflow
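Before running, it can help to see roughly what the Jinja-style template from step 7 produces once the query and retrieved documents are filled in. A pure-Python approximation (this is not Dynamiq's actual renderer, just an illustration of the final prompt shape):

```python
def render_prompt(query, documents):
    # Approximates the template: question followed by bulleted context lines.
    context = "\n".join(f"- {doc['content']}" for doc in documents)
    return (
        "Please answer the question based on the provided context.\n\n"
        f"Question: {query}\n\n"
        f"Context:\n{context}"
    )

docs = [
    {"content": "Milvus supports vector index types such as HNSW and IVF."},
    {"content": "Zilliz Cloud is fully managed Milvus."},
]
print(render_prompt("What index types does Milvus support?", docs))
```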
```python
sample_query = "What is the Advanced Search Algorithms in Milvus?"

result = retrieval_wf.run(input_data={"query": sample_query})

answer = result.output.get(answer_generator.id).get("output", {}).get("content")
print(answer)
```

Learn More
- Getting Started with Dynamiq and Milvus — Official Milvus tutorial for building RAG with Dynamiq
- Dynamiq Documentation — Official Dynamiq documentation
- Dynamiq GitHub Repository — Dynamiq open-source SDK
- Zilliz Cloud Documentation — Zilliz Cloud documentation for managed Milvus
- Build AI Apps with Milvus: Tutorials & Notebooks — Zilliz collection of Milvus tutorials and notebooks