Build RAG Chatbot with Haystack, Zilliz Cloud, Google Vertex AI Gemini 1.5 Flash, and Cohere embed-english-light-v3.0
Introduction to RAG
Retrieval-Augmented Generation (RAG) is a game-changer for GenAI applications, especially in conversational AI. It combines the power of pre-trained large language models (LLMs) like OpenAI’s GPT with external knowledge sources stored in vector databases such as Milvus and Zilliz Cloud, allowing for more accurate, contextually relevant, and up-to-date response generation. A RAG pipeline usually consists of four basic components: a vector database, an embedding model, an LLM, and a framework.
Key Components We'll Use for This RAG Chatbot
This tutorial shows you how to build a simple RAG chatbot in Python using the following components:
- Haystack: An open-source Python framework designed for building production-ready NLP applications, particularly question answering and semantic search systems. Haystack excels at retrieving information from large document collections through its modular architecture that combines retrieval and reader components. Ideal for developers creating search applications, chatbots, and knowledge management systems that require efficient document processing and accurate information extraction from unstructured text.
- Zilliz Cloud: a fully managed vector database-as-a-service platform built on top of the open-source Milvus, designed to handle high-performance vector data processing at scale. It enables organizations to efficiently store, search, and analyze large volumes of unstructured data, such as text, images, or audio, by leveraging advanced vector search technology. It offers a free tier supporting up to 1 million vectors.
- Google Vertex AI Gemini 1.5 Flash: A lightweight, high-speed AI model optimized for rapid inference and scalable deployment. Ideal for real-time applications like chatbots, content generation, and data analysis, it balances performance with cost-efficiency, making it suitable for high-throughput environments requiring low-latency responses and resource optimization.
- Cohere embed-english-light-v3.0: A lightweight, efficient embedding model designed to convert English text into high-dimensional vector representations. Excelling in speed and scalability, it balances accuracy with low computational demands, making it ideal for semantic search, text clustering, and retrieval-augmented applications in resource-constrained environments.
By the end of this tutorial, you’ll have a functional chatbot capable of answering questions based on a custom knowledge base.
Note: Since we may use proprietary models in our tutorials, make sure you have the required API keys beforehand.
Step 1: Install and Set Up Haystack
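The imports below assume the Haystack 2.x API, which is distributed as the haystack-ai package. Install it first, along with the requests library used later to download the example document:
pip install haystack-ai requests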
import os
import requests
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
Step 2: Install and Set Up Google Vertex AI Gemini 1.5 Flash
Using the VertexAIGeminiGenerator with Haystack requires authentication via Google Cloud Application Default Credentials (ADC). This means your application must be set up with credentials that allow it to access Google Cloud services. If you're not sure how to configure ADC, check the official Google documentation for setup instructions.
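If you're developing locally, one common way to set up ADC is through the gcloud CLI:
gcloud auth application-default login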
It's important to use a Google Cloud account that has the right permissions to access a project with Google Vertex AI endpoints. Without proper access, the generator won’t work as expected.
To find your project ID, you can either look it up in the Google Cloud Console under the resource manager or run the following command in your terminal.
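For example, this prints the project ID your gcloud CLI is currently configured to use:
gcloud config get-value project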
Now let's install and set up this model.
pip install google-vertex-haystack
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiGenerator
generator = VertexAIGeminiGenerator(model="gemini-1.5-flash")
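If ADC doesn't resolve to the right project on its own, you can also pass the project explicitly. The project_id parameter here is based on the current google-vertex-haystack integration; check its docs if your installed version differs:
generator = VertexAIGeminiGenerator(
    model="gemini-1.5-flash",
    project_id="your-project-id",  # replace with your Google Cloud project ID
)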
Step 3: Install and Set Up Cohere embed-english-light-v3.0
To start using this integration with Haystack, install it with:
pip install cohere-haystack
from haystack import Document
from haystack_integrations.components.embedders.cohere.document_embedder import CohereDocumentEmbedder
from haystack_integrations.components.embedders.cohere.text_embedder import CohereTextEmbedder
text_embedder = CohereTextEmbedder(model="embed-english-light-v3.0")
document_embedder = CohereDocumentEmbedder(model="embed-english-light-v3.0")
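By default, the Cohere components read your API key from the COHERE_API_KEY environment variable, so export it before running the script:
export COHERE_API_KEY=your-cohere-api-key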
Step 4: Install and Set Up Zilliz Cloud
pip install --upgrade pymilvus milvus-haystack
from milvus_haystack import MilvusDocumentStore
from milvus_haystack.milvus_embedding_retriever import MilvusEmbeddingRetriever
ZILLIZ_CLOUD_URI = os.getenv("ZILLIZ_CLOUD_URI")  # the public endpoint of your Zilliz Cloud cluster
ZILLIZ_CLOUD_TOKEN = os.getenv("ZILLIZ_CLOUD_TOKEN")  # your Zilliz Cloud API key
document_store = MilvusDocumentStore(
    connection_args={"uri": ZILLIZ_CLOUD_URI, "token": ZILLIZ_CLOUD_TOKEN},
    drop_old=True,  # drop and recreate the collection on each run
)
retriever = MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
Step 5: Build a RAG Chatbot
Now that you’ve set up all the components, let’s build a simple chatbot. We’ll use the Milvus introduction doc as a private knowledge base. You can replace it with your own dataset to customize your RAG chatbot.
url = 'https://raw.githubusercontent.com/milvus-io/milvus-docs/refs/heads/v2.5.x/site/en/about/overview.md'
example_file = 'example_file.md'
response = requests.get(url)
response.raise_for_status()  # fail early if the download didn't succeed
with open(example_file, 'wb') as f:
    f.write(response.content)
file_paths = [example_file] # You can replace it with your own file paths.
indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
indexing_pipeline.add_component("embedder", document_embedder)
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")
indexing_pipeline.run({"converter": {"sources": file_paths}})
# print("Number of documents:", document_store.count_documents())
question = "What is Milvus?" # You can replace it with your own question.
retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component("embedder", text_embedder)
retrieval_pipeline.add_component("retriever", retriever)
retrieval_pipeline.connect("embedder", "retriever")
retrieval_results = retrieval_pipeline.run({"embedder": {"text": question}})
# for doc in retrieval_results["retriever"]["documents"]:
# print(doc.content)
# print("-" * 10)
from haystack.components.builders import PromptBuilder

# A Haystack component instance can belong to only one pipeline, so create
# fresh retriever and text embedder instances for the RAG pipeline.
retriever = MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
text_embedder = CohereTextEmbedder(model="embed-english-light-v3.0")
prompt_template = """Answer the following query based on the provided context. If the context does
not include an answer, reply with 'I don't know'.\n
Query: {{query}}
Documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Answer:
"""
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", text_embedder)
rag_pipeline.add_component("retriever", retriever)
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component("generator", generator)
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "generator")
results = rag_pipeline.run({"text_embedder": {"text": question}, "prompt_builder": {"query": question},})
print('RAG answer:\n', results["generator"]["replies"][0])
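To make this feel more like a chatbot than a one-shot script, you can wrap the pipeline in a simple input loop. This is a minimal sketch; a production chatbot would add conversation history and error handling:
while True:
    question = input("\nAsk a question (or type 'quit' to exit): ")
    if question.strip().lower() == "quit":
        break
    results = rag_pipeline.run({"text_embedder": {"text": question}, "prompt_builder": {"query": question}})
    print("RAG answer:\n", results["generator"]["replies"][0])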
Optimization Tips
As you build your RAG system, optimization is key to ensuring peak performance and efficiency. While setting up the components is an essential first step, fine-tuning each one will help you create a solution that works even better and scales seamlessly. In this section, we’ll share some practical tips for optimizing all these components, giving you the edge to build smarter, faster, and more responsive RAG applications.
Haystack optimization tips
To optimize Haystack in a RAG setup, ensure you use an efficient retriever like FAISS or Milvus for scalable and fast similarity searches. Fine-tune your document store settings, such as indexing strategies and storage backends, to balance speed and accuracy. Use batch processing for embedding generation to reduce latency and optimize API calls. Leverage Haystack's pipeline caching to avoid redundant computations, especially for frequently queried documents. Tune your reader model by selecting a lightweight yet accurate transformer-based model like DistilBERT to speed up response times. Implement query rewriting or filtering techniques to enhance retrieval quality, ensuring the most relevant documents are retrieved for generation. Finally, monitor system performance with Haystack’s built-in evaluation tools to iteratively refine your setup based on real-world query performance.
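For example, batching embedding generation is a constructor parameter on the Cohere document embedder (batch_size below assumes the current cohere-haystack API, and 64 is an illustrative value, not a tuned recommendation):
document_embedder = CohereDocumentEmbedder(
    model="embed-english-light-v3.0",
    batch_size=64,  # embed 64 documents per API call to cut round trips
)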
Zilliz Cloud optimization tips
Optimizing Zilliz Cloud for a RAG system involves efficient index selection, query tuning, and resource management. Use Hierarchical Navigable Small World (HNSW) indexing for high-speed, approximate nearest neighbor search while balancing recall and efficiency. Fine-tune ef_construction and M parameters based on your dataset size and query workload to optimize search accuracy and latency. Enable dynamic scaling to handle fluctuating workloads efficiently, ensuring smooth performance under varying query loads. Implement data partitioning to improve retrieval speed by grouping related data, reducing unnecessary comparisons. Regularly update and optimize embeddings to keep results relevant, particularly when dealing with evolving datasets. Use hybrid search techniques, such as combining vector and keyword search, to improve response quality. Monitor system metrics in Zilliz Cloud’s dashboard and adjust configurations accordingly to maintain low-latency, high-throughput performance.
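As a sketch, HNSW parameters can be set when creating the document store. The index_params argument follows the milvus-haystack API, and the M and efConstruction values below are illustrative starting points to tune against your own recall and latency targets:
document_store = MilvusDocumentStore(
    connection_args={"uri": ZILLIZ_CLOUD_URI, "token": ZILLIZ_CLOUD_TOKEN},
    index_params={
        "index_type": "HNSW",
        "metric_type": "COSINE",
        "params": {"M": 16, "efConstruction": 200},  # higher values: better recall, slower index builds
    },
)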
Google Vertex AI Gemini 1.5 Flash optimization tips
To optimize Gemini 1.5 Flash in RAG, ensure retrieved documents are preprocessed into concise, context-rich chunks aligned with the model’s input limits (e.g., 1M tokens). Use semantic filtering to prioritize high-relevance passages and trim redundant text. Fine-tune prompts with explicit instructions for grounding responses in retrieved data, and adjust temperature settings to balance creativity and accuracy. Leverage Vertex AI’s batch processing for parallel inference and monitor latency-to-quality tradeoffs via A/B testing. Cache frequent queries to reduce costs and latency, and validate outputs against retrieval sources to minimize hallucinations.
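For instance, temperature and output length can be constrained when constructing the generator. The generation_config parameter here is an assumption about the google-vertex-haystack API (it mirrors Vertex AI's native GenerationConfig); verify the exact name against your installed version:
generator = VertexAIGeminiGenerator(
    model="gemini-1.5-flash",
    generation_config={"temperature": 0.2, "max_output_tokens": 512},  # low temperature favors grounded answers
)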
Cohere embed-english-light-v3.0 optimization tips
To optimize Cohere embed-english-light-v3.0 in RAG, ensure input text is clean and concise by removing redundant whitespace, special characters, or irrelevant metadata. Use batch processing for embeddings to reduce API calls and latency. Align chunk sizes with the model’s 512-token limit, splitting longer texts into coherent segments. Cache frequent or static embeddings to save costs. Fine-tune retrieval scoring (e.g., cosine similarity) to match your data distribution, and pre-filter low-relevance documents using metadata to reduce computational overhead. Regularly validate embedding quality against domain-specific benchmarks.
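As one example, you can keep chunks safely under the 512-token limit by splitting on words instead of sentences; the 300-word split_length below is a rough heuristic for typical English prose, not an exact token mapping:
splitter = DocumentSplitter(split_by="word", split_length=300, split_overlap=30)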
By implementing these tips across your components, you'll be able to enhance the performance and functionality of your RAG system, ensuring it’s optimized for both speed and accuracy. Keep testing, iterating, and refining your setup to stay ahead in the ever-evolving world of AI development.
RAG Cost Calculator: A Free Tool to Calculate Your Cost in Seconds
Estimating the cost of a Retrieval-Augmented Generation (RAG) pipeline involves analyzing expenses across vector storage, compute resources, and API usage. Key cost drivers include vector database queries, embedding generation, and LLM inference.
The RAG Cost Calculator is a free tool that quickly estimates the cost of building a RAG pipeline, including chunking, embedding, vector storage/search, and LLM generation. It also helps you identify cost-saving opportunities and achieve up to 10x cost reduction on vector databases with the serverless option.
Calculate your RAG cost
What Have You Learned?
By diving into this tutorial, you’ve unlocked the power of combining cutting-edge tools to build a fully functional RAG system! You learned how Haystack acts as the glue, seamlessly orchestrating your pipeline by connecting the dots between components. Zilliz Cloud stepped in as your high-performance vector database, storing and retrieving embeddings at lightning speed, while Cohere’s embed-english-light-v3.0 transformed raw text into rich, semantic vectors that capture meaning beyond keywords. Then, Google Vertex AI Gemini 1.5 Flash brought the magic, generating human-like responses by synthesizing retrieved context with its advanced reasoning capabilities. Together, these tools form a dynamic RAG pipeline that’s not just smart—it’s adaptable, scaling to handle everything from simple Q&A to complex, domain-specific queries.
But wait, there’s more! You also picked up pro tips for optimizing costs and performance, like tweaking chunk sizes and balancing speed with accuracy. The free RAG cost calculator shared in the tutorial? That’s your secret weapon for budgeting experiments without surprises. Now that you’ve seen how these pieces fit together, imagine the possibilities: customizing pipelines for niche industries, enhancing chatbots with real-time data, or even building your own AI-powered research assistant. The tools are in your hands, and the future is wide open. So go ahead—experiment, iterate, and let your creativity run wild. Your next breakthrough RAG application is just a few lines of code away. Let’s build something amazing! 🚀
Further Resources
🌟 In addition to this RAG tutorial, unleash your full potential with these incredible resources to level up your RAG skills.
- How to Build a Multimodal RAG | Documentation
- How to Enhance the Performance of Your RAG Pipeline
- Graph RAG with Milvus | Documentation
- How to Evaluate RAG Applications - Zilliz Learn
- Generative AI Resource Hub | Zilliz
We'd Love to Hear What You Think!
We’d love to hear your thoughts! 🌟 Leave your questions or comments below or join our vibrant Milvus Discord community to share your experiences, ask questions, or connect with thousands of AI enthusiasts. Your journey matters to us!
If you like this tutorial, show your support by giving our Milvus GitHub repo a star ⭐—it means the world to us and inspires us to keep creating! 💖