Build RAG Chatbot with Haystack, Milvus, OpenAI GPT-4, and Ollama snowflake-arctic-embed
Introduction to RAG
Retrieval-Augmented Generation (RAG) is a game-changer for GenAI applications, especially in conversational AI. It combines the power of pre-trained large language models (LLMs) like OpenAI’s GPT with external knowledge sources stored in vector databases such as Milvus and Zilliz Cloud, allowing for more accurate, contextually relevant, and up-to-date response generation. A RAG pipeline usually consists of four basic components: a vector database, an embedding model, an LLM, and a framework.
Key Components We'll Use for This RAG Chatbot
This tutorial shows you how to build a simple RAG chatbot in Python using the following components:
- Haystack: An open-source Python framework designed for building production-ready NLP applications, particularly question answering and semantic search systems. Haystack excels at retrieving information from large document collections through its modular architecture that combines retrieval and reader components. Ideal for developers creating search applications, chatbots, and knowledge management systems that require efficient document processing and accurate information extraction from unstructured text.
- Milvus: An open-source vector database optimized to store, index, and search large-scale vector embeddings efficiently, perfect for use cases like RAG, semantic search, and recommender systems. If you'd rather not manage your own infrastructure, we recommend using Zilliz Cloud, which is a fully managed vector database service built on Milvus and offers a free tier supporting up to 1 million vectors.
- OpenAI GPT-4: A state-of-the-art multimodal AI model designed for advanced natural language understanding and generation, capable of processing both text and image inputs. Its strengths include superior reasoning, contextual accuracy, and adaptability across domains. Ideal for complex tasks like content creation, data analysis, technical support, and educational tools, while maintaining enhanced safety and ethical alignment compared to predecessors.
- Ollama Snowflake-Arctic-Embed: A high-performance embedding model optimized for semantic understanding and retrieval tasks. It excels in generating dense vector representations for text, offering robust accuracy and scalability. Ideal for enterprise applications like semantic search, recommendation systems, and data clustering, particularly in environments leveraging Snowflake’s data ecosystem for seamless integration and large-scale analytics.
By the end of this tutorial, you’ll have a functional chatbot capable of answering questions based on a custom knowledge base.
Note: Since we may use proprietary models in our tutorials, make sure you have the required API key beforehand.
Step 1: Install and Set Up Haystack
First, install Haystack along with the packages the MarkdownToDocument converter relies on:
pip install --upgrade haystack-ai markdown-it-py mdit_plain
Then import the components we'll use for indexing:
import os
import requests
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
Step 2: Install and Set Up OpenAI GPT-4
To use OpenAI models, you need an OpenAI API key. Haystack's OpenAI integration reads the OPENAI_API_KEY environment variable by default; alternatively, you can pass an API key at initialization with api_key:
generator = OpenAIGenerator(api_key=Secret.from_token("<your-api-key>"), model="gpt-4o-mini")
The generator component needs a prompt to operate, and you can pass any text generation parameters valid for the openai.ChatCompletion.create method directly to this component through the generation_kwargs parameter, both at initialization and in the run() method. For more details on the parameters supported by the OpenAI API, refer to the OpenAI documentation.
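For instance, here's a minimal sketch of both styles, assuming OPENAI_API_KEY is set in your environment; the temperature and max_tokens values are illustrative, not tuned recommendations:
# Pass generation parameters at initialization...
generator = OpenAIGenerator(
    model="gpt-4",
    generation_kwargs={"temperature": 0.2, "max_tokens": 512},
)
# ...or override them for a single call
result = generator.run("Say hello.", generation_kwargs={"temperature": 0.7})
print(result["replies"][0])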
Now let's set up the OpenAI generator for this tutorial:
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret
generator = OpenAIGenerator(model="gpt-4", api_key=Secret.from_token("<your-api-key>"))
Step 3: Install and Set Up Ollama snowflake-arctic-embed
pip install ollama-haystack
Make sure you have Ollama running (either in a Docker container or hosted locally) and that the snowflake-arctic-embed model has been pulled. No other configuration is necessary, since Ollama has the embedding API built in.
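If you're running Ollama locally, these two commands start the server and download the model:
ollama serve   # start the Ollama server (skip if it is already running)
ollama pull snowflake-arctic-embed   # download the embedding model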
from haystack import Document
from haystack_integrations.components.embedders.ollama import OllamaDocumentEmbedder
from haystack_integrations.components.embedders.ollama import OllamaTextEmbedder
text_embedder = OllamaTextEmbedder(model="snowflake-arctic-embed")
document_embedder = OllamaDocumentEmbedder(model="snowflake-arctic-embed")
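As a quick sanity check, you can embed a sample query and document. Assuming the standard run() signatures of the Ollama embedders, the text embedder returns an "embedding" list and the document embedder returns the documents with their embedding fields populated:
# Embed a query string
query_result = text_embedder.run(text="What is Milvus?")
print(len(query_result["embedding"]))  # dimensionality of the embedding vector
# Embed a list of Documents
doc_result = document_embedder.run(documents=[Document(content="Milvus is a vector database.")])
print(doc_result["documents"][0].embedding[:5])  # first few values of the vector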
Step 4: Install and Set Up Milvus
pip install --upgrade pymilvus milvus-haystack
from milvus_haystack import MilvusDocumentStore
from milvus_haystack.milvus_embedding_retriever import MilvusEmbeddingRetriever
document_store = MilvusDocumentStore(
    connection_args={"uri": "./milvus.db"},  # Milvus Lite stores data in a local file
    drop_old=True,  # drop any existing collection with the same name
)
retriever = MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
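If you're running a Milvus server or using Zilliz Cloud instead of the local Milvus Lite file, point the URI at your deployment; the endpoint and API key below are placeholders to replace with your own:
# Self-hosted Milvus server
document_store = MilvusDocumentStore(
    connection_args={"uri": "http://localhost:19530"},
    drop_old=True,
)
# Zilliz Cloud (fully managed Milvus)
document_store = MilvusDocumentStore(
    connection_args={"uri": "<your-zilliz-cloud-endpoint>", "token": "<your-api-key>"},
    drop_old=True,
)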
Step 5: Build a RAG Chatbot
Now that you’ve set up all the components, let’s start building a simple chatbot. We’ll use the Milvus introduction doc as a private knowledge base. You can replace it with your own dataset to customize your RAG chatbot.
url = 'https://raw.githubusercontent.com/milvus-io/milvus-docs/refs/heads/v2.5.x/site/en/about/overview.md'
example_file = 'example_file.md'
response = requests.get(url)
with open(example_file, 'wb') as f:
    f.write(response.content)
file_paths = [example_file] # You can replace it with your own file paths.
indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
indexing_pipeline.add_component("embedder", document_embedder)
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")
indexing_pipeline.run({"converter": {"sources": file_paths}})
# print("Number of documents:", document_store.count_documents())
question = "What is Milvus?" # You can replace it with your own question.
retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component("embedder", text_embedder)
retrieval_pipeline.add_component("retriever", retriever)
retrieval_pipeline.connect("embedder", "retriever")
retrieval_results = retrieval_pipeline.run({"embedder": {"text": question}})
# for doc in retrieval_results["retriever"]["documents"]:
# print(doc.content)
# print("-" * 10)
from haystack.components.builders import PromptBuilder
# A component instance can belong to only one Haystack pipeline, so create
# fresh retriever and text-embedder instances for the RAG pipeline.
retriever = MilvusEmbeddingRetriever(document_store=document_store, top_k=3)
text_embedder = OllamaTextEmbedder(model="snowflake-arctic-embed")
prompt_template = """Answer the following query based on the provided context. If the context does
not include an answer, reply with 'I don't know'.\n
Query: {{query}}
Documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Answer:
"""
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", text_embedder)
rag_pipeline.add_component("retriever", retriever)
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component("generator", generator)
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "generator")
results = rag_pipeline.run({"text_embedder": {"text": question}, "prompt_builder": {"query": question}})
print('RAG answer:\n', results["generator"]["replies"][0])
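To turn this one-shot query into an interactive chatbot, wrap the pipeline in a simple loop. This is a minimal sketch with no conversation memory; each question is answered independently against the knowledge base:
def ask(query: str) -> str:
    # Embed the question, retrieve context, build the prompt, and generate an answer
    results = rag_pipeline.run({"text_embedder": {"text": query}, "prompt_builder": {"query": query}})
    return results["generator"]["replies"][0]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"exit", "quit"}:
        break
    print("Bot:", ask(user_input))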
Optimization Tips
As you build your RAG system, optimization is key to ensuring peak performance and efficiency. While setting up the components is an essential first step, fine-tuning each one will help you create a solution that works even better and scales seamlessly. In this section, we’ll share some practical tips for optimizing all these components, giving you the edge to build smarter, faster, and more responsive RAG applications.
Haystack optimization tips
To optimize Haystack in a RAG setup, ensure you use an efficient retriever like FAISS or Milvus for scalable and fast similarity searches. Fine-tune your document store settings, such as indexing strategies and storage backends, to balance speed and accuracy. Use batch processing for embedding generation to reduce latency and optimize API calls. Leverage Haystack's pipeline caching to avoid redundant computations, especially for frequently queried documents. Tune your reader model by selecting a lightweight yet accurate transformer-based model like DistilBERT to speed up response times. Implement query rewriting or filtering techniques to enhance retrieval quality, ensuring the most relevant documents are retrieved for generation. Finally, monitor system performance with Haystack’s built-in evaluation tools to iteratively refine your setup based on real-world query performance.
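One low-effort way to apply the retrieval-quality advice above is to clean documents before splitting them. Here's a sketch that extends the earlier indexing pipeline with Haystack's DocumentCleaner (if the embedder and writer instances were already added to another pipeline, create fresh ones first):
from haystack.components.preprocessors import DocumentCleaner

indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component("cleaner", DocumentCleaner(remove_empty_lines=True, remove_extra_whitespaces=True))
indexing_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
indexing_pipeline.add_component("embedder", document_embedder)
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "cleaner")
indexing_pipeline.connect("cleaner", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")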
Milvus optimization tips
Milvus serves as a highly efficient vector database, critical for retrieval tasks in a RAG system. To optimize its performance, ensure that indexes are properly built to balance speed and accuracy; consider utilizing HNSW (Hierarchical Navigable Small World) for efficient nearest neighbor search where response time is crucial. Partitioning data based on usage patterns can enhance query performance and reduce load times, enabling better scalability. Regularly monitor and adjust cache settings based on query frequency to avoid latency during data retrieval. Employ batch processing for vector insertions, which can minimize database lock contention and enhance overall throughput. Additionally, fine-tune the model parameters by experimenting with the dimensionality of the vectors; higher dimensions can improve retrieval accuracy but may increase search time, necessitating a balance tailored to your specific use case and hardware infrastructure.
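To make the indexing advice concrete, here's a sketch of requesting an HNSW index when creating the document store. This assumes the index_params argument exposed by milvus-haystack, with illustrative parameter values; note that HNSW requires a Milvus server rather than the local Milvus Lite file:
document_store = MilvusDocumentStore(
    connection_args={"uri": "http://localhost:19530"},
    index_params={
        "index_type": "HNSW",
        "metric_type": "COSINE",
        "params": {"M": 16, "efConstruction": 200},  # graph degree and build-time candidate list size
    },
    drop_old=True,
)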
OpenAI GPT-4 optimization tips
To optimize GPT-4 in RAG, structure prompts to explicitly separate instructions from context using delimiters (e.g., ##CONTEXT##), prioritize concise retrieved passages to stay within token limits, and use system messages to guide tone and formatting. Adjust temperature (lower for factual accuracy, higher for creativity) and set max_tokens to avoid truncation. Employ chunking for long documents, cache frequent queries, and validate outputs against retrieved data to reduce hallucinations. Test iteratively with domain-specific examples to refine performance.
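As one way to apply these tips, the prompt template can separate instructions from retrieved context with explicit delimiters, and the generator can be pinned to conservative settings. The delimiter names and parameter values here are illustrative, not prescriptive:
delimited_template = """Answer the query using ONLY the text between the delimiters.
If the answer is not in the context, reply with 'I don't know'.
##CONTEXT##
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
##END CONTEXT##
Query: {{query}}
Answer:
"""
factual_generator = OpenAIGenerator(
    model="gpt-4",
    generation_kwargs={"temperature": 0.1, "max_tokens": 400},  # low temperature favors factual answers
)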
Ollama Snowflake-Arctic-Embed optimization tips
To optimize Ollama Snowflake-Arctic-Embed in a RAG setup, ensure input text is cleanly chunked (e.g., 256-512 tokens) to align with its context window. Use batch processing for embeddings to reduce latency, and leverage hardware acceleration (e.g., CUDA for GPUs). Fine-tune with domain-specific data to improve retrieval relevance. Quantize the model for faster inference with minimal accuracy loss. Cache frequently accessed embeddings, and experiment with dimensionality reduction techniques like PCA if storage or speed constraints exist. Regularly validate embedding quality using similarity benchmarks.
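For instance, to target chunks in the 256-512 token range, you can split by words instead of sentences. Word counts only approximate tokens, so treat these numbers as a starting point to tune:
splitter = DocumentSplitter(
    split_by="word",
    split_length=300,  # roughly 300 words per chunk, approximating the token budget
    split_overlap=30,  # overlap between chunks preserves context across boundaries
)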
By implementing these tips across your components, you'll be able to enhance the performance and functionality of your RAG system, ensuring it’s optimized for both speed and accuracy. Keep testing, iterating, and refining your setup to stay ahead in the ever-evolving world of AI development.
RAG Cost Calculator: A Free Tool to Calculate Your Cost in Seconds
Estimating the cost of a Retrieval-Augmented Generation (RAG) pipeline involves analyzing expenses across vector storage, compute resources, and API usage. Key cost drivers include vector database queries, embedding generation, and LLM inference.
RAG Cost Calculator is a free tool that quickly estimates the cost of building a RAG pipeline, including chunking, embedding, vector storage/search, and LLM generation. It also helps you identify cost-saving opportunities and achieve up to 10x cost reduction on vector databases with the serverless option.
Calculate your RAG cost
What Have You Learned?
By diving into this tutorial, you’ve unlocked the magic of building a RAG system from the ground up! You learned how to weave together Haystack as your orchestration framework, Milvus as your lightning-fast vector database, OpenAI GPT-4 as your creative LLM powerhouse, and Ollama’s snowflake-arctic-embed as your precision embedding model. Each component plays a starring role: Haystack stitches the workflow together, Milvus handles the heavy lifting of storing and retrieving vectorized data at scale, GPT-4 generates human-like responses packed with context, and snowflake-arctic-embed transforms raw text into rich numerical representations for accurate semantic search. Together, they form a seamless pipeline that lets you tap into vast knowledge bases while maintaining the flexibility to adapt to your specific use case—whether it’s customer support, research, or creative brainstorming!
But wait, there’s more! You also picked up pro tips for optimizing your RAG system, like balancing speed and accuracy during retrieval, fine-tuning prompts for GPT-4, and leveraging Milvus’s indexing strategies. Plus, the free RAG cost calculator you explored helps you estimate expenses and scale wisely without surprises. Now that you’ve seen how these pieces fit together, imagine the possibilities—personalized AI assistants, hyper-targeted search engines, or even domain-specific chatbots. The tools are in your hands, and the roadmap is clear. So go ahead, experiment, tweak, and innovate! Build something that wows your users, solves real problems, and pushes the boundaries of what AI can do. Your RAG journey is just beginning—let’s make it legendary! 🚀
Further Resources
🌟 In addition to this RAG tutorial, unleash your full potential with these incredible resources to level up your RAG skills.
- How to Build a Multimodal RAG | Documentation
- How to Enhance the Performance of Your RAG Pipeline
- Graph RAG with Milvus | Documentation
- How to Evaluate RAG Applications - Zilliz Learn
- Generative AI Resource Hub | Zilliz
We'd Love to Hear What You Think!
We’d love to hear your thoughts! 🌟 Leave your questions or comments below or join our vibrant Milvus Discord community to share your experiences, ask questions, or connect with thousands of AI enthusiasts. Your journey matters to us!
If you like this tutorial, show your support by giving our Milvus GitHub repo a star ⭐—it means the world to us and inspires us to keep creating! 💖