Build RAG Chatbot with Haystack, Pgvector, Mixtral 8x7B, and jina-colbert-v2
Introduction to RAG
Retrieval-Augmented Generation (RAG) is a game-changer for GenAI applications, especially in conversational AI. It combines the power of pre-trained large language models (LLMs) like OpenAI’s GPT with external knowledge sources stored in vector databases such as Milvus and Zilliz Cloud, allowing for more accurate, contextually relevant, and up-to-date response generation. A RAG pipeline usually consists of four basic components: a vector database, an embedding model, an LLM, and a framework.
Key Components We'll Use for This RAG Chatbot
This tutorial shows you how to build a simple RAG chatbot in Python using the following components:
- Haystack: An open-source Python framework designed for building production-ready NLP applications, particularly question answering and semantic search systems. Haystack excels at retrieving information from large document collections through its modular architecture that combines retrieval and reader components. Ideal for developers creating search applications, chatbots, and knowledge management systems that require efficient document processing and accurate information extraction from unstructured text.
- Pgvector: an open-source extension for PostgreSQL that enables efficient storage and querying of high-dimensional vector data, essential for machine learning and AI applications. Designed to handle embeddings, it supports fast approximate nearest neighbor (ANN) searches using algorithms like HNSW and IVFFlat. Since it is a vector search add-on to a traditional relational database rather than a purpose-built vector database, it lacks the scalability, availability, and many other advanced features required by enterprise-level applications. Therefore, if you need a more scalable solution or would rather not manage your own infrastructure, we recommend using Zilliz Cloud, a fully managed vector database service built on the open-source Milvus that offers a free tier supporting up to 1 million vectors.
- Mixtral 8x7B: A sparse mixture-of-experts (MoE) model with eight 7B parameter networks, designed for efficient, high-performance NLP tasks. Excels in text generation, reasoning, and multilingual support while minimizing computational costs. Ideal for scalable enterprise applications, real-time chatbots, and multi-task environments requiring optimized resource utilization and versatile AI capabilities.
- Jina-ColBERT-v2: A multilingual late-interaction retrieval model optimized for semantic search and document ranking. It combines ColBERT-style contextualized late interaction over token-level embeddings with efficient indexing, delivering high accuracy in understanding query intent and matching relevant text. Ideal for large-scale enterprise search, Q&A systems, and content discovery platforms requiring nuanced semantic understanding and rapid retrieval.
By the end of this tutorial, you’ll have a functional chatbot capable of answering questions based on a custom knowledge base.
Note: Since we may use proprietary models in our tutorials, make sure you have the required API key beforehand.
Step 1: Install and Set Up Haystack
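Haystack 2.x is published on PyPI as the haystack-ai package. Install it first; the MarkdownToDocument converter used below may also need the markdown-it-py and mdit_plain packages, so we include them here as a precaution:
pip install haystack-ai markdown-it-py mdit_plain
With the package installed, bring in the imports we'll use for indexing: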
import os
import requests
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
Step 2: Install and Set Up Mixtral 8x7B
To use Mistral models, you first need to get a Mistral API key. You can provide this key in:
- The api_key init parameter, using the Secret API
- The MISTRAL_API_KEY environment variable (recommended)
Now that you have the API key, let's install the mistral-haystack package:
pip install mistral-haystack
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret
generator = MistralChatGenerator(api_key=Secret.from_env_var("MISTRAL_API_KEY"), streaming_callback=print_streaming_chunk, model='open-mixtral-8x7b')
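Before wiring the generator into a pipeline, you can give it a quick sanity check by sending a single chat message. This is an optional sketch; it assumes MISTRAL_API_KEY is set in your environment:
# Optional sanity check: stream one reply from Mixtral 8x7B.
result = generator.run(messages=[ChatMessage.from_user("In one sentence, what is retrieval-augmented generation?")])
print(result["replies"][0])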
Step 3: Install and Set Up jina-colbert-v2
pip install jina-haystack
from haystack_integrations.components.embedders.jina import JinaTextEmbedder
from haystack_integrations.components.embedders.jina import JinaDocumentEmbedder
text_embedder = JinaTextEmbedder(api_key=Secret.from_token("<your-api-key>"), model="jina-colbert-v2")
document_embedder = JinaDocumentEmbedder(api_key=Secret.from_token("<your-api-key>"), model="jina-colbert-v2")
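To confirm your Jina API key works, you can embed a short string. Per the standard Haystack embedder interface, the result is a dictionary with an "embedding" key; its length tells you the dimensionality your document store must accommodate:
# Optional sanity check: embed one string and inspect the vector size.
result = text_embedder.run(text="A quick embedding test")
print("Embedding dimension:", len(result["embedding"]))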
Step 4: Install and Set Up Pgvector
To quickly set up a PostgreSQL database with pgvector, you can use Docker:
docker run -d -p 5432:5432 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=postgres ankane/pgvector
To use pgvector with Haystack, install the pgvector-haystack
integration:
pip install pgvector-haystack
import os
from haystack_integrations.document_stores.pgvector import PgvectorDocumentStore
from haystack_integrations.components.retrievers.pgvector import PgvectorEmbeddingRetriever
os.environ["PG_CONN_STR"] = "postgresql://postgres:postgres@localhost:5432/postgres"
document_store = PgvectorDocumentStore()
retriever = PgvectorEmbeddingRetriever(document_store=document_store)
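One caveat: PgvectorDocumentStore creates its table with a default embedding dimension of 768, and that dimension must match the vectors your embedder actually returns. If they differ, set it explicitly when creating the store. This is a sketch; substitute the dimension you observed in the embedder sanity check above:
# Align the store's vector column with your embedder's output size.
document_store = PgvectorDocumentStore(
    embedding_dimension=768,  # replace with your embedder's actual output size
    recreate_table=True,      # drop and recreate the table if the schema changed
)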
Step 5: Build a RAG Chatbot
Now that you’ve set up all the components, let’s build a simple chatbot. We’ll use the Milvus introduction doc as a private knowledge base. You can replace it with your own dataset to customize your RAG chatbot.
url = 'https://raw.githubusercontent.com/milvus-io/milvus-docs/refs/heads/v2.5.x/site/en/about/overview.md'
example_file = 'example_file.md'
response = requests.get(url)
with open(example_file, 'wb') as f:
    f.write(response.content)
file_paths = [example_file] # You can replace it with your own file paths.
indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
indexing_pipeline.add_component("embedder", document_embedder)
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")
indexing_pipeline.run({"converter": {"sources": file_paths}})
# print("Number of documents:", document_store.count_documents())
question = "What is Milvus?" # You can replace it with your own question.
retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component("embedder", text_embedder)
retrieval_pipeline.add_component("retriever", retriever)
retrieval_pipeline.connect("embedder", "retriever")
retrieval_results = retrieval_pipeline.run({"embedder": {"text": question}})
# for doc in retrieval_results["retriever"]["documents"]:
# print(doc.content)
# print("-" * 10)
from haystack.utils import Secret
from haystack.dataclasses import ChatMessage
from haystack.components.builders import ChatPromptBuilder

# A Haystack component instance can only belong to one pipeline, so create
# fresh retriever and text embedder instances for the RAG pipeline.
retriever = PgvectorEmbeddingRetriever(document_store=document_store)
text_embedder = JinaTextEmbedder(api_key=Secret.from_token("<your-api-key>"), model="jina-colbert-v2")
prompt_template = """Answer the following query based on the provided context. If the context does
not include an answer, reply with 'I don't know'.\n
Query: {{query}}
Documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Answer:
"""
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", text_embedder)
rag_pipeline.add_component("retriever", retriever)
rag_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
rag_pipeline.add_component("generator", generator)
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder", "generator")
results = rag_pipeline.run({"text_embedder": {"text": question}, "prompt_builder": {"query": question},})
print('RAG answer:\n', results["generator"]["replies"][0].text)
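A chatbot should handle more than one question, so as a final touch you can wrap the pipeline in a small input loop. This is a minimal sketch that reuses the rag_pipeline built above:
# Minimal interactive loop around rag_pipeline (type 'exit' to stop).
while True:
    user_query = input("\nAsk a question (or type 'exit' to quit): ")
    if user_query.strip().lower() == "exit":
        break
    results = rag_pipeline.run({"text_embedder": {"text": user_query}, "prompt_builder": {"query": user_query}})
    print("RAG answer:\n", results["generator"]["replies"][0].text)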
Optimization Tips
As you build your RAG system, optimization is key to ensuring peak performance and efficiency. While setting up the components is an essential first step, fine-tuning each one will help you create a solution that works even better and scales seamlessly. In this section, we’ll share some practical tips for optimizing all these components, giving you the edge to build smarter, faster, and more responsive RAG applications.
Haystack optimization tips
To optimize Haystack in a RAG setup, ensure you use an efficient retriever like FAISS or Milvus for scalable and fast similarity searches. Fine-tune your document store settings, such as indexing strategies and storage backends, to balance speed and accuracy. Use batch processing for embedding generation to reduce latency and optimize API calls. Leverage Haystack's pipeline caching to avoid redundant computations, especially for frequently queried documents. Tune your reader model by selecting a lightweight yet accurate transformer-based model like DistilBERT to speed up response times. Implement query rewriting or filtering techniques to enhance retrieval quality, ensuring the most relevant documents are retrieved for generation. Finally, monitor system performance with Haystack’s built-in evaluation tools to iteratively refine your setup based on real-world query performance.
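For instance, two of the knobs mentioned above map directly onto components from this tutorial: chunking is controlled by DocumentSplitter, and retrieval depth by the retriever's top_k. The values below are illustrative starting points, not recommendations:
# Illustrative tuning of components used in this tutorial.
splitter = DocumentSplitter(split_by="sentence", split_length=5, split_overlap=1)  # larger chunks, slight overlap
retriever = PgvectorEmbeddingRetriever(document_store=document_store, top_k=5)     # retrieve more candidates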
pgvector optimization tips
To optimize pgvector in a Retrieval-Augmented Generation (RAG) setup, consider indexing your vectors using HNSW or IVFFlat to significantly speed up search queries and improve retrieval performance. Make sure to leverage parallelization for query execution, allowing multiple queries to be processed simultaneously, especially for large datasets. Optimize memory usage by tuning the vector storage size and using compressed embeddings where possible. To further enhance query speed, implement pre-filtering techniques to narrow down the search space before querying. Regularly rebuild indexes to ensure they are up to date with any new data. Fine-tune vectorization models to reduce dimensionality without sacrificing accuracy, thus improving both storage efficiency and retrieval times. Finally, manage resource allocation carefully, utilizing horizontal scaling for larger datasets and offloading intensive operations to dedicated processing units to maintain responsiveness during high-traffic periods.
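In pgvector-haystack specifically, ANN indexing is exposed through the document store's search_strategy parameter. A hedged sketch (parameter names follow the pgvector-haystack integration; defaults may vary by version):
# Switch PgvectorDocumentStore from exact search to an HNSW ANN index.
document_store = PgvectorDocumentStore(
    search_strategy="hnsw",               # default is exact nearest-neighbor search
    vector_function="cosine_similarity",  # distance metric for both search and the index
)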
Mixtral 8x7B optimization tips
To optimize Mixtral 8x7B in RAG, prioritize efficient context retrieval by fine-tuning chunk size and overlap for balanced relevance and latency. Use sparse attention configurations to reduce computational overhead, and enable tensor parallelism to leverage its mixture-of-experts architecture. Quantize the model to 4-bit precision (e.g., via GPTQ) for faster inference with minimal accuracy loss. Pre-filter retrieved documents to remove noise, and cache frequent query embeddings. Adjust temperature (0.2-0.5) and max tokens to balance creativity and focus. Profile expert routing to ensure balanced workload distribution across GPU resources.
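Sampling parameters such as temperature and max tokens can be passed to MistralChatGenerator through generation_kwargs; the keys follow the Mistral chat API, and the values below are only illustrative:
# Illustrative generation settings for a focused, bounded RAG answer.
generator = MistralChatGenerator(
    api_key=Secret.from_env_var("MISTRAL_API_KEY"),
    model='open-mixtral-8x7b',
    generation_kwargs={"temperature": 0.3, "max_tokens": 512},
)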
jina-colbert-v2 optimization tips
To optimize jina-colbert-v2 in a RAG setup, ensure input text is preprocessed by truncating or chunking documents to fit the model's input limit (jina-colbert-v2 accepts documents up to 8,192 tokens), preserving context. Use batch inference for embeddings to maximize GPU utilization. Fine-tune on domain-specific data to improve relevance. Leverage ColBERT’s late interaction by precomputing document embeddings and caching them for faster retrieval. Adjust the compression ratio for query-document token tensors to balance speed and accuracy. Filter irrelevant documents early using metadata to reduce computational overhead. Monitor retrieval latency and accuracy to iteratively refine parameters.
By implementing these tips across your components, you'll be able to enhance the performance and functionality of your RAG system, ensuring it’s optimized for both speed and accuracy. Keep testing, iterating, and refining your setup to stay ahead in the ever-evolving world of AI development.
RAG Cost Calculator: A Free Tool to Calculate Your Cost in Seconds
Estimating the cost of a Retrieval-Augmented Generation (RAG) pipeline involves analyzing expenses across vector storage, compute resources, and API usage. Key cost drivers include vector database queries, embedding generation, and LLM inference.
RAG Cost Calculator is a free tool that quickly estimates the cost of building a RAG pipeline, including chunking, embedding, vector storage/search, and LLM generation. It also helps you identify cost-saving opportunities and achieve up to 10x cost reduction on vector databases with the serverless option.
Calculate your RAG cost
What Have You Learned?
By diving into this tutorial, you’ve unlocked the power of combining cutting-edge tools to build a RAG system from the ground up! You learned how Haystack acts as the backbone, seamlessly connecting components into a cohesive pipeline. With Pgvector, you discovered how to store and retrieve vectors efficiently, turning unstructured data into searchable knowledge. The Mixtral 8x7B LLM then stepped in as your creative powerhouse, generating human-like responses by synthesizing retrieved context with its vast understanding. And let’s not forget jina-colbert-v2, the embedding model that supercharged your system’s ability to grasp context and nuance, ensuring your RAG pipeline delivers precise, relevant answers. Together, these tools transformed raw data into an intelligent, responsive application—proving that even complex systems can feel approachable with the right framework and models!
But this wasn’t just about assembly—it was about optimization and innovation! You explored pro tips like tweaking chunk sizes for better retrieval, balancing speed and accuracy in vector searches, and even using a free RAG cost calculator to estimate expenses and scale smarter. Imagine the possibilities now: building chatbots that understand niche domains, creating personalized research tools, or even automating customer support with flair. The tools are yours, the foundation is set, and the only limit is your creativity. So go ahead—experiment, iterate, and push boundaries! Whether you’re refining your first prototype or dreaming up the next big AI-driven solution, remember: every line of code brings you closer to something extraordinary. Start building, stay curious, and let your RAG applications shine! 🚀
Further Resources
🌟 In addition to this RAG tutorial, unleash your full potential with these incredible resources to level up your RAG skills.
- How to Build a Multimodal RAG | Documentation
- How to Enhance the Performance of Your RAG Pipeline
- Graph RAG with Milvus | Documentation
- How to Evaluate RAG Applications - Zilliz Learn
- Generative AI Resource Hub | Zilliz
We'd Love to Hear What You Think!
We’d love to hear your thoughts! 🌟 Leave your questions or comments below or join our vibrant Milvus Discord community to share your experiences, ask questions, or connect with thousands of AI enthusiasts. Your journey matters to us!
If you like this tutorial, show your support by giving our Milvus GitHub repo a star ⭐—it means the world to us and inspires us to keep creating! 💖