Build RAG Chatbot with Haystack, Haystack In-memory store, Mistral Large, and Optimum all-mpnet-base-v2
Introduction to RAG
Retrieval-Augmented Generation (RAG) is a game-changer for GenAI applications, especially in conversational AI. It combines the power of pre-trained large language models (LLMs) like OpenAI’s GPT with external knowledge sources stored in vector databases such as Milvus and Zilliz Cloud, allowing for more accurate, contextually relevant, and up-to-date response generation. A RAG pipeline usually consists of four basic components: a vector database, an embedding model, an LLM, and a framework.
Key Components We'll Use for This RAG Chatbot
This tutorial shows you how to build a simple RAG chatbot in Python using the following components:
- Haystack: An open-source Python framework designed for building production-ready NLP applications, particularly question answering and semantic search systems. Haystack excels at retrieving information from large document collections through its modular architecture that combines retrieval and reader components. Ideal for developers creating search applications, chatbots, and knowledge management systems that require efficient document processing and accurate information extraction from unstructured text.
- Haystack in-memory store: a very simple, in-memory document store with no extra services or dependencies. It is great for experimenting with Haystack, but we do not recommend using it in production. If you want a much more scalable solution for your apps or even enterprise projects, we recommend Zilliz Cloud, a fully managed vector database service built on the open-source Milvus that offers a free tier supporting up to 1 million vectors.
- Mistral Large: A state-of-the-art language model optimized for advanced reasoning, multilingual tasks, and high-stakes decision-making. It excels in code generation, complex analysis, and cross-lingual understanding, offering scalability, efficiency, and high accuracy for enterprise solutions, AI-driven research, and global customer interaction platforms.
- Optimum all-mpnet-base-v2: the sentence-transformers all-mpnet-base-v2 model served through the Hugging Face Optimum library and ONNX Runtime for fast inference. It produces 768-dimensional English sentence embeddings and delivers strong accuracy in tasks like semantic search, clustering, and retrieval-augmented generation (RAG). Ideal for applications requiring fast, precise text analysis across diverse domains.
By the end of this tutorial, you’ll have a functional chatbot capable of answering questions based on a custom knowledge base.
Note: Since we may use proprietary models in our tutorials, make sure you have the required API key beforehand.
Step 1: Install and Set Up Haystack
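Haystack 2.x ships as the haystack-ai package. The MarkdownToDocument converter we use below also relies on a Markdown parsing backend; in our setup the following covers everything, though the exact extras may vary by version (check the Haystack docs if an import fails):
pip install haystack-ai requests markdown-it-py mdit_plain
With the dependencies in place, import the pieces we need for indexing: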
import os
import requests
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
Step 2: Install and Set Up Mistral Large
To use Mistral models, you first need a Mistral API key. You can provide it in either of two ways:
- the api_key init parameter, using Haystack's Secret API
- the MISTRAL_API_KEY environment variable (recommended)
Once you have the API key, install the mistral-haystack package:
pip install mistral-haystack
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret
generator = MistralChatGenerator(
    api_key=Secret.from_env_var("MISTRAL_API_KEY"),
    model='mistral-large-latest',
    streaming_callback=print_streaming_chunk,
)
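Before wiring the generator into a pipeline, it can be worth a quick smoke test. This minimal check is our addition, not part of the original pipeline, and assumes MISTRAL_API_KEY is set in your environment:
# Quick smoke test; tokens stream to stdout via print_streaming_chunk
result = generator.run(messages=[ChatMessage.from_user("Say hello in one short sentence.")])
print(result["replies"][0].text)  # on older Haystack versions, use .content instead of .text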
Step 3: Install and Set Up Optimum all-mpnet-base-v2
Haystack's OptimumTextEmbedder embeds text strings using models loaded with the Hugging Face Optimum library, running them on ONNX Runtime for high-speed inference. Like other embedders, this component lets you add prefixes (and suffixes) to include instructions. For more details, refer to the Optimum API reference.
pip install optimum-haystack
from haystack_integrations.components.embedders.optimum import (
    OptimumDocumentEmbedder,
    OptimumTextEmbedder,
)

# The text embedder embeds query strings at search time; the document
# embedder embeds whole Documents at indexing time.
text_embedder = OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2")
text_embedder.warm_up()  # downloads the model and exports it to ONNX

document_embedder = OptimumDocumentEmbedder(model="sentence-transformers/all-mpnet-base-v2")
document_embedder.warm_up()
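To sanity-check the embedder, you can embed a single string and inspect the result; all-mpnet-base-v2 produces 768-dimensional vectors. A minimal check:
sample = text_embedder.run(text="Milvus is a vector database.")
print(len(sample["embedding"]))  # expect 768 for all-mpnet-base-v2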
Step 4: Install and Set Up Haystack In-memory store
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers import InMemoryEmbeddingRetriever
document_store = InMemoryDocumentStore()
retriever = InMemoryEmbeddingRetriever(document_store=document_store)
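By default, InMemoryDocumentStore compares embeddings with dot product. Sentence-transformers models are commonly paired with cosine similarity, so you may prefer to set that explicitly; a small optional variant of the setup above:
# Optional: use cosine similarity instead of the default dot product
document_store = InMemoryDocumentStore(embedding_similarity_function="cosine")
retriever = InMemoryEmbeddingRetriever(document_store=document_store)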
Step 5: Build a RAG Chatbot
Now that you’ve set up all the components, let’s build a simple chatbot. We’ll use the Milvus introduction doc as a private knowledge base. You can replace it with your own dataset to customize your RAG chatbot.
url = 'https://raw.githubusercontent.com/milvus-io/milvus-docs/refs/heads/v2.5.x/site/en/about/overview.md'
example_file = 'example_file.md'
response = requests.get(url)
with open(example_file, 'wb') as f:
    f.write(response.content)
file_paths = [example_file] # You can replace it with your own file paths.
indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
indexing_pipeline.add_component("embedder", document_embedder)
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")
indexing_pipeline.run({"converter": {"sources": file_paths}})
# print("Number of documents:", document_store.count_documents())
question = "What is Milvus?" # You can replace it with your own question.
retrieval_pipeline = Pipeline()
retrieval_pipeline.add_component("embedder", text_embedder)
retrieval_pipeline.add_component("retriever", retriever)
retrieval_pipeline.connect("embedder.embedding", "retriever.query_embedding")
retrieval_results = retrieval_pipeline.run({"embedder": {"text": question}})
# for doc in retrieval_results["retriever"]["documents"]:
#     print(doc.content)
#     print("-" * 10)
from haystack.components.builders import ChatPromptBuilder

# A component instance can only belong to one pipeline, so create fresh
# copies of the retriever and text embedder for the RAG pipeline.
retriever = InMemoryEmbeddingRetriever(document_store=document_store)
text_embedder = OptimumTextEmbedder(model="sentence-transformers/all-mpnet-base-v2")
text_embedder.warm_up()
prompt_template = """Answer the following query based on the provided context. If the context does
not include an answer, reply with 'I don't know'.\n
Query: {{query}}
Documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Answer:
"""
rag_pipeline = Pipeline()
rag_pipeline.add_component("text_embedder", text_embedder)
rag_pipeline.add_component("retriever", retriever)
# MistralChatGenerator consumes a list of ChatMessages rather than a plain
# string, so we wrap the template in a ChatMessage via ChatPromptBuilder.
rag_pipeline.add_component("prompt_builder", ChatPromptBuilder(template=[ChatMessage.from_user(prompt_template)]))
rag_pipeline.add_component("generator", generator)
rag_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
rag_pipeline.connect("retriever.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "generator.messages")
results = rag_pipeline.run({"text_embedder": {"text": question}, "prompt_builder": {"query": question}})
print('RAG answer:\n', results["generator"]["replies"][0].text)
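To turn this one-shot query into an actual chatbot, you can wrap the pipeline in a simple read-eval loop. A minimal sketch (our addition; there is no conversation memory, so each question is answered independently):
while True:
    user_question = input("\nAsk a question (or type 'quit' to exit): ")
    if user_question.strip().lower() == "quit":
        break
    out = rag_pipeline.run({"text_embedder": {"text": user_question}, "prompt_builder": {"query": user_question}})
    print(out["generator"]["replies"][0].text)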
Optimization Tips
As you build your RAG system, optimization is key to ensuring peak performance and efficiency. While setting up the components is an essential first step, fine-tuning each one will help you create a solution that works even better and scales seamlessly. In this section, we’ll share some practical tips for optimizing all these components, giving you the edge to build smarter, faster, and more responsive RAG applications.
Haystack optimization tips
To optimize Haystack in a RAG setup, ensure you use an efficient retriever like FAISS or Milvus for scalable and fast similarity searches. Fine-tune your document store settings, such as indexing strategies and storage backends, to balance speed and accuracy. Use batch processing for embedding generation to reduce latency and optimize API calls. Leverage Haystack's pipeline caching to avoid redundant computations, especially for frequently queried documents. Tune your reader model by selecting a lightweight yet accurate transformer-based model like DistilBERT to speed up response times. Implement query rewriting or filtering techniques to enhance retrieval quality, ensuring the most relevant documents are retrieved for generation. Finally, monitor system performance with Haystack’s built-in evaluation tools to iteratively refine your setup based on real-world query performance.
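Chunking granularity is one of the easiest of these levers to pull: splitting by word with a modest overlap often retrieves more coherent context than the very short sentence pairs used in Step 5. A sketch using the same DocumentSplitter (the numbers are starting points, not recommendations):
# Alternative splitter settings: ~200-word chunks with 20 words of overlap
splitter = DocumentSplitter(split_by="word", split_length=200, split_overlap=20)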
Haystack in-memory store optimization tips
The Haystack in-memory store is just a very simple, in-memory document store with no extra services or dependencies. It is meant for experimenting with RAG pipelines inside the Haystack framework, and we do not recommend using it in production. If you want a much more scalable solution for your apps or even enterprise projects, we recommend Zilliz Cloud, a fully managed vector database service built on the open-source Milvus that offers a free tier supporting up to 1 million vectors.
Mistral Large optimization tips
To enhance Mistral Large’s performance in RAG systems, prioritize efficient context handling by truncating or summarizing retrieved documents to fit its token limit while retaining key information. Fine-tune prompts to explicitly guide the model to reference retrieved content, using phrases like “based on the provided context.” Adjust temperature settings (lower for factuality, higher for creativity) and max token limits to balance output quality and length. Implement caching for frequent queries, and use parallel processing to speed up document retrieval. Regularly evaluate retrieval relevance scores to ensure high-quality inputs, and experiment with chunk sizes/overlaps during indexing to optimize context granularity.
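In Haystack's Mistral integration, sampling controls are passed through generation_kwargs. A sketch of a more factuality-oriented configuration (the specific values are illustrative, not tuned recommendations):
generator = MistralChatGenerator(
    api_key=Secret.from_env_var("MISTRAL_API_KEY"),
    model='mistral-large-latest',
    generation_kwargs={"temperature": 0.2, "max_tokens": 512},  # low temperature for factual answers, capped length
)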
Optimum all-mpnet-base-v2 optimization tips
To optimize Optimum all-mpnet-base-v2 in a RAG setup, preprocess input text by trimming redundant whitespace, normalizing casing, and splitting long documents into smaller chunks (≤512 tokens) to align with the model’s max sequence length. Use batch processing for embeddings to leverage GPU parallelism, adjusting batch size based on GPU memory. Quantize the model via ONNX Runtime or FP16 precision for faster inference. Cache frequently accessed embeddings to reduce recomputation, and pair with efficient vector search libraries (e.g., FAISS) for low-latency retrieval. Regularly update and prune the document corpus to maintain relevance.
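If you have a GPU, ONNX Runtime can run the exported model on it via an execution provider, and batching amortizes per-call overhead. A hedged sketch; onnx_execution_provider and batch_size are the relevant knobs in the integration versions we have seen, but verify the parameter names against the Optimum API reference for your version:
document_embedder = OptimumDocumentEmbedder(
    model="sentence-transformers/all-mpnet-base-v2",
    onnx_execution_provider="CUDAExecutionProvider",  # GPU inference via ONNX Runtime; the default is CPU
    batch_size=64,  # tune to your GPU memory
)
document_embedder.warm_up()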
By implementing these tips across your components, you'll be able to enhance the performance and functionality of your RAG system, ensuring it’s optimized for both speed and accuracy. Keep testing, iterating, and refining your setup to stay ahead in the ever-evolving world of AI development.
RAG Cost Calculator: A Free Tool to Calculate Your Cost in Seconds
Estimating the cost of a Retrieval-Augmented Generation (RAG) pipeline involves analyzing expenses across vector storage, compute resources, and API usage. Key cost drivers include vector database queries, embedding generation, and LLM inference.
RAG Cost Calculator is a free tool that quickly estimates the cost of building a RAG pipeline, including chunking, embedding, vector storage/search, and LLM generation. It also helps you identify cost-saving opportunities and achieve up to 10x cost reduction on vector databases with the serverless option.
Calculate your RAG cost
What Have You Learned?
By diving into this tutorial, you’ve unlocked the magic of combining cutting-edge tools to build a powerful RAG system from scratch! You learned how Haystack acts as the backbone, orchestrating the flow of data between components with ease, while the Haystack In-memory store keeps things lightning-fast by handling vector storage and retrieval without breaking a sweat. The Optimum all-mpnet-base-v2 embedding model became your trusty sidekick, transforming text into rich, meaningful vectors that capture the essence of your documents. And let’s not forget Mistral Large, the LLM powerhouse that turns retrieved context into coherent, insightful answers—like having a brilliant researcher and storyteller rolled into one! Together, these tools create a seamless pipeline where documents are ingested, indexed, and queried with precision, proving that RAG isn’t just a concept—it’s a practical, scalable solution you can implement today.
But wait, there’s more! You also picked up pro tips for optimizing performance, like tuning chunk sizes for embeddings and balancing speed with accuracy in retrieval. The tutorial even threw in a free RAG cost calculator to help you estimate expenses and scale smartly, ensuring your projects stay budget-friendly. Now that you’ve seen how these pieces fit together—like a well-oiled machine—it’s time to take the reins! Whether you’re building a chatbot, a research assistant, or a knowledge hub, you’ve got the tools and know-how to make it happen. So go ahead, experiment, tweak, and innovate. The world of RAG is yours to explore, and the possibilities are endless. Your next breakthrough is just a few lines of code away—let’s build something amazing! 🚀
Further Resources
🌟 In addition to this RAG tutorial, unleash your full potential with these incredible resources to level up your RAG skills.
- How to Build a Multimodal RAG | Documentation
- How to Enhance the Performance of Your RAG Pipeline
- Graph RAG with Milvus | Documentation
- How to Evaluate RAG Applications - Zilliz Learn
- Generative AI Resource Hub | Zilliz
We'd Love to Hear What You Think!
We’d love to hear your thoughts! 🌟 Leave your questions or comments below or join our vibrant Milvus Discord community to share your experiences, ask questions, or connect with thousands of AI enthusiasts. Your journey matters to us!
If you like this tutorial, show your support by giving our Milvus GitHub repo a star ⭐—it means the world to us and inspires us to keep creating! 💖