Build a RAG Chatbot with LlamaIndex, OpenSearch, Amazon Titan Text G1, and Mistral Embed
Introduction to RAG
Retrieval-Augmented Generation (RAG) is a game-changer for GenAI applications, especially in conversational AI. It combines the power of pre-trained large language models (LLMs) like OpenAI’s GPT with external knowledge sources stored in vector databases such as Milvus and Zilliz Cloud, allowing for more accurate, contextually relevant, and up-to-date response generation. A RAG pipeline usually consists of four basic components: a vector database, an embedding model, an LLM, and a framework.
Key Components We'll Use for This RAG Chatbot
This tutorial shows you how to build a simple RAG chatbot in Python using the following components:
- LlamaIndex: a data framework that connects large language models (LLMs) with various data sources, enabling efficient retrieval-augmented generation (RAG). It helps structure, index, and query private or external data, optimizing LLM applications for search, chatbots, and analytics.
- OpenSearch: An open-source search and analytics suite derived from Elasticsearch. It offers robust full-text search and real-time analytics, with vector search available as an add-on for similarity-based queries, extending its capabilities to handle high-dimensional data. Because vector search is an add-on rather than the core of a purpose-built vector database, OpenSearch can fall short on the scalability, availability, and other advanced features that enterprise-level applications require. If you need a more scalable solution or prefer not to manage your own infrastructure, we recommend Zilliz Cloud, a fully managed vector database service built on the open-source Milvus that offers a free tier supporting up to 1 million vectors.
- Amazon Titan Text G1: Amazon Titan Text G1 is a powerful language model designed for efficient text generation and understanding. It excels in handling large-scale text processing tasks with high accuracy and speed, making it ideal for content creation, summarization, and chatbots in enterprise applications.
- Mistral Embed: A high-performance embedding model designed to convert text into dense vector representations, capturing semantic meaning for tasks like retrieval, clustering, and similarity analysis. It excels in efficiency, multilingual support, and scalability, making it ideal for semantic search engines, multilingual content organization, and large-scale data processing applications requiring rapid, context-aware text analysis.
By the end of this tutorial, you’ll have a functional chatbot capable of answering questions based on a custom knowledge base.
Note: Since we may use proprietary models in our tutorials, make sure you have the required API key beforehand.
Step 1: Install and Set Up LlamaIndex
pip install llama-index
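To confirm the installation (an optional sanity check), you can import the core package and print its version:
import llama_index.core
print(llama_index.core.__version__)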
Step 2: Install and Set Up Amazon Titan Text G1
%pip install llama-index-llms-bedrock
from llama_index.llms.bedrock import Bedrock
profile_name = "Your AWS profile name"  # AWS CLI profile with Bedrock access
llm = Bedrock(model="amazon.titan-text-express-v1", profile_name=profile_name)
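As a quick smoke test, assuming your AWS credentials are configured for the profile above and your account has Bedrock access to the Titan model, you can send a one-off prompt:
# Verify the Bedrock connection with a single completion call.
response = llm.complete("Briefly explain what a vector database is.")
print(response)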
Step 3: Install and Set Up Mistral Embed
%pip install llama-index-embeddings-mistralai
# imports
from llama_index.embeddings.mistralai import MistralAIEmbedding
# get API key and create embeddings
api_key = "YOUR API KEY"
model_name = "mistral-embed"
embed_model = MistralAIEmbedding(model_name=model_name, api_key=api_key)
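Before wiring the embedding model into the pipeline, it's worth confirming it returns vectors of the expected size, since the OpenSearch index created in the next step must use the same dimensionality (mistral-embed produces 1024-dimensional vectors):
# Embed a sample string and check the vector dimensionality.
sample_vector = embed_model.get_text_embedding("Hello, world!")
print(len(sample_vector))  # 1024 for mistral-embed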
Step 4: Install and Set Up OpenSearch
%pip install llama-index-vector-stores-opensearch
from os import getenv
from llama_index.core import SimpleDirectoryReader
from llama_index.vector_stores.opensearch import (
    OpensearchVectorStore,
    OpensearchVectorClient,
)
from llama_index.core import VectorStoreIndex, StorageContext
# http endpoint for your cluster (opensearch required for vector index usage)
endpoint = getenv("OPENSEARCH_ENDPOINT", "http://localhost:9200")
# index to demonstrate the VectorStore impl
idx = getenv("OPENSEARCH_INDEX", "gpt-index-demo")
# OpensearchVectorClient stores text in this field by default
text_field = "content"
# OpensearchVectorClient stores embeddings in this field by default
embedding_field = "embedding"
# OpensearchVectorClient encapsulates logic for a
# single opensearch index with vector search enabled
# dim must match the embedding model; mistral-embed outputs 1024-dim vectors
client = OpensearchVectorClient(
    endpoint, idx, 1024, embedding_field=embedding_field, text_field=text_field
)
# initialize vector store
vector_store = OpensearchVectorStore(client)
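If client construction fails, the most common cause is an unreachable cluster. A minimal connectivity check, assuming a local OpenSearch instance without authentication enabled:
import requests
# A healthy node answers the root endpoint with its cluster and version info.
print(requests.get(endpoint).json())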
Step 5: Build a RAG Chatbot
Now that you’ve set up all the components, let’s build a simple chatbot. We’ll use the Milvus introduction doc as a private knowledge base. You can replace it with your own dataset to customize your RAG chatbot.
import requests
from llama_index.core import SimpleDirectoryReader
# load documents
url = 'https://raw.githubusercontent.com/milvus-io/milvus-docs/refs/heads/v2.5.x/site/en/about/overview.md'
example_file = 'example_file.md' # You can replace this with your own file path.
response = requests.get(url)
with open(example_file, 'wb') as f:
    f.write(response.content)
documents = SimpleDirectoryReader(
    input_files=[example_file]
).load_data()
print("Document ID:", documents[0].doc_id)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
documents, storage_context=storage_context, embed_model=embed_model
)
query_engine = index.as_query_engine(llm=llm)
res = query_engine.query("What is Milvus?") # You can replace it with your own question.
print(res)
Example output
Milvus is a high-performance, highly scalable vector database designed to operate efficiently across various environments, from personal laptops to large-scale distributed systems. It is available as both open-source software and a cloud service. Milvus excels in managing unstructured data by converting it into numerical vectors through embeddings, which facilitates fast and scalable searches and analytics. The database supports a wide range of data types and offers robust data modeling capabilities, allowing users to organize their data effectively. Additionally, Milvus provides multiple deployment options, including a lightweight version for quick prototyping and a distributed version for handling massive data scales.
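The query engine above answers single, independent questions. Since the goal is a chatbot, you may want multi-turn conversations that carry history between turns. LlamaIndex exposes this through a chat engine; a minimal sketch (the chat mode shown is one of several available options):
# A chat engine keeps conversation history, so follow-ups can reference earlier turns.
chat_engine = index.as_chat_engine(chat_mode="condense_question", llm=llm)
print(chat_engine.chat("What is Milvus?"))
print(chat_engine.chat("What deployment options does it offer?"))  # follow-up question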
Optimization Tips
As you build your RAG system, optimization is key to ensuring peak performance and efficiency. While setting up the components is an essential first step, fine-tuning each one will help you create a solution that works even better and scales seamlessly. In this section, we’ll share some practical tips for optimizing all these components, giving you the edge to build smarter, faster, and more responsive RAG applications.
LlamaIndex optimization tips
To optimize LlamaIndex for a Retrieval-Augmented Generation (RAG) setup, structure your data efficiently using hierarchical indices like tree-based or keyword-table indices for faster retrieval. Use embeddings that align with your use case to improve search relevance. Fine-tune chunk sizes to balance context length and retrieval precision. Enable caching for frequently accessed queries to enhance performance. Optimize metadata filtering to reduce unnecessary search space and improve speed. If using vector databases, ensure indexing strategies align with your query patterns. Implement async processing to handle large-scale document ingestion efficiently. Regularly monitor query performance and adjust indexing parameters as needed for optimal results.
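For instance, chunking can be tuned when the index is built. A minimal sketch reusing the objects from the steps above; the chunk_size and chunk_overlap values are illustrative starting points, not recommendations:
from llama_index.core.node_parser import SentenceSplitter

# Smaller chunks sharpen retrieval precision; overlap preserves context at chunk boundaries.
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=50)
index = VectorStoreIndex.from_documents(
    documents,
    storage_context=storage_context,
    embed_model=embed_model,
    transformations=[splitter],
)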
OpenSearch optimization tips
To optimize OpenSearch in a Retrieval-Augmented Generation (RAG) setup, fine-tune indexing by enabling efficient mappings and reducing unnecessary stored fields. Use HNSW for vector search to speed up similarity queries while balancing recall and latency with appropriate ef_search and ef_construction values. Leverage shard and replica settings to distribute load effectively, and enable caching for frequent queries. Optimize text-based retrieval with BM25 tuning and custom analyzers for better relevance. Regularly monitor cluster health, index size, and query performance using OpenSearch Dashboards and adjust configurations accordingly.
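On the OpenSearch side, HNSW parameters live in the k-NN index mapping. A sketch of what such a mapping might look like (field names match the earlier setup; parameter values are illustrative, and higher ef_construction and m trade memory and indexing time for recall):
# Illustrative k-NN index mapping; dimension must match your embedding model.
knn_index_body = {
    "settings": {"index": {"knn": True, "knn.algo_param.ef_search": 100}},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 1024,
                "method": {
                    "name": "hnsw",
                    "space_type": "l2",
                    "engine": "nmslib",
                    "parameters": {"ef_construction": 128, "m": 24},
                },
            }
        }
    },
}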
Amazon Titan Text G1 optimization tips
To optimize Amazon Titan Text G1 in a RAG setup, ensure your retrieval pipeline delivers precise and well-structured context to leverage its advanced text generation capabilities. Use embedding models optimized for semantic search to retrieve the most relevant documents efficiently. Fine-tune document chunking to provide enough context without exceeding token limits. Experiment with prompt engineering techniques to guide the model toward accurate and relevant responses. Utilize caching for frequently asked queries to reduce API calls and improve latency. Adjust temperature and top-k sampling settings to balance response creativity and consistency. Monitor inference times and optimize query batching to enhance throughput while maintaining cost efficiency.
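With the llama-index Bedrock wrapper used in Step 2, generation settings can be passed at construction time. A minimal sketch with illustrative values:
# Lower temperature makes answers more deterministic for factual RAG queries.
llm = Bedrock(
    model="amazon.titan-text-express-v1",
    profile_name=profile_name,
    temperature=0.2,
    max_tokens=512,
)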
Mistral Embed optimization tips
To optimize Mistral Embed in a RAG setup, preprocess text by removing redundant whitespace, special characters, and normalizing casing to reduce embedding noise. Use batch processing for bulk embeddings to leverage GPU parallelism. Fine-tune Mistral Embed on domain-specific data if retrieval accuracy is low. Reduce input sequence length via truncation or sliding windows for long documents. Cache frequent queries to save compute. Test different pooling strategies (mean, max) for sentence-level embeddings and normalize outputs to improve similarity scoring consistency.
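Batching is straightforward with the embedding client from Step 3. A minimal sketch that embeds several chunks in one call and L2-normalizes the results (the normalization is written in plain Python for illustration):
texts = ["Milvus is a vector database.", "OpenSearch supports k-NN search."]
# One batched call instead of one request per chunk.
vectors = embed_model.get_text_embedding_batch(texts)
# L2-normalize so dot product equals cosine similarity.
normalized = [[x / (sum(v * v for v in vec) ** 0.5) for x in vec] for vec in vectors]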
By implementing these tips across your components, you'll be able to enhance the performance and functionality of your RAG system, ensuring it’s optimized for both speed and accuracy. Keep testing, iterating, and refining your setup to stay ahead in the ever-evolving world of AI development.
RAG Cost Calculator: A Free Tool to Calculate Your Cost in Seconds
Estimating the cost of a Retrieval-Augmented Generation (RAG) pipeline involves analyzing expenses across vector storage, compute resources, and API usage. Key cost drivers include vector database queries, embedding generation, and LLM inference.
RAG Cost Calculator is a free tool that quickly estimates the cost of building a RAG pipeline, including chunking, embedding, vector storage/search, and LLM generation. It also helps you identify cost-saving opportunities and achieve up to 10x cost reduction on vector databases with the serverless option.
Calculate your RAG cost
What Have You Learned?
Wow, what an incredible journey we’ve had exploring the integration of cutting-edge technologies to build a Retrieval-Augmented Generation (RAG) system! In this tutorial, you learned how to seamlessly combine LlamaIndex as your framework, OpenSearch as your reliable vector database, Amazon Titan Text G1’s robust language model, and the Mistral embedding model to create a powerful RAG pipeline. Each component plays a critical role: LlamaIndex helps organize your data efficiently, OpenSearch allows for rapid retrieval of relevant information, and the LLM generates human-like responses based on that data. The Mistral embedding model enriches your vectors with deeper semantic understanding, ensuring that your RAG system isn’t just functional, but truly intelligent and capable of nuanced interactions.
What’s even more exciting are the optimization tips we shared, which can help enhance system performance, making your RAG application not only effective but also efficient. And don’t forget the bonus – a free RAG cost calculator to help you keep track of your resource usage! Now it’s time to harness the knowledge you’ve gained. Dive in and start building, optimizing, and innovating your own RAG applications! The tools and framework are at your fingertips, so unleash your creativity and let your ideas take flight. The future of intelligent applications is bright, and you’re at the forefront of this thrilling evolution! Let's get started!
Further Resources
🌟 In addition to this RAG tutorial, unleash your full potential with these incredible resources to level up your RAG skills.
- How to Build a Multimodal RAG | Documentation
- How to Enhance the Performance of Your RAG Pipeline
- Graph RAG with Milvus | Documentation
- How to Evaluate RAG Applications - Zilliz Learn
- Generative AI Resource Hub | Zilliz
We'd Love to Hear What You Think!
We’d love to hear your thoughts! 🌟 Leave your questions or comments below or join our vibrant Milvus Discord community to share your experiences, ask questions, or connect with thousands of AI enthusiasts. Your journey matters to us!
If you like this tutorial, show your support by giving our Milvus GitHub repo a star ⭐—it means the world to us and inspires us to keep creating! 💖