Build a RAG Chatbot with LangChain, the LangChain Vector Store, Anthropic Claude 3 Haiku, and IBM granite-embedding-278m-multilingual
Introduction to RAG
Retrieval-Augmented Generation (RAG) is a game-changer for GenAI applications, especially in conversational AI. It combines the power of pre-trained large language models (LLMs) like OpenAI’s GPT with external knowledge sources stored in vector databases such as Milvus and Zilliz Cloud, allowing for more accurate, contextually relevant, and up-to-date response generation. A RAG pipeline usually consists of four basic components: a vector database, an embedding model, an LLM, and a framework.
Key Components We'll Use for This RAG Chatbot
This tutorial shows you how to build a simple RAG chatbot in Python using the following components:
- LangChain: An open-source framework that helps you orchestrate the interaction between LLMs, vector stores, embedding models, and more, making it easier to assemble a RAG pipeline.
- LangChain in-memory vector store: an ephemeral vector store that keeps embeddings in memory and performs an exact, linear search for the most similar embeddings (see the sketch after this list for what that search looks like under the hood). The default similarity metric is cosine similarity, but it can be changed to any of the similarity metrics supported by ml-distance. It is intended for demos and does not yet support IDs or deletion. (If you want a much more scalable solution for your apps or even enterprise projects, we recommend using Zilliz Cloud, a fully managed vector database service built on the open-source Milvus that offers a free tier supporting up to 1 million vectors.)
- Anthropic Claude 3 Haiku: The fastest and most compact model in Anthropic's Claude 3 family, with a focus on safety and alignment, capable of generating coherent and context-aware text. It excels in conversational AI, creative writing, and insightful summarization, making it ideal for engaging content that adheres to ethical standards and user intent.
- IBM granite-embedding-278m-multilingual: This advanced AI model specializes in generating multilingual text embeddings, making it highly effective for tasks such as cross-linguistic information retrieval and translation. With its strength in understanding diverse languages, it excels in applications involving global datasets and multilingual customer engagement.
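Under the hood, "exact, linear search" with cosine similarity just means scoring the query embedding against every stored embedding and keeping the top matches. Here is a minimal, self-contained sketch of that idea; the names (stored, linear_search) are illustrative, not the actual InMemoryVectorStore internals:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the two vectors divided by their norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def linear_search(query: np.ndarray, stored: list, k: int = 4):
    # Score every stored vector against the query, then keep the top k.
    scores = [cosine_similarity(query, v) for v in stored]
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    return [(i, scores[i]) for i in top]

stored = [np.array([1.0, 0.0]), np.array([0.7, 0.7]), np.array([0.0, 1.0])]
print(linear_search(np.array([1.0, 0.1]), stored, k=2))  # closest vectors first

This brute-force scan is simple and accurate, but its cost grows linearly with the number of vectors, which is exactly why dedicated vector databases use approximate indexes at scale.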
By the end of this tutorial, you’ll have a functional chatbot capable of answering questions based on a custom knowledge base.
Note: Since we may use proprietary models in our tutorials, make sure you have the required API key beforehand.
Step 1: Install and Set Up LangChain
%pip install --quiet --upgrade langchain-text-splitters langchain-community langgraph
Step 2: Install and Set Up Anthropic Claude 3 Haiku
pip install -qU "langchain[anthropic]"
import getpass
import os
if not os.environ.get("ANTHROPIC_API_KEY"):
    os.environ["ANTHROPIC_API_KEY"] = getpass.getpass("Enter API key for Anthropic: ")
from langchain.chat_models import init_chat_model
llm = init_chat_model("claude-3-haiku-20240307", model_provider="anthropic")
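Before moving on, it's worth a quick sanity check that the key works. This is just a throwaway test call, not part of the pipeline:

# Quick smoke test: if the API key is set correctly, this prints a short reply.
print(llm.invoke("Say hello in one short sentence.").content)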
Step 3: Install and Set Up IBM granite-embedding-278m-multilingual
pip install -qU langchain-ibm
import getpass
import os
if not os.environ.get("WATSONX_APIKEY"):
    os.environ["WATSONX_APIKEY"] = getpass.getpass("Enter API key for IBM watsonx: ")
from langchain_ibm import WatsonxEmbeddings
embeddings = WatsonxEmbeddings(
    model_id="ibm/granite-embedding-278m-multilingual",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="<WATSONX PROJECT_ID>",
)
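As with the LLM, a quick test call confirms the credentials and model ID are right. granite-embedding-278m-multilingual produces 768-dimensional vectors, so the printed length should be 768:

# Quick smoke test: embed a short query and check the vector dimension.
vec = embeddings.embed_query("Hello, world!")
print(len(vec))  # expect 768 for this model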
Step 4: Install and Set Up LangChain vector store
pip install -qU langchain-core
from langchain_core.vectorstores import InMemoryVectorStore
vector_store = InMemoryVectorStore(embeddings)
Step 5: Build a RAG Chatbot
Now that you’ve set up all the components, let’s build a simple chatbot. We’ll use the Milvus introduction doc as a private knowledge base. You can replace it with your own dataset to customize your RAG chatbot.
import bs4
from langchain import hub
from langchain_community.document_loaders import WebBaseLoader
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langgraph.graph import START, StateGraph
from typing_extensions import List, TypedDict
# Load and chunk the contents of the Milvus overview page
loader = WebBaseLoader(
    web_paths=("https://milvus.io/docs/overview.md",),
    bs_kwargs=dict(
        # Only parse the main documentation content, skipping nav and footer
        parse_only=bs4.SoupStrainer(class_="doc-style doc-post-content")
    ),
)
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
all_splits = text_splitter.split_documents(docs)
# Index chunks
_ = vector_store.add_documents(documents=all_splits)
# Define prompt for question-answering
prompt = hub.pull("rlm/rag-prompt")
# Define state for application
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str
# Define application steps
def retrieve(state: State):
    # Fetch the chunks most similar to the question from the vector store
    retrieved_docs = vector_store.similarity_search(state["question"])
    return {"context": retrieved_docs}

def generate(state: State):
    # Stuff the retrieved chunks into the prompt and ask the LLM
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}
# Compile application and test
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()
Test the Chatbot
And that's it! You've built your own chatbot. Let's ask it a question.
response = graph.invoke({"question": "What data types does Milvus support?"})
print(response["answer"])
Example Output
Milvus supports various data types including sparse vectors, binary vectors, JSON, and arrays. Additionally, it handles common numerical and character types, making it versatile for different data modeling needs. This allows users to manage unstructured or multi-modal data efficiently.
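If you want to watch the pipeline work step by step, LangGraph can also stream each node's output as it completes. A minimal sketch using the graph we just compiled:

# Stream intermediate updates: first the "retrieve" node's output, then "generate".
for step in graph.stream(
    {"question": "What data types does Milvus support?"}, stream_mode="updates"
):
    print(step)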
Optimization Tips
As you build your RAG system, optimization is key to ensuring peak performance and efficiency. While setting up the components is an essential first step, fine-tuning each one will help you create a solution that works even better and scales seamlessly. In this section, we’ll share some practical tips for optimizing all these components, giving you the edge to build smarter, faster, and more responsive RAG applications.
LangChain optimization tips
To optimize LangChain, focus on minimizing redundant operations in your workflow by structuring your chains and agents efficiently. Use caching to avoid repeated computations and speed up your system, and experiment with modular design so that components like models or databases can easily be swapped out. This gives you both flexibility and efficiency, letting you scale your system quickly without unnecessary delays or complications.
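For example, LangChain ships a global LLM cache that returns a stored response when it sees a prompt it has already answered. A minimal sketch (the in-memory cache is process-local; use a persistent cache for anything longer-lived):

from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache

# Identical prompts now skip the API call and return the cached response.
set_llm_cache(InMemoryCache())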
LangChain in-memory vector store optimization tips
The LangChain in-memory vector store is just an ephemeral store that keeps embeddings in memory and does an exact, linear search for the most similar embeddings. It has very limited features and is only intended for demos. If you plan to build a functional or even production-level solution, we recommend using Zilliz Cloud, a fully managed vector database service built on the open-source Milvus that offers a free tier supporting up to 1 million vectors.
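Because every LangChain vector store shares the same interface, upgrading is a small change. A minimal sketch, assuming the langchain-milvus package (pip install -qU langchain-milvus); the local-file URI below uses Milvus Lite, and a Zilliz Cloud endpoint plus credentials would go in connection_args for a managed deployment:

from langchain_milvus import Milvus

# Drop-in replacement for InMemoryVectorStore: same embeddings, same documents.
vector_store = Milvus.from_documents(
    documents=all_splits,
    embedding=embeddings,
    connection_args={"uri": "./milvus_demo.db"},  # Milvus Lite local file
)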
Anthropic Claude 3 Haiku optimization tips
Claude 3 Haiku is designed for efficiency, making it a great choice for low-latency RAG applications. Optimize token usage by structuring prompts concisely, removing redundant text, and leveraging system messages effectively to guide responses. Use function calling when applicable to offload structured processing tasks and improve response reliability. Batch process queries where possible to reduce API overhead and enhance throughput. If latency is critical, consider caching frequent queries and pre-generating responses for common questions. Fine-tune response control with temperature and top-p sampling; lower temperature values (e.g., 0.2-0.3) help maintain consistency in factual retrieval tasks. Use streaming mode for real-time applications to get faster partial responses while processing large prompts. Regularly evaluate and adjust model parameters based on performance benchmarks to balance speed and accuracy in your RAG pipeline.
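Two of these knobs are easy to apply to the model we initialized in Step 2. A minimal sketch; the temperature value and prompt are illustrative:

# Lower temperature for more consistent answers in factual retrieval tasks.
llm = init_chat_model(
    "claude-3-haiku-20240307", model_provider="anthropic", temperature=0.2
)

# Streaming mode: print partial tokens as they arrive rather than waiting
# for the full response.
for chunk in llm.stream("Summarize what Milvus is in two sentences."):
    print(chunk.content, end="", flush=True)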
IBM granite-embedding-278m-multilingual optimization tips
To optimize the IBM granite-embedding-278m-multilingual model for your Retrieval-Augmented Generation (RAG) setup, consider fine-tuning it on domain-specific data relevant to your use case, which improves embedding accuracy. Process queries in mini-batches to balance memory efficiency and speed, and make sure you leverage GPU acceleration. Implement a caching mechanism for frequently accessed documents to reduce retrieval latency, and experiment with different similarity metrics to find the most effective one for your data. Regularly monitor performance and iterate on hyperparameters such as learning rates and embedding dimensions to further enhance your retrieval capabilities.
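For the caching idea, LangChain's CacheBackedEmbeddings wraps any embedding model with a key-value store, so chunks that were already embedded are never sent to the API again. A minimal sketch, assuming a local file store; the namespace string is arbitrary:

from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore

# Embeddings are keyed by a hash of the text; repeated chunks hit the cache
# instead of calling the watsonx API again.
store = LocalFileStore("./embedding_cache/")
cached_embeddings = CacheBackedEmbeddings.from_bytes_store(
    embeddings, store, namespace="granite-278m-multilingual"
)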
By implementing these tips across your components, you'll be able to enhance the performance and functionality of your RAG system, ensuring it’s optimized for both speed and accuracy. Keep testing, iterating, and refining your setup to stay ahead in the ever-evolving world of AI development.
RAG Cost Calculator: A Free Tool to Calculate Your Cost in Seconds
Estimating the cost of a Retrieval-Augmented Generation (RAG) pipeline involves analyzing expenses across vector storage, compute resources, and API usage. Key cost drivers include vector database queries, embedding generation, and LLM inference.
RAG Cost Calculator is a free tool that quickly estimates the cost of building a RAG pipeline, including chunking, embedding, vector storage/search, and LLM generation. It also helps you identify cost-saving opportunities and achieve up to 10x cost reduction on vector databases with the serverless option.
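For a rough feel of the arithmetic the calculator automates, here is a back-of-envelope sketch. Every number below is an assumed placeholder, not a real quote; substitute your own traffic figures and your providers' current prices:

# Back-of-envelope RAG cost estimate. All unit prices are ASSUMED placeholders.
queries_per_month = 100_000
tokens_per_query = 1_500              # prompt + retrieved context + answer
embed_tokens_per_month = 5_000_000    # re-embedding new or updated documents

llm_price_per_1k = 0.0005             # assumed placeholder, USD per 1K tokens
embed_price_per_1k = 0.0001           # assumed placeholder, USD per 1K tokens

llm_cost = queries_per_month * tokens_per_query / 1_000 * llm_price_per_1k
embed_cost = embed_tokens_per_month / 1_000 * embed_price_per_1k
print(f"LLM: ${llm_cost:.2f}/month, embeddings: ${embed_cost:.2f}/month")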
Calculate your RAG cost
What Have You Learned?
By now, you’ve seen how LangChain acts as the ultimate orchestrator, seamlessly tying together the powerful components of your RAG pipeline! You learned to leverage LangChain’s vector store to efficiently index and retrieve context using embeddings generated by IBM’s granite-embedding-278m-multilingual model, which shines at capturing nuanced meaning across multiple languages. Pairing this with Anthropic Claude 3 Haiku—a lightning-fast, cost-effective LLM—you’ve unlocked the ability to generate precise, context-aware answers by feeding retrieved data directly into the model’s prompt. This combo transforms raw information into actionable insights, whether you’re building multilingual chatbots, research assistants, or dynamic Q&A systems. Plus, those pro tips on optimizing chunking strategies and indexing methods? They’re game-changers for balancing speed, accuracy, and cost in real-world applications. And don’t forget the free RAG cost calculator—your secret weapon for budgeting smarter as you scale!
But this is just the beginning! You’re now equipped with the tools to create systems that don’t just answer questions but understand context deeply. Imagine enhancing your pipeline with hybrid search, custom metadata filters, or even fine-tuning embeddings for niche domains. The flexibility of LangChain means you can experiment fearlessly, swapping models or databases as your needs evolve. So go ahead—take what you’ve built, tweak it, and watch your ideas come to life. Whether you’re optimizing for enterprise use or tinkering with personal projects, you’ve got the foundation to innovate. The future of intelligent applications is in your hands… now go make it awesome! 🚀
Further Resources
🌟 In addition to this RAG tutorial, unleash your full potential with these incredible resources to level up your RAG skills.
- How to Build a Multimodal RAG | Documentation
- How to Enhance the Performance of Your RAG Pipeline
- Graph RAG with Milvus | Documentation
- How to Evaluate RAG Applications - Zilliz Learn
- Generative AI Resource Hub | Zilliz
We'd Love to Hear What You Think!
We’d love to hear your thoughts! 🌟 Leave your questions or comments below or join our vibrant Milvus Discord community to share your experiences, ask questions, or connect with thousands of AI enthusiasts. Your journey matters to us!
If you like this tutorial, show your support by giving our Milvus GitHub repo a star ⭐—it means the world to us and inspires us to keep creating! 💖