Ensuring Secure and Permission-Aware RAG Deployments
In the fast-paced field of artificial intelligence, Retrieval Augmented Generation (RAG) has emerged as a powerful approach to enhancing the capabilities of generative models such as OpenAI’s GPT series and Google’s Gemini. However, with great potential comes significant responsibility, particularly when it comes to safeguarding sensitive data and ensuring compliance with privacy regulations.
As organizations increasingly rely on AI-driven solutions, understanding the security implications of these technologies is crucial. Implementing strong security measures that not only protect data but also build user trust is essential for production-ready RAG applications.
At a recent Unstructured Data Meetup hosted by Zilliz, Oz Wasserman, Co-Founder of Opsin, highlighted key security considerations for RAG deployments, emphasizing the importance of data anonymization, strong encryption, input/output validation, and robust access controls, among other critical security measures.
In this blog, we'll discuss the key aspects of secure and permission-aware RAG deployments. We'll also walk through an example notebook of a RAG pipeline using the Milvus vector database and LlamaIndex postprocessors designed to remove sensitive information, ensuring data privacy and compliance.
RAG Architecture Diagram
In his talk, Oz began by explaining a fundamental RAG architecture similar to the one shown in Figure 1. Essentially, a RAG system enhances a large language model (LLM) by integrating a vector database-powered knowledge base that stores documents for retrieving relevant content in response to a user's query. This approach improves accuracy, ensures greater contextual relevance, and minimizes hallucinations often seen in standalone LLM outputs.
However, this basic pipeline lacks specific security measures unless they are integrated into the various stages. Oz highlighted a RAG workflow chart (see Figure 2) by Ken Huang of DistributedApps.ai, showing where security controls can be implemented throughout the RAG pipeline:
Data Source/VectorDB Stage
Retrieval Stage
Generation Stage
Figure 1: Basic RAG Architecture
Figure 2: Detailed RAG Architecture (Author: Ken Huang)
Data Source/VectorDB Stage
Vector databases such as Milvus and Zilliz Cloud (the managed Milvus) store, index, and retrieve vector embeddings converted from unstructured data. They are critical repositories of valuable information. However, they can also become targets for data breaches or unauthorized access, making the implementation of robust protection strategies like encryption, access control, or data anonymization necessary.
Figure 3: Data Source/VectorDB Security Controls
The first security control is data anonymization. Data often contains sensitive personal information, commonly referred to as Personally Identifiable Information (PII), which must be anonymized to protect individual privacy. This step is mandatory before any data processing to ensure the information cannot be traced back to specific individuals.
Once the data has been anonymized, it can be indexed and embeddings can be generated to enable semantic search for retrieving relevant content. At this stage, it is important to define who can store data in, and retrieve data from, the vector database; in other words, who has access to it. Implementing strict access controls is essential to prevent unauthorized access, which could result in data manipulation or leakage.
Access control can be broken down into several stages:
Authentication: Verifying the user's identity, typically using methods like OAuth 2.0.
Authorization: Granting specific permissions and access rights to the user based on the verified identity.
Traceability: Monitoring access to ensure that any attempts to access data are logged and can be tracked, providing an audit trail for security compliance.
These measures help safeguard the vector database and ensure only authorized users can interact with sensitive data.
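As a minimal sketch of what these stages can look like in practice with Milvus (the user, password, and collection names below are illustrative, and the RBAC API may differ between pymilvus versions):
from pymilvus import MilvusClient
# Connect as an administrator to manage accounts (assumes authentication is enabled).
admin = MilvusClient(uri="http://localhost:19530", token="root:Milvus")
# Authentication: create a dedicated, named user for the RAG application.
admin.create_user(user_name="rag_app", password="a-strong-password")
# Authorization: grant a role read-only search rights on a single collection.
admin.create_role(role_name="reader")
admin.grant_privilege(
    role_name="reader",
    object_type="Collection",
    privilege="Search",
    object_name="secure_docs",
)
admin.grant_role(user_name="rag_app", role_name="reader")
# The application then connects with its own least-privileged credentials.
app = MilvusClient(uri="http://localhost:19530", token="rag_app:a-strong-password")
Server-side logs can then provide the audit trail needed for traceability.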
Another security layer can be added with encryption to make data unintelligible at rest (when stored) or in transit (when transmitted). Traditional encryption uses encryption keys, but more advanced techniques such as Differential Privacy or Decentralization & Sharding are also increasingly being adopted to enhance data security.
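For example, application-level encryption at rest can keep the raw chunk text unintelligible in storage while the embeddings remain searchable. Here is a minimal sketch using the cryptography library; in production, the key would come from a key management service rather than being generated inline:
from cryptography.fernet import Fernet
# Illustrative only: generate a key in place. Real deployments fetch it from a KMS.
key = Fernet.generate_key()
cipher = Fernet(key)
chunk_text = "Sensitive document content goes here."
# Encrypt the raw text before storing it alongside its embedding.
encrypted_payload = cipher.encrypt(chunk_text.encode("utf-8"))
# Decrypt only after retrieval, for authorized consumers.
decrypted_text = cipher.decrypt(encrypted_payload).decode("utf-8")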
Zilliz Cloud, the fully managed vector database service powered by Milvus, offers all of these essential data security measures. It also provides additional safeguards, such as Private Link to avoid exposure to the public internet, and backup and restore capabilities that ensure regular, secure backups and recovery of the knowledge base in case of data loss.
Figure 6: Zilliz Multi-layered Enterprise-grade Security (Source)
Retrieval Stage
The retrieval stage is another critical step where security concerns must be addressed. As in the previous stage, controlling access to the knowledge base through queries is essential. Additionally, during this stage, several security risks also need to be mitigated:
Query Validation: Validating queries is crucial to preventing prompt injection attacks. This approach ensures that user inputs do not exploit system vulnerabilities, potentially leading to unauthorized access or manipulation of the data.
Similarity Search Risks: It is also important to manage the risks associated with similarity searches. Proper measures should be in place to ensure that the similarity search does not inadvertently expose sensitive information or provide unauthorized access to restricted data.
Figure 7: Query Validation Step
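As a minimal sketch of query validation, a query can be checked against size limits and a deny-list of known injection phrases before it ever reaches the retriever (the patterns and limits below are illustrative, not an exhaustive defense):
import re
# Naive deny-list of phrases commonly seen in prompt injection attempts.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous|the) instructions",
    r"system prompt",
    r"reveal .*(password|credential|key)",
]
MAX_QUERY_LENGTH = 500
def validate_query(query: str) -> str:
    """Reject oversized or suspicious queries before retrieval."""
    if len(query) > MAX_QUERY_LENGTH:
        raise ValueError("Query exceeds the maximum allowed length.")
    lowered = query.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Query rejected by the injection filter.")
    return query
A deny-list alone is easy to bypass, so in practice it would be combined with an LLM-based or ML-based input classifier.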
Oz shared an example (see Figure 8) of prompt injection against an in-house model. It illustrates prompt manipulation, where the prompt is altered to trick the system into retrieving data it should not expose.
Figure 8: Example of Prompt Injection
In addition, Oz discussed several risks associated with similarity search, including:
Data Leakage: By manipulating similarity queries, attackers can influence the search mechanism to retrieve sensitive data indirectly.
Search Result Manipulation: Attackers could modify the search process to influence which results are retrieved, leading to potential exposure of restricted information.
Reconnaissance and Pattern Analysis: Attackers may analyze patterns in search queries and responses to map out the structure of the database and gain insights into the data stored.
Resource Exhaustion: Continuous or excessive querying could lead to denial-of-service conditions, exhausting system resources and reducing availability for other users.
We see that security techniques similar to those in the previous stage, such as access control and query validation, must also be implemented for the retrieval stage. Additionally, encryption in transit must be considered, as data is transmitted between components.
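As one concrete mitigation against the resource exhaustion risk above, a simple sliding-window rate limiter can cap how often each user may query the knowledge base (the class name and limits below are illustrative):
import time
from collections import defaultdict, deque
class QueryRateLimiter:
    """Allow at most max_queries per user within a sliding time window."""
    def __init__(self, max_queries: int = 20, window_seconds: float = 60.0):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)
    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        timestamps = self.history[user_id]
        # Drop timestamps that fall outside the sliding window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_queries:
            return False
        timestamps.append(now)
        return True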
Generation Stage
The final stage in the RAG pipeline is the generation stage, where the LLM generates responses based on the content retrieved from the vector database. Although the LLM is the central actor at this stage, security and compliance issues can arise depending on the nature of the data used to train the model. Some key risks include:
Data Privacy Violations: The LLM may inadvertently reveal sensitive or private information if trained on data that includes personally identifiable information (PII) or other confidential content. This could result in breaches of privacy regulations such as GDPR or HIPAA.
Output Manipulation: Attackers could manipulate the LLM's output by influencing the input queries, leading to the generation of malicious or misleading content. This is particularly concerning in environments where outputs are trusted without verification.
Bias and Offensive Content: If the training data contains biased or offensive elements, the LLM might produce biased or offensive responses, which could lead to legal liabilities, especially in production environments.
Addressing these risks requires careful post-processing of the generated content, including techniques like:
Content Filtering: Automatically filtering outputs to detect and block sensitive, biased, or offensive content.
Output Validation: Validating the model's response to ensure it complies with security policies and regulatory standards before delivering it to the user.
Model Fine-tuning: Ensuring the model is fine-tuned with regulatory-compliant data and applying reinforcement learning strategies to avoid harmful outputs.
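As a minimal sketch of the output validation step, the generated response can be scanned for residual PII patterns and redacted before it is returned to the user (the regular expressions below are simplified examples, not production-grade detectors):
import re
# Simplified patterns for PII that should never appear in a response.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ip_address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}
def validate_output(response: str) -> str:
    """Redact residual PII before the response is shown to the user."""
    for label, pattern in PII_PATTERNS.items():
        response = pattern.sub(f"[REDACTED_{label.upper()}]", response)
    return response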
By incorporating these strategies, the generation stage becomes more secure and reliable, adding a final layer of defense to the pipeline.
Let’s now dig into an example of data anonymization that compares different LlamaIndex postprocessors and demonstrates how a data privacy violation can occur.
Secure RAG Using LlamaIndex and Milvus
The following notebook is an example of a RAG pipeline built with LlamaIndex as the LLM framework, Milvus as the vector database, and three different modules specialized in PII masking: one using a Hugging Face NER model, another using an LLM (OpenAI), and a third using Presidio, a Microsoft library.
You can also check the full code in this Colab notebook.
Step 1: Set up Environmental Variables
We need an OpenAI key to test the module using OpenAI models.
from google.colab import userdata
import os
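# Read the key from Colab's secret storage so the OpenAI client can find it.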
os.environ["OPENAI_API_KEY"] = userdata.get('OPENAI_API_KEY')
Step 2: Define Text with Private Data
First, we define a short text that contains private information like credit card numbers, names, or birth dates.
from llama_index.core.schema import TextNode
text = """
Hi, I'm Sarah Mitchell, and I just got a new credit card with the number 3714-496089-47322.
My personal email is sarah.mitchell@mailbox.com, and I'm currently based in Sydney.
By the way, I tried paying my utility bill with card number 6011-5832-9109-1726, but it didn't work.
For my bank transactions, I use this IBAN: NL91ABNA0417164300.
Also, can you help me with my Wi-Fi issues? I keep getting blocked by IP address 203.0.113.15.
I've shared a family photo on my personal blog at https://www.sarahs-lifediary.org/.
Oh, and my grandfather, George Stone, was born in 1921, while my grandmother, Emily Clarkson, was born in 1925.
Last question--what's the spending limit on my main card, the one ending in 8473?
"""
node = TextNode(text=text)
Step 3: NER Model for PII Masking: NERPIINodePostprocessor
NERPIINodePostprocessor is a LlamaIndex module that masks sensitive information using a Hugging Face model specializing in NER (named entity recognition).
from llama_index.core.postprocessor import NERPIINodePostprocessor
from llama_index.core.schema import TextNode, NodeWithScore
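# Instantiate the NER-based masker; it relies on a default Hugging Face NER pipeline (downloaded on first use).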
processor = NERPIINodePostprocessor()
new_nodes = processor.postprocess_nodes([NodeWithScore(node=node)])
print(new_nodes[0].node.get_text())
We see that this model masks some information, but much of the private data remains visible: the credit card numbers, email address, IBAN, and IP address are all untouched. Using this approach alone would result in data leakage, so another model or approach should be used.
"""
Output:
Hi, I'm [PER_9], and I just got a new credit card with the number 3714-496089-47322.
My personal email is sarah.mitchell@mailbox.com, and I'm currently based in [LOC_169].
By the way, I tried paying my utility bill with card number 6011-5832-9109-1726, but it didn't work.
For my bank transactions, I use this IBAN: NL91ABNA0417164300.
Also, can you help me with my Wi-[MISC_374] issues? I keep getting blocked by IP address 203.0.113.15.
I've shared a family photo on my personal blog at https://www.sarahs-lifediary.org/.
Oh, and my grandfather, [PER_545], was born in 1921, while my grandmother, [PER_599], was born in 1925.
Last question--what's the spending limit on my main card, the one ending in 8473?
"""
Step 4: LLM for PII Masking: PIINodePostprocessor
PIINodePostprocessor is a LlamaIndex module that uses an LLM to mask sensitive information. To test the effectiveness of this approach, we will use the default OpenAI model.
from llama_index.core.postprocessor import PIINodePostprocessor
from llama_index.core.schema import TextNode, NodeWithScore
from llama_index.llms.openai import OpenAI
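# The default OpenAI LLM is prompted to find PII spans and replace them with typed placeholders.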
processor = PIINodePostprocessor(llm=OpenAI())
new_nodes = processor.postprocess_nodes([NodeWithScore(node=node)])
print(new_nodes[0].node.get_text())
With this approach, the model recognizes all the sensitive information and masks it accordingly.
"""
Output:
Hi, I'm [NAME1] [NAME2], and I just got a new credit card with the number [CREDIT_CARD_NUMBER1].
My personal email is [EMAIL], and I'm currently based in [CITY].
By the way, I tried paying my utility bill with card number [CREDIT_CARD_NUMBER2], but it didn't work.
For my bank transactions, I use this IBAN: [IBAN].
Also, can you help me with my Wi-Fi issues? I keep getting blocked by IP address [IP_ADDRESS].
I've shared a family photo on my personal blog at [URL].
Oh, and my grandfather, [NAME3] [NAME4], was born in [DATE1], while my grandmother, [NAME5] [NAME6], was born in [DATE2].
Last question--what's the spending limit on my main card, the one ending in [CREDIT_CARD_ENDING].
"""
Step 5: Presidio for PII Masking
Finally, we test Presidio, a Microsoft library that masks sensitive information using a spaCy model specialized in NER.
from llama_index.postprocessor.presidio import PresidioPIINodePostprocessor
from llama_index.core.schema import TextNode, NodeWithScore
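# Presidio detects PII with spaCy-based recognizers and replaces each match with a typed tag.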
processor = PresidioPIINodePostprocessor()
new_nodes = processor.postprocess_nodes([NodeWithScore(node=node)])
print(new_nodes[0].node.get_text())
This time, we see that the model masks most of the information, but it mislabels segments of the credit card numbers: part of the first is tagged as a US driver's license and part of the second as an Indian PAN, leaving the surrounding digits exposed, and the card ending in 8473 is not masked at all. In the next step, we will run several queries against this output and show how data privacy violations are possible because parts of the credit card numbers remain visible.
"""
Output:
Hi, I'm <PERSON_3>, and I just got a new credit card with the number 3714-<US_DRIVER_LICENSE_1>-47322.
My personal email is <EMAIL_ADDRESS_1>, and I'm currently based in <LOCATION_1>.
By the way, I tried paying my utility bill with card number <IN_PAN_1>9109-1726, but it didn't work.
For my bank transactions, I use this IBAN: <IBAN_CODE_1>.
Also, can you help me with my Wi-Fi issues? I keep getting blocked by IP address <IP_ADDRESS_1>.
I've shared a family photo on my personal blog at <URL_1>
Oh, and my grandfather, <PERSON_2>, was born in <DATE_TIME_2>, while my grandmother, <PERSON_1>, was born in <DATE_TIME_1>.
Last question--what's the spending limit on my main card, the one ending in 8473?
"""
Step 6: Retrieval using Milvus
First, let’s create a Milvus vector database to store our text.
from llama_index.core import VectorStoreIndex, StorageContext
from llama_index.vector_stores.milvus import MilvusVectorStore
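# Milvus Lite persists to a local file; dim=1536 matches OpenAI's default embedding dimension.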
vector_store = MilvusVectorStore(
uri="./milvus_demo.db", dim=1536, overwrite=True
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
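# Embed the masked nodes (default OpenAI embedding model) and index them in Milvus.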
index = VectorStoreIndex([n.node for n in new_nodes], storage_context=storage_context)
Next, let’s test some queries.
This first query works properly. It provides us with the correct answer and keeps the information masked.
response = index.as_query_engine().query(
"What is the name of the person?"
)
print(str(response))
"""
Output:
The name of the person is <PERSON_3>.
"""
Now we try to get the credit card number, and we retrieve it almost in full, with only the middle segment masked as a driver's license.
response = index.as_query_engine().query(
"What is the number of the credit card?"
)
print(str(response))
"""
Output:
The number of the credit card is 3714-<US_DRIVER_LICENSE_1>-47322.
"""
It is also known that every credit card number begins with an Issuer Identification Number (IIN): the first digits identify the issuer, and in this case, 37 indicates American Express. Even if you don't know this, you can check it by asking the model.
response = index.as_query_engine().query(
"What is the issuer of the credit card number?"
)
print(str(response))
"""
Output:
The issuer of the credit card number is American Express
"""
So, we see that the model can identify the card issuer, even though this information was never provided in the text. The model's training data evidently included knowledge of issuers and their IIN prefixes, allowing it to answer from its own knowledge. This is an example of the Data Privacy Violations risk in the generation stage, described above.
Conclusion
Oz walked us through the various stages of a RAG pipeline and highlighted where security concerns can be addressed. The first key concern involves data anonymization, ensuring that sensitive data cannot be accessed by unauthorized third parties. However, it's important to remember that LLM foundation models could have been trained on this data, making the models themselves potential sources of data leakage.
To secure data access and transmission, access control and encryption are critical. These techniques provide a high level of security and control, but threats like prompt injection and search manipulation still represent risks to sensitive information.
Therefore, adding multiple security layers throughout the pipeline is essential to mitigate potential threats. A production-ready RAG application must integrate robust security measures at each stage to ensure the system is resilient against attacks and compliant with regulatory standards.