What is a Vector Database and How Does It Work?
A vector database stores, indexes, and searches vector embeddings generated by machine learning models for fast information retrieval and similarity search. This post explains how vector databases work, their key features and use cases, and the ecosystem.
Read the entire series
- Introduction to Unstructured Data
- What is a Vector Database and How Does It Work?
- Understanding Vector Databases: Compare Vector Databases, Vector Search Libraries, and Vector Search Plugins
- Introduction to Milvus Vector Database
- Milvus Quickstart: Install Milvus Vector Database in 5 Minutes
- Introduction to Vector Similarity Search
- Everything You Need to Know about Vector Index Basics
- Scalar Quantization and Product Quantization
- Hierarchical Navigable Small Worlds (HNSW)
- Approximate Nearest Neighbors Oh Yeah (Annoy)
- Choosing the Right Vector Index for Your Project
- DiskANN and the Vamana Algorithm
- Safeguard Data Integrity: Backup and Recovery in Vector Databases
- Dense Vectors in AI: Maximizing Data Potential in Machine Learning
- Integrating Vector Databases with Cloud Computing: A Strategic Solution to Modern Data Challenges
- A Beginner's Guide to Implementing Vector Databases
- Maintaining Data Integrity in Vector Databases
- From Rows and Columns to Vectors: The Evolutionary Journey of Database Technologies
- Decoding Softmax Activation Function
- Harnessing Product Quantization for Memory Efficiency in Vector Databases
- How to Spot Search Performance Bottleneck in Vector Databases
- Ensuring High Availability of Vector Databases
- Mastering Locality Sensitive Hashing: A Comprehensive Tutorial and Use Cases
- Vector Library vs Vector Database: Which One is Right for You?
- Maximizing GPT 4.x's Potential Through Fine-Tuning Techniques
- Deploying Vector Databases in Multi-Cloud Environments
- An Introduction to Vector Embeddings: What They Are and How to Use Them
Latest Update: Oct 22
Welcome back to Vector Database 101.
In the previous tutorial, we took a quick look at the ever-increasing amount of data generated daily. We then covered how these bits of data can be split into structured/semi-structured and unstructured data types, the differences between them, and how modern machine learning can be used to understand unstructured data through vector embeddings. Finally, we briefly touched on how this embedded data can be searched via approximate nearest neighbor (ANN) search.
With all of this information, it’s now clear that the ever-increasing amount of data requires a paradigm shift and a new category of database and data management system: the vector database.
What is a vector database?
So, first things first, what is a vector database? Before we explore the details of vector databases, let me give you a quick answer to this question and some key facts about them.
A vector database is a new type of database system that stores, indexes, and searches high-dimensional vector embeddings for fast information retrieval and semantic similarity search.
Vector databases are a key infrastructure component of the modern AI stack, particularly of Retrieval Augmented Generation (RAG), a technique that augments the output of large language models (LLMs) and addresses AI hallucinations by providing the LLM with external knowledge. Vector databases store this external knowledge and find and retrieve contextual information for the LLM to generate more accurate answers.
Vector databases are widely used for use cases and applications such as chatbots, recommendation systems, image/video/audio search, semantic search, and RAG.
Mainstream purpose-built vector databases include Milvus, Zilliz Cloud (fully managed Milvus), Qdrant, Weaviate, Pinecone, and Chroma.
Besides specialized vector databases, many traditional databases have added vector plugins capable of performing small-scale vector searches. Examples include Cassandra, MongoDB, and PostgreSQL (via pgvector).
To compare different vector databases, refer to this vector database comparison page.
To evaluate their performance, refer to this vector database benchmark page.
Vector databases vs traditional databases
What are the key differences between vector databases and traditional databases?
Traditional database systems, particularly relational databases, excel at managing structured data with predefined formats and executing precise search operations. In contrast, vector databases specialize in storing, indexing, and retrieving unstructured data types, such as images, audio, videos, and textual content, through high-dimensional numerical representations of data objects known as vector embeddings. Unlike traditional relational databases with rows and columns, data points in a vector database are represented by vectors with a fixed number of dimensions and are clustered based on similarity.
Vector databases perform semantic similarity searches (also called vector search) using techniques like Approximate Nearest Neighbor (ANN) search, which work by calculating the distance between vectors in a vector space.
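As a concrete illustration of "calculating the distance between vectors," here is a tiny NumPy sketch of three commonly used similarity metrics; the 4-dimensional vectors below are made up for illustration (real embeddings typically have hundreds or thousands of dimensions):

```python
import numpy as np

# Two toy 4-dimensional "embeddings" (values made up for illustration).
a = np.array([0.1, 0.9, 0.3, 0.4])
b = np.array([0.2, 0.8, 0.1, 0.5])

# Euclidean (L2) distance: smaller means more similar.
l2 = np.linalg.norm(a - b)

# Inner (dot) product: larger means more similar.
ip = np.dot(a, b)

# Cosine similarity: 1.0 means identical direction, -1.0 means opposite.
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"L2 distance: {l2:.4f}, inner product: {ip:.4f}, cosine: {cosine:.4f}")
```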
Vector databases have been widely adopted to build applications in various domains, including recommender systems, semantic searches, chatbots, retrieval augmented generation (RAG), anomaly detection, and tools for searching for similar images, videos, and audio content.
With the rise of artificial intelligence (AI) and LLMs like ChatGPT, vector databases have become a crucial infrastructure component of retrieval augmented generation (RAG) pipelines, a technique used to address LLM hallucinations.
How does a vector database work?
Vector databases like Milvus and Zilliz Cloud (fully managed Milvus) are purpose-built to store, process, index, and search vector embeddings. Most vector databases support mainstream indexes such as Hierarchical Navigable Small World (HNSW), Locality-Sensitive Hashing (LSH), and Product Quantization (PQ). In other words, vector databases operate mainly on vector embeddings and work closely with the machine learning models that transform data into those embeddings.
The diagram below shows how a vector database works. Here, we use Zilliz as an example.
1. A machine learning model, usually an embedding model, transforms all types of unstructured data into vector embeddings.
2. The vector embeddings are stored in Zilliz Cloud.
3. A user issues a query.
4. The same machine learning model converts the query into vector embeddings.
5. Zilliz Cloud conducts a vector search by comparing the distance between the query vector and the vectors stored in the database using an approximate nearest neighbor (ANN) algorithm, finding the Top-K results most relevant to the query.
6. Zilliz Cloud returns the results to the user.
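The following is a minimal sketch of this workflow in Python, assuming a locally running Milvus instance and the pymilvus `MilvusClient` API. The collection name `docs` and the `embed()` helper are made up for illustration; `embed()` stands in for a real embedding model such as an OpenAI text embedding model.

```python
import numpy as np
from pymilvus import MilvusClient


def embed(text: str) -> list[float]:
    # Placeholder for a real embedding model: returns a pseudo-random
    # 768-dimensional vector so the sketch runs end to end.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(768).tolist()


client = MilvusClient(uri="http://localhost:19530")  # or a Zilliz Cloud URI + API key

# Steps 1-2: create a collection and store embeddings of the unstructured data.
client.create_collection(collection_name="docs", dimension=768)
docs = ["vector databases store embeddings", "relational databases store rows"]
client.insert(
    collection_name="docs",
    data=[{"id": i, "vector": embed(d), "text": d} for i, d in enumerate(docs)],
)

# Steps 3-6: embed the user's query, run an ANN search, and return the Top-K results.
results = client.search(
    collection_name="docs",
    data=[embed("how does a vector database work?")],
    limit=3,
    output_fields=["text"],
)
print(results[0])
```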
Vector databases from 1000 feet
Guess how many curators it took to label the now-famous ImageNet dataset. Ready for the answer?
25000 people (that's a lot).
Being able to search across images, video, full text documents, audio, and other forms of unstructured data via their content rather than human-generated labels or tags is exactly what vector databases were meant to solve. When combined with powerful embedding models, these vector databases like Milvus have the ability to revolutionize e-commerce solutions, recommendation systems, semantic searches, computer security, pharmaceuticals, and many other industries.
Let’s think about it from a user perspective. What good is a piece of technology without strong usability and a good user API? In concert with the underlying technology, multi-tenancy and usability are also incredibly important attributes. Let’s list out all of the vector database features to look out for (many of these features overlap with those of databases for structured/semi-structured data):
Scalability and tunability
As the number of elements stored in a vector database grows into the hundreds of millions or billions, horizontal scaling across multiple nodes becomes paramount (scaling up by manually inserting sticks of RAM into a server rack every 3 months is no fun). Furthermore, differences in insert rate, query rate, and underlying hardware may result in different application needs, making overall system tunability mandatory for the best vector databases.
Multi-tenancy and data isolation
Supporting multiple users is an obvious feature for all database systems. However, going guns blazing and creating a new vector database for every new user will probably turn out poorly for everyone. Parallel to this notion is data isolation - the idea that any inserts, deletes, or queries made to one collection in a database should be invisible to the rest of the system unless the collection owner explicitly wishes to share the information.
A complete suite of APIs
A database without a full suite of APIs and SDKs across popular programming languages is, frankly speaking, not a real database. For example, Milvus maintains Python, Node.js, Go, and Java SDKs for communicating with and administering a Milvus vector database.
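For instance, here is a minimal sketch of administering a running Milvus instance through the Python SDK (pymilvus `MilvusClient`; method names may vary slightly across versions, and `docs` is the hypothetical collection from the earlier sketch):

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Basic administration via the Python SDK: inspect what the server currently holds.
print(client.list_collections())
print(client.describe_collection("docs"))
```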
An intuitive user interface/administrative console
User interfaces can help significantly reduce the learning curve associated with VectorDBs. These interfaces also expose new features and tools that would otherwise be inaccessible.
Phew. That was quite a bit of info, so we’ll summarize it right here: a vector database should have the following features: 1) scalability and tunability, 2) multi-tenancy and data isolation, 3) a complete suite of APIs, and 4) an intuitive user interface/administrative console. In the next two sections, we’ll follow up on this concept by comparing vector databases versus vector search libraries and vector search plugins, respectively.
How to Use a Vector Database with LLMs for Your GenAI applications?
A vector database is a fully managed, no-frills solution for storing, indexing, and searching across massive datasets of unstructured data. It leverages the power of vector embeddings generated by embedding models such as OpenAI text embedding models, the ResNet-50 image embedding model, and many other multimodal models.
Large language models (LLMs) are Generative AI models that can perform various natural language processing (NLP) tasks based on pre-trained knowledge. However, because they lack domain-specific knowledge, LLMs are prone to hallucinations. Vector databases are a vital technology that can address this hallucination issue by providing LLMs with domain-specific, up-to-date, or confidential private data.
How to use a vector database with LLMs?
Let’s use the Zilliz Cloud vector database as an example. Zilliz Cloud stores domain-specific, up-to-date, and confidential private data outside LLMs in the form of vector embeddings. When a user asks a question, Zilliz Cloud transforms the question into vectors and then performs ANN searches for the Top-K results most relevant to the question by comparing the spatial distance between the query vectors and those stored in the vector database. This measurement can be based on various similarity metrics, such as dot product, cosine similarity, or Euclidean distance (L2). Furthermore, if you store any metadata with your data, you can fine-tune your results with a hybrid search. Finally, these results are combined with the original question to create a prompt that provides comprehensive context for the LLM.
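A hedged sketch of such a hybrid (filtered) search, reusing the hypothetical `docs` collection and `embed()` helper from the earlier sketch and assuming each entity was inserted with a scalar `year` field:

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Combine vector similarity with a metadata filter expression.
results = client.search(
    collection_name="docs",
    data=[embed("vector database architecture")],
    filter="year >= 2023",          # scalar filter applied alongside the ANN search
    limit=5,
    output_fields=["text", "year"],
)
```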
This framework, which includes a vector database, an LLM, and Prompts as code, is also known as retrieval augmented generation (RAG) and serves as the foundation for developing LLM-powered applications.
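Putting the pieces together, here is a simplified RAG sketch that reuses the hypothetical `embed()` helper and `docs` collection from above; `call_llm()` is a placeholder for whichever LLM client you use:

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")
question = "How does a vector database reduce LLM hallucinations?"

# 1. Retrieve the Top-K chunks most similar to the question.
hits = client.search(
    collection_name="docs",
    data=[embed(question)],
    limit=3,
    output_fields=["text"],
)
context = "\n".join(hit["entity"]["text"] for hit in hits[0])

# 2. Combine the retrieved context with the original question into a prompt.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)

# 3. Send the prompt to an LLM (placeholder call; swap in your own client).
answer = call_llm(prompt)
```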
Technical challenges of vector databases
Earlier in this tutorial, I listed the desired features a vector database should implement, before comparing vector databases to vector search libraries and vector search plugins. Now, let’s briefly go over some high-level technical challenges of vector databases. In future tutorials, we’ll provide an overview of how Milvus tackles each of these, in addition to how these technical decisions improve Milvus’ performance over other open-source vector databases.
Picture an airplane. The airplane itself contains a number of interconnected mechanical, electrical, and embedded systems, all working in harmony to provide us with a smooth and pleasurable in-flight experience. Likewise, a vector database is composed of a number of evolving software components. Roughly speaking, these can be broken down into the storage, the index, and the service. Although these three components are tightly integrated[1], companies such as Snowflake have shown the broader storage industry that "shared nothing" database architectures are arguably superior to the traditional "shared storage" cloud database models. Thus, the first technical challenge associated with these databases is designing a flexible and scalable data model.
Great, so we have a data model. What's next? With data already stored in a vector database, being able to search across that data store, i.e. vector indexing and querying, is the next important component. The compute-heavy nature of machine learning and multi-layer neural networks has allowed GPUs, NPUs/TPUs, FPGAs, and other purpose-built compute hardware to flourish. Vector indexing and querying is also compute-heavy, operating at maximum speed and efficiency when run on accelerators. This diverse set of compute resources gives way to the second main technical challenge: developing a heterogeneous computing architecture.
With a data model, query engine, and architecture in place, the last step is making sure your application can, well, read from the database - this ties closely into the API and user interface bullet points mentioned in the first section. While a new category of database necessitates a new architecture in order to extract maximal performance at minimal cost, the majority of vector database users are still acclimated to traditional CRUD operations (e.g. INSERT, SELECT, UPDATE, and DELETE in SQL). Therefore, the final primary challenge is developing a set of APIs and GUIs that leverage existing user interface conventions while maintaining compatibility with the underlying architecture.
Note how each of the three components corresponds to a primary technical challenge. With that being said, there is no one-size-fits-all vector database architecture. The best vector databases will fulfill all of these technical challenges by focusing on delivering the features mentioned in the first section.
Advantages of Vector Databases
Vector databases offer several advantages over traditional relational databases for use cases that involve vector similarity search, semantic search, machine learning, and AI applications. Here are some of the benefits of vector databases:
High-dimensional search: perform efficient similarity searches on high-dimensional vectors, which are commonly used in machine learning and Generative AI (GenAI) applications. Vector databases can quickly find the data points most similar to a given query, which is crucial for applications like recommendation engines, image recognition, and natural language processing.
Scalability: scale horizontally, efficiently storing and retrieving large amounts of high dimensional vectors. Scalability is significant for applications that require real-time search and retrieval of large amounts of data.
Flexibility with hybrid search: handle various vector data types, including sparse and dense vectors. They can also handle multiple data types, including numerical, text, and binary.
Performance: perform vector similarity searches efficiently, often providing faster search times than traditional databases.
Customizable indexing: allow custom indexing schemes for specific use cases and data types (see the indexing sketch after this list).
Overall, vector databases offer significant advantages for applications that involve similarity search, semantic search, complex data, and machine learning, providing fast and efficient search and retrieval of high-dimensional vector data in a vector space.
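As a sketch of what customizable indexing looks like in practice (using pymilvus `MilvusClient` against the hypothetical `docs` collection; exact parameter names vary by index type and client version):

```python
from pymilvus import MilvusClient

client = MilvusClient(uri="http://localhost:19530")

# Build an HNSW index with custom construction parameters on the vector field.
index_params = client.prepare_index_params()
index_params.add_index(
    field_name="vector",
    index_type="HNSW",
    metric_type="COSINE",
    params={"M": 16, "efConstruction": 200},
)
client.create_index(collection_name="docs", index_params=index_params)
```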
Which one is the fastest vector database? Benchmark them.
ANN-Benchmarks is a benchmarking environment for evaluating the performance of various vector databases and nearest neighbor search algorithms. The main functions of ANN-Benchmarks include the following:
- Dataset and parameter specification: The benchmark provides a variety of datasets of different sizes and dimensions, along with a set of parameters for each dataset, such as the number of neighbors to search for and the distance metric to use.
- Search recall calculation: The benchmark calculates the search recall, the proportion of queries for which the true nearest neighbors are found among the k returned neighbors. Search recall is a metric for evaluating the accuracy of nearest-neighbor search algorithms.
- QPS calculation: The benchmark also calculates the QPS (queries per second), the rate at which the vector database or search algorithm can process queries. This metric is vital for evaluating the speed and scalability of the system.
Using ANN-Benchmarks, users can compare the performance of different vector databases and search algorithms under a standardized set of conditions, making it easier to identify the most suitable option for a particular use case.
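To make the recall metric concrete, here is a tiny illustration (not the ANN-Benchmarks code itself) of how recall@k can be computed from ground-truth neighbors and the neighbors an ANN index actually returned:

```python
def recall_at_k(true_neighbors: list[set], returned: list[list], k: int) -> float:
    """Fraction of true nearest neighbors found in each query's top-k results,
    averaged over all queries."""
    total = 0.0
    for truth, result in zip(true_neighbors, returned):
        total += len(truth & set(result[:k])) / len(truth)
    return total / len(true_neighbors)


# Toy example: two queries, ground truth vs. what an ANN index returned.
truth = [{1, 2, 3}, {4, 5, 6}]
approx = [[1, 2, 9], [4, 7, 8]]
print(recall_at_k(truth, approx, k=3))  # (2/3 + 1/3) / 2 = 0.5
```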
Vector databases comparison
No matter what your semantic search use case is, your application will require storing lots of vector embeddings while being able to retrieve the most relevant vectors with low latency. You also want to choose a vector database that you can use long-term and that adheres to the compliance requirements of your target application.
When comparing a vector database to an alternative, you should consider these factors: architecture, scalability, performance, use cases, and costs. Each alternative database may have different strengths and weaknesses in these areas, so evaluating them based on specific requirements and priorities is essential. The following is a list of resources that will help you choose the right tool for your use case:
- Open Source Vector Database Comparison
- Vector Database benchmark
- Milvus vs Pinecone (and Zilliz vs Pinecone)
Wrapping up
In this tutorial, we took a quick tour of vector databases. Specifically, we looked at 1) what features go into a mature example, 2) how a vector database differs from vector search libraries, 3) how a vector database differs from vector search plugins in traditional databases or search systems, and 4) the key challenges associated with building a vector database.
This tutorial is not meant to be a deep dive, nor is it meant to show how it can be used in applications. Rather, the goal is to provide an overview. This is where your journey truly begins!
In the next tutorial, we’ll provide an introduction to Milvus, the world’s most popular open-source vector database:
- Provide a brief history of Milvus, including the most important question - where does the name come from!
- Cover how Milvus 1.0 differs from Milvus 2.0 and where the future of Milvus lies.
- Discuss the differences between Milvus and other VectorDBs such as Google Vertex AI’s Matching Engine.
- Go over some common vector database applications.
Take another look at the Vector Database 101 courses
- Introduction to Unstructured Data
- What is a Vector Database?
- Comparing Vector Databases, Vector Search Libraries, and Vector Search Plugins
- Introduction to Milvus
- Milvus Quickstart
- Introduction to Vector Similarity Search
- Vector Index Basics and the Inverted File Index
- Scalar Quantization and Product Quantization
- Hierarchical Navigable Small Worlds (HNSW)
- Approximate Nearest Neighbors Oh Yeah (ANNOY)
- Choosing the Right Vector Index for Your Project
- DiskANN and the Vamana Algorithm