Driven by the widespread adoption of ChatGPT and other LLMs, vector databases saw a rise in popularity in 2023.
Although many Zilliz Cloud customers use our service in some sort of retrieval-augmented generation (RAG) system, we've also seen adoption across various search and retrieval systems. This is not an anomaly; broadly speaking, Zilliz Cloud is meant to be a way for computers to truly understand human-generated data, from text and images to bank transactions and user behaviors.
Today, we're taking the next step in our evolution by adding a slew of new features to Zilliz Cloud - range search, multi-tenancy & RBAC, up to 10x improved search & indexing performance, and much more. But why did we decide to introduce these features? The answer is simple: our users demanded it. The applications they're building and the engineering challenges they are tackling necessitate a purpose-built vector database that supports essential database features and a variety of workloads. For the remainder of this post, I'll dive into three real-world customer use cases where the new features of Zilliz Cloud were not just beneficial, but critical in the development and success of our users' applications.
Let's dive in.
Efficient autonomous agents
Memory - bits and pieces of text injected into an LLM's context window to give it historical information - is arguably one of the most important components of an autonomous agent (virtual character). Context windows are severely limited for three reasons: 1) they are finite, 2) long contexts significantly slow text generation, and 3) most long-context LLMs tend to "remember" only information near the beginning and end of the context window. Vector databases address all three problems.
Let's take a customer service bot or support agent as an example. In this application, many different pieces of information - the original knowledge base, all of the user's prompts, images uploaded by users, audio snippets, all of the agent's responses, and so on - must be stored in Zilliz Cloud to enable fast retrieval. Whenever a customer types a message, all relevant content from the knowledge base and prior conversations must be retrieved, making this workload both read-heavy and write-heavy, with large volumes of message data written every second.
To make this problem even more challenging, the data fed to this customer service agent is inherently multi-modal; text search simply isn't enough. Zilliz's distributed database architecture already supports this at a massive scale - increase query node count for higher read throughput and data node count for higher write throughput - but our new Cardinal vector search engine was the ultimate deciding factor. Cardinal includes our own proprietary vector index, a slew of compute optimizations at the machine code level, and cache-aware algorithms, among many other performance optimizations. Long story short, for this particular application, Zilliz Cloud ended up enabling the same search and indexing throughput of other vector databases at less than a third of the price point. Where the annual cost was prohibitive for other vector search solutions, we stepped in and made this autonomous agent a reality.
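To make the memory loop above concrete, here is a minimal sketch of what an agent's memory layer does: every message is embedded and written on arrival, and each new customer query triggers a top-k similarity search whose hits get injected into the context window. Everything here is illustrative - `embed` is a deterministic stand-in for a real embedding model (it produces no meaningful semantics), and in production the vectors would live in a Zilliz Cloud collection rather than a NumPy array.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Placeholder for a real text-embedding model: deterministic,
    # unit-length vectors derived from a hash of the text.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

class MemoryStore:
    """In-memory stand-in for a vector database collection."""

    def __init__(self):
        self.texts, self.vectors = [], []

    def add(self, text: str) -> None:
        # Write path: one insert per message as the conversation unfolds.
        self.texts.append(text)
        self.vectors.append(embed(text))

    def recall(self, query: str, k: int = 2) -> list[str]:
        # Read path: cosine similarity (vectors are unit-length, so a dot
        # product suffices) against every stored message, then top-k.
        sims = np.stack(self.vectors) @ embed(query)
        return [self.texts[i] for i in np.argsort(sims)[::-1][:k]]

store = MemoryStore()
for msg in ["My order #123 arrived damaged",
            "I was refunded for order #123",
            "How do I reset my password?"]:
    store.add(msg)

# Retrieve the k most relevant memories to inject into the context window.
context = store.recall("status of my damaged order", k=2)
```

The point of the sketch is the shape of the workload: writes and reads interleave on every turn, which is why scaling query nodes and data nodes independently matters.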
Product recommendation
Recommender systems are designed to push a variety of content, e.g., products, news, user content, etc., based on a consumer's previous viewing or browsing history. Vector databases are perfect for this kind of application - by vectorizing and storing each individual item in Zilliz Cloud, recommendations can be performed simply by calling collection.search. The resulting nearest neighbors are themselves the most relevant recommendations.
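As a back-of-the-envelope view of what such a search computes, the sketch below brute-forces the top-k nearest item vectors to the vector of an item a user just viewed. The catalog, SKU names, and dimensionality are all made up for illustration; a real deployment would delegate this to an index inside the database rather than scanning every vector.

```python
import numpy as np

def recommend(query: np.ndarray, items: np.ndarray, ids: list, k: int = 3) -> list:
    # L2 distance from the query vector to every item in the catalog.
    dists = np.linalg.norm(items - query, axis=1)
    # Indices of the k smallest distances (unordered), then sort nearest-first.
    nearest = np.argpartition(dists, k)[:k]
    nearest = nearest[np.argsort(dists[nearest])]
    return [ids[i] for i in nearest]

rng = np.random.default_rng(0)
catalog = rng.standard_normal((100, 16))          # 100 synthetic item vectors
ids = [f"sku-{i}" for i in range(100)]

viewed = catalog[42]                              # vector of the item just viewed
hits = recommend(viewed, catalog, ids, k=3)       # hits[0] is sku-42 itself
```

Since the viewed item is its own nearest neighbor (distance zero), the first hit is always the item itself; a production system would drop it and recommend the rest.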
Fast, relevant recommendations are table stakes for any B2C product, but in the e-commerce world, product recommendations are especially crucial to the overall experience and can be incredibly efficient drivers of revenue. One of our users' core missions was to improve the performance of their product recommendation system by leveraging AI. Like many others, their use case was extremely latency-sensitive - queries to their vector database needed to be completed within 10 milliseconds to maintain a good end-user experience - and required moderate throughput. Moreover, these recommendations weren't generic searches. Their end users often needed filtered recommendations (such as specific dimensions for apparel or a particular shoe size). This complexity extended to the nature of the product data, which was inherently multi-modal, encompassing product titles, descriptions, and images.
Cardinal enabled them to meet their performance requirements, but Zilliz Cloud's dynamic schema and JSON support proved to be a game-changer here. It enabled the customer to tailor their data models to the specific characteristics of each product class, ensuring that the varied and complex metadata could be efficiently stored and queried with each vector. This adaptability, combined with Cardinal's superior performance, was the deciding factor for this particular customer.
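The filtered-recommendation pattern described above can be sketched as: apply the metadata predicate first, then rank only the surviving vectors by distance. The field names ("size", "color") and the tiny catalog are hypothetical; in Zilliz Cloud the predicate would be a filter expression evaluated against each item's JSON metadata alongside the vector search.

```python
import numpy as np

# Hypothetical product catalog: each item carries JSON-style metadata
# next to its embedding vector.
products = [
    {"id": "shoe-a", "meta": {"size": 9,  "color": "black"}},
    {"id": "shoe-b", "meta": {"size": 10, "color": "white"}},
    {"id": "shoe-c", "meta": {"size": 9,  "color": "red"}},
]
vectors = np.random.default_rng(1).standard_normal((len(products), 8))

def filtered_search(query: np.ndarray, predicate, k: int = 2) -> list:
    # 1) Keep only items whose metadata satisfies the predicate.
    keep = [i for i, p in enumerate(products) if predicate(p["meta"])]
    # 2) Rank the survivors by L2 distance to the query vector.
    dists = np.linalg.norm(vectors[keep] - query, axis=1)
    order = np.argsort(dists)[:k]
    return [products[keep[i]]["id"] for i in order]

# Only size-9 shoes are eligible; among those, rank by vector similarity.
hits = filtered_search(vectors[0], lambda m: m["size"] == 9, k=2)
```

Because the query here is shoe-a's own vector, shoe-a comes back first and shoe-b is excluded by the size filter regardless of how similar its vector is.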
AI-powered drug discovery
Drug discovery is an incredibly hard problem. Molecules for drugs can vary in size, from "small molecules" with tens of atoms to large biologics with tens of thousands of atoms. Machine learning can vectorize each molecule, creating a representation based on its function, such as treating a specific disease or symptom.
In this process, Zilliz Cloud's range search feature plays a crucial role. Researchers vectorize their target disease or symptom and search Zilliz Cloud for candidate drugs. Range search goes beyond standard top-k searches by finding all vectors (molecules) within a certain distance of the target, providing all relevant candidates instead of a fixed number. This feature is not only vital for drug discovery but also applicable in areas like fraud detection and cybersecurity. For instance, in banking, transactions are vectorized and compared against new transactions using range search to identify similar past activities, aiding in outlier detection.
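The distinction from top-k search can be sketched in a few lines: instead of returning a fixed number of neighbors, range search returns every vector within a radius of the query, so the result count follows the data. The molecule vectors and the radius below are synthetic; in practice the radius would come from domain-specific tuning, and the scan would be served by the database's index.

```python
import numpy as np

def range_search(query: np.ndarray, vectors: np.ndarray, radius: float) -> list:
    # Distance from the query to every stored vector.
    dists = np.linalg.norm(vectors - query, axis=1)
    # Keep everything within the radius; no fixed result count.
    hits = np.nonzero(dists <= radius)[0]
    # Return (distance, index) pairs, nearest first.
    return sorted(zip(dists[hits], hits))

target = np.zeros(4)                     # vectorized disease/symptom target
molecules = np.array([
    [0.1, 0.0, 0.0, 0.0],                # distance 0.1 -> inside the radius
    [0.0, 0.3, 0.0, 0.0],                # distance 0.3 -> inside the radius
    [2.0, 0.0, 0.0, 0.0],                # distance 2.0 -> outside
])
candidates = range_search(target, molecules, radius=0.5)
```

Here `candidates` contains exactly the two molecules within distance 0.5 of the target; with top-k search you would instead have to guess k in advance and either truncate or pad the candidate set.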
Parting words
RAG has been and will continue to be incredibly important for us, but it's just the tip of the vector database iceberg. Here, we introduced three real-world use cases directly enabled by some of the new features we're announcing today. When comparing Zilliz Cloud to other vector databases, these features were, and will continue to be, the defining factor for success.
With that being said, don't take my word for it. Many data types can be vectorized, stored, and queried inside Zilliz Cloud. Give it a try now for free with no installation hassles, and let the data revolution begin.
Start Free, Scale Easily
Try the fully-managed vector database built for your GenAI applications.
Try Zilliz Cloud for Free