Jina AI / jina-embeddings-v3
Milvus Integrated
Task: Embedding
Modality: Text
Similarity Metric: Any (Normalized)
License: CC BY-NC 4.0
Dimensions: 1024
Max Input Tokens: 8192
Price:
jina-embeddings-v3 Overview
The jina-embeddings-v3 model is Jina AI's latest multilingual text embedding model, with 570 million parameters and a maximum input length of 8192 tokens. It handles multilingual data processing and long-context retrieval, achieving state-of-the-art (SOTA) performance across 94 languages. The model produces embeddings suited to a range of tasks, including query-document retrieval, clustering, classification, and text matching.
jina-embeddings-v3 also supports Matryoshka embeddings, which let you shorten the output embedding to fit your needs. While the default output dimension is 1024, you can truncate it to 32, 64, 128, 256, 512, or 768 with only a small loss in quality, making the model adaptable to storage- and latency-sensitive applications.
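As a minimal sketch of what Matryoshka truncation does (the vector below is random stand-in data, not real model output; the only assumption is that truncated embeddings are re-normalized to unit length before use):

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length."""
    truncated = vec[:dim]
    return truncated / np.linalg.norm(truncated)

# Stand-in for a 1024-d jina-embeddings-v3 vector
full = np.random.default_rng(0).standard_normal(1024)
full /= np.linalg.norm(full)

small = truncate_embedding(full, 128)
print(small.shape)  # (128,)
```

Because the leading components carry most of the information, the 128-d vector can be stored and searched at a fraction of the cost of the full 1024-d vector.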
Compare jina-embeddings-v3 with Jina v2 models:
Model | Parameter Size | Embedding Dimension | Language Coverage |
---|---|---|---|
jina-embeddings-v3 | 570M | flexible (default: 1024) | multilingual text embeddings; supports 94 languages in total |
jina-embeddings-v2-small-en | 33M | 512 | English monolingual embeddings |
jina-embeddings-v2-base-en | 137M | 768 | English monolingual embeddings |
jina-embeddings-v2-base-zh | 161M | 768 | Chinese-English bilingual embeddings |
jina-embeddings-v2-base-de | 161M | 768 | German-English bilingual embeddings |
jina-embeddings-v2-base-code | 161M | 768 | English and programming languages |
How to create embeddings with jina-embeddings-v3
There are two primary ways to generate vector embeddings:
- PyMilvus: the Python SDK for Milvus, which seamlessly integrates the jina-embeddings-v3 model.
- SentenceTransformer library: the Python library sentence-transformers.
Once the vector embeddings are generated, they can be stored in Zilliz Cloud (a fully managed vector database service powered by Milvus) and used for semantic similarity search. Here are four key steps:
- Sign up for a Zilliz Cloud account for free.
- Set up a serverless cluster and obtain the Public Endpoint and API Key.
- Create a vector collection and insert your vector embeddings.
- Run a semantic search on the stored embeddings.
Create embeddings via PyMilvus and insert them into Zilliz Cloud for semantic search
from pymilvus.model.dense import SentenceTransformerEmbeddingFunction
from pymilvus import MilvusClient

# jina-embeddings-v3 ships custom modeling code, so trust_remote_code is required
ef = SentenceTransformerEmbeddingFunction(
    "jinaai/jina-embeddings-v3",
    trust_remote_code=True,
)

docs = [
    "Artificial intelligence was founded as an academic discipline in 1956.",
    "Alan Turing was the first person to conduct substantial research in AI.",
    "Born in Maida Vale, London, Turing was raised in southern England.",
]

# Generate embeddings for documents
docs_embeddings = ef.encode_documents(docs)

queries = [
    "When was artificial intelligence founded?",
    "Where was Alan Turing born?",
]

# Generate embeddings for queries
query_embeddings = ef.encode_queries(queries)

# Connect to Zilliz Cloud with your Public Endpoint and API Key
client = MilvusClient(
    uri=ZILLIZ_PUBLIC_ENDPOINT,
    token=ZILLIZ_API_KEY,
)

COLLECTION = "documents"
if client.has_collection(collection_name=COLLECTION):
    client.drop_collection(collection_name=COLLECTION)
client.create_collection(
    collection_name=COLLECTION,
    dimension=ef.dim,
    auto_id=True,
)

for doc, embedding in zip(docs, docs_embeddings):
    client.insert(COLLECTION, {"text": doc, "vector": embedding})

results = client.search(
    collection_name=COLLECTION,
    data=query_embeddings,
    consistency_level="Strong",
    output_fields=["text"],
)
For more details, check out this Jina AI documentation page.
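Because the model's embeddings are normalized (hence "Similarity Metric: Any" above), inner product and cosine similarity produce identical scores and rankings, so either metric works when creating the collection. A minimal sketch with random stand-in unit vectors (not real model output):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for three normalized 1024-d document embeddings
vecs = rng.standard_normal((3, 1024))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

# Stand-in for a normalized query embedding
query = rng.standard_normal(1024)
query /= np.linalg.norm(query)

ip = vecs @ query  # inner product scores
cosine = (vecs @ query) / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(query))

print(np.allclose(ip, cosine))  # True
```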
Create embeddings via Sentence Transformer and insert them into Zilliz Cloud for semantic search
from sentence_transformers import SentenceTransformer
from pymilvus import MilvusClient

# truncate_dim uses the model's Matryoshka support to emit 512-d vectors,
# matching the collection dimension below
model = SentenceTransformer(
    "jinaai/jina-embeddings-v3",
    trust_remote_code=True,
    truncate_dim=512,
)

docs = [
    "Artificial intelligence was founded as an academic discipline in 1956.",
    "Alan Turing was the first person to conduct substantial research in AI.",
    "Born in Maida Vale, London, Turing was raised in southern England.",
]

# Generate embeddings for documents
docs_embeddings = model.encode(docs, normalize_embeddings=True)

queries = [
    "When was artificial intelligence founded?",
    "Wo wurde Alan Turing geboren?",  # German: "Where was Alan Turing born?"
]

# Generate embeddings for queries
query_embeddings = model.encode(queries, normalize_embeddings=True)

# Connect to Zilliz Cloud with your Public Endpoint and API Key
client = MilvusClient(
    uri=ZILLIZ_PUBLIC_ENDPOINT,
    token=ZILLIZ_API_KEY,
)

COLLECTION = "documents"
if client.has_collection(collection_name=COLLECTION):
    client.drop_collection(collection_name=COLLECTION)
client.create_collection(
    collection_name=COLLECTION,
    dimension=512,  # must match the truncated embedding size
    auto_id=True,
)

for doc, embedding in zip(docs, docs_embeddings):
    client.insert(COLLECTION, {"text": doc, "vector": embedding})

results = client.search(
    collection_name=COLLECTION,
    data=query_embeddings,
    consistency_level="Strong",
    output_fields=["text"],
)
Further Reading
- Training Text Embeddings with Jina AI
- General Text-Image Representation Learning for Search and Multimodal RAG
- Choosing the Right Embedding Model for Your Data
- Evaluating Your Embedding Model
- Training Your Own Text Embedding Model
- A Beginner's Guide to Website Chunking and Embedding for Your RAG Applications
- What is RAG?