Events, conferences and webinars.
Memory for LLM applications: Different retrieval techniques for getting the most relevant context
Connecting external data sources to LLMs to give them memory is a crucial part of many LLM applications. This can take the form of connecting to pre-existing large corpora of data, remembering user conversations, or even creating "new memories" through reflection. Underpinning all of this is retrieval: the process of pulling relevant pieces of information into context. Join LangChain's Harrison Chase for a deep dive into retrieval, where we dissect the challenges of finding relevant information in a large corpus of data.

You'll learn:
- What memory is and why it matters
- Types of memory
- Basics of semantic search
- Edge cases of semantic search
- Generative Agent examples

Jun 08, 2023 09:00 AM Pacific | Zilliz Webinar - Zoom
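To make the retrieval idea concrete, here is a minimal sketch of semantic search over a small corpus: embed every document, embed the query, and return the top-k documents by cosine similarity. The bag-of-words `embed` function is a stand-in assumption for illustration only; a real application would use a learned embedding model and a vector database rather than a Python list.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" -- a real app would call an embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [
    "vector databases store embeddings",
    "LLMs lack long-term memory",
    "semantic search finds relevant context",
]
# Index: precompute one vector per document
index = [(doc, embed(doc)) for doc in corpus]

def retrieve(query, k=2):
    # Rank all documents by similarity to the query and keep the top k
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("semantic search relevant context for llms"))
```

The same ranking logic is what a vector database performs at scale, with approximate nearest-neighbor indexes replacing the brute-force `sorted` call.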
Tutorial: Working with LLMs at Scale
In this hands-on tutorial, we'll introduce LLMs and the two main problems they face in production: high cost and lack of domain knowledge. We then introduce vector databases as a solution to both problems, covering how a vector database can facilitate data injection and caching through the use of vector embeddings. Finally, we'll use this knowledge to build an LLM application with LlamaIndex and Milvus, the world's most popular vector database.

What you'll need:
- Python 3.9 or above
- A basic understanding of vectors and databases

What you'll learn:
- What a vector database is
- Why LLMs face data issues
- How to deal with data issues in an LLM

Jun 15, 2023 09:00 AM Pacific | Zilliz Webinar - Zoom
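The caching idea mentioned above can be sketched as a semantic cache: store each answered query with its embedding, and serve the stored answer when a new query is similar enough, skipping the expensive LLM call. Everything below is a simplified assumption for illustration; `SemanticCache`, the toy `embed` function, and the `threshold` value are hypothetical, and a production system would keep the vectors in Milvus rather than a Python list.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a production cache would use a real embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Reuse a stored answer when a new query is similar enough."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (query_vector, answer) pairs

    def get(self, query):
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[1]  # cache hit: no LLM call needed
        return None  # cache miss: caller falls through to the LLM

    def put(self, query, answer):
        self.entries.append((embed(query), answer))

cache = SemanticCache()
cache.put("what is a vector database", "A database that indexes embeddings.")
print(cache.get("what is a vector database"))  # similar query: cache hit
print(cache.get("explain gradient descent"))   # unrelated query: None
```

The similarity threshold is the key design choice: set it too low and unrelated questions get stale answers; too high and near-duplicate queries still pay for an LLM call.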
LLMs in Production - Part II
Join us at the LLMs in Production virtual conference, hosted by the MLOps Community on June 15-16. Our very own Yujian Tang will be speaking on "Solving LLM Data Problems" on Friday, June 16th at 12 PM Pacific. Register for free to join us!

Jun 16, 2023 12:00 PM Pacific | Virtual - MLOps Community