Memory for LLM applications: Different retrieval techniques for getting the most relevant context
Jun 08, 2023 · 09:00 AM Pacific / 12:00 PM Eastern
Zilliz Webinar - Zoom
Join the Webinar
About the session
Connecting external data sources to LLMs to give them memory is a crucial part of many LLM applications. This can take the form of connecting to pre-existing large corpora of data, remembering user conversations, or even creating “new memories” through reflection. Underpinning all of this is retrieval: the process of pulling relevant pieces of information into context.
Join LangChain’s Harrison Chase for a deep dive into retrieval where we dissect the challenges of finding relevant information from a large corpus of data.
- What is memory and why is it important
- Types of memory
- Basics of semantic search
- Edge cases of semantic search
- Generative Agent examples
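As a taste of the semantic-search basics on the agenda, the core retrieval loop is: embed the query and each document as vectors, score them by cosine similarity, and return the top-k matches as context. Here is a minimal, self-contained sketch; the `embed` function below is a toy hashed bag-of-words stand-in for illustration only (a real application would use a learned embedding model and a vector database):

```python
import re
import zlib
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size count vector,
    then L2-normalize. Stands in for a real embedding model."""
    vec = np.zeros(dim)
    for word in re.findall(r"[a-z]+", text.lower()):
        vec[zlib.crc32(word.encode()) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query by cosine similarity."""
    q = embed(query)
    scores = [float(np.dot(q, embed(d))) for d in documents]  # unit vectors
    top = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in top[:k]]

docs = [
    "The cat sat on the mat.",
    "LLMs can use external memory via retrieval.",
    "Vector databases store embeddings for similarity search.",
]
print(retrieve("How do LLMs retrieve memory?", docs, k=1))
```

The retrieved snippets would then be placed into the LLM's prompt as context; the session digs into where this simple picture breaks down (the "edge cases" above).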
Meet the Speaker
Join the session for live Q&A with the speaker
Founder, LangChain

Harrison Chase is the co-founder and CEO of LangChain, a company formed around the open-source Python/TypeScript packages that aim to make it easy to develop Language Model applications. Prior to starting LangChain, he led the ML team at Robust Intelligence (an MLOps company focused on testing and validation of machine learning models), led the entity linking team at Kensho (a fintech startup), and studied stats and CS at Harvard.