Answer 'What's for Dinner?' with Vector Search and Natural Language using Haystack and Milvus
Okay, today I'm pleased to introduce today's session, "What's for Dinner?" with Vector Search and Natural Language using Haystack and Milvus. And our guest speaker is Tilde. Tilde is a San Francisco-based artist and engineer. By day they're a free and open source software advocate at deepset. They can probably deadlift more than you; ask them about how to paint an algorithm, the intersections between mutual aid and biology, or which coast has the best vegan croissants.
Welcome, Tilde. Hello. Thank you so much for the warm introduction, Christy. It's very nice to meet you. So I'm gonna share my screen and let's roll on with it.
Can you see my slides, everybody? Great. I'm going to, um, yes. Okay, cool. Let me look at the chat, make sure I'm keeping an eye on everything. So, um, yeah, as mentioned, my name is Tilde.
I'm based in San Francisco. I'm a senior developer advocate at deepset working on the Haystack open source framework. And my interests other than powerlifting and art are, surprise, vegan cooking, as you could probably tell from the title of this webinar. So today, what we're gonna go over: I'm gonna give you a brief overview of Haystack just to make sure you have context to understand the code walkthrough, and we'll introduce Milvus as well.
Uh, I'll show you the application that I built, and then we'll have an informal Q&A. And I would really love for this to be helpful to you, so please drop any questions you have in the chat and we will follow up with them at the end. But just to kind of get a sense of people's prior knowledge and where you're at, can you say in the chat who here has heard of Haystack before? No? Okay, perfect.
We'll start from the beginning. So Haystack is an open source framework for building production-ready large language model applications. We let you connect your data, in whatever format you have it in, to whatever model you want, while being flexible and easy to use. The basic building blocks of Haystack, which I will talk more extensively about in a second, are pipelines and components. And we will send these slides out to you after the webinar, so if you wanna reference any of these materials, you'll be able to do that.
So don't worry about needing to frantically screenshot this QR code. Deepset Cloud, our commercial offering, is actually built on top of the Haystack framework. Now, how many of you have heard of retrieval augmented generation before? Yes? Cool. Retrieval augmented generation, or RAG, is very hot in the large language model space right now, and what it basically means, for anybody that doesn't know, is giving a large language model additional context so it can give you a better, more accurate answer to the question.
And I was at PyCon this weekend, and this was the best example that I could find showing RAG with Haystack in three lines of code. So basically we have a URL, which is the Haystack website, and we have a question, which we then pass to a large language model, and we get a response back explaining what Haystack is based on the document provided, plus a little bit of metadata. So it's cool that you don't need a huge amount of lines of code to make that happen, although of course things can get quite a bit more complicated. So Haystack is actually four years old. We've been doing natural language processing since before it was cool, although when it was released in 2020 it was focused on the use cases that were more popular at the time, such as semantic search, extractive question answering, and table QA.
But then in 2022 everything changed. Large language models went mainstream, and we started seeing an explosion of use cases from people wanting to support RAG and agents. So we decided to pause and re-architect our framework to better support those use cases. And then in March we released Haystack 2.0, which is more focused on the way that people are writing AI apps these days, as well as having a more flexible, extensible, better developer experience.
And I will give you enough of an overview of Haystack's features so that we can hopefully understand the code demo. So pipelines, as I mentioned before, are one of the building blocks of Haystack. They're powerful abstractions that allow you to define the flow of data through your LLM application. They're a graph. Everything's a graph.
You get a graph and you get a graph. And the nodes in this graph, as it were, are called components. You have complete flexibility over the way you arrange the components in the pipeline and the way that data passes through them. Now, this code is a slightly more complicated example than what I showed you earlier, but it's gonna be similar to the demo I'll show you. So let me walk you through it.
We have a pipeline, which is a Python object we're instantiating, and we're adding some components to it: an embedder, a retriever to retrieve documents from a document store, and a prompt builder, which builds the prompt that is then sent to the large language model. We connect these things to specify which order the components are run in, and then give it some arguments so that we can actually run the pipeline. Haystack also includes a utility to help you draw your pipeline, which is cool, because you can actually get quite complex with the logic of how your pipeline flows.
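To make that add-connect-run pattern concrete, here is a rough sketch of the kind of pipeline being described, written against Haystack 2.x component names with an in-memory document store and a placeholder prompt template; the exact code and models on the slide may differ.

```python
from pathlib import Path

from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryEmbeddingRetriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

document_store = InMemoryDocumentStore()  # assume documents were indexed earlier

pipe = Pipeline()
pipe.add_component("embedder", SentenceTransformersTextEmbedder())
pipe.add_component("retriever", InMemoryEmbeddingRetriever(document_store=document_store))
pipe.add_component("prompt_builder", PromptBuilder(template="Given {{ documents }}, answer: {{ query }}"))
pipe.add_component("llm", OpenAIGenerator())  # expects OPENAI_API_KEY in the environment

# connect() specifies which output feeds which input, and therefore the run order
pipe.connect("embedder.embedding", "retriever.query_embedding")
pipe.connect("retriever.documents", "prompt_builder.documents")
pipe.connect("prompt_builder", "llm")

pipe.draw(Path("rag_pipeline.png"))  # the drawing utility mentioned above

question = "What is Haystack?"
result = pipe.run({"embedder": {"text": question}, "prompt_builder": {"query": question}})
print(result["llm"]["replies"][0])
```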
For example, pipelines support branches. Let's say you wanted to run a RAG pipeline and you wanted to do what's called hybrid search, where you're using both keyword-based search and embedding-based search to find documents that are relevant to your query, because the combination of these two things can sometimes produce better results than one alone. With Haystack, you can branch out, join the results again at a document joiner component, and then use a ranker to decide which results are best.
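A sketch of that branch-and-join shape, again assuming Haystack 2.x component names and an in-memory document store; the joiner and ranker are the components she mentions, and the models are library defaults.

```python
from haystack import Pipeline
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.joiners import DocumentJoiner
from haystack.components.rankers import TransformersSimilarityRanker
from haystack.components.retrievers.in_memory import (
    InMemoryBM25Retriever,
    InMemoryEmbeddingRetriever,
)
from haystack.document_stores.in_memory import InMemoryDocumentStore

store = InMemoryDocumentStore()

pipe = Pipeline()
pipe.add_component("text_embedder", SentenceTransformersTextEmbedder())
pipe.add_component("embedding_retriever", InMemoryEmbeddingRetriever(document_store=store))
pipe.add_component("bm25_retriever", InMemoryBM25Retriever(document_store=store))
pipe.add_component("joiner", DocumentJoiner())
pipe.add_component("ranker", TransformersSimilarityRanker())

# Two branches (keyword search and embedding search) both feed the joiner,
# and the ranker then decides which of the joined results are best
pipe.connect("text_embedder.embedding", "embedding_retriever.query_embedding")
pipe.connect("embedding_retriever.documents", "joiner.documents")
pipe.connect("bm25_retriever.documents", "joiner.documents")
pipe.connect("joiner.documents", "ranker.documents")

query = "vegan buffalo wings"
result = pipe.run({
    "text_embedder": {"text": query},
    "bm25_retriever": {"query": query},
    "ranker": {"query": query},
})
```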
Haystack also supports conditional routing. Let's say you have a RAG pipeline that wants to look for an answer in a database, but if it doesn't find one, you wanna search the web or Slack or Notion or any other data source? Well, we have you covered (there's a small routing sketch after this paragraph). Pipelines can also contain loops, which are great if you want to generate output, such as structured output, that needs to be validated and conform to a certain shape. My colleague wrote a demo that takes meeting notes and uses a looping pipeline to produce API calls that you could then send to the GitHub API to create issues, because typing up issues from meeting notes and getting them into GitHub is not my favorite thing. I don't know about you. So it's incredibly handy.
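Going back to the routing idea for a moment, here is roughly what a fallback route might look like with Haystack's ConditionalRouter; the conditions, output names, and the commented wiring are illustrative, not the exact demo.

```python
from typing import List

from haystack.components.routers import ConditionalRouter
from haystack.dataclasses import Document

routes = [
    {   # documents were found: pass them on to the prompt builder
        "condition": "{{ documents|length > 0 }}",
        "output": "{{ documents }}",
        "output_name": "documents",
        "output_type": List[Document],
    },
    {   # nothing found: hand the original query to a web search branch instead
        "condition": "{{ documents|length == 0 }}",
        "output": "{{ query }}",
        "output_name": "query_for_websearch",
        "output_type": str,
    },
]
router = ConditionalRouter(routes=routes)

# Illustrative wiring inside a larger pipeline:
# pipe.connect("retriever.documents", "router.documents")
# pipe.connect("router.documents", "prompt_builder.documents")
# pipe.connect("router.query_for_websearch", "websearch.query")
```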
And then finally, which is relevant to what we're talking about today, we have agent pipelines. Agents can call third-party functions to go out and do other things and make decisions, like ask what the weather is or help you construct a recipe. So I'm excited to show you an example of what this looks like in practice. Now, Haystack also offers you a lot of flexibility in exactly how the data returned from the search is passed to the large language model, because we use a templating language called Jinja2 that has loops and conditionals.
And so you can really specify exactly how your prompt is going to look. And this prompt template is just another component in the Haystack pipeline. So speaking of components, as I mentioned, they're the building blocks of a pipeline, the nodes in the graph. Haystack provides a ton of components out of the box for common tasks like retrieving documents from a document store, pre-processing and removing white space, summarizing text, or routing queries, like some of those conditional branching pipelines that I showed you earlier.
But in addition to the out-of-the-box components, you can write your own custom components in just a few lines of code. All you really need is a Python class with a decorator, and we wanna know what inputs and outputs the component expects, so that we can serialize the pipeline, and what arguments are needed to actually run the component. To give you an example of what this looks like in real life: our friends over at Jina AI wrote a blog post on using a Haystack pipeline to de-duplicate Jira tickets, because who among us hasn't filed an issue or filed a bug and thought you were being a good citizen, but then you're like, oh no, someone already reported this, my bad.
Components ideally have one job and only one job, and this custom component is just deduplicating the issue keys. If you want a little more detail about that, you can check out their blog post, which specifies how they built the entire project.
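As a sketch of how small such a component can be (this is not Jina's actual code, and the `issue_key` metadata field is a made-up example):

```python
from typing import List

from haystack import component
from haystack.dataclasses import Document

@component
class IssueDeduplicator:
    """Drop documents whose metadata carries an issue key we have already seen."""

    @component.output_types(documents=List[Document])
    def run(self, documents: List[Document]):
        seen_keys = set()
        unique = []
        for doc in documents:
            key = doc.meta.get("issue_key")  # hypothetical metadata field
            if key not in seen_keys:
                seen_keys.add(key)
                unique.append(doc)
        return {"documents": unique}
```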
Now, some more general Haystack features: Haystack is model and database agnostic, because we wanna be able to support whatever data source you have and whatever model you want to use, and we also wanna be able to incorporate additional databases or additional data sources. RAG does not just mean vector databases; the web is a rich source of content, and if there's an API for it, if it exists on the web, Haystack can help you get it into a pipeline. The core of Haystack is meant to be a lightweight, flexible library, but we have many extensions and integrations to extend the functionality, including extensions that support models and vector databases, as well as tracing and monitoring and everything you need to get your app into production, including the Milvus integration, which is maintained by our friends over here. So speaking of Milvus, Milvus is an open source vector database designed for high scalability, and vector databases are specialized systems designed for managing and retrieving unstructured data. They excel at semantic similarity searches, which makes them great for building applications with large language models, especially if you have a lot of data you're operating over. Um, Christy, what am I missing? I'm not an expert on vector databases, so I wanted to ask you to add additional content.
Oh, that sounds great. Yeah, I usually just talk about storing, indexing, and searching vectors. Cool. All right. So let's do a live demo of this.
So, what does a RAG application look like? But first: I started this project thinking I'd use a Kaggle dataset, but then I looked at it and it was like, oh, this doesn't actually contain the recipe data from my favorite cookbook, it's just metadata that's not actually very useful. But that's okay, because as I mentioned, Haystack is very flexible in how you get the data into the application, and the code that I've written uses several different formats to accomplish that. But showing is easier than telling, so I'm gonna roll over to my editor now.
Um, how is my font size? I think a little bigger. A little bigger. Cool. Is that good? Yeah, that's good. Thanks.
Great. Cool. So first we are connecting to the Milvus document store, because I wanted one instance of this that could be passed around between the different components. And this is running in a Docker container on my local machine. Oh, excuse me.
So the first pipeline that I ran was an indexing pipeline. Indexing pipelines take in data, and like many other people, I'm not the most organized person all the time, so I have three recipes that I've written, but oh no, they're in different file formats, because I wasn't thinking about ingesting them into a pipeline when I wrote them. But that's okay, because Haystack can take these recipes no matter what type they are and ingest them into a document store, and I will show you exactly how that is done. So the first component we have is a file type router that takes different types of files, and it specifies which types we accept. And then each type of file, which is text, Markdown, and PDF, gets a converter component, which will convert it to a Haystack document.
And then we have a document joiner, which makes these three branches become one and joins all the results. Anytime you're pre-processing documents, you wanna clean them, remove the white space, all that fun stuff, and then you wanna split them into chunks. You can split by word or paragraph, depending on what kind of document it is and how long the chunks of text tend to be. I thought for recipes, 150 words seemed like a good size, with an overlap of 50 to make sure that we're not missing anything.
And then we use this SentenceTransformers document embedder to actually turn those documents into vectors with this model, which is lightweight enough to run on my local machine without causing any hiccups. And then we wanna write those to the Milvus document store. So then we make our Haystack pipeline, and we add all these components to the pipeline. Then we use the connect method to specify which component is connected to the next component, so that we know which order to run them in. And then over here we're just saying, hey, here's where these files live, and we're gonna run the pipeline with the file type router saying: input all these files.
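Pieced together, the indexing pipeline she just walked through looks roughly like this; the Milvus connection details, embedding model, and file paths are placeholders, not the exact demo code.

```python
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument, PyPDFToDocument, TextFileToDocument
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack.components.joiners import DocumentJoiner
from haystack.components.preprocessors import DocumentCleaner, DocumentSplitter
from haystack.components.routers import FileTypeRouter
from haystack.components.writers import DocumentWriter
from milvus_haystack import MilvusDocumentStore

# Milvus running in the local Docker container she mentions
document_store = MilvusDocumentStore(connection_args={"uri": "http://localhost:19530"})

pipe = Pipeline()
pipe.add_component("router", FileTypeRouter(mime_types=["text/plain", "text/markdown", "application/pdf"]))
pipe.add_component("text_converter", TextFileToDocument())
pipe.add_component("md_converter", MarkdownToDocument())
pipe.add_component("pdf_converter", PyPDFToDocument())
pipe.add_component("joiner", DocumentJoiner())
pipe.add_component("cleaner", DocumentCleaner())
pipe.add_component("splitter", DocumentSplitter(split_by="word", split_length=150, split_overlap=50))
pipe.add_component("embedder", SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"))
pipe.add_component("writer", DocumentWriter(document_store=document_store))

# One branch per file type, joined back into a single stream of documents
pipe.connect("router.text/plain", "text_converter.sources")
pipe.connect("router.text/markdown", "md_converter.sources")
pipe.connect("router.application/pdf", "pdf_converter.sources")
pipe.connect("text_converter.documents", "joiner.documents")
pipe.connect("md_converter.documents", "joiner.documents")
pipe.connect("pdf_converter.documents", "joiner.documents")
pipe.connect("joiner", "cleaner")
pipe.connect("cleaner", "splitter")
pipe.connect("splitter", "embedder")
pipe.connect("embedder", "writer")

# Placeholder paths standing in for the three recipe files
pipe.run({"router": {"sources": ["recipes/flan.txt", "recipes/hemp_cheese.md", "recipes/eggplant_lasagna.pdf"]}})
```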
So let me roll over to here and run the preprocessing script, and hopefully this will work. Great. So it is spitting out this error; it just started spitting out this error this morning, and I think it is a side effect that is not actually relevant, because as you can see we still have six files in our database, so I'm not gonna worry about it. But sorry about that.
So then, we have this RAG pipeline, which will actually show you how to query the document store. We have a prompt, which says something like: answer the questions based on the given context. It loops through the documents that are provided and also provides the question to the large language model. And it's a pretty similar API to what we used before. We have a RAG pipeline, and we're adding an embedder, which importantly uses the same model that we were using to create these embeddings.
We have our document store, which is the same instance of a document store. We have our prompt builder, which takes this and makes it into a prompt understandable by the pipeline. And we have our large language model; we're using the OpenAI generator here, but as I mentioned, Haystack is flexible and you can use any of the major models that your heart desires. So we connect all these things, and then let's pass a question, which is: what ingredients would I need to make all three of these recipes?
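A sketch of that query-side RAG pipeline, with the prompt paraphrased from her description and import paths following the milvus-haystack integration docs; the connection details and embedding model are placeholders.

```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.generators import OpenAIGenerator
from milvus_haystack import MilvusDocumentStore
from milvus_haystack.milvus_embedding_retriever import MilvusEmbeddingRetriever

document_store = MilvusDocumentStore(connection_args={"uri": "http://localhost:19530"})

template = """Answer the question based on the given context.
Context:
{% for document in documents %}
{{ document.content }}
{% endfor %}
Question: {{ question }}"""

rag = Pipeline()
rag.add_component("embedder", SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"))
rag.add_component("retriever", MilvusEmbeddingRetriever(document_store=document_store))
rag.add_component("prompt_builder", PromptBuilder(template=template))
rag.add_component("llm", OpenAIGenerator())  # any supported generator could be swapped in here

rag.connect("embedder.embedding", "retriever.query_embedding")
rag.connect("retriever.documents", "prompt_builder.documents")
rag.connect("prompt_builder", "llm")

question = "What ingredients would I need to make all three of these recipes?"
result = rag.run({"embedder": {"text": question}, "prompt_builder": {"question": question}})
print(result["llm"]["replies"][0])
```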
So let's roll over here and run. All right, so our reply says that to make keto eggplant lasagna, flan, and hemp cheese, and it looks to be actually listing all of the recipes. And since these recipes are specific to me, right, this doesn't exist on the web, then I know that it's not the large language model using its prior knowledge; it's actually using the input that we provided, which is pretty cool. But as I mentioned before, I'm not the world's most prolific cookbook author. I have a lot of other hobbies, I have a lot of stuff going on.
So I wanted to also add additional data. So we have another indexing pipeline that takes data from the web, and it's not that different from the indexing pipeline we used before, except it uses the Haystack LinkContentFetcher, which is a component we provide that will take a URL and turn that URL into Haystack documents, so that you don't have to write a scraper and scrape it yourself and decide what's important about it. We also need a converter to convert the HTML to documents, and like before, we need a writer and an embedder that uses that same model we were using before. The URLs I'm using are actually from my favorite cookbook, the Veganomicon by Isa Chandra Moskowitz; she has a lot of her recipes online, and recipes are not able to be copyrighted, legally.
Fun fact. Which is why they all start out at the beginning with, like, "my hubby and my dog hated this recipe," which, no one wants that, right? But with this, we can avoid it. So I passed this list of URLs into the indexing pipeline.
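Roughly, that web-indexing pipeline looks like this; the URLs here are placeholders rather than the actual recipe pages.

```python
from haystack import Pipeline
from haystack.components.converters import HTMLToDocument
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
from milvus_haystack import MilvusDocumentStore

document_store = MilvusDocumentStore(connection_args={"uri": "http://localhost:19530"})

pipe = Pipeline()
pipe.add_component("fetcher", LinkContentFetcher())   # fetches each URL as a content stream
pipe.add_component("converter", HTMLToDocument())     # turns the HTML into Haystack documents
pipe.add_component("splitter", DocumentSplitter(split_by="word", split_length=150, split_overlap=50))
pipe.add_component("embedder", SentenceTransformersDocumentEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"))
pipe.add_component("writer", DocumentWriter(document_store=document_store))

pipe.connect("fetcher.streams", "converter.sources")
pipe.connect("converter", "splitter")
pipe.connect("splitter", "embedder")
pipe.connect("embedder", "writer")

# Placeholder URLs standing in for the recipe pages she indexes
pipe.run({"fetcher": {"urls": [
    "https://example.com/vegan-buffalo-wings",
    "https://example.com/lasagna-bolognese",
]}})
```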
So let's go ahead and run the web index pipeline. Lemme make this bigger as well; it'll probably be easier for you to read. Oh, great. I didn't know you could do that. Yeah, sorry about that.
Um, okay, cool. So there's that. And then, what I did here, now this gets really interesting, right? I decided to make another RAG pipeline. We could have used our original RAG pipeline, but I wanted to get a little fancy with it, because I wanted to make a pipeline that we could easily wrap into a function and then provide as a tool, so that other pipelines could actually call this function and say, hey, vegan chef, help us write a recipe.
So I added a little wrapper so that this works as a standalone. Let's run the chef pipeline script just to prove that it works. It's asking about buffalo wings and lasagna bolognese, which are some of the recipes that we have ingested into the pipeline. So this is the chef pipeline. Aha.
So to make buffalo wings, you would need a 14-ounce block of extra firm tofu, salt... oh man, this is making me hungry, even though it's pretty early in the morning here, but I'm basically always hungry. But again, you can tell that if I had just asked OpenAI how to make buffalo wings, it would certainly provide me with an answer that included meat, which is not this. So I know that it is taking this from the document store, where it is supposed to.
Now let's look at where we are actually using tools to wrap this function. So we are importing the chef pipeline from the file where we defined it, and then I made a function called ask a vegan chef. It basically just runs the pipeline with some arguments, which is just some JSON and the query we're passing in, and prints the response, just because I like printing things to make sure my code is doing anything at all, so I'm not just sitting there freaking out. And then OpenAI, in order to provide your function as a tool, gives you a JSON spec that you must follow. So you have to say what type of tool this is: it is a function.
A function has a name, it has a description, and some parameters. It takes a query, which is a string, and it describes it, and it is a required parameter. So there are sort of two different ways of calling large language models right now. There's generation, which is what we were doing in our basic RAG pipelines.
We just pass it a prompt. But really, all the large language models are sort of moving towards a chat completion API, which is more like an agent. So in this, we're gonna do some agentic chat messages. We're gonna say, hey, this message is from the system: you're a helpful and knowledgeable agent and you have access to your personal vegan chef (isn't that cool?) and can ask them about vegan recipes.
And then the user says: what ingredients would I need to make buffalo wings and lasagna bolognese? And then again, here we use Haystack's wrapper for the OpenAI chat generator. You do need an API key for this, which I have saved in my environment variables, because, you know, I like you and I'm sure you're all very cool and trustworthy people, but I just don't know you that well yet. So here we're going to actually call this function and see what kind of chat message it produces. So let's run python tools.py... aha. Oh, I have a key error. Did I delete something? Um, oops.
Hmm. Interesting. Available functions, function name. I really don't see where there's a typo here. There we go.
I didn't change anything. You saw that; I'm not crazy, right? Well, who knows? Computers, right? But anyway, so we get a message back from the large language model that is kind of explaining in natural language: to make buffalo wings, here are the ingredients you need. And it's a little bit different than the response that we received before; it gives you a little more context in English, which is nice.
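Putting the pieces she described together, the tool-calling setup looks roughly like this; the module name, function signature, and pipeline inputs are illustrative, not the exact demo code.

```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

from chef_pipeline import rag as chef_pipeline  # the RAG pipeline from earlier; module/name illustrative

def ask_a_vegan_chef(query: str) -> str:
    """Run the chef RAG pipeline for a query and return (and print) its reply."""
    result = chef_pipeline.run({"embedder": {"text": query}, "prompt_builder": {"question": query}})
    reply = result["llm"]["replies"][0]
    print(reply)  # printing just to confirm the pipeline actually did something
    return reply

# The JSON spec OpenAI expects when you expose a function as a tool
tools = [{
    "type": "function",
    "function": {
        "name": "ask_a_vegan_chef",
        "description": "Ask a personal vegan chef about recipes and ingredients",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string", "description": "The question for the chef"}},
            "required": ["query"],
        },
    },
}]

messages = [
    ChatMessage.from_system(
        "You are a helpful and knowledgeable agent with access to a personal vegan chef. "
        "Ask them about vegan recipes."
    ),
    ChatMessage.from_user("What ingredients would I need to make buffalo wings and lasagna bolognese?"),
]

chat_generator = OpenAIChatGenerator()  # OPENAI_API_KEY is read from environment variables
response = chat_generator.run(messages=messages, generation_kwargs={"tools": tools})
# The reply describes which function the model wants to call, and with what arguments
print(response["replies"][0])
```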
And that is really as far as I have gotten. So let me just pause and look at what questions we have, and we can take it more conversationally from here. Great, thank you, Tilde. So in your agent it looked like you only had one function. Would you be able to add more than one function to that? Yeah, I think it would be cool to, like, what if you had one that just got ingredients? What if you had one that suggested recipe ideas? Or I would love an integration that emailed you a shopping list so that you didn't have to pull it out of an application.
Right. Um, so that'd be cool. Okay, I see a question in the chat. Um, Nicole Nikolai asks, could you recommend strategies for dealing with a large amount of files? Like, let's say I've got Italian recipes, French recipes, et cetera.
Mm, yeah. Metadata labeling, if you've got a lot of files, is really huge. That way, when you're doing search, you can say, hey, I wanna filter out files that are not Italian recipes. So yeah, just doing a little bit of thought on the front end to make sure that your files have the metadata you want will save you headaches later on when you wanna search it.
Or especially dates, right? A lot of times people wanna filter or re-rank by recency, so that could be another great piece of metadata to support. Does that answer your question? Yeah. And also, in Milvus as a vector database, all of the fields except the vector we refer to as metadata, and metadata filtering is possible in search with Milvus.
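For reference, metadata filtering at query time might look like this with the Haystack 2.x filter syntax, reusing the query pipeline sketched earlier; the `cuisine` and `date` meta fields are hypothetical labels you would attach at indexing time, and this assumes the Milvus integration accepts the standard filter dictionary.

```python
# Haystack 2.x filter syntax, passed to the retriever at query time
filters = {
    "operator": "AND",
    "conditions": [
        {"field": "meta.cuisine", "operator": "==", "value": "italian"},
        {"field": "meta.date", "operator": ">=", "value": "2023-01-01"},
    ],
}

question = "What pasta recipes do I have?"
result = rag.run({
    "embedder": {"text": question},
    "retriever": {"filters": filters},
    "prompt_builder": {"question": question},
})
```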
Wonderful. As the database geek here, it was probably there but my eyes skipped over it when you were doing the Milvus call; I didn't see which vector database index you were using. Um, also, there's a question: can the OpenAI functions call a REST API? Yeah, we're just wrapping the OpenAI API for you in Haystack to make that easier.
But yeah, they're essentially calling those models from the cloud. Okay. There's probably some default setting in there, because the index in a vector database refers to a data structure, and we have different data structures like clustering or trees. So yeah, I guess I'd have to look at your code to figure it out. Yeah, I can show you.
Um, I just used the default setting from your documentation, which was very easy to get started with. So if this is helpful, this is what I use. Oh, then maybe you're using our AUTOINDEX, which defaults to HNSW. Okay. I see a question. Yeah.
Uh, "far beyond" asks: can the OpenAI functions call a REST API? Yeah, that's the question. Yeah. As I mentioned, the Haystack generators are like a wrapper for the OpenAI API call.
So as long as you have your OpenAI key saved as an environment variable in your environment, we will just take care of passing that secret in the background for you. And it's not just OpenAI, right? You can use any of the major model providers, like Anthropic's Claude or Mistral's Mixtral, and you can run local models with Ollama. It's really quite flexible, because the models are really in an arms race and we wanna support whatever best fits your specific use case. Where do people see which models are supported? Ah, I can show you that in the Haystack documentation.
Oop, let me stop sharing this. So our documentation is quite comprehensive, and if you look at generators, we have a guide that explains how this all really works: the difference between generators versus chat generators, which I kind of showed you earlier. There's a generation API versus chat completion, which just requires you to pass in more parameters and then gives you a more agent-like response. But then here are all the models we support, right? Like AWS Bedrock, Azure, Cohere, Gradient, many models off of Hugging Face.
Um, yeah. So it's quite an extensive list. Mm-hmm, good questions. So are there other agent types that Haystack supports? Um, in terms of agent types, our agent features are still being built right now. What you can do with agents (I didn't show all of this today) is function calling, which we did demonstrate in the tools demo, routing, fallbacks if you don't get an answer, looping, and producing structured output.
But what we don't have yet that is on our roadmap is support for memory, which is really important if you want a conversational dialogue that goes back and forth; we need to support that. So that is on our roadmap for this quarter, along with support for multi-agent, which is also quite hot, and just making sure our overall developer experience makes sense. Since we're open source, our roadmap is public on GitHub, and you can check that out to see how we are making progress against it. But does that answer your question? Yes.
Yeah, I was wondering what kind of agentic features. Um, we actually had a maybe personal question for you. We had a speaker the other day, for our Milvus meetup, who was creating an open source memory system. And what's your take on open source versus closed source for memory systems? Oh, man. Or just in general; you've got a lot of generators there, I noticed, that are open source as well as closed source. Um, well, the generators themselves, those are like wrappers for APIs.
So the generators themselves, the wrapper classes, are open source. Yes, we do call closed source models, but it's a nuanced question, right? Because some people open source models, and what they do is publish a bunch of weights with no real instruction or context on how to actually build and run the model yourself, right? Open source isn't a binary, it's a gradient. And personally, before I joined deepset, I worked on Atom, the open source editor. So I am really bullish on working in public, showing your work, and allowing people to contribute, but that has to be balanced with how it supports your business strategy, and who's gonna do the emotional labor of making sure that you and the community are on the same page. At Haystack, we have regular office hours where, when we're gonna build a new feature like evaluation, we ask our community to show up and weigh in, and we even compensate them for their time in certain cases, to really make sure that our community is first and foremost the champion and consumer of our products.
So we're building what works best for you. But that being said, AI is a rapidly shifting ecosystem and landscape, and sometimes we have to support more closed source things because that is what the general consensus is that everybody is using, and we want our product to be useful as well. So I feel like I just rambled at you about open source. Okay. Those are my opinions.
Does that make sense? Am I answering your question? Yeah, possibly. Um, so this is a very naive question from me, for myself; maybe other people were wondering too. In terms of, like, integration frameworks, I know about LangChain and I know about LlamaIndex. Should I think about Haystack as another open source option in that category? Yes.
That is exactly the category where we see ourselves, and each framework has different strengths and weaknesses, right? LangChain has a huge community and it's very easy to get started with, but the complaints we hear about LangChain are like, oh, this tutorial is three weeks old and it's already broken. With Haystack, we have a history of maintaining backwards compatibility and resources like specifying our deprecation policy. We really want to give you features to make your app production-ready and be a little bit more stable. And LlamaIndex is really focused on the data processing piece of it, so we even have an integration between Haystack and LlamaIndex if you wanted to use both.
Oh, that's interesting. Yeah. Yeah. So, you know, you can be a collaborator and competition at the same time. But yeah, they're all great tools. We're all solving similar problems and we just have different approaches in how we think about those problems.
Right. Well, I see people are there, but I don't see any questions. I'm gonna read your documentation about Haystack's view of LLM generators versus chat completers. That's an interesting nuance. Yeah.
Could you say more about the providers? Yeah. Yeah. The model providers, I think, are really trying to shift people away from the generators to the chat completers, because I think it gives them a little bit more flexibility. This is my take, right? I didn't call OpenAI and get their PR department to vet my answer. But my personal opinion on this is: when you can produce a chat message from the system, it makes it a lot harder to put words in the AI's mouth, so they have a little bit more control over making sure that they're producing a response that is going to be a good user experience and consistent with what they wanna provide. Plus, everybody seems to want agents, right? We seem to have settled on chat as the primary model for how people are using LLMs now.
And so that gives you a little bit more tools to produce that kind of experience. So that is my opinion on why this change is being made. But hopefully I'll run into someone from OpenAI at an event at some point and I can run this past them and see if they agree with me or not. Oh, we have a question. Um, in a hypothetical scenario, using a RAG pipeline in Haystack, can the pipeline initiate the retrieval step for a second query before processing the generation step of the first query, rather than executing these steps sequentially? Um, yeah, as long as the second query doesn't depend on the results of the first query, you absolutely can do that.
I made a pipeline where we have multiple different querying steps. I made a medical chatbot that pulls papers from PubMed, and the first query is where we take the question and turn it into keywords that we can use to search PubMed, because their API takes keywords, not natural language. It returns some papers, and then we pass those papers back to the large language model so it can produce its answer in natural language. So yes, you can have pipelines with multiple steps that call LLMs at multiple points, depending on what your use case is. But does that answer your question? And, um, what are you building? This has got me curious. Oh, I think I can, let me see if I... yeah, there we go. I just unmuted you, Alex, if you'd like to say something.
Uh, hey. Um, I was just investigating how Haystack works, and of course when you deploy a pipeline, you have multiple queries, multiple users that are trying to use it at the same time. And usually the pipeline is taking some time on the retrieval step and also on the generation step, and there is no reason to wait for the generation step of one query to end before starting the retrieval step for the next query. So I was looking for some way to make sure that the queries are not executed sequentially if they're independent. Ah, so you wanna be able to run different queries in parallel? Uh, yes.
Or at least send requests asynchronously. Yeah, async support is coming. That is also on our Q2 roadmap; we have not built it yet. So stay tuned for details; hopefully we're going to have that out soon, because that is a feature the community wants.
I agree. It's really important and we don't quite have support for it yet. Okay, thank you. Um, Rishi asks, how does Haystack compare with LlamaIndex? Yeah, so LlamaIndex focuses more on data processing.
They have a lot more functionality for that. LlamaIndex also has a JavaScript SDK, whereas we do not. But they're both great frameworks, and there's actually a Haystack integration with LlamaIndex should you want to use both. I think we have a little bit more focus on tools to get your app into production than LlamaIndex does. So there are pros and cons, but yeah, they're both really solid products, to be honest.
Um, let's see. Well, if we don't have any more questions... oh, okay, we have more questions. So Jim is asking in the chat: you demonstrated several pipelines. Would each pipeline be deployed as its own web service? Yes.
Because indexing pipelines, where we ingest data, have sort of different needs in terms of being IO bound versus CPU bound, we wouldn't want to have the same configuration for all of our pipelines. But we do have a package called Hayhooks, which uses FastAPI to help you deploy an individual Haystack pipeline, for example via Kubernetes, and it exposes those API endpoints. So I can point you to the documentation where we have a little bit more information about that. We can't have a one-size-fits-all deployment guide, because it really depends on what your pipeline is doing, but we at least have some materials to get you started in the right direction.
Thank you. These are great questions. Appreciate y'all being so engaged. And if people wanna get started, do you have a place where you usually point people as the best place? Yeah. On our website we have, um, let me share my screen again.
Yeah. So on our website we have a bunch of tutorials, and those are just Colab notebooks that are rendered so you can start running and playing with them. We have videos on YouTube that explain things, if that's how you learn. We have regular meetups, we have a Discord server where you can get support, and you can read our code on GitHub. So depending on your preferred method of learning, we have many ways to get started, but our documentation gets a lot of love.
So I would just start poking around our website and see what speaks to you. Are those meetups virtual or in person? Both. Where are you located? Um, I'm in San Francisco, but deepset is distributed across Europe. So we've had meetups in Pittsburgh, San Francisco, London, and Berlin over the past couple of months, as well as several virtual meetups and webinars. So if you subscribe to our Luma calendar, you can stay in the loop about what we're planning next.
Okay. Well, I think that looks like it. We'll give them one more minute if they wanna ask more questions; otherwise we'll call it a wrap. Well, thank you so much.
You've been a lovely audience, and if you have any questions, please hit us up, and hope to see you... oh, do we have one more question? Wait, I see something in the Q&A area. "Thanks for the event." Oh, thanks. Oh, you're welcome.
Yeah. It was wonderful chatting with you this morning. Have a great rest of your day, and I hope to talk to you again soon. Thank you. Bye.