- Events
Drag-and-Drop LLMs: Simplifying RAG with FlowiseAI and Milvus
Webinar
About the Session
Join us for an in-depth webinar featuring Henry Heng from FlowiseAI. In this session, we'll explore how to build customized Large Language Model (LLM) applications, with a special focus on the crucial role of Retrieval Augmented Generation (RAG). Discover how Milvus, a leading vector database, serves as the backbone for efficient data retrieval and generation within the RAG process, seamlessly integrating with FlowiseAI to elevate your LLM capabilities.
Key Takeaways
- Get an overview of FlowiseAI's open-source UI visual tool for building LLM flows.
- Learn how Milvus, a leading vector database, integrates seamlessly with FlowiseAI to power the vector store component in RAG.
- Walk through real-world use cases that leverage the combined strengths of FlowiseAI and Milvus for optimized language understanding and generation.
My name is Henry, one of the founders of Flowise. A little bit of background about myself, as you can see here: I was a software engineer at a company called Fidelity Investments. A lot of people ask, why Fidelity? It's a finance brokerage firm. I mainly worked on internal automation tools, combined with our own pre-trained large language models.
This was before ChatGPT, when we had to label our own datasets, train our own models, and so on. I worked there for a few years, and I have a computer science degree with a major in artificial intelligence from the University of York. So that's my background — nothing too fancy, not Harvard or MIT. Now, a little bit of the story behind Flowise.
As I mentioned, I was working at Fidelity Investments, and after ChatGPT came out, people were trying to make things work that weren't possible before. People were using libraries like LangChain, LlamaIndex, and Haystack to tackle problems that couldn't be solved before the ChatGPT era. I was one of the developers on the team and saw firsthand how these frameworks can speed up development time. I was so intrigued by the frameworks and all the development progress that I decided to build Flowise, because one key thing about building with large language models is that it takes many iterations to figure out the perfect combination for your own use case. You have to figure out the right chunking strategy, the right vector store, your configurations, the prompts, and the models as well.
Flowise is designed to let you do that without writing the code over and over again. So let me jump to the Flowise application itself so you can get an idea of what Flowise is and how it works. Here it is. If I click into one of these components, you can see these little Lego blocks that let you connect one block to another. The idea is that you build your own Lego — your own custom AI solution that fits your own use case.
To help you do that, we have many different templates that give you an idea of what you can build. The first one is called Flowise documents question answering. The rough idea is that it ingests documents from a GitHub repo, converts them into vector embeddings using an embeddings model, and sends them to a Conversational Retrieval QA chain, which a chat model — ChatGPT in this case — then uses to return answers back to you. I'll show how all of this works in a second.
For now I just want to show you that there are many different templates for many different tasks. For example, this agent is designed to let you ask questions about your CSV data — your spreadsheets, your Excel sheets. You simply upload the file and then you can ask questions like: what was the opening price of this stock today? What was the highest trade today? Without further ado, I'll jump into a quick demo to show you how it works.
Okay, so this is a very simple example of what we call a chat flow. The idea is to have a memory node, which holds the conversation between you and the AI; then you have the model, which is ChatGPT; and finally you have the Conversation Chain. This chain is built on top of LangChain, and under the hood the prompt is designed to let you have a normal conversation with your AI.
If I click Use Template, the next thing to do is put in my credentials. These are the ones I've created, but you can also create a new one: click Create New, put in a credential name and your API key, and that's it.
You also have the option to set different parameters: the max tokens, top probability, frequency penalty, timeout, base path, and so on. You can choose different models as well; I'll choose the 16k one. And that's really it — you just save the flow. Then you can straight away chat with your AI. That's a very simple example, so now let's take it to the next step: how do you build a RAG stack that involves a vector database like Milvus?
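Conceptually, the memory node just replays earlier turns into the prompt before each new question. A minimal, dependency-free sketch of such a buffer memory — an illustration of the idea, not Flowise's actual implementation:

```python
class BufferMemory:
    """Toy sketch of a conversation buffer memory node."""

    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def add(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))

    def as_prompt(self, new_question: str) -> str:
        # Replay the history ahead of the new question, the way a
        # conversation chain stuffs memory into its prompt template.
        history = "\n".join(f"{s}: {t}" for s, t in self.turns)
        return f"{history}\nHuman: {new_question}\nAI:"


memory = BufferMemory()
memory.add("Human", "Hi, my name is Fox.")
memory.add("AI", "Nice to meet you, Fox!")
prompt = memory.as_prompt("What is my name?")
```

Because the earlier turns are in the prompt, the model can answer "What is my name?" correctly — that is all the memory node contributes.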
To do that, we actually have a template you can use, but for the webinar I'll create it from scratch so everyone understands what's going on. Feel free to stop me if you have any questions. When we click Add New here, you're brought to a new canvas, and when you click the plus icon you have many different integrations to select from: agents, embeddings models, different things. For now I'm building a RAG stack, which involves ingesting your document sources, converting them into embeddings, and upserting them to Milvus.
First I'll look at the document loaders, and you have many different options to pull your data from: you can load it from an API, pull it from Airtable, scrape a website, pull from a Confluence page, load document files, et cetera. In this demo I'm going to show how you can scrape a website and build a RAG stack from it. I'll be using the Cheerio web scraper, and I'll be scraping my own website, flowiseai.com. It doesn't have too much information on it.
It's a very basic one-page application. I'll just put in my URL here. If you click Additional Parameters, you have different options to fine-tune the scraping, but for the sake of the demo I'll keep it simple. Now we have the scraper ready, and obviously you need to chunk your data into pieces, because we're working with the limited context size of our LLMs.
So we have to split the data into chunks. For that I'll be using the text splitters, and here too you have many options. For a website it's recommended to use the HTML-to-Markdown text splitter. It's designed specifically to convert your HTML page to Markdown, which is then split up by different separators.
If you look at the website, there's a lot of HTML code and CSS code, and none of that provides real information to your LLM apps. What we want is the actual wording of the website — things like "Build apps easily" — so that when people ask "What is Flowise?", it can take that piece of text and answer the question. When people ask how to get started or how to install Flowise, it can pull the information from here and return it. That's why we use this kind of splitter: to get rid of all the HTML and CSS code.
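As a rough illustration of what such a splitter does (this is not the actual Flowise/LangChain splitter, just the idea), stripping markup and chunking the remaining text fits in a few lines of stdlib Python:

```python
import re


def html_to_text(html: str) -> str:
    """Crude tag stripper: drop <script>/<style> blocks, then all tags."""
    html = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()


def split_text(text: str, max_chars: int = 200) -> list[str]:
    """Split on sentence boundaries into chunks no longer than max_chars."""
    chunks, current = [], ""
    for piece in re.split(r"(?<=[.!?]) ", text):
        if len(current) + len(piece) + 1 > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += piece + " "
    if current.strip():
        chunks.append(current.strip())
    return chunks


page = ("<html><style>p{}</style><body>"
        "<h1>Flowise</h1><p>Build apps easily.</p></body></html>")
chunks = split_text(html_to_text(page))
```

The real HTML-to-Markdown splitter is smarter — it preserves headings as Markdown and splits on them — but the goal is the same: feed the LLM wording, not markup.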
So I'll connect that to my web scraper. Now that we have this ready, the next thing is to convert the different chunks into vector embeddings, and after that upsert them to Milvus. For that I'll be using the OpenAI embeddings model, but you have different options here too: Hugging Face, Azure, Cohere, Google, et cetera.
There's also LocalAI, which lets you use open-source embeddings as well. After I've got the embeddings, I need my vector store. In this case I'll be using Milvus. If you notice, among the vector store integrations we have Milvus nodes for upserting and for loading an existing collection. Here we want to upsert your vector embeddings to Milvus, so obviously we'll use the upsert one; I'll show you how to use the other one in another example.
So we connect the embeddings, then connect your credentials, and then connect the documents. As you can see, it takes two inputs: the documents — the content split into different chunks — go into this input here, and combined with the embeddings model, the chunks are converted into vector embeddings. You'll see how that looks in the dashboard later on. Here we have to connect our credentials.
You can create new credentials as well: just specify the username and password. Then for the server URL — I already have a project set up on Zilliz Cloud — I just need to grab my link, which is the public endpoint, and copy it. Remember to include the https prefix; I'll paste the link here, and also remember to copy the port over here.
You'll need to append the port to the end of the URL. That's all you need, plus the new collection name you want to use — I'll say "milvus_demo" in this case. Now that we have the Milvus node set up correctly, there are two final things to do. Here we have many different chains.
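Outside Flowise, the same connection details can be sanity-checked with a small script. The endpoint below is a placeholder, and the pymilvus import is deferred inside the function so the pure URL helper can run on its own:

```python
def build_milvus_uri(endpoint: str, port: int = 19530) -> str:
    """Append the port to a Milvus/Zilliz public endpoint if it's missing."""
    if not endpoint.startswith(("http://", "https://")):
        endpoint = "https://" + endpoint
    host = endpoint.rstrip("/")
    # If the URL already ends in a numeric port, leave it alone.
    return host if host.rsplit(":", 1)[-1].isdigit() else f"{host}:{port}"


def connect(endpoint: str, user: str, password: str) -> None:
    # Deferred import: pymilvus is only needed when actually connecting.
    from pymilvus import connections
    connections.connect(uri=build_milvus_uri(endpoint), user=user, password=password)


uri = build_milvus_uri("in01-abc.example.zillizcloud.com")
```

If `connect()` fails, the first things to check mirror Henry's debugging in the demo: the scheme (http vs https), the port, and the credentials.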
Chains are components built on top of LangChain, and under the hood each chain is designed to do something specific. In this case we'll use the Conversational Retrieval QA chain, which is designed specifically to answer questions from your document sources. There are different chains designed for different things — feel free to look at our documentation, or the LangChain documentation, to understand the different use cases.
So in this case I'll use the Conversational Retrieval QA chain. As you can see, it takes three inputs: first the language model, second the vector store, and third the memory. The memory is optional — if it's not connected, a default buffer memory is used. The only two required inputs are the ones with the asterisk here, the language model and the vector store. So, like Lego, we just connect this up to here.
And lastly, the language model: I'll use the ChatGPT model. Just add the ChatOpenAI node and connect your credentials. You can select whichever model you like; I always go for the 16k one, because why not — you get more context size compared to the model with only a 4,000-token context.
So I connect it all together, and that's everything. This is your simple RAG stack that can get you off the ground, let you test quickly and create prototypes, and then you can iterate from here. Let's save it. Now I have a little chat over here, so I can start asking questions already. Remember, what I'm trying to do is scrape the Flowise website and upsert the information to Milvus, so that I can do basic question answering over the Flowise website.
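Under the hood, the retrieval half of this stack boils down to: embed the chunks, embed the question, and hand the most similar chunks to the chat model as context. A toy, dependency-free sketch — bag-of-words counts standing in for OpenAI embeddings, and a Python list standing in for Milvus:

```python
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a real embeddings model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question (Milvus's job)."""
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


chunks = ["Flowise lets you build LLM apps easily.",
          "Milvus is an open-source vector database."]
context = retrieve("what is milvus", chunks)
```

In the real stack, Milvus does this similarity search over millions of vectors with a proper index; the chain then stuffs the retrieved chunks into the prompt before calling ChatGPT.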
Let me ask "what is Flowise". Actually, before I do that: as you can see, in my collections here I only have "book", and in the flow I specified a new collection called "milvus_demo". If everything works, you'll see a new collection appear over here. So let's ask: what is Flowise? ...Okay, there are some errors.
Okay, let me try to clean this up. Hmm, this is a fairly large error. Let me try specifying a 4,000 chunk size and check that everything is configured correctly — the ChatGPT node, and let me try switching the embeddings settings.
This seems to be correct. Let me double-check that my server URL is correct. I'll be deleting this instance later on, to keep any would-be hackers out. It looks correct. Okay, everything seems correct.
Let me save it and try to refresh again. Fingers crossed everything works this time... hmm, okay, something is still not working. Let me see. Okay, let's change strategy a little bit.
Not sure why it's not working — it was working just a few minutes ago. Ah, okay: my Render instance is down. I actually deployed my own Flowise instance over here on Render, so I just need to restart my service.
Of course, you can also look at the Flowise documentation for the different deployment options we provide for deploying Flowise. In this case I was using Render. Now that it's ready, let's try this one — this is the flow I just created.
Instead of scraping the website, here I upload a file — a plain text file. If you look at the text file, it's just a piece of text with some information, nothing fancy. And I'm using a text file loader. It's the same idea: I'm just using a different document source, and that's it. If I save it, everything else stays the same.
You have the ChatOpenAI model, the Milvus upsert node, the OpenAI embeddings. If I ask "who are the co-founders of Flowise", what it will do is go through the whole flow: chunk the file into pieces, upsert them to Milvus, and finally use ChatGPT to return the answer. As you can see, it's working. You can also see the source documents the answer is retrieved from. And if you look at Milvus and refresh, I can see the new collection over here — the one I specified in the flow. If you look at the data preview, you can see the text I just upserted, and these are the vector embeddings that were generated by the OpenAI embeddings model.
So that's your very simple RAG stack. If you want to use it in your own applications, we provide many different ways to do so: you can call it through an API, and there are different configurations you can specify. So that's the basic idea.
Hey Henry — Alex was wondering if maybe you had "https" in the Milvus endpoint for the first demo and it should have been "http"; maybe that's what the error was.
Oh — I'm not sure that was the error, because as you can see, in this case I was also using https. Okay. So, yeah.
Okay, I'm going to interrupt you for a moment because there are actually a couple of questions. Jefferson just asked: is the data stored encrypted by default in this RAG example?
The data source — as in all these different credentials? They are encrypted by default in Flowise. When you spin up your Flowise instance, you can specify your own encryption key, or if not, we create a random encryption key.
That key is used to encrypt all of the credentials you've created within Flowise.
So that's the credentials — and the same for all the data stored for the RAG application?
Exactly. In Flowise we don't hold any information ourselves; all of the documents you upload are encrypted using the same encryption key. You can specify where to store it, and it's on our roadmap to let you store it in more secure locations, like secret vaults from different providers.
But for now it's stored inside your database, encrypted, yes.
Okay, I have one more question, then we'll keep going with the demo. Dupan asks: what's the typical response time for the RAG model?
It depends on the configuration, but on average I'd say it's pretty quick. As you saw, when I asked "who are the co-founders of Flowise", the answer came back in about three to five seconds on average. It depends on your document sources: if you have more complicated document sources, it might take a little longer, because it has to filter out the irrelevant information, get the most relevant pieces, and feed them back to your application. But on average, I'd say three to five seconds.
But again, it depends on your document sources.
That makes sense.
One thing I'd like to point out about the RAG stack you're seeing here: when you ask a question the first time, it goes through the process of upserting the vector embeddings. But when you ask a question the second time — say, "what is the response time of Flowise" — it will not go through the whole upsert process again, because the data has already been upserted. We store a hash of the whole flow, so it doesn't try to upsert again and again. As you can see, the second question returns its answer very quickly; only the first question goes through the whole loop of upserting the vector embeddings.
What types of agents are available? Can you support custom agents?
Custom agents are not possible at the moment, but we've had a lot of demand for that, and we're actually working on letting you create your own custom agents and chains in the next release.
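The skip-reupsert behaviour Henry describes can be sketched as: hash the flow's upsert-relevant configuration, and only re-run the expensive upsert when the hash changes. This is an illustration of the idea, not Flowise's actual code:

```python
import hashlib
import json

_last_hash = None  # in Flowise this would be persisted, not a module global


def flow_hash(config: dict) -> str:
    """Stable hash of the flow configuration (sorted keys for determinism)."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()


def maybe_upsert(config: dict, upsert) -> bool:
    """Run `upsert` only when the flow config changed; return True if it ran."""
    global _last_hash
    h = flow_hash(config)
    if h == _last_hash:
        return False  # same flow as last time: skip the expensive upsert
    upsert(config)
    _last_hash = h
    return True


calls = []
cfg = {"url": "https://flowiseai.com", "chunk_size": 1000}
first = maybe_upsert(cfg, calls.append)   # runs the upsert
second = maybe_upsert(cfg, calls.append)  # skipped: hash unchanged
```

Changing any part of the config (a new URL, a different chunk size) changes the hash, which is exactly when a re-upsert is actually needed.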
But right now, these are the prebuilt agents available to you. Yeah.
Okay, so we can check them in the UI — but you'll be able to create your own custom ones too. Okay.
Yeah, exactly.
And I actually have a question myself. Looking at it, I think I know the answer, but when you're talking about the embeddings — you're just using an API to reach out, right? You're not actually hosting it?
No.
Cool.
Right — and if you want to host your own embeddings, you can also do so: you can use the LocalAI embeddings node. That lets you specify your own hosted embeddings models, though obviously you have to spin up your own Docker instance of LocalAI to host the embedding models there. For the demo, we're just using the OpenAI API.
Let's stay on that area a little, because Vitale asked: can you customize the chunk sizes? I thought I saw something —
Yeah, you can customize that. There it is.
Cool — so you can customize it. Hopefully that answers your question, Vitale. And Bruno asks a really great question: how about evaluation?
Yeah, good question.
Let me go back to the flow that was working — this one. Here you can see the Analyse Chatflow option, where you have options to use different providers. A lot of people know LangSmith, a product by LangChain that lets you see your different traces, and you also have the option to use Langfuse. Why don't we go through how it works? If you want to analyse a chat flow, you just click Create New Credential — obviously you need to get an API key from LangSmith. Not sure if you've seen it before, but this is LangSmith.
Hopefully my internet holds up — it's slow. Sorry, my line was unstable; hopefully you can hear me now. So you just specify your credentials, and then you can specify a project name.
Let's say "milvus-demo". Click Save, and now if you ask a question again — like "how to install Flowise" — you'll be able to see the full trace, the step-by-step of the whole RAG stack. If you go to the project in LangSmith, you can see the new project that was created, and if you click into the chain, you can see the step-by-step process flowing from start to end. So that's one way to analyse your flow and evaluate which variant works better. We also have an environment variable that turns on debug logging for your Flowise instance. So if I deploy to Render, for example, I can see all the different steps that produced the final answer in my Flowise application.
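For a plain LangChain application (outside Flowise's credential UI), LangSmith tracing is typically switched on through environment variables. The API key below is a placeholder, and the exact variable names may differ across LangChain versions — check the LangSmith docs for your version:

```python
import os

# Assumed LangSmith environment variables for LangChain tracing.
# The key is a placeholder; the project name groups traces in the UI.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "ls-...your-key..."
os.environ["LANGCHAIN_PROJECT"] = "milvus-demo"

# Any LangChain runs from this point on are traced into the
# "milvus-demo" project in LangSmith.
```

Flowise wires up the equivalent of this for you when you attach LangSmith credentials to a chat flow.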
You just need to turn on the debug flag. So there are different ways to evaluate your Flowise flow; it depends which one you want to go with.
Just to make sure everyone is up to speed on that: what is evaluation, and what were you actually doing there? Why did you need to look at what was flowing through?
Right — sometimes when you ask a question, the answer that comes back might be incorrect, or completely off the rails, and you want to know why. You want to know what was executed under the hood.
That's why you need traces — logs like these — to see what information has been fed into your LLMs and which documents were retrieved from Milvus for that question. This is why we need evaluation: to see and compare performance, and to really understand how and why. These tools are here to answer your why and how questions. As you can see, when someone asks "how to install Flowise", here are the source documents: it's actually using this piece of information to return the text "Flowise can be installed by visiting our website and following the instructions provided", and so on. You want to know which piece of information was retrieved to construct the final response.
Yeah — without this, you're kind of running blind, right? You don't know for certain what it's retrieving.
And what's actually going on.
Exactly — that's the whole idea: you have the traces, and different observability platforms, like the ones I showed, that let you see it step by step.
What about adding citations in the chat responses? Then even your users can have confidence that this is actually a legitimate response. Can you show a demo of that?
One way — is this what you mean? — is to return the source documents, so you can see where each document was retrieved from. We actually have that already, but it's limited to only a few chains and agents.
So here you can see that the answer actually comes from this particular content — and in this case they're the same.
Yeah, because in this particular example you were pulling from a text file; in the other example, pulling from your website, it would actually list the page it pulled from, right?
Exactly.
Yeah. Okay, let me try to get this working again. We can easily duplicate the flow, and then I'll replace the document source with the Cheerio web scraper node again instead of this one document. Put in the link over here and save. I'll stick to the same endpoint and the same collection name.
Okay, let's see if everything works. Hmm. Okay, let's try again — I think everything is set up the same way.
I think we may be running into the same problem again. Hmm. Maybe there's a bug somewhere; we'll need to go back and fix that.
So, yeah.
It's the demo gods — don't worry, the demo gods are always cruel.
It was literally working — I checked three or four times before the webinar and it was working, and then when it comes to the actual demo, it doesn't.
So we have another question from Alex — two questions, actually. His first question is: is this AI chat only available from this particular web UI, or how can he make it available on a website, or as a web service?
Right — we have this little button over here; when you click it, you have different ways to use the flow.
One way is to use it as an embedded chat: just copy this code and paste it into an HTML page, and you'll have this little chat bot in your application. That's one way of using it. There are also different configurations you can specify: your colors, your size, your text, different things.
Nice.
That's one way. You can also import it into your React applications — it's the same concept as the embedded chat, with the same configuration options. The other way is to use the API.
As I was showing, you can use it with the Python API, JavaScript, or simply a curl call. And the best part is that you have the flexibility of specifying all the different configurations within the flow: as you can see, the temperature, the model name, even your API key can be overridden in your API call. That's the other way to use it. And the final way: if you want a very quick chat bot to share with people, you can click Make Public and send the link to anybody; they'll be able to open the whole chat bot and interact with the flow you've created.
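A minimal sketch of calling a deployed flow over HTTP. The host and chatflow ID are placeholders; the `/api/v1/prediction/<id>` route and the `overrideConfig` field follow Flowise's documented API, but treat the details as assumptions and check the docs for your version:

```python
import json
import urllib.request


def build_payload(question: str, temperature=None) -> dict:
    """Build a Flowise prediction payload; overrideConfig is optional."""
    payload = {"question": question}
    if temperature is not None:
        payload["overrideConfig"] = {"temperature": temperature}
    return payload


def ask(base_url: str, chatflow_id: str, question: str, **overrides) -> str:
    """POST a question to a deployed flow (placeholder host and ID)."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/prediction/{chatflow_id}",
        data=json.dumps(build_payload(question, **overrides)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call when invoked
        return json.load(resp).get("text", "")


payload = build_payload("Who are the co-founders of Flowise?", temperature=0.2)
```

The `overrideConfig` block is what Henry means by overriding temperature, model name, or API key per request instead of editing the flow.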
And just like in the UI, the backend executes the same flow — it's the same idea as asking a question over here. So again, there are many different ways you can use it.
Okay, the next question Alex has is about scale: can it handle a thousand queries per second, versus a hundred incoming requests?
Right — for that we have this script over here using Artillery. Artillery is a well-known load-testing framework that lets you test how many requests your API can handle.
After you've deployed your Flowise instance, you just specify your instance and your API call, and then you can test how many concurrent requests you can handle per second. We've done the testing: 10,000 requests per second is no problem. But again, it depends on the instance you're deploying on.
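The shape of such a load test can be sketched with the standard library — fire N requests with bounded concurrency and measure wall-clock throughput. Here the request function is injected, so a stub stands in for a real HTTP call to the flow:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def load_test(request_fn, n_requests: int, concurrency: int = 10):
    """Run request_fn n_requests times with bounded concurrency.

    Returns (results, requests_per_second)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: request_fn(), range(n_requests)))
    elapsed = time.perf_counter() - start
    return results, n_requests / elapsed


# Stub standing in for an HTTP call to a deployed flow.
def fake_request():
    return "ok"


results, rps = load_test(fake_request, n_requests=50, concurrency=8)
```

Artillery does the same thing with far better reporting (latency percentiles, ramp-up phases), which is why it's the right tool for a real test.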
It also depends on the rate limits of the API calls you're making under the hood — to your models, to the embeddings, and to Milvus. So I'd say the limitation isn't in Flowise, but in all the APIs you're calling under the hood: the ChatGPT API, the Milvus API, the embeddings API.
You and I talked a little bit about GPTCache, the semantic caching layer we have as an open-source project. Can users add that to this as well?
We actually have a pull request in progress for that.
Per that request, it's coming in the next release — we have GPTCache coming. We're working on that, and soon people will be able to use the cache for embeddings and also for chat models.
Awesome.
Hopefully, yeah.
Awesome. So Alex, that should also help with performance. And at Zilliz we built another open-source project called GPTCache, and it was born out of a mistake, basically: we were building a demo application called OSSChat (osschat.io) — here it is. And of course we got a really big bill from OpenAI, because in dev we kept asking the same question. Then we ran into some performance issues with OpenAI, so we decided to build a caching layer, and then we realized this was probably going to be a problem everyone faces. So we open-sourced it, and it actually works with a number of different vector stores, not just Milvus and Zilliz. I'm excited you're going to be adding that, Henry — that's pretty cool.
We get a lot of requests for it as well.
Cool. Mistakes are very valuable sometimes, right? I do want to go back to a question that Bruno had, because I actually have this question as well.
He wants to go back to talking about agents, and he asks you to show how to specify tools. And I agree — maybe go a little slower on the agent portion and show us how you can select it and what the configuration options are.
Cool, thanks. Perfect. So in Flowise, you have the option to create your own custom tools.
A tool is something designed for agents to use. For example, in the marketplace we have different tools: add a contact to HubSpot, create an Airtable record, different tools like that. Under the hood, each of these tools is just a JavaScript function calling an API, and that's it. In this case it's calling the Airtable API to add a record to your Airtable database. You have send-a-Slack-message, send-a-Discord-message, and different things, so you can get an idea of how each of these tools works by looking at the examples we have here, and then iterate on your own. One thing I'd like to highlight here is the output schema.
So maybe it is better for me to show a flow combined with a tool. Let me find one... an agent... okay, maybe not this one. The OpenAI conversational agent, okay. Actually, while I'm here, this is good to show as well.
You can actually take the chain we created just now, the Milvus question-answering one, and use it as a tool for an agent; you just need to create a custom tool for that. But let me show you how it works. Here we have a tools section, and some of the prebuilt tools allow us to do specific things: executing a custom Google search, browsing the web, writing to a file, connecting to Zapier, different things.
But I was showing how you can use a custom tool. Here we have a custom tool node, and you'll be able to select the tools. Obviously I don't have any yet, but you can select any tools that you'd like to use. So here I have a Discord tool, and I'll be able to use the template and add it to my own tools section. Here is the tool, and if you look at the function, what it is actually doing is just calling the webhook URL with the message, which is the content from the output schema. So let me talk a little bit about the output schema. As you can see from the description, it describes what the output should be, in JSON format.
So let's say I want to use the tool I just connected; I may have to save it and then refresh so that it loads the tool, the "send message to Discord channel" one. Right now I have specified the tool itself. First I need to specify the OpenAI key. Then I say, "Hello, my name is Fox," and it is able to answer me. Now, if I say, "Can you send a message to Discord?", it asks me to specify the message that I would like to send to Discord. What we actually want is to capture the user's answer so that that message gets sent to Discord. But how can we do that? That's why the tool section has the output schema. It is designed to capture information from the user, so it can pick out the content.
So let's say I type "say hello". It will capture that message, "hello" or whatever, and pass it as the content. Then, using the API, it will send the actual message to your Discord.

So where did you set it up to actually send it to a particular channel?

That's the webhook URL. I don't have the Discord app installed on this laptop, but when you go to Discord you can get a webhook URL; you just specify the webhook URL here and that's it. So this field is your webhook URL, and this part captures the information from the chat over here.
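The Discord demo follows the same pattern: the output schema declares a `content` field for the agent to extract from the user's message, and the tool POSTs it to the webhook URL. A minimal sketch, where the schema wording and the function name are mine rather than the exact template:

```javascript
// Output schema: tells the function-calling agent what to extract from the
// user's message. (Illustrative shape, matching the demo's "content" field.)
const outputSchema = {
  content: {
    type: 'string',
    description: 'the message the user wants to send to Discord',
  },
};

// Tool body: Discord webhooks accept a JSON payload with a "content" key.
function buildWebhookPayload(content) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content }),
  };
}

// In the real tool you would then call:
//   await fetch(webhookUrl, buildWebhookPayload(content));
```

The schema is what lets the agent turn "can you send a message to Discord saying hello" into a structured `content` value instead of asking a follow-up question.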
So that's the idea of custom tools. As you can imagine, you can basically use them to call any API. If you have an internal API you want to use in combination with agents, together with the ability to capture information from users, that's what custom tools are for. This is a very simple, basic example showing how you can do this kind of thing. And again, this is specific to OpenAI function agents. We have different agents too: AutoGPT, BabyAGI, and various agents from LangChain as well.
Again, for most of these agents, if you want to know how they work and what the use cases are, you can just go to the LangChain docs and look at the agent types; here are some of the agents that we have surfaced on top. The conversational agent is exactly the same as the one over here: it is designed to use tools, but also to hold a conversation between the user and the AI.

I have another question for you. Gil said, great demo; is Flowise open source?

Exactly, a hundred percent.
We'll always be open source. You can just go to GitHub, and you have the option to deploy your own Flowise instance. Also take a look at our documentation to see how you can deploy to whichever cloud service you like.

Cool.
So we just have five minutes left. I want to leave the lines open for any final questions, but since you made it look super easy: is there any advice on places where people can get themselves into a little bit of trouble, and what they should probably do to avoid that?

Yeah, I think it goes back to the evaluation point. This is why evaluation and observability platforms exist: they let you see what is being executed under the hood. Whenever you bump into any sort of problem, I always suggest using one of those observability tools to see what is actually being run. If you don't want to do that, you can also take a look at the stack trace and understand what is being executed step by step.
And then Mike asked, is the recording going to be available? Of course, Mike, yes, absolutely. We'll put the recording on zilliz.com and also on our YouTube channel, after we do a quick edit.

Perfect.
So I think I saw one typo on your website: the banner should say Milvus and not Chroma.

Uh, this one?

But other than that, really a great demo, and it looks great, super easy to use. Hopefully everybody on this webinar will try it out for yourself. And also, like I said in the beginning, it is Hacktoberfest; I'm sure Henry's got a number of issues that he'd love to get everybody's help with. Same with Milvus.
So if you can spare the bandwidth to help out, everyone in the open source community would love that.

There are 185 open issues, so feel free to take any one of them. There's always plenty to do. Yeah.
Awesome. Well, thank you so much, and thanks everybody for all the really great questions. We can't wait to see what you build with Flowise and with Milvus. Thanks, everybody.
Meet the Speaker
Join the session for live Q&A with the speaker
Henry Heng
Co-founder, Flowise
Ex-software engineer @ Fidelity Investments, MSc Computer Science in AI @ University of York.