Integrating Multimodal AI in Your Apps with Floom
Webinar
About the Session
Join our webinar to explore the integration of multimodal AI functions in applications with insights from the co-founder and CEO of Floom AI. We'll discuss technical strategies for integrating and deploying diverse functionalities for AI projects.
What you'll learn:
- Techniques for developing multimodal AI features
- How to enhance performance and stay secure and compliant for AI functions
- Why Milvus stands out for complex queries and scalability
Today, I am pleased to introduce the session, Integrating Multimodal AI in Your Apps with Floom, and our guest speaker, Max. Max brings over two decades of experience in coding and hacking to his career, marked by substantial expertise in software security and architecture. He is a recognized patent author and has held significant roles across the tech industry, including CTO at Amdocs, lead at CyberArk, and CEO at Cards. These roles underscore his depth of knowledge and leadership in technology innovation. Welcome, Max.
Today we want to talk about your company, Floom. Tell us a little bit about Floom and what the business does. Meanwhile, let me stop sharing, since I've heard you've got a deck to share. Yeah. So first of all, thank you for having me, and thank you everyone for joining.
And thank you for setting this up, Steffi. I'm sharing my screen now; hopefully you can see it. Yep. So basically, Floom is all about helping developers integrate multimodal AI features into their apps.
This is what Floom is about. It's an open-source solution, so it's being built by the Floom team and by the community, by different experts coming from AI, development, architecture, and so on. It's a fairly new project; we just launched it three months ago, but we've been building it for about seven to eight months now.
And that's basically Floom. So, this is me, as you mentioned. I have some experience with software development and architecture, which I've been doing for 20-plus years now. I was also chief architect for several companies and started two different startups, both of which were acquired; Cards was acquired by Zepo Apps. And my last role was CTO of cyber for Amdocs, which is quite a big multinational company with 30,000 employees.
And that's basically about me. Now, with Floom, we're trying to solve what I'd say is one of the biggest problems nowadays. A year-plus ago, when OpenAI released ChatGPT, everyone waited anxiously for their APIs, because people understood it has huge potential, not only for simple examples like text generation and code generation, but for identifying patterns in text, videos, images, and sound (speech, audio, and so on), extracting different details, generating, obviously, and so much more. So we look at AI basically as a very scalable, very cheap kind of human being that we can invoke with APIs.
This is how we think about AI. And what happened is that many different approaches came up, with developers trying to integrate AI. Most of them are very much focused on code libraries, such as LangChain, LlamaIndex, Semantic Kernel, and so on. But we immediately felt that there is a big gap there, a big problem or big problems. We think that the approach much more suitable for enterprises has to have some kind of centralized component that helps manage all the interactions between the applications and features and the many different AI models, wherever they are hosted.
So this is why we were thinking: there's a huge gap. We have AI potential on one side, and we have many people who want to integrate AI, but unfortunately most of them are not really able to do so in a production-grade manner, especially at enterprise grade. This is what we're trying to solve. We felt there was a huge problem, and we sat down for many weeks trying to think about different solutions to these problems. And this is how Floom was invented.
So, the underlying causes for this gap: after interviewing more than a hundred employees from companies ranging from startups to enterprises, we identified four big issues. The first one is that there's no enterprise-grade, robust development or deployment infrastructure. We have different code libraries, but they lack centralized management with logging, caching, auditing, monitoring, privacy, security, safety, and so on. In order to have a fully working, fully production-grade pipeline, you'll have to go on a shopping spree on GitHub and download many different libraries. This creates a very volatile experience from the perspective of enterprises.
So our product is very much focused on enterprise development, and this is basically what we're trying to solve. The second problem, which I would say is even bigger, is the technical barrier. Nowadays, when we speak to an average enterprise, we hear that gen AI projects are mainly focused on the data science team or the innovation team, but they leave out the vast majority of the workforce: developers who have zero prior knowledge of AI development. And this is also a thing we're trying to change. We want to make AI much more accessible.
So this means moving AI from very specialized and very expensive groups inside specialized companies that are investing in AI into the hands of the vast majority of feature integrators, which are developers. The third problem is that there's no centralized management right now. Besides all the throughput that goes through the centralized management, as I mentioned before, for caching, fail-safe mechanisms, privacy, and security: when we integrate AI nowadays with the existing technologies, we write a lot of code. And this becomes a very big code base really, really fast.
It's built on two different things. The first is the AI logic itself, built with LangChain or whatever. The second is all the surrounding infrastructure that developers suddenly realize they are missing when they try to integrate it into a very big B2C or B2B product with very specialized production needs. And this is only for one integration, so it becomes a headache really fast. Just imagine having many different AI integrations, which we definitely think is the future.
In the future, people will definitely integrate more AI features, and then it becomes an unmanageable problem really, really fast. And the last problem, we think, is the inspiration deficit. Right now, when we speak to an average product manager, what we hear is that they're mostly focused on code generation or text generation, but they leave out all the other hundreds of possibilities that gen AI has to offer. That's basically what we think the problem is. So, the existing AI integration flow, the average flow I would say, is that an app in an average enterprise usually has to be Python because, you know, data scientists love using Python.
Although most production-grade systems are not based on Python, for different reasons. So then we have some kind of connectivity proxy that we need to build to work with the code libraries. Then we have code libraries such as LangChain and LlamaIndex and many others, where we build all the logic. And then we have all the logic code: usually data ingestion or RAG mechanisms, embeddings, and so on.
That's taken care of, obviously, by Milvus, and also by Redis for caching mechanisms, Neo4j for graph solutions, and so on. And then we have this fairly new concept that we call an AI gateway, which provides some infrastructure surrounding this connectivity: usually logging, caching, routing, security, and so on. So you'll have one API and you'll be able to connect to different AI models without changing the API schema or the SDK you are using. That's basically how we look at things nowadays and how we think the average AI integration looks right now.
And we thought about this really, really hard, and we came up with this. It's an entirely new category, and it's called Floom, the AI orchestrator. An orchestrator basically consists of two parts. You have the AI gateway inside, which is very similar to different solutions we have nowadays, such as Portkey and so on. And on the other hand, you have the actual orchestrator that helps build the actual logic inside the AI pipeline.
We combined the two, the AI orchestrator and the AI gateway, into one solution, which is open source, by the way. It's basically logic that handles all the AI logic behind the scenes, inside a Docker container that you can place anywhere, because, you know, it's Docker: you can place it on a private cloud, a public cloud, or on your developer's PC, which is very close to the developer's IDE. You place it in between your application and the AI model. It's completely agnostic to any AI model; it's fully multimodal.
We also offer this as a cloud service, as a SaaS solution, basically. So you have two options. If you're an enterprise under different regulations and you want to run everything internally, great: you can use our Docker container. Or if you're a startup, or you just want to try Floom out without installing anything, you can use Floom Cloud. So, what does it orchestrate? Basically, it orchestrates something that we call AI functions.
AI functions: basically, we looked at the two different mechanisms of gen AI as we know it today. We have direct inference, which is the simple one: a very simple API. And then you have agents, which trigger this thought concept inside an LLM. What we saw is that in the vast majority of cases, people are using both as a very simple I/O function, input and output, so we decided to distill both of them into one name, one object, which we call an AI function. It resonates with developers because it's a simple function, but now it has AI in it.
These are just some very simple examples from the very large data set we're building right now, with hundreds of different AI functions, things that you can invoke in your code within seconds. Just for example, you can extract physical addresses from text and get back formatted JSON, which translates into an object you can actually access in your code, whatever framework you use; by the way, it doesn't have to be Python. Then you have generate speech from text, or the opposite. You can switch between AI models in a matter of seconds, try out different models and different configurations, turn on caching, smart caching, use privacy and security mechanisms, and so on.
Another example is classifying the PG rating of videos, or detecting objects in images, detecting emotions in text, asking questions about data, and so on. So we try to think from the product manager's perspective: why would product managers, or how will they, benefit from gen AI in their product, other than simple text and code generation? So this is how we define an AI function in Floom. Just forget about the code libraries and forget about the learning curve. It's a very simple, readable YAML file that any developer with zero prior knowledge of AI will be able to fill out in a matter of minutes.
So let me just go through this YAML file. What you see here is very similar to Kubernetes, by the way. We specify the kind of object we are defining, which is an AI pipeline, then we specify the name, then we specify the model. You can specify several models: if one model doesn't work and you want to fall back to another one, you can easily specify a new model and a set of new rules, and make sure that your AI feature will work regardless of the status of the first, or default, model that you want to use.
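As a rough sketch of what that top section might look like, with field names inferred from this walkthrough rather than taken from Floom's documented schema:

```yaml
# Hypothetical Floom pipeline header; names and values are illustrative only.
kind: pipeline            # the kind of object being defined: an AI pipeline
name: docs-qa             # the AI function's name, referenced later by the SDK
model:
  - name: gpt-3.5-turbo   # default model, tried first
  - name: claude-2        # hypothetical fallback if the default is unavailable
```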
And it's completely agnostic, too, as you see here. This is a URI, and the URI points to a package, or a plugin as we call it nowadays. It's basically code logic that you can inject into Floom. You can build it yourself, or you can download it from our marketplace, which we'll launch really soon. And you can easily debug all the internals and everything else.
So you'll be able to easily connect with any AI model within minutes, even if we don't support it by default. This is the idea of packaging this AI pipeline with many different internal packages that help you fully customize the procedure from A to Z, with whatever logic you want to use. It's similar in a way to CI/CD and Kubernetes: CI/CD because it's a very distinct flow that you can inject your own plugins into, and Kubernetes because you are now remotely managing your assets, and in this case the asset is an AI pipeline. So, this is what a prompt template looks like. It's really very similar to other prompt templates.
It basically compiles the system prompt and the user prompt. You can use examples that other people uploaded to our marketplace, or you can write your own if you need to. And this is a very important part: this is what we call context. Context is actually what other people call RAG.
But we think RAG is a terrible name, so that's why we use context. With context, you specify the data that you wish to ingest into your AI feature, the knowledge that you want to transfer to your AI model. So it's basically RAG. We start with a very simplistic way of showing a very simple and straightforward RAG setup.
Here you can see we are using a plugin that handles PDF files, and we're using one PDF file. I'll show you some more complex examples in the next slide, but for now, let's move on to the next part. And here is how we specify, or configure, the response. We want to make sure, as developers rather than AI experts, that the response is formatted by a plugin that we call the formatter.
This plugin enables developers to pre-configure the exact response they're expecting. One of the biggest problems with AI is volatility. People are often afraid of actually deploying this into production, because we don't know what kind of answers we're going to get back from the AI, and this obviously affects the end users using our platforms. So with Floom, we tried to minimize this risk as much as possible by providing this formatter, which helps you make sure that the answers you're getting back from the AI are exactly what you're looking for.
So here we specify text, we specify the language, and we specify the maximum sentences. There are many more variables you can use here. It's also multimodal, and you can write your own plugin to handle different scenarios. And this is a feature that many people who work with Floom requested, I would say the number one requested feature: validation. For an average enterprise, one of the biggest fears is that people and consumers will use the AI for PII exfiltration (PII is personally identifiable information), or will somehow use prompt injection to fool the AI into giving back credit cards or whatever, or it's just, you know, filtering bad words that we don't want our brand to be exposed to.
So our filter here is also a plugin, so anyone can write their own. We're actually working with two different LLM security companies that are already writing their own plugins for Floom. As a developer, you'll be able to easily set whatever validation plugin you want to use, set whatever variables or arguments it expects, and then you have the validation plugin working. And this is also a very popular feature.
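Pulling the prompt, context, response, and validation pieces just described into one hedged sketch (all field and plugin names here are assumptions based on the talk, not Floom's documented schema):

```yaml
# Illustrative continuation of the pipeline file; the schema is a guess.
prompt:
  system: "You answer questions about the ingested documents."
  user: "{question}"           # user prompt compiled per request
context:                       # Floom's name for what others call RAG
  - package: pdf               # plugin that handles PDF files
    path: ./docs/bank.pdf      # a single PDF, as in the simple example
response:
  format:
    - package: formatter       # pre-configures the exact expected response
      type: text
      language: en-us
      max-sentences: 3
  validation:
    - package: pii-filter      # hypothetical plugin blocking PII exfiltration
      block: [credit-card, email]
```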
Then you have the global section here, and global basically means packages, or logic, that apply to the entire AI pipeline, not only to the response or the prompt, but to everything. What you see here is a cost management plugin, which, as I said, is very popular with people who are trying out Floom. You can set the maximum tokens that a user, or the pipeline, can use per day or per month, and so on. It really eases this very big problem that people are currently trying to solve by themselves: writing their own AI gateways and managing their own costs by building their own cost mechanisms. And the last one is caching.
So, instead of writing your own caching mechanisms, just write two lines of code in the file, and you'll have a fully working caching mechanism with different arguments. You can use your own, download one from the marketplace, and so on. The next thing you need to do, when you finish editing this YAML file, is to actually deploy it. It's very similar to GitHub or Kubernetes in a way, because you set your YAML file settings and then you deploy it to your Floom instance.
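And a sketch of the global section covering the cost-management and caching plugins just described (again, hypothetical field names):

```yaml
# Illustrative 'global' section: logic applied to the entire pipeline.
global:
  - package: cost-management
    max-tokens-per-day: 100000   # cap per user or per pipeline, per day/month
  - package: cache               # roughly the 'two lines' mentioned above
    expiry: 1h
```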
The way to do it is really, really simple. You install our CLI, and then you just write floom deploy, specify the endpoint, which could be Floom Cloud or your own Floom instance, whatever it is, and then you specify the YAML file you just edited. And that's basically it.
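Something along these lines; the exact command and flag names are an assumption, not verified against the Floom CLI:

```bash
# Deploy the edited pipeline file to a Floom instance (flags hypothetical).
floom deploy --endpoint https://cloud.floom.ai docs-qa.yaml
```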
You can deploy several YAML files in one batch. And that's basically how you deploy AI functions. So, if we drill down a little bit into RAG, or context as we now call it: we're very aware that there are many different use cases, and it should be highly customizable, and this is exactly what we're working on today. We're trying to offer a very customizable and yet easy-to-maintain, debuggable experience for developers who have zero prior knowledge of AI.
And this is basically what we came up with. What you see here, in the last section of context in this specific case, is actually ingesting all the PDF files in this specific path, and it uses different configurations of search patterns. In this specific case, we use L2 distance, which is very popular in ML. And here we also specify the top results. Another example: we're using a different plugin, this time for AWS S3. You just add your own credentials and set up the path.
Then, in this specific example, we use cosine similarity, and we specify the top K, which we call top results. This is basically what RAG looks like in Floom. We're working really hard to make it even more customizable, and everything that is RAG-related or embeddings-related, which is fully multimodal now in Floom, except the video part, which we'll release in no time, in about two or three weeks, is based on Milvus.
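A hedged sketch of the two context variants just described, with field names inferred from the talk rather than from Floom's docs:

```yaml
# Variant 1: ingest every PDF under a local path, search with L2 distance.
context:
  - package: pdf
    path: ./documents/                # all PDF files in this path are ingested
    similarity: L2                    # Euclidean (L2) distance in Milvus
    top-results: 5                    # number of nearest chunks to retrieve
---
# Variant 2: ingest from an AWS S3 bucket, search with cosine similarity.
context:
  - package: aws-s3
    credentials: ${AWS_CREDENTIALS}   # your own credentials
    path: s3://my-bucket/reports/
    similarity: cosine
    top-results: 3                    # top-K, which Floom calls top results
```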
That's the way we work internally in Floom; both in Floom Cloud, by the way, and in the Floom Docker setup, we supply Milvus alongside in our Docker Compose file. Now, the last thing a developer needs to do: so, we started with the YAML file, we deployed it, and now it's completely managed and orchestrated by Floom. The last thing the developer needs to do is just use our SDK. Remember what I said before about Python: not everyone works in Python.
Actually, the vast majority of production-grade products, you know, high-scale and high-performance ones, are not based on Python. We wanted to spare people the time of building these proxies that help them communicate with their Python code or services. This is why we released SDKs for the seven most popular languages, so most likely you'll find your language or framework among our SDKs. And the only thing you need to do is write five lines of code: specify the AI function that you just deployed to Floom, and then specify the prompt.
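A minimal sketch of what those five lines might look like in Python; the client class, method, and parameter names are assumptions based on the description, not the verified SDK surface:

```python
# Hypothetical usage of the Floom Python SDK; all names are illustrative.
from floom import FloomClient

client = FloomClient(
    endpoint="https://cloud.floom.ai",  # or your own Floom instance
    api_key="YOUR_API_KEY",             # only required for Floom Cloud
)
response = client.run(
    function_id="docs-qa",              # the AI function deployed earlier
    prompt="Who are the team members and what are their roles?",
)
print(response.value)                   # the pre-formatted result
```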
Now, as I said, it's fully multimodal. You can send in several prompts, images, byte arrays, whatever you want to send to the AI model. This is a demo of Floom, a recorded demo. I'll run it, and just before running it, I'll walk you through what you're seeing here.
On the right side, you can see the internals of Floom, with verbose mode turned on, so you can see whatever is happening behind the scenes, which will be familiar to AI-experienced people. It generates embeddings and loads them into Milvus, then runs cosine similarity, I think in this specific case, and gets all the data back into the LLM.
And on the left side, what you see is the code part. We have different AI functions here, three different examples. The first one is the docs example, which is a really simple one.
We just embed a specific document, a PDF file. It happens to be from an Italian bank; we just scraped the website, ran it through embedding, and then we ask it some questions. Then you have the second YAML file, which represents a different AI function that actually creates images. And the third one is text-to-speech.
So we have three different pipelines. And here, at the bottom of your screen, you'll see our CLI in action. So let me just run the demo. Okay, here is the simple, straightforward example, and here's the PDF file that we run through our embedding framework.
Inside Floom, it's just public knowledge, straight from a website in Europe. And here you have the actual YAML file that represents what this AI function will look like in Floom. As you can see, it's extremely simple and very readable. You just specify the model, the prompt, and the response, and that's it: you have a fully working AI pipeline, or AI function, possibly alongside caching and security plugins and so on if you want.
On the right side, you can see what happens behind the scenes: we store the PDF file, parse it, and extract the data. We then use some of the pipeline model connectors to generate the embeddings; in this specific case we're using OpenAI. Then, in the next step, we push the embeddings to Milvus. Here, they've already been pushed into Milvus.
So we created an index, created a collection, loaded it to memory, and inserted all the vectors that are relevant for this specific PDF. As I said, it's very similar to CI/CD; it's kind of an event workflow, or even a messaging system, I would say. We definitely believe, or at least our experience teaches us, that in the vast majority of cases, AI connectivity or AI integration almost always follows a very specific flow: data ingestion, then building the prompt, building the response, working through different plugins, and so on. So here, on pipeline commit, we can see an example of an internal event in Floom.
And this is all done after the floom deploy we've seen here. Now we have a very simple test script, in Python specifically, that will run this specific prompt: who are the team members and their roles at Neva SGR, which is a very big bank in Italy. Then we have this helper tool here; we enter the first choice, the RAG pipeline, and the work happens behind the scenes in Floom.
And then you have it: the full response, as we requested from the AI, within seconds. We didn't need to know anything about AI, anything about embeddings or RAG and so on, but we have a fully working result that represents the capabilities of gen AI in this specific case. Now we'll go to the image pipeline, or what we actually call a function nowadays, not a pipeline. In this case, you can see it's an even simpler YAML file.
Just imagine editing this one YAML file, deploying it to your Floom instance, and that's it: you have a fully working AI feature in your product. So now we've deployed it, Floom has ingested this AI function, and we're testing it. We're actually running this from Python again. This is the only code that you need to know about when you run Floom.
You use the Floom client, you specify the endpoint and your API key (that's if this is Floom Cloud; if not, you don't even need an API key), then you specify the function ID and the actual prompt. The prompt here is: a group of investors thinking about investing in AI infrastructure. So we run it now; I think it works with OpenAI's DALL-E in this case. It runs through Floom, and you'll see the answer in a second.
It's kind of a very bad image that DALL-E produced, but it's working. It really depends on the quality of the AI model. But the cool thing with Floom is that if you don't like the DALL-E quality, you can switch within seconds to any other AI model, without changing even one line of code in your actual application code. And this is just another example of a different AI function, text-to-speech, which you can actually run. We deployed it.
Now, this is the text that we're sending to this prompt, and in a second we'll have the actual MP3 file, I think in this case, or WAV file, actually playing the text from the AI model. The idea here is that we try to simplify and generalize AI accessibility for developers. As you can see, we have three very similar-looking YAML files that represent AI functions. They're very similar on one hand, but on the other hand they do completely different things, without you having to have any prior knowledge of AI.
So this is basically what Floom is able to do nowadays. The reason we chose Milvus: we tried and benchmarked many different vector databases, but we immediately fell in love with Milvus because of the documentation and because of the latency we got, which was by far better than the others. It just looked professional, it felt professional, and it acted professional. That's why we immediately knew Milvus was going to be our go-to choice for a vector database. The biggest benefit of Milvus, I would say, especially in enterprise cases, is that it's predictable.
In the enterprise, when you're building software, no matter the scale, you're looking for predictability. You want the software to deliver very good results, not only in terms of latency and speed, but also in terms of quality and the absence of crashes, exceptions, errors, and so on. And this is what we found with Milvus: we ran many different benchmarks on Milvus and all the other databases, and we immediately felt Milvus was the right choice. But how exactly does Milvus enable Floom behind the scenes? First of all, we rely completely on Milvus for RAG and general embedding search. Be it cosine similarity, L2, IP, whatever type of search, we always rely on Milvus. We use it multimodally, except for video, which we're integrating, or experimenting with, right now. And it's the default vector database for the Floom Docker setup.
So when you install Floom with our Docker Compose file, you'll immediately get Milvus installed as well. And it's the default vector database for the cloud: we use Zilliz Cloud behind the scenes, because it offers scaling, high throughput, and good performance, which is what we're looking for, and especially predictability and reliability. As I said, we're experimenting right now with multimodal video embedding, cosine similarity search, and so on. So, the key lessons we learned from working with many different people, more than a hundred I would say, who are trying to integrate gen AI, thinking about gen AI integration, or have already integrated gen AI in some sense: most companies keep gen AI at the POC stage right now, which creates this huge gap, as I mentioned before, a gap we're trying to solve with Floom, for many different reasons.
But I would say the list of reasons I gave before is at least how we feel about this specific subject. Another big thing: when I speak to the average product manager at different companies and ask how they think about benefiting from gen AI, I usually hear the most classic gen AI examples, text and code generation, and that's it. It's kind of a shame, because gen AI is capable of so much more. We already have hundreds of AI functions ready for use that will definitely benefit many different products, but it requires both inspiration and just experience with gen AI integration. We've also learned that generalizing gen AI is hard with our approach, which is obviously not a code library where you can basically write anything down.
But it's definitely feasible, and it's completely worth it. This is one of the biggest lessons we've learned, because of the feedback we get. And gen AI integration is currently reserved for gen AI enthusiasts, the data science group, and the innovation team, so, as I said before, it leaves out the vast majority of the workforce, the feature integrators, the developers themselves. And with the current development and deployment methodologies, gen AI is really not yet production grade in many cases.
So this is basically Floom. We have two slides about our roadmap, which we're very excited to showcase, because we're releasing this very soon, and it's already being used by different companies, mostly enterprises. The first thing we're working on, and I would say it's the hardest thing, is actually building this generalized AI definition markup language. It's a language built of three different stages, three different parts. The first one is definition.
We define inference and agents in one thing, one object that we call AI functions, in simple YAML files. Then we have the configuration, where in most cases we expect developers to just use the configuration, just edit it, and that's it. And then we have the orchestration part, which is very similar to CI/CD, because it defines the flow of information going through the logic code and also through the privacy and security plugins and so on. We package this entire thing into one tar.gz file that will be available on something we call the AI marketplace. And this is the second thing we'll showcase with our roadmap.
We're building a community-powered marketplace, and this marketplace showcases many different use cases of AI, not only giving inspiration to product managers, but also helping developers integrate AI within minutes. The only thing you need to do in this marketplace, if you see an AI feature or AI function you want to use in your product, for example, convert text to speech, is click on deploy, and within seconds you'll have a fully working, production-grade, enterprise-grade AI function that you can integrate into your product, and that you'll be able to debug, customize, and so on. The only thing left to do is use our CLI to install the specific AI function, as you see here. So, this is Floom. I think we have time for Q&A; I hope you enjoyed my presentation.
Thanks, Max. That was great. If you have any questions, you can just put them in using the Q&A button at the bottom of the screen. Okay, so we see question number one: what AI models do you support right now? So, as I mentioned before, Floom is extremely customizable. You can use the default AI connectors, or model connectors, that we provide in our marketplace as plugins.
Right now we support Anthropic, we support OpenAI, we support Ollama, and many others, which means we cover the vast majority of the market today. But if you have some very new, cutting-edge AI model, or one with a different API schema that we don't know about, then you can simply write your own plugin within minutes, and when you write your own plugin, you can insert it, debug it, and use the AI function as you would with any other AI model. So that's the answer. Okay, I see another question: can I run Floom on Ubuntu? Yes, definitely; you can run Floom anywhere.
That's the idea of working with a Docker container. We chose Docker containers mostly because enterprises are not willing to exfiltrate their sensitive data to external SaaS companies, especially not startups. We feel they need full control inside their own company, and this is why we're using a Docker container. So you can run it anywhere: on OpenShift, on a private or public cloud, or on a developer's PC, which is also something we've heard from developers.
They really want full and close control of the AI logic; that's why it also runs on their own laptops. I can see another question, from Armando: do you support agents and assistants? We will support agents in about two to three weeks, hopefully. One of the biggest ideas, or notions, we had when we were building Floom is that we distilled both agents and direct inference APIs into one object that we call an AI function.
Because agents, from the perspective of an AI expert, are completely different from inference, but from the perspective of a developer, it's basically an I/O operation. You send a question, maybe a very complex one, or an action to perform; the agent does whatever it needs to do internally, and then it comes back with the answer. So from the developer's perspective, AI inference is very much like AI agents. That's why we'll support agents really, really soon, and it'll be really fun to build them inside the YAML file, because it's so easy.
It's easy not only to build and understand; it's also really easy to debug, and to have this high-quality connectivity to AI. So I see another question here: do you have an SDK for C#, and can I use Floom with a REST API? Yes, we have SDKs for C#, Python, Node.js, Java, Rust, and Go right now; maybe one that I forgot, but we have them on our website.
And because this is a centralized solution, not a code library, it's really easy to create SDKs for it, because it's based on a very simple HTTP REST API. So if you don't have the SDK, you can just use the REST API, but, you know, just ask us for the SDK and we'll probably build it in a matter of days. So that's basically the answer. We've covered every question for now. Any additional questions for Max? Let's give it a few more minutes.
Yeah, great. I'll drink some water. So yeah, we'll wait for the questions. But, you know, what I would say about AI: I've been into computers for about 25 years now, developing in different frameworks, doing cyber stuff, and working on many different architectures at super global scale, and so on. With AI, it's not that OpenAI invented AI, but OpenAI kind of outed AI to the world, and they conveyed this message, and people immediately understood there's this explosive tool here that we have to harness somehow. And this is what we're trying to do with Floom. We really believe that we have to give this power to every developer out there.
So this is kind of the idea behind Floom. Yeah, so, okay. Yeah, go ahead. Sorry. No, I just wanted to say, I see there are no more questions right now.
And we're just getting started. If you have any feedback or any questions, just go to our website, our GitHub repository, or our LinkedIn profile, whatever; just ask, and join our community. I think it's a really fun process, building this. Awesome.
Max, thank you so much for your time, and we look forward to seeing what amazing things you build down the road. Thank you very much, Steffi, for setting this up. And thank you, everyone, for listening. And that's it; we'll see you later.
Thank you. Bye-Bye. Thank you.
Meet the Speaker
Join the session for live Q&A with the speaker
Max Brin
Co-Founder & CEO
Coding and hacking for 2+ decades. Patent Author. Expert in software, security and architecture. Previously CTO @ Amdocs, CEO @ Cards.