Webinar
Knowledge Graphs in RAG with WhyHow.AI
What will you learn?
In this talk, we'll offer a brief introduction to Graph RAG and how to integrate knowledge graphs into RAG systems, discussing the benefits, challenges, and solutions. Knowledge graphs provide a robust and flexible semantic representation of your data, enhancing the reliability, completeness, and accuracy of RAG systems. They help to reduce the opacity of RAG pipelines, increasing visibility, control, and determinism in retrieval workflows. When combined with vector databases, you get the best of both worlds: scalable storage of metadata-rich embeddings with a powerful, adaptable semantic layer.
At WhyHow, we develop tooling that enables developers to build, manage, and integrate knowledge graphs into RAG systems for more accurate and deterministic information retrieval. During this session, we'll share key findings, common patterns we've observed, and what we think the future holds for Graph RAG. We will also explore architectures and workflows showcasing how graphs provide an elegant solution for scalable and precise information retrieval.
Topics Covered
- Introduction to Graph RAG
- Common patterns in building, managing, and integrating knowledge graphs into RAG systems
- What the future holds for Graph RAG
Okay, so today I'm very pleased to introduce our session, Knowledge Graphs in RAG with WhyHow.AI, and our guest speaker, Chris Rec. Chris is the co-founder of WhyHow.AI. WhyHow helps developers build more accurate, deterministic RAG applications using the power of knowledge graphs. So without further ado, welcome, Chris.
Hey, thanks, John.
Nice to meet everybody. Like John said, I'm the co-founder of WhyHow.AI. I have a few slides to share, so I'm going to pull those up. All right, John, can you see my screen?
Yes, I can see it.
Awesome. Okay, great to meet everybody.
We're going to talk a little about how WhyHow focuses on leveraging knowledge graphs to augment RAG pipelines, to make RAG more deterministic, accurate, and controllable. Before we start, a bit about me. My background is a bit all over the place: software engineering, architecture, project management, product management, engineering management, and now founding a company. Most of my experience has been in the platform engineering and developer experience space, so I really like developer experiences that are intuitive, simple, and controllable; they just work.
My experience when I first started working with generative AI was pretty much anything but that. I share this because it's my motivating reason for wanting to work in this space. When I met my co-founders, Tom and Chia, they were the ones who helped me understand the power of knowledge graphs. What we're focused on now is bringing the power of knowledge graphs, and that controllable experience, to your RAG pipelines in a much simpler way. Like John said, we're building developer tooling that makes it really easy and fast to create and manage knowledge graphs and orchestrate them into your RAG pipelines, to make these generative AI experiences much more accurate, deterministic, and explainable.
A bit of background on how we got here. Like a lot of people on this call, we started out building RAG applications using a traditional, vanilla RAG approach. You take a raw document, say a PDF, split it into pages or chunks, extract the text, pre-process it, generate embeddings, and insert those into a vector store. Then you use natural language to retrieve chunks similar to the question you ask, and pass those chunks to the LLM to generate a response.
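For reference, here is a minimal sketch of that baseline pipeline using pymilvus and the OpenAI client; the collection name, chunk size, and model choices are illustrative assumptions, not anything specific to WhyHow's stack.

```python
# Minimal "vanilla RAG" sketch: chunk -> embed -> store -> retrieve -> generate.
from openai import OpenAI
from pymilvus import MilvusClient

oai = OpenAI()                           # reads OPENAI_API_KEY from the environment
milvus = MilvusClient("milvus_demo.db")  # local Milvus Lite database file

def embed(texts):
    resp = oai.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

# 1) Split a raw document into fixed-size chunks (naive splitter).
doc = open("document.txt").read()
chunks = [doc[i:i + 1000] for i in range(0, len(doc), 1000)]

# 2) Embed and insert into the vector store.
milvus.create_collection("chunks", dimension=1536)
milvus.insert("chunks", [
    {"id": i, "vector": v, "text": t}
    for i, (v, t) in enumerate(zip(embed(chunks), chunks))
])

# 3) Retrieve chunks similar to the question and pass them to the LLM.
question = "What does 'vehicular capacity' mean in this regulation?"
hits = milvus.search("chunks", data=embed([question]), limit=5,
                     output_fields=["text"])[0]
context = "\n".join(h["entity"]["text"] for h in hits)
answer = oai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```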
Like a lot of other people, this worked for the majority of our use cases. It got us maybe 80% of the way there, but we couldn't always find a way to close the gap on that last 20%. Here are some of the problems we faced with that vanilla RAG experience. First, the models didn't understand our domain context. We've been working with a public-regulations company that had regulatory data, rules for parks and recreation, for example. If I said the words "vehicular capacity" to you, or to the models we were working with, it would mean the number of vehicles, or the number of people that could fit in a vehicle. But in the context of these documents, "vehicular capacity" meant the number of vehicles that could fit on the road.
This is an example of the kind of domain context that you need to understand, map, and then tell the LLM exactly what you mean; otherwise it will just assume, for example, that vehicular capacity means the number of people that might fit in a vehicle. The same is true for your embedding models: how and where a term gets embedded may differ from what your domain intends. Another example we like to use: what is rice to a farmer versus a chef, or a nutritionist, or a grocery store owner? A farmer may care about the water or the soil; a chef might care about the fluffiness or the color. This type of context is important for answering questions accurately, and we often found that the models in our RAG pipeline just did not fully understand the exact domain context we were trying to pull out. We also had problems accounting for a lot of varied query types. For example, we were working with a travel client who was building a chatbot for a wellness travel solution, and they would get a variety of very different questions from their customers.
Some people ask very explicit questions, like "I want to go to a European beach and do these specific types of activities," compared with other people who ask questions like, "I've been really stressed out lately, I've been looking to get away, I think I need a vacation." Building a chunking and embedding strategy that would work for all of these very different query types was challenging for us, and so was creating the right prompts to handle these different query types and pass context to the LLM the right way.
There's also a pretty significant difference between similarity and relevance. For example, that same travel client had beach houses and other properties they wanted to advertise to their consumers. What if somebody asks for a beach house, while somebody else asks for a beachfront house? These are semantically very similar concepts, but they're actually quite different in both price and location: a beach house may be within a mile of the beach, while a beachfront house sits on the sand. Making sure we account for semantic similarity, but also return what is most relevant to the question the user is actually asking, was challenging for us.
A last couple of points. Comprehensive answers were also pretty challenging. We worked with a legal client, for example, who wanted to ask questions like "who are all the LPs involved in a fund?" A question like that is difficult to answer when your top-K limit is smaller than the number of LPs, participants, or items you need to respond with. On top of that, handling complex multi-hop queries was also challenging. It wasn't just questions like "who are the LPs in my fund?" Customers often wanted to know: who are the LPs in my fund, and how much have they invested? And of those who have invested a certain amount, how many have special data access? Answering these questions involves a lot of complex nuance and specific types of data that are difficult to retrieve with a single query, or even a few queries, to a vector store.
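To make the multi-hop point concrete, here is a hedged sketch of that LP question expressed as a single structured query with the Neo4j Python driver; the node labels, relationship types, and properties (LP, Fund, INVESTED_IN, HAS_RIGHT) are hypothetical, invented purely for illustration.

```python
# Hypothetical graph model: (LP)-[INVESTED_IN {amount}]->(Fund),
# (LP)-[HAS_RIGHT]->(Right). Unlike a top-K vector lookup, the MATCH returns
# every LP satisfying the predicates, so the answer is complete by construction.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (lp:LP)-[inv:INVESTED_IN]->(:Fund {name: $fund})
WHERE inv.amount >= $min_amount
  AND (lp)-[:HAS_RIGHT]->(:Right {name: 'special data access'})
RETURN lp.name AS lp, inv.amount AS amount
"""

with driver.session() as session:
    for row in session.run(query, fund="Fund I", min_amount=1_000_000):
        print(row["lp"], row["amount"])
driver.close()
```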
There are a lot of different ways to improve your RAG process, and John here has spoken many times about some of them; we've leveraged them a lot on our side as well. But today I'm going to talk a little more about how we're actually tackling this with knowledge graphs. Our solution has largely been: take the data you want your agent to work with, put it in a very small, very well-scoped graph, and then let the LLM talk to that graph.
Some of the things we found while experimenting with this: first, many small graphs are better than one massive graph. When we think about the types of questions an agent needs to solve, we only want to give it the information it needs to accomplish its task. Traditionally, knowledge graphs have been very large, very comprehensive representations of knowledge across a complex domain. While there are still uses for that, we found that smaller graphs, well scoped to the questions and tasks a specific agent needs to accomplish, are a much more manageable and scalable way of building graphs, such that you give the agent only the information it needs to answer a question. We also found that LLMs now make graph creation and graph management a lot faster.
Traditionally, knowledge graph creation and ontology alignment has been a tedious process involving a lot of different stakeholders: data engineers, software engineers, taxonomists, business experts. Everybody has an opinion on the right way to structure the ontology. It's still an important thing to do, but it takes a long time to align. What we found is that by leveraging traditional machine learning techniques alongside an emerging set of LLM techniques, we're able to help customers build graphs really quickly. The process becomes much more of an iterative experiment, rather than a long ontology-alignment exercise that takes upwards of days or weeks, even months. Our insight here is that we can use LLMs to understand a user's context, build a graph for them very quickly, and help them do that over and over again until it matches the use case they're trying to serve.
Building on that, we believe graphs should be mapped to your view of the world. As I said before, a lot of the knowledge graphs we've interacted with have been generic, or domain specific but static, representations of a complex domain. We believe that to inject the right context for your specific set of questions and your use case, your graph should be built and mapped to your view of the world. This means that when a farmer or a chef looks at information about rice, it should contain the information they care about, in the context of the questions they want to answer. So we're focused on building tools that help you inject your opinion, extract the information you need, and represent it in a graph that works for you and your use case.
Lastly, we found that knowledge graphs and vector databases working together produce better results for RAG. I'm going to share a demo shortly that walks through how we do this, but by combining the high-level semantic representation of your domain in a knowledge graph with a powerful, scalable vector storage solution like Zilliz, we're able to make much more pinpointed queries and extract information in a more deterministic way, retrieving information more accurately for the types of questions we want to answer. So how does this actually play out, and what are the actual benefits we start to see? There are a few key areas I want to highlight. The first is structured, grounded answers.
This is about giving the LLM only the domain-specific context it needs. There's a phrase for this called context poisoning. Ultimately, the LLM is going to answer the question with whatever context has been provided, so if you give it bad data, you're potentially giving it misleading information that can lead to answers that are exactly wrong. What we've focused on is giving developers the tools to hand their LLMs only the information they need.
We work with a veterinary radiologist who had a big problem with hallucinations. They'd often ask something like ChatGPT about diseases or treatments, and they're building solutions to help them quickly understand which diseases might be associated with an image they've already reviewed, so they can diagnose problems faster. What we initially did was put textbooks, reports, and previous assessments into a vector database, and it still hallucinated, because we weren't able to effectively map the differences between breeds. Only certain animals can have certain diseases, and therefore certain treatments.
That varies across breeds of dogs, for example, and it's quite different across species of animals as well. So what we wanted to do was build a graph that only had to answer the questions they are most commonly asked. It turns out something like 50% of the questions this radiologist deals with are about canine abdominal issues. So we focused on making a graph specifically for canine abdomens, splitting that apart by breed, and then by diseases and treatments for those different types of ailments. By making the latent space narrower, we reduce the risk of poisoning the context window with confounding information.
This also gives us the ability to do much better filtering and more precise queries into our vector databases. The second benefit is completeness. I mentioned this before, but with graphs we're able to write structured queries that get back a complete response to a type of question. When we had that question, "who are all the LPs in my fund?", performing a semantic similarity search on something like that can reveal a lot of relevant, but not necessarily guaranteed complete, answers. Completeness is a metric we've really started to promote and focus on: how do we know we're getting back all of the answers to a specific question?
That's a hard thing to do when you're relying on top-K, but with a graph, you can specify the exact types of entities and relationships you care about retrieving, and then use that to make a subsequent query into your vector store for relevant, semantically similar information. That helps us better guarantee a complete response. The last benefit is more complex multi-hop queries. Building on that LP use case: as I said before, it's rare that the question is just "who are the LPs in my fund?" People want more nuanced, more complex information, like how much each LP has invested, and given what they've invested, what rights I have to consider granting them, for example most-favored-nation clauses for certain LPs.
Again, by using a graph, we can define the types of relationships we want to pull in and traverse the branches off different nodes within the graph. That gives us the ability to write much more complex, multi-hop queries in a very structured way. So, a lot of great benefits. But what we found when trying to implement these graphs is that it's generally quite challenging and time-consuming to do graph creation, do ongoing graph management, and then integrate these solutions into your RAG pipelines. So our solution was to focus on building tooling, first for ourselves and now for the world, that makes small, very well-focused graphs easy and quick to make.
And as I said before, with a small-graph, rapid-iteration approach, you can iterate until you have a good enough representation of your domain to solve your problem and answer questions more accurately. That's largely our approach: give you the best way to iterate quickly until you get to a graph that best represents your data. Right now we largely offer three things: graph creation, which I'm going to demo right after this, graph management, and orchestration.
On the creation side, we help customers create graphs in three ways. One is questions: you ask a question or a set of questions, and based on retrievals we perform against a vector database like Milvus, we extract entities and relationships from the retrieved information, turn those into triples, and put them into a graph. Second is a schema-based approach on the PDF side, which gives you the power to define the entities and relations you care about most and build those into a graph in much the same fashion. Third, recently released, is more structured graph creation through CSV: we extract headers, define relationships between headers, set values as names and other properties within your nodes, and then use that to create graphs and combine those CSV-based graphs with other graphs in your system.
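A hedged sketch of that first, question-driven mode: take chunks retrieved from the vector store, have an LLM emit (head, relation, tail) triples, and merge them into Neo4j. The prompt wording, the JSON shape, and the model name are assumptions; this shows the general pattern, not WhyHow's internal code.

```python
import json
from openai import OpenAI
from neo4j import GraphDatabase

oai = OpenAI()

def extract_triples(chunk: str) -> list[list[str]]:
    # Ask the LLM for structured triples; json_object mode keeps output parseable.
    resp = oai.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content":
            "Extract (head, relation, tail) triples from the text below. "
            'Reply as JSON: {"triples": [["head", "relation", "tail"], ...]}\n\n'
            + chunk}],
    )
    return json.loads(resp.choices[0].message.content)["triples"]

def write_triples(driver, triples):
    with driver.session() as session:
        for head, rel, tail in triples:
            # MERGE keeps the graph idempotent across repeated extractions.
            # Relationship types cannot be parameterized in Cypher, so the
            # relation name is stored as a property in this simple sketch.
            session.run(
                "MERGE (h:Entity {name: $h}) "
                "MERGE (t:Entity {name: $t}) "
                "MERGE (h)-[:REL {type: $r}]->(t)",
                h=head, r=rel, t=tail)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
for chunk in ["Harry casts Expelliarmus at Hogwarts."]:  # normally: vector-store hits
    write_triples(driver, extract_triples(chunk))
driver.close()
```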
On top of that, we also enable chunk linking, a feature we can implement very closely with a vector database like Milvus. It enables explainability and provides evidence for your answers by linking entity data in nodes to the raw text that may be stored as vectors in Milvus. Then there's graph management. This is about the things that make graphs useful for you, your team, and your organization: schema management and graph management at scale, workspaces, backups, versioning, and access control.
Things that just make it useful across an organization and across a team. And then orchestration, which is largely about enabling complex queries to your graphs. This is one of the things we're most excited about for the future. When you think about the value of a small graph at the agent scale, what we can do is effectively give the agent only the information it needs to answer its question.
When you think about how this applies to emerging trends and best practices in agent-to-agent workflows, what's interesting is providing a graph on a per-agent basis. If we can give an agent a way to answer a question with just the domain available to it as a small graph, and do that for every agent in a workflow, we can inject a lot more control and accuracy into each agent, ensuring it only has the context it needs for that stage of the workflow. So orchestration is about more complex query management: enabling graph-to-graph tasks, graph querying and graph comparisons, and solutions in general that make it easier for agents to talk to many small graphs and make use of an ecosystem of small graphs. With that, I'll jump into a demo and share a little more about how graph creation works with us, and then I'd be glad to take any questions.
Okay. I have a notebook to walk through. John, if you can't see this or it needs to be a little bigger, just let me know. I want to share a little more about how we think about creating graphs, starting with a schema. This is our SDK.
It's currently in closed beta. It works by initially providing a few pieces of information to get started. We give you an API key; you bring your own OpenAI key, for example (we just started supporting Azure OpenAI as well); you bring a graph database, Neo4j in this case; and a vector database, in this case Zilliz.
After we install and initialize the WhyHow client, we do a few things. We first add our document into WhyHow, then perform graph extraction and graph creation. So the first thing I'm going to do is create what's called a namespace and upload data to it. A namespace in our case is a logical separation of your data: think of it as the place where documents, the chunks of those documents, schemas, and graphs all live. It's a way of creating small, domain-focused areas for your different use cases. So I'm going to upload the document I'm using for this, which in this case is the first book of Harry Potter: about 200 pages of text in PDF format, basically the full first book. From here, I'll create the graph based on a schema. After the document has been uploaded, I can run this and tell the system to create the graph based on a schema.
Let me show you what the schema looks like before we see the graph it creates. A schema consists of three main things: entities, relations, and patterns. Entities are the node types: in this case we want nodes of type character, object, spell, or location, with little descriptions that make it easier for humans to understand what each means. Relations are effectively the edges we use to create relations between those nodes: in this case casts, goes to, and uses. Patterns are effectively the restrictions we want to add to the graph creation process: we want characters to be able to cast spells, characters to go to locations, and characters to use magical objects. What's valuable is that since it's in this simple JSON format, you can iterate on it really quickly and add new relations. For example, if I add a new one down here saying a character is friends with a character, that's now a type of relation we'll be able to extract and map into the graph.
Cool. Simple as that. Okay, now let's check whether the graph has been built. We'll go over to Neo4j to see our graphs. We're using Neo4j because it's a really powerful graph database for us, but also because it provides a really friendly user interface for exploring your graphs. If you check this out, you can see a lot of interesting information extracted by our solution: a lot of different locations, for example, and a lot of different spells, objects, and so on. What I want to call out is perhaps not being displayed right now; let me check this real quick. Cool.
All right, this is a better representation. What I wanted to point out, and what's so powerful about this, is that only the entities and relationships we defined are being pulled out and displayed here. The initial ones I showed you before are in a different namespace, which I'll show after this one. This is some of the most powerful stuff about our solution: you can ask an LLM to just find interesting relationships, but you can't always provide the right control to make sure it's pulling out and enforcing the relationship and entity types that are most important to you. By making sure we only have entities of type character, location, object, and spell, and relations of casts, goes to, and uses, we can write much more structured queries that we can trust are talking to the right information within our graph, rather than letting the LLM decide for itself what to call a certain entity or relation. Now, Harry Potter is a fun demo that's not always the most relevant, unless you're a screenwriter or a Harry Potter enthusiast, so I wanted to demonstrate one that's a bit more domain specific and probably more relevant.
I have a couple of different documents I want to show you that demonstrate how we work with CSV. The first example is really simple: a small news article about an investor who works for Sequoia, the different companies they've invested in, a couple of other people, and how they've worked together. We asked our schema-based extraction to pull out persons, companies, and technologies, with relations like works at, founded, invested in, and focuses on, connecting people to technologies and companies.
This is basically the schema we used to extract information from this PDF. As you can see, at the center of the graph we have one person, Stephanie Zhan, and we have that Stephanie works at Sequoia, Sequoia focuses on different types of technologies, and Stephanie has invested in or worked at various places. Pretty standard, pretty small, pretty easy. But what I wanted to show you is how we can do this with CSV and make more use of this information by providing more structured data.
Much the same as before, I import and initialize our WhyHow client. Now, with a CSV, I can automatically generate a CSV-based schema, because the data is fully structured and we can determine the schema from it. What I'm doing here is, on a per-column-header basis, pulling out the title of the column, setting that as the name of an entity, and then using that to create the same structure we created before with the PDF version, except that I can now take certain headers and specifically mark them as properties.
What came out of this response is all the different headers in the CSV. Actually, I'll show you the CSV real quick so you can see it as well. It's quite simple: person, fund, contact details, and recent investments. You can see that same person, Stephanie, mentioned here as well. It's only about 20 lines, with just a few different headers. The automatic schema generation pulled out each of those headers: person, fund, contact details, and investments. It created patterns automatically between them, basically treating the first column as the primary key and creating a relation between it and each of the subsequent columns: person has a fund, person has contact details, person has recent investments. If you check out the schema we built for this, we let you do a couple of things. We let you set headers as properties on nodes, and we also let you override the names of these columns as different entity types. You can see from our other example that we're pulling out companies, technologies, and people, and I really want to make sure the CSV schema matches the schema I created over there. So I'm going to call a fund a company, and I'm going to call recent investments companies, since that seems to most closely match what I was seeing there.
Once I've created the schema, I do the same thing I did before: add the CSV to my namespace. Now that the CSV is in the namespace, I can use that CSV-based schema to create a graph from the CSV. If we go back to Neo4j, you can see it's being created almost instantly: a bunch of nodes have been structurally added and naturally extended on top of the existing entities in here.
You can see here that Stephanie still works at Sequoia, but a bunch of other, more structured information has been added. These are all the people mentioned in the CSV. Their contact details are in here as well, but in this case specifically stored as a property of the person node. Before, Stephanie was just a person node; now the contact details have been added to it, along with additional information found only in the CSV. Stephanie's OpenAI connection, for example, is now mentioned here as well.
What's powerful about this is it shows the extensibility, and how well structured and unstructured data can work together in a knowledge graph, while giving you a single, consistent place to query that information from. We're trying to make this a much easier experience for both structured (CSV) and unstructured (PDF) data. The last thing I'll show is a query interface. We can now ask questions specifically about the entity and relation types that exist in the graph. If I ask a question in natural language, something like "what did Stephanie Zhan invest in?", the system should be able to look at this graph, find Stephanie, and understand, based on these relations, which companies she invested in.
What we send back is a few things. We give you a natural language response: Stephanie invested in these companies. I can check it after this so you can pause and double-check. But the other important piece is that we also give you the actual raw triples coming back from the query.
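Consuming that response might look roughly like the sketch below; the method name and the response fields (an answer string plus a list of triples) are assumptions modeled on what the demo shows, reusing the hypothetical client from the setup sketch above.

```python
# Hypothetical query call against the WhyHow client initialized earlier.
response = client.graph.query(
    namespace="investors",
    query="What did Stephanie Zhan invest in?",
)
print(response["answer"])  # natural-language summary
for head, relation, tail in response["triples"]:
    # Raw triples can feed the next stage of a RAG pipeline directly.
    print(f"({head}) -[{relation}]-> ({tail})")
```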
So from here, you can make use of these triples to construct your own response, or pass them into the next stage of your RAG pipeline. We think the triples are probably the most valuable information here: as long as you understand the schema used to create the graph and the context coming back from it, at the agent level you can define quite precise next steps for processing this information. One other thing I'll show in the graph, going back to the Harry Potter graph: I mentioned up here that I removed the "contains" relation. If I add that back in, it's going to get quite messy, but you'll see a lot of chunk nodes being mapped in, containing the raw chunk data behind each node. It's probably so big that it will take a minute to actually load.
If I ask a question and include chunks in the request, what I get back is the set of raw text that serves as the evidence for the answer. Okay, it's a lot busier here, but what you see now is this chunk node, which references the raw text from which the entity was extracted. If I zoom in on one of these, it's quite messy, but here's the actual bit of text that was used to extract some of the entities you've seen in here, related to the McGonagall entity or the Harry entity, for example.
With this feature, we want to give you a much more controllable and explainable way of creating your answers. Some of the problems this fixes are about injecting trust, and evidence, into the types of answers you want to provide. By linking answers to raw text chunks, and then making subsequent queries for those chunks into vector databases like Milvus, we're able to provide much more explainable responses. Okay, that's most of the demo. There are a couple of these recordings on our blog, so I'll share the link for that at the end.
I also want to quickly plug some related work we've been doing with Zilliz: a rule-based retrieval package that we open sourced. I can share it with you all here. It actually does not involve graphs under the hood right now, but it gives you an example of how you can use a simple rule-based approach to make much more deterministic, structured retrievals from your vector store.
Think about the chunk information in this graph: as long as I know the chunk IDs and how they map to the chunks located in my vector store, I can first make queries to my knowledge graph, pull out the metadata I care about, and then make much more filtered, structured queries to Milvus, so that I pull out only the chunks I care about from the vector database. This is open source, so definitely give it a look. John was great at helping us integrate; it was very easy to integrate with Milvus as well. It also helps communicate a bit more of the thesis: much more structured, deterministic queries to the vector store.
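Here is a hedged sketch of that graph-then-vector pattern: ask the graph which chunk IDs matter, then restrict the Milvus search to exactly those chunks. The collection, field, and label names are assumptions, and this is the general shape of the idea rather than the open-source package's API.

```python
import json
from neo4j import GraphDatabase
from pymilvus import MilvusClient

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
milvus = MilvusClient(uri="http://localhost:19530")

# 1) Structured graph query: which chunks mention the entity we care about?
with driver.session() as session:
    chunk_ids = [r["id"] for r in session.run(
        "MATCH (c:Chunk)-[:CONTAINS]->(:Entity {name: $name}) "
        "RETURN c.chunk_id AS id", name="Harry")]

# 2) Vector search filtered down to exactly those chunks.
query_embedding = [0.0] * 1536  # stand-in; use your real query embedding
hits = milvus.search(
    collection_name="chunks",
    data=[query_embedding],
    filter=f"chunk_id in {json.dumps(chunk_ids)}",  # metadata filter expression
    limit=5,
    output_fields=["text", "chunk_id"],
)
```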
So, to recap before I hand it back over: we think it's valuable to build a graph that maps to the scope of your question and your use case, and nothing further. Doing this reduces the risk of context poisoning and ensures your agent is always getting the most important and relevant information. We find that by doing this in a small-graph way, with tools to build your small graphs quickly, you can more easily get to the answer. Representing information according to domain experts, and how they interact with it, is the best way to give relevant information to your agent. Again, think about how you want to represent information according to your view of the world: am I building a graph about rice from the perspective of a farmer, a chef, or a nutritionist? That kind of context makes sure the relations and entities you're extracting are meaningful to the types of questions you're going to get.
In general, let agents talk to these graphs and perform small tasks in a well-scoped way. If you think about how these small graphs map to your agent workflows, graphs can be structured and scoped to answer a specific question for a specific stage of your multi-agent pipeline. Definitely more to come on that from us. And as a whole, knowledge graphs and vector databases are supportive of each other; they're often force multipliers for each other. Like I said before, the initial way we build these graphs to begin with is through questions.
The first thing we do is ask a question to a vector database, get back the set of chunks most relevant to that question, use those chunks to do entity and relationship extraction, turn those into triples, and build a graph; that's still how we build graphs today. And on the other side, graphs can be a way of extracting relevant metadata you can use to make more structured, semantically relevant queries to your vector store, or to connect directly to chunk IDs sitting in Milvus. Okay, that's it for me. This is just a little more about us: our website is whyhow.ai, and we also have a blog where we talk about a lot of this stuff pretty regularly. So please check us out there, join our mailing list, or set up some time with us to talk. We love this stuff, so please don't hesitate to reach out. I'll pass it back to you, John, if you want to take questions or move on to something else.
Thanks a lot, Chris, for the great presentation. I think let's answer the questions from the Q&A and maybe the group chat. I do see a question in the Q&A session: given that Neo4j seems to support both knowledge graph and vector capabilities, why introduce an integration with a dedicated vector database? I think that's a very interesting question.
I have something to say about that, but first I'd like Chris to give some comments.
Yeah, I'll just say, you're definitely right: the vector search option in Neo4j is definitely available and super valuable, and it's something we use as well. What I like about using something like Milvus is the extensibility and the scalability.
We get very fine-tuned, easy-to-write queries that make it simple to add structured metadata to the queries we want to write. It's also just very simple to work with for us: if we ever want to change or modify our strategy for putting raw data into Milvus, we have a really easy way to modify and update it in a highly scalable way. Generally, it's very user friendly for us, and for a startup that's also just a really relevant reason. But I'm curious what you'd have to say, John.
Yep. I really love this question, and I get asked it a lot: MongoDB has vector search, this or that database has vector search, so why build a dedicated, purpose-built vector database? First of all, you'll soon see every single storage solution, even analytical platforms like ClickHouse, add vector search capability, because it's such a hyped topic and everybody wants to jump into the game and have a say in this hot area of AI. That's the current scene. The way we differentiate ourselves is that we are a purpose-built vector database. You can bolt vector search onto your existing data store, but that won't sustain you very far into this journey.
Our primary value proposition is scalability. If you have less than a million vectors, probably any vector database, even something in-memory that runs on your laptop, will be good enough for you. However, if you have billions of vectors, that's where we're good: scalability and performance. We can still provide tens of milliseconds of search latency where other options may have crashed, or return in a few seconds, which is not bearable in production. Secondly, we provide a lot of great enterprise-readiness features, like RBAC, that aren't available in a lot of the competitors' options.
The other things are hybrid search, where you can combine keyword search capability with vector search, and very efficient metadata filtering. Say I want the top 10 closest vectors to my query, but subject to particular criteria, like the price being between 10 and 15 dollars, and the filter rate is extremely high: 99% of the data in the collection doesn't satisfy the criteria. Doing vector search first and then blindly filtering the results, hoping you're lucky enough to get 10 results back, doesn't really work in that case. There are a lot of things you may not see when you're new to vector search, but once you go further in the development and productionization journey, you soon see the limitations of a vector search solution hacked together by a team whose focus is analytics, ETL pipelines, or, in this case, graph databases. I did check the Neo4j vector search offering, and it seems it doesn't have metadata filtering capability; please correct me if I'm wrong.
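For reference, the kind of filtered search described here is a short call in Milvus; a minimal sketch with pymilvus, with the collection and field names as assumptions:

```python
from pymilvus import MilvusClient

milvus = MilvusClient(uri="http://localhost:19530")
query_embedding = [0.0] * 768  # stand-in; use a real query embedding

# The filter is applied during the ANN search itself, so even a 99% filter
# rate still returns the true top 10 among the matching rows.
results = milvus.search(
    collection_name="products",
    data=[query_embedding],
    filter="price >= 10 and price <= 15",
    limit=10,
    output_fields=["name", "price"],
)
```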
Also, the Neo4j syntax seemed kind of strange: to use vector search you specify something like "where title text contains hierarchical navigable small world graph." It's not really a native vector search syntax; it's somewhat force-fit into the graph search syntax. Those are some of my observations. If you have a very serious vector search workload, billions of vectors, or thousands of QPS of search requests, then I think a purpose-built vector database is your good friend. But if you just have a very casual use case and you don't want to migrate the data to another storage layer, then staying with whatever is working for you might be a very practical choice.
I hope that answers the question. There's another question about scaling to billions of vectors in Neo4j. On the Neo4j side, there are plenty of documented examples of it scaling to billions of nodes, even trillions of relationships, so that kind of scalability is definitely possible with Neo4j. As for what WhyHow is focused on, there are two answers from the WhyHow side.
Since we're building on technologies like Zilliz and Neo4j, we get the benefit of that scalability out of the box; the platformization we provide sits on top of solutions like these, which is why we love working with vendors like Neo4j and Zilliz. But our approach is mostly to help break these problems down from the large-node, large-graph scale into smaller graphs that are more purpose-driven for answering your questions and for working with agents in a more accurate, deterministic way. So while it's certainly possible, we would largely guide and enable you to build graphs in a much more purpose-driven, well-scoped fashion. I'd say: possible, but we should definitely talk about your use case and how it would best be represented with the kind of multi-graph workflow we're promoting.
Yeah, just quickly on top of that: we know that knowledge retrieval, or graph search, has its own high barrier of domain expertise, and similarly with vector search, there are a lot of engineering challenges to solve if you run into scalability problems in vector index building and vector retrieval. That's why we so appreciate partner friends like WhyHow who can combine great technologies from experts in individual domains into an easy-to-use, very effective solution for developers and enterprise customers. All right, we still have a few minutes; let's see if there are any more questions.
I think this is the last slide, with the Discord.
Yeah, this is the last slide. You can always find links to our Medium, Discord, and Calendly on our website, so whyhow.ai is a good place to find them. Okay.
We got one more question: is there sample code that shows the orchestration between the LLM, the vector database, and the graph database?
Yeah, let me find you some of our code. Our SDK is currently in closed beta, and if you'd like to join, please send me an email or grab some time on my Calendly. For the webinar audience: the question was whether there's code available that shows how the LLM, the vector database, and the graph database talk to each other. Right now a lot of that code is obfuscated behind the SDK, but let me post a link in the chat. If you go here, you can see what our SDK looks like now. It shows how we talk to the WhyHow endpoint and how you can bring in your own data in the form of a CSV or PDF, specify the model type, OpenAI or Azure OpenAI, and even a specific entity-extraction model type.
Sample code is in there, and there are some examples in there as well. The last thing I'll call out: if you go to our repositories, you can also see a site for schemas, which we just released. In addition to the SDK link I just posted, this schemas site talks about the various ways we and some of our customers have tried to use our solution to extract information.
You can check out that site for information about how to actually put this into action, and there are also some notebook examples in the last link I provided; I'll paste that link in there too. All right, thanks. The next question is: have you tried WhyHow on Wikidata or any large open-source graph dataset? Hmm.
We've tried out a lot of things, but Wikidata specifically, we have not. I'd actually love to do another one of these with my co-founder; he did his PhD in knowledge graphs, and I think he has a lot of really interesting experiments he could run and share with you all. We've mostly worked with what our customers are asking for today, which is a variety of domain-specific enterprise data, PDFs for example.
We're also working with a lot of mostly regulated industries that have a high bar for accuracy and reliability, so think finance, healthcare, and legal, and a lot of that data is currently in PDF format, so most of the data we're working with is in that format today. But there are a lot of really exciting implications for this. And the last question is: do we support other languages?
I'll take a crack at this. Right now we just have a Python SDK. We'll definitely look at adding support for other languages as our customers need it; Python is probably the highest in demand right now, but we're definitely looking out for feedback if you have a use case for anything else. John, I'm not sure if you want to touch on that too.
No, that's good.
Okay, last call for any questions. If not, I think we're running out of time. Thank you so much, Chris, for the great presentation. And for the webinar attendees: if you're interested in WhyHow.AI or Zilliz, feel free to check out our websites and the resources linked in the chat.
Thank you. Thanks, everybody, for attending this webinar. We'll see you next time.
Okay, thanks all.
Meet the Speaker
Chris Rec
Co-founder, WhyHow.AI
Chris is the Co-Founder of WhyHow.AI. WhyHow helps developers build more accurate, deterministic RAG applications using the power of knowledge graphs.