
LLMs and Graphs Synergy

2025/2/10

Data Skeptic

People

Asaf
Garima Agrawal
Topics
Asaf: I mainly work in network analysis and know relatively little about knowledge graphs. Knowledge graphs are excellent for storing data, but some network algorithms don't work as well on them as on single-dimension networks. Many knowledge graph projects depend heavily on the personal biases and worldview of their creators. A knowledge graph on its own may not be that interesting; its value lies in combination with other applications, for example the knowledge graph elements that assist retrieval in Google Search. Knowledge graphs can be applied creatively to identify similar materials or processes across industries and thereby find solutions to problems. Knowledge graphs can help reduce hallucination in large language models (LLMs). An iterative process between knowledge graphs and LLMs can combine the strengths of both, and LLMs can extract data from knowledge graphs more effectively. LLMs can be used to generate knowledge graph queries and, together with knowledge graphs, improve ROI in areas such as customer service.

Garima Agrawal: My PhD research focused on domain knowledge representation, using knowledge graphs and large language models to strengthen AI's domain knowledge. A knowledge graph is a static model: building a question answering system on top of one is cumbersome, and updating the graph is also challenging. Knowledge graphs can represent all kinds of data, such as cities, airports, organizations, products, and customers, and can carry additional information beyond nodes and edges. LLMs can be used to understand a question and then reason and answer together with a knowledge graph, making up for the knowledge graph's weakness in natural language understanding. Ways to evaluate the success of knowledge-graph-augmented LLMs include exact match, top-K answers, and the usefulness of the extracted facts. Evaluation can also look at how much hallucination is reduced, how many wrong answers are corrected, and whether reasoning improves. Knowledge augmentation aims to steer the LLM toward the right information, which can be done with the right prompt, few-shot examples, or the knowledge graph's ontology. Relying on the LLM only to extract entities and query the knowledge graph may miss the intent of the question; a more refined strategy is needed. LLMs can understand the intent of a question and use that intent to extract more accurate information from the knowledge graph. The interaction between LLM and knowledge graph should be a two-way process: the LLM is not just an interface but also helps understand intent and extract information. LLMs do not need special training to interact with a knowledge graph; adjusting the prompt is enough. A knowledge graph itself has no intelligence; it is a static data structure that stores information and is accessed through a query language. A knowledge graph is a static representation of knowledge over which reasoning can be performed, for example with SPARQL queries. Knowledge graphs are more flexible than relational databases and make it easier to build relations across many tables, especially with large data volumes. Knowledge extraction rests on semantic similarity matching. The current research focus for knowledge-graph-augmented LLMs is optimizing and improving retrieval-augmented generation (RAG). The research has evolved from direct training to fine-tuning to RAG, and RAG is currently the most cost-effective and easiest approach. In customer service, RAG costs can be reduced by, for example, identifying and exploiting frequently asked questions (FAQs). AI can extract customer intent from conversations, reducing manual intervention and improving efficiency. Compared with earlier systems built on databases like Elasticsearch, current LLM-based systems have made significant progress at solving problems. Today's AI can already handle many tasks that used to require manual work, such as intent recognition and conversation path design in customer service systems. My current work focuses on reducing the cost of RAG, and through my company Humacon I provide consulting on AI investment and implementation. AI still requires human intervention; it will not completely replace human work but will improve efficiency. AI will change the nature of work, and people need to upgrade their skills.


Chapters
This chapter explores the definition and applications of knowledge graphs, contrasting them with traditional networks. It highlights the unique capabilities of knowledge graphs in handling diverse data types and their potential for creative problem-solving, as illustrated by an example of optimizing a wax application process in woodworking by drawing parallels with the automotive industry.
  • Knowledge graphs can model diverse data unlike traditional networks.
  • Network algorithms may not work as effectively on knowledge graphs.
  • Knowledge graph applications depend on data input and schema design.
  • Example: Using knowledge graphs to find similar processes across industries (waxing wood and vehicles)

Transcript


You're listening to Data Skeptic Graphs and Networks, the podcast exploring how the graph data structure has an impact in science, industry, and elsewhere. Welcome to another installment of Data Skeptic Graphs and Networks. Today we're talking about knowledge graphs and RAG and some of these topics. Asaf, what's your background with knowledge graphs?

Knowledge graphs: you can model everything as a knowledge graph. We know that if you follow the episodes, right? And what I usually do is network analysis. So I don't work so much with knowledge graphs. I work with, let's call it, plain networks: networks where each node is the same kind of entity and the connections all have the same logic.

In a knowledge graph, you can have much more diverse data. It's a cool thing when you want to store data, but the problem is that some of the algorithms, the network algorithms that you want to use, won't work as well as on networks that are one-dimensional.

You can use some network algorithms, but they'll get you results that are not as clear-cut as what you get in a one-dimensional network. Makes sense. I've often felt that many knowledge graph projects are highly attuned to the bias of the person who entered all the data, the way they look at the world and chose to

create these data structures to represent that knowledge. That network on its own isn't necessarily that interesting. It's more about what other application can tie into it. So maybe the best example is Google search, where sometimes you'll query something like, I don't know, uranium. And along the right side will be all these knowledge graph elements they've quickly retrieved that are related to that topic.

When you build a knowledge graph, of course, it depends on who builds it and whether you use the right schema and so on. And usually it's, I don't know about intuitive, but straightforward, right?

Sometimes it doesn't seem like a big deal, but I saw people use it creatively. There's a company I know that used knowledge graphs to try to... We talked about materials, right? They wanted to look at materials, and not just materials but also processes, the same processes on the materials, that they could apply in different industries. Okay, so the example they gave me was this.

What they wanted to do was they wanted to apply wax on wood. It's a difficult process, lots of wax is lost in the process, so it's very expensive.

They used a knowledge graph to understand in which industries there are the same processes and some of the same materials, something that is similar. And what they found was the auto industry, where you apply wax on vehicles. But the process was magnetizing. And you think, okay, so what they did was magnetize the wood. It sounded, I don't know, as I said before, I'm not a physicist.

And I don't know if it's true, but that's what they said: if you pass enough current through the wood, some of the wax sticks. And it's enough that you get a better return on investment on it. So they used the knowledge graph to look for similar patterns, or more in a semantic sense, the same materials or the same processes in different industries.

So that's a non-intuitive way to use a knowledge graph, for example. Well, the topic we're going to cover in this interview is can knowledge graphs reduce hallucinations in LLMs? It is the era of LLMs. We got to touch on this a few times. And it's very commonly reported that they hallucinate, even though I don't have that experience myself too often.

But of course, zero hallucinations would be ideal. So maybe augmenting with a knowledge graph is the secret recipe. Yeah, I saw you hallucinating a few times, but I think...

But what I liked about the talk with Garima was the idea of using an iterative process between the knowledge graph and the LLMs, right? So as she said, you can get the best of the two worlds: the knowledge graph as the way to store the data in a form you can use, and the LLMs that help you extract the data from the knowledge graph.

As we know, with knowledge graphs you need some querying, you need different query languages and so on. And LLMs are much better, I think, and are the future of extracting things from knowledge graphs. So I think that was cool.

I'm sort of a novice with RDF, the resource description framework, and its accompanying SPARQL query language. I don't think I could effectively query it, but I'm pretty sure I could ask an LLM to write me a SPARQL query to get me what I want. So we're moving in an interesting direction there. The second thing I liked was the ROI solutions. Instead of training or improving the prompting in, let's say, a customer service, well, not customer service, it was a call center, right?
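
To make that idea concrete, here is a minimal Python sketch of "let the LLM write the SPARQL." The llm() helper is a hypothetical stand-in for whatever chat-completion client you use; the Wikidata endpoint is real, but the flow as a whole is an illustration, not a method from the episode.

```python
# Sketch: have an LLM draft a SPARQL query, then run it with SPARQLWrapper.
# llm() is a hypothetical placeholder for your chat-completion client.
from SPARQLWrapper import SPARQLWrapper, JSON

def llm(prompt: str) -> str:
    """Hypothetical LLM call; returns the model's generated text."""
    raise NotImplementedError("wire up your own LLM client here")

question = "Which rivers flow through Phoenix, Arizona?"
query = llm(
    "Write a SPARQL query against Wikidata that answers the question "
    f"below. Return only the query.\n\nQuestion: {question}"
)

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery(query)          # running model-generated queries blindly
endpoint.setReturnFormat(JSON)    # is risky; inspect them in practice
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row)
```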

Instead of training or improving prompting for call centers, you can just listen to the conversation. Let the algorithm listen to the conversation and figure out the context by itself. That's self-prompting, you might say. Absolutely. Well, let's jump right in and see if we can get an answer to that question. Let's do it.

My name is Garima Agrawal. I recently did my PhD in computer science at Arizona State University. I've worked for about 15 years in the software industry and the data science and AI world. Right now I'm working as a senior researcher and AI consultant with Minerva CQ. And I also run an AI consulting company called Humacon.

It's like human connections, where human connection meets innovation. Well, this may be a little out of scope for our main discussion about your research paper today, but I'm curious if you could share any thoughts on what it's like to go back and get your PhD after being in industry for a little while. It's a question very close to my heart. It was a tough call, because I was working as a data scientist in Bangalore in India.

It goes back a little bit to the history of my background. So I did my bachelor's in computer science and then my master's. That was 2003, when I started my master's, and I completed it in 2005. And then I worked on neural networks and fuzzy logic and, you know, that neural network stuff. But it was all just beginning then. And there was no AI in the industry.

I wanted to get a PhD at that time, but for some reason I never got it started. And then I started a job and worked at multiple, you know, MNCs, in software development and all that.

And then years later, fast forward 10, 15 years, I found that AI is a thing again. You know, I was working with [24]7.ai and we were building chatbots. I started working with them as a data scientist. And then I was like, okay, it's not too late. Maybe I can go back.

Yeah. So I have a 12-year-old daughter now. She was seven at the time. It was a tough call to leave her there, move from Bangalore to the United States. And, you know, more importantly, get back to school, because it's just hard at this age, and I'm 45 now. So I started when I was almost 40.

And yeah, I studied with those bright 25-year-old kids. But yeah, we made it. Congratulations. Thank you.

What was the focus of your PhD? So the focus of my PhD was domain knowledge representation. I used knowledge graphs for it, and then I used large language models to show that that integration, augmenting domain knowledge, helps AI. The basic premise was that if you are building an AI, you need some domain expertise.

General-purpose AI is good for general purposes, but not for a domain-specific question answering system. And what was the state of knowledge graphs before LLMs came into the picture? How well deployed were they, and how did people use them? Well,

when I was working on knowledge graphs, it was like 2001. In 2002, we saw a surge in knowledge graphs. I should say by 2019 and '20 they were pretty much at their peak, and then they started to die, because knowledge graphs are a static model.

Building a question answering system on top of a knowledge graph is a little bit cumbersome, because you have to first model all that knowledge, and then building a question answering system on a static model is slightly difficult: just understanding the question, the entities, what's going on, just traversing the graph and all that.

So people were working on those question answering models. And then some of the companies were doing pretty well, like Neo4j and others coming up in the knowledge graph space.

But still the question was, you know, how do I use the static knowledge? How do I update my knowledge graph? How do I model this knowledge? How do I take the, you know, huge data? How do I get started if I don't have existing knowledge graph? So I think it was pretty much a research topic. Some companies were using it, but not that much.

You know that feeling when you discover just how much of your personal information is floating around online? I certainly do. When I got my first DeleteMe privacy report, I was shocked to find over 700 listings containing my information across the internet. But here's the exciting part. Within just the first week, DeleteMe had already removed almost 40 of those listings.

What really impressed me was their thorough communication. They keep you informed every step of the way. Sometimes data brokers like WhitePages need extra verification. DeleteMe lets you know to watch for those emails so you can complete the process. Plus, you get detailed privacy reports showing exactly which data brokers they've found your information on and what they're doing to remove it.

The best part? You're assigned a dedicated privacy advisor who's always ready to help with any questions or concerns. It's not just automated software. There's a real person watching out for your privacy. They continuously monitor for new data brokers and handle removals automatically as part of your membership. Take control of your data and keep your private life private.

by signing up for DeleteMe. Now at a special discount for our listeners. Today get 20% off your DeleteMe plan by texting DATA to 64000. That's DATA, D-A-T-A, to 64000. Message and data rates may apply.

This week's episode is sponsored by Math.Fashion. That's a real website. Find the right t-shirt for the math geek in your life. Are you ready for Pi Day? That's March 14th coming up. Check out the line of Pi-related apparel in a variety of sizes and styles. Visit https://math.fashion.

What are the edges and vertices in a knowledge graph? So it could be a graph of cities or airports or your organizations, your products, your customers, pretty much anything. In social media, it could be the people you are connected to. So there is a node, say a person. And then you have an edge which shows the connection. Now, what makes a knowledge graph a little different from a simple graph

is that it gives me a flexible structure to represent my organization or my data or my domain. Along with those nodes and edges,

I am also able to give out some more information about that node. Because a simple graph might just say a city, say Phoenix, but not say that Phoenix is the capital of Arizona and it's a metropolitan city, it's a sun city, things like that. You know, it gets hot there, so don't come in summer.

You know, things like that. And we have beautiful winters. So all that information you can encode and embed in that node, and that makes it a knowledge graph.
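
As a rough sketch of what "embedding extra information in a node" can look like, here is a tiny example using the rdflib Python library; the ex: namespace and property names are invented for illustration.

```python
# Sketch: a node that carries more than just its label, using rdflib.
# The namespace and property names here are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.Phoenix, RDF.type, EX.City))
g.add((EX.Phoenix, RDFS.label, Literal("Phoenix")))
g.add((EX.Phoenix, EX.capitalOf, EX.Arizona))  # an edge to another node
g.add((EX.Phoenix, EX.note, Literal("Hot summers, beautiful winters")))

print(g.serialize(format="turtle"))  # a readable dump of the triples
```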

Can we teach an LLM to ask an expert in the form of a knowledge graph? Now, there is one big thing which we have been able to solve, and that's why knowledge graphs have picked up again: they were good at knowledge representation, but not at understanding the entity or the question. Now I have a model which can very well understand English. Maybe I cannot understand English that well, right? I might not have that extensive a vocabulary, being a non-native speaker.

But my model is able to understand, and gives me great grammar, good punctuation. Everything looks very nice and flows easily in the language. So I'm able to solve the problem of natural language understanding. And why not use the model for understanding the question? It also understands a little bit further, not only the question but, like, what is the intent

of the question, why somebody asked this question, right? What does this question mean? Now I know, okay, you have given me a list of symptoms you are going through. I understand this. Now I can go refer to a book and just like looking into a book and see, oh, okay, this is a catalog and it says, these are the symptoms, then prescribe this.

And a knowledge graph is great at that, at representing: if this is a symptom, then do this, then this and this, right? So we have the connections defined. We have the knowledge represented very well. Now I use this model, my LLM, to understand my question, to get my intent, go to my knowledge base, and extract what I can: the path, the reasoning path. Go back to my graph algorithms, do the reasoning, you know, traverse the path or do embeddings, vector search, whatever works for you. Extract that knowledge, bring it to the LLM. Now it is again able to curate an answer for you, right, the way it is required: match it with your question and then give it to the user. So that's how I see knowledge augmentation with LLMs.
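
A compressed sketch of that loop, with every step as a hypothetical placeholder function rather than a specific library API:

```python
# Sketch of the described pipeline: the LLM parses the question and its
# intent, the knowledge graph supplies reasoning paths, and the LLM
# curates the final answer. All helpers are hypothetical placeholders.

def understand(question: str) -> dict:
    """LLM call: return the question's entities and intent."""
    ...

def retrieve_paths(graph, entities, intent) -> list:
    """Traverse the graph, or run embeddings / vector search."""
    ...

def curate_answer(question: str, facts: list) -> str:
    """LLM call: compose an answer grounded in the retrieved facts."""
    ...

def kg_augmented_answer(graph, question: str) -> str:
    parsed = understand(question)
    facts = retrieve_paths(graph, parsed["entities"], parsed["intent"])
    return curate_answer(question, facts)
```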

So I get the broad vision for knowledge augmentation. How do you measure the success of any effort? There are probably many methods we could pursue, many ways we could implement it. Someone will be in first place and second place and all that. How are we judging them? So there are different methods of evaluation; in our paper, we spoke about them. You know, you might be looking for an exact match.

Because you already have question-answer pairs and you know the answers, you might be looking for an exact match. Or you might be looking at the top K answers and checking whether the knowledge which I have extracted has been useful

to give this answer, or whether I have been able to extract some facts about it. Things like where people were born, what happened here and there, all geography, general knowledge questions, right? So whatever is there on Wikidata or Wikipedia, we can just extract that and have some model formulate an answer for us. So we can do an exact match there. We can check: have the top 10 entities which I've extracted, or the top 10 relations, or whatever information has come from the knowledge graph, been useful for me to formulate an answer? If yes, then okay. And then I can define how many, you know, top K, like top two or top five or ten, I'm going to use. It is the same thing with RAG: we have tons of PDFs available there.

For a normal person to look into those documents, it would take maybe a couple of hours, right, to extract them. Now, the model is able to get me at least the top K articles which may be useful. And this model can also scan through those top K. So it's not using its own knowledge, which is general-purpose knowledge. It's not going through the entire PDF document, which might just confuse it.

So we extract some little parts of it and then, you know, maybe give it as a knowledge graph, like GraphRAG. We create connections between whatever makes the most sense, extract that, and give it to the model. It might be at least better than not augmenting any knowledge, or the model just blabbering on its own. We are checking the accuracy in terms of percentages.

And if we know exactly the answers, then we do the exact match. Can we look at it in terms of the amount of hallucinations that are reduced, or something along those lines? Absolutely. Yeah. It depends on your problem. If you know the answer, it's fact checking; then you look at the exact match. You also look at the amount of hallucinations reduced.
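
For instance, a hedged sketch of two of the metrics mentioned, exact match and a top-K hit rate, on invented toy data:

```python
# Sketch: exact match and hits@K over toy data (invented for illustration).

def exact_match(predicted: str, gold: str) -> bool:
    return predicted.strip().lower() == gold.strip().lower()

def hits_at_k(retrieved: list[str], gold: str, k: int) -> bool:
    """Did the gold answer show up in the top K retrieved items?"""
    return any(exact_match(item, gold) for item in retrieved[:k])

pairs = [("Phoenix", "Phoenix"), ("Tucson", "Phoenix")]
accuracy = sum(exact_match(p, g) for p, g in pairs) / len(pairs)
print(f"exact match: {accuracy:.0%}")                            # 50%
print(hits_at_k(["Tucson", "Phoenix", "Mesa"], "Phoenix", k=2))  # True
```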

A lot of times we see how many of the wrongly answered questions were improved. Are we able to improve its reasoning? Are we able to improve where it is looking for the information? My idea of knowledge augmentation is that the model has been trained. It understands some part of it. It has a lot of knowledge. Now I'm trying to steer its information in the right direction.

By giving it the correct prompt, adding some few-shot examples, or maybe, like in one of my works, using the ontology of the knowledge graph to tell it: hey, this is how the system works, so just go by this; don't try to add extra information to it. Right? So I'm trying to steer its path and give it some cues, little cues, on how to think about it.

So I'm hearing maybe one of two paths this could work on. Either you could query the knowledge graph for relevant data and then provide it to the LLM, or you could teach the LLM to make its own query and then tell it the response from the knowledge graph. Is that a correct description? Is one of those a correct description?

I mean, I think it's more or less similar. It depends on how you are engineering this problem. As you said, the first part is, you could query the LLM and ask it, hey, this is what I need, so extract the main entities. Then you have an interface where you go look for those entities in the knowledge graph, extract that information, bring all those nodes, and give it to the LLM. It gives its answer.

But in one of my papers, the one on Mindful-RAG, I addressed exactly the same problem. What happens when you ask the LLM only to extract the entities? It goes to the knowledge graph, it does something, it gets the knowledge, but we are relying on the capability of the interface.

We're just sending the entity there, but there is a ton of information out there. What we bring back to the LLM might not make sense, and then we just give it to the user. That might not be meaningful. When we did experiments, we asked why and where it is failing. So I took the error logs and looked into them, and I realized that the model is not even understanding the intent of the question. It's not getting the context right.

So I use, you know, a favorite example, which was able to show exactly what's happening there. There was a question about some celebrity's spouse.

The model takes the question and says, hey, I need this guy's spouse. So the interface goes, extracts the first spouse it sees, and gives it out. The model just does its work. You asked it to give the answer; it gives the answer.

But the model, your LLM, is even smarter than that. You can use more of its potential by telling the model: hey, try to understand the intent of the question. Get more context. Okay? So now I ask the model, hey, what's the intent? The intent of the question is to answer who the spouse is. Now the model understands there can be multiple spouses.

So I need to go find the current spouse. When we go to the knowledge graph, we look for, okay, there's a list of spouses and there's a relation which says current spouse. So it extracts the exact information which I need. So

My idea is that we should be able to use the LLM not just as an interface. It's a two-way process, you know, like a conversation: okay, I understand that you gave me this information. Now, what was the intent of my question? I got this, but it is not matching my intent. This is an ex-spouse. So, okay, go back to the knowledge graph, get more information, because the knowledge graph has tons of data. And if we are traversing it statically, just looking at the nodes and the relations, if I just get the static information, it won't solve the purpose. So it's also about how mindful we can be, how smart we can be. And that's why I call it an engineering problem.
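
A rough sketch of that two-way loop, inspired by the Mindful-RAG idea rather than taken from its actual code; every helper here is a hypothetical stub you would back with an LLM call or a graph query.

```python
# Sketch: intent-aware retrieval with a check-and-retry loop.
# All helpers are hypothetical placeholders.

def get_intent(question: str) -> str: ...          # e.g. "current spouse"
def query_graph(graph, query: str) -> list: ...    # knowledge graph lookup
def matches_intent(facts: list, intent: str) -> bool: ...  # LLM judges fit
def refine_query(question: str, intent: str, facts: list) -> str: ...
def compose_answer(question: str, facts: list) -> str: ...

def answer_with_intent(graph, question: str, max_rounds: int = 3) -> str:
    intent = get_intent(question)
    facts = query_graph(graph, question)     # naive first retrieval
    for _ in range(max_rounds):
        if matches_intent(facts, intent):    # does retrieval fit the intent?
            break
        # e.g. we fetched an ex-spouse: ask again, more specifically
        facts = query_graph(graph, refine_query(question, intent, facts))
    return compose_answer(question, facts)
```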

Do the large language models need to be trained, specialized in some way, in order to interface with a knowledge graph? Just by giving it a prompt. Same setup: I just change my prompt and ask the LLM to understand the intent, and then go back and forth, ask for more information, use the context around the question, or what it understands when somebody asks about the spouse of a person: it is the current spouse, right? The LLM can be used in that way and not just as something static. So it can be trained, but you don't need to; it's already trained.

When it comes to the natural language variations, it's already trained, although it is an ill-conditioned model. By ill-conditioning, we mean that if you change the input slightly, the output varies a lot. So if I tell the model, hey, guide me on this.

Or, how do I make a coffee? You know, a very common example. And if you ask it, like, show me the steps to make coffee, it might give you a visual picture. So it also depends on how you are asking. You have to be mindful of that and play around with the model.

Is there any intelligence in a knowledge graph? Certainly it's a data structure that holds a lot of information and if one knows the data structure, you can query it in whatever way. But does it offer up a query language or help me find the information in some semantic way or is it just a data structure?

So a knowledge graph is a static representation of your knowledge. Whatever I extracted, as a domain or as books or articles, you know, whatever information I had, I just represented it. It's a dump of my knowledge. Now, on top of the knowledge graph, I can do reasoning.

It could be a query, like a SPARQL query. The difference between a usual relational database, Oracle DB or whatever we used to study, and a knowledge graph is that the knowledge graph allows you more flexibility. It was cumbersome to find relations between multiple tables, and as the data grows, it just becomes unmanageable to find the connections. A knowledge graph is just easier, because it's easy to make connections between subgraphs and graphs.

An entity can be linked to multiple nodes, and there's no end to it, right? And then it's easier to find things. And we have methods like entity linking, where we can define that multiple entities might mean the same thing, things like that. And we can write SPARQL queries; we can represent it in an RDF structure. So the structure is very flexible, and there are property graphs, there are different kinds of graph representations, and you can write different

queries using graph query languages and just work around them. So basically, the basis behind knowledge extraction is semantic similarity matching.
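
Continuing the earlier rdflib sketch, a SPARQL query over that toy graph might look like this, still with the invented ex: namespace:

```python
# Sketch: a SPARQL query over the toy rdflib graph built earlier.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?city ?state WHERE {
        ?city a ex:City ;
              ex:capitalOf ?state .
    }
""")
for city, state in results:
    print(f"{city} is the capital of {state}")
```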

Well, I know from your paper that you did quite an extensive survey of research in what people are doing in this general area. I'm wondering if you could summarize maybe where is the most tinkering going on? What are the research opportunities and where along this sort of pipeline we've been describing can improvements be made? Sure.

I would just quickly give the overall ecosystem, what is going on right now: where we started and where we are. The first thing which would come to mind is, let me train it on my knowledge base.

But training a huge model including all your knowledge is difficult. So then you do fine-tuning: okay, my architecture remains the same, I might just fine-tune it on my data. Then people thought, you know, what are the better ways? Because how many times am I going to fine-tune it?

So then we came up with the concept called retrieval-augmented generation, which is very popular right now and very cost-effective. Why? Because I can give it my KB articles, and my KB can change, it can be just a simple PDF. I augment it with a large language model, with a retriever in between which can go retrieve my knowledge and give it to the LLM. So this is basically the state of the art right now.
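
A minimal sketch of that retriever-in-the-middle setup; embed() and llm() are hypothetical placeholders, and the cosine ranking is just the simplest possible retriever:

```python
# Sketch of plain RAG: rank KB chunks against the question, then hand
# the top ones to the LLM. embed() and llm() are hypothetical stubs.
import numpy as np

def embed(text: str) -> np.ndarray: ...   # hypothetical embedding model
def llm(prompt: str) -> str: ...          # hypothetical chat model

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rag_answer(kb_chunks: list[str], question: str, k: int = 3) -> str:
    q = embed(question)
    top = sorted(kb_chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]
    context = "\n\n".join(top)
    return llm(f"Answer using only this context:\n{context}\n\nQ: {question}")
```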

So there are different ways in which it can be used, but the most cost-effective and easiest right now is the RAG model. We are doing work at Minerva where we are trying to optimize the cost of RAG. And we are in the customer service industry, where we have customer calls.

Now, for every call, for every query, do I have to make a RAG call and get all that information? The platform is handled by the human agent, but the intent is to give AI assistance. Now, if I do a RAG call for multiple queries,

and customers might have similar queries, why not identify what the frequently asked questions are? We did a patent on this work, so I can go a little further to explain it: we extracted the frequently asked questions from the historical conversations. Then I created a frequently-asked-question database.

And now, before we do RAG, I go and check. It can be my vector search, or it can be my model again, which understands the ongoing conversation before the agent even has to query the database. And if I don't find anything in there, then I go do the RAG, just a traditional RAG. So we save the agent's time, and we create the query from the conversation, because it understands the conversation very well. You give the prompt and ask it to summarize what the customer is asking: just don't worry about the email kind of questions, phone numbers, those kinds of questions. Just find the intent. That's what my paper on Mindful-RAG is about, the intent of the conversation. Just take out that summary. It brings out that question beautifully.
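
A sketch of that FAQ-first cost optimization, reusing the hypothetical embed(), cosine(), and rag_answer() stubs from the previous sketch; the similarity threshold is an invented knob:

```python
# Sketch: check a cached FAQ store before paying for a full RAG call.
# faq_db holds (question_embedding, answer) pairs; the helpers are the
# hypothetical stubs from the previous sketch.

def answer_query(faq_db, kb_chunks, summary: str, threshold: float = 0.9) -> str:
    q = embed(summary)  # summary = the LLM's distilled customer intent
    best_sim, best_answer = max(
        ((cosine(vec, q), ans) for vec, ans in faq_db),
        key=lambda pair: pair[0],
    )
    if best_sim >= threshold:   # cheap cache hit: skip the RAG call
        return best_answer
    return rag_answer(kb_chunks, summary)  # fall back to traditional RAG
```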

So there are a lot of engineering and research problems, you know, that we have to work around, and it's not about writing the code. Nobody cares about you writing the code, because the cloud can do it better than the code I write. But this is, you know, just getting into these little details and trying to find out how a better design can help us. Yeah.

It sounds like you've been through a progression of improvements there. Can you maybe compare and contrast what it looked like in the past, maybe when all they had was some Elasticsearch database? Is there a metric you follow for the yield, in some sense, that you get from all these enhancements? So we've come a long way, and I worked on Spark, Kafka, Hive, and those kinds of databases.

And from there, we have come a long way to now having LLMs, where people are focusing more on what LLMs cannot do and criticizing the models. I just like to focus on the solution, on everything I can solve, right? A lot of my problems are being solved. So I don't have to make it a superhuman, all-purpose model. If it becomes better, of course, that's good for us.

But it's able to solve a lot of my problems. We are able to advance the software, and the contact systems and the technology which we are able to provide to our customers are just a lot better, right? I worked on those chatbots when we were doing SVM-model intent recognition,

and writing the paths of conversation. You know, first designing those, and then identifying the intent, mapping it. We were doing it all manually, mapping the intents, because we had a team of taggers who would just sit and do the tagging: oh, this goes in this intent, that one in that intent, right? And, you know, you tag the chats, and on that you build a model, a chatbot, right?

And the customer is still not satisfied, because the chatbot doesn't answer what it is supposed to. Then you put a lot of buttons there so the customer can click on them and go down some path.

But you have to imagine all the possible paths. The one thing you miss is exactly what gets asked. It was a lot of frustration, both from the designer-developer and the user point of view. I feel like we have come a lot further there, way further.

Well, Garima, what's next for you? So right now I am working with Minerva, and we are trying to further reduce the RAG cost by seeing that, you know, if a question is not in the FAQ but maybe it's asked in a different way, then can we semantically match it with the current FAQs we have and curate an answer for it?

And other than that, in my personal space, I'm trying to grow this company, Humacon. The purpose is that a lot of investors probably do not know, or they struggle with, where to invest in AI if they do not understand AI. I want to help people invest in the right direction, because I understand it from the model perspective, not just as a sales or solutions person.

And all the business leaders, the executives, they struggle right now, and people are not able to implement their AI strategies. It is often just small tweaks in your AI pipeline which you need to make. And you need to have that human intervention. I think we are not at that point. And I don't know, maybe in another 10 years, but we are not at a point where you can give it to the AI and rely on it unless you train it specifically for that. And for that also, you need human intervention at every milestone, every stage.

Yeah. And then the startups and the businesses, if they are looking for any consulting on AI, then yeah, I'm happy to help. And I think people should stop being scared about AI taking over.

But maybe it's a time when we need to raise our bar a little bit.

For the monotonous, routine jobs, maybe yes, the AI is going to take over. But I mean, if you really enjoy doing those kinds of jobs, then I'm sorry. You know, it's time that we level up. Yeah. Well, is there anywhere listeners can learn more about Humacon and also follow you online? Oh, yeah. I'm available on LinkedIn as Humacon AI Consulting.

and my website, www.humacon.com.

And send me an email, or, you know, just connect with me on LinkedIn. Then, yeah, I'll be very happy to help you. Sounds good. Well, thank you so much for taking the time to come on and share your work. Of course. Thank you so much for having me here. It was nice talking to you. Great conversation. And great questions. Thank you. Thanks.