This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.
This podcast is supported by Google. Hi folks, Paige Bailey here from the Google DeepMind DevRel team. For our developers out there, we know there's a constant trade-off between model intelligence, speed, and cost. Gemini 2.5 Flash aims right at that challenge. It's got the speed you expect from Flash, but with upgraded reasoning power. And crucially, we've added controls, like setting thinking budgets, so you can decide how much reasoning to apply, optimizing for latency and costs.
So try out Gemini 2.5 Flash at AIStudio.Google.com and let us know what you build. It seems like AI is sometimes just an alphabet soup of the buzzwords of the day, right? Yeah, we all want to use...
AI and Gen AI and LLMs, and what happens when AGI comes, or ASI, right? But I think it's important to first understand the basics, right? And not just rush toward what everyone else is using. So today we're going to be breaking down how to choose the right AI. And we're going to be talking about algorithms, agents, and large language models.
I'm excited for today's conversation. I hope you are too. What's going on, y'all? Welcome to Everyday AI. My name is Jordan Wilson, and this thing is your daily live stream podcast and free daily newsletter, helping us all not just keep up with what's happening in the world of AI, but how we can
all actually use this information to get ahead, to grow our companies and our careers. If that's what you're doing, you're in the right place. It starts here with the unedited, unscripted live stream and podcast, but where you are actually going to grow is on our website. So please go to youreverydayai.com. There you can, yeah, go listen to and watch more than 550 episodes for free. It's a free generative AI university, but also you need to sign up for today's
daily newsletter. It is free. We're going to be recapping not just the best insights from today's episode, but also everything else you need to be the smartest person in AI in your company. All right, before we get started.
I'm bringing back the AI news for today. Podcast and livestream audience, let me know. Sometimes I do the news right before. Sometimes I don't. I've taken a little bit of a break. Let me know if you want it back. But let's just go ahead and go into the AI news for today, June 21st.
So first, Amazon Web Services has lost a pivotal AI leader as the talent wars intensify in tech. So AWS has lost Vasi Philomin, a vice president who helped lead its Gen AI efforts and the Amazon Bedrock platform, according to reports.
So Philomin's departure follows eight years at Amazon and comes as competition for top AI talent accelerates, with companies like OpenAI, Google, and Meta setting the pace and, as we've been talking about with all this buzz, reportedly sometimes offering $100 million annual contracts.
So Amazon continues to invest heavily in AI, including an $8 billion stake in startup Anthropic. Amazon has recently rolled out its Nova and Nova Sonic AI models, expanding capabilities in text, video, and image generation. According to its CEO, Andy Jassy, Amazon's advances in agentic AI could lead to fewer traditional corporate jobs
as automation replaces some tasks, even as demand grows for new roles in AI development. All right, our next piece of AI news: Meta has won a key copyright lawsuit over AI training. So a US judge has ruled in favor of Meta, dismissing a copyright lawsuit filed by authors, including Sarah Silverman and Ta-Nehisi
Coates, hopefully I didn't get that name wrong, who alleged that Meta unlawfully used their books to train its AI model, Llama. So the judge said the plaintiffs failed to show that Meta's AI would harm the market for their works, leading him to call Meta's use of the material fair use.
However, the judge did emphasize that his ruling does not mean all AI training on copyrighted material is legal, noting that using such works without permission could be unlawful in many situations. And this comes just kind of hours after Anthropic won a similar ruling about essentially training their model on books.
All right, last but not least, a little one for developers. So OpenAI has unveiled its deep research models and webhooks for its API. So OpenAI has announced two major updates to its API: the introduction of its deep research models and support for webhooks, the company announced on Twitter.
So the new o3-deep-research and o4-mini-deep-research models are the same advanced versions that power Deep Research within ChatGPT. So essentially you have a version in o3 or o4-mini. So if you are one of the countless companies building on top of OpenAI, or maybe, I mean, you probably don't even know, but your bank is probably using OpenAI's API, right? So now
the capabilities are really going to be expanded with deep research rolling out to the API. The models also come equipped with features like MCP support and a built-in code interpreter. And with the rollout of webhooks, developers can now receive real-time notifications for key API events, including completed responses and fine-tuning jobs.
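For developers who want a feel for what that looks like, here is a minimal sketch of calling one of the new models, assuming the shape of OpenAI's Responses API; the exact model names, tool types, and flags here are based on OpenAI's announcement and should be checked against the official API reference.

```python
# A minimal sketch of calling a deep research model through OpenAI's
# Responses API. Model names, the tool type, and the background flag are
# assumptions from the announcement; verify them against OpenAI's docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-deep-research",                # or "o4-mini-deep-research" for lower cost
    input="Summarize this week's developments in agentic AI for a bank's risk team.",
    tools=[{"type": "web_search_preview"}],  # deep research needs a search tool
    background=True,                         # long-running job: poll it, or let a
)                                            # configured webhook notify you on completion

print(response.id, response.status)
```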
All right. For those stories and a lot more, again, go to our website at youreverydayai.com and check out the newsletter. All right. Let's get to the real stuff here. How the heck do you choose the right AI? Should we all be using agents, multi-agent workflows, agentic workflows, traditional algorithms, large language models?
I don't know. It's a question that we're always talking about and something business leaders are always tasked with. So let's bring on someone to help guide us through today's conversation. So please help me welcome to the show, livestream audience, if you don't mind: we have Michael Abramov, the CEO of Keymaker and Key Labs. Michael, thank you so much for joining the Everyday AI Show. All right. Hi.
Thank you for inviting me. All right. Well, thanks for joining us, Michael. So first, before we get into it, can you tell us a little bit about what you do at Keymaker and Key Labs?
So, actually, yeah, Keymaker and Key Labs are a data labeling platform and service provider. So what we do is we prepare data sets for training models, whether it's computer vision models, whether it's LLM models, or any other AI, actually. We are preparing, like, all the training materials for that.
Yeah. So, and yeah, the training materials and the data, it's the hot topic, right? We were even mentioning that in the AI news today. But maybe let's fast forward
to the end, right? So for companies, it's hard to keep up with everything that's available. So there's traditional AI algorithms, there's large language models. Now there's these models, these large language models on the consumer side that are agentic, and then you have literally agents. So where do companies start? How do you choose the right AI?
Oh, that's an amazing question. And I don't have a very specific answer to that, because it's like choosing your life partner. When you're choosing your life partner, you might be 20 years old or any other age, and then your requirements at that specific moment are for something specific, but then
Your requirements change over time, right? And you want to choose the person who's going to be suitable for different
ages, right? The same with AI. You have to think about it. Do I choose it for the short term? Do I choose it for the long term? You know, what kind of tasks should it solve? Also, there is some kind of overpromising on social networks, because, I mean, people like us look at LinkedIn every day, right? You look at the LinkedIn
timeline and you see people saying, "Oh, I just replaced 50 employees with this. I just replaced 16 employees with that." And you kind of sit and think, "Hey, what am I doing wrong? Where did I..."
Why do I still have employees? Yeah, how does it work? What didn't I see? And then you try to do it, you try to play with some tools, and, eh, they might help you here, they might not help you there. So it's actually very hard, and I think there is no one very specific answer. But what I do believe every person
nowadays should do is play a lot with AI. I mean, play with it yourself. Don't listen to anyone. I mean, okay, I'm a CEO, right? And I'm managing a company with 480 people. My team is pretty big. I have a pretty
busy day and I don't have time for stuff. But half an hour on my calendar is always blocked off every day just to play with tools, just to play with things. And when you do, just register for all the platforms and try to see how they can help you. You're always going to go to your biggest pains. That's natural, that's organic.
I mean, you don't have to sit and think, what am I testing first? My emails, you know, elaborating the emails, or my Slack problem? You will always go first to the things that are the most painful. Now, after you play with it by yourself, you see how it helps you or doesn't help you. And now, Jordan,
for reference, it doesn't matter whether it's agent, agentic, LLM, BLM. I mean, just put all this terminology aside, right? Try to find what helps you, what solves your pain, your problem. Once you do, give an example to your employees, to your peers, to your colleagues. You can just come up and say, "Hey, I'm using this. I'm doing this and it helps me." Right?
And then you're going to see what's best for you. Yeah, I think there are some great points there, right? It sounds simple, but I love that you're actually blocking out the time,
experimenting with the latest technology, seeing what works, and then sharing about it. That's literally a great kind of roadmap. It's a similar roadmap to the one we share all the time. So I love that. But let's...
Even though you don't need to define everything, I think it'll be helpful for our audience because it can be confusing, right? Because you hear these things about agents and then you hear these things about multi-agentic AI and then you hear large language models. And now these large language models have agentic capabilities, right? Help us with the definitions. What the heck is an agent? What is a large language model? What's an algorithm?
Yeah, that's an amazing question. And I think, okay, we're speaking here about three things, first of all. An LLM, think about it just like a black box, right? You can ask it any question and it can give you any answer. And that's it. That's all it can do. So let's call it a model.
And an agent, think about it as an automated multi-model setup. So you can have like five or 10 or whatever number of models, and then you can put them on a dashboard and draw some arrows from one to another with if and else. I think many people here are programmers, like software developers, and might understand the if-else terminology.
But anyways, anybody else can also understand that. And now think about it: you want to go to Europe and you have built your travel agent, right? So you ask, hey, what about going to Paris next week? So what is it going to do? It's going to go to a model that checks your bank account to see if you have enough money. It's going to go to a weather
model that will check the weather in Paris and see if it's good for you or not, right? So let's think about an agent as just multiple models that know how to speak to each other and how to do this, you know, if-else inside the agent. Now, agentic AI,
it's a much more interesting concept. It's a concept, it's not a specific tool or specific thing. Agentic AI is when you have an agent, but all of these if-elses are being decided by another model. So it's not hard-coded that you have to check the bank account and the weather in Paris and the flights on different airlines. Another model, let's call it a supervisor model,
will look at the problem, at the task, and will decide what else it wants to put inside this thing, where it wants to go, and what it wants to do, and then apply agents inside the system.
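To make that concrete, here is a toy sketch of the difference in Python. The helper functions below (check_bank_balance, check_weather, search_flights, ask_supervisor) are hypothetical stand-ins for the individual models in Michael's travel example, with dummy return values so the sketch actually runs.

```python
# Hypothetical stand-ins for the individual models in Michael's travel
# example; in a real system each would be its own model or API call.
def check_bank_balance() -> float:
    return 1200.0                                   # dummy value for the sketch

def check_weather(city: str) -> str:
    return "sunny"                                  # dummy value for the sketch

def search_flights(city: str) -> list[str]:
    return [f"FLT-001 to {city}"]                   # dummy value for the sketch

# An "agent" in Michael's sense: several models wired together with
# if/else routing that the developer draws by hand.
def travel_agent(city: str, budget: float) -> str:
    if check_bank_balance() < budget:               # if: not enough money
        return "Trip is over budget."
    elif check_weather(city) == "rain":             # else-if: bad weather
        return f"The weather in {city} looks bad next week."
    else:                                           # else: all checks passed
        return f"Found flights: {search_flights(city)}"

# "Agentic AI": the routing itself is chosen by a supervisor model at run
# time. ask_supervisor is a hypothetical LLM call; here it returns a fixed
# plan so the sketch runs without any API key.
def ask_supervisor(task: str, tool_names: list[str]) -> list[str]:
    return ["bank", "weather"]                      # an LLM would decide this

def agentic_travel_agent(task: str) -> str:
    tools = {
        "bank": check_bank_balance,
        "weather": lambda: check_weather("Paris"),
        "flights": lambda: search_flights("Paris"),
    }
    plan = ask_supervisor(task, list(tools))        # supervisor picks the steps
    results = {name: tools[name]() for name in plan}
    return f"Supervisor gathered: {results}"

print(travel_agent("Paris", budget=900.0))
print(agentic_travel_agent("What about going to Paris next week?"))
```

The only real difference between the two is who draws the arrows: the developer in travel_agent, the supervisor model in agentic_travel_agent.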
Yeah. So help us break that down a little bit more for our non-technical audience. Explain the if-else conditional statement. What does that mean specifically in the context of AI or large language models? Yeah. Okay. So every kind of decision goes through some kind of decision tree. And you can
take an example from your day-to-day life. So you wake up in the morning and you want to take your kid to school, right? Now you have a lot of if-elses. So the first if-else: did he or she wake up? If yes, ask them to have breakfast. If no, wake them up first,
right? And then, do they have a fever? Maybe they have a fever, right? If yes, stay at home and, you know, rest. If no, let's, you know, brush teeth and go to school, stuff like that. So there are lots of decisions. Let's call it a decision tree, because it's hierarchical. Usually it goes from top to bottom, or whatever direction you choose. But there is an initial
condition, and then there are lots of final outcomes that depend on the decisions you made along the way. And all of these decisions are made by if and else: if this, then this; else, something else.
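Mapped into code, that morning routine really is just nested conditionals. Here is a toy sketch, with the inputs as simple booleans:

```python
# Michael's morning-routine decision tree written as plain if/else statements.
def morning_routine(kid_is_awake: bool, has_fever: bool) -> str:
    if not kid_is_awake:          # initial condition at the top of the tree
        return "Wake them up first."
    if has_fever:                 # if yes: stay home and rest
        return "Stay at home and rest."
    # else: the normal school-day branch
    return "Have breakfast, brush teeth, go to school."

print(morning_routine(kid_is_awake=True, has_fever=False))
```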
Yeah, I think that's a great way to explain it. And I love the example you just gave, but that just goes to show and emphasizes, ultimately, even what your company does with data, right? And making sure that you have the structured data to help answer those if-else conditions, right? Whether it comes to traditional algorithms, large language models, agentic AI, whatever it is, how important is having your data correct?
Oh, that's the most important thing. So I had this problem in my company where I was asking all of the people who work with me to be data-driven. Now, let me just explain what we mean by data-driven. Sometimes you can come and tell me, hey, Michael, most of the people in the world are afraid of losing their job now.
Okay. And when you say most of the people in the world, like, how many? Is it 88%? Is it 90%? Is it business people? Is it people from the United States? What kind of people are you talking about? Now, the second thing is, what does "most" mean? Where did you take this information from? Is it reliable? Can you rely on this information? Okay, you got it from Gartner, or you got it from a Google search. So, like, who wrote it, right? And maybe it's your
personal fear, and you are afraid of losing your job, and now you extrapolate this emotion onto most of the people in the world. I don't know. So you have to be data-driven. You have to come and say, I read like 12 different articles from different sources, and that's why I think it's true. Now, there is another problem.
I asked most of the people in the company to be data-driven, and I explained what data-driven means and how to acquire data and how to look at data, et cetera. I got the reverse problem, the mirror problem. What people did was they were super data-driven. They took data and built conclusions on the data, and they would come to me and say, hey, this is the problem and here is the data that proves it.
And it was super funny to see that the data didn't prove it at all. It was their perception of the data that proved it. Okay? And that's... If I give an example, we could say, you know, like when you say the sun has fallen and that's why the... I don't know, I don't even have an example, sorry. But...
But you know what I mean. You can relate things that are unrelated, and many people tend to relate everything, because people have to explain every single thing. And so this pseudo-data-drivenness is even worse than not being data-driven at all. And yeah, so we prepare data for machine learning training.
We have lots of problems over there. We have lots of misunderstandings of how the data should be structured, how it should be labeled, how it should be perceived. And even if you structured it really well and labeled it pretty well, the developers of the model might still do the wrong things in training.
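As a rough illustration of what structured, well-labeled data means here, a single computer-vision training record often boils down to something like this hypothetical example:

```python
# A hypothetical labeled training record for a computer-vision model:
# the image, the boxes a human annotator drew, and the class labels.
labeled_record = {
    "image": "frames/0001.jpg",
    "annotations": [
        {"label": "car",        "bbox": [34, 120, 220, 310]},  # [x1, y1, x2, y2]
        {"label": "pedestrian", "bbox": [400, 95, 455, 260]},
    ],
}

# Garbage in, garbage out: if the boxes or labels are wrong or inconsistent,
# a model trained on them learns the wrong thing. A simple sanity check:
for ann in labeled_record["annotations"]:
    x1, y1, x2, y2 = ann["bbox"]
    assert x1 < x2 and y1 < y2, "a malformed box would poison training"
```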
So, you know, one thing you mentioned in there, Michael, was, you know, whether you're getting your information from Google or Gartner. So I want to ask you here in a second about a recent Gartner study on agentic AI. But before we do, we're going to take a very quick break for a word from our sponsors.
This podcast is supported by Google. Hey, everyone. David here, one of the product leads for Google Gemini. Check out Veo 3, our state-of-the-art AI video generation model, in the Gemini app, which lets you create high-quality, 8-second videos with native audio generation. Try it with the Google AI Pro plan or get the highest access with the Ultra plan. Sign up at Gemini.google to get started and show us what you create.
So as we go back and forth between algorithms, large language models, and agents, it's no surprise that over the past year or so, the rush has been toward agentic AI. And it seems like every single company, even if maybe they don't need it or don't even fully understand it,
they're trying to dive all in on agentic AI. And there's a recent Gartner study that we covered in our newsletter yesterday that predicted that 40% of agentic AI projects will be canceled by 2027, due to high costs, unclear business value, or inadequate risk management
controls. So I'm not going to ask you to predict a percentage, Michael, but one thing that caught my eye from that study is risk, right? Can you talk a little bit about the difference between risk in large language models and algorithms versus agents? Because in my mind, with agents, it can get kind of risky if you don't have a strong human-in-the-loop connection.
Yeah, so okay, the risk is not in LLMs or agents. The risk, I mean, we're speaking about the risk for the businesses who try to, you know, make money on developing these agents and selling them to someone. Now, I have this idea, I call it the corporate blade. It's a huge blade that comes down when new technology evolves:
everybody is rushing out to implement some kind of wrappers or plugins or some kind of things on top of the new technology. And we see a lot of it with AI. Lots of my friends, lots of people out there are trying to build startups with AI.
Now, what usually happens in the first stages of such technologies, when they are still not stable, is that some projects die because of what you said, because they are not sustainable or something else. But some of them, the ones which are,
you know, successful, they also might fall. And they might fall because the giants will take the idea, and for the giants to take the idea and implement it is like one week, two weeks of work. So if you think about Perplexity, Perplexity is a multi-
million-dollar startup, and I think it has a multi-billion-dollar valuation already. But all it does is wrap ChatGPT, and then it adds a little bit better search capabilities on top of it, and also some different user interaction dynamics.
But it's nothing that OpenAI couldn't build into ChatGPT in two weeks, right? And if they like the idea, they can do it and just, you know, blade-cut out all of these startups that did, you know, very interesting things. And we see a lot of this.
We see a lot of agents, a lot of math tutors and personal assistants, psychologists, therapists, AI therapists, et cetera, or calendar assistants that are being wiped out by the next feature of
Llama or Anthropic or ChatGPT, et cetera. And maybe Gartner's research relates to that as well, not only to unsustainable businesses or bad ideas. Yeah. And so I'm curious if you could walk us through. So you said, how many employees are at your company again? It's 480.
480 employees. So 480 employees, and you obviously specialize in data labeling for machine learning. So I'm guessing that you've had a healthy amount of AI use, right? I'll make that assumption, right? Yeah. How are you even deciding, as the CEO of a growing company, how are you deciding, hey, when do we
step outside of the traditional decision tree algorithm to large language models, to agents? How are you making those decisions? I push my people toward innovation every day. Now, there's something special about my team, because I think in the beginning they were very curious about doing that, but at some point they were exhausted, because they said, "Hey, too much innovation. Let us
be in a little bit of a stable position for some time." And now we've found a balance. So there is a balance between routine work and coming up with innovations, how to make this routine work more performant, let's say, et cetera. But the main thing here is that I'm not saying, hey, come up with
an LLM idea or an agent idea or an agentic AI idea or something else. I'm not trying to say use this or that tool. I'm just saying, hey guys, here is a whole tool set.
We even have a once-a-week technical education session where we show off new things, everyone shows one another, et cetera. And then I say, here is a tool set and you have problems. Every day you have daily problems, you have fires that you work with, et cetera. Just see if there is a better tool than what you're using today. That's it. Okay?
I'm not asking every person to be a software developer, an agent developer, or stuff like that. And you'll be amazed by how many people who have never written an if-else in their life
take these tools, sometimes ask questions, sometimes ask ChatGPT, "Hey, I have this problem, how do I solve it?" Sometimes they ask their friends or myself or YouTube, and they build amazing things. Amazing. Many times I'm just saying, "Hey, this could be a startup."
I really have such examples, and I'm not going to spend your time on that, but I mean, it's amazing. And if we look at kind of the linear progression, and obviously this goes over the course of many decades, right? But if we say algorithms led to large language models, which led to agents, right?
And then there's a lot that probably comes after this. But what are some of those unknowns? Right? Because obviously the pace of innovation is going very quickly. We've heard the smartest people in the world, as an example, say, hey, once we get to quote-unquote AGI and multi-agentic AGI, things really escalate there. But what are those kinds of unknowns that may come after, whatever connects next on that linear graph?
Yeah, that's my favorite question, because we have known unknowns, which is like, oh, when we get to AGI, every person will lose their job and the AGI will do everything for us. And in that case, we have two directions. Direction number one: we're all going to die because AGI is going to kill us. And direction number two: we are going to live on welfare because AGI will work,
we're going to get payments from the government, and we will have a lot of time for creativity, to be artists, musicians, et cetera, because we don't have to work. We have enough resources to exist without working.
So there's lots of speculation around these two ideas, and I would call those known unknowns. Now, there are some unknown unknowns. And a good example of it: imagine the times when people were flying on zeppelins, right, or on balloons, I don't know how to say it...
Balloons, right? Yeah, yeah. Air balloons. Yeah. And the capacity of one basket in the air balloon was like 20 people. And then the engineers were thinking, oh, in 100 years, people will fly on an air balloon with a 100-person capacity. And in 200 years, it's going to be a 500-person capacity, right? So it's going to be like a bigger balloon. They could never imagine an airplane.
That was the unknown unknown for them. That's something that you can't even imagine. Now, for us with AGI, or AI overall, however you name it, right? There might be unknown unknowns in, I don't know, teleporting.
Did you think about teleporting? Would you like to be... I mean, for me... Oh, absolutely. Yeah, tell me if you need one. If I could choose the next feature I would love to purchase in this game world, it would be teleporting or invisibility. Right? So maybe we can get there. And maybe we can get to something that we can't even think about right now.
I don't know. Yeah. So, Michael, we've covered a lot in today's conversation. But, you know, as we wrap up, what is the most important
piece of advice or actionable insight that you have for business leaders out there who are maybe just scratching their heads when it comes to algorithms, large language models, agentic AI, agents? What do people need to focus on to make the right decision for their business? Experiment, guys. I mean, people: experiment, experiment, experiment,
and your every day should look like you are starting up a new company. Even though your company is amazing and your company makes lots of money and you are super successful, you should think: today, I'm going to create something new. I'm going to come up with some new idea and experiment with it. And then you can just implement it in your company. You don't have to open a new company for it. And I would say that most of the
interesting things that are happening inside Keymaker, in terms of improving performance in the teams, et cetera, you could take and make a separate startup out of. I mean, we won't spend time on that, but it's all because of experimenting. You have to experiment.
Awesome. Amazing, amazing advice and a very insightful conversation. Michael, thank you so much for taking time out of your day to join the Everyday AI Show. We really appreciate it. Thank you. Bye bye. All right. And if you miss anything, don't worry, we've got it all for you. A lot of great insights and advice there from Michael. So if you miss anything, it's going to be in our newsletter. If this was helpful,
If you're listening on the podcast, please make sure to follow the show and subscribe. Drop us a note as well. And then when you're done with that, go to youreverydayai.com. So make sure to check out the recap for this podcast. We're going to be dropping some additional information that we probably didn't have time to get to, as well as keeping you up to date with everything else you need to know. So thank you for tuning in. Please join us next time for more Everyday AI. Thanks, y'all.
This podcast is supported by Google. Hi folks, Paige Bailey here from the Google DeepMind DevRel team. For our developers out there, we know there's a constant trade-off between model intelligence, speed, and cost. Gemini 2.5 Flash aims right at that challenge. It's got the speed you expect from Flash, but with upgraded reasoning power. And crucially, we've added controls, like setting thinking budgets, so you can decide how much reasoning to apply, optimizing for latency and costs.
So try out Gemini 2.5 Flash at AIStudio.Google.com and let us know what you built. That's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit YourEverydayAI.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.