I recently had a conversation with Anatoly Shilman, CEO of Cogbias AI, about a common cause of AI failure: cognitive bias. The conversation left me even more convinced of how large the risks of blindly relying on large language models (LLMs) really are.
Many people habitually copy and paste whatever an LLM generates, assuming that models built by tech giants (OpenAI, Google, Microsoft) must be accurate and reliable. The facts say otherwise. LLMs are mirrors of the internet and of society, reflecting the problems and biases present in both. They are not omniscient, and their outputs can be flawed or outright wrong.
Anatoly points out that the bias in AI models originates with their creators: humans. When engineers design and train a model, they unconsciously bake their own biases and ways of thinking into it, so the model's information processing, data filtering, and presentation of results can all carry a subjective slant.
He stresses that we should abandon the fantasy that AI is all-powerful and adopt a "trust but verify" mindset. Blind enthusiasm for AI often makes us overlook its risks. Anatoly cited the example of a large law firm that used AI for case research, only for the AI to fabricate precedent cases that never existed; that alone makes the point.
We discussed several common cognitive biases, such as confirmation bias, framing bias, and the availability heuristic.
These biases appear not only in LLM outputs but also in the training data and in prompt engineering. Training corpora are enormous, and the models must label much of that data themselves, which inevitably introduces bias. Prompt engineering likewise shapes how a model interprets a request and what it produces; a poorly framed prompt can steer the model toward biased results.
Anatoly also discussed how large and small models differ with respect to bias. Because of the sheer volume of their training data, large models typically carry more inherent bias than small ones. Even so, "narrow AI" trained for a specific task cannot avoid bias entirely.
To mitigate bias in AI models, Anatoly recommends that companies have diverse teams review and label their models, invest in behavioral science and behavioral economics training, and bring in outside expertise when biased data could affect key decisions.
Finally, Anatoly emphasized that when working with important data that may carry serious bias, seeking outside professional help is essential. We should not base decisions on AI output alone; instead, we should objectively evaluate the data-processing workflow and apply our own domain expertise. Only then can we minimize the risks posed by cognitive bias and ensure AI is applied safely and reliably. AI is advancing rapidly, but we cannot ignore its inherent limitations, and blind trust will have serious consequences. We need to stay vigilant and keep learning and improving in order to make the most of AI while avoiding its risks.
This is the Everyday AI Show, the everyday podcast where we simplify AI and bring its power to your fingertips. Listen daily for practical advice to boost your career, business, and everyday life.
Do you ever just blindly copy and paste what a large language model gives you? Right? I get it. We're all overworked. We're stressed. There's so many things. Your manager is demanding more now that you're using AI. But
that can actually be very dangerous, right? Just blindly trusting what a large language model like ChatGPT or Gemini or Copilot or Claude spits out. And one of the biggest reasons, and I think a reason that sometimes AI fails, is because of cognitive bias.
Right. Essentially, large language models are a reflection of the Internet. They're a reflection of society. And there's a lot of things wrong. And sometimes these models aren't the absolute truth. Sometimes they're very flawed. So we're going to be talking about that more in depth today, as well as what you can do about it and how to keep an eye out for the different types of biases, right, I guess that's how it's said, that can show up in your large language models.
All right, I'm excited for today's conversation. I hope you are too. Welcome to Everyday AI. Maybe it's your first time here. If so, where you been for the last three years? We do this every single day. My name is Jordan Wilson, and this is your daily live stream podcast and free daily newsletter, helping everyday people like you and me, not just learn AI, but how we can leverage AI
to grow our companies and to grow our careers. I want you to be the smartest person in AI in your department at your company. So if that's what you're trying to do, you're going to want to go to our website. That's youreverydayai.com. We're going to be recapping today's conversation as well as
really recapping everything else you need in the world of AI. And we do that every single day in our free daily newsletter. So if you want more insights from today's show, make sure you go sign up. All right, before we get started, and I'm excited to talk about the top reason for AI failure, cognitive bias, let's first go over what's happening in the world of AI news. So Microsoft has launched two new AI sales tools, and it looks like it's to compete directly with
Salesforce. Yeah, Salesforce kind of picked a fight with Microsoft, and Microsoft has now introduced two new AI tools, a sales agent and a sales chat, to streamline the sales process as part of its Microsoft 365 Copilot platform.
So the sales agent automates lead qualification, meeting scheduling, and follow-ups, while sales chat delivers actionable sales insights using CRM records, emails, and meeting notes. Both tools integrate with Microsoft Dynamics 365 and Salesforce, funny enough, minimizing reliance on traditional CRM systems.
So this tool will be available in public preview in May, signaling Microsoft's aggressive expansion into AI-powered business applications. So yeah, Salesforce's CEO has been a little critical of Microsoft's AI approach, and they launched their Agentforce. So Microsoft is just clapping back.
All right, more big tech. Google has launched its new AI Mode in search, kind of going after Perplexity and ChatGPT, which were originally just going after Google. It's like this full-circle weird moment. So anyways, the new
AI Mode is part of the Google One AI Premium subscription plan, and you can access AI Mode starting this week through Search Labs, which is Google's experimental platform. So the feature is powered by Gemini 2.0, Google's latest AI model, which enhances reasoning, thinking, and multimodal capabilities to handle exploratory and comparative questions effectively.
So AI Mode uses a query fan-out technique to issue multiple related searches simultaneously across various data sources, consolidating the results into detailed and accurate responses. So yeah, if you're a paid subscriber, you can access AI Mode, or at least sign up on the waitlist, by visiting Search Labs, or you can go to google.com slash AI Mode.
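Google hasn't published AI Mode's internals, so the sketch below only illustrates the general query fan-out pattern described above: issue several related sub-queries concurrently, then consolidate the results. The search() helper here is a hypothetical stand-in for whatever retrieval backend a real system would call.

```python
import asyncio

# Hypothetical search helper -- stands in for a real retrieval backend;
# not an actual Google API.
async def search(query: str) -> list[str]:
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"result for {query!r}"]

async def fan_out(user_query: str) -> list[str]:
    # Issue several related sub-queries concurrently ("fan out"),
    # then merge the answers into one consolidated result list.
    sub_queries = [
        user_query,
        f"{user_query} comparison",
        f"{user_query} reviews",
    ]
    batches = await asyncio.gather(*(search(q) for q in sub_queries))
    # Flatten and de-duplicate while preserving order.
    seen, merged = set(), []
    for batch in batches:
        for item in batch:
            if item not in seen:
                seen.add(item)
                merged.append(item)
    return merged

if __name__ == "__main__":
    print(asyncio.run(fan_out("best smart thermostat")))
```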
All right, last but not least in AI news, a big, big one. OpenAI is betting people are going to really love agents so much so that they reportedly might be offering one that costs $20,000 a month. Yeah, that's a month.
So OpenAI is making headlines with its bold pricing strategy for its advanced AI agents, reportedly charging up to $20,000 per month for enterprise-level automation tools. So these AI agents are described as PhD-level and are designed to take actions on behalf of users, targeting large companies looking for scalable automation solutions.
So a lower-tier version of the AI assistant might be priced at $2,000 a month, and that's aimed at high-income professionals seeking premium AI capabilities. This marks a major shift from OpenAI's previous subscription model, where its highest-priced plan was $200 a month.
This is just according to reports. These are more or less just well-vetted rumors. So this hasn't happened yet. So don't log in yet to your ChatGPT account trying to sign up for a $20,000 plan. I mean, that's wild, right? All right. Enough about that. We have more on those stories and a lot more in our newsletter. So make sure you go to youreverydayai.com and check it out. All right. But let's talk
cognitive bias, because I think so many people are just blindly following what comes out of a model, not even knowing some of the dangers that that entails. So please help me welcome to the show. I'm excited for today's conversation. I hope you are too. So we have with us Anatoly Shilman, the CEO and co-founder of Cogbias AI. Thank you so much for joining the Everyday AI Show.
Thanks, Jordan, for having me. All right. I'm excited for this one. Livestream audience, thanks for tuning in. Big Bogey and Michelle and Marie and Jamie and Vincent and everyone else. If you have questions, get them in now. But let's start at the top, Anatoly. So what is Cogbias? What is it that you all do? Well, we built a platform that detects and mitigates cognitive bias in communications.
What started off as our own internal project to help us ask better questions during customer discovery has now turned into a full platform. We're able to take people's questions for customer discovery, marketing research, NPS scores, assessments, anything,
and give them a breakdown of the biases they may be facing within the questions they have. And on top of everything else, we rephrase and give them better suggestions on how to do it better. The same now applies to their emails. So if you have that difficult email to write, say you've got to break some bad news, or it's an angry email, you know how they say to wait 24 hours before sending an email. But with our client, you can actually write that email, and our system, based on the actual context you give it,
will rewrite it for you in a better way and also tell you what was problematic about your original email. And we found that a lot of folks such as salespeople, obviously marketing research, UX, UI, product managers have been using our product. And it's funny, one of the things we've been discovering is that people are constantly creating new ways of using it. One of the things that's coming out very soon, as you mentioned, AI agents,
is that we are actually able to audit AI agent conversations and detect the biases that they have and make reports for companies to make the changes necessary to make them better. So I want to kind of skip to the end here, and this is kind of how I started off the show, right? Because I think so many people just blindly either copy and paste what comes out of a large language model, or they just inherently trust it.
as being accurate and factual and bias-free. Why is that a mistake? Well, the biggest is because AI is like people. It was built by people, just like Google was built by people. Before we had the whole AI explosion and people went on Google, "Well, it's on Google, it must be true."
was a very constant refrain that people gave, and that's just absolutely not correct. And I think one of the things you'll discover in AI more and more often, what people are calling hallucination, is really more a form of BS. There's actually a very popular paper called "ChatGPT is Bullshit," and it was written simply from the perspective that, you know, AI is really more like your know-it-all friend. Everybody has one in their circle. They tell you all these incredible things, and
most people are like, well, they know everything. They accept it as fact. But what the AI has to do is answer your question. So even if it can't find the answer, it'll make the answer up. Obviously, many lessons in that. Most recently, one of the bigger law firms in America had a scandal where the AI they were using to do casework actually created precedent cases on its own. So it created a whole universe of cases that never existed.
It's a consistent theme over and over again. What we end up with is people are so enthusiastic about this new leap in innovation that they forget that, just like anything else, you should have an attitude of trust but verify. I recently did a TEDx event, and the conversation there was the future we make and the future that is AI. What was most interesting about it: initially, I asked people, how many of you are enthusiastic and excited about AI? And
the whole room raised their hands. It was very exciting. But as soon as I started talking about some of the things that have been observed, and actually started asking people individually, like, well, tell me your real thoughts on AI, it was always a trust-but-verify attitude. And I think the issue is a lot of the time people think, well, you know, obviously, look, it's made by OpenAI. It's made by Microsoft. It's made by Google. It has to be good.
But they're not infallible. They can make the same mistakes. And because they're built by engineers, they have the biases that those engineers have. They're extensions of our human personalities. So the way they gather the information, the way they disseminate it, is reflective of humanity.
So I think maybe let's break this down piece by piece, or we'll go chain of thought on this episode title here, right? But what is cognitive bias, right? I think all people kind of understand what it is, but what is actual cognitive bias? The best way to think about cognitive bias, and there's a really long scientific definition that I will not bore you with, is that it's honestly irrational beliefs based on our perception. That's the best way to put it.
Extremely simplistic, mind you. So please, none of the psychologists in the crowd yell at me, but I'm just trying to make sure that it's something easy to understand. And what I mean by irrational is that we don't think clearly when we have certain beliefs, right? Because our ability to have cognitive bias is what gets us through the day. Ultimately, we make a choice every morning when we get up
to get dressed a certain way, to do our hair a certain way, to drive a certain type of car and everything else, because of the way either we want to be perceived or we perceive ourselves, or the feeling that it gives us. These are all biases. The key point of cognitive biases is they're not bad. They're just part of our humanity. Some of the most well-known biases are confirmation bias, framing bias, and the availability heuristic. Those are things that
help us and hurt us, whenever the situation calls for it. Sometimes the availability heuristic is reaching for the first thing that's closest available to us to solve the problem that we have. So in some cases, it's a hammer to drive a nail into the wall. Other times, it's going to be a flat object, because that's the closest thing to us.
It's kind of the same way we operate with a lot of the things that we do. So choices from a perspective of, hey, I need 10 questions, I have to send out a survey to my customers. Let's ask ChatGPT for the top 10 questions about car buying. ChatGPT spits out the questions and bam, all of a sudden you get through a whole process where it becomes, here's the questions. And you're like, well, these sound good to me. They're perfect. There's no breakdown. There's no analysis. There's no belief.
So I want to break down two key words I heard you say there. So, you know, you said irrational beliefs based on perception. So beliefs and perceptions, right? Because these are things that most people probably don't think go into large language models, right? Beliefs and perceptions. Those aren't fact-based. Those aren't scientifically researched beliefs.
How does that happen and how can people be on the lookout for when that does come through a large language model? Well, I wish there was a simple way, right? The first thing is it happens because humans are the ones who make it. So even if AI makes another AI, it's based on the original programming of the human. So you're just going to have a new permutation of the same biases or an evolution of some new biases based on the old biases. The key thing to understand is like, you know,
For instance, an engineer will program an AI and say, I want you to put the information out this way. And when they ask for a list, this is how the list will be, based on this thinking. And when you're pulling from news sources or media sources, these are the first 500 you're going to look at before you look at anything else to solve the problem.
Is it because of their personal beliefs? Is it because of their perception of what's reputable versus what's not? As a news source, we don't know, right? And the same applies on the other end. As it kind of goes through the process of coming up with answers, if it can't find it in those 500, and I'm just making up that number. I don't really know what the real secret sauce is in those cases.
All of a sudden, it becomes a situation of, well, if it can't find the answer in those 500, it may think the other ones are less reputable. So instead, it'll come up with its own answer, or it will add its own little spin to it. And because it's AI, you think, well, it's a computer that answered it. It must be correct.
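Nobody outside these labs knows the real retrieval rules, and the "500 sources" figure is Anatoly's own placeholder, but a toy sketch can show how a curated source cutoff quietly encodes its author's judgment. Everything here, the domain names, the allowlist, and the ranking rule, is hypothetical:

```python
# Toy illustration only: a hard-coded "reputable sources" cutoff like the
# hypothetical 500 described above. Whoever writes this list decides,
# implicitly, whose voices the system hears first.
PREFERRED_SOURCES = ["example-wire.com", "example-journal.org"]  # curated by an engineer

def pick_sources(candidates: list[dict], limit: int = 500) -> list[dict]:
    """Rank candidate documents, preferring the curated allowlist."""
    def rank(doc: dict) -> tuple:
        # Allowlisted domains sort first; everything else falls behind,
        # regardless of how relevant or accurate it actually is.
        return (doc["domain"] not in PREFERRED_SOURCES, -doc["relevance"])
    return sorted(candidates, key=rank)[:limit]

docs = [
    {"domain": "obscure-expert-blog.net", "relevance": 0.95},
    {"domain": "example-wire.com", "relevance": 0.60},
]
# The less relevant but allowlisted source wins the top slot.
print(pick_sources(docs, limit=1))
```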
Yeah, yeah. That's the worst thing you can do with a large language model, right? It's this, like, oh, it's a computer, it has to be right. But so many of the things that we ask large language models, there's nuance, right? It's not binary. We're asking it for strategy, to make decisions. We're not necessarily always asking it to count the number of R's in strawberry or the capital of Illinois, right? But maybe, could you walk us through just what are the types
of biases? And maybe just briefly, you know, like confirmation bias, right? Could you walk us through briefly two or three of the most common types of biases that show up in large language models and what they mean? Yeah. Obviously confirmation bias is probably the most well-known one. It's confirming its own initial beliefs. And quite often, the best way to consider it is not from the point of view of the AI, it's from our point of view.
How biases really impact us is our perception of what is being said to us, shown to us, etc. So AI is going to respond to us in a specific way and bias us in that way. So in some cases, we'll ask it a certain question, it'll respond back, and it'll trigger our confirmation bias because it's going to be confirming our facts. So if we ask an AI question that has an obvious answer,
It's going to spit it back out at us in a specific way, just make it prettier, more or less, or more sophisticated, or expand on it more. So confirmation is a big one. Framing bias. We frame something in a specific way to get a specific answer back.
So if we say, and I'm just making this up, Mercedes-Benz is the fastest car and the best car for the money based on the luxury, blah, blah, blah, and then we're going to ask questions about it, now the AI is going to be responding back in the same way, just like humans would. Because again, the AI is not here to argue with you. I know we've seen those comical stories where AI starts arguing facts with you, but that's not really the reality of how it operates. And then obviously I talked about the availability heuristic,
which is one of the most interesting ones, because, like I said, it's the lowest-hanging fruit, the most basic way of saying it: people reach for the first thing that's available to them that may seem like it solves the problem. Yeah.
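To make the framing effect concrete, here's a minimal illustrative sketch. Both prompt strings are invented for this example; the point is that a helpfulness-tuned model will generally elaborate on whichever premise it is handed rather than challenge it.

```python
# Two framings of the same underlying question. A model tuned to be a
# helpful assistant tends to elaborate on the premise it is given
# rather than push back, so each framing pulls the answer its way.
neutral_prompt = (
    "Compare the Mercedes-Benz C-Class with three competitors on price, "
    "reliability, and performance. State the criteria you used."
)
framed_prompt = (
    "Mercedes-Benz makes the fastest, best-value luxury cars. "
    "Explain why the C-Class beats its competitors."
)
# The second prompt smuggles the conclusion into the premise; the model's
# job becomes justifying it, which then feeds our own confirmation bias.
print(neutral_prompt)
print(framed_prompt)
```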
You bring up a fascinating point here that I want to dive a little bit deeper into, right? So when models are essentially mirroring our own beliefs, right? But I think what's important to call out is, you know, the system prompt, right? All large language models have system prompts. And one thing you said there is most of them are designed to be a helpful assistant, right? So
even if there's not an answer, they kind of want to be helpful. And I think that's why sometimes you get these, you know, halfway answers, or things that maybe you look at and you're like, is this right? Well, it doesn't always matter, because it's ultimately trying to be helpful. But, you know, I want to ask you, how does the conversation and the context
of a large language model, whether it's Copilot or ChatGPT or whatever, how is that going to influence it? Like, actually how we're prompting it, and the outputs we get in terms of bias? Well, it's actually a big, big factor. If you think about it from a perspective, say you ask it
for a specific element or a specific answer. And then you say, well, now I want you to write it, but pretend you're an engineer with 20 years of experience and write it in those terms, but write it nicer. So now there's the bias element of: what truly is a 20-year engineer? How do you write nicer? What is nicer?
The sheer definitional elements, and how the biases are perceived from there, become a hot mess. And quite often that's where the prompting kind of falls apart. And that's why for a while people said, well, you know, we don't have to know how to do prompting anymore because AI is so smart. I'm like, unfortunately we do, because the one thing that AI claims to do that it actually doesn't is understand well. It understands initially at a very bare-bones level. I heard a very great quote yesterday
at an event where they said this: "At this point in time, AI is the worst it's ever going to be." And it's a true statement. Right now we're at the very beginning, at the very earliest stages. So quite often people have expectations of a flying ship when we're probably somewhere closer to a horse-drawn carriage by AI standards.
We'll get there. But the problem is, as things get more sophisticated, the more complex it'll get from the perspective of how we ask the question so that it's perceived by the AI in the right way. Because we'll say, write it nicer. So it'll change a few words. It'll sound nicer to us. But if the context still contains biases that are harmful to our objective, is that really helpful to us? No, it was just an answer given because we said write it nicer: fine, I'll put some puffery around it, I made it nicer.
And so our prompting doesn't necessarily help it be better at its job. Our prompting just, again, confirmation bias, helps it confirm that we want something nicer. So it'll change the tone to its perception of nicer, but not necessarily solve the actual problem.
Let's maybe talk about the root of this, right? Because, you know, I kind of referenced that large language models are a reflection of the internet, and that's a reflection of humanity, right? And that's why there's sometimes stereotypes and biases to begin with. But walk us through how our models are actually reflecting these biases in the long run. So maybe can you just walk us through training data,
and, like, where do some of the issues in terms of cognitive bias get inserted into this whole equation when it comes to training data? Well, I mean, right, we kind of talked about the idea: when they even begin, they're telling the AI, this is how you're going to parse out information, this is how you're going to pull it apart, this is how, based on these requests, you're going to gather it together. Then we have to start thinking again.
Once those issues are inserted, then it has to start thinking: what's a reputable source of information?
and then it starts trying to pull that data out, we still have to consider the element that it's very much the equivalent of drinking from a fire hose. I can't even imagine the sheer amount of petaflops and God knows what other measurements we could apply of information that are constantly flowing through that it has to parse through to delineate whether or not one specific minuscule thing that we're asking has the answer for it. And it still returns it in seconds.
And that stuff is where you really start falling apart on training data. Because, you know, there was recently a big thing with DeepSeek, you know, how they trained it for $5.6 million. Obviously that's not true. But, we did a cute episode on that, but thank you for calling that out. Yeah, it was a cute number, though, right? And the big thing about it is it kind of brought to the forefront: what does training really mean? Right. Yeah.
And when we think about training a small model, it's literally us humans sitting down and working through the labeling elements of specific data points and how the AI should treat those elements based on the labels we assigned to it. In the case of
a massive model with massive, massive, massive amounts of data, it labels it itself. So it's only taught to label things. And what's the issue when you teach AI to label things? You wish it were more like, you know, all colors of the rainbow, being able to see all points. It can't. It's much more limited. It doesn't have that neural depth that we have right now.
So instead, what it does is, you know, more or less apply if-then rules. A lot of those are applied. And I'm simplifying way too much. That's not really how AI works. But for the basis of understanding, it's really how it kind of perceives information.
Does this answer the question? Yes, no. Next. Does this answer the information? Yes, no. Next. And it goes through that whole routine. And when it gets to a certain point where it was like, well, this partially kind of answers the information. If you extrapolate this and then kind of, you know, smooth it out, which is your BS factor, that's the information. Yeah. So in every hallucination or BS point that AI makes, there's always elements of the truth in it, which makes it so convincing. Yeah.
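Anatoly says himself that he is oversimplifying; in the same spirit, this deliberately crude toy (not how real language models work) shows how a partial match that gets "smoothed out" can yield a confident answer with just enough truth in it to be convincing:

```python
# Deliberately crude toy of the "does this answer it? yes/no/partial"
# routine described above -- NOT how transformers work, but it shows why
# a partial match plus "smoothing" can produce confident nonsense.
def toy_answer(question: str, knowledge: dict[str, str]) -> str:
    q_words = set(question.lower().split())
    best_key, best_overlap = None, 0.0
    for key, fact in knowledge.items():
        overlap = len(q_words & set(key.lower().split())) / len(q_words)
        if overlap > best_overlap:
            best_key, best_overlap = key, overlap
    if best_overlap == 1.0:
        return knowledge[best_key]  # clean match: a real answer
    if best_overlap > 0.3:
        # Partial match "smoothed out" into a fluent reply: the grain
        # of truth that makes the confabulation so convincing.
        return f"It is well known that {knowledge[best_key]}"
    return "I don't know."

kb = {"capital of france": "Paris is the capital of France."}
print(toy_answer("capital of france", kb))
print(toy_answer("capital of french polynesia", kb))  # confident nonsense
```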
And that's the biggest thing to consider like during the training, because when you consider what's the training, you know, they do spend months and months and potentially years training models, but it's not like they're hand labeling stuff. What they're really doing is just overseeing this enormous model, trying to label everything.
And then doing audits and checking and checking again and rechecking and seeing, like, whoa, that's way wrong. And if it's big enough, they catch it. The problem is, with sheer amounts of data, it's just not possible right now to catch everything. Will it ever be? I can't possibly answer. I don't think they can either. And that's why, even though you see these new evolutions in technology,
we see these new evolutions in agents, literally almost every week a new tool comes out, and it feels like it's the next tool and the next tool. The consistent theme is the same. They're not actually creating better depth. They're creating better response time, maybe lower latency, cheaper. They're sometimes making a better conversational piece to it.
But the info being put out is still very much the same. And for us, when we actually looked at the idea of how it measures cognitive biases versus the scientific models we have, the consistency of ChatGPT and Claude and a couple of the others was around 30 to 40% versus what we do. And the reason being is, again, the sheer amounts of data, and how you label it, and how the scientific definition actually applies to a specific word or a specific nuance in the sentence is not their forte.
So when you see where AI is heading, we have the general AIs, and I think you've probably talked about this in the past, the explosion of narrow AIs that are going to be good at specific elements, and that's going to be their main motif.
Are you still running in circles trying to figure out how to actually grow your business with AI? Maybe your company has been tinkering with large language models for a year or more, but can't really get traction to find ROI on Gen AI. Hey, this is Jordan Wilson, host of this very podcast.
Companies like Adobe, Microsoft, and NVIDIA have partnered with us because they trust our expertise in educating the masses around generative AI to get ahead. And some of the most innovative companies in the country hire us to help with their AI strategy and to train hundreds of their employees on how to use Gen AI. So whether you're looking for ChatGPT training for thousands,
or just need help building your front-end AI strategy, you can partner with us too, just like some of the biggest companies in the world do. Go to youreverydayai.com slash partner to get in contact with our team, or you can just click on the partner section of our website. We'll help you stop running in those AI circles and help get your team ahead and build a straight path to ROI on Gen AI.
So a great question here from our audience. And you kind of mentioned the future of AI, right? Like everything going agentic, or multi-agent environments. But obviously now we have these models that think, these models that reason, these models that take their time. So a good question here from Cecilia asking, what do you do to detect cognitive bias and encourage maybe slow thinking
versus the fast thinking, when we want AI to be fast? Yeah, and I'll even add on to her question: do reasoning models, you know, that take their time to think, is that also a process where maybe we're going to see less bias? We may potentially, but again, it still falls back on the developers, right? It's a...
Unfortunately, it's like a wheel. As soon as you insert humanity into the wheel, we have our biases. All of us do. That's not a bad thing, like I said. It's just, unfortunately, how we perceive information and other things will dictate some of the more harmful biases that pop up.
Which is quite often when you hear about stereotypes and stuff. It's not that the engineer wrote, "Yes, men are better than women," or some other thing. No, it's literally their biases and how information is parsed, what gets priority over what, causes the model to extrapolate into the next piece, which is, "Well, this is how I'm going to perceive things."
And while they have hundreds upon hundreds upon hundreds of engineers constantly looking and auditing and checking, it's just, again, such a vast amount of information and such a vast amount of variants that it's not possible. To Cecilia's question, what you're dealing with, you know, there's a great book by Daniel Kahneman called Thinking, Fast and Slow, which kind of is...
I don't want to say he's the father of modern cognitive bias behavioral economics. He kind of is, though, right? And so the best way to kind of consider it, there will be a space for these models that are more analytical and focused on things. It's the same thing as we have other things that do certain more complex things. I always liken it to the idea, you know, people have an Apple Watch and then people have a manual watch that still has gears in it.
And they prefer that element because they feel to them it's more reliable because it doesn't run out of batteries. It just moves because you move. Yeah, I think that's a great example. And we'll definitely put that in the newsletter, just kind of about the system one versus system two thinking. I think it's important when we think about using AI. Another good question here from Douglas asking, are there some models that have more inherent bias than others because of how they were trained?
Honestly, that's very subjective. Unfortunately, all of them have bias, and there's no such thing as harmful bias per se; it's simply their perception of certain information points, how you ask questions, right? Quite often, what we noticed with a lot of our users is they actually will generate stuff on, like, ChatGPT, Dovetail, name a system that generates questions.
And then they'll run them through our system. And then that will be the result that they use for their actual product. And the reason why they do that is twofold. Sometimes they're fine with the questions. They just want to understand, how am I going to be biasing my audience? Because no matter what you ask, you're always biasing them some way. But the question becomes, am I biasing them just to answer the question truthfully if I'm doing marketing research? Or if I'm doing sales and I want to prompt them to action, am I doing that effectively?
So those are the things you're really pushing on. But right now, for the most part, I think if you look at the size of the model, large language models have a lot more inherent biases than smaller models, and that's just by the sheer amount of data they consume and also the sheer amount of hands that touch the model. At the same point, narrow AIs totally have bias in them too. And the key thing that we try to do,
I've always been a big proponent of having a very diverse team look at the model and label it and look at everything else and how it perceives data. And that's the reason why. Because if you have a large variety of people look through it, the perspective on how to parse data isn't coming from just one or two individuals, or ten individuals. You're going to be able to do these things in a much more, you know, almost
objective way, to a degree, if that's the proper word. And when we developed our model, for instance, and again, narrow AI, right? Very different. We started with 17,000 questions and we looked at their sentence structure. And from there we were able to extrapolate to 450,000 questions, somewhere in that neighborhood, probably even more. That's one of the numbers I heard.
And the idea is, now that we're able to label those 17,000, we can give the model the knowledge that these are the scientific facts about those cognitive biases: this is how you're going to react to this based on the scientific definition, not our perceptions, but what science defines. Now, does that mean science doesn't have biases? No. But science, based on the best thinking we have in existence for our society, this is what it is. And as it evolves, we evolve.
This is what the bigger models do too, because as they keep coming out with new versions, it is our hope that they keep updating that particular element
of how the model thinks. But the question I have, and I think everybody has, is with such a fast evolution cycle, I mean, we're talking sometimes 30 days between new releases, if less. Obviously, a lot of that stuff is maybe quick fixes, but you always have to wonder what's going on in the background. Are they actually fixing the main issues or are they just adding to them by creating more features?
Yeah, yeah. And I think sometimes you spend time circumventing some of those shortcomings, and then a new model comes out and it's like, okay, what about all that work we put in to build bridges around, over, or through some of these inherent problems?
But we've talked about a lot in today's conversation, everything from bias in training data and bias in the humans that are building it, to the types of cognitive bias, which is super helpful. But as we wrap up here, I think I'm going to toss this over to Big Bogey here on YouTube. I think this is a great way to wrap the show. So he's asking, how do you remove unwanted bias when that bias may be deeply entangled with essential data? And I'll even say,
you know, hey, aside from using your platform, how should companies be tackling this? Because it's a huge issue. Well, I think the element has to be the value of the data and the value of the output of the data, right? Because quite often, if you're looking at specific data where the bias may, for instance, affect a key decision the company is making,
getting outside help will be essential, because, and obviously my platform can help with a lot of the question stuff and the other stuff, but when you're looking at the bigger pieces, there are consultancies like Percipio out in California whose entire shtick is to look at cognitive bias and how it impacts decisions. Because the elements that he's talking about, you know,
Essential data may have cognitive biases deeply entangled in it, but ultimately our perception of that data is what causes the biases that will impact our decision-making.
The data will bias us in a way, but we have to make the choice of how we receive it. And if we're already aware that we may have a problem with it, that's when we have to seek outside counsel and bring in another pair of eyes to oversee our process, and to figure out whether we need a more subjective or a more objective process for how we make choices.
Because ultimately, again, humans, right? Just blindly trusting a machine, well, it's not a great idea, especially when the data is complex, or maybe, to a degree, very much human-related,
and something that an AI could not possibly comprehend the same way. Then it becomes a situation where you have to make your best choices. There are a lot of good classes, a lot of good reading, a lot of good curriculum. I've always been a huge proponent of companies doing behavioral science and behavioral economics training, for the very reason that, not because it can replace tools like mine, but it can enhance people's ability to spot the issue and then seek the solution, versus
finding out after the fact, oh my God, our sales calls and our sales meetings and the way we extrapolated this data from these sales numbers was completely wrong. The customer didn't really want this. They just felt they had no choice until they found a better solution. So much good advice there. Anatoly, thank you so much for taking time out of your day to join the show. You helped us, I think, make
much better sense out of a very complex and very important topic that we all need to understand. So thank you so much for your time and insights. Thank you so much for having me. And I really appreciate the podcast. Honestly, I enjoy it quite a bit. That's great. Hey, I do too, but you know, it'd be weird if I didn't. So hey, if you enjoy the podcast too, if you heard something here from Anatoly and you're like, wait, what was that? Don't worry, we're going to be recapping it all in our
free daily newsletter. So if you haven't already, please go to youreverydayai.com, sign up for that free daily newsletter. If this was helpful, tell someone about it. Please subscribe to the channel, leave us a rating, tell your friend, tell your mom, tell your neighbor, tell your mom's friend's neighbor. But more than anything, make sure to join us tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of Everyday AI. Thanks for joining us. If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going. For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't get left behind. Go break some barriers and we'll see you next time.