Are your brand campaigns as effective as they could be? Look, Marcus, I'm going to be real with you. Probably not. I understand. If you're only getting insights when the campaign is over, then the answer is a resounding no. To make better campaign decisions, you need real-time measurement. You need Lucid Measurement by Cint. Let's be real. Discover the power of real-time brand lift measurement at cint.com slash insights. That's c-i-n-t dot com slash insights.
Hey gang, it's Monday, June 30th. Gadjo, Jacob, and listeners, welcome to Behind the Numbers, the eMarketer podcast made possible by Cint. I'm Marcus, and joining me for today's show, we have two people.
Senior analyst writing for our AI and tech briefings, based in New York, is Gadjo Sevilla. Hey, Marcus. Hey, Jacob. Happy to be with you guys. We're also joined by our analyst who writes long-form pieces about the same topics, living in California. He's Jacob Bourne. Thanks for having me today, Marcus. Yes, sir. Today's fact, gentlemen: when you recall a memory, you're actually reconstructing it,
and it also changes each time. So what am I talking about? H. L. Roediger III from Washington University in St. Louis wrote a paper on the psychology of reconstructive memory. He explained that when we perceive and encode events in the world, we construct rather than copy the outside world as we comprehend the events. So if perceiving is construction,
then remembering the original experience involves reconstruction.
We use traces of past events, general knowledge, our expectations, and our assumptions about what must have happened. Because of this, recollections may be filled with errors called false memories, which can include inferences made during encoding, information we receive about an event after its occurrence, and our perspective during retrieval. This makes me feel better.
That's quite the fact of the day, Marcus. I went too deep, yeah. So another way of looking at it: the paper says that, contrary to popular belief, memory does not work like a video recorder, faithfully capturing the past to be played back accurately at a later time. Rather, even when we are accurate, we are reconstructing events from the past when we remember. And the CBC ran a piece from The Nature of Things, an article by Canadian writer and director Josh Freed, who says that once our brain has a new version of the story, it forgets and erases the former version. So it's almost like a game of telephone; everything is our last recollection. So our perception of the world is always subjective. Exactly. So...
Anyway, that wasn't heavy enough. Today's real topic: what exactly is artificial general intelligence? So who exactly came up with the term artificial general intelligence, or AGI? Well, Gil Press of Forbes notes that the term AGI was coined in 2007, when a collection of essays on the subject was published in a book titled Artificial General Intelligence. It was co-edited by Ben Goertzel and Cassio Pennachin, although they say they sourced the idea for the title from a former colleague, AI researcher Shane Legg.
And in the book, gents, they define AGI, loosely speaking, as AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn't know about at the time of their creation.
So it's not the most defined term. It wasn't then, and it doesn't seem to be now. Jacob, when we were discussing this episode, you said even today's definitions are murky and not really agreed upon. So I've asked you and Gadjo to craft your own definitions of AGI. What's yours?
Yeah, I mean, I think an easy one is just an AI model that is on par, in terms of intelligence and capabilities, with most human beings. But the key here is the word general. Because the thing about human intelligence is that we're good at a lot of different things. We can solve a lot of different problems. We have a wide variety of capabilities. And historically, you know, when AI first started being developed in the 1960s, the goal was really to make a machine that kind of thinks like people. But it turned out that AI is actually generally good at a very narrow set of things. And I think that's how this term AGI came about: it's trying to get at building an AI model that really is good at a wide variety of things, like humans are.
It doesn't just excel in one category of things. So that would be an example. So IBM's supercomputer Deep Blue, when it beat chess grandmaster Garry Kasparov, was good at just that one thing. So that's narrow AI, correct? Narrow AI, right. And I think with ChatGPT in 2022, we've seen AI becoming more general, but it's still not as general as a human.
So that does seem to be a big part of this, Gadjo; it's obviously right there in the name, artificial general intelligence. And Google says there's that generalization ability: AGI can transfer knowledge and skills learned in one domain to another, enabling it to adapt to new and unseen situations effectively. So that's a big component. Would you agree with what Jacob said? And what else would you tack on?
I do agree. I think it requires going beyond a narrow understanding of various topics. I also think autonomy is a big part of AGI. So you can think of it as a live algorithm that's constantly learning, that can make decisions, that understands the nuances between subject matters. And I think that's the elusive part of AGI, because, sure, it could surpass a lot of human thought. But at the same time, how it applies that thought might not be on a human level. Mm-hmm. Yeah. One of the questions here is, what does human level even mean? So I looked at the IBM definition, and they said AGI is a hypothetical stage
in the development of machine learning in which an AI system can match or exceed the cognitive abilities (this word cognitive keeps coming up a lot in these definitions) of human beings across any task. And when you're thinking about cognitive abilities, McKinsey's definition says AGI may replicate human-like cognitive abilities, including reasoning and problem solving, but there's a ton more: perception, learning, language comprehension, navigation, social and emotional engagement, et cetera. But Jaron Lanier, who popularized the term virtual reality, asks: does crossing some threshold make something human or not?
Yeah, and I think that's a great question. What is the threshold? Exactly. Human intelligence and cognition itself is poorly understood, and now you're trying to take a machine and compare it to a human, essentially. And no one has really agreed upon where that threshold is. That's actually why I like that Anthropic CEO Dario Amodei says he prefers the term powerful AI to AGI, because AGI is a bit vague and has sort of become a marketing term, since, again, it hasn't really been defined in a precise way. Because it's difficult. How would you know when a model is really on par with the intelligence and capabilities of most people? And would companies agree on that? Yeah. Following up on that, I think vagueness is going to be
you know, a continued aspect of this. No one wants to nail down a definition, because the competitors are just going to come back and say, well, no, because this is what we think. Right. So I don't expect a consensus. Neither do I expect someone to say, yeah, this is AGI, we've achieved it, it does this, because, you know, the fallout from that will be significant. Right. So they're going to keep it vague. Yeah.
And I think it's going to be nebulous. And, you know, it's a moving target. They say, oh, we're close to it, but some say, no, we're not. And I think that just goes to show how complicated defining AGI is going to be moving forward. Yeah. And to make it more complicated, there's another term that has been floating around, which is superintelligence: an AI model that exceeds even the intelligence of the smartest people. That's also something that some AI researchers think is possible, so even exceeding the capabilities of an AGI.
So is it fair to say that a big part of why we want to create AI that is on par with or smarter than a human is because of the Turing test, which came from English computer scientist Alan Turing? He was basically asking, can you trick a person into thinking a computer is a human? Is that where all of this stems from?
Well, I think the Turing test is just a test that grew out of this desire to create AI that's as smart as humans. But I think the Turing test speaks to this problem of how would you know? Because if it's just tricking you, then it's a performance. It's not really intelligent, right? So I think part of it is that we humans know that we understand the world we're living in. And so even though AI can do things, does it really understand what it's doing? When you're talking to a chatbot, its output is really great, but does it understand the words that it's saying? And I think that's a big part of what we think about in terms of human intelligence: we understand the world, we understand the language we're using, the problems we're trying to solve.
But it doesn't seem like AI does. Yeah. At least not yet. One of the definitions, this one coming from Amazon, says AGI is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach, performing tasks it's not trained or developed for. The self-teaching part: would we agree that that is AGI, or do we think that actually goes beyond, more toward superintelligence?
I think that's part of AGI, just because, as we discussed, AGI is an ongoing thing, right? It's an unfinished state. And in order for it to continue evolving, it needs to continue learning. The issue there is that it can definitely learn at least all the information that's on the internet. But what it lacks, again, is just general reasoning, common sense, empathy, social intelligence. That's what it needs to unlock to, you know, not be smarter than humans, but at least be on par with the way humans process the world around them cognitively, right? Speaking about how they process the world cognitively, common sense knowledge seems to be part of this too. Google was saying AGI should have a vast repository of knowledge about the world, including facts, relationships, and social norms, allowing it to reason and make decisions based on this common understanding. Yeah.
Can AI have common sense? I mean, how much is common sense intrinsically linked to being human? Go on, please. Well, it seems very linked to being human. And I think there is a distinction: even an AGI or a superintelligence won't be human, but it's this measure of intelligence and capabilities that we're trying to determine.
So you can be as smart as a human but still not be a human. Right. And I think this lack of common sense is where a lot of criticism of AI's capabilities comes in. But then the flip side says, well, people often act without common sense too. People make mistakes. We sort of hallucinate. We do all these things we criticize AI for. And so maybe AI's hallucinations are different. Maybe its lack of common sense is different.
but it doesn't mean it's not as intelligent. So that's one counterargument. I think a big limitation with current AI models is that they're trained on internet data, not real-world data, for the most part. Now that's changing, because we're seeing models being developed that are designed for robotics. And I think the future outlook is to have AI-powered robots collecting real-world data that can then be used for model training. And I think that could indicate a threshold:
once that kind of model training is more heavily underway, we might see AI advance closer to an AGI. And by real-world data, could you give folks an example? Yeah. So you have an AI-powered robot that's out in the world, with sensors, that is collecting data from things it touches, interactions it has with people, things it's seeing as it's moving around the world. It's not just using internet data to produce output. It's collecting data it's getting from interactions in the real world, in real time. Yeah. Like a driverless car, so to speak. Right. Yeah, exactly. Like a driverless car. Yeah.
So speaking of driverless cars, actually, this is a good pivot, because OpenAI has a few different definitions of AGI. They say it's a highly autonomous system that outperforms humans at most economically valuable work. And in a New York Magazine profile, OpenAI CEO Sam Altman defined AGI as the equivalent of a median human that you could hire as a co-worker. So those are a couple of the definitions from OpenAI, but Maxwell Zeff of TechCrunch notes that OpenAI created five levels that it uses internally to gauge its progress toward AGI. It's similar to, Jacob, I think we've talked about this before, Gadjo perhaps as well, the six levels of autonomous driving: six different stages from zero to five, everything from level zero, where you drive the car completely yourself, up to level five, where the car drives itself completely by itself.
And with this, they have the five levels with which they measure AGI internally. So you have the first level, chatbots, like ChatGPT; the second level, reasoners, like OpenAI's o1; then agents at level three, which is where we seem to be now, or getting to now; innovators at level four, AI that can help invent things; and then the last level, level five, organizations: AI that can do the work of an entire organization.
Do we think it's more and more likely that we end up with something akin to this, that we get a set of rough guidelines for when something has reached a certain level of AGI, as opposed to one overarching AGI threshold? Yes. You know, AI, at the end of the day, is interesting, it's research, but it's also a marketable tool. And in order to market it, you have to have the specs on what you're marketing. And so I think we're going to see more of these sorts of levels get fleshed out as AI advances.
I think that's a bit different from arguing, in essence, what we mean when we say an AGI. I mean, reducing it to what Sam Altman is saying, in terms of an economic driver, is kind of diminishing it a bit, because human intelligence spans much further than the tasks we do at work. Economic value. Exactly, yeah. So I think that kind of almost constrains what an AGI is, which is maybe good, again, if we're just thinking about it in terms of a product. But I think that's just one limited way of looking at it. Yeah, that's a great point, though, because we have to remember that the people telling us what AGI is or isn't are people who run companies. Yeah. Yes.
And they're all competing with each other. They're trying to productize their models. And the competition is on every level now, right? From agents to chatbots to search. And I think for most people and most companies, the concept of AGI really won't move the needle, but specific solutions-based tools, updates to the functionality of what AI can do, I think that is what matters right now. And for the foreseeable future, that's how it's going to be measured, right? Yeah. We touched on this earlier, but I want to come back to it, because I think it's a really interesting question: are AI systems smarter than people already? Niccolo Conte of Visual Capitalist wrote a piece about the IQ levels of AI, using data from Tracking AI, which ranked the smartest AI models based on their performance on the Mensa Norway IQ test.
There are a bunch of different types of IQ tests, and this is one of the main ones. For context, the average human IQ score ranges from 90 to 110, and a score above 130 is typically considered genius level. The results found that up top, ranked number one, was OpenAI's text-only o3 model, scoring a 135 on the Mensa IQ test, placing it comfortably in the genius category. There were six other models above the top end of the human average, so above 110: two Claude models from Anthropic, two Gemini models from Google, another one from OpenAI, and one Grok model from xAI. And then you had 10 more that fell within the average human IQ range of 90 to 110. So, Gadjo, are AI systems smarter than people already? I think if you break it down, we see that they've surpassed us in certain things. Like image recognition, I think they surpassed humans in 2015. Speech recognition, that was 2017. And then language understanding, that was more recent, 2020; they matched humans in that.
And again, these are narrow fields of measure, right? You still need to put that together with the special sauce that makes us human to determine whether they are smarter. They're definitely capable at certain tasks. They don't get tired. There's no fatigue involved, right? So you could say they have that endurance factor. So for specific tasks that are, I guess, really just crafted with guardrails, sure, they could probably match or surpass us, on a case-by-case basis, right? Generally, though, I still don't think so. They still lack...
They're not good at abstract reasoning. And at times, you know, that's what sort of defines intelligence: the ability to problem-solve on the spot, right? And to just shift paradigms. What AI will try to do, if it doesn't know the answer, is make something up, because it's not programmed to say, hey, you know what? I don't know that. No AI has ever told me that. Instead, they'll fabricate something and try to justify it. Some people do that too. Yeah. Yeah.
Yeah, people built them, so they're going to be a reflection of us. But you're right. There are people out there who will say, I don't know. And no AI, at the moment at least, is going to say that. Yeah. I mean, I agree mostly with what Gadjo said. I'd also add that I don't think IQ tests are a great benchmark for determining if we're at AGI level or, you know, surpassing human intelligence levels. If they were, then we would say, oh, look, the AI model scored genius level, so we should be able to let it operate and perform tasks without human supervision, and it's not at that level yet. And of course, the reason why is because it really can't do what a human can do. Again, it gets back to this general-level intelligence, which I don't think IQ tests really measure. They test for something very specific, versus being able to have a deep understanding of the real world and solve problems in that real world. I don't think IQ tests really do that. Yeah, I think any test you use would have to evolve with the AI available. You can't set a standard and say, this is it, because...
It's continuously changing. Yeah. Is it more of a numbers thing, more of a comprehension thing? And really, I think that's going to be the challenge. Yeah. Yeah. But I think, at the same time, you look at the vast quantities of data that AI models are able to process, at a much faster rate than humans can think, and then make predictions and draw insights from, and I mean, it's stunning; people can't even come close to doing that. So I think it's getting more general, but
It's still quite a long way to go before it reaches the general level of human intelligence. Most Americans think AI will become more intelligent than people, according to a 2025 YouGov study. 47% of people said that AI will eventually become more intelligent than people. 13% think it already is. 24% said it's unlikely. The rest weren't sure.
That's all we've got time for for this episode. On Friday, we'll be back talking about the ways AGI might change our lives and when it's most likely to get here, if at all, of course. Thank you so much to my guests. Thank you to Gadjo. Thanks again. Yes, and to Jacob. Thanks for having me. Yeah.
Yes, indeed. Thank you, friend. Thank you to the whole editing crew, and to everyone for listening in to Behind the Numbers, an eMarketer video podcast made possible by Cint. Subscribe, follow, leave a rating, and also maybe a cheeky little review if the mood takes you. Sarah will be back with the Reimagining Retail show for you on Wednesday.