
AGI Is Coming: How It Will Change Everything—and When | Behind the Numbers

2025/7/3

Behind the Numbers: an EMARKETER Podcast

People
Grace Harmon
Jacob Bourne
Marcus Johnson
Topics
Jacob Bourne: I think AGI's most immediate impact will be on the workplace; it will change the meaning and shape of work. AGI may shift people into roles centered on human interaction while AI takes on the grunt work. People may still be wanted in certain jobs simply because they are human. How autonomous AGI turns out to be will determine whether people stop working entirely or shift into roles as AI supervisors. We need to consider the fundamental impact of AGI's development on social structures and individual worth.

Grace Harmon: What I'm seeing is that companies are already cutting staff to make way for AI spending. Wealth is concentrating in large AI and tech companies, which wield enormous legal and economic influence. People like experiencing art, music, and the like from other humans, and there is resistance to AI-created content. It's unreasonable to treat mere use of AI as consent to everything a company does. People may use AI because an employer requires it or out of curiosity, which doesn't mean they endorse the company's behavior. The control we have over AI now lies mostly with voters pushing lawmakers to build government frameworks. We need to stay alert to the social inequality and ethical risks AI could worsen.

Marcus Johnson: I think people's preference for a piece of work drops once they learn its author is an AI; there is distrust of AI-created content. People may choose to interact with AI because it's faster, but if human service were just as efficient, they might rather talk to a person. If people feel forced to use certain technologies for fear of falling behind professionally, does that count as genuine consent? Should AI companies seek the public's consent before changing society? We need to look more deeply at the complex relationship between technological progress and social ethics.




In marketing, everything must work seamlessly, or efficiency, speed, and ROI all suffer. That's why Quad is obsessed with making sure your marketing machine runs smoothly, with less friction and smarter integration. Better marketing is built on Quad. See how better it gets done at www.quad.com slash build better. Hit "Talk to our experts" and get help today.

Hey gang, it's Thursday, July 3rd. Jacob, Grace and listeners, welcome to Behind the Numbers, an eMarketer video podcast made possible by Quad. Joining me today, we have two analysts, both living in California, both covering AI and technology for us. One writes our long form content. That's Jacob Bourne. Welcome, fella. Thank you so much for having me today. Yes, sir. And the other writes our short form stuff. It's

Grace Harmon, hello. Hi guys, nice to be here. Yes, indeed. Today's fact: does it matter if you drink from a wider or narrower drinking glass?

Do you guys have a preference? Wow, I've never considered that question before. Yeah, not once in my life. I mean, if you're an infant, you want narrow, right? But it kind of widens as you grow up. Oh, okay. I was going to say narrow.

Grace is like, I need a funnel. Apparently it does matter, according to a recent study by Nathalie Spielman and Patricia Rossi, published in the Journal of Business Research.

I think it was last year. They found that people, apart from Grace, prefer wide-rimmed drinking glasses to narrow-rimmed ones. So red wine glasses versus champagne flutes would be one example. A write-up of the study by Lisa Ward of the Wall Street Journal notes that folks are not only prepared to spend more money

on beverages in wider glasses, but they are also more likely to reorder more expensive drinks that are served in a wider glass, and drinking from wider glasses also makes people feel better. Okay, maybe I changed my mind then. Yeah, but the champagne flute forces you to sip. I think that's the point of that, right? Sometimes maybe you don't want to be drinking things very quickly. Yeah. If you had champagne in a wider-rimmed glass, would you down it?

Well, I don't know about down it, but certainly I think it would go down quicker than something narrow. I just think they're easier to drink out of, aren't they? Yeah, right. If it's narrow, it kind of rushes at you, doesn't it, Grace? Mm-hmm.

I feel like I'm less likely to spill with a narrow glass. Well, that's the whole... Yeah, that's true too. Trying to get my order in a sippy cup. That's where I live. Not really. That's crazy. But sometimes I do think it would be better. It's kind of like Velcro. Sometimes I wish Velcro was more socially acceptable as an adult. People have told me it is, but...

I disagree. Anyway, today's real topic, how AGI will change our lives and when the hell is it actually going to get here?

So on Monday, we talked about how to define artificial general intelligence, or AGI. And then we discussed how smart AI already is when compared to people. As I mentioned, today we're going to talk about how it's going to change our lives, how much AI companies

need to ask society what they want from AI, and when it might get here. Jacob, let's start with what area of our lives. We kind of came to the general consensus, if you will, that human-level intelligence is what AGI actually means. What area of our lives will AGI change the most, do you think? Yeah.

I think the most immediate, or one of the two most immediate, is going to be the workplace. I'll say, well, you know, people exchange their intelligence and capabilities at work for money. And if you have an AI that is as intelligent and as capable as a person,

then that's of course going to shift the entire paradigm of work, the meaning of work. It could be that it just represents a shift of roles. People maybe might kind of adopt roles that are just more based on

interaction and human relationships while AI does all the grunt work. Is there a tipping point in terms of, okay, these are the things that I do as a person, and

what's the threshold where, once AI starts doing X number of things, I become more obsolete, so to speak? Could it ever do enough to make humans obsolete? Because you just mentioned, you know, a lot of it's going to be emotional intelligence, like empathy, how well you can motivate people. So will AI ever do that?

Well, and I think AI can do some of those things too. I mean, certain tests show that AI can be more empathetic than people. Not all tests show that, but some do. But I think that there might be just a desire to have people in certain roles just because they're people.

But at the same time, I think the question is, you know, will we be seeing a shift in roles, or will we see people just shifting to universal basic income and not working anymore? I think that's a basic question, a fundamental question about this. If AI really gets this general intelligence but still requires a lot of human supervision, well, then those will be the roles. People will just be AI supervisors. Right.

So I think it really depends on the level of autonomy. How much can you trust AI to do certain things without human supervision? And so those are kind of the questions that I think will show us whether it's going to be just people not working at all or just a shift in roles. So the workplace, Grace, how about you? What area do you think AGI is going to influence the most?

Well, I think most immediately, like Jacob was saying, there's still a pretty big amount of human oversight needed. I think one of the big effects we're seeing on the workforce right now is reductions in workforce to make way for AI spending.

So kind of in preparation for that AI innovation, for AI initiatives, there are these really big cuts just to reduce employee costs. I was thinking in terms of the top two, to a degree, I mean, scientific discovery for sure, but work and the economy. You know, we're already seeing a lot of wealth concentrated at these big AI companies and these big, big tech firms that are kind of, in a way, becoming like the banks of today,

and it affords them a lot of legal sway and a lot of economic power. So that's what I was thinking about outside of the labor market. But I mean, that is just the big one. I think you got through a lot of big, important points there. Yeah. And to add to what Grace said about already seeing cuts to make way for AI spending, I mean, there's also been some regret on that front. After some AI layoffs, firms are like, well, AI is not quite there. We wish we had those people back.

Yeah, well, I mean, I think it was Klarna that had to roll back AI in customer service. And I do think, you know, that was because the customer experience went down. And there are two things there that I would guess: one, that the AI wasn't doing a good enough job, but also, people just don't really want to engage with that, you know? Yeah. And I think that's a big, big part of this and a big part of the future trajectory, that people might just want to interact with people.

Yeah, there was a weird assumption that I'm going to want to read stuff written by AI. And there was a really good quote. I think it was, I can't remember the article it was in, but it was from, not the author, but they were citing, you know, just a person, a member of the general public as saying, why would I want to read something that someone hasn't taken the time to write? And I think that kind of gets to the heart of it for me is that, okay, yeah, there's something about the human experience. And there's a reason that

we like talking to other humans. We like to experience art or music or whatever it is from other humans. And whether it comes to customer service, we like getting help from other humans. That's really interesting, Marcus. So just this morning, I was reading a study that found that

people sometimes prefer AI-written poetry if they don't know whether it was written by an AI or a human. But once they learn that it was written by an AI, then they kind of... There we are. They're like, I don't like it as much anymore. Yeah, yeah. I think that's absolutely true. There was a bias there. There was an article by Charlie Warzel of The Atlantic that I was reading,

I think a year or so ago. In the first few paragraphs, you're reading it fine. And then you get to the fourth paragraph and he's like, oh, by the way, the first three were written by AI. And I felt some kind of way. I felt deceived, and I felt like I couldn't relate to the article as much because it wasn't written by a human. There's also just more distrust.

That doesn't mean that, you know, I don't have, or that no one has, distrust in the flaws or the capabilities of a human journalist. But I think you then scrutinize what it's saying a lot more. Yeah, absolutely. Absolutely. I think another part of it is, when people are saying, I don't want to speak to humans, I would rather deal with

AI or a machine, what they're saying there is, I'd like things to be a bit faster. You know, I don't like having to wait in line for 40 minutes of customer service with an airline. And if I can speak to an AI quicker, then I want that. But they might not actually want that. What they're saying to you is, I would like to speak to a person if it was really fast and if they could help me as efficiently as maybe an AI system could.

There are also some conflicts there, where the most important thing, I think, for online shoppers is speed, but the most important thing within the customer service experience is human connection, which is...

it's kind of contradictory and that's not a bad thing, but... Jacob, what other areas do you expect AGI to really make an impact? This one's interesting based on what we were just talking about. I think the other area is actually personal relationships, which seems counterintuitive to what we were just saying, but

Companionship is a rising use case for chatbots, as well as mental health and life coaching and things like that. And, you know, it's probably just a subset of the population that feels comfortable using AI in that way. But I think from the beginning, after ChatGPT launched and we saw these open-source models

on the rise, there were dedicated companionship platforms that came about and were very popular.

But now we're seeing that people are using ChatGPT for those kinds of use cases as well, especially with the advent of voice mode, which makes it a bit more personal. So I think with the advent of an AGI that's going to be able to understand the nuance of human emotion and social situations even more,

and then, of course, pairing AGI with robotics, I think we're going to see sweeping changes in terms of people really turning to machines for companionship. Yeah. Grace, is there another way that you think AGI is going to change things?

I think another key area is going to be scientific discovery. So creating cures for diseases, designing clean energy systems, things like that. You know, if the AGI is able to autonomously make hypotheses, design experiments. But that also plays again into the future of work. You know, I think right now we talk a lot about how

software developers and coders are really vulnerable to AI and to job loss, but that also brings in an entire other industry: scientists within medicine. That is a level of AGI that would have to be far more advanced than customer service capabilities. But I think far down the line, it is something that could be a big benefit. Yeah, yeah. Far down the line.

I think scientific breakthroughs are a really good one, and one that might not get the level of pushback or concern as other areas that AI is disrupting. And the Nobel Prize in chemistry last year went to some folks who were working on the protein structure

problem and solved it, which people had been trying to do for 50-odd years. So we're already seeing its influence in medicine, whether it's that or whether it's looking through scans to check for cancers that the human eye might not be able to spot. I think that's a really, really good way that it's going to make an impact in our lives.

It feels inevitable: it has made an impact, and it's going to make even more of an impact in our lives. And it feels different from other technologies, in the sense that, I mean, Sigal Samuel of Vox wrote a piece titled "AI companies are trying to build God. Shouldn't they get our permission first?" The public did not consent to artificial general intelligence.

And so the question here is how much permission do AI developers need to get from society before irrevocably changing society with AGI? Grace, when I first read that,

I was thinking, well, no one asked for the iPhone. No one asked for Facebook. And they profoundly changed our worlds. But the more I read the article, Ms. Samuel makes some very convincing arguments as to, you know, no, this is bigger than that. This is more important than that. And actually speaking to the public, maybe even having a referendum on it, which

has been done in the UK. We had a referendum on ranked-choice voting: do we want to change how our voting works? We had a referendum on Brexit: do we want to just ask the public how they felt about being in or out of the European Union? So we have in the past stopped and said, hang on, this is a big decision; we should ask the general population what they think. What did you make of this idea of getting permission to develop AI before taking it even further? Yeah.

Yeah, I mean, I think the argument in the article that stuck out to me the most was that the simple fact of using AI means that you're giving consent to what the companies are doing. Our use is our consent, exactly. Yeah, and I mean, you could say the same thing about using Facebook or Instagram or anything like that, that your use of the platform equates, and it does, by the use policies, equate to consent to data scraping and sometimes a lack of user privacy. I'd say, in my opinion, consent to use shouldn't mean an agreement

that you approve of or are okay with everything that a company is doing. I also think that there is a lot of pressure from employers and then just curiosity and interest that's driving use rather than a huge interest in having the tech be a big part of your personal professional life. So there is kind of the thing of if you used it lightly or if you are being told to use it, does that mean that you automatically get factored into being okay with everything that the companies are doing?

I guess I would say that the control that we have now in terms of permission is, kind of like you were saying, more as voters being able to push lawmakers to set up a government framework, things like that, because the ship has sailed otherwise. Yeah.

Yeah, the consent part is really interesting. That jumped out to me as well. Consent versus informed consent, where we fully understand the associated risks. You could argue that it rarely is informed consent, and maybe that's the responsibility of the individual to be more informed. Maybe it's the responsibility of the companies to help explain what the thing is in the first place. But Ms. Samuel was saying sometimes we consent to

to technology because we fear, as you were saying, Grace, we fear we'll be at a professional disadvantage if we don't use it. So if you're a journalist and you're using social media, did you really consent to it or are you doing it because you have to? If you're a company and the company is saying we need to use AI and you're using it because you're nervous about falling behind professionally, are you really consenting to use it? Yeah.

Yeah. I mean, I think the stakes are just so high with AGI. I mean, we talked about the sweeping changes to the workforce and, you know, to personal relationships. Of course, scientific advancement

is huge and positive, generally speaking, but I think you can achieve that same type of advancement with powerful narrow models, not general models. And I think with AGI, there's also concern among the people building this technology themselves, who say it actually poses an existential risk to humanity. We don't exactly know what this thing is going to do once we build it. Current testing of existing models shows that even when

powerful AI models are aligned with human values, if they have a certain objective, they're willing to lie and deceive in order to achieve that objective. Yes. And so if you have a model that's as or more intelligent than a person, that becomes pretty worrying. So I think there should be a strong level of public support needed to proceed with AGI. And I think this issue is really looming large right now, because you have, you know,

legislation proposed by the Trump administration to block the ability of states to regulate AI for 10 years.

which puts us at 2035. Now, the ban, under the current wording of the bill, it would be either that or states lose federal funding. But, I mean, you know, at the state level, that's the way that public support gets communicated into, hopefully, sensible regulation. And if states lose the ability to do that, then I think that kind of kneecaps, you know, the sort of

permission-based AI development that's in question here. Yeah. I guess I would say I think it's a little bit different in the EU, but I do think in the US that ship has sailed

in terms of being able to rein things in. I think even the threat of losing federal funding isn't enough for some states to follow some of the policies being put into place by the administration. I think that in terms of being able to have any control over what these companies are doing, what they're developing, for the most part, it's over. That ship has sailed. Do you think that could be influenced...

from the international community? Because one of the things that was pointed out in this article was that we have a nuclear non-proliferation treaty. We have a biological weapons convention. We have treaties, difficult to implement,

not perfect, but they are there to keep people across the world safe. And, you know, Ms. Samuel was saying, there's the idea that we can't stop technological innovation, we're too far gone, it's going to happen. But she points out, we stopped trying to clone people.

And we decided you can't put nuclear weapons in space. And so do you think there could be pressure from outside the U.S., for the U.N. or some international body to put some rules in place and say, hey, actually, you know, are there certain kinds of AI that shouldn't exist? That, I think, was a good question in the piece. Do you think that's possible? I think it's possible. But I think a problem with that is, with nuclear weapons, we know what they do.

We don't have an AGI yet. And so there are thoughts about what it could do and concern about what it could do, but we haven't seen this thing in action. And so I think it's really hard to pass legislation about something that doesn't exist yet. Yeah. Yeah, I absolutely agree with that, that we don't know exactly what the consequences are. I also think that within some of the legislation that's been proposed, like the California AI bill that was shot down,

what you're testing for in terms of capabilities, we still don't know. I think part of it was testing for whether you could use AI to create a nuclear or biological weapon. Does that mean being able to give instructions to a human? Does that mean being able to do its own coding and work with it? We don't really know exactly what to test for. Like you said, we don't know what it's capable of. We don't know the consequences. Yeah.

The stakes do seem high here, Jacob, as you said. There was an interesting line in the piece from Jack Clark, one of the co-founders of the AI company Anthropic, who told Vox that it's really weird that this is not a government project. This is someone who co-founded a private AI company saying it's weird, because of how significant this is,

that it is in the hands of private firms. And to your point, you have to wait for the thing to be built before you can regulate it. But if you wait for it to be built, maybe it's too late. Yeah. And I think this is a big difference between the US and China too, where you see the Chinese government has much closer ties to its private tech sector and much more control over it. And of course, they're pushing for AGI as well. In the US, historically, you know,

The tech industry has had very loose ties with the federal government. But I think we're seeing that maybe slowly change, probably because of AI. We are seeing more partnerships between tech and the government, and certainly a lot of lobbying going on. Yeah.

All right. So let's end with this: when will AGI arrive? Will we have some form of AGI before 2030, as Cade Metz of the New York Times asks? What we learned on Monday's episode is that identifying AGI is essentially a matter of opinion. So this is, I think, a very interesting question, but also maybe a bit of a silly one, because how will you know when it's here? But

based on how you would both define AGI, Jacob, when does it get here? When does some form of AGI arrive? Yeah, I mean, based on the simpler definition, and it could just be the kind of thing where we never really reach a definition but we know it when we see it, I would say that something we could call AGI is going to arrive by 2030, right?

And I say that because if you just look at the giant leaps that we've already made. I mean, AI was first invented back in 1960, and it was kind of a slow pace of development for many decades. And then in 2022, you see ChatGPT, you see this enormous leap in capabilities. And over the past few years, we have seen that increase. But not only have we seen an increase in capabilities, we've seen just a huge

amount of global investment and enthusiasm in AI advancement. And I think that any kind of roadblocks in terms of a lack of data quality or limitations with chips or model architecture, I think the amount of investment is going to blow past those limitations. And we're going to see this kind of powerful AI arrive in the next few years. Yeah. Grace, where do you land?

I had just about the same idea of five to seven years. I'd also posit that the other definition, for some companies, has been financially based. I think with Microsoft and OpenAI, it was whether AI can generate $100 billion in profits, which is complicated. Sam Altman said that the company's losing money on pro subscriptions because it costs so much to run and people are using it more than expected. So I guess from a financial standpoint, it's a balancing act. I think it'll take at...

least five years for companies to find the balance between the cost of a powerful AI model and getting profits back from it. But in terms of, you know, the vague, nebulous definition of what AGI is, I agree with Jacob. Yeah. Yeah. OpenAI had said, we talked about this on Monday, that it's a

highly autonomous system that outperforms humans at most economically valuable work. So again, focusing on the dollars-and-cents side of things. The arrival date for AGI, I think the one thing everyone can agree on, is that it varies radically

based on who you ask. And I went and looked to see, okay, what do people think? If you picked any year in the future, someone will agree with you. Anthropic CEO Dario Amodei thinks powerful AI, that's his phrase for AGI, might arrive as early as next year, 2026. Google co-founder Sergey Brin and Google DeepMind CEO Demis Hassabis

think AGI will arrive sometime around 2030, so they are in agreement with both of you. There was a recent analysis from Cem Dilmegani, principal analyst at, I think it's called,

AIMultiple, a research firm, combing through close to 9,000 AI predictions from scientists, AI experts, and entrepreneurs between 2009 and 2023. They averaged the data and found there's a 50% probability we will reach human-level intelligence in machines from 2040 to 2061. However,

McKinsey wrote that most researchers and academics believe we are decades away from realizing AGI. A few even predict we won't see AGI this century, or ever. Rodney Brooks, a roboticist at MIT and co-founder of the company iRobot, thinks AGI won't arrive until the year 2300. So hundreds of years away. If we can't ever agree on the definition, then I suppose we will never see it. Everyone could be right. Exactly. We're all right. We're all wrong.

One more thing on this outlook is just that this whole 2030 prediction, it's kind of also in line with when some people think we're going to see the first quantum computer that can outperform classical computers on practical tasks arrive. And there's a close relationship between AI and quantum computing in that one can sort of speed up the development of the other.

And so that could be one reason why we're seeing this 2030 date thrown around:

AI could push quantum computing forward, or vice versa; it would be a sort of symbiotic relationship. But I think if both were to be achieved within five years, we would see enormous changes from that. We shall see. Thank you so much to my guests for hanging out with me today. We have time for a thank you, first to Jacob.

Thanks so much for having me. Yes, indeed. And then to Grace. Great talking to you guys. And thank you to the whole editing crew, and to everyone for listening in to Behind the Numbers, an eMarketer video podcast made possible by Quad. Make sure you subscribe and follow, and leave a rating and review if you have time. We'll be back on Monday. To our American listeners slash viewers, happy 4th of July weekend.