
LinkedIn Founder Reid Hoffman on What Could Go Right with Our AI Future (Part One)

2025/3/26

Intelligence Squared

Chapters
Reid Hoffman discusses the potential of AI to transform our future positively by enhancing personal agency and creating super-agency. He outlines the optimistic possibilities AI could bring, such as AI assistants in healthcare and other domains.
  • AI can create super-agency, enhancing personal capabilities.
  • AI assistants could revolutionize healthcare, education, and legal sectors.
  • Reid Hoffman identifies himself as a 'bloomer', advocating for positive AI development.

Shownotes Transcript

This episode is sponsored by Indeed. In the events industry, things move fast. We've all been there, scrambling to fill a role, watching momentum or opportunities slip away. And the last thing you want to be doing in that situation is wasting time sifting through a pile of resumes without finding what you need. In our experience, if you don't have the right team in place when you need them, the cost can be critical.

So, when it comes to hiring, Indeed is all you need. Stop struggling to get your job posts seen on other job sites. Indeed's Sponsored Jobs helps you stand out and hire fast. With Sponsored Jobs, your post jumps to the top of the page for your relevant candidates so you can reach the people you want faster.

And it makes a huge difference. According to Indeed data, sponsored jobs posted directly on Indeed have 45% more applications than non-sponsored jobs.

When we recently used Indeed for a job vacancy, the response was incredible. With such a high level of potential candidates, it was so much easier to hire fast and hire well. Plus, with Indeed's Sponsored Jobs, there are no monthly subscriptions, no long-term contracts, and you only pay for results. How fast is Indeed? In the minute I've been talking to you, 23 hires were made on Indeed, according to Indeed data worldwide.

There's no need to wait any longer. Speed up your hiring right now with Indeed. And listeners of this show will get a $75 sponsored job credit to get your jobs more visibility at indeed.com slash intelligence squared.

Just go to indeed.com slash intelligence squared and support our show by saying you heard about Indeed on this podcast. That's indeed.com slash intelligence squared. Terms and conditions apply. Hiring Indeed is all you need. Hear that? Spring is here and the Home Depot has great prices on grills to make this season yours.

So if you're working on improving your hosting skills, you're going to want the NexGrill 4-Burner Gas Grill for $229. And of course, pair it with the NexGrill 8-Piece Grill Tool Set. Now get outside and show off those new skills. Shop a wide selection of grills under $300 at The Home Depot.

Welcome to Intelligence Squared, where great minds meet. I'm Head of Programming, Conor Boyle. Today's episode is part one of our recent live event in London's St. Martin-in-the-Fields with LinkedIn founder and AI entrepreneur, Reid Hoffman. Hoffman was joined in conversation by researcher and host of the hit podcast, Kill List, Carl Miller, to discuss what could possibly go right with our AI future.

This conversation is coming to you in two parts. If you're an Intelligence Squared member, you can get access to the full conversation ad-free. Head to intelligencesquared.com membership to find out more or hit the IQ2 extra button on Apple. Now, here's our host, Carl Miller, with more.

Well, everyone, a very, very warm welcome from me to this beautiful venue and this Intelligence Squared event. We are joined tonight, and I'm delighted to welcome onto the Intelligence Squared stage, of course, Reid Hoffman.

He is the co-founder of LinkedIn. He's not a fan of long bios, even though his bio is more amazing than that of anyone else I think I've ever interviewed. He founded a string of AI companies, was on the board of OpenAI, and many other things besides that. But he's also the author of a new book, Super Agency. And tonight we're going to talk about the ideas in that book, what they mean, and where Reid thinks things are going to go. So Reid, question number one,

straightforwardly, why that book, why now, and why a book at all? Well, the retro aspect of doing books is always very entertaining. Although I guess in a venue like this, speaking of ancient history is not a bad thing.

The benefit of doing a book is that it gives you a kind of depth of thought, a kind of thinking through it, both in the work that you do, but also in giving an artifact that people can engage with with some seriousness. And I think part of doing a book is when you're trying to talk to folks who are essentially, you know, kind of educated influencers, leaders,

And you're trying to make an argument about why they should update their minds to, for example, moving from potentially AI skeptical or AI concerned to AI curious. And what the depth of that is, what the reasoning is, what the history of how you're thinking about it is, that's what essentially is the reason for doing a book.

and it's also the reason for reading a book. And so I think those kinds of in-depth thinking, both in producing and in consuming and in analyzing, continue to be important. Now, of course, we're also experimenting with what the TikTok videos of the book look like too. Has anyone seen a TikTok video of the book yet?

Look out for them. They are out there. I think this crowd may not be the TikTok crowd, although we'll do a little interpretive dance afterwards. So the book is a kind of positive vision of the future, a future we could have, isn't it? So let's start with that vision, Reid. Sketch us out the kind of world that AI might be able to bring into existence. So let's start with what line of sight is today.

Line of sight today, we can make an AI assistant that's better than your average doctor available as a medical assistant 24 by 7. That doesn't mean putting doctors out of work. Imagine if you actually had, here we are in a country with NHS and so forth,

That, as opposed to just getting in a queue, you had a discussion with an agent, and the agent said, oh my gosh, actually, you should go in tomorrow, and I'm gonna help get you prioritized for doing it. Or, hey, here are the things to do, just get in the queue, and it manages that. Then when you go to the doctor, it says, here's the conversation I had with, you know, Reid, and here's the thing you should talk about, and here's the thing you should ask about. All of which, to repeat, is line of sight today. So I think

what we're gonna have is a set of different agents that help us navigate all kinds of aspects of life. Medical assistant, legal assistant, education and tutor. This is the best technology that has been made in human history for learning, right? It is stunning.

And so all of that is just line of sight. Now, once you begin to get past that line of sight, you begin to also think about, like, well, what are the various ways that we get accelerated? And here's a more subtle one that probably most of the people here haven't thought about, which is that one of the things all the major labs are working on today is coding assistants, both being able to write code themselves and also being able to be co-pilots.

Most people haven't thought about what happens when you have a coding assistant for you and your work and what you're doing. Actually, in fact, because every professional uses a computer, uses a phone,

if you actually had a coding assistant, you would actually in fact be massively amplified both individually and as a group in terms of what's happening. That's the kind of thing that we are line of sight heading towards. Now, I was kind of indicating a little further than most people thought with the coding thing because most people don't think about like, oh, what happens when I have

a coding assistant that I can now think about when I want to do this task or analyze this information this way. I can now have essentially a software engineer that's empowering me to do this. But that's two years away, three years away. It's very soon.

And that phrase you just used, empowering you, that's key, isn't it? Because different perhaps to other technology revolutions, what you identify in this one is the idea that these models and the kind of AI that's being created are also very self-determining. So super agency: it's everyone's own agency that's really being massively aggrandized now. Yeah, and there were deliberate reasons

for choosing the title Super Agency. So one is obviously that a lot of people's worries, when I thought through what all the AI fears were, a lot of them come down to the reduction of human agency. I lose control over my information and privacy. I lose control over my job. I lose control over my participation in democracy. Those are all worries about agency.

And by the way, when we have these technological transformations, they do transform agency, which means that some agency goes away in that transition. So for example, say I make my living as a driver of a horse and carriage, and then you have cars. My agency changes. That does happen in these ways. But in all of the general purpose technology transformations in human history, it leads to more agency on the other side.

So then you go, why super agency? Superpowers. We get superpowers. Cars, superpower of mobility. This is cognitive superpowers. And then the other kind of thing for thinking about super agency is not just when I get a superpower by using it,

But I also get superpowers when you get superpowers, too. And the simple kind of way of looking at that is: take doctors. So when cars were invented, not only could I go drive to see the doctor, but say I was too ill, or say my child or my partner or my parent was too ill, the doctor could come to us. So I am getting a superpower through the doctor's ability to come and do a house call.

And that's part of the thing with super agency. And part of the reason I use that example is, just as I was saying with the medical assistant, you know, we get superpowers when our doctors also get these agents that are helping them understand what your health is,

being able, because they have limited ability to spend time talking to you, the agent can talk to you for two hours and then give a precis to the doctor to know how to engage and what the kinds of things are. Like the agent might say, you know, Reid's being really resistant to the fact that it's a good idea for him to be taking more vitamin D. I don't know what it is. Maybe you could talk to him about it, et cetera, et cetera, and then kind of drive into it. And that's part of super agency.

Well, super agency and superpowers sound great. I mean, I think everyone would want superpowers of one kind or another. So what is standing in the way? Because the book is also kind of confronting a series of arguments that you see standing in the way of this future, isn't it? So perhaps we go through and unpack the main kinds of opposition. At least to me, it seems to be the gloomers,

the precautionary principle, and the existential threat to the status quo. So talk to us firstly about the gloomers. Who are the gloomers? I go through four categories, and of course identify myself with the one that I most approve of. You know, the author's prerogative. It's the doomers, which is, you know, and everyone who follows the media has probably heard some of this, AI is going to destroy us all, we should stop it. Gloomers, which is a

well, AI is inevitable because countries and industries and companies will all be competing, but the transition is going to be really bad because we're going to lose jobs and all the rest. The Zoomers, which is, no, no, no, this is going to be great. Close your eyes, hit the accelerator. It's all going to be awesome. And by the way, there are people, I've literally been interviewed by people who say, I'm a Zoomer. I'm like, oh,

Oh, that's interesting. And then bloomers, which is, hey, the future gives us much better both upside and also safety. We want to get there as soon as possible. We want to accelerate, but we want to navigate intelligently. We want to see that there's risk. We want to adjust to them. We want to make that happen. And that's kind of the landscape. And obviously, I identify myself as a bloomer in this case. And part of the kind of key argument is that actually, in fact,

when we have actually built the future technology, it won't just be great upside and great superpowers, but will also be better aligned with human interests,

It'll be better in how we can navigate what some of the downsides might be. And we can go into the downsides at any depth. I don't do it as much in the book, partially because the media is so replete with all the downsides that I didn't feel like I needed to cover them as much. And I needed to roughly be making the argument that we don't necessarily get to the future we want

by trying to avoid the futures we don't want. We get to the future we want by steering towards it. That doesn't mean you don't pay attention to, oh, that's a problem. Let's navigate around that landmine or pothole. But you don't get there by going, well, if we avoided the three bad futures, then we'll naturally get to the good one. It's like, no, no, you're,

you're trying to get to good ones. So the book is much more that way, but you know, questions around human amplification, you don't just have human amplification of doctors and teachers and everyone else. You could also have human amplification of criminals, terrorists, rogue states. And that is part of the navigating in order to get there. And that's the reason why getting to the future is

as reasonably and as intelligently fast as possible is good. And that's the argument against each of these positions. - Well, we will go into the downsides in a second, but

First, let's just dwell on the technology a bit, because I'm conscious there might be people here that work with AI every day and others that maybe have never used it before. So let's just for a moment talk, read, please, about the models that we currently have. You use a great example in the book of drawing a unicorn and just how miraculous that kind of act looks. So tell us a bit about the actual models that are currently at our fingertips.

So I hope everyone has engaged with some AI curiosity and has played with things.

One of the things that I deeply encourage everyone to do: if you haven't tried it enough to have found something that is deeply useful for something that's really important to you, then you haven't tried hard enough. It is, in fact, there for every single thing. Now, that doesn't mean it's useful for everything. So I'll give you a kind of personal example from when I first started using GPT-4.

I first asked GPT-4: how would Reid Hoffman make money investing in artificial intelligence?

And the answer it gave me was what I thought a business school professor who didn't understand anything about venture capital would say: sounding very smart while being completely wrong. Which was, well, you'd identify what the total addressable market is, you'd identify substitutes to key products and services, you'd identify the teams that were doing them, and then you would go fund those teams. All of which sounds very rational, and more or less, if you followed that pattern as a venture capitalist, you'd lose money.

And so, you know, it's like, okay, that's wrong. And then, and I've seen a lot of people do this, you'd say, oh, of course it's not ready yet. You know, I've tried it and it wasn't going to work. Well, then what I did is I put in a business plan and I said, give me a due diligence plan for this, which is one of the things you need to do as a venture capitalist.

And it gave me a very good due diligence plan. And as a matter of fact, some of the stuff, I would have known all of it, but, like, oh yeah, item number three I would have thought of two days from now. It would have been like, oh yeah, I need to add that in, in terms of what I'm doing, as opposed to having it in the plan from the beginning. And that's an instance of personal activity using these. It's true for everything. Now,

It's of course good to say, give me a recipe for whatever ingredients I have in the fridge, or write a sonnet for my family member's birthday party, and all the rest. It's worth doing all that because, by the way, some of the superpowers, like, I'm not good at writing sonnets. It would be a bit of a, you know, maybe

to do Joseph Conrad, "the horror, the horror." But having that superpower is good, and all of that's there. And by the way, that includes thinking about things that are kind of visually rich, like we have these multimodal models. And I'll give you an example. One of the things I always do when I'm talking to people who are deeply engaged is I'm always trying to find new ways that they're thinking about it. And so actually at the very beginning of this book tour, 'cause oddly the book came out much earlier in the US,

I was talking to Ethan Mollick of Wharton, and he was like, "Well, you know, I was thinking about the multimodal uses of this, and I realized that most construction sites have no good way of really tracking: Are they on plan? What's running early? What's running late?" Et cetera. So what he did is he hooked up, like, I think it was 16 cameras,

piped them through a multimodal model, putting the construction plan in it and asking what's going right, what's going wrong, where are we? And it started giving daily reports about like here's the things that have worked well today, here's the things that we may need to adjust, here's the things that we may need to accelerate.

And just straightforward, like, literally, he's not an engineer, he was just setting that up. And that's the kind of thing to be thinking about for all of us. And I'll end with, 'cause part of the reason I'm lingering on this answer is that this is, I think, one of the things that's really important about getting to super agency: everyone exploring how you can be amplified.

Another way that I personally use this is, one of the things that Satya Nadella has asked me to do for Microsoft is to make sure that I'm really on top of quantum computing and what our investments in quantum computing look like and how that's working. And that involves a lot of things that are above my IQ grade for how the quantum mechanics work and so forth. So it's kind of like you get this technical paper and you're like, oh, it's gonna take me a long time to read this.

And what I do is I put it into GPT-4 and I say, explain this to me like I'm 12. It's really helpful. Right. So there's things to think about and do. And those are all doable today. And they're all doable today? Yes. So line of sight.

A year from now. A word that's coming up a lot in the discourse at the moment is "agentic," isn't it? "Agentic AI." Tell us about "agentic AI" and how these models are going to be basically changing their position in society, probably, over the next year or so. So, again, part of the reason I chose super agency is people, of course, worry, like, "Oh, is the agent going to be doing everything? Is that going to be taking agency away from me?"

And you think, well look, when you work with people, when you have collaborators, when you have people who work for you, that doesn't take away your agency, that amplifies your agency. Again, that's part of the super agency. And so we're gonna see a lot more expansion along the agentic dimensions. So of course you'll have what you experience today using these agents,

is you'll have the kind of, oh, I ask it to do something and it does it, or I give it a prompt and it gives me an answer. But you're also going to begin seeing, like, for example, one of the ones that I'm personally really looking forward to is, as opposed to having voicemail, your agent answers the phone. And, you know, when the agent answers the phone and says, oh, um,

you know, da-da-da-da, and the caller's like, well, I really want to talk to Reid. It's like, well, Reid's on stage right now. You know, how urgent is this? I can buzz him the moment he's off stage, and, you know, it has some knowledge of who it's talking to. And all of a sudden, having that, as opposed to whatever the voicemail is and texting and so forth, then allows me to navigate. And by the way, then we all turn off

anything other than our agentic notifications, as opposed to having us be kind of like, oh, well, is that an important one to respond to? Is that an important one to respond to? Having that triage is important.

That's the kind of thing that I think having more agentic AI will be very helpful to. By the way, that will include the, hey, watch for the flights to Rome. And when that special deal comes up, book me a ticket, et cetera. That kind of thing. It'll be this whole range. And this is, again, all line of sight. This is not many years in the future.

This episode is brought to you by Progressive Insurance. Do you ever find yourself playing the budgeting game? Shifting a little money here, a little there, and hoping it all works out? Well, with the Name Your Price tool from Progressive, you can be a better budgeter and potentially lower your insurance bill too. You tell Progressive what you want to pay for car insurance, and they'll help you find options within your budget.

Try it today at Progressive.com. Progressive Casualty Insurance Company and Affiliates. Price and coverage match limited by state law. Not available in all states. With the Venmo debit card, you can turn the mini golf outing your coworkers paid you back for into a trip to Miami with your best friend, earning you up to 5% cash back.

Use Venmo to pay for the things you love to do. Visit venmo.me slash debit to learn more. The Venmo MasterCard is issued by the Bancorp Bank N.A. pursuant to license by MasterCard International Incorporated. Terms apply. Dosh cash back terms apply.

Your data is like gold to hackers. They're selling your passwords, bank details, and private messages. McAfee helps stop them. SecureVPN keeps your online activity private. AI-powered text scam detector spots phishing attempts instantly. And with award-winning antivirus, you get top-tier hacker protection. Plus, you'll get up to $2 million in identity theft coverage, all for just $39.99 for your first year. Visit McAfee.com. Cancel any time. Terms apply.

Ryan Reynolds here from Mint Mobile. I don't know if you knew this, but anyone can get the same premium wireless for $15 a month plan that I've been enjoying. It's not just for celebrities. So do like I did and have one of your assistant's assistants switch you to Mint Mobile today.

I'm told it's super easy to do at mintmobile.com slash switch. Upfront payment of $45 for three-month plan, equivalent to $15 per month required. Intro rate first three months only, then full price plan options available. Taxes and fees extra. See full terms at mintmobile.com. Any vehicle can take you places, but why stop there? The Alfa Romeo Tonale combines luxurious Italian design and electrifying performance to make every mile a masterpiece and every arrival unforgettable.

when precision meets instinct and power moves with purpose, you never have to stay in a lane. Experience a world without limits in the Alfa Romeo Tonale Plug-in Hybrid. Tap the banner to learn more. Alfa Romeo is a registered trademark of FCA Group Marketing SPA, used with permission. So how then, Reid, do we actually control these models? Like, you know, if we're not zoomers and we're not gloomers, we want to take a sensible, pro-innovation approach, but also regulation.

What do we do to make sure that models that are increasingly powerful and increasingly able to change the world do so in ways which are broadly consonant with basic human values? So this is part of one of the things that is a key argument for the book, which is

you know, people's normal thing is to say, well, let's have the regulators do it. But the problem is the regulators have their own particular view of kind of what's important, of what risks are important. And frequently, regulators never get rewarded for taking a good risk; they get challenged for anything that goes wrong. And when you're innovating and taking risks, some things will go wrong. So the regulatory process tends toward

nothing going wrong, which is one of the reasons you want to be thin on regulation. Not zero, but thin and focused. Now, what you want is to have a lot of people engaged. That's why the first chapter is "Humanity Enters the Chat," and why ChatGPT having hundreds of millions of people engaging is useful.

And then you want to have transparent dialogue around it. Like, what are the things that are really working? What are the things that are not working? You know, in addition to the book targeting people who are like, why should I be AI curious, it's also targeting technologists to say: the design principle by which you should be thinking is human agency. Because really, what you're hearing from these concerns is people's concerns about their agency. So, and by the way, the

good future we're getting to is increasing their agency. So continue to do that. That doesn't mean, by the way, that there won't be job transitions. Like, for example, any job that has people acting like a robot, the robot will do it better. Customer service jobs, the "I have to follow this script": the AI will follow the script better, right? That's a more natural thing. So it doesn't mean that there aren't real issues in transitions,

But the questions around like how do we get to that better increase in human agency is the kind of thing that we want to be designing to. And by having hundreds of millions of people engaged, having transparency, having accountability for what you're doing, listening to criticism, it's one of the reasons why I don't say

the critics should all go away. It's: you should be expressing your criticisms, and what's the way that we get to a better sense of human agency? Not just "I have this worry," but "I have this worry, and here's what could be much better."

These hundreds of millions of people engaged in this kind of iterative deployment of regulation, what power do they have? I mean, is it a kind of consumer power, to kind of begin to just use models that are more self-determining and more capable of expressing their agency? You know, is it political power? How should these people be involved in this? Because obviously we have huge companies,

and, in the UK, often quite distant companies, and it feels hard to imagine, you know, how one influences OpenAI's developmental trajectories around this. - Well, the good news, bad news about companies is that they're actually, in fact, pretty responsive to various forms of pressure. They're responsive to consumer pressure,

but they're also responsive to employee pressure. For example, employees have families, they go back to communities, they want to be talking about why it is the work they do really matters. They don't want to show up to their communities being the villains, et cetera. So that's one. Shareholders, long-term investment in the brands and so forth really matters, so shareholders. And this is one of the roles that press plays because as press highlights issues,

that then goes, oh, we respond as companies. And so even when it's like, you know, oh, OpenAI based in San Francisco, well, you know, thousands of miles away from here, they are still navigating all of those issues. Now, it doesn't say that's it, right? There's obviously times where you say, well, actually, in fact, like, for example, one of the things I thought was really good about

the Biden executive order, and I think one of the things that's been really good about the UK AI Safety Institute that's here (which I think is the leading example for what AI security and safety institutes in the world should be doing; I actually know that the UK one has helped the US and other ones a great deal, and I think Ian Hogarth and Matt Clifford and the whole crew have been doing a really good job there),

is to say, look, you wanna actually be asking some of these hard questions, engaging everyone from academics to government people, who were saying, what are your issues? You make sure that you have alignment plans. You make sure you're doing serious testing in the creation of the AI.

All of this stuff is, so it feeds in from multiple networks. It isn't just like one individual goes, I am regulator. It's a whole set of networks of involvement.

Are there some aspects of AI development that you would take a much dimmer view of, as being less likely to increase people's agency? I'm thinking of, like, addictive technology or manipulative chatbots that might learn how to hook into people's psychological frailties. Are there ways that we can kind of map the uses and say, okay, there are some areas that actually we don't want to be encouraging as much innovation towards?

Well, some of the areas where there will be, you know, natural innovation towards, like, you know, trying to persuade you that you need that new iPhone, that maybe, you know, buying the new one today would be better than waiting for next year, and so forth. There are natural commercial incentives there that I don't tend to

get overly bothered by. Part of it is actually, I think, for example, that ads that have more relevance to me are actually, I think, a feature, not a bug. It's like, oh, I might be interested in that. That's actually kind of a useful thing. And I do think that the notion of how we are potentially manipulated (which obviously we are by media today and all the rest) is an important area to navigate well.

Now, part of it, as I think about it, is I say, well, what I really want to make sure is that the market is building AIs that are for me, not just, you know, kind of for the company, but things that will help me navigate. And you can imagine, for example, an AI that is like your assistant in the browser, so that when, you know, one AI comes up and says,

"Oh, you really want to buy these tickets for this trip to Rome," yours says, "Well, remember, you were thinking about doing that next year versus this year." Maybe you want the one that's thinking about that for you as well. And as long as we are building towards that and making sure that that is happening as well, then the issues around addictiveness, I'm not nearly as worried about as some people who talk about it.

Now, I do think it's one of the things we need to be attentive to; there are reasons why we're attentive to children. And I believe Jonathan Haidt is coming to talk at Intelligence Squared, which is great.

And I think that the notion of making sure that as we step through kind of what that evolution looks like, I think that'll be important. But by the way, I think it can be enormously helpful to children, right? I think if you design the AIs the right way and it's kind of, it's the, hey, I'm helping you grow through these new experiences and I'm for you. I think that could be very positive, but I think we obviously have to, you know, it's an area to be careful about.

How do you think AI is gonna change us, Reid? Because you talk a lot about mental health in the book. You talk, in this fascinating way, about how AI might actually awaken or strengthen human capacities for empathy. So do you think we, everyone in this huge hall, do you think we're all gonna change over the next few years as AI becomes more ubiquitous? - Well, I nearly guarantee it. And the hope is that we're doing it in the right way. So, like, one of the things,

I recently co-founded a company called Manas, which is AI acceleration for drug discovery and curing cancer. The one that I co-founded before that is Inflection, with Mustafa Suleyman. And part of Mustafa and Karen and the team's idea, which I think was and is really great, was to say EQ matters as much as IQ in these agents.

When you're interacting with the agents and the agents are actually, in fact, modeling, call it, kind, respectful conversation, that's part of how we learn. It's actually one of the things that's always bugged me about Alexa, because, you know, when children are interacting with Alexa, they go, stop! And you're like, no, don't talk to other people that way. Right? You know, like having a kind of

attention, empathy, kindness, civility in conversations I think is really important. And I think that that's actually something, and we've seen that spread from Inflection's agent, called Pi, pun intended, personal intelligence,

to other agent developers, who are now also focusing on EQ. And I think that's a good thing. And I think that part of who we become is what conversations we're having, who we're interacting with. And if we're interacting more with models of asking questions, being curious, being attentive, being empathetic, I think that's a good thing. Now, the deeper level, which I know you intended with your question as well,

is it will also begin to kind of affect our epistemology. Like one of the things that's interesting when you think about like, for example, how search shapes our epistemology, 'cause we all use search to go find information. This is gonna be search massively amplified, 'cause as opposed to like, you know, 10 blue links, I'm trying to sort through it. It's like, here's an answer. And by the way, of course we have a lot of hallucinations and everything else now, but that's being improved.

And you go, okay, here's an answer. Well, how is that answer presented? What are the things that are put as questions in the answer? What are the things that are put as uncertainties or hesitations or things that represent where people have massive conflict of views? And how does that all work?

That's all going to be shaping our information space. And it's one of the reasons why, you know, one of the metaphors that we use, I forget, chapter eight or something, is thinking about AI as an informational GPS. Because you think about not just how we navigate physical space, but how we navigate all of the informational spaces. And that directly ties to epistemology, directly ties to what space do we think we're in?

Who do we think we are? How do we think we connect to other people? Which communities are we in? That's all partially how we navigate information spaces. And so, you know, part of the reason why I write a book and do conversations like this is to get us all

actively saying, "Oh, it'd be better if the informational space or the informational GPS looked more like this and less like this. Let's experiment with that. Let's see how that works. Let's prefer agents that are doing it this right way. Let's ask of the agent developers, please do and build it this way." And that's, you know, companies are responsive.

Thanks for listening to Intelligence Squared. If you're an Intelligence Squared member, you can get access to the full conversation ad-free. Head to intelligencesquared.com slash membership to find out more or hit the IQ2 extra button on Apple. This episode was produced by myself, Conor Boyle, with production and editing by Mark Roberts.
