Each morning, it's a new opportunity, a chance to start fresh. Up First from NPR makes each morning an opportunity to learn and to understand. Choose to join the world every morning with Up First, a podcast that hands you everything going on across the globe and down the street, all in 15 minutes or less. Start your day informed and anew with Up First by subscribing wherever you get your podcasts.
Hi, I'm Morgan Sung, host of Close All Tabs from KQED, where every week we reveal how the online world collides with everyday life. You don't know what's true or not because you don't know if AI was involved in it. So my first reaction was, ha ha, this is so funny. And my next reaction was, wait a minute, I'm a journalist. Is this real? And I think we will see a Twitch streamer president, maybe within our lifetimes. You can find Close All Tabs wherever you listen to podcasts. From KQED.
From KQED in San Francisco, I'm Alexis Madrigal. It's not too hard to imagine the downsides of artificial intelligence. From the movie dystopias to the more realistic AI-takes-your-job scenarios, doom and gloom abound these days, especially with Elon Musk trying to disassemble government agencies with quote-unquote AI. But Reid Hoffman, LinkedIn co-founder and big-time Democratic donor, has a sunnier perspective.
He foresees AI providing pivotal and creative assistance to human beings, expanding our capacities. His new book is Super Agency: What Could Possibly Go Right With Our AI Future? And we'll explore that, as well as what's wrong with our all-too-real present. That's all coming up next, after this news.
Welcome to Forum. I'm Alexis Madrigal. In the early going of the new book Super Agency: What Could Possibly Go Right With Our AI Future?, the book's co-authors, Reid Hoffman and Greg Beato, lay out a taxonomy of different approaches to AI. There are doomers who think AI will wipe out humanity. There are zoomers who think AI will solve everything. Then there's a more subtle pair of camps. The gloomers are worried, and they believe that top-down AI regulation and close scrutiny will be necessary to stave off the worst effects of AI deployment.
And then there are the bloomers, Hoffman and Beato's own square in the diagram. Bloomers are not insensitive to the problems that AI might cause, but believe that generative AI is a net positive for human society, and that deploying these tools iteratively and with proper benchmarks could allow humans to maximize the good possible from abundant intelligence. Reid Hoffman and Greg Beato join us this morning. Welcome, Reid.
Great to be here. And welcome, Greg. Thanks for having us. Reid, let's start with you. I have a sense that most of our audience is probably in the gloomer camp, and we're going to get into some more details on that in a second. But what's your best argument for a gloomer that they should move over into the bloomer world?
Well, part of it is to just start being AI-curious. Namely, start playing with it and seeing how it can amplify your capabilities and what kinds of things it might do to improve your life. And so, for example, you know, play with ChatGPT or a different agent and kind of see what kinds of things you can do, whether it's personal or, you know, kind of like work or other things.
Oh, go ahead. Oh, just because you think sort of that engagement and personal experience with these tools will give people a more optimistic view of their capabilities? Yeah, because in part, like when it's kind of like, oh, AI is coming for you and it's the Terminator or something else, you go, okay, I'm kind of concerned. When you begin to actually see what kinds of things you can do, whether it's like, you know, hey, I don't know how to use this new microwave thing.
you know, et cetera, or to, hey, I'm working on a report for, you know, my work and I can generate something that could save me 10 hours in beginning to do it
You can go, these are actually superpowers. And of course, part of what Greg and I are talking about is that the superpowers for individuals are extremely important, but when a lot of people get them, then all of society also gets elevated. And so you get benefited by other people's superpowers. Like if your doctor is made much more, you know, kind of... Capable. Yeah, capable. Then that helps you, that helps your family, that helps your community. So
That's the thing to kind of really look at, is to say, you know, start imagining, start being curious about what could possibly go right. Yeah. Greg, I summarized it really quickly, the doomer-zoomer-gloomer-bloomer framework. But maybe fill that picture out for us just a little bit.
Well, the doomers go to a worst-case scenario where either a human equipped with AGI... Artificial general intelligence. Artificial general intelligence, you know, at levels that don't yet exist, could use it in ways that apply asymmetric impact in terms of... Take down the power grid or create... Exactly. So that's one scenario. Another scenario is that the intelligence gets so intelligent and autonomous that it starts doing whatever it wants and we lose our capacity to control it. And then an alternate scenario is that it thinks it's doing what we want, but because it doesn't always align with how we think, it misses some aspect of it and so produces some awful results. This is sort of the paperclip scenario. Correct. Reid, let me come to you with a question. Why aren't you a Zoomer?
Well, because the Zoomer view, to some degree, is close your eyes and accelerate as much as possible because everything will be great. And obviously, in these kinds of technology transformations, which Greg and I go through, there's always a lot of difficulty and pain in transition, just as there will be here.
And so you want to actually, in fact, be kind of thinking, okay, what's good for lots of human beings? What's good for society? And how do we make that transition, you know, kind of, you know, more human, more graceful, more, you know, kind of like getting people engaged and doing it. And that requires being in conversation. It also is kind of avoiding really bad outcomes. And it's also kind of the sense of helping people get a sense of agency of,
Well, I didn't ask for this AI transformation, but I can also start leaning into it and start kind of co-owning it and what's happening. And so all of that kind of leans to a bloomer mindset psychology versus a zoomer.
I think also I would bet a lot of our audience underestimates the Zoomer mentality as something that people actually hold. Like, there are some people out there who think the most important thing we can do in the world right now is accelerate AI development as fast as it can possibly go. Can you see that viewpoint, Reid, or no? Oh, no, actually, one of the funny things about talking to people about Super Agency is I actually have people who've read the book who come and say, I'm a Zoomer.
And so, you know, that does actually in fact exist. And they kind of think, look, there are so many good things that come from AI. Sure, there'll be some challenges and some things that are bad, but the net will be so massively positive that the only moral outcome is to hit the accelerator, you know, full on. Yeah, I remember being quite shocked when I encountered, you know, particularly young people feeling that way. Greg, what do you mean by the term super agency in this book?
Well, Reid sort of hit on it a moment ago. It's basically what happens when a technology is released that millions of people get access to at the same time versus something that's much more narrowly distributed. And therefore, you get your own...
positive effects from using it that increase your agency, but then you're also benefiting from other people's. One of the examples we use in the book was the early age of automobiles in the US, when we went to millions of cars pretty quickly, within a span of 10 or 20 years. All of a sudden, doctors could do more house calls per day and cover more area. And as an employee, you could live in more places because you could commute to work more easily. And that meant employers benefited as well. So it's the sort of compounding effect of when technologies are broadly distributed. So it's sort of like network effects plus AI in this particular case. Yeah. Yeah.
You know, Reid, with cars, to use this example, comes oil. And with oil, among other things, comes climate change. When we've done shows on AI, a lot of our listeners have said, you know, the climate impacts of using these tools right now are quite high. Like, if you look at Elon Musk's cluster of computers, it's all natural gas powered. Most people are trying to build these systems very quickly; they're just installing natural gas. These are not, like, renewably powered data centers. And they're getting huge, driving up demand and changing electricity markets. How do you address that within this framework?
Well, you know, Elon might not, you know, kind of care about the climate change stuff.
But actually, the hyperscalers, Microsoft, Google, Amazon, and others here actually do. And part of what they all have is kind of sustainability programs. And there are a couple of different ways that they're actually being massively positive on climate change. One is billions of dollars deployed to green-tech energy, almost like venture capital, but as a customer saying, hey, if you can take your geothermal, your wind, your water, you know, etc., and you can apply it, and you see some of that in the news with the nuclear things, and make this a much cleaner tech, then we'll buy it, which helps that industry get accelerated and off the ground. And then, of course, once that industry is going, it can also begin to build plants for cities and other kinds of things. So that's one. Another one is, applying intelligence to everything is actually really useful. So, like,
Take, for example, something that happened about 10 years ago, which is DeepMind went to Google data centers and said, we think we know how to make this much more energy efficient. And the response was, we're very smart engineers, and, you know, you can't do that. They actually found 15 to 35% savings within Google data centers by applying intelligence and AI. Then think what you could do with grids. And then, of course, there's always the, hey, maybe we can accelerate development of fusion or other things. But even if that's completely just a speculative dream, I actually think that a lot of the AI stuff is going to be net very, very positive. Now, obviously, if you say, I don't care, and I'm just going to build a whole bunch of big-ass data centers with carbon-producing power, then you have to balance the equation more: how is your intelligence going to help us with the grid, how is it going to help, and what's the net effect going to be? But I actually think the hyperscalers are really committed to doing green energy. Yeah.
Greg, if you're just a regular person out there, where do you think you could find something AI can do that would make your life better, sort of the easiest?
A regular person. You're not the founder of a tech giant. Right. So, I mean, I think of it so much through the lens of being a writer and how it impacts me there. I would broadly say...
that amongst my friends who are using it, it is – in the book, we talk about informational GPS and this idea that in the 21st century, we're constantly navigating new informational landscapes, whether that's, you know –
jargon that you have to keep up with for your job, whether you're trying to interpret information you're getting from your doctor, things like that. What these tools are, fundamentally, is translation machines. And so they can just help you get from point A to point B quicker.
I'm definitely using search a lot less and using AI a lot more. I call it fetch instead of search. It brings the information to you and synthesizes it. And you shouldn't necessarily trust it unconditionally, but it always makes it quicker to sort of get a bead on contextually what you want when you're actually searching for information. Yeah.
We're talking with Greg Beato and LinkedIn co-founder Reid Hoffman. They're co-authors of the book Super Agency: What Could Possibly Go Right With Our AI Future?
We'd love to hear from you. Are you a gloomer, doomer, zoomer, or bloomer? Why? What are your sort of biggest areas of interest or biggest concerns about AI deployment? You can give us a call now. The number is 866-733-6786. That's 866-733-6786. The email is forum@kqed.org. Bluesky, Instagram, we're KQED Forum, and there's the Discord, of course. I'm Alexis Madrigal. Stay tuned for more
right after the break.
Welcome back to Forum. Alexis Madrigal here. We're talking with the co-authors of the book Super Agency: What Could Possibly Go Right With Our AI Future? They are LinkedIn co-founder Reid Hoffman and journalist Greg Beato. We are also taking your calls, of course. I imagine some of our listeners have some concerns and some interest in artificial intelligence. Give us a call, 866-733-6786. You can email forum@kqed.org. You can find us on social media, Bluesky and Instagram, at KQED Forum. Reid, an interesting question for you from one of our listeners over on the Discord. It is: the issue isn't what AI could do that's beneficial; it's that AI will be used by the powerful and wealthy to advance their own goals and agendas disproportionately. Do you agree with that?
Well, I think, you know, the wealthy and powerful always use a set of different tools to advance themselves. You could say, you know, we shouldn't develop new medicines because the new medicines will initially be expensive and will go to the wealthy and powerful and so forth. And it's like, look, that can happen, but that's not a reason not to go forward. But here's part of the thing that I think is missed in the question, which is, you know, the iPhone that Tim Cook uses is the same iPhone that the taxi driver in Morocco uses. Right. And when you're developing technology for hundreds of millions of people, it brings a very broad-based inclusion into it. And that's one of the things we saw with the ChatGPT release, which is actually, in fact, we're trying to build it for hundreds of millions of people. And so, therefore, it's elevating across the entire thing. And I'm not trying to paper over the fact that the wealthy and powerful might benefit somewhat disproportionately, but it comes with a massive increase for all of humanity. And just kind of think about the fact that, you know, the smartphone that you use is the same smartphone that's being deployed in, you know, third-world countries. I think one question I have there is how we balance that, what I would say is very legitimate, democratization of these technologies against the wealth concentration that the very same companies and technologies created in American society. And one of the things that I
wanted to look at is a passage in the book where you write: while the internet has certainly created new opportunities for fraud and disinformation, its larger story is how it's functioned as an unprecedented trust machine. And the evidence you give there is sort of that people bring more of themselves, their resumes, and their real lives to the internet. But at the same time that that's happened, and I agree that it has, trust in basically every institution of society has collapsed over the last 20 years. And so how do we balance these factors? Right.
What would you say? Well, so I think part of the thing is like, I do think we have a massive decrease in trust in institutions. I'm not sure that's a result of the internet or, you know, the internet may just be one of many participants in it. I do think we need to reconstruct that trust. I mean, one of the things that I've actually been doing with this philanthropy organization, Lever for Change,
like a challenge on how do we rebuild trust in American public institutions? Because I think it's part of the society we live in today, you know, together. And I think about what to do, and what kinds of things in technology as well. Like, for example, LinkedIn is obviously one of the things I think about as trying to help society, you know, in this way. And I think we want more projects like that. And so, you know, I think that wherever you think, hey, technology is part of the problem, part of what I say to the critics is, technology can definitely be part of the solution. Hence, you know, imagine what could possibly go right. And then it's kind of the question of, okay, so what's the thing we should be doing with technology in order to be helpful to society broadly? Yeah.
Let's bring in a first caller here, since we've got lots of people who want to talk with you. Rachel in Oakland, welcome. Hi. Thank you for putting me on the show. I want to talk about just an individual user, that's myself, with ChatGPT. I'm a writer. I'm a creative artist. And I was very skeptical about whether or not the program, the LLM, could in fact, you know,
play creatively with punctuation and space on the page. So I'm kind of trying to talk to those creative types out there. And I was skeptical, but in fact, 4.0 has been doing an amazing job
And I'm testing it. I'm trying to find use for it as a tool, a tool. It's not me. It's not my consciousness. It's not a person that I'm having a relationship with. But it is reaching new heights on its own by responding to my fairly experimental work. So I just wanted to say that the skeptic out here, the artist, has found really quite an amazing tool. Rachel, appreciate that perspective. I don't mean to be an advertiser. No, no, no, totally. I mean, I have done a lot of playing with these models as well and oftentimes find them surprising and interesting as well.
How about in the writing of this book, Greg? Obviously, you're writing a book about using AI. I imagine there was some role for the different services in the creation of this book.
Absolutely. Early on, a lot of the narrative around AI was that it was going to replace writers. What I found was in some ways it didn't replace editors because we had human line editors and human copy editors on this book. But as I was writing it, it served an editor-like function for me. And that could be anywhere from...
You know, like half of my Google use before these kinds of tools was looking up words on thesaurus.com, right? And with this, you can do it contextually. I want to use the word in this context... you can have a conversation about the word you want to use and hopefully arrive at the best possible word. Another way I use it is just, like, here are four paragraphs in a chapter that I'm writing. Is it flowing coherently? So
I never tried to do what I call magic button AI, which is like one shot. Give me a chapter on a prompt. For me, it's always about conversation so that I'm interviewing it as a source or having a dialogue with it as kind of a developmental editor. Reid, would you agree with that being your kind of methodology? Yeah.
Well, you know, this is part of what makes, you know, partnerships and collaboration go well. I actually do, you know, kind of a very different thing, and it's kind of different from both Rachel and Greg, who, you know, have much more craft perhaps as writers. I hit the I'm Feeling Lucky button and see what comes up. Yeah, yeah, exactly. No, it's kind of things like, um, for me, it's like, okay, when I put in something, I go, give me an argument against it. Like, for example, some of the Super Agency stuff is, like, okay, from a
from a deeply studied historian of technology, here are our claims: what would be the critique? What would be the, you know, elaboration? Just call me up, Reid. I'm happy to provide some. Exactly. Go ahead. Yeah. So that back-and-forth, of course, is part of what is great about the creative process. But for me, I tend to use it a lot as kind of a combination research assistant and kind of a counter-advocate and also elaborator. Yeah. I think one of the fascinating things for me has been to use some of these tools that allow you to bring in different research papers, for example, and have the tools, in this case one produced by Google, kind of produce almost like an outline or sort of a menu of, here's what's in here; if you want to explore, it comes via these sources. Because I think one of the major pushbacks that I've seen over the years, and have actually long believed myself, is that these LLMs, these large language models, don't actually have the same fundamental relationship to underlying fact that I think humans aspire to have.
And now it seems like there are at least some tools that help work around that. Do you agree with that, Reid? Yeah, and one of the lines that I love from Ethan Mollick is, the worst AI you're ever going to use is the AI you're using today. And the entire industry is working towards, while keeping the creativity, making the hallucination rate go down. And there was a funny example I had recently. I was introducing Atul Gawande to Deep Research. And we did something for kind of this... That's an OpenAI research tool, yeah. Yeah, exactly. And it generated a report. He was like, oh my God, this is amazing. And he sent it off to his research assistant. And the research assistant came back and said, look, I have a strange thing to report: nine of the 10 quotes that were in the research report were wrong. They actually didn't exist. They were hallucinations. So you would have thought, oh, total garbage.
but the areas they were pointing me to were gold mines. And it saved me tens of hours that I wouldn't have been able to find this kind of stuff
you know, without starting with this report. And that kind of gives you, you know, where there's this human amplification and so forth, because there's still real value there. And even in the kind of Deep Research going, I'm trying so hard to get you a quote that you'd really like that I'm inventing them. It's like, you know, this thing would be perfect, which is why I just invented it. Yeah. Yes. Right. You know, but even then it was useful, like seriously useful. And I thought that made me bemused.
You know, I don't know if this is pushback exactly, but, you know, I'm a big follower of this AI subfield that focuses on interpretability, you know, how these things actually work. And one of the things that I've really taken from that work is that we really, truly, actually don't know how these systems output what they do. We know we feed data in. We know that, you know, gradient descent is remarkably effective. We know all these things work.
At some level, we know what we're trying to do and what knobs and levers we're pulling and pushing, but we also really don't know what's going on on the inside. Do you think that matters or not?
Well, I guess I'll go first, but Greg can obviously chime in. Look, I think it matters more than zero, right? Because, you know, the fact is we want our technology to be well understood and reliable in various ways. And the fact that it's somewhat of a black box where we don't understand stuff makes us have to put in a whole bunch of work to try to make sure that it aligns with our goals in terms of human interest, making sure that we empower, you know, students and teachers and not terrorists, those kinds of things. And, kind of, well, what does it mean if it gets more intelligent, and how does all that stuff play in? That's why it matters. On the other hand,
You and I are talking to each other, and people don't know how the brain works. People don't really know how we use language fully to communicate with each other and what's really going on, and yet we do it. So that's... Of course, we've also evolved, right? We're evolved creatures, so we might not have the scientific understanding, but we have an intuition about how other people are going to behave, right? Because their brains are more like our brains than our brains are like, you know, ChatGPT. Well, speak for yourself on being evolved. Some of us haven't quite gotten there yet. But, you know, look, yes, because we have a lot of in-depth experience with human beings, we kind of go, okay, we understand what some of the error cases are and how things work. But this is exactly the reason why, when you asked me earlier about how to persuade gloomers to explore being a bloomer, you know, converting into a bloomer, it's like, well, engage with it, use it. You start getting that kind of predictability. And the fact is these things aren't completely random. They are actually, in fact, you know, producing within a known scope. That doesn't mean that you're not sometimes surprised, or sometimes, especially when you ask them something very specific, they are trying so hard to be helpful
that they go, oh, well, here would be a quotation that you could use, but that's fictional. Or a book that should exist but does not. Well, given that, Greg, do you think that there are some areas where deployment of AI should be limited, like areas where we can't really prove that they're safe and feel like we should be able to? I think that's a changing standard. We definitely talk in the book about
interpretability and explainability. And those are necessary and certainly useful to have, but also outcomes and outputs really are what matter. So... We have a lot of anecdotal data about hallucinations and when things fail. And on Twitter, you can see the latest brain teaser that someone used to show that the supposedly much better model still can't figure out, you know, the farmer-crossing-the-river logic problem. But we don't really know how often that happens. When you put these things on platforms and systems, though, you do get a legibility that you don't have when it's just humans making decisions across institutions, right? So, like, I think it's interesting that we talk about black boxes and we don't know why the algorithm is making this decision; but because it's been sort of quantified differently,
we actually in some ways have more insight into what's happening. And that's in part why AI systems are easy to criticize. But with engineered systems like power plants or airplanes, we have standards that we set, where we say it must be this reliable, right? So that's kind of what I'm asking. Let's just say we look at outputs and we didn't know the physics of flight. You'd still know that the plane could always do a certain thing, and there'd be ways of proving that, right? I mean, there's got to be something like that. Right, and we have those standards after years of evolution, right? We didn't launch with standards in all those domains. And that's part of what we're talking about in the book; we have the chapter on benchmarks and systems like Chatbot Arena. And, you know, there is potential to do a lot more of that. We should certainly be moving toward that, and I see movement toward that. You know, we're not going to have a fully fledged regulatory system right from the jump. It evolves. That's part of what iterative deployment is about, right? And Reid, I mean, the argument around benchmarks is that, because they allow for sort of open tracking of how the different models do on particular tasks,
they sort of serve the function of almost a standard, as in other kinds of technology, and, as you argue at least, have some kind of regulatory function as well.
Yeah, and part of what Greg and I were trying to sketch out is people tend to think, oh, regulation means a regulatory agency, and that's the only thing that really shapes it other than profit motive. And that's actually not true. Companies have a whole set of things that they engage in for their own interest, which include customers, trust of brand over time, employees, and the communities that they and their employees live in.
And benchmarks, which is like, hey, this is the way that we're talking about what we're doing. We're comparing what we're doing with each other. We're saying my product is better than this other product because we're better at these benchmarks. And those get created. And by the way, part of the value of discussions like this one, the press, and everything else is to say, oh, wait, hey, that could be a good benchmark; we'll use that to competitively differentiate. And that itself is a way these systems get steered that isn't, like, having some person who says, well, I'm the regulator, and I'm going to tell you that you can only cross the street when the light turns green, and you have to wait three seconds after it turns green, and then you have to proceed at 2.5 miles per hour, et cetera. And it's like, no, no, no, that's not the way to make an efficient traffic system work and get everyone where they need to go.
Let's do a little comment block here into the break. A bunch of different listeners writing in. You know, Steve writes, my concern is that we're not prepared for this technology. And just because we can does not mean that we should. Patricia writes, I enjoy using ChatGPT and DeepDive fairly regularly, but I have some concerns.
the main one being intelligence implosion. Here's an example. A friend sent me a lengthy article to read about using AI to read brainwaves. I didn't have time to read it, so I had ChatGPT summarize it. I also used ChatGPT to compose a thoughtful response that masked the fact that
I didn't read the article. Shame on you, Patricia. Rich also writes, let's hear more about the downside of AI: a huge increase in cheating and stealing other people's work that is already on the internet, reliance on the low-hanging fruit that's available on the internet, and a decrease in original thinking because that takes more work.
We'll get back to that one. We're talking with LinkedIn co-founder Reid Hoffman and Greg Beato. They're co-authors of the book Super Agency: What Could Possibly Go Right With Our AI Future? If you want to chime in, give us a call, 866-733-6786. We're going to get to a lot more calls after the break. I'm Alexis Madrigal. Stay tuned.
Welcome back to Forum, Alexis Madrigal here. We're talking with LinkedIn co-founder Reid Hoffman and his co-author Greg Beato about their new book, Super Agency: What Could Possibly Go Right With Our AI Future? Let's bring in some more calls here. Let's bring in Damian in Santa Rosa. Welcome.
Thanks for taking my call. I had a question, but I figured I'd let ChatGPT summarize it a bit for me. So here's its thoughts combined with my prompt. As AI and robotics, and I stress robotics here, advance together, large-scale job displacement seems inevitable across many industries, and given the current political resistance,
to expanding social safety nets, how do you see navigating the economic disruption caused by automation? Should businesses that benefit most from AI-driven efficiencies bear greater responsibility for funding solutions like universal basic income or reskilling programs? And this question for me was kind of prompted by listening to a piece yesterday about the combination of robotics and AI around things like nursing homes, physical assistance, things that we all need, low-paid jobs that could be
handled, and that we need, but that will cause disruption, similar to maybe what happened with globalization and how we didn't really prepare for that or retool, which kind of led to our economic circumstances now and our political circumstances. And with this decrease in social safety nets and this sort of antipathy toward taking care of society, once this disruption happens, aren't we kind of in a particularly negative space?
Yeah, I have to say, yeah. Thanks so much for that, Damian. I think, you know, max AI plus minimal safety net, Reid Hoffman, seems like a pretty dangerous path for the country. Yeah, no, and I, you know, I think one of the things that's really important from everyone from
government to also the industry is to say, how do we help as many people as possible? And I think that part of the question is you say, well, okay, AI will have a bunch of job transformation, job displacement, a bunch of other things over time. And you say, well, okay, how do we help do that? And part of the thing is a little bit like in the question, the reskilling
is, I think, one of the things that I would be doing, you know, as we are doing as technologists: to say, how do we make the AI in a way that can help you say, okay, my job's changing, how do I learn the new things? How do you help me do the job? If this job doesn't work, how do you help me find another job, or help me, you know, kind of create another job? And this is a classic concern; we go through it in a bunch of different ways in the book. General-purpose technologies do create these transitions of disruption, and part of the thing here is actually AI can help in this case, because it can be an amplifier and assistant. Now, I do think that there is a responsibility, as it were, across the board. It isn't, oh, it's only society's job. I think it's, you know, something for the people who are building these systems to also be focused on and also be helping with. And that's part of, of course, why we wrote Super Agency, because it's not just for the AI-uncertain to become AI-curious. It's also for technologists to think about: human agency is really important. Be thinking about that as you're building your iteration of your technology.
It's also really interesting, too, because jobs are kind of bundles of tasks, right? And I was looking at this research out of Anthropic, where they were using queries of Claude, their chatbot, to sort of figure out, well, which tasks are actually being done through this service. And the thing that's tricky is they're really useful for some tasks in a job and not useful for other tasks in the same job. And so it's kind of wrecking-balling through what was a bundle that was a job. And that seems very difficult, like a lot of possibilities for dislocation. What do you think, Reid? How does that kind of smooth out?
Well, that was part of the reason I was saying transformation, because the issue is, people say, well, I was in a job and now I'm not in a job anymore. No, actually, the job changes, right? Which things you're doing. So, for example, you're like, I'm doing data entry. Well, we don't need data entry anymore. What we need is some kind of use of it in terms of data analysis, or getting it sent to the right locations, and strategizing for how it can best help your particular mission or task, right? And so you go, okay, the job's changing, but I used to be a data entry person, and I only know how to do the data entry. Like, that doesn't work anymore. Yes. And these are the things that maybe you can do. That's where the specifics of an AI agent could say, well, here are things you might be able to do, and here are jobs, and here's how you might learn the new skills to keep doing the job that you're sitting in. Anyway, that's the gesture of it. Let's bring in Tony in San Francisco. Welcome, Tony. Hi, how are you?
Good, good. Thanks for calling. My comment was about the sort of nature of AI's impact on human psychology and the balance of society over time. So I work at a design and storytelling company, and we do a lot of business strategy and advertising. And my wife's a high school teacher.
And we have this debate in our household about, you know, the author bringing up the example of using the thesaurus, going to thesaurus.com and looking up, you know, five words and then having to go through the cognitive process of deciding yourself which of these words actually is the best, versus just asking ChatGPT, which word's the best? And it's like, this word, right? And so we kind of have this debate of, over time,
what do you think the impact, net impact is on human sort of ability? Do you think it's going to sort of make us dumber over time or is it going to make us smarter or force us to become smarter as, you know, one of the most adaptable sort of species on the planet?
And then related to that, the second question is, do you think that it will balance the scales of society or continue to imbalance them? You brought up the example earlier of the iPhone. And yes, a person, you know, perhaps a lower income class person has an iPhone, the same iPhone as, you know, Steve Jobs had or whatever. But if the question is, well, now both of them can just ask, you know, AI and get that right thesaurus word.
Does that balance things, or when you layer on society and what happens next, is it further imbalancing? Well, and if the bottom 50% of society's wealth hasn't increased for the last 50 years, it kind of makes you ask some questions about technology's ability to redistribute wealth, as far as I'm concerned. But let's stick on the first question, Greg, with you, on the kind of impact on critical thinking, as someone who has spent a lot of time in this kind of conversation with these different tools. Do you find it reshaping your mind in the way that, you know, let's say, someone who spent a lot of time on Twitter wired Twitter into their brain? I think people
oftentimes felt an effect on their actual consciousness. Do you feel that? Yeah, I think that's almost inescapable, right? We make our tools, and then our tools make us. So that's going to happen. One of the reasons we stress agency so much in this book, and why we thought ChatGPT was such a paradigm shifter in that regard, is it was the first time that people had access to a general-purpose AI tool that wasn't embedded in some other service. So they had to affirmatively choose to use it. And then, in using it, you have to keep providing input. So it's, in my mind, a very active type of technology. Now, it will get to more agentic uses, but this one version of it will always exist. And one of the benefits of it, I think, is that it actually does put you in a more active state of mind.
I'm not asking ChatGPT to say, what's the best word, and you make the choice for me here. I'm having a conversation. I always have the last word in my conversations with ChatGPT and always make the decision on my own. So to me, I feel like it's an activating tool. Will that be true for everyone? I'm not sure, because agency isn't any more evenly distributed than intelligence is, right? In some ways, what we're going to find with this new resource is that intelligence is going to become abundant. And in part, you know, in the early sort of discourse around, well, what kinds of human attributes matter most, people were saying emotional intelligence, intuition, things like that. And ChatGPT and the others, they don't really experience that, but they can simulate it pretty well. And so...
In some ways, I actually think the most valuable human attribute moving forward is going to be agency, because these are tools that allow you to amplify what you do. So if you have a lot of agency, you can use them effectively. That doesn't discount the second half of Damian's question, maybe, about, like... So again, we think that situating these tools in a way that puts users in the experience is key, because a lot of people will be using them to automate and replace. But what we want, then, at least, is for people to have access to tools that complement and amplify and augment, right? And that's what we got with ChatGPT for the first time. Yeah.
You know, underlying a lot of the discussion around AI, particularly here in the Bay Area, Reid, is this kind of moment in the tech world. You know, we've kind of seen a split in tech leaders as people. You know, some have gone to the Trump camp. Others have remained in a kind of liberal democratic mode. How do you think that particular dynamic will affect the rollout of AI in this country? Yeah.
Well, we could finish answering that question next week sometime. But, you know, I think, look, you know, as one of the people who, you know, is a great believer of, you know, kind of democracy and, you know, kind of a more progressive society in terms of where we're heading towards, I think progress is a good thing. You know, I think that the contention of things will be complicated. I think that the
notion of, for example, what Biden did with the executive order is saying, hey, I want you to at least kind of do kind of like alignment testing and have a safety plan and all the rest of the stuff is a very good... This was Biden's executive order on artificial intelligence for people who don't remember. Exactly. We did a show on it. Go back and find it.
Yeah, exactly. And, you know, that's a good thing to do. And the fact that it's canceled, I'm hoping it's just canceled in name and that, you know, it'll get reinstituted with a new, greater plan, you know, as these things happen. And so I think those things are important in terms of where we go. And on the other hand, I do think that one of the things that frequently happens,
you know, is, like there was this earlier question of, well, I really worry it's too fast and we should slow down. The clock is not set by us kind of sitting around going, we're just going to hit the accelerator. I mean, part of the lesson from DeepSeek is, you know, the development of these things is afoot. And so within this kind of time window of shaping things, we can't simply say, hey, we're going to not do anything. And this is part of the reason why Greg and I wrote, almost as a speculative, you know, historical fiction, about what happens if the Luddites had won: it would be a disaster for generations to come, for our children, their children, and the economy. So, you know, the tide is going in this direction. We need to steer intelligently.
One of the big questions there, too, is where is this innovation going to come from? When I was coming up in tech reporting, a lot of people argued that startups, the nimble startup ecosystem was where this came from. I feel like one of the things I've observed over the last 10 years is we've moved more to this national champion model, where our largest tech companies are going to be the ones who are deploying and developing the most important technologies globally.
Do you think that is the right model, that we should have national support for our biggest companies to do this? Well, I think that it's both. I think it's a little bit of a false dichotomy. As a matter of fact, if it was only large tech companies, then me as a Greylock venture capitalist, I'm making desperate errors, because I'm starting Manas AI to cure cancer with AI-accelerated drug discovery as a startup.
And so I think it's both. And I actually do think, though, it's important to not... like, sometimes what happens with some of the people on the progressive left is they say, no, the really important thing is to rein in these big tech companies. And you're like, well, but actually, in fact, what we really want is the big tech companies doing a bunch of things that help us. Like, if you want to return manufacturing vibrancy to the country, that is AI and robotics. We want to say, hey, help us with that.
You know, you say, hey, one of the things that could really raise everyone's quality of life is the medical stuff or educational stuff or the job-transition help; do that stuff, too. That's the kind of thing that we want to be doing. And I think that that's great and can also come from both startups and the big companies.
Last couple of comments here. You know, Eric writes in to say, watching the richest man on earth, Elon Musk, who also controls more technological power than anyone on earth, using that power to help elect a president and then spread misinformation tells you everything you need to know about the future of technology, and it isn't good. Meanwhile, Donald writes, please ask Mr. Hoffman to mention one or more ways in which AI might help to preserve democracy and integrity in the U.S.
So, you know, saying that one case of something makes the case for the whole argument is just a fallacious argument. You can say, well, because, you know, a white person murdered someone, that tells you everything you need to know about white people, and it's like, okay. You know, I think the question has to be treated with more sophistication than that. Now, in terms of democracy, you know,
I do think that one of the things about, like, for example, if you say, hey, we are constructing, you know, kind of intelligences that help us as co-pilots... I'll give you a micro example. An academic paper came out recently that said, we did this longitudinal study on people who believe in conspiracy theories, and we found that by engaging with LLMs,
we got 25% of them, this is kind of like flat earth, that kind of stuff, to change their mind for six months, you know, and that's like much better than any human can do. And so the notion of being able to kind of help like kind of shape an information space, like you can think of having an agent
that when you're reading something says, hey, by the way, did you know... and by the way, this isn't just a red or a blue comment. It's like, did you know there's some other expert commentary over here that you might want to pay attention to? And that kind of evolution of being informed is one of the places where it could be helpful. Yeah.
Last quick one for Gloria, kind of a service question here. She writes, I wonder what your thoughts are on how AI will be changing jobs in the future, especially how to educate our kids to be ready for their future job. When thinking or deciding about future fields and colleges, I wonder what you think may be more appropriate for the future. The old answer, I feel like, used to be learn to code. Now it seems to be that's what these AI systems are going to do best. So what do you think?
Well, I think, actually, by the way, how you're going to learn to code is different. Learning to code is still very useful.
Um, critical thinking is still very useful. Learning is still very useful. And it's kind of like, know the tools that help you be an economically vibrant participant in society. And, you know, part of the reason Greg and I wrote Super Agency is to say those tools are essentially going to be AI. Have kids start using them; engage them. Like, for example, for the high school teacher: engage them in your classroom, because that's part of what's going to enable them in the rest of their lives.
I'm just going to leave this last question from one of our listeners, a little bit of putting the ball in our court here. Matthew writes, I'm a linguist and a polyglot, and I'm wondering about the impact of AI on language learning. On the one hand, there'll be an increased capacity for personal teaching agents that can tailor the curriculum to each person's learning style. I've experienced this myself; parenthetically, it's kind of amazing. On the other hand, people tend to use the tools, that is, translation tools, in lieu of learning a language.
Learning a language is hard and so valuable. Will this and other human functions atrophy in this new age? Or, I will add, will people not do that? We have a choice here. We've been talking with LinkedIn co-founder Reid Hoffman and Greg Beato. They are co-authors of the book Super Agency: What Could Possibly Go Right With Our AI Future? Thank you so much for joining us, Reid. Thank you. And Greg, thank you so much for joining us. Thanks for having us.
Thank you also to all of our listeners for your calls and comments. I'm sorry we could not get to all of them. We tried; there were a lot. Great to be back. Thank you also so much for your help during our pledge drive. Ultra successful, thanks to you. So much gratitude. I'm Alexis Madrigal. Stay tuned for another hour of Forum ahead with Mina Kim. Thank you.
Funds for the production of Forum are provided by the John S. and James L. Knight Foundation, the Generosity Foundation, and the Corporation for Public Broadcasting.