Can you guarantee me that killer robots will never be built? Killer robots are not the only existential risk for human beings. There's pandemics, there's asteroids, there's nuclear weapons, there's climate change, and the list kind of goes on. And so you have to look at existential risk as a portfolio.
Namely, it's not just one thing, it's a set of things. And so when you look at any particular intervention, you say, well, how does this affect the portfolio? My very vigorous and strong contention is that AI, even unmodified at all, is net, I think, very positive on the existential risk portfolio.
Welcome to Bankless, where today we explore the frontier of AI. This is Ryan Sean Adams. I'm here with David Hoffman, and we're here to help you become more bankless. The question for today, will AI give us super agency or will it be used to enslave us?
We have Reid Hoffman on the podcast today. He gives his bull case for AI, why it's good, why we should accelerate AI into the future, and how it will turn each of us into super agents. And to him, that equals more freedom for everyone.
I think the bankless journey is all about becoming a more sovereign individual. That's what David and I have talked about since inception. And it's increasingly hard to imagine being a sovereign individual without crypto, which we've talked a lot about, but also without AI. Like crypto gives you the ability to own things, but AI seems to be the ability to control your own destiny.
And that's why we're doing an AI episode today with Reid Hoffman, to help stay ahead of the AI curve. A few things we discuss: super agency; doomers, gloomers, bloomers, and zoomers; what could go right; how to use AI; American superintelligence; and finally, we end with the question to Reid, what if this whole AI thing is overhyped?
Stay tuned for that answer. Super Agency is the title of Reid Hoffman's book, which is coming out this week, if you are listening to it at the time of release. And so this is all on the backs of that. Ryan, of the two co-hosts of Bankless, read the book. I did not. And so I'm more along for the ride. I'm in listening mode, asking a few questions here or there, but it's really Ryan in the driver's seat for this episode. So I hope you guys enjoy the episode with Reid Hoffman. But first, before we get there, a moment to talk about some of these fantastic sponsors that make this show possible.
Are you ready to swap smarter? Uniswap apps are simple, secure, and seamless tools that crypto users trust. The Uniswap protocol has processed more than $2.5 trillion in all-time swap volume, proving it's the go-to liquidity hub for swaps. With support for a growing number of chains, including Ethereum Mainnet, Base, Arbitrum, Polygon, and zkSync, Uniswap apps are built for a multi-chain world. Uniswap syncs your transactions across its web interface, mobile apps, and Chrome browser extension, so you're never tied to one device.
And with self-custody for your funds and MEV protection, Uniswap keeps your crypto secure while you swap anywhere, anytime. Connect your wallet and swap smarter today with the Uniswap web app, or download the Uniswap wallet, available now on iOS, Android, and Chrome. Uniswap, the simple, secure way to swap in a multi-chain world.
With over $1.5 billion in TVL, the mETH Protocol is home to mETH, the fourth-largest ETH liquid staking token, offering one of the highest APRs among the top 10 LSTs. And now, cmETH takes things even further. This restaked version captures multiple yields across Karak, EigenLayer, Symbiotic, and many more, making cmETH the most efficient and most composable LRT solution on the market. Metamorphosis Season 1 dropped $7.7 million in COOK rewards to mETH holders. Season 2 is currently ongoing, allowing users to earn staking, restaking, and AVS yields, plus rewards in COOK, mETH Protocol's governance token, and more. Don't miss out on the opportunity to stake, restake, and shape the future of mETH Protocol with COOK.
Participate today at meth.mantle.xyz. Celo is transitioning from a mobile-first, EVM-compatible Layer 1 blockchain to a high-performance Ethereum Layer 2 built on the OP Stack with EigenDA and one-block finality, all happening soon with a hard fork. With over 600 million total transactions, 12 million weekly transactions, and 750,000 daily active users, Celo's meteoric rise places it among the top Layer 2s built for the real world and optimized for fast, low-cost global payments.
As the home of stablecoins, Celo hosts 13 native stablecoins across seven different currencies, including native USDT on Opera MiniPay, which has over 4 million users in Africa alone. In November, stablecoin volumes hit $6.8 billion, making for seamless on-chain FX trading. Plus, users can pay gas with ERC-20 tokens like USDT and USDC and send crypto to phone numbers in seconds. But why should you care about Celo's transition to a Layer 2? Layer 2s unify Ethereum; Layer 1s fragment it. By becoming a Layer 2, Celo leads the way for other EVM-compatible Layer 1s to follow. Follow Celo on X and witness the great Celo Happening, where Celo cuts its inflation in half as it enters its Layer 2 era while continuing its environmental leadership. Bankless Nation, very excited to introduce you to Reid Hoffman. He is a founder and investor. He co-founded LinkedIn, which I'm sure many of you have used in the past. He's extremely active in Silicon Valley, particularly over the last couple of decades. And more recently, he's been very close to what we would call the
epicenter of this whole AI thing. So he was serving on the board of OpenAI starting in 2018. Notably, I should mention, because whenever someone talks about the board of OpenAI, a lot of things will come up, but he actually left to go focus on AI investing before the Sam Altman ousting drama. And you guys remember all of that.
He's also a gifted writer, a communicator. I've read several of his books. I think one of the canonical books for tech founders is this book called Blitzscaling, which is just phenomenal on how to grow an internet scale business. And now, all of this preamble to say, now he's written a book on AI called Super Agency. And I'd pretty much describe this as maybe Reid Hoffman's thesis.
for artificial intelligence and how it will impact us in the decades to come. Reid Hoffman, welcome to Bankless. It's great to be here, and I look forward to not only this conversation, but future ones as well. Yeah, I mean, I think we're going to really focus on AI because that's the subject matter of your book, but maybe in a future episode we get into crypto because I know you have a lot of thoughts on that. Yeah, no, I actually think I bought my first Bitcoin in 2014. Congrats. It's a little late, but you know. It's earlier than most.
Yeah, that's a good seasoning of a time to buy Bitcoin for sure. So let's talk about this book. Let's talk about your thesis for artificial intelligence. And when I heard that you were writing a book called Super Agency, my first question without reading anything further was like,
Okay, super agency, who is Reid talking about? Like, who are the super agents? Is this the humans? Do they become the super agents? Or is he talking about the AIs themselves? Do the robots become the super agents? So maybe you could kind of start there. Could you define what you even mean by super agency and like who gets it? Yeah, so let's actually start even a little bit earlier with agency and then get to your excellent question, which is what is agency? Agency is the ability to kind of make plans, do things in the world,
you know, kind of make parts of the world, you know, according to your intentions and desires and express yourself in the kind of the ordering of the world around you. And obviously, nobody has perfect agency. You know, that's, you know, kind of theoretical, deistic-like creatures. Like God has that, maybe. Yeah, perhaps. And it depends even on what your particular theology is. So that's the reason I was being a little bit more vague. Non-denominational today in this podcast. Yes, exactly.
And so super agency, the precise term is kind of when millions of human beings get access to kind of an elevating technology, a transformative technology. The superpowers not only they get as individuals, but society gets transformed. And so, for example, a canonical example is cars. So you go, well, it gives me superpowers because I can go far. I can drive. I can get to farther distances. Right.
But as other people in society also get cars, you know, like suddenly you'd had to go down to the doctor's office to get an appointment. Now the doctor can come to you. And obviously later instantiations of this is you get, you know, Instacart deliveries and, you know, all the rest. And so super agency is kind of how we all get superpowers. And so to your opening question about are humans the super agents or are AIs the super agents, to some degree it's both.
But the important emphasis is that rather than us as human beings and humanity losing our agency, we are gaining agency. And by the way, in a very similar pattern to the way that I gain agency when you guys also get cars.
Right. It's not just me that gains agency with my own car. I gain agency when you guys get cars. And so that's the elevation of agency, and hence super agency. I was almost thinking about your book title: if you used a different synonym besides super agency, if you just titled the book Superpowers.
Right. And, like, who gets them? That's almost the same discussion. Like, maybe when we get into this term agency: are superpower and super agency kind of synonyms? Is the short-form version of this just, like, we get all of this additional choice surface area? We have new abilities to do things that previous generations could not have imagined before.
That feels like a superpower to me. Is it kind of one and the same in your mind? Well, superpower, it's deeply related. The Venn diagrams have a high overlap, because the kind of elevation of capabilities are superpowers. And every, you know, kind of new major technology gives us new kinds of superpowers. Now, some of it is that with a superpower, as lots of people get superpowers, you know, individuals, institutions, societies, governments, etc.,
your agency changes some. So it isn't, for example, like the agency of people who were kind of driving horse-and-buggy carriages. That changed
with cars, because it was like, well, no longer are the streets set up for you. No longer can you be doing this thing that, you know, you had been doing and were planning on doing. No longer, for example, was the horse transport industry, you know, kind of central. And by the way, even earlier technologies, like trains, those changed things
in the kind of ways that people would express their agency and be able to work on it. So superpowers are a way you extend your agency. But when it happens in a super agency context, it also transforms it and changes it. So that's the reason why it's not 100% the same, but closely related. Reid, something that we share in common is we actually both have podcasts. You have a podcast called Possible. And I think Ryan and I stumbled on your podcast, and we noticed that you did an
episode, an interview with yourself. But yourself was what we might call, in the crypto world, an AI agent. Now, maybe this illustrates what you mean by super agency. And maybe you can take that metaphor all the way home. But how do we know that we are actually talking to the real Reid Hoffman and not
your AI co-host bot that now is with us, actually. And the real human Reid Hoffman is somewhere else doing work in a different direction. How do we know you're the real Reid Hoffman? Well, that will get to be a more and more complicated question.
At the moment, the video avatars are not actually, in fact, real-time. So the Reid AI discussion has to be a little bit scripted, even though it looks like a completely real-time podcast conversation. Actually, in fact, running it through the ChatGPT instance that's trained on 20 years of my writing, and then more specifically getting the audio and video produced with the right kind of quality,
doesn't really, you know, enable that for kind of a full real-time stack today. But, you know, part of the reason, of course, I did it and put it on Possible, whose tagline is "what could possibly go right," was to start getting people familiar with the future universe, just as you guys are doing, you know, with all of the technology broadly, but also, of course, around crypto and what sovereignty and identity and all the rest of that mean.
It's kind of like, here, here's a lens into the future. And we don't know exactly where the future is going, but we're trying to get everyone, you know, kind of participating, ready, navigating well, et cetera. And that was part of the reason for doing Reid AI. But there is, you know, obviously at some point,
one could get to that as an interesting question. And my own hazarded answer here is something a little bit more like, well, crypto signatures and identity are surely part of what's happening. But of course, given that I'll probably have the crypto signatures both for me and for Reid AI, that might still even be a live question.
Yeah, it's really interesting, though. There's something very empowering about the experiment that you're running with Reid AI, because it leads to a promising future where, if individuals are sovereign over their own kind of AI agent twin, maybe that AI agent twin could go do work while they're, like, goofing off. They're going and doing something that they enjoy. Maybe they're watching a movie, they're doing art, they're working out, something like that. And then there's Reid AI doing podcasts, like, while all of this goes on. And, you know, the real Reid Hoffman sort of
has ownership over that, and somehow, like, that feels very democratizing. I want to get back to the through line of this conversation, though, when we talk about super agency. So your thesis: we understand what super agency is and how that's similar to versus different from superpowers. And you said very emphatically that it's not just the robots that get it. The AIs do get it, but also the humans get it. Right. Your view here is that it's going to be humans amplified
by AI. That's the real unlock here. But like, I have a question within that subset of humans who get it. Which humans are we talking about, Reid? Are we talking about the Silicon Valley elite in your thesis? Are we talking about, you know, the 1%? Those that control most of the capital in society? Are we talking about governments? Or are we talking about individuals? Because the distribution of this seems incredibly relevant to how we actually view whether this is a good thing or not. So...
I think the path we're already on, you know, with hundreds of millions of people using ChatGPT and, you know, exposure to, you know, agents in other contexts, whether it's, you know, Anthropic, Gemini, Copilot, etc. So I think we're already seeing hundreds of millions of what you're referring to as individuals, but, you know, kind of call it access for individuals,
kind of a bulk of at least middle-class Western folks. Although, one of the things that was very cool that I had heard about from a friend who was traveling
in Morocco recently is that the taxi driver was using ChatGPT as the translator for, you know, like, "where do you want to go?" for the tourists. And so, you know, it's very broad indeed. Now, that being said, I don't want to paper over the fact that we live in a human society that has, you know, kind of differences of wealth, differences of power, differences of position, not just between nations, but within nations.
And, you know, that's not going to go away. And so it wouldn't surprise me, you know, if you said, well, actually, the kind of AI that the wealthy have access to has some improvements, you know, maybe real-time responsiveness, maybe, you know, the number of GPUs available, et cetera, over, you know, kind of what a lower-income person has.
Now, that being said, part of the reason I'm really optimistic is, a little bit like, you know, kind of smartphones, which is, you know, three-quarters of the world today has mobile phones. But the smartphone that, you know, Tim Cook has or Jeff Bezos has or Sundar Pichai has is the same smartphone that, you know, the Uber driver has. And so I think that the natural drive in technology, which includes AI,
is building it for the very mass market, you know, the billions. And so I think that I can confidently assert that superpowers will be available very, very broadly, even if, you know, there's also some differences in superpowers based on, you know, country and wealth and, you know, kind of access. But I think democratizing will be the name of the game.
So in your world, AI is really a democratizing technology. Of course, you know, if you're on the early-adopter curve, maybe you get things a little bit sooner. But generally, it's going to take the form of the way cell phones did, where in the 1980s it was a large, you know, big brick.
It cost thousands of dollars until the technology democratized, or the way the internet has kind of democratized things. I ask because there is this fear out there, Reid, that AI is kind of going to be controlled by superpowers, let's say governments,
or a small cabal maybe in Silicon Valley, that they're going to have the technology and kind of the rest of us plebs like maybe won't. But you're saying it'll be more similar to, I guess, the propagation of the internet or the cell phone in that it will be fairly widely distributed and actually be like a technology that's available to the general public? Yes, in short. And part of that's also because, you know, the same...
call it, Silicon Valley ecosystem that built smartphones, that built the internet. And obviously, it's not just Silicon Valley, but there's a lot of Silicon Valley contribution. It is also very similarly building kind of AI, both in the hyperscalers and the large models, but also, at this point, there are so many thousands of startups that you could start mapping them against the various cryptocurrencies; there's a similar order of magnitude.
Let's talk about the AI religions that exist, because I think this was a fairly fantastic framing in your book and one of my chief takeaways. So you talk about, and I'm using the term religion, you could say ideology, you could say philosophy, but just the point is that each of these categories, I think all of them have an expected outcome or an article of faith, because of course the future is unknown. But anyway, so the four categories in your book are
of people with thoughts about AI. And it's useful, I think, to categorize them to sort of understand the worldview a bit better. One is the doomer, okay? The second is the gloomer. The third is the bloomer. And the fourth is the zoomer.
Okay, now these are four different categories, subsets of groups with different perspectives on AI. Could you just define those four categories for us? Absolutely. I'll go through them in that order, which is, doomers basically are like, AI is the destruction of humanity. And, you know, it's very much like the Terminator robot or other kinds of, you know, kind of popular Hollywood themes argued in a way that's kind of like, well,
It'll be more intelligent than us. It'll want to run the Earth. It'll look at human beings as either hostile or ants or equivalent. AI should just be stopped.
And gloomers are essentially: look, I don't think the AI future is going to be particularly good. I think it'll, you know, take away a whole bunch of jobs and kind of disorder society. It may lead to much more misinformation and kind of unbalance democracies. It'll bring a whole bunch more, you know, kind of information surveillance. And so our privacy will be worse, etc.
And so, like, I don't think it's stoppable, because, you know, multiple countries and multiple, you know, companies around the world are building it. And, you know, that's the way that humanity rolls. And, you know, companies are going to become a lot more productive from this. But I think it'll be an unfortunate outcome. And it's gloomers, by the way, because they only see the gloomy side, if that helps people. Exactly. Exactly.
And actually, I'll do Zoomers before Bloomers because I want to spend a little bit more time on Bloomers since I self-identify there. Zoomers are essentially like, no, no, no, this technology is great. It's like the opposite of Doomers and
And it's like, everything that we're going to build with it is going to be really amazing. You know, the sky isn't even the limit in terms of what kinds of things could be made. Or, you know, maybe AI is going to invent fusion rather than us inventing fusion. And everything that comes out of this is just spectacular. And Zoomer, zoom, refers to just hitting the gas pedal. Yes. Just go forward, go fast. Exactly. Yeah. And then Bloomers, which I describe myself as, is kind of a Zoomer, but as opposed to just maximally hitting the gas pedal in all circumstances, you go, well, drive intelligently: avoid the potholes, slow down at the curve, you know, be looking at, kind of, oh, look, this is a little bit of a dangerous area, let's go through this with a little bit more care. Still accelerationist about the kinds of things that we can build in the future, whether they're medical outcomes or climate change outcomes or, you know, kind of human enablement with work and with education, right?
All of that stuff is super important to get to. But, you know, let's kind of make sure that we're not enabling rogue states or terrorists or unbalancing crime waves or other kinds of things as ways of doing this.
And let's make sure that we don't, for example, inadvertently create Terminators, you know, because it's a little bit of a question of how we drive. It's not inevitable. So that's the Bloomer category, and that's the category I'm in. And obviously, if you said, well, you can't pick Bloomer, I'd be closer to Zoomer, much closer to Zoomer than Gloomer or Doomer. But it's also part of the reason why the subtitle of Super Agency, which parallels the podcast, is "what could possibly go right,"
is because we always, as human beings, encounter new technologies with that, oh my God, the world's coming to an end. I mean, remember all those discussions around crypto? Maybe we're still having them, right? And also, you know, by the way, the internet, and by the way, cars, and by the way, the printing press, it always starts with, oh my God, this is the end of society. And then when we start navigating, we go, oh wait, if we do that this way,
we make society a whole lot better. By the way, we have in every technological instance in history so far,
made that happen and gotten super agency through all of them. One can argue about AI, whether it is new and unique in that characteristic or not. And that's, of course, part of why I wrote the book and am going out and talking to people and so forth: to show that, actually, in fact, the only way you can create a positive future is by imagining it and steering towards it. And so that's what we should be doing. Let's make sure we understand these four categories, like maybe by way of example, actually.
So somebody on the Zoomer side of things, and again, we're not referring to Gen Z here, we're talking about Zoomers. I was thinking in my head, another term that Bankless listeners might be familiar with is e/acc. If you've heard that term, Reid: Effective Accelerationists, of which we've had Beff Jezos on the podcast. He's basically like, full speed ahead. Like, let's harness energy, let's harness AI and, like, conquer the universe, full speed ahead. Marc Andreessen, you know, put together a techno-optimist manifesto that has some e/acc characteristics. Zoomer is basically the e/acc group. Is that right? Exactly. Although I think you might say that Zoomers and Bloomers are
kind of two variants of the e/acc group. Because also, by the way, I think, you know, I started using the term techno-optimism some number of years ago. Like, hey, I'm a techno-optimist, not a techno-utopian, which is: you can build great things with technology. It doesn't mean everything you do with technology is great, right? So, you know, do it with some care. I'd say the Zoomers are, hey, anything that anyone's doing with this, it'll end up good,
And the bloomer is, hey, most of the stuff is going to end up really good, let's try to, like, steer a little bit. It's hard for me to actually put people in boxes, like somebody like Marc Andreessen. I don't know if he's fully, like, kind of "everything technological is good," or how much of this is sort of, you know, a personal choice to just amplify this extreme position in order to kind of... He might need to plant a flag in order to shift the Overton window. Move the Overton window. Right. And, like, I think that's
part of the meme games that people like Beff Jezos and maybe Andreessen are doing, but it's hard to speculate on. Okay, so that's the Zoomer. Now, the Doomer is pretty easy. I think we've also had guests on Bankless. Eliezer Yudkowsky, he very much clearly thinks that everything that we're doing right now in AI, basically, we only have years, maybe decades, to kind of live before AI actually supplants us. He genuinely thinks that. That's the Doomer category. So you don't have to go into more detail there. But how about
the gloomer category a little bit more. It seemed to me that this is sort of the mainstream-media type of take on things. And it might even be the popular narrative around AI. Like, if you ask the average American what they think about AI, I think in the 2020s, with the current spirit of the age, there'd be some cynicism about AI, there'd be some pessimism about AI. It would definitely be the glass-half-empty type of take.
And I think that's the popular idea, but who are some archetypes for this gloomer category? So I do think that it's kind of generally speaking, you know, kind of the discourse because the discourse now, just like earlier times in history with earlier technologies, tends to focus around like all the things that could possibly go wrong. Right.
And so many journalists, and definitely the vast majority of people in Hollywood, who are like, oh my God, this is the destruction of the content production industry. And, you know, with Sora and Veo going, all of our jobs are going. A lot of it's focused around job displacement.
So, worries and concerns about job displacement. You know, I think it's more or less kind of like: if you can't put the person clearly in another bucket, they're probably in the gloomer bucket. And that's a little bit like mainstream media. It's the everyone-else bucket.
How about from a political-landscape perspective, would you look at the axis that way here? Because I think a lot of people listening would be like, okay, Democrats are a bit more on the gloomer side of things, and Republicans are a bit more on the, maybe not the zoomer side of things, but the bloomer side of things. Do you think that's an axis at play as well? Well, I think it depends, right? Because there's also a lot of the modern Republican Party that's kind of anti-big tech and
thinks that big tech is too big for its britches and should conform. So, you know, I think that there are kind of, as it were, gloomers in both parties.
So, Reid, I think for the rest of this podcast, I think we want you to make the case for bloomerism here. Like, why is AI going to go really well for humanity? This idea of humans really amplified by artificial intelligence, and it kind of leads to really positive outcomes.
One of the early chapters in your book talks about some history I actually wasn't familiar with. And maybe this is an analog that will be helpful for some; it was helpful for me. So this is the history of the mainframe computer. And you go back to the 1960s.
And apparently, I did not know this, and maybe some Bankless listeners also don't know this: during the 1960s, when the mainframe computer kind of entered the public cultural scene as a new technology, and we had computers that could do incredible things for the time, a media hysteria broke out.
Okay, and there were concerns about this new computer that had the ability to recall, in a few seconds, every pertinent action, including all of your failures, embarrassments, or incriminating acts, from a lifetime of every citizen. There were many comparisons to 1984, the book, of course, in the Western canon by George Orwell, just like this Orwellian society that would be built out by these mainframe computers. There were even congressional hearings, guys.
So one lawmaker warned of the danger of the computerized man, which is a citizen that would lose all of their individuality, their privacy, basically their agency, and they'd be reduced to magnetic tape. That, of course, was the technology to program computers at the time. It's like literal magnetic tape. So give us the history of the mainframe computer in this hysteria. Why do you think this is analogous to what's happening today? So, well, you've covered it.
pretty well. Thank you for actually reading the book. That doesn't happen that often these days. And so...
I think that the question is anytime that we encounter a new technology, and in this case, the mainframe, they were looking at like, okay, what could possibly go wrong? And they think about, well, actually, in fact, this could track everything, make all the decisions, take away the agency of people by putting it into government centralized control. A little bit of the discussion of what's happening with AI in some circles today.
and then make us as human beings
essentially powerless and agency-less. And that's, of course, part of a lot of the, and we talk about this in Super Agency a bunch, is a lot of it was the 1984 George Orwell worries, where that kind of centralizing technology became a control over individuals, and individuals through this kind of control of information, control of power, become almost irrelevant cogs in a machine.
And, you know, if you look at it the same way, well, what's AI doing with my data? Oh, am I going to be able to make decisions because AI is going to be so persuasive and manipulative and advertising systems and information systems? You know, am I going to be able to control my life and work? Or is AI going to be doing all the work? All of those are very parallel, not just obviously to the mainframe discussions, which are relevant and
close. And we'd at least gotten through, I think, punch cards to magnetic tape before we started having all the worries, so it was magnetic tape, not punch cards. That's meant to be a joke. And so, anyway, that was essentially how the dialogue was going, and people forget it now because it seems absurd looking back
on it. I mean, it's kind of like, well, yeah, I don't know why those people thought that. I mean, look at all the computers we have now and look at, you know, the smartphone that everyone has in their pocket is, you know, thousands of times more powerful than those mainframes, right? And, you know, kind of everyone has one and it's kind of, you know, working, you know, throughout the entire place. And by the way, I think everyone's going to have an agent too and, you know, with AI. And so I think that's why the parallel of the discussion to say, you
We're going through all this energy to imagine, like, every possible bad outcome, when a lot more of the energy is better put into what are the good outcomes that we should be steering towards, and which specific bad outcomes to guard against, you know, the ones that are not easily correctable as we get into it. So, for example, you can put a car on the road without bumpers.
It's good to build bumpers later. You can put a car on the road without seatbelts. It's good to put in seatbelts later. But you don't try to imagine all 10,000 things that could go wrong before you put the car on the road.
You've got to put the car on the road and start learning as you're going. And that's where we are with the AI thing. And so for most gloomers, the way to kind of persuade them to switch from, call it, AI-skeptical to AI-curious, within the kind of bloomer category, is to say: start using it. And start using it not just for, hey, I have these ingredients in my refrigerator, what can I cook? Totally good use case. Or, my relative is having a birthday party and I want to create a sonnet for them. Great. But for real things. Like, for example,
Actually, I'll give a personal example because I think this might be useful in particular to the bankless community. So when I first got access to GPT-4, I sat down and said, how would Reid Hoffman make money by investing in AI, as a proxy for what degree of job replacement do I have with GPT-4? And it gave me back an answer that was powerfully written, compelling, and completely wrong.
Because it gave me back the answer that a business school professor who was very smart but doesn't understand venture capital would give. First, you analyze which markets have the largest TAM. Then you analyze, you know, kind of what the substitute products might be. Then you go find teams that could possibly build those substitute products and stand them up in order to invest in them. And you're like, yeah, no venture capitalist who's successful operates that way. Yeah, it's like business school slop, I guess, right? Yes, exactly. Exactly.
And so it's like, okay, but then you say, well, is it completely irrelevant to investing? And the answer is no. Actually, in fact, one of the things that I, like, figured out by the next day was, hey, I can feed in the PowerPoint deck or feed in the business plan, and I ask, what are the top questions to answer in due diligence? And while, as an experienced investor, I might have known all those questions and gotten to them all.
It helped me go, oh, yeah, question number three, I would have figured out that was the right question to ask three days from now. And it's useful to have it now while kind of composing a due diligence plan. And so that kind of acceleration, that kind of amplification, you know, that kind of agency, super agency, is part of the kind of human agency. And so
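Reid's due-diligence trick is easy to reproduce yourself. Here is a minimal Python sketch of that workflow; the prompt wording, the default question count, and the commented-out client call are all illustrative assumptions, not anything Reid specifies:

```python
# Sketch of the workflow described above: wrap a business plan in a
# due-diligence request before sending it to an LLM. The prompt text
# below is an assumed example, not Reid's actual prompt.

def build_diligence_prompt(plan_text: str, num_questions: int = 10) -> str:
    """Build a prompt asking an LLM for the top due-diligence questions."""
    return (
        "You are assisting an experienced venture investor.\n"
        f"Below is a business plan. List the {num_questions} most important "
        "questions to answer during due diligence, ordered by how much each "
        "answer would change the investment decision.\n\n"
        f"--- BUSINESS PLAN ---\n{plan_text}"
    )

prompt = build_diligence_prompt("Acme: a marketplace for AI-built widgets...")

# Sending it is then a single chat-completion call with whatever client
# you use, e.g. (hypothetical `client` object):
# reply = client.chat.completions.create(
#     model="gpt-4", messages=[{"role": "user", "content": prompt}]
# )
print(prompt.splitlines()[0])
```

The value, as Reid describes it, is not that the model replaces your judgment, but that it surfaces question number three today instead of three days from now.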
All of this is a personal story to go back to the bankless community and say, well, start using it for things that matter to you. And even if the first one, like how do you invest in cryptocurrency, doesn't give you anything useful.
Keep trying different things and you may find something, oh, this helps me with how I can operate at speed and with accuracy. And then that gives you a wedge to start learning, you know, kind of how you can be kind of superpower enabled.
I completely agree. My lived experience of using tools like ChatGPT is that it does amplify my productivity when I use it in the right way. And I have to spend time to figure out how exactly to apply this to my own amplification of what I do. I guess when I was reading this section about the 1960s and the mainframe computer, I was sort of putting myself in the minds of people at that time. And you could kind of...
see at that time, the way compute was sort of playing out, it was really controlled by a small number of companies and governments. It was sort of like, I mean, the computers were the size of buildings, right? And so you can sort of take a 1960s mindset and extrapolate that and get very scared. What ended up happening was, of course, the personal computer revolution.
where everybody got those building-sized computers in their own home as an amplifier for their own productivity, and society completely forgot the 1960s hysteria around mainframes. But I can't help but also wonder if some of the criticisms were sort of right. You go back to the 1960s, and they talked about surveillance and the lack of privacy.
And they weren't completely wrong. You know, we didn't get the worst case scenario of what they were projecting, but we did get a lot of good and then some bad outcomes. And this is why I sort of want to ask you about your framing of like, do you actually think the doomers and the gloomers are completely wrong? Or do you think that there's some probability of like a doomer style of outcome or even a gloomer style outcome where AI is like not so...
sunshine and rainbows, that it actually is kind of negative for society. Like what do you think about that from a probability distribution perspective? And do they have a point? So I think smart people always have a point. And so I think the question is good because it's always to listen to what is the thing that they're thinking about? I think the two answers are very different between doomers and gloomers. Let's start with doomers who, you know, another thing, you know, the bankless community may be familiar with is, you know, X risk.
And so they tend to focus on existential risk predominantly, you know, especially Yudkowsky and others. Now, the thinking starts like this. It says: can you guarantee me that killer robots will never be built, either in the hands of humans or autonomously? You say, well, you can't guarantee that. There's lots of things you can't guarantee. You say, ah, so then we have an existential risk that's being added.
And we should stop that existential risk because why should you add any existential risk? QED, my argument's over. Well, until you consider the fact that existential risk is not one thing, like the only existential risk for human beings is not killer robots.
There's pandemics, there's asteroids, there's nuclear weapons, there's climate change, and the list kind of goes on. And so you have to look at existential risk as a portfolio. Namely, it's not just one thing, it's a set of things. And so when you look at any particular intervention, you say, well, how does this affect the portfolio? Now, my very vigorous and strong contention is that AI, even unmodified, and we'll get to why steering is good,
But unmodified at all is net, I think, very positive on the existential risk portfolio. Because when you get to, for example, pandemics, one of the things we've experienced in our lifetimes, and obviously if it was a lot more fatal and everything else, it could have been substantially worse than the many thousands who died.
The question is to say, well, how do you see it, detect it? How do you analyze it? And how do you both do therapeutics and preventive vaccines at speed in order to navigate that? And AI is the primary answer to that. Like none of that can work without the speed of AI.
And then you get to, oh, well, how about asteroids? Well, identifying which asteroids might get to us, being able to intervene on them early. You get to, like, for example, climate change. You go, well, actually, in fact, whether it's anything from accelerating the invention of fusion to how do we manage our electric grids better, there's positive contributions across all this. So you go, okay, given all of that, I think AI even, like,
unmodified, just letting the industry do exactly what it's going to do, is going to be strongly positive in the existential risk bucket. And I'll pause there in case you have a
contention on that before I get to the gloomer category. No, I'll just say it another way: you're saying even in the most fully zoomer scenario, the fastest engine going into the AI revolution, where it hits every single pothole and it's on two wheels as it's going around the corners, even under that situation, the solutions it provides to all the alternative existential risks are still net positive, in your opinion. Exactly. Yeah. Yeah.
So that's the reason why I'm very far away from doomers. Okay. Well, how about the gloomers? Do they have a point? Yeah. Well, no. And by the way, I think the doomers have a point too, which is, you say, hey, by the way, we should try to minimize the killer-robot risk. Yes, that is something we should be doing.
And we can get back to that. I guess your answer would be, like, through use of AI to help us also. Yes, exactly. Okay. Yes, exactly. That feels a little recursive, but... Hey, whenever technology is part of the problem, it's almost always the best part of the solution too. Okay. Right. Okay. That's the optimist. That's the e/acc talking, I think. Okay. But I think I have history on my side, which is good. Yes. And we can get back to the privacy, you know, thread from the mainframe things as well. How about the gloomers, though? So, on the gloomer side.
The primary thing where I think I'm very sympathetic to the gloomers, and we cover this some in Super Agency, as you know, is this:
If you look at the transitions for human societies with these technologies, we as human beings adopt and adapt to new technologies very painfully, like the disruption. So you go, ah, the printing press. We could not have anything of the modern world without the printing press. You can't have science, the scientific method. You can't have literacy. You can't have, you know, kind of a robust middle class. Yet there was a century of religious war because of the printing press. When we as human beings
come to this, the transition period's almost always very painful. And I think even with AI, we're going to have pain in the process. I don't think there's any way, unfortunately, around it. Part of the reason I'm writing Super Agency and doing these conversations is to say, well, let's try to be smarter about it than the times we've done it before. Let's try to make the transition easier and kind of more graceful. But it will still be painful. Like, in terms of, you know, even if you say, hey, most human jobs will be replaced by humans using AI,
that process itself is still painful. People have to learn AI; maybe it's new humans, maybe the human who couldn't learn AI feels out of place, you know, is suffering because of it. And that's the kind of thing that I think the gloomers are kind of putting, as it were, an intuitive finger on, which is: hey, look, all this kind of transition, they'll project it to infinity, but all this kind of transition, boy, this is going to be difficult.
And you're like, yes, it is, right? And we're going to try to make it as good as possible. And that's, again, part of the reason why I'm arguing that we should be intentional here about what could possibly go right. You say, well, and this gets back to the technology being the solution, it's like, well, okay, so we're going to have some job transitions and transformations. We're going to have information-flow and misinformation-flow transformations. And we're going to have some expectations-of-privacy transformations. What should we do?
And the answer is, well, I actually think AI can be helpful in all of these cases. And, like, one of them, part of the reason why, you know, Inflection and Pi was, you know, something that I helped get going, is that an agent for every human being that's on your side, that's for you, you know, and by you, is one of the things that can help you then navigate.
Because it can be like, okay, how do you help me navigate this new world? And I think it's one of the things that's really important for us to provision early that goes all the way back to your democratization question. And one of the reasons why I think that's an important thing to make sure that there's very broad access to. Okay, let's underscore this point because I think some of the reason why the gloomers sort of are winning right now in the narrative war is because like, of course, fear is a bit more viral and it's easier to imagine. It's much easier to imagine an Orwellian future in the 1960s or the 2020s
than it is to imagine a more optimistic future. And as soon as you start talking about this optimistic future, it sounds like too utopian. It just doesn't even sound real, right? But we are limited in terms of our imagination. But that question, that prompt that you just raised is like a chapter in your book. It's the question of what could go right. And the gloomers rarely ask what could go right. And I think, to be fair to them, they have some limitations on their imagination. So I want to ask you, as a kind of a techno-visionary, like,
How would you answer that question? So if every citizen in the United States had an AI agent that amplified what they do, and this technology was widely deployed across society, what could go right? Like, what are the benefits for the average American here? So, line of sight, namely requiring no new technological innovation, it's just a question of how we get it built and deployed: a medical assistant that's better than your average doctor, available 24/7, in every pocket.
So you have a health concern. It's 11 p.m. You have a health concern for your kid, your parent, your grandparent, your pet, anything. You can begin to address it. And it can help you, including going, oh, for that, you should go to the emergency room right now. Right? And so that's buildable. A tutor on every subject for every age. Anything from 2-year-old to 82-year-old. Like, hey, you'd like to learn this. You'd like to understand more. By the way, there's obviously economic implications of that.
That's, I think, another thing that's available. Then to your democratization point, there's a lot of services, not just medical and access to doctors. You know, some people have concierge doctors. Most people have to go through, you know, kind of their medical plan. And some people don't even have medical insurance. Even in the U.S., there's a bunch of people who are uninsured.
What other kinds of things could there be? It's like, well, actually, in fact, like, I'm reading the lease for my rental. How do I understand that? What's important to know about it? Well, the agent can help you with that too. And that's all line of sight today. That's not even getting to, hey, how can it help you, like, code better? How could it help you create marketing plans better? How could it help you sell better? All of that stuff is also coming. But, like, those three basics are,
for everybody, life-transforming. What about the societal level? So when those things for individuals kind of aggregate and compound, we have better healthcare, we have better learning capabilities, we have better things in all areas of our life. What does that amount to for the United States from a societal perspective? Do we have more free time as a society? Does our happiness increase? Does our GDP double or triple? Do we get those things as well? Well, I definitely think
the equivalent of what GDP is supposed to be measuring should increase. Now, GDP has this challenge that it's measured in kind of an industrial, dollars-for-things way. So, for example, all the benefits you get from Wikipedia are actually deflationary in GDP, even though the quality is real. You know, another thing that people worry about with AIs is, oh, I'm not going to spend time talking to people, I'm going to spend all my time talking to agents.
And so loneliness will increase versus decrease. I think that to some degree, that's a design, you know, kind of choice. And I think what we want to both see, and I hope we'll get, and we want to nudge towards, is, you know, like when you say to Inflection's Pi, hey, you're my best friend, it says, no, no, I'm not your friend. I'm your AI companion. Let's talk about your friends. Have you seen them recently? Would you like to talk to them? You know, maybe you could set up a lunch date, that kind of thing.
And I think that could lead to much greater happiness. And I do think that, actually, you know, part of what I love about the concept that Bhutan, you know, evangelizes is that I actually think measuring, you know, kind of gross national happiness is also a good thing that we should be aspiring to as a society. And I think that could be, you know, increased with this. But I think that the place where we'll see it is in living much more kind of fulfilling lives. And the fulfilling life might be, you know, kind of like, hey, I get more time to do my hobby. You know, I love fishing. I'm going to have more time to do fishing because I can do my work in a shorter amount of time. Or, for people who, because, you know, American society tends to like to work, it's like, oh, I can accomplish a lot more in my work. I may be still working the same amount.
But as opposed to putting a whole bunch of time into form entry, I can now do the parts that are not just like form entry and do the other parts of the work in much more kind of fulfilling and capable and productive ways. Yeah, one way I think about it is like 100 years from now, will the average person in the United States or wherever this technology is deployed, like have a better quality of life? I think of, you know, the 2020s and I compare that to the 1920s. And I would like hands down say,
like, prefer to live in the 2020s, for all of its problems, than in the 1920s. Before the advent of antibiotics, like, you know, look at kind of mortality rates from that time, look at kind of the amount of society that had to basically do a grueling, agrarian-type farm job just to get by, right? And it's, like, much better for most people now than it was previously. And there's lots of stats we could get into on that. But
let's just pause and go back to kind of another gloomer objection. So they would say, Reid, everything you're saying sounds so amazing. But, like, yeah, we've heard it before. This is another bait and switch from Silicon Valley. Okay.
They promised, remember the 2000s, the advent of Facebook, not to mention LinkedIn, they promised that we would connect the world. And what ended up happening? Silicon Valley got rich. They extracted our attention. There's, you know, the term extraction is used a lot about this.
They sold us as products to the highest bidder. And now I'm thinking about even the time I spent with ChatGPT. And it feels really good right now. It's amazing. I spend more time with ChatGPT than I do with Google. And as a result, I think ChatGPT knows me even better than Google. I mean, a lot about me could be revealed by my search history, but even more so with ChatGPT. And I'm getting to the point of daily use where it's like, who knows me better than ChatGPT? Maybe my wife knows
maybe a handful of other individuals, but like it knows me and that all feels good because it's amplifying what I do, okay? But what happens if things go dark? If we get this kind of like bait and switch? If suddenly OpenAI or whatever, your Silicon Valley like corporation here,
start saying, oh, you know, all this AI stuff is pretty expensive. We're going to have to start harnessing all this data we know about Ryan to like do something and sell it to the highest bidder. Cambridge Analytica 2.0. Yeah. Or maybe they sell it out to the government or something, or they control me in all of these subtle ways by recommending things that aren't in my best interest. It's in their best interest or some government's best interest. Okay. This is the crux of the bait and switch.
And so address that head on, Reid. Like, how do we know this isn't a Silicon Valley bait and switch? Because it feels like that's happened previously with social media. Well, I mean, it a little bit depends on what you mean by bait and switch. Because I think, for example, let's take your example with Google and AdWords.
Yes, Google gets a bunch of data from you and can advertise to you, you know, better. And, you know, by the way, hopefully that means that the products that you're seeing are actually things that might interest you, which I think is a feature, not a bug, in terms of things you might want to buy. And it actually has so far the best business model that's been invented, certainly in the media world, maybe in any part of the world today.
And they say, well, what do you get for your data? Well, you get a panoply of amazing free services: free search, free email, a bunch of other things. And so it's a voluntary, you know, kind of transaction, something you participate in because you get a bunch of value.
And by the way, you'd probably rather have it figure out how to monetize off your data than saying, oh, in order to get our ARPU, you've got to pay us, you know, 50 bucks a month, right? Like, no, no, no, I'd rather, you know, it get the advertising right and give me all this stuff for free. And so it's possible that, you know, the kind of AI agents will end up in a similar kind of thing, where they say, well, hey, look, we could charge you 50 bucks a month, like Google could for their search,
but actually, in fact, figuring out a way that it's kind of, you know, transparent and voluntary and engages with you, because it shouldn't be deceptive. It should be with kind of your awareness in engaging and using it that this becomes a positive, you know, kind of economic transaction for you. But it could be other things too. It could be a subscription model. It could be integrated into the various productivity apps that you're using. It could be any number of things. I think that the dialogue that's, you know, kind of very well captured in that compelling slogan, surveillance capitalism, is misleading. Because it's like, well, I, for one, like surveillance medicine. I like, you know, the fact that my, you know, watch is tracking my sleep and health things for me, and it helps me, and it's part of that positive thing.
And a lot of the uses of this data in these internet systems are a way of making it free for you, where they have the economics for expanding and improving the free product.
And so, you know, I think that would challenge the bait and switch kind of methodology. And, you know, the last thing I guess I would say is like, for example, you say, well, you know, whether it's a social network, you know, and I, by the way, obviously think LinkedIn has handled this, you know, the best of, you know, all of them, whether it's Google, whether it's these things.
These are all voluntary participation questions. You might say, well, it's very hard for me to participate in modern society without being informed in the way that I could be informed this way. And it's like, okay, you know, like, yes, I myself use search a lot, but maybe you should say, hey, Google should offer a paid alternative too.
On the other hand, you know, for that to be economically viable, at least two or three percent of the audience would have to opt for it, right? And I'm not even sure two or three percent of the people would opt for it. I mean, you'll get individuals that would do it, but that might not be, you know, economically relevant unless at least two or three percent of the people were doing it. So anyway, I think it's a challenge to the challenge, as it were. Yeah.
I think the Bloomer take on this was sort of interesting, right? Which is like acknowledging that there are some potholes and there's some costs, right? Maybe to society and to individuals, but also saying that the benefit far exceeds the cost. One way you underscored this was like, you said this, and I wanted to get you to justify this because it kind of blew my mind when I was reading it. Even if LLMs get no better, that is no better than today, the consumer surplus to the average 20-year-old living today is millions of dollars over their lifetime. Yes.
Okay, so what you're effectively saying is, for an actual zoomer, so somebody in Gen Z, you know, somebody in that age demographic, they're going to be able to harness LLMs, and it's going to deliver millions of dollars in value to them. And that's not even talking about the LLMs and AI of the future. That's talking about the AI of today.
How? Some people think about that, and they're like, how does that deliver someone millions of dollars in their lifetime? Well, let's just start with something that's really simple, which is, you know, legal assistance. So you're going to encounter employment contracts, you're going to have rental contracts, you're going to have, you know, products and services you might be engaging in. And today, your average person just basically can't afford to pay a lawyer, right? Because the lawyer is hundreds of dollars an hour.
Well, now, even today, with GPT-4, you can put it in there and get useful analysis, useful kind of participation. So if you just take every single contract that you're potentially engaging in and use that, that gets you a lot of dollars towards your millions of dollars.
Then you say, well, what about, like, medical stuff, right? Like consulting on medical or other kinds of things, especially since, you know, in this country we tend to do insurance in kind of challenging ways, mostly through employers, right?
Like, okay, so getting medical advice. Well, that's another area where you can get a bunch. Then you say, okay, well, how about amplifying my ability to find and do economic work? That's another place. And so when you add all that up and you add it up for hopefully what is even a longer life, because if you're getting kind of call it
pre-critical medical advice about how to preventively stay healthy and preventively avoid certain kinds of, you know, catastrophic health conditions or navigate like early signs in ways that you can do it before you're in critical condition. Not only is that hugely economic, but that should also lead to longer lifespans. And so all of that is part of how, you know, we get to, hey, today it's already worth millions to you.
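The conversation never breaks the "millions over a lifetime" figure into numbers, but a back-of-envelope version of Reid's buckets (legal, medical, work, plus longevity) can be sketched. Every input below is an illustrative assumption chosen for this sketch, not a figure from Super Agency:

```python
# Illustrative back-of-envelope for the lifetime-surplus claim.
# All inputs are assumed example values, not figures from the book.

years = 60                    # remaining adult lifetime of a 20-year-old
legal = 5 * 400               # ~5 lawyer-hours/year avoided at $400/hr
medical = 2_000               # cheaper advice, earlier interventions
work = 0.15 * 80_000          # 15% productivity uplift on an $80k income
annual_surplus = legal + medical + work

longevity = 2 * 80_000        # ~2 extra healthy earning years
lifetime_total = annual_surplus * years + longevity

print(f"annual ~ ${annual_surplus:,.0f}, lifetime ~ ${lifetime_total:,.0f}")
```

With these particular assumptions the total lands around $1.1 million; the point is only that modest per-year gains, compounded over a working lifetime, plausibly reach the order of magnitude Reid cites.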
So, Reid, this gets us to regulations. And I think the gloomer camp has one take on how we regulate AI, and, like, the e/accs and the bloomers and the zoomers have a different take on this. But generally, what I'm seeing coming from the establishment government is, like,
It's no gasoline on this thing. It's all brakes. They have this precautionary principle, which is like they think about what could go wrong and how to prevent all of the things that could go wrong. You make a different argument. I think that your argument is that innovation is actual safety. So you're making the argument, and I want to hear how this makes sense, but you're making the argument, I think, that actually hitting the accelerator on AI is how we make this thing safe. And that feels very counterintuitive, right?
What's that claim based on? Why do you think innovation is safety? So, for example, when you get to, like, how are modern cars able to go these speeds, and go them much more safely than earlier cars did, it's that as you iterate and deploy them, you realize things like, oh, actually, in fact,
We could put in anti-lock brakes. Oh, we can put in seatbelts. Oh, we can have crumple zones. Oh, we can have bumpers. And that's an innovative path to making the car safer. And the car can then go faster and navigate circumstances because you've innovated safety into the car as part of the innovation with the car.
And the parallel with that is essentially doing, you know, kind of like, well, what are the future features of AI and what are the things that we could be doing
that make them much safer from these kinds of alignment concerns. You're like, okay, so can we make the AI really enable, you know, people who are trying to figure out stuff with, like, their health and other kinds of things, but also make, you know, any efforts at terrorism kind of much more difficult and much harder? And by the way, this is of course what, you know, red teams and safety and alignment groups are already doing at Microsoft and OpenAI and Anthropic and others,
because they're aware of these kinds of safety things. But it's that innovation
into the future that is the kind of really important thing. And the way you discover that is by iterative deployment, by actually making it live and then seeing what things need to be modified. Now, obviously, on really extreme things, like, well, okay, terrorists who are creating weapons of mass destruction, we want to make sure that that's as close to impossible as we can make it, in any field.
And, you know, for example, safety groups more or less use as their minimum benchmark: let's make sure that these agents are not any more capable of enabling that than Google searches are today, right? And obviously, we want to drive both of them as low as possible.
But that's what, you know, kind of innovating to safety means, in a kind of historical and easy-to-understand car example, and what it means in terms of technological features for building future software. Last objection here that I think comes up is this idea that AI kind of kills human autonomy. Like, this is a control technology, it's not a freedom technology, basically. So, like, in this AI world that we're all moving towards,
I mean, where is my agency? I mean, you titled the book, Reid, you know, Super Agency, right? But I feel like I have less agency if the AI is making all of the decisions for me. I want you to address that too, because it kind of ties into this concept of freedom. And I sort of wonder how much of this is also like boiling the frog, like we just kind of get used to it. And maybe that's okay, but maybe it's not. So if you went back to the 1960s and polled those same people and you told them, hey, in the 2020s,
most adults would actually meet their future spouse or mate or partner by computer algorithm, basically computers deciding. That's actually the lived experience of how most people meet and get married today. They meet via a social network of some sort. They meet on, like, you know, Tinder or whatever dating site they subscribe to. It's kind of the algorithms that are almost matching them. That would have sounded dystopic in the 1960s; now it's like, oh, I've kind of gotten used to it. I've met many couples in healthy relationships who sort of met online by route of computer. Anyway, back to this.
This argument that AI agents making the decisions and outsourcing that part of our intelligence will actually restrict our freedoms. What do you make of this? Or do you think that there's some merit to this argument? So I think one of the things that I said at the very beginning is agency changes. It isn't just new superpowers; as you get to super agency, it also changes some things around. And so, for example, think of it as kind of different tactile perceptions of what it means to be kind of human and how humans engage in life. Like, when you first make a technology, it feels kind of alien, you know, fire and agriculture and, you know, glasses and computers and phones. But then, you know, it starts feeling like, you know, kind of everyday life. Like, our grandparents use phones now too, even though at the beginning of the kind of smartphone era, it was like, oh, this is one of those newfangled things. I'd rather just, you know, get on the landline and call my, you know, grandchild or whatever.
And so it does make changes. And part of the iterative deployment and learning about it is: how do you make those changes such that when we get to the future state, we go, oh yeah, this one's better? And you say, well, is our current state just adapting, and was the previous state's judgment actually the correct one? Well, if you kind of look at it, like, take the 1920s and the 2020s: do you actually understand what the world past, you know, kind of penicillin and antibiotics and all the rest of the stuff really fully looks like, and what the consequences of all that are, and why the portfolio of it is so much better?
And so you actually have to take that learned state into account. It's kind of like, think about, you know, the judgments that you make as a child versus the judgments you make as an adult. You go, well, look, there's a certain, you know, kind of innocence to childhood, but we get wiser as we learn and gain experience, and we use that as the viewpoint for making good judgments. And that's part of the reason why I think, yes, you'd say, hey, you know,
you're meeting your life partner now on an internet service. Like, whoa, that seems really alienating. But actually, in fact, it's: okay, how do we make that a lot better than the lottery of college or the workplace, which was very limited, than what you had before? And again, it's an iterative process. It doesn't mean that there aren't still some things that are broken in the internet ecosystem of, you know, kind of dating things. But it's one of those things where we say, we know how to continually improve it. And that's one of the things that we continue to work on. Reid, as we begin to close this out, I want to ask a question about the United States and America. You know, based on your different religious preferences for AI, you might decide to kind of regulate this thing in one direction or another. And the question becomes, okay, how do we implement this technology effectively
across America, across society. There are some that get to this stage of the conversation and they're like, well, the doomer take and even the gloomer take is not sustainable because we live in a multipolar world with many different actors and this is kind of an AI race. And so if not us, then our adversary doubles their GDP and we kind of stay stagnant and that leads to a world that maybe we don't like. So I want to ask you this question. What do you think America should do here? Like what should our approach for AI be?
Well, one of the things that I've started doing is calling artificial intelligence American intelligence, for precisely this reason: it's really important that we embrace this cognitive industrial revolution, because the societies that embraced the industrial revolution had prosperity for their communities, their children, their grandchildren, and kind of made, you know, kind of the modern world. And I think the same thing is true for the cognitive industrial revolution with artificial intelligence, or amplification intelligence, or American intelligence. And we want kind of the esprit de corps of American values, the American dream,
the empowerment of individuals, the ability to, you know, kind of do your best work and to, you know, kind of make progress from wherever you start on the rungs of society, to take more economic control over your destiny. And I think it's one of the reasons why it's particularly important that American values are deeply embedded in this, and that it's an empowerment of American society. And it's part of the reason why I think that our regulatory stance
needs to be much more, you know, bloomer, zoomer, and accelerationist than it does, you know, putting on the brakes because I think that's part of the future of the world as we can help make it become. As we close this out, then just a final question. Is there any chance in your mind that all this AI stuff is kind of overhyped?
We basically, like, flatline here with ChatGPT and GPT-4, and the innovation really slows to a crawl. That, like, none of this matters that much because it'll happen very slowly over time. I think there's zero chance of that. So I think that already we see enough in these scaled compute and learning systems that are just
only beginning to get deployed. Like, part of what 2025 is going to be the year of is the acceleration of what happens in software coding across the board. And that software coding is both going to enable a bunch of other things, like all of us as professionals having a coding copilot that helps us do our work in various ways, but it's also a template for how you advance a bunch of other functions
of all of this work. So I think even if you say, hey, GPT-5 is only going to be, like, you know, 10% or 20% better (and I think it'll be a lot better than GPT-4), and the progress of the increased cognitive capabilities slows down, I think the implications throughout the cognitive industrial revolution are there. The technology is already visibly present, and it's just a question of how we build it, configure it, deploy it, integrate it. And I think that's part of the reason why, you know, American intelligence. There you go, guys, from Reid Hoffman: zero percent chance that all of this stuff slows down. So into the frontier we go.
And we're going with you, Bankless Nation. Reid Hoffman, thank you so much for joining us here today. It's been a pleasure. My pleasure as well. I look forward to the next. Yeah, we'll have to talk about crypto in the next conversation. So everyone listening, the book is called Super Agency. It is out now. We'll include a link in the show notes. Fantastic book with Reid's entire thesis around this distilled. Gotta let you know, of course, crypto is risky. So is AI. You could lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the Bankless journey. Thanks a lot.