I'm Reid Hoffman. And I'm Aria Finger. We want to know what happens if, in the future, everything breaks humanity's way. With support from Stripe, we typically ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take. This is Possible.
Reid, we have been hearing for a long time that OpenAI was going to convert to a for-profit as a way to get the capital they needed to essentially fulfill their mission. They have confirmed that their nonprofit parent will continue to hold majority control over the for-profit subsidiary, and that the for-profit subsidiary will be a PBC, a public benefit corporation. And Sam Altman said specifically in his letter that this keeps mission ahead of margin, even as the company scales enormously and takes in an enormous amount of capital. So my question for you is: is this a good outcome vis-a-vis the alternative of just becoming a traditional for-profit? And why do you think a PBC in particular was the right move, or was it the right move? It's actually not only just good, it's essential
for getting the kind of humanity benefit that is the mission of OpenAI. In case people don't know what a PBC is, it's a fairly simple thing: there is a mission, a statement about what the company is about, that the board of directors
should prioritize ahead of revenue, ahead of profit. Now, there's a reason why it's in a company, right? Which is, hey, you need the revenue to scale. You have a customer focus in terms of how you're operating. You have shareholders, where the shareholders and their capital both invest heavily to, you know, kind of make the mission possible. But also, by the way, you then have stock benefits for employees, who also make the mission possible. And so, you know, I tend to think there are a lot of people who, for their own political reasons or out of ignorance, are kind of attacking this, but they don't understand that this is fundamentally just an elaboration of the mission that OpenAI has been on.
It could be people who are kind of, you know, academics or even, you know, anti-capitalists who go, oh, companies bad. You're like, well, actually, in fact, the only thing building these kinds of things at scale is companies. And, you know, that's the thing that allowed OpenAI to get where it is, because it had this as a, you know, kind of subsidiary corporation. Then there's competitors. There's people who are like, oh, no, no, that shouldn't be a company, even as I built a company, because I would want it to not raise the capital, not be able to compensate talent. I'd like to be able to hire all the OpenAI people away with the stock option packages that I'll be giving them in my non-public benefit corporation. And, you know, we've seen obviously a ton of efforts on that too. And so it's part of the reason why I think it is an absolute mandate for keeping the mission, for keeping
the kind of AGI-for-humanity mission at the center of this organization, as much as in any other scale organization. And so do you think there's something specific or special about LLMs, frontier models? Like you said, okay, Inflection, Anthropic, now OpenAI are all PBCs. I think people are sort of generally familiar with B Corps, and you have companies like Patagonia and Danone. Like, should all companies be PBCs or B Corps? Or is there something special about these frontier models, because they are going to affect so much of humanity, and we hope for the better? I don't know what percentage of companies ultimately should be PBCs. I mean, to some degree, it's a choice about what you're shaping and building and which risks you're going to take and all the rest. Now, when you have companies that have
a, you know, kind of, call it humanity-wide impact, even broader than society-wide, the question becomes:
How are you maximizing benefit for humanity beyond the normal commercial applications? Hey, new products and services, new contributions to the economy, new jobs, you know, da, da, da. All of that's really great, but what are you doing beyond that? That also, of course, brings up that question in societies. It's part of the thing that I've been advocating for, you know, maybe a decade now: as technology companies get to a certain size, they have to think of society as a customer as well,
in terms of how they operate. Now, you don't necessarily need to be a PBC in order to navigate that; it kind of depends on how urgent that topic is. And so all of that kind of aligns with why you might have a public benefit corp.
And obviously it's part of the reason why Mustafa, Karén, and I did that with Inflection, which obviously continues as a PBC. It's part of the reason why Dario and the crew at Anthropic did that. And it's part of the reason why OpenAI is setting this up. And actually, in fact, you know, I would like to see every new, you know, kind of major AI company that's seeking to build a frontier model be similarly clear on their understanding of how this enters into the mix with humanity, and on what specific things they're steering towards and what specific things they're trying to steer away from. Right. You have to have humanity as a stakeholder. And
so switching gears: one of the places that I think both you and I are most excited about AI is the education sphere. The idea that you can have a personalized tutor could be totally transformative and positive, yet there might be sort of a messy middle time. Google just announced they will soon allow children under 13 who are supervised through Family Link to chat with Gemini for homework help, storytelling, etc. And so that's at the young end.
But then at the higher end, there's the article titled, appropriately, "Everyone Is Cheating Their Way Through College," talking about how AI-assisted cheating, essentially, is rampant. You have quotes from students saying, like, all my college degree is worth is that I'm good at using AI, and professors who can't wait to retire because they can't figure out this AI thing. And so I
sort of agree, this is a messy time. It's not so easy to figure out how to do this. It doesn't mean we should throw out AI. But if you were talking, and I know you are talking, to educators and administrators who are reacting to this moment, where should they begin? How should they think about this time? Wishing for the 1950s past is a bad mistake. The fact that universities have not adapted is the problem. It's like, well, but I already have my curriculum, and this is the way I've been teaching it for the last X decades, et cetera. It's like, well, exactly as you say it, obviously the interim is messy, and likely there will be a bunch of things that are broken. And so obviously, you know, part of what will happen is technology tends to get adopted by people who have the most intense need and use for it. And obviously a student goes, huh, I could spend 30 hours writing an essay, or I could spend 90 minutes, you know, with my ChatGPT, Claude, Pi, whatever, prompting, and generate something for that. And obviously,
to some degree they are underserving what they actually really need, because the whole point of this stuff is education and learning. There's also a point about having accurate assessment of how you're learning and so forth, because that's part of how we do things. And so all of that being disrupted and in turmoil right now is not great. Now,
a university professor would say, well, we should slow it all down until we figure it out, and so would many universities. I've talked to a number of university professors who had exactly that point of view. And part of it is, you say, look, I get it. You're in the same kind of disruptive circumstance that other people are in when they're encountering this, whether they're coders or lawyers or doctors or analysts or financial people, et cetera, et cetera, which is,
hey, you can't just say, I'm going to ignore the new tool. And so there's a whole bunch of ways that, with no new AI development, you know, teachers and professors can be using it; they just have to bestir themselves to do so. Here's something a professor could do today, a teacher could do today. It's like, all right, so you're teaching a class on, you know, Jane Austen and her relevance to, you know, kind of, call it early literary criticism or something like that. And you say, okay, well, I went to ChatGPT and I generated 10 essays, and here are the 10. These are D-minuses. Right. Do better. And yes, use the tool for doing it.
But if you essentially said, hey, as opposed to 90 minutes, what I was doing is I was actually spending 20 hours with it, refining it, understanding essays better, doing that kind of thing, then actually you'd say, well, in fact, I'm probably learning more than I was learning before, when I had to type the whole thing. I'm not learning some things, and I'm learning new things, but it's probably ultimately transformative.
And so that's where you need to be going. Now, part of the reason why I'm absolutely confident in this educational approach in the long term is, I think it is practically guaranteed that the way assessment is going to change is going to be, essentially, the AI booth.
Right. Like, you know, whether it's an essay or an oral exam or anything else, you're going to kind of go in, and the AI examiner is going to be with you doing that. And actually, in fact, that will be harder to fake than in the pre-AI times, because in the pre-AI times, most people, including myself, who had, you know, some moments of getting great grades, actually figured out how to hack it. Like, what's the simplest way to study? When you're in that sit-in-the-classroom-and-write-the-essay exam, what's the way you could produce something that isn't really that grand but works within the 30 minutes you're supposed to write it in, et cetera? And so there's a whole bunch of techniques for that, and so you could actually, in fact, hack that and know less about the overall subject. Part of the reason why oral exams are hard, and generally reserved for, you know, PhD students, sometimes master's students, et cetera, is because, actually, in fact, to be prepared for an oral exam, you've got to be across the whole zone.
Now, imagine every class had an oral exam, essentially. Oh, you're going to have to learn a whole lot more in order to do this. And I think that's ultimately how this stuff will be. Now, as per your question again, look, we're in a disruptive moment. We have a bunch of professors who, just like classic, you know, established professionals, go, I don't want to be disrupted. I want to keep my curriculum the way it is. I want to keep doing the thing that I'm doing. And it's like, well, no, you can't, right? And so you need to be learning this. And that's part of the reason why, you know, with LLC and others, I'm doing this kind of work: okay, what does this mean for thinking about new curriculum? What does this mean for new education, new learning, new teaching, new assessment, et cetera? To put a bow on it with something that I know you also agree with, because we've talked about this a bunch: the most central thing is preparing students to be capable, healthy, happy participants in the new world. And,
obviously, your ability to engage with, deploy, leverage, and utilize AI, AI agents, etc., is going to be absolutely essential. And, you know, it's part of the advice that I give young people: look, one of your advantages is you're much more deeply and much more naturally AI-native. You can bring that to the workplace. Because just like professors who say, hey, I'd like to just keep the same kind of take-home essay that I've been doing before, have to change, workplaces have to change too. And the question is, how do they find it? Well, the new blood
gives you really, really great opportunities. On this podcast, we like to focus on what's possible with AI because we know it's the key to the next era of growth. A truth well understood by Stripe, makers of Stripe Billing, the go-to monetization solution for AI companies. Stripe knows that when launching a new product, your revenue model can be just as important as the product itself.
So that fits perfectly into my next question. But I have to say, I thought it was pretty hilarious: my good friend was applying for a job at Anthropic. And in the Anthropic application, which she screenshotted and sent to me, it said, hey, we love AI. Okay, good, check, Anthropic. But please do not use AI for any aspect of this application. Which I just thought was a little absurd coming from one of the frontier model companies. So I feel like, you know, that will change over time as well. Absolutely. By the way, look,
the thing that should be on it, and maybe this also goes back to education, which is the reason I'll say it before we move on, is: how did you use AI to do this? How were you uniquely differentiated? What was your theory of it? What did you see that other people don't see in doing it? That's obviously what should be on it. Yeah, it was one of those question-mark moments. But I feel like a lot of people have been discussing this, especially recently. We had Derek Thompson come out with an article about how the wage premium of a college degree is decreasing. Is that because of AI? Is that because of a million other factors? We've also had all of the memos. We talked about Shopify's memo saying this is an AI-first company. We saw the same thing coming from Duolingo. And so I
think I know the answer to this question from you. But everyone's talking about, like, you know what they're always going to need? They're always going to need electricians. They're always going to need, you know, certain healthcare things. They're always going to need that nurse. There are certain things that lag in automation, and so people think they're, you know, sort of AI-proof. If you were entering today's labor market, is your advice to double down on sectors that lag in automation, like the skilled trades? Or do you think people should lean into AI-intensive fields where the tools are table stakes? So generally speaking, I think everyone should be learning and using AI. It's kind of like, if you haven't found something
where AI today could be seriously helpful to you (and, in the words of Ethan Mollick, the worst AI you're ever going to use is the AI you're using today), then you haven't tried hard enough, you haven't been creative enough, you haven't been studious enough, you haven't been asking others questions enough, et cetera. So everyone should be doing that. Then it kind of probably forks into: how comfortable are you
with using this kind of AI tool set that's going to be evolving very quickly, and therefore changing what your interface point with it could be. Like, for example, at the most extreme, you could easily see getting to, hey, I'm kind of deploying a set of AIs on a problem, and I'm actually, in fact, just trying to keep up and help and make judgments, because the AIs get so good at doing that kind of thing. You go, okay, well, that would be an AI-intensive task. Am I comfortable with that being a possible outcome, with it, you know, kind of going there? Or you're like, no, no, no, I need to know that this is my unique value, this is the thing I'm doing. And so I'll be a nurse; I'll do stuff that the AI, embedding into the world, or other things are much, you know, kind of slower to do. And I think that gets down to individual preference. Now, both should be engaged with AI seriously. Yeah, absolutely. I mean, it was interesting when I saw the news last week. Of course, you don't want the returns to college degrees to lessen. You don't want new college grads to be unemployed. But one of the things they cited in the research was that more job descriptions are not requiring college degrees. And that's obviously out of a lot of the work that
you and I have done with Byron at Opportunity@Work to make it so that a degree isn't actually a barrier to the job market. So it's also, no, we want to make sure that people without college degrees also have access. So, awesome. Reid, thank you so much. Pleasure. Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young.
Possible is produced by Katie Sanders, Edie Allard, Sarah Schleed, Vanessa Handy, Aaliyah Yates, Paloma Moreno-Jimenez, and Malia Agudelo. Jenny Kaplan is our executive producer and editor. Special thanks to Surya Yalamanchili, Saida Sapieva, Benassi Dilos, Ian Ellis, Greg Beato, Parth Patil, and Ben Relles.