WBUR Podcasts, Boston. This is On Point. I'm Deborah Becker, in for Meghna Chakrabarti. With technology rapidly transforming our world, especially artificial intelligence, there's a lot of interest in the executives leading technology companies. Some call them tech bros. Others describe them as a new tech aristocracy.
Among them is the CEO of OpenAI, Sam Altman, known primarily for the chatbot ChatGPT. Altman has not only worked to develop and improve artificial intelligence, he's also spent a lot of time thinking about the risks involved and how AI might be harnessed.
Here's Altman describing where he thinks AI is headed. You know, for kids that are about to be born, the only world they will know is a world with AI in it. And that'll be natural. And of course, it's smarter than us. Of course, it can do things we can't, but also who really cares? So I think it's only weird for us in this one transition time.
ChatGPT was developed by Altman's company OpenAI. In just five days after its public release in 2022, ChatGPT garnered one million users, making it one of the most successful product launches of our lifetime. As of March of this year, about a half a billion people use ChatGPT every week, with about 20 million paid subscribers, and that's according to Forbes.
Just three years ago, not many people knew who Sam Altman was. Now many of us have at least heard the name of this 40-year-old executive. Altman's meteoric rise to the top of Tech Mountain has been unusual and at times controversial. He's had nasty public arguments with some of the other tech titans, and he's ruffled enough feathers to get fired as CEO of his own company and then quickly reinstated.
Our guest today knows a lot about Sam Altman and the AI ecosystem. Keach Hagey is a reporter covering the intersection of media and technology at The Wall Street Journal. Her new book is titled The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Keach, welcome to On Point.
Great to be here, Deborah. So let's start with the title of your book and the subject of your book. Who, if you could give us sort of the headline, who is Sam Altman? How would you describe him, and why Optimist?
So Sam Altman is sort of the essential Silicon Valley figure. Before he was a CEO of OpenAI, he was the president of Y Combinator, which is a startup accelerator that is really at the very heart of Silicon Valley culture. And he proved himself to be a very skilled fundraiser, an incredible salesman, a great storyteller, and kind of a futurist who could convince other people that he could see the future. And it was out of Y Combinator that OpenAI grew.
And he was able to bring in investment and kind of create a company that challenged the big tech companies like Microsoft, which eventually became its partner, and Google in this AI race. And really, this was sort of a network of folks that Altman got involved with. And your book really talks about how important that network is, that network of Silicon Valley entrepreneurs. Right.
Yes. Sam Altman has an incredible Rolodex of contacts that he's been building since he was really a teenager at his first startup. Even then, he was taking meetings with other startup founders. This all really happened through the Y Combinator network. He would give advice and do favors for people, and he built this sort of enormous network of relationships and power that he was then able to leverage to start OpenAI.

Now, I just want to get this on the table before we really dig in here. At first, Sam Altman did not want to talk with you for your book, but eventually he did agree. What were his reasons for not wanting to speak with you, and why did he change his mind?
Yeah, Sam was not thrilled about the idea of this book project at all initially. He said that it was too soon. That was one of the reasons. You know, he's still a young man, and he has big ambitions for OpenAI, as well as other investments, in things like nuclear fusion. And I think he really wanted to see those things come to pass so one could write about them. So he feels like the story is still very much in the middle. And he had trouble with the idea of it being about him.
You know, maybe a book about the company, OpenAI, he would have been okay with, but he did not like the idea of a traditional biography, which this mostly is. And so then, what made him come around? Well,
As I wrote in the book, this is my second book, and my first book was about Sumner Redstone, the media mogul who was at the very end of his life. And he was in something close to a vegetative state the entire time I was writing the book, so I'm pretty used to writing about figures without their consent. Basically, I don't need you, Sam. I'm going to write it anyway. Yeah, so I just kept making calls, you know, for many months. And I think, you know, he finally came around.
And were there certain things, I mean, did you have an agreement? Were there certain things that were off limits or could you write whatever you chose?
No, this is an independent work of journalism. I knew that from the outset. I didn't ask for his permission and just tried to be an objective journalist. So let's talk about some of the things you write about. Let's start at the beginning. He grew up in the Midwest. He got his first computer at eight years old, went to an exclusive high school. What do you think are some of the things from his childhood that really shaped the person that he's become?
So Sam grew up pretty comfortable. He grew up in this very comfortable suburb of St. Louis called Clayton. And as you said, you know, went to a prestigious private school. And I think that gave him an enormous amount of confidence. He was always sort of the smartest person in his class and very charismatic. Even in high school, he was drawn to tech.
But one of his teachers that I spoke with said, oh, Sam, don't go into tech, you're too personable, because he was also interested in all these other things. So he always had a bit of a politician's ability about him. Which has served him well.
Absolutely. And one really key moment from his high school experience was, you know, he was gay, and he came out as a teenager, which was not easy in conservative Missouri in the late 90s. And there was this moment where he stood before the student body after what he perceived as a homophobic event, and he kind of stood his ground and said, we will not tolerate this anymore.
And others perceived him as being very brave and a leader for doing that. And I do think it kind of galvanized his leadership abilities. So where did that confidence come from, do you think? You know, I searched and searched for it. And some of it is just innate. His mom said that he was just born kind of an adult and that she could have dropped him in New York City at the age of 10 and he would have figured out how the city worked.
He was able to program the VCR at a very young age, these kinds of difficult things. So some of it is just innate. I do think a lot of it comes from, you know, he was one of four siblings, he grew up in a loving family in a safe neighborhood, and I do think that gave him a lot of confidence. But some of it is a very rare quality he has, where he is less afraid of things than other people are. Does that make him reckless?
Some say yes. You know, he's talked about like he doesn't have this ability to feel fear in the same way that others do. And his critics would certainly say that it does. You know, he says he doesn't feel fear, but he certainly has expressed some pretty fearful concerns about the potential of AI if it's not properly harnessed. So there is some fear there.
Yes, he does have sort of like a philosophical appreciation for the potential downsides of AI. And early on, when OpenAI was created, and even after ChatGPT came out, he went around warning people that if this technology goes wrong, it could go very, very wrong. But when I'm talking about fear, I'm really talking about the investor's fear, the poker player's fear. He's able to take very, very risky bets that other people would be scared by. One of his associates told me that, you know, Sam's the only person who, if there's even just a 1% chance that something is going to work, but the upside is enormous, will take that bet.
And other people, I think, wouldn't. Let's talk a little bit more about his background here. Went to Stanford, but he didn't finish because he got some offers from tech companies and he began his career instead. But Stanford was really important for him. Can you talk a little bit about that start for him?
So Sam dreamed of studying computer science at Stanford his whole life. This was his absolute goal, but he only got through two years of it. While he was at Stanford, though, he met his co-founders for this startup idea called Loopt, which was kind of like a friend-finder mapping app for the flip phone age.
And it was based on this idea that cell phones were going to become location aware through GPS technology and other things very soon. And he just wanted to figure out something they could do to use this new technological ability. And the idea was good enough that they pitched it at a local business competition for the students. And someone from NEA, one of the biggest and most venerable venture capital firms, heard it and thought, this is a real business. You should really do this.
He ended up inviting him and introducing him to folks at Sprint, which was kind of one of the first meetings he had that made him realize, oh, this is really going to be a business.
So he was able to acquire venture capital backing as a sophomore in college and went to Y Combinator over that summer after sophomore year, which is a startup accelerator, and came out the beginning of junior year and realized, like, I'm going to drop out of school. I'm going to do this thing for real. Yeah.
And that was really the beginning of him sort of getting into this world where he could network and learn about other opportunities and really find mentors. Who do you think from this time in his life was his most significant mentor and what did he get from that experience? Unquestionably, it was the Y Combinator founder, Paul Graham,
who was a sort of a guru, kind of a philosopher king of the tech world at that time. He'd been writing these essays online that served as a beacon for young, mostly guys, who were interested in technology. And he had this idea that, hey, what if we just got a bunch of kids together, gave them a little bit of money, not too much for a summer, and instead of a summer internship, we'll invest like venture capitalists and at the end, you know, kind of send them off into the world to try to raise real money.
And Sam was in the inaugural class of Y Combinator, the inaugural batch, as they call it. And there he met his people. And Paul Graham was really his ultimate person. Paul Graham describes their first meeting, a brief 25-minute interview, after which he thought, okay, this is what a young Bill Gates was like. But Loopt didn't ultimately do that well. Was there a lesson there for Sam Altman?
Absolutely. So in the end, Loopt was solving a problem that people didn't really have. And one aspect of it that many people working on it saw in retrospect was that people thought it was creepy. They didn't really want to be tracked and have people see their location as a blinking dot all the time. Maybe they wanted to know where their friends were, but they didn't necessarily want their friends to know where they were.
So I think it was sort of a basic misunderstanding of human behavior. They were so enthralled by what the technology could do, they didn't stop to think about what humans really needed. So I think that's one key lesson. We're talking with Wall Street Journal reporter Keach Hagey about her new book about the tech leader Sam Altman. We'll be back after a break. I'm Deborah Becker. This is On Point.
Now, Altman, we should say, had co-founded OpenAI with Elon Musk initially as a nonprofit research lab. And the aim here was to create artificial general intelligence or AGI. And that apparently is AI that outperforms humans. Now, this is something that Sam Altman has been interested in for quite some time. Here he is speaking to Bloomberg in 2018.
You know, what does it mean to build something that is more capable than ourselves? Like, what does that say about our humanity? What's that world going to look like? What's our place in that world? How is that going to be equitably shared? How do we make sure that it's not like a handful of people in San Francisco making decisions and reaping all the benefits? Like, I think we have an opportunity that comes along only every couple of centuries to redo the socioeconomic contract and how we include everybody in that and make everybody a winner.
and how we don't destroy ourselves in the process is a huge question. And there are numerous concerns about the power of AI. Gary Marcus is a cognitive scientist and professor emeritus at New York University. Here he is at a Senate hearing in 2023 talking about artificial intelligence. And Sam Altman at the time was sitting right next to him. Fundamentally, these new systems are going to be destabilizing. They can and will create persuasive lies at a scale humanity has never seen before.
Outsiders will use them to affect our elections. Insiders to manipulate our markets and our political systems. Democracy itself is threatened. Chatbots will also clandestinely shape our opinions, potentially exceeding what social media can do. Choices about datasets that AI companies use will have enormous unseen influence. Those who choose the data will make the rules, shaping society in subtle but powerful ways.
So, Keach Hagey, in your book about Sam Altman, you've taken up some of these questions, you know, this potential destabilizing force of artificial intelligence and also Altman's vision that this is going to be inclusive and improve the world. How does Altman propose balancing these things?
Well, earlier on he was really interested in universal basic income and actually funded some studies into how well it works. And in some essays he suggested that this might be one way that we can make the world more inclusive after AGI sort of destabilizes everything.
In more recent years, we haven't heard as much about that. And now Sam is more likely to say that the way that he sort of broadly distributes the benefits of AI is by just making ChatGPT relatively low cost at $20 a month or, you know, sometimes free.
which is a very different kind of equitable distribution. As far as the more terrifying prospect of whether the robots will take over and kill us, they have an AI safety division that is still there and still a core part of what OpenAI does. But after his blip moment, when he was briefly fired,

you've seen some of those forces in retreat at the company. Well, let's talk about that blip moment. Why don't you remind us of that blip moment at OpenAI? And then we'll also get into, you know, the fight with Elon Musk, the very public, ugly fight. But first, Altman was briefly ousted by his own board at OpenAI. Tell us what happened.
So it was one of the craziest business stories I've ever covered in my career, especially because I was writing a book about Sam Altman. I looked down at my phone and there's a headline that he'd been fired. And, you know, OpenAI has a very unique structure in that it is a nonprofit company that sits atop a for-profit company. And the for-profit company has taken a huge investment from Microsoft and, you know, it's
ballooning its valuation every few months. But who's really in charge is this nonprofit board. And it was a nonprofit board that suddenly, stealthily fired him one day in November of 2023. And
It was really the result of a confluence of factors. There had been a long struggle over who was going to be on the board, with different factions. There was Ilya Sutskever, who was also a co-founder of the company and its chief scientist, who was on the board and had lost faith in Sam over some management issues. And there were folks on the board who had watched Sam be, they believed, deceptive to them about some safety issues. Some of them were small, but they felt that Sam was being deliberately deceptive, trying to sort of lie about whether something had gone through safety review. And while the stakes were small then, you could see, as
ChatGPT was getting smarter and smarter through 3.5 and GPT-4, that the stakes were about to become very, very grave. And so through a number of meetings, they all got together and were fed some new information from Mira Murati, who was the CTO at the time, about Sam's management failings.
And they met, and while they were talking, they caught Sam in a lie about one of the board members. Helen Toner had written a paper that appeared to criticize OpenAI's safety record compared to other companies. And Sam basically tried to get her fired off the board for it. They kind of came to an agreement, but in the course of all this, there was a little game of high school lunchroom telephone where, you know, one of the board members said, well, Sam said that this other person said that Helen should be off the board, and that person hadn't said that. And they realized that, you know, he was lying.
And he's since apologized for this lie and said it's true that he did not tell the truth about this. But because they caught him sort of in a live lie while they were talking about whether he's really the right person to lead the company, they decided to fire him. Right. But then they didn't. Well, they did fire him. So they did fire him. For five days, right? Yeah. Okay. But then he came back. Tell us how he came back.
Yeah. So the thing is that, you know, he still had a lot of power, even if not on paper. And all of the investors and the employees and Microsoft had a lot at stake in him running this company, not least because there was a tender offer, a way for the employees to basically liquidate some of their shares and get money, that Sam was putting together, because he's the master fundraiser. And that was clearly going to die if Sam was out. And after a few days, the employees of the company basically mutinied and threatened to quit and go to Microsoft unless the board brought Sam back. And so the board buckled, and Sam was reinstated.

Why do you think these employees weren't as concerned about these deceptions? Were they too small for them to worry about, and they needed to focus on the bigger picture of the fundraising and the other things that Altman brought to the company? Or what do you think was their thinking? I mean, clearly there were people having issues with him, right? So why was there that reversal, do you think?
Well, they had no idea. At the time, I think a big part of the reason why the firing was reversed was that the board, when they fired him, they just said, "Sam was not consistently candid with the board," which is a polite way of saying he lied to the board. But they never said about what. And so for a really long time, no one really knew about what. And part of what I tried to uncover in this book was, well, what did he lie about?
And so the people in the company really didn't know what the board's reasons were. They just knew that the board wasn't saying and they wouldn't answer their questions about why. And that was a huge part of sort of the pushback.
Another person who's questioned Sam Altman's credibility is Elon Musk, and there's been some big, public, nasty questioning going on. Now, these two co-founded OpenAI together. Musk gave a lot of money for this company to go forward. But in 2023, we heard Elon Musk talking about his new feelings about Sam Altman, and he was particularly concerned about what was happening at OpenAI as it attempted to go for-profit, or partly for-profit. This is what he said to CNBC in 2023. It does seem weird that something can be...
a nonprofit open source and somehow transform itself into a for-profit closed source. I mean, this would be like, let's say you funded an organization to save the Amazon rainforest, and instead they became a lumber company and chopped down the forest and sold it for money. So I wonder, Keech, what do you think? Can you tell us a little bit about this relationship with Elon Musk and then the falling out?
Yeah, so Sam and Elon got together over a series of dinners in 2015, the spring of the year OpenAI was created. And it was really the fear of Google that brought them together, the fear that Google was developing AI, and that AI would be locked inside this for-profit corporation that was already too powerful, and that humanity would have no real control
of how AI would go and the profit motive would be driving it. And in fact, their first collaboration was on a letter to the Obama administration saying, "Please regulate AI."
which is kind of a funny place to start. But in the course of these talks, you can see now in the emails that have come out in the lawsuit that Sam suggests, "Hey, why don't we start our own lab? It could be nonprofit. I'll throw in some Y Combinator stuff. You can bring some stuff." And we'll be a counterweight to Google. And this nonprofitness was to try to push back against their fear of the profit motive. And that was all fine until
They did some breakthroughs technologically that made it clear they were going to need a lot more money than they thought.
Because what really worked, the kind of AI that ended up really working, required huge amounts of data and huge amounts of computing power. And they really weren't going to be able to raise that just from donations. And there was a big power struggle at the company around this time, this is like 2017, 2018. At one point, even Elon Musk suggested, okay, maybe it needs to be for-profit and become part of Tesla. And he wanted to basically control it. He wanted to be the biggest shareholder, he wanted to be the leader of it. And the other co-founders didn't want that, not just Sam, but also Ilya Sutskever and Greg Brockman, another key co-founder from the beginning. And Sam won the power struggle, and Elon basically took his ball and went home. That was in 2018. He left the board. He had promised a billion dollars, but over the years he actually gave about $50 million.
He stopped funding them, and it was pretty quiet for many years, until ChatGPT came out. And then it was the success of ChatGPT that sort of prompted this public questioning: what are you talking about? Why is it really okay that you turned this thing into essentially a for-profit enterprise, against our initial mission? And around that same time, Elon Musk, of course, founded his own AI company, xAI, a competitor to OpenAI.
And it was in the cauldron of all that that the lawsuits started to fly. So what would Altman say, though? You know, I get it, he needed more money than he thought he would, so perhaps he did have to partner with a for-profit entity to be able to go forward. But what about this idea: is there some sort of inherent conflict in going with a for-profit source for what was supposed to be this research lab, developing a technology that's going to be world-changing and very lucrative, right?
Yeah. I mean, Sam acknowledges that it is like an awkward transition. And their explanation is exactly what you said. You know, we didn't foresee that we'd need this much money. The only way to get this much money is to go for profit. So this is just what we had to do in order to actually bring forth AGI, which is our goal. There have been some accusations in the lawsuit that the nonprofit idea was some kind of a ruse.
And just from my reporting from the book, I don't think it was a ruse. I think it was something that, in retrospect, they regret because they haven't been able to get out of it. They've now been trying to basically wiggle out of it and have been blocked so far by the attorneys general of California and Delaware and other entities. I think it was just kind of a weird idea that, in retrospect, they regret.
I wonder, Keech, could you say a little bit more about this idea of having a lab? I mean, it sounds like an interesting idea to kind of explore where AI is going. Is there anyone else doing this type of thing, this sort of, you know, really exploring all of the issues with AI and using resources devoted to that? Or was that something that was pretty exclusive to OpenAI?
Well, it was happening inside Google at DeepMind. So DeepMind was an AI startup that Google purchased shortly after it had this extraordinary demo of playing old Atari games. You could watch the bot teach itself how to play Atari without being explicitly programmed to.
So it was DeepMind inside Google that OpenAI was sort of founded to counter. And that's still going on. Google is still a major player. Of course, you have Elon's xAI, and you have a faction of folks inside OpenAI who broke away to create Anthropic, which actually has a very similar business model to OpenAI, although they've sort of branded themselves as safer and more careful. So there are quite a few players in this space.

A lot of competition, a lot, a lot of money, really world-changing technology here. And as you mentioned, you know, Sam Altman has had some difficulties in terms of deceptions, perhaps some recklessness. Do you think he can be trusted?
I think that there are people who have dealt with him who feel betrayed by him. And these are personal relationships. In my own experience, he's been a trustworthy person in my interactions with him. But broadly speaking, the question really is, can capitalism be trusted with this, right? And right now, OpenAI remains a private company. But down the road, you know, they've raised so much money, there has to be some exit, right? If you raise money from investors, you basically have to go public at some point.
And I think it's a huge question, what would a public company look like with so much power over the future of AI if OpenAI does indeed remain at the forefront as it is today? Because those quarterly earnings calls and incentives are pretty challenging for this very, very long-term technology. How would you describe his overall vision, Sam Altman's overall vision right now?
He has a very broad vision that extends beyond just developing AI. He's described a world in which AI flows like electricity or like water. And a lot of his emphasis right now is about building out the infrastructure to make this possible. Because he's been quite candid that really, AI is not going to work or take over until it's very cheap. And right now it's not cheap, like none of these AI companies make money, right? They're all losing tons of money because it's extremely expensive to do what they're doing.
And he's been really working on the world stage with President Trump, and he's been doing a deal in Abu Dhabi to invest in huge data centers and chips to basically drive down the cost of AI, so that it can be basically like water or electricity. We're talking today with reporter Keach Hagey. Her new biography is about OpenAI CEO Sam Altman. We'll continue talking after a break. I'm Deborah Becker. This is On Point.
What do you think is his reputation in the tech world? How do other key leaders view him?
They see him as a fixer, as someone who you can reach out to who will solve your problem or do a favor for you or help you get an investor. Everyone has their little story of how Sam has helped them out or, in some cases, gone to war against them and pulled strings and made things difficult. He often was in that role as the president of Y Combinator. He had to sort of be a cop, as he said it, in the world between
the venture capitalists and the startup founders. So he has personal dealings with many, many people. He responds to texts almost immediately. He's very open. Like many people have his cell phone number. He has many, many, many, many relationships across the industry. And people do fear him a little bit or sometimes a lot because he has this ability to make their lives difficult if he wanted to. Mm-hmm.
Does he believe that the potential of AI can be very frightening if it's not regulated properly? How does he describe that? There's been a lot of talk about that.
Yes, he has even testified before Congress and said that if this technology goes wrong, it could go very, very wrong. Although we've seen him pivot a little bit on that in recent years. So back then, right after ChatGPT came out, I think the world was experiencing this sort of broad collective belief that maybe AGI could actually be imminent.
He very much was asking for there to be more regulation, asking for there to be something like an IAEA of AI, some global entity that would police and patrol and make sure that it doesn't do unsafe things.
More recently, we've seen him say that, well, we don't really need regulation, that the industry can self-regulate. And now that there's more money at stake, it's sort of understandable maybe why he would say that. So his message has changed a little bit. Yeah. We have a clip of him speaking in May to a Senate Commerce Committee hearing about AI. And it's interesting.
It's, I guess I would call it moderate. Here's what he said. Let's listen. To continue that leadership position and the influence that comes with that and all of the incredible benefits of the world using American technology products and services, the things that my colleagues have spoken about here
the need to win in infrastructure, sensible regulation that does not slow us down, the sort of spirit of innovation and entrepreneurship that I think is a uniquely American thing in the world. None of this is rocket science. We just need to keep doing the things that have worked for so long and not make a silly mistake. Not make a silly mistake, and none of this is rocket science. It doesn't sound as if he's very worried. Would you agree?
Yeah, I do think that after his firing, the talk about how scary AI might be one day really went away.
And in part that's because the faction that came after him was tied to this philosophy called effective altruism that has made the fear of existential risk from AI sort of the centerpiece of its work or, you know, one of them. And I think he kind of agreed with them,
many, many years ago. He was friendly with them. He read books that overlap with their ideas in the early years of OpenAI. And after he got fired, you just don't hear that much about that anymore. And a lot of the EA-related people left the company. So, the idea of effective altruism, because we've heard about it from others in the tech world, the crypto world, right? This has been a philosophy in this world. Just explain it a little bit more so people understand what it is.
Yeah, so effective altruism is basically like a data-driven cousin of utilitarianism that tries to use, you know, rationality and data and logic to decide how to do the most good
and be the most ethical but be kind of dispassionate about what that is. So, you know, very famously they had this whole idea of earning to give, meaning that it makes more sense for a young idealistic person to go join a hedge fund and donate money to charities rather than go be a doctor in Africa and try to save starving people.
because you could just save more lives, you know, in a spreadsheet, basically, with your hedge fund money. And of course... Can I just say, is that a justification for being greedy? Sorry. Some would say that. And of course, the most famous person who did this was Sam Bankman-Fried. Right. Right. And the whole FTX company was sort of built by these EAs for this purpose. And that all, you know,
crumbled in spectacular fashion and I think really tarnished the brand of EA. So it's kind of fascinating. You will rarely find someone that will say, oh, yes, I'm an EA. They'll say, oh, I believe in EA ideas or, yes, they have a lot of good ideas, but I'm not one. I think that...
our friend SBF has a lot to do with that. But they're still incredibly influential, and there is a ton of money behind it. There are all these nonprofit organizations and just tens of millions of dollars, hundreds of millions of dollars, that are funding a whole network of entities that sort of push forth this idea. And one of the ideas in EA is that it is our moral responsibility to save the lives of people who have not yet been born,
by making sure that AI does not, you know, wipe us out. And this was a philosophy of Sam Altman, but he seems to have toned it down after some bitter experiences with the folks that he worked with.
Yeah, I would never describe him as an EA, but you could hear him sort of doing some of the EA dog whistles in his blog posts from 10 years ago because, interestingly, the people who are most interested in AI were often the people who were most concerned about its existential risks.
in the early days. So there was this book by a Swedish philosopher, Nick Bostrom, called "Superintelligence" that was published in 2014 that both Elon Musk and Sam Altman read and posted about. And this was kind of the intellectual basis that OpenAI grew out of. And in the book, he talks about the possibility of AGI, but also the real risks
that it poses to humanity. So both the enormous upside and the enormous downside. So let's talk about sort of political power here. It's my understanding that Sam Altman was at President Trump's inauguration back in January, not right up front with some of the other tech leaders, but there nonetheless. You mentioned some of the deals
he's making with President Trump. Of course, he's had a past relationship with Elon Musk. Does he have political ambitions? And how does the tech power translate into political power for Sam Altman?
Yes, he does have political ambitions and he's had them his whole life. He explored running for governor of California and he even talked to some friends about wanting to run for president around that time. And he's always been, you know, very politically engaged. He, you know, has his own political platform that he put online when he was still at Y Combinator. And
I think it's interesting to see how OpenAI has actually gotten him into the room where it happened in a way that his more direct political efforts did not. After ChatGPT was launched, he went on this world tour shaking hands and taking photos with presidents and prime ministers all around the world. And he could not have been more in the room where it happened. And today we're seeing him
pursue these deals with President Trump, even though, you know, historically he has not been a pro-Trump person at all. He was very critical of Trump around the time of his first election. But on the first full day of the Trump presidency, there was Sam Altman at the podium with President Trump announcing a $500 billion, you know,
a humongous Stargate AI infrastructure plan that would, again, bring this idea of AI running like electricity or water to fruition. And so then how are the conversations about regulating AI affected here? And what's happening in terms of Sam Altman's role in that area?
It's been really fascinating to see this. OpenAI just recently did a deal to basically bring AI infrastructure to Abu Dhabi. And during the Biden administration, that was something that Sam had been trying to push for. And the Biden administration just thought it wasn't safe to bring the most advanced kind of chips to the Middle East because of the historic relationships with China. There was basically a fear that like the Chinese would get these chips.
if they were brought there. The Trump administration, like, just greenlit it all. So we see just a more relaxed approach to chips. We also see in the Big Beautiful Bill, there's this provision from the House side of it to put a 10-year moratorium on states doing AI regulation,
which is an extraordinary giveaway to the tech industry. I've really never quite seen anything like it. We'll see if it actually ends up, you know, making it through the Senate. But that's something that would have been unthinkable in the previous administration. Right. I hadn't heard that before. I didn't realize it was in the bill. So I guess, you know, what do you think is next for him? Do you think that he'll continue along this
same path, working to make sure that, you know, OpenAI gets as big a slice of the AI pie here as it can, and that he's really a big force? Yes, I do think that we will see more and more connection between OpenAI and government. I mean, they just announced their first Defense Department contract, which is a pretty wild thing for a Silicon Valley company that young to do.
And more and more, this is going to be a government story because this is really about infrastructure. And it's so expensive that at some level there needs to be sort of government level policy behind it. So I think he'll be spending more and more time doing that kind of work. And he's already seen kind of extraordinary success in matching his dealmaking skills with the sort of dealmaking ethos of President Trump.
And of course, this also goes back to his background in terms of a public-private partnership, right? I mean, this is what his father did. And government needs to be involved in things that are going to be this transformative, right?
Yeah, I really do see the shadow of his late father in this work. You know, his late father was ultimately a real estate developer, but worked in affordable housing for many years and was constantly pioneering these ways for the private and public sectors to work together.
both as sort of an ingenious banking mind and as a really idealistic person. So I feel that is imprinted on Sam, and he's always trying to kind of cook up some new way for the public and private to work together. And he has wanted, from almost the beginning of OpenAI, for the government to be the backer.
They went to the government in the early years when they were casting about for more backers and asked the government to back them. At that point, the government said no. They didn't really have technology to show for it. So he doesn't blame the government for it. But I do think that's kind of the long-term vision here. And what's the lesson, do you think? The lesson of Sam Altman, the lesson of OpenAI? What does this say about where we are in terms of
of a tech world, a tech aristocracy, as some folks say? Because, you know, reading your book, it seems as if you gave us a look inside this world of very privileged, smart, young, ambitious men who are really, really running things. And it's a pretty exclusive club.
Yeah. So I think one lesson I took away from this book is that the AI era, yes, will be defined by brilliant AI scientists, but Sam is not an AI scientist. He is a money guy. He is a fundraiser. He is an investor. He's a venture capitalist. He's a storyteller and a salesman. And
The form of AI that has proven to be the breakthrough of our time is one that requires enormous piles of money, and he is the man for that moment. But is it? Does it run the risk of...
of really sort of benefiting a few and not really benefiting all, as Sam Altman has said? You know, is it really going to, what can be done to make sure that it's used properly?
I think it's an excellent question and I am personally troubled by how the talk of sharing it equitably has kind of fallen away as the truth of the technology has emerged. I think it really does threaten to concentrate wealth even more than it already has.
And right now, while everyone from the Catholic Church to governments has sort of wrung their hands about the labor implications, I don't see any signs of us having the tools to stop it from having really disruptive effects on the labor market. Even though we have all these deals with government and we're working with government, it just seems like it would be very difficult to try to rein this in.
Absolutely. I mean, I think that is sort of part of the logic of this. You know, OpenAI was founded with this idea that they didn't want to create an arms race, an AI arms race. But when they released ChatGPT, that's exactly what they did.
And they forced all the other companies that were kind of holding back to step forward and release it. Google had this technology, but it did not think it was a good idea to release it. And now we are in a world where they are all furiously competing with each other. We just saw Meta try to poach all the other companies' top AI talent
recently. And the numbers there are just extraordinary, right? There's an absolute arms race both for talent and for chips and all of these things. So those pressures feel like they are more overwhelming than any kind of brakes around labor concerns or environmental concerns that could be placed on it. Do you know if Sam Altman's read your book?
He told me he wasn't going to read it. And I understand that, right? It's hard to, like, watch clips of TV things that I've been on. So yeah, I expect he won't. So no reaction from the Sam Altman camp about your book just yet? Well, I mean, he did tweet early on, he publicly told the world that he did, you know, sort of
participate in this book, and it was one of two books that he participated in. So he gave it his blessing in that way. All right. Well, Keach Hagey is a reporter covering the intersection of media and technology at The Wall Street Journal. Her new book is titled The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future. Keach, thanks so much for being with us today. Thank you. I'm Debra Becker. This is On Point.