What's up, everybody? My name is Demetri Kofinas, and you're listening to Hidden Forces, a podcast that inspires investors, entrepreneurs, and everyday citizens to challenge consensus narratives and learn how to think critically about the systems of power shaping our world.
My guest in this episode is Karen Hao, an award-winning journalist and the author of Empire of AI, an important new book that tells the eye-opening story of arguably the most fateful technological arms race in history.
While the book tells the inside story of OpenAI, it is at its heart a meditation on power that reveals how an idealistic nonprofit devoted to the safe development of artificial intelligence morphed into one of the most valuable private companies in the world.
Today's conversation takes you inside that transformation from the heady idealism of OpenAI's founding through the multi-billion dollar Microsoft deal and the 2023 boardroom coup to the unresolved questions that hang over Silicon Valley and Washington alike about the private accumulation of immense sources of power around the deployment of this technology and the nature of the world we are building.
Whether you're an investor, policymaker, or simply a concerned citizen trying to make sense of the headlines, this episode will equip you with the context you need to understand what's really at stake in the race to build AGI and what levers we still have to steer it.
If you want access to all of our premium content or you want to learn more about the Hidden Forces Genius Community and how to attend one of our many virtual Q&As, in-person events, and dinners, you can do that at hiddenforces.io slash subscribe. And if you still have questions, feel free to send an email to info at hiddenforces.io and I or someone from our team will get right back to you.
And with that, please enjoy this incredibly important and timely conversation with my guest, Karen Hao. Karen Hao, welcome to Hidden Forces. Thank you so much for having me, Demetri. I'm so excited to have you on, Karen. I told you that this episode, I think, follows nicely on our previous episode with Patrick McGee.
Even though that episode was focused on China and Apple, there's a tech focus here obviously, but there is also a China angle as well, which hopefully we can get into with respect to DeepSeek and your observations in the book about the scaling hypothesis and what is the best, or optimal, way to scale these systems that we're going to talk about today. But before we do that, well, first of all, congratulations. Thank you. On the launch of your new book. Yeah. How does it feel?
Both stressful and relieving. I don't know. It's like an interesting tension. I'm excited to get it out in the world. And also, yeah, then it goes out and there's the reaction, and it's like watching your baby fly out of the nest. Yeah, I'm envious. I interviewed so many authors and I feel like, well, I'm not envious of the process. I feel like the process would be draining and difficult. And I'm always amazed that people can have a full-time job and also write a book.
Maybe it's easier for journalists because you've got a network of editors and the experience of writing it, but then you've also got all these crazy deadlines. So anyway, it's quite a feat. I'm going to mention the name of the book a number of times, but it's Empire of AI. Again, we'll get into that. Before we do, tell me a little bit about you. How did you get your start in journalism? Did you start with an interest in journalism? Did you come in through the technology side and you ended up just writing about the stuff that you were already interested in?
What's your story? Yeah. So I never thought that I was going to be a journalist. I studied engineering and specifically mechanical engineering at MIT for undergrad. And when I graduated, I went straight into the tech industry, moved to San Francisco, started working on a startup. And I thought that that was going to be my career. That's what I had studied. That's what I was set out to do. But I very quickly within my first year at this startup, I
realized how Silicon Valley works. And what I mean by that is the startup very quickly imploded because it didn't have a business plan, even though it had this incredible mission and it was a very mission-driven group of people. And it was focused on problems that I really cared about, which was building technology to foster sustainability.
And it suddenly dawned on me that a lot of the problems that I was most interested in and the kinds of technology I was most interested in building most likely would not have a business plan and therefore could not survive in Silicon Valley. And so I then started thinking, what should I do? Like, I studied engineering for four years in order to do this specific thing. And now I don't really see myself
having a long-term future in this industry. And kind of on a whim, honestly, I decided to go into journalism in part because I also really love writing. And I didn't know anything about journalism other than that it was an ability to use writing as a tool for social change. And originally, I'd gone into technology to use technology as a tool for social change. And so that was kind of my naive analysis that
led me to go into the industry. And then, kind of through getting my feet wet in journalism, I initially thought I was getting shoehorned into technology reporting, because I was actually not that interested in reporting on technology. I wanted to report on the environment. My cause was very much sustainability. But over time, I realized that reporting on the technology industry was just this really great way to dig deeper
into kind of what I saw as the misaligned incentives within the most dominant technology making power there is today. And really use that as a way to explore what's sort of gone wrong with why we can no longer seem to make technologies in the public interest and try and figure out how we can actually get back to a place like that.
You know, this is so clarifying. I did not know this about you. I knew that you studied mechanical engineering, but I didn't know all these other things, including your interest in
I mean, it's obvious that you have an interest in sustainability from reading your book, but I didn't realize that you had one coming into it. This makes so much sense. This is very clarifying. Because this is also, in some sense, I mean, it isn't a philosophical exploration of the questions that you just raised, but there are kind of undertones of them, and certainly the public interest and what is in the public interest and what are we building and how are the tools we're building aligning with that larger interest kind of runs through the book.
So, I think I can imagine in your future a book where you actually maybe delve into some of these things more concretely. So, tell me a little bit about how you got interested in OpenAI, because that's what the book is about. Again, you do explore these larger themes within it, but what was the origin of your relationship to the company and to Sam Altman, its CEO?
Yeah, so I started covering OpenAI in 2019. And at the time, I was an AI reporter at MIT Technology Review. And MIT Technology Review is a publication that is very focused on emerging technologies that are extremely cutting edge, not really... Like we used to say, by the time the technology has some kind of commercial application, it's too late for us.
And so I was covering the bleeding edge research that was coming out of academia, coming out of corporate research labs. And in that context,
OpenAI was a really interesting organization at the time because it had been founded specifically as a research nonprofit by big names, Elon Musk and Sam Altman. But they were focused not on commercialization. They were focused on those kind of early stage ideas of how do you develop AI to be more advanced than the kind of techniques that we were already seeing there.
And so I developed an interest initially because of that, because they were kind of squarely within what I was focused on and what MIT Technology Review was focused on. But I also came to realize that they were very rapidly building influence as an organization, not just within the AI research world, but also within the tech industry and within the policy world. The organization from very early on
kind of was already quite savvy in understanding that it needed to position itself and cultivate relationships with people in power across different spheres, both DC and Silicon Valley. And so I saw that they were sort of beginning to influence
not only the direction of research that was happening within the AI world, because every time they made a research release, they would kind of do this big splash with a lot of marketing and a lot of communications, and you could see other researchers. It would sort of grab the attention of other researchers, and it was slowly starting to shift the way that other researchers were doing AI research. But it also had an outsized influence on the way that policymakers and therefore the public was starting to talk about AI, because
It was doing that kind of legwork to shape the public narrative early. And so I decided to
profile the company at the suggestion of an editor of mine at MIT Tech Review, where I was like, this organization is really interesting. He was like, why don't you just go and ask them for interviews, see if you can profile them. And so I ended up embedding within the company in the summer of 2019 for three days, just trying to understand what is this organization about when they say that they are
just doing nonprofit research for the good of humanity. Like, how are they doing that? How do they see themselves? Why are they spreading out their tentacles in all of these different directions? And very quickly over the course of those three days and also many other interviews afterwards, I realized that everything that they were saying seemed to be a facade.
and that the organization, it had already transitioned into having a for-profit arm within the nonprofit. It seemed to be on the cusp of starting to engage in commercial endeavors. It said that it was a highly transparent organization, but I found that it was incredibly secretive. And it said it was a very collaborative organization, and I found that it was incredibly competitive. And so
There was this disjointedness that I was starting to discover, and I ended up writing a profile kind of just calling that out. It seems like this organization that has kind of accumulated a significant amount of goodwill and also a significant amount of capital based on these altruistic notions that it is going to be a bastion of ethical AI research is...
all just that. It's just storytelling. It's really effective storytelling that is not actually reflective of what's happening beneath the surface.
My relationship with the company after I published that profile immediately soured. They were extremely unhappy with my reporting. And ever since then, it's been a very tenuous relationship where over time I've continued to report on the company. I've continued to be critical of the company because I think a lot of the things that I said in my initial profile ended up being exactly right. And they have continued to be really frustrated by my work.
So, I feel like there are competing sets of incentives within the company, or objectives and cultural threads as well. Obviously, there's the accelerationist narrative that grows over time. There's also the influence of EA, effective altruism, which isn't necessarily compatible. And there's also Sam Altman and his personality. And so much of what you're describing here seems to really be a description, or it seems to align very closely with multiple descriptions, not just from your book, but other descriptions I've also gotten of Sam. How much is what you're describing, and the ethos of OpenAI and where it is today, really a reflection of his own political goals and ambitions?
Yeah, it's a great question because I think there's kind of two ways to answer it. One is, yes, we are talking about Sam, and OpenAI is very much an embodiment of him, which is a conclusion that I sort of came to after doing reporting for my book: there's so much about
that company and that organization that really is just a direct reflection of kind of the way that he operates in the world, which is very much like he is just a phenomenal storyteller. He is like a once in a generation storyteller. This is why he's also a once in a generation fundraising talent.
He is able to create these sweeping visions of the future that you really want to be a part of and that you want to give lots of money to. And he also figures out how to frame things in a way that if you
are a part of it, you will be enriched greatly. And I think the other thing that I've sort of come to the conclusion about over time is that he also has a very loose relationship with the truth. And that's part of his superpower when it comes to storytelling is if you can kind of divorce yourself a little bit from the truth to tell whatever story you need to tell, you're going to be incredibly effective. Like that's a phenomenal weapon.
The other way to answer this question is that Sam Altman is also the perfect product of Silicon Valley. And so like, is OpenAI just a manifestation of him, the man, or is it also just a manifestation of the system? And he is sort of like the ultimate product of that system. And I think it's a little bit of both in that like,
The tech industry, Silicon Valley, has been on this trajectory over the last two decades where it really does index on storytelling. And it really does index on using narrative as a way to accumulate wealth and power. And so he is one of the most effective at doing that. But I don't think he's uniquely effective.
He's not like an island. He exists in his context and he is a product of that context, and therefore OpenAI is an extension of it as well. So maybe this is a good time to talk about origin stories, because the book begins actually with the 2015 meeting that happens with Sam Altman, Elon Musk, and some other folks that are part of the original team. How did the company get started? And how was that relevant to the telling of this book?
Yeah, so OpenAI was originally started as an idea that Sam Altman had, and he approached Elon Musk to essentially get a big name attached to the organization. And the way that he approached Musk was by kind of appealing to two specific things that Musk had become deeply obsessed with during that particular time in 2015. So Musk was really starting to think a lot about AI as a technology for power and control. And he was particularly concerned about the fact that, one, he was not the one that was accumulating that power and control, but two, that it was happening under a for-profit entity, because Google and DeepMind
at the time had essentially really effectively created a monopoly on the best AI research talent in the world. And so he was worried that therefore if they monopolize talent, they can monopolize the technology and the technology is going to be developing and evolving under this for-profit framework. And so Sam Altman tells Musk like,
Like, I totally agree with you that this is a really deep concern and we shouldn't allow Google to do this. And it seems to me that the best way to counter Google would be to just create another lab that competes with them, but on totally opposite ideals. We're going to be a nonprofit. We're going to be transparent. We're going to be completely open and share all of our research to the world.
And so Musk really loves this idea. And Sam Altman then organizes this dinner with Elon Musk there with a bunch of AI researchers that ultimately become kind of the leaders of OpenAI. And the book opens with that dinner and the kind of discussions that they were having at the time where they were really...
For the time, they were talking in these very outlandish and very highfalutin ways about AI, that they were going to shape the future of humanity through shaping the future of this technology.
And part of the reason why a lot of the researchers actually even came to that dinner was because they heard Elon Musk was coming. So this kind of also gets to Sam Altman's very strategic ability to kind of bring the right people into the room such that he's able to get them focused on, like, a particular goal. And how important was just Musk's brand name?
in terms of, let's say, Sam's vision for fundraising and building talent at the company? How much was he focused on actually bringing in someone like that in order to get the flywheel going at the company? I think it was a key part of the strategy because...
Altman kind of identified early on that in order for this to work, he would need to get some really top talent AI researchers and he would need to compete with the fact that they were being paid millions of dollars at Google and he was not necessarily going to have that kind of budget.
And so he was cognizant of, like, if I can't compete on money, I need to compete on something else, like some kind of reputation. I need some kind of draw. And he had identified specifically Ilya Sutskever, who ended up becoming the chief scientist at OpenAI, as the main person that he wanted to sway to get to OpenAI, because
Ilya had a lot of cachet within the AI research world. So if he could get Ilya over, that would then be the reputational draw for other AI researchers to come in. But in order to hook in Ilya, Musk was like the key to that. And Ilya at the time specifically went to the dinner because he heard Elon Musk was attending. And so it really was like a linchpin of the strategy to kind of bring Musk in to then gather all these people and then to be able to fundraise off of all of their names and reputations.
Just another quick question, not to take us too far afield, but back then, so this is 2015, right, was the dinner? Yeah. That was the same year, if I'm not mistaken, that Nick Bostrom published Superintelligence? I think he published it in 2014, but yeah, it was around that time. You might be right. I read it in 2015, so you might be right. And this is basically around the time that I was introduced to the subject, beyond Terminator 2. And the existential risk thread was sort of the dominant thread through which I was introduced to it. And it seemed like this was
what a lot of the conversations were about. How important was that also, the fact that Musk was talking about it, Musk was interested in it, he was getting influenced by these ideas, Ilya was also sort of focused on the risk component. How big was that here?
Yeah. The founding of OpenAI in terms of where the focus was. Yeah, it was a really core part of OpenAI's founding DNA. As you said, Musk was really obsessed with this idea that AI could somehow kill humanity. It could destroy all of humanity. And in this kind of brand of existential risk, the fear that people with this belief have is specifically that it will destroy humanity beyond its regeneration. That that is, like, the end, end, end, end.
And he had read Nick Bostrom's book. He really loved the ideas in it. He was actually so obsessed with this idea that he thought about writing his own book about existential risk. And about half, maybe more than half, of the researchers that ended up coming to the dinner were all kind of
similarly minded in that they were all deeply concerned and unsettled by the notion that AI could become super powerful and fall into the wrong hands. And so that was like an impetus for, it's not just Google that we're worried about, it's just any evil corporation or government we're worried about. And the only way to solve this problem is for us, the goodies, to actually do it first and
And so that was ultimately what led to the formation of this organization that then became a company. Yeah, again, I want to bring us back to the origin story and how things progressed from that initial meeting with my follow-up question. But I just want to make the observation that this is really, this is important because
So much of the early messaging coming out of OpenAI was focused on existential risk and this sort of high-minded, in fact, isn't it right there in the sort of, not the tagline, but the mission statement of the company about we want to build AI that benefits all of humanity? And yet what we have seen in the last few years has been a move away from that and this sort of broader push for commercialization and also a framing within the paradigm of the US-China strategic competition that we need to build...
AI in the Western world before China gets it, which is really a complete flip of the narrative that we were living with for the last however many years.
Yeah, it is completely a flip of the narrative. But what's interesting is, I mean, they have not moved away from their mission. The mission stays the same: to ensure AGI, artificial general intelligence, benefits all of humanity. And one of the things that I talk about throughout the book is how this particular mission statement is so vague that, effectively, whoever wants to interpret it however they want gets to do so. And so...
That's why you're able to see this organization just do a 180, but continue to argue in the public sphere that they are continuing to uphold their mission in exactly the way that they're supposed to. Because there's no good definition of what ensure means in this phrase. There's no good definition of what AGI means. There's no good definition of what benefits humanity means. Like,
And so they can just interpret it, reinterpret it, redirect it, however they want to do whatever ultimately is the most effective way to continue accumulating power. Because when you actually just look at what they're doing at face value, that is really ultimately the single clearest thing that they have accomplished. All right. So we'll get more into that as well. But let's bring us back to the dinner and to the early founding. Yeah.
So how did things progress from here? I mean, eventually Elon left the company. I think you started to delve into that. And of course, OpenAI moved from a nonprofit into a quasi for-profit. I mean, I forget the particular name. They had a unique name that they used for that. Capped profit. A for-profit limited partnership. Yeah, capped profit, which was the first I ever heard of something like that. I don't know if they innovated it. Yeah, they invented it. Yeah. They invented that. So walk me through that process from that 2015 dinner to maybe up to the Microsoft deal and through that.
Yeah. So when they were founded as a nonprofit, basically a bunch of different backers pledged to give a billion dollars to the organization. And by pledge, they just signed like a statement saying that they would be willing to contribute that much. And it was Musk's idea at the time that he wanted the billion dollar figure because he wanted the announcement to be splashy and he wanted to, quote unquote, not sound hopeless.
against Google and DeepMind and the budget that they would have. And so he said, like, everyone else chip in, and whatever we fall short, I will then fill in the rest. And then in 2017, 2018, the organization at that point had, I mean, early days, they were very aimless, you could say. Like, they were an extremely, like, talent-dense group of people that had an extraordinary amount of runway, or so they were told.
But they didn't really have any like articulated vision of what they were actually doing. And so they were kind of just throwing spaghetti at the wall and there was like not really that much interesting stuff coming out of the organization. And so in 2017, in part because Musk was starting to get quite frustrated about Google and DeepMind making rapid progress and AlphaGo coming out and making all of these international headlines and just clearly bringing lots of love and adulation to DeepMind,
Musk started putting a lot of pressure on the organization, like, we need to figure out an actual plan. And so in 2017, 2018, Greg Brockman and Ilya Sutskever, the CTO and chief scientist, sat down and started figuring out what it would actually take to try and make rapid AI progress such that OpenAI becomes the number one leader.
And around the same time, there were other researchers within the organization that were kind of asking a similar question, but from a different angle, which was what has been the pace of AI progress thus far and how much computational resources have been required to keep up that pace. And essentially through the strategic conversations and the research that was happening, they came to this conclusion
that the way to accelerate their progress was simply to try and build the biggest supercomputers possible. Because if they could own the largest supercomputers in the world, then they could train the largest AI models in the world and therefore kind of brute force their way to being number one quickly. And that is when they realized,
A billion dollars is not even enough. We need way more money to build the largest supercomputers in the world. So quick interjection for clarification. At what point did the view emerge that computational power and training on ever more data and brute-forcing it, as you said, was the way to actually scale the fastest and to sort of achieve, quote, AGI?
Which, again, is a fuzzy benchmark. We can get into that as well. But when did that view emerge? So that was a view that had been floating around in the AI field as one extreme in a range of different opinions about how to make AI progress.
So the other end of the extreme was this idea that whatever is going to get us to more AI progress doesn't exist yet. Like, we need new research, fundamental research. We need new breakthroughs, new ideas to actually reach that progress. And then the computational-resource view was: we actually already have all the ideas.
And it's just a matter of throwing an insane amount of computational horsepower behind it. And it so happens that Ilya Sutskever was of that extreme camp. And so when he became chief scientist of OpenAI, he brought his particular belief into
into the organization and started orchestrating its research around that belief. And so when all of the researchers and Greg Brockman and he sat down to think, how do we actually get to number one as quickly as possible? He came already with this philosophy of
well, the best way to do it is through computational resources. And therefore, let's just go, go, go and try and buy the biggest possible computer we can get our hands on. You know, like, one of the things that emerges in the book is that obviously this was convenient for certain folks because it allowed them to build, in their mind at least, a sort of moat around the company and around the project. It doesn't seem like that would have necessarily been Ilya's thinking. Do you have any sense of sort of what convinced him so early on that this was the way forward? I think throughout OpenAI's history, what's interesting is different people have totally different objectives that kind of then end up coalescing around certain directions because they end up aligning in that way. So Ilya, he's very scientifically motivated. He is just really fascinated by what would happen if...
Like from a research perspective. And in his own career, at that point, he was already considered an AI luminary. He had already been part of a number of breakthroughs, but the most important one had happened in 2012, which actually kicked off the entire deep learning revolution. And that was when he was a grad student studying under Geoffrey Hinton, who recently won a Nobel Prize in part for this work,
where they used deep learning to do image recognition at an academic contest called ImageNet. And they showed in 2012 that their deep learning system was able to dramatically outperform any of the other techniques that were not deep learning. And it was like the first time in that contest where there had just been a step change in progress in recognizing
what's in an image. And that specific breakthrough and other breakthroughs that Ilya was part of had in part been aided by scale.
Scale was not the only thing that they were aided by. There were also new techniques that were being deployed. There were more data curation methods, things like that. But for Ilya, I think he just became really convinced that scale was the most important factor in all the breakthroughs that he had been a part of. And he's talked before about, like, there's sort of this
One thing that's really interesting when listening to Ilya talk about deep learning and his particular philosophy is he does describe it as a belief. He doesn't describe it as any kind of, like, observational science. It's like he has a kind of religious belief in the idea that scaling deep learning will create magical things. And he puts up this chart sometimes where he'll talk about, like, the size of human brains being the largest brains of
all animals. And if you generally map out like animals in the animal kingdom along a trajectory of both intelligence and brain size, you see that there's sort of a linear relationship with brain size and intelligence.
And he also has this belief that neural networks, the software that underpins modern AI systems today, are an effective approximation of brains. Not everyone believes this or agrees with this, but like he just has like, that is his religion. Like he believes that neural networks have effectively...
figured out how to replicate brains. And therefore, if you put those two ideas together, you just have to scale. You just need the biggest brain possible to get the most intelligence. So that's essentially why OpenAI set down the path, from his end, like, it got his sign-off. But to your point, there were many other reasons that other executives signed off. Like, they weren't necessarily interested in the science. They were interested in the competitive moat that it would bring OpenAI. If you can get the biggest supercomputer, then that means other people are not going to be building this technology that you think is going to accrue you a lot of power. And so it creates, like, a really effective way to solidify your advantage. Yeah.
Yeah, I remember coming across the work of someone named Giulio Tononi, who proposed this emergent theory of consciousness back in the early 2000s called integrated information theory. It was one of a number of theories being bandied about in various transhumanist and
Kurzweilian thought circles. And the idea was essentially that the more connections you have, the more integrated the information is, and ergo, the more consciousness whatever thing you are examining has. And it feels very much like this was the same sort of connectionism idea being proposed here. And I love how you point out that this scaling law isn't some kind of testable hypothesis, that it's more of a religious belief.
And that this field in general, AI, the commercialization of AI, even that term commercialization doesn't quite fit the rhetoric used to describe both the opportunity and the responsibility to broader humanity associated with the development of this technology. And I feel like many of the people within this field are also grappling with some very unfamiliar questions, not
engineering questions per se, but teleological questions like why are we building this technology and who are we building it for that fall outside the scope of traditional discussions about product market fit and audience targeting. And maybe what's missing in this equation is that the public isn't represented as a stakeholder in this process the way he or she was during the development of the atom bomb
or the Gemini and Apollo missions. Does that resonate with you?
Yeah, I think that's exactly right. And I think there are kind of two reasons that I feel like this has happened. One is what you're describing, that there isn't really, like, a clearly articulated idea of what we should be building this technology for or who we should be building this technology for. And I think that ties a lot into some of the ideas that you've talked about in the past, of just, like, the kind of moral decline where people are doing things purely for the sake of the game now. They're doing things because, yeah, there isn't a strong conviction around, why should I be doing anything that I'm doing? It's just, they're kind of chasing numbers and chasing economic value. But I think the other challenge with AI is
The original field of AI, which is a very old field, it was founded in 1956. It was based on this idea that we're going to create this field that tries to do everything to recreate human intelligence. And to this day, there is no scientific agreement on what human intelligence is.
And that's very different from any other technology in that with basically every other technology, you have a very clear definition of what it is in the first place.
And so that's part of the reason why this AI is so ripe for ideological interpretation, religious projection. It's just so ripe for... It's sort of just a mirror that people can put up to themselves and they see whatever they want to see. And there isn't really a collective consensus around what it is, why it is, who it should be for, and all of these things. Yeah.
All right. So I took us on a little bit of an interlude there, but we were talking, I think we were back in the 2017, 2018 period where folks at the company decided that scaling and throwing more compute at this was the way to achieve AGI or achieve a competitive advantage over other companies. How did things progress from there and how did that lead us to the partnership with Microsoft?
So once they realized that they needed a lot of money in order to pull off this particular path for getting to AGI first...
They actually then immediately lost their main backer, Elon Musk, because they started talking about we need to convert into a for-profit because there's no way that we're going to be able to raise the amount of money that we foresee ourselves needing under a nonprofit. And once there were discussions of a for-profit, Musk started getting really pissed off about that. And there was sort of a tussle between him and Altman over, okay, well, if there's a for-profit,
who would be in control of it. And they both wanted to become CEO. There wasn't any agreement over it. And then eventually Musk pulled out. He pulled out of the project because he was like, I don't like the fact that there might be a for-profit and I might not be CEO. So I want it to stay a nonprofit. If you're not going to stay a nonprofit, I'm out.
And so when he left, it sort of sent the organization into this bit of a crisis period where they had just realized they needed an extraordinary amount of money. And now they have also in the same moment lost the capital that they originally thought they had. And so Altman did what he does best. He started going out and fundraising. And essentially, he ended up going to the Sun Valley Conference, which is
This kind of gathering of a lot of billionaires, I think with the intent to kind of just rub shoulders with people that might be able to give money to this effort. And he ends up running into Satya Nadella, Microsoft's CEO, on a stairwell at the conference, and kind of quickly pitches him this idea of OpenAI, and it immediately piqued
Nadella's interest just enough that they started having more conversations about it. And then it kind of very rapidly then turned into Microsoft deciding to plug the hole and joining up with OpenAI in a partnership where Microsoft would give a billion dollars. And specifically...
Part of the reason why Altman was looking for a backer like Microsoft was because Microsoft also had the ability to give them computational resources. And for Microsoft, they were very intrigued by OpenAI because they felt that they had also fallen behind Google.
And Google, because of its ambitions around AI and because of its acquisition of DeepMind, had done a lot more to innovate on hardware than Microsoft had. And Google had well surpassed Microsoft's ability to build powerful supercomputers. So Microsoft wanted to kind of give itself a kick in the butt
by backing OpenAI and using that to add urgency to its own development or advancement of hardware so that it could also kind of learn by doing and start
making really powerful supercomputers. So now you have this really important commercial relationship and you have this kind of quasi nonprofit commercial entity, which I think eventually sort of leads us to that infamous boardroom coup, which was in 2023. November. November of 2023. Walk me through what happened there and what do we know about the internal discussions and rifts within the company that emerged during this period?
So because OpenAI had this Frankenstein corporate structure where it was a nonprofit, but then it added this capped profit arm within it to raise money and do the partnership with Microsoft, it ended up being plagued from 2018 all the way through 2023. It was just plagued with...
very different factions within the company that had very different understandings of what even OpenAI was, and also very different understandings of what the mission was, because, as I mentioned, the mission is just so vague that anyone can kind of put the mission up to themselves as a mirror, and it's just a projection of whatever they want it to be. And so there were
two main factions that kind of emerged. One that still had this very existential risk mindset around why they were building this technology. They wanted to be the ones that build it and keep it in the hands of good people, not bad people. And they wanted to make sure that it was done with utmost caution so that it wouldn't lead to the demise of humanity. And they basically overlapped with the group of people who also believed that OpenAI, at
the end of the day, even though it has this weird capped profit arm, is spiritually a nonprofit and should operate like a nonprofit, where even if it's commercializing things on the side, ultimately the decisions that the organization makes at a high level should never prioritize capital over safety.
The other faction was people that were being brought into the organization to run the capped profit and to actually figure out how to build technologies that could commercialize and therefore give a return on investment to investors.
And so those were the people that were much more philosophically similar to just a standard Silicon Valley person. They're trying to figure out, how do we build really exciting user interfaces? How do we build a sustainable business model? How do we try and get, like, rapid user growth so that we can demonstrate hockey stick growth to people and then convince them to give us money? And those were the people that saw OpenAI as...
maybe it was a nonprofit at one point, but now it's a for-profit and we need to operate like it's a for-profit. And also we need to operate just within the reality of the world today, which is like, if we want to achieve our mission to get to AGI first, we have to have capital. We have to play that game. And that group also increasingly also felt that
as OpenAI started deploying AI technologies more and more into the world, that the world wasn't ending. And actually, in fact, they were seeing lots of evidence of getting this technology into people's hands, bringing benefit. And so their interpretation of the mission became, actually, we need to deploy this technology as fast as possible to ensure AGI benefits all of humanity. And so
Obviously, you can see that these are totally diametrically opposed views, both on what benefit means and on what OpenAI even is. And it was the board of the nonprofit. The board had for a while been sort of 50-50 split, or I should say it was 50-50 split among the board members that were not also employees of the company. So they were the independent board members. And over the course of...
end of 2022 and early 2023, or maybe just early 2023, the three board members that were most inclined to think about the company as a company left for various reasons, in part actually because of ChatGPT's success. And two of them ended up wrapped up in conflicts of interest where they started being involved themselves in OpenAI's competitors. And so they couldn't serve on the board anymore.
And so what happened with the board crisis was basically there were three board members, independent board members, left that were most inclined to think about OpenAI still as spiritually a nonprofit and therefore something that needs to be always prioritizing safety over profits.
And then three employees, one of whom, Ilya Sutskever, then increasingly also became just disenchanted with Altman, both his personality and his vision for how he was taking the company. And so he then ended up allying with the other three board members. And that resulted in just the most spectacular drama in probably the history of the tech industry. Yeah.
So let's walk through, I mean, again, I don't think the most interesting thing here is the drama, but people remember this episode. And I think that was when I first reached out to you actually, because we had been in touch before the publication of your book and it might've been around this time because there was a lot of confusion about what was going on. And also Ilya was, he signed off on the quote coup, but then he reversed himself and then he ended up leaving the company.
So, did his reversal just reflect the power that Sam Altman actually had and that it just wasn't practical? Also given the relationship with Microsoft and the fact that Sam could leave and take a bunch of engineers with him, it just wasn't practical to actually try to unseat him? Yeah. I think Sam and Ilya are opposites, very much so, in that Sam is a political animal and Ilya is not. Ilya is someone that is quite politically naive.
And honestly, he just did not understand. He didn't even think to think through what would happen once they fired Sam. I think that's honestly what happened is he didn't, it never occurred to him that other people would not celebrate this and that they didn't see Sam the way that he saw Sam.
And he never thought about the fact that they would be up in arms if he didn't give them a good answer. Not just him, but the board didn't give employees a good answer or investors a good answer or Microsoft a good answer about what had happened. And so basically, he did this because he wanted to strengthen OpenAI, because he was gravely concerned about how OpenAI
might turn out under Sam's leadership and how AGI, his conception of AGI might turn out under Altman's leadership. But the moment that it became kind of clear to him that it was not at all strengthening OpenAI, but in fact might lead to its dissolution, that's when he flipped.
So, it wasn't that he didn't have high conviction in his decisions, it's that he was outmaneuvered. He did not- He's naive. Very naive. Yeah. He just didn't consider the fact that there would be consequences. So, has the industry just abandoned the alignment problem as a primary focal point of concern for developers in this space? I mean, are we just full blown now just focused on commercialization of AI?
I think maybe to take a step back, one of the things that I...
I'm kind of critical of in general is I think a lot of the framing around the AI debate today ends up being framed as, like, safety versus profits, but I'm actually critical of both. Like, I don't think it's one or the other. To me, they're actually a little bit two sides of the same coin, in that the safety people and the profit people both have this argument that they should be the ones that control this technology because they're the ones that are going to do it best. And through the history of
the organization, you see how like each of these ideologies, even though they hate one another, actually end up just pushing to accelerate this technology faster and faster and faster. Like they both bear that responsibility for doing that.
And so, has the industry started moving more and more, indexing more and more, towards commercialization and less towards this doomer safety ideology? I think so. But does that really mean much? One of the things that comes across in your book is both of those orientations are ends-justify-the-means approaches. Yes, exactly. Exactly. So, is that also what you're getting at, that
in the service of these larger ideals, we're sort of losing touch with the larger externalities that this race is creating? Because you go through that in the book, not just the ecological effects of building these data centers, but also, like, fascinating looks into the people responsible for cleaning some of the data in Venezuela and Kenya, or some of the reinforcement learning through human feedback that is sort of outsourced to these farms of people working for one to two dollars an hour. Like, how much of that story is actually
not known to most people, do you think? Because it wasn't really, I'll be honest, I didn't really know much about it. Yeah. I don't think a lot of people know about it. And I think it's by design because I guess the way that I would talk about this particular part of the book is I've always been really fascinated by the AI supply chain because people don't really think of digital technologies as having a physical supply chain the way that fashion or coffee or gold has a physical supply chain, but it does.
And this is by design that Silicon Valley has always tried to make themselves appear magical by hiding the physical supply chain. And a lot of industries hide their supply chains by putting them far and away from where consumers can see them. But I think there's even more layers of obfuscation because the average consumer doesn't think
when they're using ChatGPT or they're using some other AI tool, that there is actually an incredibly large physical footprint somewhere out there in the world. And so the thing, I guess, to tie back to what you were saying about ultimately what is my criticism of these two different groups is they are having conversations completely detached from the reality of
how AI is impacting people today, in part because they are completely detached from the reality of how people in the vast majority of the world live today. And when I talk with people within the AI world who are either optimistic or pessimistic,
They have no recognition of the fact that their bubble in Silicon Valley is not representative of, I mean, obviously they realize that they live differently potentially from other people, but they can't even conceive of how differently other people live. And when I probe them, I remember I was talking with
One researcher who was very much in the utopia camp and was like, you know, we get to AGI and that's it. Like, euphoria for everyone, utopia for everyone. And I was like, can you help me understand, like, can you walk me step by step through, like, how you see it helping the bottom of the rung? Like, the people that can't put food on the table for their kids. And he went, oh, I wasn't talking about, like, the bottom. Yeah.
I was talking and I was like, wait, so what do you mean when you say all of humanity? Because it seems to me that only some people deserve humanity here. And there's just other people that are written off as, oh, and the other people. Yeah. So that's my biggest criticism, is, like, I don't really see either side, the profit or the safety side, actually be cognizant of humanity and
the fact that the majority of the world lives in precarity. Yeah. I mean, actually, not to mix up metaphors here, but the alignment problem is also an alignment problem of cultures. And Silicon Valley has its own culture and their concerns have grown ever more sort of disconnected from those of, I feel like, the mass of humanity. There's been this long obsession about... I never quite understood it about becoming an intergalactic
species and also we need to go to Mars and that's the first step because humanity is not going to survive long term on this planet and we need to seed other planets. I never really understood that, quite frankly. And I don't feel like most people really care about that. I mean, I don't know how popular that idea really is. It's popular on social media. And this religious thing keeps coming up. I had done this episode with Meghan O'Gieblyn and actually even Nick Bostrom, who had been on the podcast. I was introduced to his work through Superintelligence and it was super interesting, all these philosophical ideas.
And then he published Deep Utopia. It was almost like a kind of complete reversal. So he went from being sort of doomer-focused to being sort of utopia-focused. And as I was saying, I did this episode with Meghan O'Gieblyn, who wrote a book about her experience, both with religion and then also delving into a lot of these communities. And there's such strong religious overtones that it almost feels like...
A lot of what's also happening is that there is a kind of religious cult that's developed within Silicon Valley and the conduit for expressing these views is AI development. 100%. I think that's exactly the right way to think about it is, yeah, I usually call them quasi-religious, but they are just fully religious when you think about
Right.
But the difference between previous religions was that there was a superior higher power that was determining whether you go to heaven or hell. And in this religion, you are the one that determines whether the human race goes to heaven or to hell. And so there is this profound religious undertone to the whole enterprise of AI development that you can't really...
Some people say that the story of the AI industry is just a story of capital. And I actually think that is not true at all, that you would be unable to understand all of the strange phenomena and the kind of clashing and the decisions that people and organizations make just under the lens of capital. It is also 100% a story of ideology. Yeah.
But in that sense also, it reminds me of a hundred years ago during the Gilded Age and the rise of social Darwinism and eugenics as popular theories among elites in society. And it feels like this is something similar, but I guess a key distinction is rather than us breeding supermen and superwomen, we're getting to a place where we've created a God around which we can transform ourselves, and the people that are in a position
to have the capital to be able to do it will be able to sort of escape. I mean, there's also this strong... The other thing that's really interesting about the time we live in today, which wasn't true 100 years ago: there was no talk 100 years ago about escaping from the earth, escaping from this sort of dystopian future and getting out. Today, there's this strong cult of escapism within the AI community. I actually saw a recent interview of Paul Tudor Jones, where he was talking about how he was sort of in this small
group of AI luminaries, and someone who was very high up in the field was saying that his strategy is he bought, I forget how many acres, in the Midwest, and he's got provisions and everything else. He's preparing for a doomsday where 50 to 100 million people die because AI developed some kind of new virus that kills all sorts of people. Yeah. All of the big wigs in Silicon Valley are doomsday preppers. I mean, Sam Altman's also a doomsday prepper, even before he started working on OpenAI. And I think this ties into...
What you talk about with nihilism is like the average person is checking out, but also the elites are checking out. Like they no longer want to deal with all the problems themselves.
that are present today. Like they no longer feel conviction around salvaging our current planet and our current country and our current, you know, like preserving just the things that work well. They don't feel this kind of attachment to it anymore. They just want to throw it all behind and move to something else. And so I think that's part of the reason why
why we just have all of this capital and all of these environmental externalities that are coming out of developing a technology. I think in part it's because they don't really care how much cost it takes on the current planet and the current paradigm of our social order, because they're done with it. They're just trying to, while this earth still exists, pump all of
the resources that they can. Rape as much of it as possible. Yeah, exactly. Extract as much of it as possible. Exactly. To try and get that rocket ship to lift off, and then they're done. I think that, again, that speaks to the alignment problem that we talked about. Not the traditional alignment problem in AI, but the larger alignment between the haves and have-nots in society.
and this area of real just concentrated power. So like what is the conversation that you feel, Karen, should be happening today that isn't happening? And how do you feel like your book or how do you hope your book can help sort of get that conversation going or bring it more to the fore?
So I think the thing that I'm most disappointed by in AI discourse in general is that people still deeply buy it. Not everyone, but there's still so many people that deeply buy into this idea that there is such a thing as artificial general intelligence, and that we should trust
Silicon Valley to build it and that it will somehow bring us to utopia. And I think all of these things are leading us instead to just continuously cede more and more control to a group of people that have already demonstrated that they have no interest in protecting good things in this world.
And we're leading to a place where ultimately we are undermining the liberal world order. We're undermining democracy. We're reversing all of the progress that we've made and returning back to an age of empire. That's why my book ultimately is called Empire of AI. I make this argument that we are rapidly recreating a time where
There is a small group of people that get to make all of the decisions, and they act completely in self-interest and they extract and exploit and do whatever they need to do to continue to enrich themselves and fortify their empire. And everyone else kind of lives with the ramifications of their whims.
This is something that I think within Silicon Valley has already been brewing for a long time. And, you know, people have colloquially called tech companies empires for a long time. But I think we need to take AI companies much more seriously as literal empires, because the amount of capital and the amount of resources and the amount of extraction, the amount of exploitation has all been upped
one to two orders of magnitude from previous social media and search companies. Like, the amount of environmental devastation that is happening because of the need to cover our planet with data centers and supercomputers is unparalleled. We have never seen this level of build-out at this speed before. Just one figure: President Trump announced earlier this year the Stargate initiative, which is
meant to be a $500 billion private investment into building data centers and supercomputers, which OpenAI says is all for itself, $500 billion over the course of four years. The Apollo program to send the first man to the moon was around $300 billion in 2025 dollars, spread over 13 years.
So $500 billion for a single company to just create a technology that it's still not entirely clear. And it's proprietary. And it's proprietary. And it's not entirely clear what it is and who it's going to benefit.
That is insane. And yeah, that's ultimately why I wanted to write the book, is, like, to just point out, what are we doing here? You know, while this is happening, there are still people buying into the idea that we should just wait and see and hope that on the other end, we reach euphoria. But there's not going to be an end that we can see, because at that point it's too late. Like, the empires have already constructed themselves. The earth has already been laid to waste and,
And democracy is no longer tenable. So like we can't, yeah, this is, we don't get another chance.
I think that's a really insightful framing. Didn't Shoshana Zuboff endorse your book? Yes. Yeah. So that's quite fitting. She had been on the show back when she published her book, and that book was incredible. It was such an important book in terms of giving people a framework for thinking about the nature of the problem. And both of you have actually touched on this analogy of colonization. And I think she actually
used the example of the Spanish conquistadores coming to the Americas and the natives welcoming them
as sort of gods. And there does seem to be something similar happening here in how you're describing this. Yeah, yeah, exactly. And Professor Zuboff, I mean, her book and her ideas in general were so hugely influential to my work and this book. But yeah, it's exactly that. She opens her book with this idea that when you are confronted with something so new, it sort of short-circuits your brain circuitry, so you can't really even recognize it.
And she talks about her own personal experience, like where a fire started in her house and she thought that she would have more time. So she was like running around the house trying to grab things. And then it wasn't until a firefighter like threw her out of the house and then the house burst into flames that she realized that she had been completely anchored on historical examples that were irrelevant. Right.
And I think that is what happens with emerging technologies a lot: they seem so different that people don't recognize that they're actually the same as what came before. And the fact that Silicon Valley has somehow been able to reinvent itself from a social media era, where everyone in society has sort of started agreeing that social media is bad for society, you know,
We're no longer having that debate. And yet somehow the same cast of characters has reinvented themselves now under this new branding of AI. And it's short-circuited everyone into thinking, oh, wait, this time it's going to be different. This time it's going to be better. It's going to work out. They are doing it with our best interests at heart. And it's just not, that's just not what's happening. Yeah.
The same is true for Elon. We live in this era of cults of personality and populism, and Elon has always reminded me of Trump. The two of them are so similar in terms of how they appeal to people. And it's disheartening that the current administration has really aligned itself with these same power structures.
The other thing is, I love that you brought up that example Shoshana uses about the fire in her home, because she also made the point in that same chapter about how social media has resulted in the loss of sanctuary, the loss of the spaces in which we come together as communities. And so, and this sort of
brings me back to my question about the conversation you hope people can have. I don't know that you have an answer to this, but it's something I've tried to explore on the show. How do we get to a place where we can build a resistance to what we're talking about here, which is the accumulation of private power in the hands of ever smaller groups of people, so that we can actually realign
our economies and our political systems toward the public interest. Yeah. Going back to the AI supply chain idea, I've always thought of it this way: okay, so modern-day AI systems, these colossal AI models that companies are building,
They need a lot of data. They need a lot of computational resources. They need a lot of land, energy, water. These are all, to me, sites of resistance. Like, we have somehow...
entered a headspace now where we just feel like there's nothing we can do and they're just going to take all this stuff. But actually, if we can contain these companies' ability to access endless quantities of these resources, they can't do what they're doing. So to me,
when thinking about how to contain their access to data, the U.S. still doesn't have a federal data privacy law. We should have one. We should not be eroding our copyright laws such that they get even more data to train their models on without any intellectual property protections on people's life's work. There should be consortiums of people that actually debate copyright,
publicly owned data, and whether or not it should be allowed to go to these companies and be used to train these models. And there should be public debates about the content moderation of the data sets, because in the course of my book, I talk about many instances in which people within OpenAI just decided for themselves: should we include pornographic content in our data set or not? Actually, yes. Why not? Because it is representative of the human experience.
And that was not at all done in any kind of democratic way. It was a complete snap judgment that now has huge ramifications for the way these models work and for the risk of exposing, you know, kids to potentially illicit content.
We should also be building coalitions of people who are informed about, can ask questions about, and can demand changes to data centers being developed within their communities. Right now, companies actually...
A lot of them enter communities under shell companies, shell entities, where they won't even say that it's them building the data center. Meta has a shell company called Greater Kudu LLC that it uses to do some of its data center work. And communities are left asking, what is this random company, and what project are they doing? And then once the first brick is laid, it's, ta-da, it's Meta, it's a data center, and you can do nothing about it anymore.
And we need more regulation around how utility prices are affected, because when data centers move in, they raise energy costs for ordinary families. And there needs to be protection to keep basic human resources affordable.
So I think that's how we need to contain the empire. The way I think about it is that all of these different inputs the empire needs to continue its accumulation are places where there can be resistance, where there can be coalition building to
start pulling control of those resources back to the people instead of the empire. So Karen, I think this is a very important book that you've written, in large part because it helps with that specific framing, in terms of thinking about what we're actually facing here. What is the challenge
that we're facing? And I think also, to be quite honest, it was very courageous that you wrote this book, that you've taken such a contentious stand on this issue and with this company, which is not easy. I mean, so much power has accumulated in Silicon Valley, and it's not easy to begin a career, or to grow your career, by facing that down. It certainly isn't the easy way of doing it. You really have to carve out your own path.
Besides the fact that people can find this book on Amazon, and everyone knows how to use Amazon: how do people follow you? How do they follow your work? What would you recommend people do?
I am primarily posting on LinkedIn and Bluesky these days, so people can follow me there. I have not created a newsletter yet. A Substack, yeah, or the equivalent. Yeah. But yeah, I usually post my work on those profiles. And I just hope that people also feel free to reach out to me. I try to be as responsive as possible through my website. You can just contact me there, and I try to email people back
with the different requests that they have. But I also hope that people can take the book and kind of make it their own and use it as a platform to do the work that they're doing. So ultimately, if I can help in any way with that, whether through my ideas and words or by literally reaching out and having a conversation, please let me know. Well, you're an important voice in this space, Karen. So I thank you so much for writing the book and for coming on the podcast. Thank you so much for having me. Thank you.
If you want to listen in on the rest of today's conversation, head over to hiddenforces.io/subscribe and join our premium feed. If you want to join in on the conversation and become a member of the Hidden Forces Genius Community, you can also do that through our subscriber page.
Today's episode was produced by me and edited by Stylianos Nicolaou. For more episodes, you can check out our website at hiddenforces.io. You can follow me on Twitter at Kofinas, and you can email me at info at hiddenforces.io. As always, thanks for listening. We'll see you next time.