In August of 2019, reporter Karen Hao went to visit the San Francisco headquarters of OpenAI. It had started as a nonprofit research lab that said it was dedicated to fundamental AI research with no commercial focus.
No commercial intent. At the time, Karen was covering AI for MIT Tech Review. And OpenAI was the buzzy new kid on the block. It had money, big names, and lofty ambitions to develop artificial general intelligence, AGI, for the good of society. Then OpenAI, which had...
really built a mission around the idea that it would be fully transparent and open-source all of its research, withheld its first piece of research for sort of unclear, ambiguous reasons. And then it restructured as an organization. A few months earlier, the nonprofit that controlled OpenAI set up a for-profit arm and got a billion dollars from Microsoft. Not long after, Karen walked in.
So the first meeting that I had in the offices was with Greg Brockman and Ilya Sutskever, the CTO and chief scientist of OpenAI. And I kind of went in, I went in with a generally good view of the organization. And I wanted to just ask some basic questions about how they thought about themselves, how they thought about their mission, which was to ensure artificial general intelligence benefits all of humanity.
So I began by asking, you know, how do you define AGI? What do you think it will be able to do? What are the bad things that you think could come out of it? Because another pillar of OpenAI's mythology is that they need to build good AGI before someone builds bad AGI.
I also asked, why do you think we should spend billions of dollars on this problem? You know, humanity is facing a lot of different challenges. Why not spend billions directly on tackling climate change rather than on AGI that could help tackle climate change? And I realized in that meeting that they didn't have a very good articulation at all of what they were doing or why they were doing it.
Over three days, she spoke to multiple employees. And no one could quite answer that question. They were very secretive. You know, I would be talking to employees and they would steal furtive glances at each other and at the communications director, because there seemed to be a lot of red lines for what they could and couldn't share. But one message was clear: we, OpenAI, have to develop this technology first.
We need to be first to shape the norms within the industry around how this technology is developed. And that is very competitive as opposed to collaborative. And so there was this inherent tension between everything that they said publicly versus how they were behind closed doors. After Karen's story was published, the company didn't speak to her for three years.
In that time, OpenAI released ChatGPT, CEO Sam Altman was spectacularly fired and then unfired, and OpenAI vaulted to the top of the AI food chain. All the while, Karen was reporting, culminating in her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Today on the show, the inside story of the company that wants to control our future.
I'm Lizzie O'Leary, and you're listening to What Next TBD, a show about technology, power, and how the future will be determined. Stick around. OpenAI began in 2015 with Sam Altman and Elon Musk as co-chairs. Its nonprofit mission was backed by a roster of high-profile talent and a billion dollars in funding. Funding from Musk, Altman, and investor Peter Thiel, among others.
Altman gradually became the organization's face. Again and again, people described him to Karen as a boy wonder, equal parts charismatic, thoughtful, and ambitious. Altman's star began to rise in Silicon Valley after he sold his first company in 2012, location tracking software called Loopt. It was a success, but not a blockbuster.
He parlayed that into leading Y Combinator, a tech incubator, making money and connections across Silicon Valley. More of an investor type than CEO. And it was quite a dramatic shift that then he suddenly became the face of OpenAI. He's not an operator. He's not a manager. He's not very good at that. And, you know, to his credit, he is sort of self-aware that that's not his strength.
And many people mentioned to me that, you know, he was rarely around at the company. Like, the real operators were the people below him: Ilya Sutskever, and Mira Murati, the chief technology officer who took over after Greg Brockman shifted into being OpenAI's president.
He is a once-in-a-generation storytelling talent. He is incredibly good at telling stories, which is why he's really good at fundraising and recruiting talent. Because you're pitching investors, you're pitching potential employees. You are painting a picture of some kind of future that feels so compelling
that you have to put money down or you have to join the company, the venture, and be part of that, have a piece of that future. People felt excited to join him and to invest. Part of the reason why he's so good at this is he has a loose relationship with the truth.
So when he talks to people, he can be very, very persuasive, because he's able to say exactly what you want to hear without it necessarily having much to do with what he believes or with the ground truth of the matter. And this is part of the reason why he's also a bit of a chaotic manager, not a very good operator, because
he will go into meetings with one team that might be disagreeing with another team, and he'll totally agree with them. And then with the other team, he goes into the meeting and totally agrees with them.
And it causes a lot of confusion within the organization about what is exactly happening, what direction should we actually be pursuing. The CEO should be making calls and resolving conflict, but instead he would sort of exacerbate conflict along the way. Early on, when OpenAI is still operating in this nonprofit structure, you have Greg Brockman and Ilya Sutskever. You also have Elon Musk involved.
It seems like when I read your book that the operations side of OpenAI, which was being run by Greg and Ilya, was sort of chaotic. Like what was going on there then? Yeah, so think about...
The general premise of OpenAI: a bunch of mostly dudes get together with a massive paycheck, saying that they're going to build something that they can't really define. It's going to be chaotic. None of them have that much managerial experience.
They sort of modeled the nonprofit after an academic lab, where Ilya Sutskever kind of played the professor role and the other researchers would be sort of like his graduate students. And they would work on their own projects with no real unifying theory among all of the projects. And they would just have meetings with Sutskever to ask, do you think this is the right direction? And then he would ordain certain directions as correct or not correct.
And Greg Brockman, he's a coder. He was also very much a Silicon Valley person and preferred just coding all the time. So he wasn't necessarily doing anything to structure the organization's direction. He was just sitting on the couch every day and typing on his laptop and getting really consumed by the work of just building things.
And so it was a completely rudderless environment that just happened to have, at least they thought, extraordinary runway, because there was a $1 billion commitment in funding. But yeah, in this kind of situation, with all of these ingredients, it was just kind of a weird,
rich playground of people that were running around with their heads a little bit cut off. One of the sources of funding came from Elon Musk, who left OpenAI in 2018 and really fell out with the company. Why?
Yes, and Musk was one of the biggest backers. He was the biggest backer of OpenAI. He was the one that initially told Altman and Brockman, we need to publicly say that there is a billion dollars committed to this endeavor to not sound hopeless against DeepMind and Google. Because that's who they were trying to compete with, and that was specifically who Musk had a vendetta against and wanted to beat publicly.
And he was like, this is real. Anyone who chips in, whatever we're missing from the $1 billion marker, I will fill in the gap after everyone else has made their donations. And very quickly, within those first two years of the organization, executives got together and they started realizing,
okay, thus far, things have been sort of chaotic and a bit rudderless. We need to actually start figuring out a plan. What is the plan for actually being number one in AI progress? And they hit upon their ultimate thesis, which was, we need to just scale these technologies as fast as possible and faster than anyone else. And by scaling, meaning we need to build models that
are trained on more data than ever before and on larger supercomputers than ever before. And so that path suddenly required an extraordinary amount of money. And they realized, we can no longer stay a nonprofit and raise that money. We have to have some kind of for-profit vehicle to promise investors some kind of returns in order to entice more capital to come in. And when they started having these discussions,
Musk and Altman then clashed over who got to be the CEO of that for-profit entity. Musk wanted to be CEO and have full control. Altman wanted to be CEO and wouldn't really say to either Sutskever or Brockman exactly why the CEO title was so important to him. But it basically came to loggerheads. And then Musk went, you know what?
If I can't be CEO, this has to stay a nonprofit. We are no longer doing this for-profit conversion. And if you, Sutskever and Brockman, if you two want to still do this for-profit, count me out.
I'm no longer going to continue funding, providing what is ultimately seed money for a startup. I started funding this because it was a nonprofit. And Altman successfully convinces Brockman, who was one of his close friends, to convince Sutskever that actually, yeah, let's just let Musk leave. And I'm the best person to run this thing. We need to do this for-profit conversion.
Altman gets the investment from Microsoft. They have the ability to buy the kind of computing power that they need, the kind of GPUs that would move them toward the ability to do the computing that they're talking about.
There's one thing that really struck me in this book is that there are so many overlapping friendships and rivalries and people funding one another's companies or lending money or lending expertise. And it seems like there are multiple characters in here saying the stakes of AI are so high that that guy, my frenemy,
shouldn't be allowed to control it. Only I can. How did it become this thing that was so talked up as necessary and yet so important that it must be mine, must be kept from everyone else?
I don't think it's a coincidence that basically every tech billionaire now has their own AI company. I think they all collectively identified that AI is a very consequential technology, and that specifically the AGI narrative is an extremely powerful one that can move mountains when it comes to capital, political influence, acquiring more land, energy, water resources, technology
to build the things that they want to build. And so this AGI endeavor specifically has always been one of ideology and ego. It has always been one where the people who have the most power and influence in the world want to move into this space to refashion AI in their own image.
That's not how all of the AI space and all AI research and development happens. But certainly for this particular slice of that world, the AGI world, where people say that they are on a quest to do something profoundly transformative for all of humanity, it is inevitably
a clash of all of these egos, all of these people who have totally different ideas of, no, I want to do this, or, I think this is better. But you talk to so many different people who all seem to struggle to define what AGI means.
Exactly. And I think that is a feature, not a bug, because if you are building something that has no definition, you can make it whatever you want. I mean, what a fantastic way as someone who has wealth and influence in the society to continue accumulating more wealth and influence.
I'm going to read from your book for a second. AGI, if ever reached, will solve climate change, enable affordable health care, provide equitable education. I mean, that's shooting for everything humanity wants, right? Exactly. I mean, they tell us that they're building everything machines, right? They tap into
the deepest parts of our psyche and the deepest desires that anyone has.
Most people have had a loved one that has passed away from cancer. Like, who wouldn't want something that's going to cure cancer? And many people face financial insecurity, face poverty, and who wouldn't want that to be eradicated? So they use all of these different promises because they know it's so effective at quieting
our logical brains and tapping straight into the emotional part of everyone's psyche. They tap into that desire, that desire for a better future, that desire for something more than what we currently have. And that's why we see all of these promises recycled again and again and again, because they keep working.
When we come back, how the release of ChatGPT nearly broke OpenAI and then made Sam Altman more powerful than ever before.
In the fall of 2022, OpenAI launched ChatGPT, which many people now consider a runaway success. But inside the company, it was a completely different story. OpenAI was actually working on a different product, GPT-4, an upgrade of its large language model, GPT-3. While it wasn't a consumer-facing product, it was still anticipated to make a splash.
But before the release, the company heard a rumor. Its competitor, Anthropic, was planning to release an AI chatbot to the public.
OpenAI panicked and switched directions. It blended together a slightly older large language model, GPT-3.5, with a consumer-facing product called Super Assistant. And that became ChatGPT. They released it in the middle of December, during a huge AI research conference called Neural Information Processing Systems, which is also the same conference at which OpenAI had announced itself
when it was first formed. And of course, then everything goes bonkers. And suddenly it's not the GPT-4 super assistant that becomes the moment. It is ChatGPT that becomes the moment. And even after when GPT-4 is released and ChatGPT is upgraded, it is not nearly as much of a cataclysmic shift as ChatGPT was. And what's interesting is after all of this happens,
It turns out Anthropic was not actually releasing a chatbot. ChatGPT's release further fractured the company into two distinct groups. The AGI boomers, who thought AGI could save humanity, and the AGI doomers, who feared it could cause our demise. Both saw the success of ChatGPT as further proof of their beliefs.
There's suddenly abundant evidence for both of their arguments. There are abundant examples that each camp can point to to say, I was right all along. There are people that are using ChatGPT in all of these fantastical ways that prove the boomers' points. There are people that are using ChatGPT in all these detrimental ways that prove the doomers' points. And so there's a lot of chaos at the company and online.
Servers were breaking under the strain of ChatGPT, millions of new users came online, and OpenAI was scrambling to catch up. The nonprofit board, which still sat atop the company, started hearing about all of it. At the same time, they got a very different picture, and a potentially misleading one, from Sam Altman.
In that same moment as the board is having these realizations, there are two key executives that also are starting to sour on Altman: Ilya Sutskever and Mira Murati, chief scientist and chief technology officer. Sutskever is observing all this chaos that is happening at the company. And he is observing how Altman has always been, where he tells different people different things that they want to hear,
things that are totally in disagreement with each other. And he starts having two great fears. One, that all of this chaos is leading to OpenAI actually slowing down their research progress. And two, that it's also leading to an environment in which, even if there were research progress that led to AGI, and he's very much an AGI believer,
OpenAI would not be able to responsibly bring that into the world with Altman as its leader.
Murati, she's very much the operator. She's very much about, we need to follow processes. We need to follow protocols. We need to have plans. We need to rigorously implement them. And Altman is doing all this stuff to try and push the company to ship faster, faster, faster, including trying to skirt around processes, including telling her, hey...
The legal team actually told us that we don't need to do this process for this model. Let's just push it out. And then her checking with the legal team and them being like, we never said that; I don't know why Sam thought we said that. And so they both start realizing, okay, the chaos and velocity is enormous, and it's
heading in a very, very bad direction in each of their worldviews of how a functional company and a functional OpenAI should be. And so the two of them then approach the board, and the board goes, wait a minute, we have been having the exact same concerns, based on all of the revelations that we're having, that Altman is also being not forthcoming, not truthful, misrepresenting things.
We need to have a conversation then about whether or not he really is the right leader for OpenAI. And after a series of intensive discussions, they determine he has to go. Altman campaigned to keep his job, and employees quickly revolted against the board, sending an explicit message that without him, there was no OpenAI. Altman was rehired, and the two board members who voted to oust him stepped down.
OpenAI added new board members and Altman consolidated his power. One of the things that sets your book apart from a lot of the other coverage of OpenAI is it looks at it sort of in the context of building an empire. I mean, it's literally in the title of the book. And there's a chapter in which you talk about three necessary ingredients for an empire. Centralized talent, centralized capital, and a vague mission.
How do you see OpenAI and Sam Altman creating this semi-digital colonialism, this literal empire? Yeah, so the part in the book where I talk about this is where I begin to conclude that OpenAI's mission to ensure AGI benefits all of humanity is actually...
the perfect tool for empire building because it is so vague that it can be interpreted and reinterpreted however the organization wants and however Altman wants. But it is also incredibly effective in rallying people around it
and rallying resources and warding off regulation and doing all this rhetorical work to essentially supercharge
whatever the person wants to get accomplished, whatever the organization wants to get accomplished, and make it go forward. So, like, we have to be the champion here, and we need to build this data center because otherwise China's going to do it. Yeah, we need to do this because it is in the benefit of all of humanity, because we could reach heaven instead of hell, because if China does it first, we will definitely go to hell.
You know, this is extremely quasi-religious language with the utmost important stakes.
This is how empires operate. This is how empires of old literally operated. They went all around the world, plundered resources, exploited labor, and imposed certain ideologies on other people because they believed that they were civilizing everyone else. They were doing it for their good. They were allowing them to ultimately access heaven instead of go to hell.
and bringing them into progress and into modernity, when actually, when you look at the real tabulation, the scorecard of empire building, it's that all the people at the top get extraordinarily rich and everyone else lives in the thrall of the decisions of the people at the top, and they are completely dispossessed of their land, of their resources, of their labor, of their future. The reason why I titled the book Empire of AI
is because we need to acknowledge that these companies are no longer just multinational corporations that are trying to gain a monopoly in a business market. These are political economic monopolies
that are gaining a controlling influence on everything, on our politics. I mean, you see what's happening in the U.S. now with the interference of unelected tech billionaires taking over the government. They're gaining a controlling influence on scientific production within AI. When you have that much money, you can
buy up all of the talent, all of the best AI researchers, and they now become financially conflicted such that when they are producing any research that should be contributing to the public's understanding of this technology, it is actually, in fact, just reinforcing the empire being a benevolent force. And they're also having a controlling influence now on...
energy and utilities and where literally power plants will be placed in the U.S., around the world to plug into this colossal supercomputing infrastructure and data center infrastructure that they're trying to pop up that is ultimately hiking up people's
electricity bills, taking away life-sustaining water resources, and completely hijacking democratic processes at the local level too, because these data centers come in without people in those communities even knowing that there will be a data center coming in, or even knowing who is building that data center, because it all happens in deals behind closed doors and with shell companies.
There's something much more profound happening here than just, oh, multinational corporations following capitalism. I laughed out loud. There's a moment in the book where Sam Altman is talking favorably about Napoleon: you know, oh, well, he just understood so much about human psychology. He's like, yeah, yeah, okay, morally there were some issues, but he kind of yada-yadas past the moral part. And it does make me wonder if anything can stop this.
It is another feature of empire that empires make you feel like they are inevitable, but
they are also fundamentally very weak at their foundations, which is why historically empires have always fallen. And eventually we go from empires towards more democratic forms of governance, because the reason why empires are weak is that practically no one has agency, has a say. And eventually it reaches a point where people are like, that is enough.
And we need more inclusive forms of governance where we can actually self-determine and collectively shape our future. And so absolutely, we are in a moment right now where this empire building feels unstoppable. But there are certainly many, many things that we can do to remind ourselves we actually still exist in a democracy. We still actually can... Sort of. I shrugged. But what I think about is...
The full AI supply chain, data, computational resources, models, applications, all of the resources that these companies, these empires of AI need to continue building empire, those are actually resources that all of us own.
We actually are the ones that own our data. And Silicon Valley has been very clever in inculcating norms within society that they own our data. But no, we actually own our data. And these different parts of the supply chain are, to me, sites of democratic contestation where we can actually start asserting what we want, what we want to give up as our resources and on what terms.
What kinds of benefits do we demand in return? And I think if we collectively remember that these resources and all of these spaces in which AI companies operate are in fact collectively owned by us and we can assert our opinion and voice in those matters, then we can start containing the empire. I have one question that I want to ask you because I would feel wrong not asking it. Okay.
Throughout the book, there are all these tensions around who Sam Altman is. He's generous, he's thoughtful, and then he's ambitious, maybe even Machiavellian. There is a deeply upsetting moment when he is accused of childhood sexual abuse by his younger sister Annie. It's a very complicated family situation.
How do you want people to come away from this book thinking about Sam Altman? I think the thing that I really want people to focus on is I think we fixate sometimes too much on the individuals that run these companies. And then it detracts from us seeing the bigger picture of this global system of power that is being constructed. So...
Yes. Does the book explore a lot about Sam and his motivations and his personality, and why that ultimately shaped the way OpenAI is and how it ended up leading the world into this particular
AI development paradigm? It does. And I sort of leave it as an open-ended question: is he a thoughtful person or is he Machiavellian? Because many of the people that I interviewed, regardless of how long they'd worked with Sam or how closely they had worked with him, they couldn't quite answer that question themselves. But ultimately, even if Altman were not the head of OpenAI anymore, if you removed him or if he decided to step down,
and you replaced him with someone else that is still a product of Silicon Valley, things aren't going to change.
This global imperial system of power that Silicon Valley has constructed will continue on. And I do not want people to suddenly acquiesce or feel, oh, the problem is solved now if Sam is gone or not gone. You know, this is an ongoing fight for our democracy. We have to continue reminding ourselves that...
We have democratic rights and we need to exercise them. Otherwise, they will die. They will go away. And Silicon Valley, not just OpenAI, is very much trying to reorder the world to not have those rights for people anymore. And so, yeah, so I guess the way that I would answer the question is,
look beyond the man and remember and be cognizant of the system. Karen Hao, it is an absolute pleasure to read this book and to talk to you. Thank you. Thank you so much for having me. Karen Hao is the author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. You should check it out. And that is it for our show today. What Next TBD is produced by Evan Campbell, Patrick Fort, and Shaina Roth.
Our show is edited by Rob Gunther. TBD is part of the larger What Next family. And if you're looking for even more Slate podcasts to listen to, go check out Wednesday's episode of What Next. It's all about how FEMA has been stripped to the bones under the Trump administration. We'll be back on Sunday with an episode on the Genius Act, Washington's controversial piece of crypto legislation. I'm Lizzie O'Leary. Thank you so much for listening.