
Episode 46: AGI Funny Business (Model), with Brian Merchant, December 2 2024

2024/12/18

Mystery AI Hype Theater 3000

People

Brian Merchant
An American technology journalist and author with a long career writing about and publishing books on technology.

Emily M. Bender
Topics
Emily M. Bender and Alex Hanna: OpenAI's early development relied on hype around artificial general intelligence (AGI), with the expectation that AGI would eventually solve the profitability problem. They argue that early media coverage of OpenAI lacked a critical eye, dwelling on the founders' personal charisma and grand visions while ignoring the hollowness of the business model. They also note that Altman described OpenAI as a nonprofit to be owned by the whole world, which contradicts how OpenAI actually operates. They criticize Altman's predictions that AI will surpass human intelligence as subjective and uncertain, and his definition of "bad AI" as vague, lacking concrete cases or argument. They further observe that Altman's account of the importance of data is oversimplified, with little consideration of data quality or how data is used, and they criticize his analogy between human learning and machine learning as reductive, ignoring the complexity and diversity of human learning. Finally, they criticize optimistic media assessments of OpenAI for overlooking its potential risks and harms, for describing technological progress too rosily, for conflating technological progress with social progress, and for misreading a famous quote from Martin Luther King Jr.

Brian Merchant: He argues that the 2015 coverage of OpenAI lacked a critical eye, dwelling on the founders' personal charisma and grand visions while ignoring the hollowness of the business model. He points out that the partnership between Elon Musk and Sam Altman was a strategy for Altman to raise his own profile, using Musk's fame to attract media attention. He also notes that OpenAI's founding can be read as a countermove by Elon Musk against Google, rooted in personal and commercial rivalry. He argues that media coverage credited AI's success to a small elite, ignoring the contributions of the many workers behind the scenes. He notes that OpenAI's "open source" strategy won broad media approval at the time but was never truly open source. Finally, he points out that in the early coverage, tech companies openly stated their intention to use user data to meet profit goals.

Brian Merchant: He argues that early media coverage focused mainly on Elon Musk's involvement, treating OpenAI's founding as an extension of his personal activities rather than examining its business model in depth. He points out that the 2015 coverage portrayed OpenAI's founding as a way to counter large tech companies like Google, a claim with little factual basis and a clearly promotional flavor. He argues that media descriptions of neural networks were oversimplified, likening them to networks of neurons in the human brain without scientific grounding. He also notes that media descriptions of AI capabilities were exaggerated, conflating AI functions with human cognitive abilities. He argues that Musk's understanding of data was simplistic, assuming that the quantity of data determines AI's success while ignoring the importance of data quality and how it is applied. Finally, he points out that the media did not question OpenAI's claims or dig into its business model and potential risks.

Deep Dive

Key Insights

What was OpenAI's initial business model according to Brian Merchant?

OpenAI's initial business model was built on the promise of developing Artificial General Intelligence (AGI), with the idea that AGI would eventually figure out how to generate revenue for the company. This was a hand-wavy, speculative approach to attract investment without a clear path to profitability.

Why did Elon Musk and Sam Altman form OpenAI in 2015?

Elon Musk and Sam Altman formed OpenAI as a nonprofit venture to conduct open-source research into artificial intelligence, with the stated goal of ensuring that AI development would benefit humanity and prevent the risks of a single, centralized AI becoming too powerful. However, it was also seen as a strategic hedge against Google, which Musk viewed as a dominant player in AI.

How did the tech press cover OpenAI's founding in 2015?

The tech press in 2015 covered OpenAI's founding with a largely uncritical and breathless tone, often portraying it as a heroic effort to save humanity from the dangers of AI. Articles focused on the involvement of high-profile figures like Elon Musk and Sam Altman, with little skepticism about the feasibility or motives behind the project.

What role did Sam Altman play in OpenAI's early narrative?

Sam Altman played a key role in shaping OpenAI's early narrative by positioning it as a nonprofit venture that would develop AI for the benefit of humanity. He emphasized the open-source nature of the project and the idea that AI should be freely owned by the world, which helped attract talent and investment despite the lack of a clear business model.

What was Elon Musk's perspective on OpenAI's nonprofit structure?

Elon Musk viewed OpenAI's nonprofit structure as a way to avoid the profit incentive that could lead to harmful AI development. He believed that a nonprofit focused on safety and open research would mitigate the risks of AI becoming a dystopian force, though this vision did not align with OpenAI's later pivot to a for-profit model.

How did OpenAI's narrative evolve over time?

OpenAI's narrative evolved from a nonprofit focused on open-source AI research for the benefit of humanity to a for-profit company seeking to monetize its technology. The initial promise of AGI figuring out the business model gave way to more concrete revenue strategies, such as advertising and enterprise partnerships, while still leveraging the hype around AGI.

What was the significance of OpenAI's early focus on open-source AI?

OpenAI's early focus on open-source AI was significant because it positioned the organization as a counterbalance to tech giants like Google, which were seen as monopolizing AI research. The open-source approach was intended to democratize AI development and ensure that the benefits of AI were widely shared, though this commitment waned over time.

How did OpenAI's founders view the role of data in AI development?

OpenAI's founders viewed data as a critical component of AI development, emphasizing the importance of large datasets for training AI models. They discussed leveraging data from sources like Tesla's self-driving cars and Reddit, reflecting a belief that more data would lead to more powerful and intelligent AI systems.

What was the reaction to OpenAI's claim that AGI would figure out its own business model?

The reaction to OpenAI's claim that AGI would figure out its own business model was mixed. While some investors and tech enthusiasts embraced the idea, others saw it as a speculative and hand-wavy approach to justifying the lack of a clear revenue strategy. Over time, OpenAI shifted to more traditional business models, such as advertising and enterprise solutions.

What was the broader context of OpenAI's founding in 2015?

OpenAI was founded in 2015 during a period of low interest rates and abundant venture capital, which allowed startups to raise funds without immediate profitability. The organization emerged as a response to the dominance of tech giants like Google in AI research, with a narrative focused on open-source development and the prevention of AI-related risks.

Transcript

Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. Along the way, we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of bullshit mountain, we discover there's worse to come. I'm Emily M. Bender, professor of linguistics at the University of Washington. And I'm Alex Hanna, director of research for the Distributed AI Research Institute.

This is episode 46, which we're recording on December 2nd of 2024, and it is business time. We're going to talk about how the tech companies hype themselves up in the name of luring investment dollars and maybe someday drawing actual revenue from paying customers. Who would have thought? All with the help of the ever hand-wavy promise of so-called artificial general intelligence.

As we all breathlessly and exhaustedly wait for the AI bubble to finally pop, doesn't it feel like the appeals to a sentient, problem-solving, self-actualized intelligence are increasing in number? And yet, OpenAI's very beginning nearly a decade ago hinged on the myth of AGI, and the idea that, once developed, it would figure out how the company would eventually make money.

With us today is Brian Merchant, a tech journalist, journalist in residence for the AI Now Institute, and author of a new report on how AGI has been the crux of the business model for OpenAI and its ilk since the AI boom began. Welcome, Brian. Yes, thanks so much for having me. I'm so pleased to be on here. I love what you all do here. It's such a breath of fresh air in this stuff I usually have to pay attention to.

We have fun. It's all about catharsis here. And this is, I think, the second time that we have sort of dug into the past. The previous time we took the time machine back to 1956 for that Dartmouth report, and we had some fun costumes on that episode. This time we're going back just to December of 2015, and I am going to share our first artifact here, if I can get the machine to behave. That is this one. Okay. Okay.

So we have an article from The Verge published on December 11th, 2015, so nine years ago and change, or in a moment, I should say. The journalist is Russell Brandom, and the headline is Elon Musk and partners form nonprofit to stop AI from ruining the world. And I have to ask, Brian, were you paying attention already to this stuff in 2015? Yeah.

Yeah, you know, I was. It was on my radar. I was a senior editor at Motherboard, which was Vice's tech platform at the time, RIP. It was our publication that we covered this stuff. And we were a little, I think, naturally inclined to be more critical than most, and I know that we covered it in just this way. In fact, I looked around.

you know, as happens to so many digital media companies, Vice was bought by a private equity firm, bundled up, sold off for parts, and now it is generating AI slop of its own. So I couldn't find our coverage in the archives. But

this is like pretty indicative, I think, of how it was covered at the time. This is not to pick on The Verge or Russell Brandom in particular. This is like, I feel like a reflection of how everybody covered the inception of OpenAI at the time. Yeah, it's really breathless. I mean, all the artifacts are just really no critical eye. I mean, I guess this is preceding what has become known as the TechLash, but

But really just look at these boy geniuses and the kind of things that they're doing. And they really want to do this stuff for the good of society. And it's really, really outlandish when you look back. Even this thing's not that old, just nine years ago, but...

For me, anyway, this is a little bit before my time in the sense that at this point I was minding my own business, doing grammar engineering and sort of good old regular computational linguistics and starting to have these conversations with other people in NLP about how no... Actually, it's before even that, before even the language models were taking over our field. And I had to be saying like, no, they're not understanding. So it was sort of interesting as a historical exploration for me.

Yeah. And I think it was really, yeah, like I said, reflective of the way that this announcement was covered. I think if there was criticism or like criticality, it was more around like Musk's sort of sincerity or his intentions or he was already starting to become kind of this figure who was getting his fingers in everything and just sort of like it was a little ridiculous seeming. But I...

I think by and large, the tech press, the tech world certainly took him at his word, took him at face value. You could say something like this and expect coverage. This was just an announcement at this time. There was very little substantive around it, but the fact that Elon Musk...

You know, the founder of Tesla and SpaceX, those were his two big things at the time, had met with Sam Altman, who was not a household name at the time. And I think, as I kind of argue in the report, this was a canny move by Altman to sort of start seeding press like this to get himself associated with figures like Musk as he climbs this ladder into sort of like centrality of Silicon Valley. So let me read a little bit of this so that the people know what it is that we're talking about. So the first couple paragraphs is,

Tesla CEO Elon Musk has never been particularly shy about his fear of killer robots, but now he seems to be doing something about it. Today, Musk and a group of partners announced the formation of OpenAI, a nonprofit venture devoted to open-source research into artificial intelligence. Musk is co-chairing the project with Y Combinator CEO Sam Altman, and a number of powerful Silicon Valley players and groups are contributing funding to the project, including all of our favorites, Peter Thiel, Jessica Livingston, and Amazon Web Services.

I guess that's not all of our favorites, but some of them. Altman described the open source nature as a way of hedging humanity's bets against a single centralized artificial intelligence. Quote, just like humans protect against Dr. Evil by the fact that most humans are good and the collective force of humanity can contain the bad elements, we think it is far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there's a single AI a billion times more powerful than anything else, Altman said in a joint interview with Backchannel.

If that one thing goes off the rails or if Dr. Evil gets that one thing and there's nothing to counteract it, then we're in a really bad place. And I just have to say, like, this is nonsense. And it's just being quoted and platformed. And the journalist isn't saying, excuse me? I mean, for me, this is, of course, not the point. But it's just a blast from the past to think that Dr. Evil is still enough in the public consciousness,

you know, from Austin Powers, which came and went around the early aughts, for this to land at all. If you said that today, the zoomers would roast you. Um, but it's, it's just, anyways, not really beside the point, but yeah, I mean, that this is even kind of, that this is kind of an argument that,

And that open source even is argued as a kind of corrective of this. And I know this gets talked about in another article we're going to look at. But I mean, the kind of idea that there's kind of this way that bad actors are the problem, but then there's also going to be many quote-unquote AIs are going to counteract them. Yeah. Truly ridiculous. And like Emily said, it's nonsense, but it's like of a very specific...

sort of science fictionalized, you know, sort of vernacular in the way that it's laid out, right? Like it's there, you know, and now with hindsight, as I argue in this report, you can kind of map this out, right? This one big bad AI is Google. And Elon Musk has recently at this point been frustrated with Google, both personally and professionally. And I think because he's

personally invested in AI and interested in it even from a business perspective at this point like you can view the founding of open AI as through the lens of of it being this hedge against Google so like that if you look at that language again he's saying oh we need all of these other forces to counteract it so pay attention to us it's like we're like these little ones are the good guys there's a

bad guy, you know, he's a bad AI. And it is all, it like really is nonsense, but it is something that, you know, again, the readers of the tech press or like science fiction fans will immediately kind of glom onto. Yeah. And I think actually, I want to take us over to the next one because there's, these are such rich texts. And this next one has this big, long interview with the two of them. And a lot of the same themes come up.

So this is Wired, also December 11th, 2015. Notice the timestamp, 12 a.m. So this was like embargoed news that they were dropping like at the moment that they could, right? The journalist is Steven Levy. Do you know if he's a Levy or a Levy? That's one of those names. Yeah, Steve Levy. He was like one of the founding sort of tech writers.

He wrote Hackers, which is one of these early books that kind of glorified the first wave of Silicon Valley entrepreneurs, guys like Wozniak and Jobs, and kind of wrote really early to the game. But from the perspective, these guys are starting a revolution that's going to be great for humanity. Yeah.

Yeah, I think Levy has been, I mean, he's been a bit of a simp for the industry, to be somewhat unkind. But I know he's, is he the editor-in-chief of Wired now? Or did he, but he's some high-ranking editor. He's kind of, you know, he's written some of these bestsellers. He's written a book about Google. He wrote a book about the iPod. And I think he's made enough money where he doesn't really...

want or need to be the editor-in-chief of Wired. He's one of these perpetually kind of editor-at-large guys who gets the plum assignments because he's done such a good job of cheerleading the tech industry for so long that he gets, you know, he's friendly with the CEOs and stuff. So he gets these scoops. All right. So here he is at 12 a.m., December 11, 2015. Headline: How Elon Musk and Y Combinator Plan to Stop Computers from Taking Over.

Sorry, I had to roll my eyes just at the headline. Subhead. Yeah. Sorry, just to interject. But yeah, like exactly like this, you can buy the embargo and everything. They almost certainly like shop this to him. I mean, we'd have to ask. But like this is like, hey, we've got this story. We're going to save the world from killer robots. Do you want to, you know, exclusive and wired? And that's probably how this came about.

So, the subhead, they're funding a new organization, OpenAI, to pursue the most advanced forms of artificial intelligence and give the results to the dot, dot, dot. And I don't know if that's because, like, Wired has changed their format in the intervening nine years, but that's it on the subhead. Yeah.

Fill in the blank. Give the results to the people. Yeah. Okay. So as if the field of AI wasn't competitive enough with giants like Google, Apple, Facebook, Microsoft, and even car companies like Toyota scrambling to hire researchers, there's now a new entry with a twist. It's a nonprofit venture called OpenAI announced today that vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Yeah.

Funding comes from blah, blah, blah. Let's see. There's something I do want to read on this, which is interesting: which Musk, comma, a well-known critic of AI, comma, isn't a surprise. First off, hilarious. And then, as is Altman himself. Sorry, that was the prior paragraph. But Y Combinator? Y Combinator?

Yes. The tech accelerator that started 10 years ago as a summer project that funded six startup companies by paying founders quote ramen wages and giving them gourmet advice so they could quickly ramp up their businesses. Uh, and they talk about everything Y Combinator has done. Uh,

But the weirdness of having something so pro-capitalist like Y Combinator joining up with Elon Musk to do something open source, bazinga, so wild. I mean, do these people not remember what the late 90s were like, where the whole thing was open source everything and you didn't need a business model? Of course not.

Yeah. I mean, it is wild just the amount of credit that they're just giving everybody involved in this enterprise from the get-go. Again, I do think it is an artifact that is very telling and very sort of indicative of just like the pre-Tech Clash sort of coverage where at least after that you kind of have to at least, you know,

drop a caveat in or two, even if you're still not going to be properly critical. But yeah, just seeing the formation of this, and we'll talk about more later, but this is really integral to OpenAI's founding myth and operative myth, even still today, what it's doing even now as a $160 billion company.

So there's something I want to take issue with in the next paragraph here, where they're talking about how this is a research lab, and they're trying to counteract these big companies. And so they say, it may sound quixotic, but the team has already scored some marquee hires, including former Stripe CTO Greg Brockman, who will be OpenAI's CTO, and world-class researcher Ilya Sutskever, who was formerly at Google and was one of the famed group of young scientists studying under neural net pioneer Geoff Hinton in Toronto. And I just want to say that it is...

so irritating to, and they do this elsewhere, they'll say top researchers or world-class researchers. And it seems to me there's a couple of things going on there. One is a world-class researcher to me is somebody who, you know, really contributes to the research community by having good citational practice and sort of connecting what they're doing to what other people are doing. And that's not what's going on here. But also it occurred to me that this notion of like top scientist is a way to locate people who don't own their own positionality.

That instead of sort of saying who they are, where they come from, and what they're working on, we're locating them at the top. And that is just rampant in these articles and in thinking about AI. Like, we've got to get the top talent.

Well, it's also like a genius kind of notion of the individual contributor, right? And we have this in many guises in Silicon Valley, the 10X engineer, the whatever, pick your poison. This kind of operates in many guises. And the idea that, and I think this is said in the other piece where it's sort of saying the two constraints are

on AI companies, top research talent and data. And I'm like, oh, what about compute? What about power? What about these various supply chains? And it's, and it, and it really, um,

To me, this is something I'm always harping on, but it's like you're placing kind of the real labor of AI in kind of genius, genius all white men in the global north and not like the huge labor underclass that even makes any of this stuff possible, right? And I mean, that's the really...

That, I mean, is suffused through all this journalism. Yeah, it's very much about the people at the top that can participate in a hero narrative or a genius narrative. And I hadn't really thought about that much before, Emily. That's such a good point, so interesting. Because, yeah, it rarely even gets attached to his actual accomplishments or publications, but you read any of these articles over the last 10 years, and Sutskever in particular is, you know,

AI scientist, top AI scientist, and he's just, you know, to the point where he's just, you know, just worth so much money to these companies and that this sort of elite tier who fit that bill that you're talking about can just charge exorbitant sums or start their own

Right, and get all the funding. Yeah. So we have this interview that I want to try to get to at least some of because it's a rich text. These guys are both bonkers. But we should also save time for the other Wired article that comes out a couple days later. But so this is Levy interviewing Altman and Musk not at the same time,

but I think asking the same questions and then merging them together. So how did this come about? Sam Altman: "We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we're creating OpenAI. The organization is trying to develop a human positive AI and because it's a nonprofit, it will be freely owned by the world."

That's not what non-profit means. But it is interesting that he really feels compelled to repeat this over and over, and it's funny to see it dissipate and ultimately disappear, this notion that the fruits of...

of what they're doing are to be owned by the world. It's really one of the core foundational myths of OpenAI, and today it's just all but completely gone. Once in a while he'll still say something about UBI or the need to give people something like that, but it's...

And Musk has a slightly different take here. So in answer to the same question, he says, as a result of a number of conversations, we came to the conclusion that having a 501c3, a nonprofit with no obligation to maximize profitability, would probably be a good thing to do. And also, we're going to be very focused on safety. So we're all then saying, it'll belong to the world, it'll be freely owned by the world. Musk is saying, we're going to protect ourselves from the profit incentive, which like,

I mean, that is kind of good commentary on a lot of what's happened elsewhere in Silicon Valley. Coming from Musk is surprising, but it's totally not what happened, right? Yeah. I think, you know, I think he's starting to position it as this, you know, again, anti-Google sort of way of doing things where Google was already starting to be seen as invasive and people wouldn't really respond well if...

Google's going to release something. He's going to kind of also, I think, I think it's like, I think at this point it's kind of tactical. It's kind of, it's a hedge against Google. It is, yeah. Because, you know, there's this famous piece of lore that, you know, in OpenAI's founding that just before that he has this big fight with a Google co-founder. They have a personal falling out and it's over AI and,

And he's, Google's doing AI and the subtext that I get and that even like one of sort of Musk's biographers sort of points out is that it's maybe kind of underscored by jealousy. Like Musk wanted to be involved in these sort of the big future projects and he's feeling like left out and he does what he's, now we know, we all now know that he does is lash out and kind of punish his competitors. Yeah.

Yeah, and we're seeing, I mean, what was this? There's this kind of move now. I mean, we're jumping the gun, but that OpenAI is moving toward this for-profit model, and that Musk is trying to block that right now. And then we have this founding of xAI, and kind of the moves are saying, well, I'm going to do it better. I want to be the one to do this. And so much of it just

coalescing off of just personality and all of us having to deal with this shit. So there's a comment in the chat here from Black Angus Schleem. Tech journalism was really credulous back in the day. No wonder we've been left defenseless against these assholes.

I mean, it was. And I am not immune from some of that blame. I wasn't writing articles in Wired at the time. But even when Musk started...

Tesla, I was working for the Discovery Channel at the time, writing articles on its online sites. And I think the prevailing wisdom at the time was like, ooh, he's doing electric cars because this is the heroic good thing to do. But also, he didn't start Tesla. Didn't he sort of muscle his way in with money and then claim to have started it?

Yes, he swooped in. It was like a kind of foundering, you know, startup, and the pieces were there, and he kind of, yeah, he swooped in and sort of muscled out the other founders and became the public face of it. Yeah.

So, okay, let's keep going with these two clowns. I'm going to skip over that one, I think. So they're talking about how important it is to put things out in the public so that, like, you can have lots of AIs and everyone owns them. And also that this helps them recruit top talent because everyone who's working on this wants to be able to publish their work.

And so Levy asks, doesn't Google share its developments with the public, like it just did with machine learning? And I think that was PyTorch. No, TensorFlow. Oh, TensorFlow, sorry. And Altman says, they certainly do share a lot of their research. As time rolls on and we get closer to something that surpasses human intelligence, there is some question how much Google will share.

Okay. Just on every level, right? It's not like OpenAI is sharing anything at all. So OpenAI is not open. They are not open about what they're training it on. They're not, you know, they aren't even publishing papers anymore. But also there's this inevitability narrative in there, right? Yeah. As time rolls on and we get closer to something that surpasses human intelligence, as if that's necessarily going to happen. Yeah.

Yeah, I mean... Oh, yeah, no, I was just going to say the one thing that is probably accurate that they said is that positioning it this way and with this mythology is a pretty good way to probably recruit engineers who are hoping to be attached to something that Musk is involved in, raise their status, maybe make a bunch of money. Or maybe they're making a ton of money already at Google, and this sounds like a more interesting project. So they were...

able to recruit, you know, a bunch of top talent that fit that bill you were talking about pretty, pretty wholly. So that, you know, that was probably part of the equation at this point. Yeah.

So we have this other quote, that's the Dr. Evil stuff, which I think we can skip over. Although I have to say, so the journalist says, if I'm Dr. Evil and I use it, won't you be empowering me? And Musk says, I think that's an excellent question. And it's something we debated quite a bit.

End of sentence. They were having sophomore dorm room late night conversations about this, it sounded like. I know. Every time I see Musk debating anything, I just think of the Joe Rogan meme of him taking a huge hit of a blunt and then just opining about whatever.

Yeah. Yeah. So I think, let's do this one. What's an example of bad AI? And Altman says, well, there's all the science fiction stuff, which I think is years off, like The Terminator or something like that. And it's like, again, years off suggests that it's actually coming. Right?

Right? Yeah. He says, I'm not worried. And it just starts with science fiction, right? Yeah. Altman says, I'm not worried about that any time in the short term. One thing I do think is going to be a challenge, although it's not what I consider bad AI, is just the massive automation and job elimination that's going to happen. So he's already sort of marketing this to the companies that will be laying off people, right? Yeah. And then, another example of bad AI that people talk about are AI-like programs,

I don't know why that's "AI-like," that hack into computers that are far better than any human. That's already happening today.

So the things that he's worrying about... What does that mean? I'm just trying to read this. AI-like, does that just mean you just have a for loop going through a password list? Is he thinking of like Stuxnet and things like that? Yeah, maybe. I don't know. That's quite a bit more complicated. But yeah, I don't even know what the reference is there. Yeah. So is there anything else in here that we wanted to be sure to get to? Oh, they talk about data. Yeah.

So this is Altman again. So the question is, will your startups have access to the OpenAI work? And Altman says, if OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company, but more so than that. However, we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and SpaceX can share.

And then there's this next thing. So an interviewer asks for an example. And Altman says, all of the Reddit data would be a very useful training set, for example. You can imagine all of the Tesla self-driving car video information also being very valuable. Huge volumes of data are really important.

If you think about how humans get smarter, you read a book, you get smarter. I read a book, I get smarter. But we don't both get smarter for the book the other person read. But using Teslas as an example, if one single Tesla learned something about a new condition, every Tesla instantly gets the benefit of that intelligence.

Thoughts? What? I mean... There's a... Yeah. I think he... There's a great comment in the chat where it says, Homestar315 says, neither of these guys actually write code and that's obvious. It's a great point. Like, look at how, like, nebulous these ideas are. They are just, like, vaguely science fiction shaped ideas based on some... I don't... You know, I...

things that they've maybe read or maybe like skimmed or, you know, remember from science fiction movies from, from, from years ago. Like at this point, neither of them have even really interacted with much like AI technology or a company even building this. This is all like, and this is part of the argument I make in the report. It's just like, it's all ideation. It's, it's all vibes really. And, and, and strategic like market positioning again. Yeah.

against Google. And with such a dismal view of what happens for people, right? Yeah. I read a book, I get smarter. Well, I mean, first of all, why are we talking about ranking intelligence? But also, I read a book, I learn something. I engage with a book, I learn something. And so here's Altman already trying to reduce what people do so that it looks something like what machine learning is.

Yeah. Yeah. And yeah, and I think it's also useful for context. And some of these emails have now been made public as part of Elon's lawsuit against Altman. But you look at like their early emails together. And to me, it just seems like.

Altman knows that Elon has made some comments in the press about AI and that he's, quote, worried, only in this apocalyptic Skynet sort of sense of the word. And he writes this introductory email to him, reaching out and professing to have the same worries, but really just seeing it as an opportunity to sort of link up and become a remora on Musk's power there. And that's exactly what he did.

and they're just building this scaffolding and it's all narrative, it's all stories, it's all sort of just detached from reality at this point. And I think that that, in hindsight now, is pretty glaringly clear. So the things they're saying about Tesla, it's just like retconning it on. Like, oh yeah, we could have Teslas driving around learning things. And Musk says...

Certainly Tesla will have an enormous amount of data, of real world data, because of the millions of miles accumulated per day from our fleet of vehicles. Probably Tesla will have more real world data than any other company in the world. My data set's bigger than your data set. Well, it's also a weird view of like, what is data too? Because it's sort of saying like, well, is Tesla data going to be helpful for building any kind of sensible language models? Yeah.

No. Well, okay, first of all, Alex, sensible language models, is that a thing? Yeah, I mean, like, I don't know if it's a thing, but I mean, I'm not saying a sensible large language model. Okay. I'm saying, like, a language model, I mean, a language model that is doing what is intended to do, right? Okay. And I mean...

but you're basically getting to a point where, you know, it's such an interesting view. Because I think it's not only reducing the human to kind of a rank order of intelligence, but it's also reducing data to just whatever slop bucket it is, where you just put the bigger data set into the machine, and it goes, and then the number goes up or whatever. So that's just the vibe here. Yeah.

Okay, I think I'm kind of, I think we're maybe done with this. I'm just checking if there's anything else in here, anything you see that you want to jump in on. Okay. Oh, this one. Elon, you are the CEO of two companies and chair of a third. One wouldn't think you have a lot of spare time to devote to a new project.

Yeah, that's true. But AI safety has been preying on my mind for quite some time. So I think I'll take the tradeoff in peace of mind. And yeah. And we now know that it worked. He has peace of mind and has left the public sphere and is quietly contemplating the world on a hilltop somewhere. Yeah, or jumping into his millions like Scrooge McDuck. That's what I wish all billionaires did. Just go swim in your gold. Yeah.

Okay. So, I wanted... this last one is also wonderful. So let's go for it. This is Cade Metz writing in Wired, under the tag or sticker, Business.

There's a great image here. It's a picture of Elon Musk wearing a gray blazer over his head.

Over a... I guess that's like a black button-up. And he's in front of either what is like a nebula or a volcano exploding. A solar flare, maybe? Or I don't know. It's just... Anyways, just one of these... Anyways, very extra as a...

That headline, too. Like, there are some, you know, excitable headlines that we've... But there's far more than saving the world. So, like, they're saving the world is the, you know, the supposition. That's already there. And then this is... And it's not only more, but far more. Like, so this is...

This is like, I don't know, like intergalactic. They're saving like the universe. With that space image there. Yeah. And then also, again, more forces at work than just the possibility of superhuman intelligence taking over the world. So Metz is taking no critical distance here. No. No.

And Christie, our producer, says, the headline is Giving Hamlet. There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy. Yeah.

Okay, so starting in here, first paragraph. Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI and then share it with anyone who wants it. At least this is the message that Musk, the founder of electric car company Tesla Motors...

correction, he wasn't the founder, but going on, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company's launch, Altman said they expect this decades-long project to surpass human intelligence, but they believe that any risks will be mitigated because the technology will be, quote, usable by everyone instead of usable by, say, just Google.

Yeah, it does also give you a sense of like the amount of gravity that Musk already commands. This is an article that's like basically kind of just restating even in more grandiose terms what they had already published a couple days ago. So like this is really, it really did like, I do remember this when this news broke. We weren't in the insider circles. We didn't get any interviews with any of the principals. But it was like a week of headlines coming out like this.

Yeah, and there's this one statement here where they're talking about the interview, and I don't know if this was in the other interview in Wired, but there was also a Medium page called Backchannel. I think it's the same one; they republished it on Wired. The one that we just read, I think, was originally published on Backchannel, which was Steven Levy's special thing. Yeah.

And just highlighting just the line, because I kind of missed this the first time around, where Altman said, just like humans protect against Dr. Evil by the fact that most humans are good and the collective force of humanity can contain the bad elements. And then someone in the chat, I think it was S.J. Lett, was like,

Oh, yes, because that's how the Austin Powers movies worked. Good humans protected against Dr. Evil, and it wasn't the International Man of Mystery himself. Oh, no, yeah. It said, yeah, SJ says, I don't remember humanity stopping Dr. Evil so much as Austin Powers and poor planning. Right.

And M. Mitchell AI in the chat says, maximize the power of AI. This is in that first paragraph. Is corporate speak being passed off as a normal thing to say? So again, we have the journalists sort of in here helping, right? Helping build the hype, helping make the rest of us like have to deal with this instead of coming at it with a critical eye. And we still see stuff like this happening. Like I just, I mean, now and it's just like, why, like,

Can we think for a second about why they might want to do this? Like, why beyond just saying, I want to save the world? Like, two people who have given us very little evidence that we should trust them. Even then, even 10 years ago, that we can just kind of reprint what's on the press release. It did, you know, our earlier commenter in the chat here was right. It did say,

you know, giving them so much slack did make things worse in a pretty demonstrable way. Yeah. Yeah. Okay. So I'm going to get into this open thing again. So increasingly companies... Oh, prior to that, I want to get into this thing Miles said. Oh, yeah. Miles, PhD student Miles. Yeah. So this is back when Miles Brundage, who went on to work as a high-up policy person at OpenAI,

still a PhD student at Arizona State, said it's not yet an open-and-shut argument.

Yeah.

I was actually curious to see when he's listed as a PhD student, did he finish his PhD? And he did. He has a PhD. He did, yeah. So, you know, good job, Miles, finishing. That's an achievement. But also... But sorry for everything you did after that. Okay. So increasingly, companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive. And again, I'm like, did you miss the 90s?

Because that already happened once, right? The new economy: if you were trying to do something other than, like, giving away open source software, that was so old economy. So, you know, I don't think this journalist is so young, but maybe. All right. So, talking about the advantages of open, and so on.

So we've seen this point before, right? Yeah.

I think it's interesting how, like, now who is the actual open source player now? It's meta. It's Facebook. And it's so interesting to see, like, OpenAI get so, and Elon Musk gets so much credit and so much, you know, press for doing it then. But it just, I think it just shows how much

like narrative and building this scaffolding is important. They weren't part of one of the sort of extant tech companies, so they got the chance to sort of build this narrative as kind of a reaction to what Google, and at the time Facebook, was doing. And now Facebook is,

is making, they're trying to make these arguments, but nobody cares because it's Facebook. But also they're lying, because what they're calling open is not. They're saying it's open source, but what they're doing is releasing model weights. They're still not giving any documentation on the data. They're not giving the training software. It's not actually open source. The only group I know that's doing something

Well, there was the whole... the Pile and EleutherAI, but then also AI2 with OLMo trying to be open. Well, there was also what Hugging Face was doing with BLOOM. Yeah, and the model. But I mean, the thing for... But OpenAI was lying, right? At this point, we know now that they were lying too, and they got the benefit of the doubt because they had kind of a...

And I do want to give a shout-out to the ecosystem element of this too, because there's a remark in this about TensorFlow. So, this competition may be more direct than it may seem. We can't help but think that Google open-sourced its AI engine TensorFlow because it knew OpenAI was on the way, yada, yada, yada. But even talking about TensorFlow as being this kind of ecosystem of openness is not

is not true. You're the largest player in the ecosystem. That goes for Android and Chrome as well. I want to refer readers or listeners to our episode 21 with Sarah West and Andreas Liesenfeld about open source and what open source means, including a paper that I think Sarah wrote with Meredith Whittaker and David Widder on open source and the kind of, um,

what is the word, not closure, but a kind of capture or falseness of open source itself, and the way that open really doesn't mean anything at this point. Yeah, yeah. Okay, this whole article is just, like, credulous AI hype. So, you know, believing them on the open, and then this paragraph here: deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. No, they don't.

Like, really? Just because I said that, you've got to dig a little deeper. And then it gets worse. Feed enough photos of a cat into a neural net and it can learn to recognize a cat. Feed it enough human dialogue and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react and it can learn to drive. And it's like, no. Okay, so yes, you can create a program that can classify images as cat or not to a certain degree, right? But that's not...

I wouldn't even call it recognition, right? There's a whole bunch of anthropomorphization happening in here. So recognize sounds like cognition. Feeding it sounds like it's some biological thing, right? But then, okay, does ChatGPT carry on a conversation? It produces the form of a conversation, right? And that's getting us into all kinds of trouble. Have the Teslas learned to drive? No, they have not. So, yeah, all right, yeah.

Talking about data and how important it is. So Chris Nicholson, the CEO of a deep learning startup called Skymind, giving... Not creepy at all. Yeah, Skynet, which was recently accepted into the Y Combinator program. I'm sure Airbnb has great housing data that Google can't touch.

So this is interesting because this is 2015 and we've got like total credulous access journalism going on. And these guys are saying the quiet part out loud, right? Your data is going to make us rich. Yeah. And people didn't react as we should have.

Yeah. It is. It's just so much more out in the open right there. I mean, that specific example, too, your housing data. Like, oh, I bet Airbnb has a bunch of great data about your house that we can plug into a large language model to train whatever for the future. I mean, it also doesn't make a ton of sense, but it also just...

exactly what they're thinking and what has come to pass, right? So many of these things, as you pointed out earlier, like this, it doesn't really, you know, matter to them what, it's just size. It's like if it's Tesla cars collecting data or if it's like, you know, Reddit threads, as they mentioned in the articles, whatever is bigger, more out there, they figure they can just

brute force it into something that will matter. They can just transmute that into something that will be meaningful and useful and profitable.

Yeah.

I think Paris Marx, who we had on the pod, I think two or three episodes ago, has had a really great kind of critique of Kara Swisher and her kind of turnabout on Elon Musk, where Elon was good, and then it turns out he just wants to get a bunch of money. No, no, you were playing buddy-buddy with him, and then he pissed you off, and then now you're here, and there's no way to actually hold any of these people to account. Yeah.

All right, I want to get this pessimistic optimist section in here before we wrap up and head over to fresh AI hell. So subhead, pessimistic optimists. But no, this doesn't diminish the value of Musk's open source project. He may have selfish as well as altruistic motives, but the end result is still enormously beneficial to the wider world of AI.

All right. In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do so as well if it hasn't already. That's good for Tesla and all those Y Combinator companies, but it's also good for everyone that's interested in using AI. Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook and Dr. Evil, wherever he may lurk.

Dun, dun, dun! Oh!

It's overwhelming. And again, just as a reminder, OpenAI does not exist yet. It is just – this is three days after it's been announced. This is all just a combination of projection and just like sort of a reading of what exactly Altman and Musk have said and put into press releases. That's all that exists at this point. And all of this superintelligence stuff is just like platformed as if it made sense. Yeah.

Right. And I mean, this is the people that they talk to. I mean, this is this Nicholson character, who's talking about a quote, philosophy of escape velocity, of an AI system becoming quote, smarter and smarter. And if it did do that, it would be scary. And then they talk about guardrails, and, you know, the guardrails is by giving good AI to good people and,

So, and the thing that really irks me is the penultimate paragraph of this piece. Yes, me too. Where they say,

Based on their prior successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. And I'm just like, first off...

Fucking up the MLK quote to talk about technological progress. You know, you could harness the centrifugal force in MLK's grave to power some of these data centers. But secondly, the idea that, again,

Issues of harm have to do with technological breakthrough, which of course is patently ridiculous. It's a category error. It is false. And the arc of progress will keep bending upward. As you said, Alex, that's not how that quote goes. Yeah, it's not even – exactly. First off, that's not what an arc is. Yeah.

You mean the parabola of progress? I mean, maybe you mean the exponential function? Like, what are you talking about, man? Yeah. All right. So the business model in all of this, I think, did we dog enough on the AGI will figure it out?

Yeah, can I do one? Look at this quote at the bottom of that graf, which is: thinking about AI is the cocaine of technologists. It makes us excited and needlessly paranoid. Like, that one maybe we'll give them. Yeah. Yeah.

So, you know, we billed this episode as looking at the business model of AGI. And, you know, the business model, I don't know if it was in any of these articles, but it's in your report. What they say is the business model is: build the AGI and it will figure it out for us. Yeah. Right. Yeah. These articles predate even the earliest sort of thinking about a business model. I think, as we've mentioned,

earlier, the way that I think about this genesis of OpenAI is as this strategic hedge, as Altman's Silicon Valley socioeconomic

ladder climbing and then like the sum product of this is this like this, then this research project that they then actually do sort of attract people to work for. And it's from the beginning. It is just sort of

I think it's, you know, I would have loved to have been in the room, as any reporter would, when they're actually sort of hammering out the early steps of what OpenAI is to be. But there's people from Amazon there. There are other tech companies. There's VCs there. So, far from this, you know, origin story of it being this totally altruistic, world-saving program, it's in the air, right? Like, they're adjacent to like

huge fountains of capital and some of the biggest players in Silicon Valley. So from the beginning, it's like, we don't know what it is yet, but we think that there could be a play against Google and it's starting with this. And then so, yeah, you have a number of years where they're just kind of building the mythology, right? And I think it also has to be noted that this is all happening in sort of like the 0% interest rate period where...

where it's easy to get money for startups and to get money invested in things. And then so you have all these companies arising like Uber that aren't profitable for a decade or more. And it's leading everybody to believe this idea where the story is the most important, the strength of the conviction, getting investors on board. And then it doesn't really matter whether or not you have anything resembling a sustainable or working business model.

So all this is kind of in the air as OpenAI is forming, and they just kind of mess around for a few years. They get good researchers. For a while, they're working on robots. For a while, they're working on esports and game playing, and they're like,

Doing all these things very much catered towards the press. Like, oh, we're going to... you know, remember, do either of you remember when they announced that they had made a model that was too powerful? Oh, yeah. We were... Yeah. The whole NLP community was laughing at them, like, ha ha ha, this is marketing. Yeah. It was like... I forgot, they called it Q star or whatever. Yeah.

That was a few cycles ago or whatever. And then that was around the board mix-up and everything. We've got to get ourselves over to fresh AI hell here. So I'll just wrap up by saying that, yeah, so they used all of this and then they understood or at least had some reactive understanding of how the press responded to their moves. And then they built this mythology. So when ChatGPT drops...

They really, they don't have a business model. They have a vague sense of the things that they've been saying, right? Like, as you said, about halfway through, they start saying, well, we're going to ask AGI to figure it out. And literally, like, saying that on stage at tech conferences and having people, you know, kind of nod along and investors giving them more money. So it really is another sort of snapshot of the moment that we were at when you could just kind of say that and still get billions of dollars and then invest

you have a product that people kind of like, but it's not clear that it's going to make money, like ChatGPT. And then from there, the last two years have been the story of them trying to figure out how to harness this apocalyptic hype that they've built for themselves into something that's going to generate returns. Yeah. All bark, no bite. All hype, no...

I'm trying to think of something else that starts with H. Yeah. All hype, no horchata. Horchata. I did not expect that at all. All right. So Alex, here's your prompt. I'm going to make you... Okay, but I have an idea. I have a musical styling I want to do, but you give me the prompt. So the prompt is, you are...

Not fresh AI held demon, but it's corresponding angel this time. Sipping some horchata. Disappointed to have found out that open AI was not actually altruistic. I know. All right. I'm going to do this. Since we started with a Flight of the Conchords reference in the intro, I'm going to... That's the musical styling. And I'm just like...

I've got my horchata in AI heaven. I'm ready to be benevolent. I open the newspaper. What do I see? Sam Altman. No, could it be AGI?

I like hear an echo of myself and it sounds like a thing. Anyways, that's all I got. AGI is a lie. Say it isn't so. I do a spit take. Horchata everywhere. In my corresponding angel's hair.

That's all I got. All right. So now we're going to have to make a Mystery AI Hype Theater 3000 cookbook with a recipe for angel hair pasta with horchata. Yeah. I'm about it. I'm already in the works setting up a band with my girlfriend and...

It will be... maybe our side project will be rat balls. Yeah, awesome. Okay, so we've got too many, but we're going to go quickly. This first one is from the Financial Times today. Journalists are Madhumita Murgia, Cristina Criddle, and George Hammond. Headline is OpenAI explores advertising as it steps up revenue drive. So, ChatGPT maker hires advertising talent from big tech rivals.

I make this joke, or it was a joke in an earlier edition of the book, The AI Con, coming out in May 2025. But it's something about putting AI ads in chat GPT results and

Guess what? It's happening. There it is. Yeah. Okay. Next, TechCrunch, by Ingrid Lunden, on November 19th. Headline: Itching to write a book? AI publisher Spines wants to make a deal. So this is a self-publishing platform that claims that, thanks to being powered by artificial intelligence, it can do all of the work of a publisher and do it faster and cheaper.

Yeah, that task list includes editing a piece of writing, providing suggestions to improve it, and giving users a frank projection on who might read the published work, providing options for the cover design layout, and distributing the finished product in e-book or print-on-demand formats. Gosh, imagine getting a...

Press kit on a book generated this way. Nightmare scenario. I think that just really shows what it's all about here, right? They're just trying to, it's just purely trying to degrade the work and the labor conditions of people in publishing or in a given field, a creative field. That's all it is. And then at the same time, the information ecosystem, because then you have all of this stuff just flooding the zone. Of course, because they're going to be inputting AI-generated text into these things.

And, you know, speaking as a co-author of a book where we are in that, like, the book is written, but it's not out in the world yet, and we're impatient, like, it just, it hurts to see this, because I know the work that the publisher is doing, and there's a reason that we didn't just self-publish the book. Okay, that leads in very nicely to this one from Business Insider. Analysis by Alistair Barr, updated November 30th, 2024. Headline, In a World of Infinite AI, The New Luxury Item Could Well Be Humans.

Yeah, so this is really saying the quiet part out loud, which is, hey, for all the rest of you, you're going to get AI slop, and for the wealthy, you're going to get actual human contact. Two tiers. Yeah. And then, why is the image an aerial view of a carnival parade? Residents enjoy a carnival parade on February 6, 2005, in Viareggio, Italy. Wow.

Interesting choice. I don't know. Maybe an AI-assisted journalism. It's supposed to be a lot of people. Infinite. Infinite people.

Infinite people. All right, so this next one actually is from my email inbox, with a little bit of redaction. The subject line was, Verified Avatar of Dr. Andrew Ng, and then, slash, the name of this company. And it was sent to me on Friday, November 22nd, at noon.

Hello, Dr. Bender. Redacted was launched by Dr. Andrew Ng's firm AI Fund with a focus on building conversational agents slash avatars in partnership with leading academics and thought leaders. Think virtual teaching assistant offering office hours or a personalized study plan. We have built official verified avatars of Andrew Ng and Lawrence Moroney and are just beginning to work with

Barbara Oakley, Terry Sejnowski, Erik Brynjolfsson, and Brian Green. Would you be willing to talk with us about collaborating on a verified avatar of you? And then this was Friday at noon: how does Monday to Tuesday, November 25th to 26th, look? Thank you, redacted. And there's not very many contexts where I would be comfortable saying this, but do these people have any idea who I am? Mm-mm.

Didn't do much research. They maybe saw that you were on the TIME AI 100 or something. Yeah, I'm guessing it's something like that. They did not get a reply. Experiment! I feel like, document it on the podcast to get one. And you could do inside the belly of the beast. Nope, nope, nope, nope. At no point will there ever be a verified avatar of me. So if you see one, it's fake. Horrifying. Unverified. Unverified. Unverified. Yeah.

You know, you can't get this in the stores, man. This is unlicensed MLEM vector chatbot. Black market. Black market. Oh, gosh. Okay. TechCrunch: PSA, you shouldn't upload your medical images to AI chatbots. And this is by Zack Whittaker, published on November 19th. And he says, here's a quick reminder before you get on with your day: think twice before you upload your private medical data to an AI chatbot.

Because people are actually using these, you know, ChatGPT or Gemini or whatever, to ask questions about their medical concerns. And they are doing this through uploading things like X-rays and MRIs and stuff like that. Don't do that. Don't do that. Musk, there's that thing where Musk was asking just X users to send him their medical data. And they were just tweeting it out, like their MRI scans and stuff.

Oh, my gosh. Okay. Good Lord. Keeping us going quickly: this was an NPR piece that I heard multiple times over the weekend. They kept airing it, but it initially played on November 26th of 2024. A look at a pilot program in Georgia that uses, quote, jailbots to track inmates, with host Leila Fadel. Thoughts?

It's a torment nexus case if you've ever seen one, right? Why would you aspire to this? Huge, huge nightmare scenario. I wonder if they'll have to have two Department of Corrections people babysitting them like they have to do for the New York City subway Dalek or whatever. Yeah.

I also, it kept bugging me each time I heard this. It starts with six foot tall robots are now monitoring inmates at a county jail in Georgia. And the fact that they lead with the height of the machine was somehow bothersome to me. Yeah. Like it's just like looming. It has to be intimidating to the, it can't be a cute little robot.

You know, what was the robot dog called? Oh, the Boston Dynamics one? No. No, no. I was talking about the prior one, that I think was more of a mass market one. Anyways. Oh, Aibo.

The Aibo, yeah. They have to assign some, like, corporeal reality to it, right? To, like, you know... So much of it is, as you both have documented exhaustively, hype and ephemeral. So you have to say, like, six foot tall. Like, sit up, pay attention, this thing is actually, you know...

being in a jail somewhere which does make it worse I agree yeah okay and so then to take us out on something of a high note I have this wonderful comment from Sage at trans.bluesky.social in this thread about how the tech bros say we can't possibly handle data carefully Sage says if an AI tech bro ran Campbell's quotes prep

Lord.

Thank you, Sage. Yeah, Sage just nailed it there. So that's it for this week. Brian Merchant is a journalist in residence at the AI Now Institute. Thank you so much, Brian, for joining us. Thank you so much for having me. Can I plug my new podcast with the aforementioned Paris Marx? Please do. Yes. Okay, yeah. We've just started a tech-

critical podcast of our own called System Crash. So yeah, check it out, and we'll have to have you both on there sometime. That'll be fun. Yeah, I'm excited. Very excited for what y'all have cooking up in the lab.

That's it for this week. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify, and by donating to DAIR at dair-institute.org. That's D-A-I-R hyphen institute dot org.

Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv slash dare underscore institute. Again, that's D-A-I-R underscore institute. I'm Emily M. Bender. And I'm Alex Hanna. Stay out of AI hell, y'all.