
If You Can’t Beat ’Em… Join ’Em? Journalism in an AI World

2024/2/9

On the Media

People

Abbie Richards
Elinor Tatum
John Herrman
Kate Knibbs
Unknown
Topics
Kate Knibbs: AI is being used to mass-produce low-quality content and publish it on acquired, defunct websites such as The Hairpin and Apple Daily, reflecting the misuse of AI in content creation and its impact on journalism. She describes in detail how the "AI Clickbait Kingpin" operates and how he exploits dead sites.

Brooke Gladstone and Michael Loewinger: News organizations face both challenges and opportunities from AI; they must cope with its harms while exploring its uses in news production. Using The Hairpin as an example, they illustrate the plight of digital media and the misuse of AI.

John Herrman: The relationships between news organizations and AI companies are complicated and contradictory. He analyzes the New York Times lawsuit against OpenAI and the different kinds of deals OpenAI has struck with other news organizations, noting that these partnerships hold the possibility of mutual benefit as well as potential conflict and legal risk. He argues that AI threatens journalism's business models, while pointing out the distinctive position of large outlets like the New York Times and the influence the lawsuit could have on the industry.

Elinor Tatum: The New York Amsterdam News' partnership with Latimer AI aims to reduce bias in AI by adding the paper's historical archives to model training data, so that the stories of minority communities are represented more accurately. She believes advancing AI requires more diverse perspectives and participation to avoid bias and misinformation.

Abbie Richards: AI is being used to produce widely circulated disinformation videos, especially conspiracy-theory content, which draw large audiences and earn money on platforms like TikTok. She analyzes how these videos are made and spread, their social impact, and the measures needed to counter AI-driven misinformation.


Chapters
News organizations are exploring partnerships with AI companies to adapt to the changing landscape, while also facing legal challenges and ethical considerations.

Transcript


Every news organization is desperate for the next thing, anything that might provide future revenue streams. That's a serious danger, and I think it's returning with AI. News outlets strike deals with AI companies, hoping they'll work better than the disastrous collaborations of the past.

From WNYC in New York, this is On The Media. I'm Brooke Gladstone. And I'm Michael Loewinger. If you can't beat them, join them. How one news outlet is partnering with a startup in an effort to make AI less racist. Garbage in, garbage out, right?

So if there are things that misrepresent our communities and what it's learning from, then what it's going to spit out is going to be misrepresentations. And how to earn big bucks by reanimating defunct domain names and filling them with AI sludge. There's a lot of posts about dream interpretation. It's just clearly written by an AI. Kind of the worst thing you've ever read in your life. It's all coming up after this.

On the Media is brought to you by ZBiotics. Tired of wasting a day on the couch because of a few drinks the night before? ZBiotics pre-alcohol probiotic is here to help. ZBiotics is the world's first genetically engineered probiotic, invented by scientists to feel like your normal self the morning after drinking.

ZBiotics breaks down the byproduct of alcohol, which is responsible for rough mornings after. Go to zbiotics.com slash OTM to get 15% off your first order when you use OTM at checkout. ZBiotics is backed with 100% money-back guarantee, so if you're unsatisfied for any reason, they'll refund your money no questions asked.

That's zbiotics.com slash OTM and use the code OTM at checkout for 15% off. This episode is brought to you by Progressive. Most of you aren't just listening right now. You're driving, cleaning, and even exercising. But what if you could be saving money by switching to Progressive?

Drivers who save by switching save nearly $750 on average, and auto customers qualify for an average of seven discounts. Multitask right now. Quote today at Progressive.com. Progressive Casualty Insurance Company and Affiliates. National average 12-month savings of $744 by new customers surveyed who saved with Progressive between June 2022 and May 2023. Potential savings will vary. Discounts not available in all states and situations.

Yup, that's who you think it is. The Grimace mug. The Hello Kitty keychain. Barbie herself. For a limited time, your favorite McDonald's collectibles filled with memories and magic are now on collectible cups. Get one of six when you order a collector's meal at McDonald's with your choice of a Big Mac or 10-piece McNuggets. Come get your cup while you still can at participating McDonald's for a limited time while supplies last.

I'm Maria Konnikova. And I'm Nate Silver. And our new podcast, Risky Business, is a show about making better decisions. We're both journalists who moonlight as poker players, and that's the lens we're going to use to approach this entire show. We're going to be discussing everything from high-stakes poker to personal questions. Like whether I should call a plumber or fix my shower myself. And of course, we'll be talking about the election, too. Listen to Risky Business wherever you get your podcasts.

From WNYC in New York, this is On The Media. I'm Brooke Gladstone. And I'm Michael Loewinger. This week's show is all about the use of generative artificial intelligence in journalism. People both pro and con grapple with whether to resist apps like ChatGPT or embrace them. We begin with an anecdote, the story of a late great blog called The Hairpin that underwent an unsettling transformation.

When you went to The Hairpin, you were getting something you couldn't get anywhere else. Kate Knibbs is a senior tech writer for Wired. It was part of The Awl network, the collection of blogs that had a very writer-friendly sensibility. Jia Tolentino was an editor. Jazmine Hughes was an editor. And Anne Helen Petersen. It just had a murderer's row of really talented, distinctive voices. It never had a mass audience.

But the people who read it loved it. I've heard it compared to like the Velvet Underground. Not that many people bought their album, but every one of them started their own band kind of thing. Yes, yes. Not that many people might have read it, but everyone who did became a blogger. I love that.

This website was so special to me and so many other people. What happened to it? It just didn't succeed as a business and they decided to fold it. The story of digital media. Yeah, the sad story. But a couple weeks ago, Kate heard through the grapevine that the site was mysteriously back online. Someone from the Hairpin world...

alerted me to the fact that it had been revived in this wholly bizarre way. It was just like generic content mill nonsense. There's a lot of posts that are about dream interpretation. It's just clearly written by an AI.

kind of the worst thing you've ever read in your life. Typically, I feel like these people who run these content mills aren't that eager to talk to the press or just want to go about their business without too much scrutiny. How did you figure out who now owns The Hairpin?

You're asking me about hairpin what?

This is the man who responded to her email, a Serbian entrepreneur named Nebojsa Vujinovic. But you can call me Vujo, because I'm Vujinovic, my second name. So Vujo is OK. Kate Knibbs connected me with Vujo after she profiled him for Wired this past week, in an article titled Confessions of an AI Clickbait Kingpin. He actually told me that he was very surprised that I was asking about The Hairpin because it wasn't

one of his top websites. The Hairpin is not top 20, maybe in my top 100 websites. He said he had over 2,000, although he did not like provide me with a master list. So...

take that with a grain of salt. But I checked that he owned, you know, at least a few dozen. I have much bigger websites than this one. So it's nothing special for me. What are some other websites that you own that you're proud of? I'm not proud, especially proud of anything, I must tell you, because... Why not? This is your life's work. Why are you not proud?

No, it's not. I can be proud of my kid, you know. I can be proud of my songs that I write. He's actually a pretty popular DJ in Serbia. I'm singing that part of the song. That song was the most popular song in Serbia ever.

You're a celebrity in Serbia?

And this is actually how Vujo got into the content mill game. In 2005, when he was still trying to make a name for himself as a DJ, he noticed that his personal site where he posted his music was getting more and more traffic. I get the idea, okay, I will write about, I don't know, house music. And I'm purchasing housemusic.com, for example.

But he quickly learned that starting a new site from scratch, you know, churning out blog posts that advertisers like, is a ton of work. And getting people to stumble upon your site is hard too. If you Google house music, you'll probably see bigger, older music sites first. But one day I started buying already established websites. Just imagine you're buying a website of, I don't know,

closed restaurant, and that restaurant have backlinks from New Yorker, from BBC, from, I don't know, Yellow Pages, from Forbes. If the restaurant was written about and linked to on sites that Google considers to be high quality...

then it's more likely to show up on the first page of Google results. Of course, it's not so simple, but there is a bigger chance for websites like that to rank easier than another domain without backlinks. So this became a big part of his business, and it appears to be legal. Every day, he says he hangs out on auction sites like GoDaddy, looking for dead sites that he can scoop up. So I'm buying established websites, one, two, three, four, five, ten per day, every day.

Which is how he ended up with an eclectic assortment of sites, including fotolog.com, an early Spanish-language competitor of Facebook, and pope2you.net, a former official site of Pope Benedict XVI, and of course, The Hairpin. The most popular site in his stable is another women's media site, actually. It's called The Frisky.

It was launched in the 2010s as well. And at one point, it was like one of the most popular women's interest websites in the US. It was kind of like a

Cosmo style, like there's a lot of sex content and dating advice. And it went out of business in 2016. And the domain was up for grabs at some point and he grabbed it. It was so popular. I think more than 10 people work every day on that website. And yes, real humans, we write about everything, especially about celebrities. Meghan Markle, she was pregnant and that was the huge story.

Today, maybe the Frisky earning $100,000 per year. I don't know. We were not able to verify his earnings from the Frisky. So he's making a lot of money off sex toy companies that still want to do sponsored posts or advertisements. So I was like looking at the Frisky search traffic and

And all of the top keywords are breast related. So when people are searching for things on the internet related to, like, bra sizes, it tends to send you to The Frisky.

I think that helps keep the engine running. And in the past year or so, Vujo's engine got a big new upgrade, generative AI. Yeah, it just supercharged this weird spammy corner of the SEO industry. Instead of taking like four hours to write 12 blog posts, all of a sudden you can do that in 40 seconds. They primarily use ChatGPT. They just put in prompts and spit out articles and...

He does say that they fact check them. I mean, I don't know how thorough the fact check is, but there's some sort of quality control going on to avoid putting something super offensive on the internet that would end up alienating potential advertisers. We don't publish anything about politics. We write about health. We write about fitness.

It's not something that's super sustainable. Like he's already losing traffic on a lot of the big properties, including the Frisky, because people figure out that it's AI generated. The Frisky, the Pope website, it's a little silly, but there is like a kind of a slightly darker side to this, which is that he is using the same business model on dead news websites.

Yes. And honestly, the most shocking thing that he owned to me was the English-language website for Apple Daily, which is a very culturally significant, pro-democracy newspaper that was based out of Hong Kong and was shut down in quite a dramatic fashion a few years ago. The newspaper has had financial trouble since its assets were frozen after the arrest of its founder, Jimmy Lai, the billionaire media tycoon.

He was a very, very outspoken critic of the Chinese government. He's a frequent visitor to Washington and has been labeled by Beijing as a traitor. Jimmy Lai is currently under arrest, as are several of his top editors. Charged with, quote, conspiracy to collude with foreign forces. His crime was running a media outlet that wouldn't toe the party line.

Apple Daily was very important to the pro-democracy movement. I'm looking at it now. It's just AppleDaily.com, right? Yeah, AppleDaily.com. Funny, cool username ideas. A guide to creating memorable online handles. And then we got Unlocking LeBron's Recovery Secrets under the heading World. I guess this is like world news. Eight tips to take your healing seriously.

And then under the Actors heading, we see 45-Plus Happy Birthday Wishes for Teacher. Just totally. They're not even trying to hide the fact that it's AI generated, right? And this is an important media outlet. It's really unsettling to see a news outlet emptied out and replaced by like the complete opposite of what it stood for. Can I ask you about Apple Daily? Okay.

Because I do think some of our listeners would be really disappointed to learn that this important website was shut down and is now posting AI content. I understand. But there is a lot, you know, I live in Serbia. I live in...

ex-Yugoslavia. There is a lot of things here. Vujo's English isn't super clear here, but he went on to talk about growing up in Bosnia during the war, which he says destroyed his childhood. He referenced a hospital near his home that NATO bombed in the 90s. Injustices that feel bigger to him than, you know, putting AI clickbait on a dead news website that the Chinese government shut down in Hong Kong. I'm not

part of that story. There is a lot of bad things in this world. A lot of things is not right. If I buy some domain legal and create anything what I want, is it a bad thing? Does it change anything if I

put on that website, peace in the world. Is it change anything in the world? No, it's not. So I think you understand what I want to tell you. I do understand. And I don't think that you're responsible for the website going away. But AI is helping accelerate the death of journalism. Do you think about that at all? I'm afraid AI can be used for bad things. I'm not a fan. What is opposite of fan? Hate, a hater. You hate AI. Maybe I hate AI, especially if...

I can see people losing jobs because of AI. You're a journalist, you're afraid about your business. Because this is striking journalism for sure. Striking all writers or content creators. I'm writing songs today. So yes, also I'm afraid it will make better music and play better music than me as a DJ. So yes, I understand, but

But, of course, he talked about how useful and popular ChatGPT is. He cited a projection that I've seen quoted widely in the press that by 2025, 90% of online content could be generated by AI. Vujo, you are helping create an internet where there is less and less human on the internet. Is that an internet that you want to be on? No, absolutely not.

Yes, I agree with you, but I hate also using cars with oil or petrol and destroying our planet. He said, I like horses. I drive a car because we live in a society where you have to drive a car. Probably you don't also like destroying our planet and you are still using car too. And that's how he feels about AI. This

This is the way things are going, so I'm going to go in the direction that the world is already moving. Can't beat them, join them. That's what I'm hearing you say. Something like that. That is a good one. That's it. He's not sitting there being like, I'm going to destroy a beloved independent women's media blog.

I'm going to create this perverse desecration of this important pro-democracy Hong Kong news outlet. He is simply taking advantage of an opportunity that has been presented on the internet that has a very low barrier of entry. And that's it. He just wants to make money. And I think that's how a lot of the people who are making the internet worse operate. Kate, thank you very much. Thank you so much for having me.

Kate Knibbs is a senior writer for Wired. Her latest piece is titled Confessions of an AI Clickbait Kingpin. Coming up, Vujo says if you can't beat them, join them, while the New York Times has other ideas. This is On the Media.

This episode is brought to you by Progressive Insurance. What if comparing car insurance rates was as easy as putting on your favorite podcast? With Progressive, it is. Just visit the Progressive website to quote with all the coverages you want. You'll see Progressive's direct rate. Then their tool will provide options from other companies so you can compare. All you need to do is choose the rate and coverage you like. Quote today at Progressive.com to join the over 28 million drivers who trust Progressive.

Yep, that's who you think it is. The Grimace mug. The Hello Kitty keychain. Barbie herself.

For a limited time, your favorite McDonald's collectibles filled with memories and magic are now on collectible cups. Get one of six when you order a collector's meal at McDonald's with your choice of a Big Mac or 10-piece McNuggets. Come get your cup while you still can at participating McDonald's for a limited time while supplies last.

I'm Maria Konnikova. And I'm Nate Silver. And our new podcast, Risky Business, is a show about making better decisions. We're both journalists who moonlight as poker players, and that's the lens we're going to use to approach this entire show. We're going to be discussing everything from high-stakes poker to personal questions. Like whether I should call a plumber or fix my shower myself. And of course, we'll be talking about the election, too. Listen to Risky Business wherever you get your podcasts.

This is On The Media. I'm Michael Loewinger. And I'm Brooke Gladstone. Journalism has entered an era of love-hate relationships with AI. In December, the New York Times became the first major media organization to take a chatbot creator to court. The New York Times suing OpenAI, the creator of ChatGPT, and Microsoft for copyright infringement. The Times says that millions of articles published in the paper were used to train automated chatbots that now compete with it as a source of reliable information. The suit says that the defendants should be held responsible for, quote, billions of dollars in statutory and actual damages, end quote.

OpenAI told NBC that it hopes to, quote, find a mutually beneficial way to work together as we are doing with many other publications. And so it is. OpenAI has inked a deal with Axel Springer, the parent company of Politico and Business Insider, a multi-year agreement compensating Axel Springer for the content OpenAI will use to generate answers on ChatGPT and train its models. The Associated Press signed a similar deal with OpenAI last year. Now, OpenAI is reportedly in talks with CNN, Fox, and Time to license their work.

News Corp CEO Robert Thomson said in an earnings release earlier this week that the company much prefers, quote, negotiation to litigation. And on Monday, Microsoft, which holds a major stake in OpenAI's for-profit arm and has the right to commercialize its inventions, announced partnerships with five news organizations—

Semafor, the Craig Newmark CUNY Journalism School, the Online News Association, Ground Truth, and Nota, itself an AI company designed for publishers.

Of course, not all these deals are the same. Well, there are two kinds of deals that we're hearing about, and they often get muddled together, which I think generally works to the benefit of OpenAI and Microsoft here. John Herrman is a tech columnist for New York Magazine. One kind of deal is with

Semafor, with Ground Truth, with the Craig Newmark School at CUNY, the ONA and Nota. These are deals that are providing access to AI tools for news gathering and news production to experiment with large language models, text generation tools, to see if there's some way that these can make news production easier or quicker or more effective, or if there are ways to use AI to dig into big data sets.

That's all very interesting and appealing to think about as someone who works in media. But the other kind of partnership, which is much more consequential and also much more tense, is the type of partnership that OpenAI has with Axel Springer, for example, which is the parent company for Business Insider and a bunch of German language publications.

That involves OpenAI paying a licensing fee of tens of millions of euros over a few years to put Axel Springer news and content and analysis into OpenAI products like ChatGPT. And that's the result of a little bit more of a negotiation to sort of avoid conflict or potentially to avoid lawsuits. So they're really two very different kinds of partnerships.

And then you've got, in December, the New York Times lawsuit against OpenAI for copyright infringement. What was the Times' argument? The Times filed what I think many in the industry see as the definitive and most credible lawsuit of its kind against an AI firm, alleging that

OpenAI had trained its models on years and years of New York Times content, that this training was not covered under fair use, and that not only was OpenAI using this data to create software that could compete with the New York Times product by creating articles that were pretty solid,

but also that you could get ChatGPT to regurgitate full passages from published New York Times content, which challenges the core defense that OpenAI had mounted for months at that time, that these models don't contain information, they just contain statistical relationships between different things that can produce similar outputs. Now, OpenAI says that that's a glitch and that they're fixing it.

But this is a question that isn't simply resolvable by shouting fair use. This is sort of new territory, and at the very least there needs to be precedent set around this question. I've seen the New York Times lawsuit framed as a fight for the future of journalism. Do you think this is an existential battle?

I think that, broadly speaking, the fact that these new AI technologies can automate at least the basic processes of a lot of what we think of as creative work presents a real threat, if not to the practice of journalism or being a musician, then to, for lack of a better term, the business models of creativity. I think that's really, really obvious.

I don't necessarily think that the fates of The New York Times and OpenAI tell the whole story. OpenAI is probably the premier AI firm in the public's mind right now, but lots of companies are developing very similar technologies. And The New York Times is one of the nation's premier news outlets, even if people frequently quibble over it. So if anything would determine the direction of where this goes, it might be this lawsuit.

I don't want to minimize the potential influence that they have here. But in the current media environment, the New York Times is also an interesting and sort of strange outlier. I should disclose that I worked there for seven years. It's very large. It's doing very well. It's subscription supported, one of a very small number of truly national news organizations. What matters for the New York Times doesn't necessarily matter for the rest of the news industry in a clear way. But I do think that the outcome of this lawsuit could set a valuable precedent. I also think it's worth reading the actual text of the lawsuit and OpenAI's response to sort of get some background here, which is that they were in negotiations for a deal that might have been quite a bit like Axel Springer's deal, considering licensing options, what a sort of equitable fee might look like. And then things fell apart.

I think that it's fair to read, for example, the Axel Springer deal as a way to both suggest that these partnerships are possible and also to say to other news organizations, hey, let's talk first.

From the perspective of news organizations, the arrival of these generative AI tools was very abrupt, very threatening, and came on the tail end of a long and disappointing era of tech and media partnerships.

Right. You wrote in your piece that it's easy to fold such deals into the prevailing narrative of AI dominance as venerable publishers line up to partner with tech firms once again, despite what happened last time around and the time before that and the time before that. You mentioned some deals in the past.

In the 2010s, Facebook approached news organizations like the New York Times and said, hey, now we're going to focus on videos. You should be doing videos.

In order to keep traffic flowing, news organizations did divert a lot of their increasingly diminishing resources to producing video to go on Facebook. And then what happened? So when Facebook started sending lots and lots of readers to news publishers and news publishers started adapting their strategies to cater to those visitors and to reach more people on social media,

Facebook sensed an opportunity. And in the mid 2010s, they were thinking, oh, we need to compete with YouTube. Everything's going to be video in the future. How can we build that out ourselves? And one cheap way to do that was to partner with companies like The Times and say, hey, if you produce, for example, live video broadcasts for us on a regular schedule for this period of time, we'll pay you a few million dollars.

you will get lots and lots of viewership because our platform is now funneling people to these new video features. This is sort of a win-win for you guys. It all sort of felt good at the time. What happens then is news organizations staffed up for live video, even if that wasn't something they were good at before. They produced these videos for a limited time, I think about a year.

And then because it was companies like the New York Times and BuzzFeed industry leaders that were doing this, lots of smaller companies that didn't have direct partnerships, they think, oh, we should pivot to video too. And so you get these people chasing these somewhat artificial trends ending up out on a limb when Facebook decides that, well, maybe live video isn't going to be the main thing that people see on Facebook.

And that was kind of the recurring story of the 2010s. Are the businesses of AI and journalism essentially compatible or not? I think they should be considered essentially incompatible. These are very different types of firms doing different things, but with interests that sometimes align.

The Times in particular has been fairly open to deals with companies like Google and Meta, but has also been fairly cautious. It's a big institution. There's a lot of resistance to fundamental change there, which has kind of worked out in their favor in this case. They might take a few million dollars from Google to produce a series of VR videos that you have to view by putting your smartphone in a cardboard pair of goggles. And they can do that

without disrupting their business operations and maybe pocketing a little bit of money. But when they do that, it can often be mistaken for what everyone else has to do. And in an industry where virtually every news organization is desperate for the next thing, anything that might provide future revenue streams, that's a serious danger. And I think it's returning with AI.

Do you think that these deals are just about trying to cash in for now, even if this thing kills them down the road? Yeah, I think that's a fair way to look at, for example, the Axel Springer deal. One thing it's worth pointing out here is that what OpenAI is paying for is the right to include links from articles and content from articles in products like ChatGPT.

This contract is sort of premised on the idea that, well, everyone's going to be using this. And of course, they're also going to be using these chatbots to keep up with the news. There are a lot of predictions implicit in this deal that won't necessarily come true. Maybe chatbots aren't the future of news. And in that case, Axel Springer looks pretty smart in hindsight.

The other possibility is that these AI technologies are going to find their way into virtually everything we use on the internet. And the way that they collect and represent up-to-date information about the world is potentially a serious problem for them and something they're going to have to spend a lot of money on. In that case, in hindsight, Axel Springer might not look so smart. They might look like they gave something away for a lower price than they should have.

If the web is becoming ragged and full of spam and AI-generated content, if our real-time sources of information like Twitter or X and Instagram and Google search, they're all becoming polluted, maybe having a consistent feed of reported, reliable, valuable information about the outside world is incredibly valuable to an AI firm in the future.

You've worked at BuzzFeed and at the New York Times. When both companies were experimenting with new technologies and big tech partnerships, deja vu, maybe? Does this moment feel different?

This moment feels different than, for example, the era of rising social media, because at least then there was a sense of synergy and collaboration. A bunch of people are using Facebook, but they're also reading news there. We make news, and so maybe this works out somehow. Here we've got the arrival of new technologies that are just basically automating some of the basic functions of news production.

Now, you can argue, and I think convincingly, that they're nowhere near capable of producing valuable stories, valuable analysis, but they're trying. And so it's a little more antagonistic to start. This isn't about two industries aligning temporarily and then sort of drifting apart inevitably. This is two industries smashing into each other right at the beginning of their relationship.

You said to our producer, "What were we supposed to do with Facebook? People did different things, but no one won." Right. That's the sort of tragedy of covering the media's relationship with tech for the last decade is that people made a lot of mistakes, but even in hindsight, it wasn't clear what most news organizations were supposed to do. Social media took away

what was left of their revenue models. It said, "Hey, we are a better advertising product than you are. Hey, we're better at attracting huge numbers of readers than you are. What's left for you is the expensive work of gathering and publishing news."

And yeah, there is some deja vu here with AI tools where, yeah, there are smarter decisions and there are unwise decisions that you might make now. But we are also at the beginning of potentially a pretty big change in how people interact with information. And that is, I have to be frank, it's kind of scary. Thank you very much, John. Thanks for having me on. John Herrman is a tech columnist for New York Magazine.

As AI technologies advance, many critics observe that these tools replicate the prejudices of the data they train on. One UC Berkeley professor was able to trick ChatGPT into writing a piece of code to check if someone would be a good scientist based on their race and gender.

A good scientist, it found, was white and male. In 2020, a prominent researcher named Timnit Gebru said that Google fired her after she highlighted harmful biases in the AI systems that support Google's search engine.

Today, she runs a research institute rooted in the belief that AI is not inevitable. Its harms are preventable. And when it includes diverse perspectives, it can even be helpful, beneficial. But... We should have guardrails in place. And we should make sure that the group of people involved in creating the technology resemble the people who are using the technology.

New players have joined the field to address that issue. In December, a startup called Latimer AI announced a licensing agreement with the largest and oldest Black newspaper in New York City, the New York Amsterdam News. The partnership began when Latimer's founder, John Passmore, approached an old friend, Elinor Tatum, the publisher and editor of the paper.

It, to me, was a no-brainer because we know how our community can be so misrepresented in media in general. And because of that and the way large language models learn... The biases are built in to the models. It scrapes the internet and there's a lot of real garbage out there. Garbage in, garbage out, right? So if there are things that misrepresent our communities and what it's learning from...

The idea of being able to be a part of something that is going to be able to give a correct narrative, I thought was something very important.

Lewis Latimer, I understand, was a Black inventor whose legacy and scientific contributions were often overlooked. That's who the company is named for. So tell me about this company, Latimer. They're working very hard to make sure that the information that is coming from sources that are Black is getting out there to the public.

They are actually training the model partially based upon the archives of the Amsterdam News going back to 1926. I mean, there may be some things that just weren't covered in other media that were covered by the Amsterdam News. If we look at the Central Park jogger case, for instance, we will see very different coverage coming out of the Amsterdam News than we would have seen out of any of the other newspapers.

we may see a difference in what Latimer would produce versus another AI search because there would be very different information. Even coverage of the Macy's Thanksgiving Day Parade. What do you have in your mind there? Because it used to start in Harlem. I understand you spent several months on figuring out how to work together. Can you tell us anything about your arrangement today?

The actual agreement is confidential with Latimer, but I can say that what we have right now is not permanent and we will be renegotiating our relationship as we get a better understanding of what the real value is around the data. This is all very new, especially in terms of Latimer, because they're very much a startup. When you talk about an evolving relationship, do you expect to ever make any money out of it?

I certainly believe that we will. There's definitely a number attached to it. And the model is going to be working and looking to be placed in places like HBCUs across the country as a starting point and go from there. And they've already got relationships set up with several HBCUs around the country.

Latimer said in its press release that it's, quote, constructing an LLM that represents the future of AI, where these models are built to better serve distinct audiences.

Clearly, in this case, the distinct audience includes the countless people who've been served for over a century by the Amsterdam News and the historic Black colleges and universities. And that's great. But if you had the chance, Elinor, would you want to combat these built-in biases that your archive could help correct by training a much bigger platform intended to reach nearly everybody, like, you know, ChatGPT? Well, doesn't everyone have to start somewhere? Right, yes. But if you had a chance, you'd go as big as you could.

Well, I would like to see Latimer be as large as or larger than ChatGPT or any of these, because I believe that it could be with the right technology, with the right infrastructure, with the right information being inputted into it. You see, all of the world needs to get the diversity that Latimer is going to provide.

I am hoping that, you know, Latimer gets into every HBCU in the country to start with, and then to libraries across the country, public libraries, then the general public. I mean, they're already signing up. Just general internet users are using it already. So I'm hoping that it's another commonplace usage, just like ChatGPT.

It's really refreshing to hear this perspective. It's unique because it's not based on, well, if you can't beat them, join them. It's not focused on trying to have a more efficient operation based on fancy AI tools, making lots of money, or even about losing less money at this point. It really is about improving the media ecosystem.

Absolutely. I really feel strongly about Latimer because if you don't have the voices of the people that are being represented, you're not going to have a correct representation of people. That's why I feel it is so very important to have our voices included in all media. And that includes these large language models. So you have no fear of AI taking journalism down? No.

I think everyone has some fears of it, but journalism is still very much needed. And I want to make sure that there is information out there that is quality information that's going to be added to it. Now,

Does AI need some help? Are there a lot of issues? Yes. AI has a lot more learning that needs to be done. And with every day, with every week, every month and every year, advances are made and more advances need to be made. But it's an ever-evolving process and I'm looking forward to see what comes next.

I'm very excited to be a part of it. What sort of a future are you hoping to build together? Well, one that is long and lucrative, but also one that is going to bring information to people that shows the true breadth with texture and color of our communities, that tells the stories and brings out the information that has been so long overlooked.

by other keepers of history. So when people ask questions, they get the answers that aren't so easily found. Elinor, thank you very much. Well, thank you for having me. Elinor Tatum is the editor-in-chief of the New York Amsterdam News. ♪

Coming up, with AI, it's easy and profitable to make highly trafficked and highly stupid conspiracy videos. This is On The Media.


This is On the Media. I'm Brooke Gladstone. And I'm Michael Loewinger. A couple weeks back, ahead of the Democratic primary in New Hampshire... An AI-generated call is falsely telling Democratic voters not to vote in tomorrow's primary. Here's part of that false AI-generated call. Another bunch of malarkey.

This bogus call, which reached as many as 25,000 phones across the state, prompted the Federal Communications Commission this week to outlaw such AI phone fakery. The episode highlights how effective and effortless these AI tricks have become, and how those charged with combating them are always one step behind.

This is especially true at TikTok, where videos of conspiracy theories, really dumb conspiracy theories, are reaching millions of eyeballs and generating serious money. Abby Richards is a misinformation researcher and a senior video producer at Media Matters, a left-leaning watchdog group. She's been studying the viral tactics behind this growing cottage industry.

So you start off by saying something that is utterly unhinged. Government just captured a vampire and tried to keep it a secret. And then what you do is you create usually a fake main character, typically an explorer or a scientist. Alejandro Suarez was an explorer from West Palm Beach. And you describe the adventure...

through which they make this discovery. Alejandro walked for an hour in the woods until he reached a large, rusted security fence. And then it all turns out to be a cover-up. At that point, the goal is to really just waste time and tell a long story because you want it to be over 60 seconds long. My favorite one that you identified in your piece is the quote-unquote Joe Rogan clip of him talking about

Some scientist who overheard a conversation. Yeah.

About like an asteroid that's going to destroy planet Earth or something and the government doesn't want us to know about it. We are all probably going to die in the next few years. Did you hear about this? There's this asteroid that is on a collision course with Earth. Pull it up, Jamie. I mean, my favorite thing about the AI Joe Rogan conspiracy theories, they almost always start with a clip of him talking into the mic. They're not even trying to dub it. So they put the captions right over his mouth

Part of the reason those videos work is that, yes, the content is really absurd, but it's also kind of something you could imagine Joe Rogan being like, oh my God, dude, like I just read this crazy thing. Like it works, you know? Oh, it does. I saw one that was him saying that the U.S. stayed in Iraq because they were looking for like a Stargate. I was like, you know what? I could imagine him saying this.

As you mentioned, the fact that these videos are over 60 seconds is important to the people who are trying to monetize the videos because it plays a role in TikTok's creativity program. Can you describe that? You have to be in an eligible country. You have to be at least 18 years old. You have to have at least 10,000 followers and you have to have at least 100,000 video views in the last 30 days.

Once you join the creativity program, the videos that you produce that are over 60 seconds long are eligible for monetization. Tell me a little bit about the kinds of accounts that are sharing these videos and how many views they're getting.

I mean, I found accounts that were getting 20 million, 30 million views on some of these videos. So we identified these two accounts. One was English language and one was Spanish language, both of which were receiving millions of views. They appeared to be affiliated. They had the same name translated in English and Spanish, and they have the same profile picture. The English language account had received over 342 million views since it began posting in February of last year.

And then the Spanish language one had received over 329 million views, and it only started posting in September. That's just one account in each language, and it's doing really well posting this AI voice conspiracy theory content. This account, in particular, very obsessed with megalodons. Fun fact. Yeah.

And when we say AI generated, like there are multiple generative AI tools that are being used on each of these videos. Yes. And it varies depending on the creator and the video itself.

Michael Loewinger is a chill guy who loves hanging out. And then on top of that, there's often AI-generated images in the video because just listening to AI Joe Rogan wouldn't really be that interesting. Sometimes it's all AI-generated images. Sometimes they are mixed with just regular images. In the Discord servers where they talk about how to make this sort of content and they share tips, they often will recommend using AI to help you write the script or come up with the ideas.

When you say Discord server, you're referring to the kind of cottage industry that rests on top of the actual videos and channels themselves. There are, as you said, whole Discord servers, Medium articles, YouTube channels, and these hustle bro guru influencers who claim that they can help other TikTokers make it big.

What are they preaching and what are they hawking? I mean, it seems like a pretty classic get rich scheme to me. They're offering courses or coaching one-on-one advice and feedback on your content, teaching you how to essentially create content that will go as viral as possible that you can monetize and then make money off of. And you actually hung out in some of the Discord servers. What did you find?

They're talking about how to essentially make more money. So one person said, for example, if it's a conspiracy channel, post podcast clips about how they're poisoning the food supply and then link an affiliate product that is meant to detoxify the body. I'm like, I love when they just say what they're doing. It makes my job easy. They just spell it out.

Putting aside the fact that a lot of these videos fall into this dumb occult genre of like vampires and like Wendigos and these kinds of things, what are the major tells that some of these videos are AI generated? The voice is the first giveaway often. But then there's small details that are just wrong, like the wrong number of fingers or like asymmetry, distortion, if there's ever any text issues.

In the image, it's usually not any language that we've ever seen before. AI is still pretty bad at language. Also, they have a certain look to them. You know, like when you look at art and it makes you feel nothing? Yes. It's that.

That's this. If it's so obviously fake, a lot of the people sharing them probably think that they're funny or they just think it's a captivating story. And let's just be charitable here and assume that a lot of people are not convinced that

then what's the harm? The harm is that we are essentially pushing out content that teaches people to think about the world in a way that's really broken. It's a really unhelpful framework for understanding the world. Even if they know that the AI is AI, we still have a problem with viral conspiracy theories. It pulls us away from understanding how our world actually functions.

I'm less concerned about what comes off as real and more concerned about just how easy it is to make this sort of misinformation at scale. And make money from it. Yeah. It's super profitable and you can just pump it out. The people that are making this content, a lot of them probably aren't even really deep believers in conspiracy theories. They're just following the money. Right.

So we need to make sure that pumping out conspiracy theory content just isn't profitable. The 2024 presidential election is approaching fast, and researchers have been voicing concerns over AI-generated misinformation and disinformation. There was, of course, that AI-generated Biden robocall. The FCC has just been granted the power to start pursuing legal actions against people who might be creating this stuff online.

On TikTok, how much political AI-generated content are you seeing? Have users been digging into this particular niche as a potential business model as well?

The type of people who make content about like a dragon being discovered in Antarctica, they probably aren't as interested in like niche political conspiracy theories because that's much more likely to be demonetized. And it has a smaller audience and they're really going for just scraping as many people as possible. But that's not to say I haven't seen a lot of political ones. Did you see the Biden tap water one? No. Joe Biden controls you through the water you drink.

Yes, you heard that right. It uses an AI-generated image of, like, Joe Biden over a sink. If I drink Joe Biden's water, then he gets to control my actions or something? I think so. Honestly, didn't follow the plot that much.

But they have laid out an entire framework and provided a vast amount of resources and YouTube instructional videos on how to make this sort of content that goes as viral as possible. So you're saying the infrastructure that these people have created, the educational materials, the Discord servers, the how-to guides all over the place could be used by anyone for anything?

Yeah, that is concerning when we pair that with an electorate that's already primed for lots of conspiracy theories. And then we're mixing that with AI that can just create this content at a scale that we've never seen before. You said that election misinformation has been a problem on TikTok in the past. Do you think the platform has learned any lessons and is equipped to moderate itself this time around? Yeah.

Maybe they've learned some lessons, but I don't think that any platform should be walking into this election thinking that they're safe and that they have all their bases covered. Abby, thank you very much. Thank you so much for having me.

Abby Richards is a video producer at Media Matters. Her latest piece is titled TikTok Has an AI Conspiracy Theory Problem. Numerous studies have concluded that fluoride actually causes depression and disrupts other hormones in people's bodies, making them more susceptible to suggestion and thus easier to manipulate.

That's it for this week's show. On the Media is produced by Eloise Blondiau, Molly Rosen, Rebecca Clark-Callender, and Candice Wang, with help from Sean Merchant. Our technical director is Jennifer Munson. Our engineers this week were Andrew Nerviano and Brendan Dalton. Katya Rogers is our executive producer. On the Media is a production of WNYC Studios. I'm Brooke Gladstone. And I'm Michael Loewinger. ♪

You come to the New Yorker Radio Hour for conversations that go deeper with people you really want to hear from, whether it's Bruce Springsteen or Questlove or Olivia Rodrigo, Liz Cheney, or the godfather of artificial intelligence, Geoffrey Hinton, or some of my extraordinarily well-informed colleagues at The New Yorker. So join us every week on the New Yorker Radio Hour, wherever you listen to podcasts.