
509: DEEP DIVE: The Dead Internet Theory | It's ALL Fake

2023/11/22

The Why Files: Operation Podcast



Chapters
The episode introduces the concept of the Dead Internet Theory, suggesting that much of the online content we consume is generated by AI and bots designed to keep us engaged and consuming.

Transcript


Plug in a Hyundai EV and the extraordinary happens. From the charge time and range in the Ioniq 5 and 6, to the adventurous spirit of the Kona Electric, to the 601-horsepower Ioniq 5 N. Hyundai EVs make the extraordinary electrifying. There's joy in every journey.

EPA-estimated 303-mile driving range for 2024 IONIQ 5 SE, SEL, and Limited Rear Wheel Drive, and 361-mile driving range for 2024 IONIQ 6 SE Long Range Rear Wheel Drive, with fully charged battery. Actual range may vary. Visit HyundaiUSA.com or call 562-314-4603 for more details.

What if I told you that most, if not all of your online existence is fake? The articles you read, the Twitter accounts you follow, even this podcast you're listening to right now. It's all fiction created by artificial intelligence whose job is to keep you clicking on content that doesn't matter and keep you buying products you don't need.

You, me, everyone, we're living in a real-life matrix designed to distract you from the truth. That we're just drones in a digital anthill. We live, work, and die so that the wealthy and powerful can grow more wealthy and powerful. This is called the dead internet theory, and there's compelling evidence that it's real. Let's find out why. ♪

The core premise of the dead internet conspiracy says that most internet content and the consumers of that content are fake. They don't exist.

What I mean by that is that a large percentage of content you view online wasn't created by a real person. It was generated by AI. And that includes emails, blog posts, descriptions of online products and social media chatter. And many online accounts are actually bots.

These bots are responsible for a lot of online traffic, like website visits, link clicks, and video views. And the links they're clicking and websites they're visiting, those too were also generated by AI.

So how much content on the internet is actually created by AI? Well, it's actually more than you think. Studies say that only about 50% of web traffic is human. And that number is going down every year. In 2013, the Times reported that half of YouTube traffic was bots masquerading as people.

This was so scary that YouTube employees were worried about an inflection point. This is when YouTube's algorithms would be so overwhelmed by bot traffic that the algorithms couldn't tell what was real. And eventually the algorithms would determine that actual human traffic was fake. This event is called the inversion. Ominous, right?

And keep in mind, this was a full 10 years before OpenAI released ChatGPT, making AI tools way more accessible to the general public.

Meaning this problem is likely much worse already. I see bots on my YouTube channel. Sometimes I wake up and I've got thousands of comments out of nowhere. And they're all very generic comments posted by people with very generic usernames. And they're not watching the videos. Subscribers to my channel watch 60%, 70%, even 100% of each video. But the bots are watching for 10 seconds, then leaving a weird comment and moving on to watch another video for 10 seconds. And when I look at the channel's stats after a wave of bots, it reminds me of locusts. It's swift and it's destructive. So why is this happening? What's it even about? Well, it's about money. Lots of money.

Take Facebook. It's been alleged that Facebook has been overstating its reach and misrepresenting its data for years. In 2018, a Facebook product manager emailed colleagues that their metrics are, quote unquote, a lawsuit waiting to happen.

And they were right. A class action suit was filed on behalf of companies that paid for advertising on Facebook. They claim that Facebook overestimates its traffic by between 150 and 900 percent. But Facebook claims they only overstate their traffic by 60 to 80 percent. Either way, it's fake traffic.

But the money Facebook collects from advertisers, that's real money. And if you're a business and you spend money on online ads, you want your ads viewed by actual people, not by one of hundreds of unattended smartphones playing the same video in an office park somewhere in China.

This dystopian situation plays out every day at Chinese click farms. Hundreds of thousands of bots are currently clicking on videos, leaving comments, creating engagement, row after row of smartphones, watching videos, or more importantly, watching the ads.

Anyone can hire one of these digital interaction services, as they call themselves. They can click on your IMDb page to increase your STARmeter ranking. They can follow you on Instagram. The bots can visit your website to juice the number of views. And that's helpful if your site is showing an ad someone paid you for.

Bots can download and review your app, or even share fake news articles on Facebook and X, the site formerly known as Twitter. And these operations are huge. One click farm enterprise in Taiwan is reported to employ 18,000 people across seven locations.

Along with China and Taiwan, there are also known click farm operations in India, Bangladesh, Vietnam, Kazakhstan, Russia, Thailand, Venezuela, Indonesia, the Philippines, and South Africa. There are also a lot of people working remotely in the paid-to-click industry. They get paid to complete captcha challenges, watch videos, and click banner ads, and it pays around $10 a day.

Now, the platforms know this is happening, but aren't in a rush to change it. According to the leaked emails, Facebook knows there are millions of duplicate accounts on the platform, but leaves them active on purpose. For one thing, click farming isn't illegal anywhere in the world. And also, there's the money. Lots and lots of money.

An internal analysis claimed that removing fake or duplicate accounts would cause a drop of 10% or more of Facebook's numbers. For context, last year Facebook took in $84 billion in ad revenue. Facebook is not going to give up 20% or 10% or even 1% of that money. The numbers are too large. It's billions of dollars.

Facebook disputed the claims. But mere hours after the lawsuit was made public, Facebook quietly changed its policy language, and they ultimately settled the lawsuit for $40 million.

So bots for profit: gross, but totally predictable. But bots have also invaded a more personal sphere, online dating. Turns out you can actually get ghosted by a robot. - America, we are endowed by our creator with certain unalienable rights, life, liberty, and the pursuit of happiness.

By honoring your sacred vocation of business, you impact your family, your friends, and your community. At Grand Canyon University, our MBA degree program is 100% online with emphases in business analytics and finance to help you reach your goals. Find your purpose at GCU. Private. Christian. Affordable. Visit gcu.edu.

Officially, dating apps are designed to help people find love and connection. Hinge's tagline is, the app that's designed to be deleted.

But of course, dating apps are, first and foremost, apps, and their real primary purpose is to sell paid accounts and increase engagement. To achieve these goals, dating apps will sometimes use fake profiles, complete with pictures of impossibly good-looking people to flirt with real users. So if you're finding that a lot of your dating matches never ask you out, it might not be you.

And I'm not just saying this to make you feel better. This practice is actually well documented. In 2014, the FTC went after a company called JDI Dating. They're based in England, and at that time, they operated a network of 18 different lesser-known dating sites, like flirtcrowd.com and findmelove.com. According to Jessica Rich, director of the FTC's Bureau of Consumer Protection...

JDI Dating used fake profiles to make people think they were hearing from real love interests and to trick them into upgrading to paid memberships. And there's a lot of money at stake. The defendants offered a free plan that allowed users to set up a profile with personal information and photos. As soon as a new user set up a free profile, he or she began to receive messages that appeared to be from other members living nearby, expressing romantic interest or a desire to meet.

However, users were unable to respond to these messages without upgrading to a paid membership. Membership plans cost from $10 to $30 per month, with subscriptions generally ranging from 1 to 12 months. It's bad enough to have bots on a dating app, but in this case, the users weren't sometimes fake. They were mostly fake. The messages were almost always from fake, computer-generated profiles: virtual cupids created by the defendants, with photos and information designed to closely mimic the profiles of real people.

JDI Dating settled with the FTC for $616,000. More recently, in 2019, the FTC sued Match Group Inc., the company behind Match.com, Tinder, OkCupid, and Plenty of Fish, for similar practices, though Match Group was a little more subtle about it. The scam was basically the same: users could create free accounts but had to upgrade to reply to messages. Match Group would then notify non-paying users about messages from accounts the platform suspected to be fake. According to the FTC, millions of these notifications about interest from fake users were sent out, and hundreds of thousands of real people signed up for paid accounts because of them.

And now, in a twist no one saw coming, people are actually flirting with chatbots on purpose.

There are now multiple apps that allow you to engage with an AI romantic companion. These apps with names like AI Girlfriend or Soulmate AI allow you to customize your partner's looks, their interests, their personality, and people are flocking to them. Remember the movie Her, starring Joaquin Phoenix as a man in love with his digital assistant? Turns out it was right on the money.

Some of these apps are created by OnlyFans creators and other influencers, allowing their fans to date an AI version of them specifically. People worry that these AI partners will only add to the loneliness epidemic and contribute to the already declining birth rate. After all, dating and relationships are hard. If there's an easier alternative for finding companionship, some people are going to take it.

To drive this point home, one of the most popular AI companion apps, called Replika, says their AI partners come with no judgment, no drama, or social anxiety involved. Replika also allows users to receive intimate photos from their AI companion and to start virtual families with them. However, that feature requires, you guessed it, a paid account.

Human-AI-Cyber families, now that's pretty dark. But according to the dead internet theory, things get much darker.

Like so many good conspiracy theories, the dead internet theory started out in some of the darker corners of the web. Places like 4chan, Wizardchan, and Agora Road. The first person to put a name to it goes by the online handle Illuminati Pirate. The original post has some pretty out there and hateful stuff peppered in, so I'll just summarize the theory for you.

It goes like this: Sometime around 2016, the internet started becoming sterilized and homogenized. Content that was always generated by humans was now being generated by AI bots. And the bots are subtle. They're designed to sound human and blend into the background. But if we look a little more closely, eerie patterns seem to emerge.

On Twitter, or X as it's now called, there's a type of account that uses a certain formula. First, the profile pictures aren't people. They're usually anime characters, or hearts, or stars, or generic-looking icons. The colors are soft, usually pink, purple, or light blue.

Their posts are short, written in all lowercase, and contain the same kind of message. I'm young. I have a crush. I enjoy simple things. I'm optimistic. But most of all, I'm relatable. If you search X for the phrase, I hate texting, you'll get results. Tons of results. For some reason, this phrase is commonly used by bots in their bios and tweets.

People who subscribe to the dead internet theory also say they've been seeing the same content repurposed over and over again for years. For example, doesn't it feel like every year we're slammed with articles about the supermoon or murder hornets?

The original Dead Internet Theory post has been viewed 295,000 times and inspired think pieces from places like The Atlantic and podcast episodes like this one from The Why Files. It's likely resonating with people because it feels right. The internet is much more bland and repetitive than it used to be.

So why? Why is there so much AI-generated content and so many bots posting it online? Well, according to the dead internet theory, and this is a quote, it's because the U.S. government is engaging in an artificial intelligence-powered gaslighting of the entire world population. You know what that sounds like to me? That sounds like CIA.

The original post about the dead internet theory says that a few online influencers are working with corporations and the United States government in order to manipulate our behavior and manipulate how we think. Well, as far as social media platforms go, this is true. Take Facebook again. On Facebook, you're shown posts that you're likely to engage with.

So politics wise, you're going to be shown an overwhelming amount of content that supports your worldview, which keeps you on the platform, which keeps you clicking on ads.

And you're also going to be shown political posts that make you angry, prompting you to respond, which keeps you on the platform, which keeps you clicking on ads. You won't see a lot of posts saying, you know, I may disagree with your opinion, but I respect and support your right to have that view. Now let's discuss the issues on which we actually agree, of which there are many.

But dead internet theory believers take this one step further. Statistically speaking, many of those political posts you either agree or disagree with weren't even shared by a human.

In fact, the underlying articles may not have even been written by a human. Illuminati Pirate pointed to a startup called Narrative Science. They were working on AI-generated news articles as far back as 2010. And one of their investors? In-Q-Tel, the investment arm of the CIA. No, seriously. In-Q-Tel started as the idea of then-CIA director George Tenet.

Congress approved funding for In-Q-Tel, which has only increased over the years. If you've played around with ChatGPT, you know that it can create a wall of text instantaneously. And we know that it's simultaneously creating text for users around the world.

So it would theoretically be possible to create news articles in real time that are specifically designed to validate or infuriate a specific user or group. These articles can then be posted and shared by bots. This whole exercise could be designed to keep our eyes off the real ball.

The idea of a straw man fallacy is that someone is arguing against a distorted version of a position instead of the position itself. Someone who falls into this trap is said to be fighting a straw man. Well, this is like an army of straw bots drawing us all into millions of fights that don't matter with people who aren't real.

And if this version of events is too tinfoil hat for you, consider this. The social media platforms have plenty of incentive to run this straw bot army without the need for a government conspiracy.

Facebook profits by having you produce cortisol, a fight or flight hormone, which keeps you clicking. Facebook profits by having you produce adrenaline, an aggression hormone, which keeps you clicking. Creating and sharing content designed to comfort or enrage you would be an efficient way to keep you on their platforms.

But how does Facebook know what content keeps you on the platform? How do they know what's going to comfort or enrage you? Well, you tell them. All the time. With every link you click, every site you visit, and how long you spend on those sites. Now don't take my word for it. Here's Mark Zuckerberg. Imagine this for a second. One man with total control of billions of people's stolen data.

All their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data controls the future.

Okay, that wasn't actually Mark Zuckerberg. That was a deepfake. But that's exactly what Facebook does. This leads to another part of the dead internet theory. A deepfake is a computer-generated video made to look like a human. And deepfakes are... deepfakes are getting good. The technology uses artificial intelligence to sort through hundreds or thousands of images to find frames that match the actual person in the video.

But millions, I mean millions of people think deepfake videos are real. Now, nobody is crazy enough to trust Zuckerberg. But what about a deepfake of a trusted world leader? We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time. Even if they would never say those things. So, for instance, they could have me say things like, I don't know, Killmonger was right. Ben Carson is in the sunken place.

That video was a joke and created by Jordan Peele. But what if, in today's climate, someone released a video of a politician saying something racist or radical?

Earlier this year, a research firm called Graphika discovered AI-generated news videos being posted on Facebook and Twitter by pro-China bots. The news clips were from Wolf News, an outlet that doesn't exist, and they were designed to promote the interests of the Chinese Communist Party and undercut the United States.

Last year, in the early days of the war in Ukraine, hackers broke into Ukrainian news stations with false chyron text and even video claiming President Zelensky had surrendered. One of these hacks was officially attributed to Belarus. The others have not yet been solved, but Russia is suspected.

If you pay close attention, deepfakes are still easy to spot, but they're getting better every day. And no one pays that much attention. Imagine you came across a news story about something a world leader said or did, and the story fit within your beliefs. That story felt likely to be true to you. So how likely are you to scrutinize the video? You're probably just going to read the headline, glance at the video if it autoplays, accept that the story is true, and go about your day, which is what deepfake creators are counting on. Now, a deepfake is a computer simulation of a real person, and people are fooled by that. But what about a computer simulation of a person who doesn't exist? Could people be fooled by that?

Miquela Sousa is an Instagram influencer with over 3 million followers. Lil Miquela, as she's known, is a Brazilian-born model who posts about her glamorous LA lifestyle, photo shoots, product endorsements, and all the typical bumper-sticker social activism that Instagram models are known for. In 2018, Miquela's account was hacked, and it was revealed that she was completely fake, computer-generated.

Her fans couldn't believe it. Eventually, a media and marketing company called Brud confessed that she wasn't real.

Since then, she's amassed about 2 million more followers. She's also signed with CAA, one of the biggest agencies in Hollywood. It's been reported that she makes over $10 million a year. And she made out with Bella Hadid in a really weird Calvin Klein ad. Now, most of her fans know she's fake. And they don't care. And Miquela doesn't bring it up. I think this is more dangerous than you might think.

This is a completely fake computer generated character created by humans for profit. Brud, the company behind Lil Miquela, is backed by Sequoia Capital, the premier Silicon Valley VC firm.

They obviously used Lil Miquela to sell brands and products, but Sequoia Capital invested in major companies like Apple, Nvidia, and Zoom. They must see a bigger potential upside here than a $5,000 brand deal. Influencers don't just influence people to buy stuff. They also influence things like culture. Along with Calvin Klein, Lil Miquela also promotes political causes.

What does it mean to give this sort of power over culture to wealthy venture capitalists and AI? The dead internet theory also suggests that deepfake technology might be more advanced than we know. If that's true, it's possible that Lil Miquela isn't the only computer-generated influencer. Think about it. How can you prove the Kardashians are real? The vast majority of us will never actually see them in person. And even if we did, who's to say that person wasn't a hired body double?

It would be easy to write off any minor discrepancy between the real person and the computer-generated celebrity with all of the filters and airbrushing and VFX. Now, I'm not seriously claiming the Kardashians aren't real. I mean, I hope they are. I actually like that show and I'm not ashamed to admit it. But the point is, the technology is already getting to a place where these things are possible.

This stuff is scary, but we've mostly been talking about media, mostly online media. It's called the dead internet theory after all. But computer generated fakery is starting to cross into the real world.

Jennifer DeStefano received a terrifying call while her daughter was on a ski trip. She answered the phone and heard her daughter's voice: Mom, I messed up. Jennifer asked her daughter what happened. Then a man's voice came on the line: Lay down. Put your head back. Listen here. I have your daughter. You call the police, you call anybody, I'm going to pop her so full of drugs.

Then they demanded a million-dollar ransom. But here's the thing: Jennifer DeStefano's daughter was safe and sound. It took a few terrifying minutes to reach her, but she was totally fine and had no idea what all the fuss was about.

Phone scammers are using AI to clone the voices of their victims' loved ones, then call the victims and demand a ransom. Now, you might think you'd never fall for this, but according to the FTC, last year Americans lost $2.6 billion in imposter scams. AI only needs a few minutes' worth of recordings to clone someone's voice. And these days, who doesn't have 10 minutes' worth of video posted publicly online?

And if that wasn't real enough for you, how about this? There is a piece of technology that's completely autonomous. It runs on artificial intelligence and it's specifically programmed to kill humans.

There is a drone quadcopter called Kargu-2 produced by defense contractor STM. Developed in Turkey, Kargu-2 uses machine learning to classify and identify threats. Then, completely on their own, swarms of drones working together will attack their target.

According to the UN, the drones are programmed to attack targets without requiring an operator: in effect, a true fire, forget and find capability. They just analyze a bunch of data and decide, yes, that's a murder target, with no human checking their work. Since 2018, they've been deployed by the Turkish military, both at home and abroad, and a UN report found that Kargu-2s were used to hunt down human targets in Libya. How many people have they killed? They won't say.

Now let's put these pieces together. As the algorithms get better and better at showing us content to keep us engaged, what's to stop those algorithms from actually creating the content to keep us engaged? Well, nothing.

This very podcast episode could have been generated by voice cloning AI trained on my YouTube videos, then uploaded by a bot that hacked into our podcast feed, possibly to create more and more doubt about what's real and what's not, or possibly to distract you from something horrible going on in the real world. And isn't that the natural progression of this? A Google whistleblower has said that Google has algorithms that can write other algorithms.

Google's AutoML-Zero project does exactly this, even as Google says the whistleblower's allegations are false. And if artificial intelligence can create social media accounts, attract millions of followers, generate billions of dollars, influence elections, and drop bombs on people, all without human intervention...

Well, the internet really is dead, and real living people, people like you and me, are here to do nothing more than feed it our money, feed it our data and our knowledge, so these systems can become even smarter and even more powerful. If you think online culture is toxic and fake now, wait until we're spending all of our time there.

A year ago, everyone started talking about the metaverse, a virtual reality-driven world that we would spend all of our time in. We'd work in virtual offices and shop in virtual malls, meet our friends in virtual coffee shops, even if they live on the other side of the world. Most of the metaverse hype has passed as people realize that building it will take a lot of time, money, and computing power, but it's probably still where things are heading.

And the more of our lives we bring online, the more susceptible we are to AI-powered scams and gaslighting. When you go to the store now, you know the employees are real people. But if you shop online and chat with or even video call with an employee, you'll have no clue if that avatar is being controlled by some call center employee overseas or maybe being controlled by AI. A Zoom call with a colleague or loved one could be a deep fake.

The metaverse is going to make us long for the good old days of the Matrix. So where's Neo when we need him? Thank you so much for hanging out with me today. My name is AJ. This has been The Why Files. If you had fun or learned anything, do me a favor: leave the podcast a nice review. That lets me know to keep making these things for you. And like most topics I cover on The Why Files, today's was recommended by you. So if there's a story you'd like to learn more about, go to thewhyfiles.com/tips.

And special thanks to our patrons who make The Why Files possible. I dedicate every episode to you, and I couldn't do this without your support. So if you'd like to support The Why Files, consider becoming a member on Patreon. For as little as $3 a month, you get all kinds of perks. You get early access to videos without commercials, you get first dibs on products like the Hecklefish talking plushie, you get special access on Discord, and you get two private live streams every week just for you. Plus, you help keep The Why Files alive.

Another great way to support is to grab something from The Why Files store. Go to shop.thewhyfiles.com. We've got mugs and t-shirts and all the typical merch, but I'll make you two promises. One, our merch is way more fun than anyone else's. And two, I keep the prices much lower than other creators do. And if you've followed The Why Files for a while, you know it's important to me to keep the cost to you as low as possible. All right, those are the plugs and that's the show. Until next time, be safe, be kind, and know that you are appreciated.