
AI Bots Are Changing Wikipedia — For Better or Worse?

2025/4/13

AI Education

People
Jaeden Schafer
Topics
I've observed that Wikipedia's traffic has surged by 50% since January 2024, mainly because AI models and AI crawlers are scraping the site for information, driving up server costs substantially. This problem doesn't just affect Wikipedia; it will affect every website, business, and individual. Wikipedia has officially acknowledged that the traffic from AI crawlers exceeds what its infrastructure can sustain, bringing major risks and costs. Although Wikipedia's content is open to everyone, mass scraping by AI models creates enormous cost pressure, because these models typically ignore robots.txt files and keep scraping anyway. Sam Altman has suggested to the White House that copyright rules be waived for AI models so they can scrape all data, which has sparked copyright controversy. AI scraping raises a site's server and bandwidth costs even when the site doesn't profit from it directly. 65% of Wikipedia's most expensive traffic comes from bots. Wikipedia's data center architecture makes popular pages cheap to serve and unpopular pages expensive, and AI bots scrape everything, including obscure content, which causes costs to spike. Currently, about 35% of Wikipedia's page views come from bots, yet those bots account for 65% of its most expensive traffic. Bots access the site differently from humans: they tend to bulk-read large numbers of unpopular pages, which drives up costs. The Wikimedia Foundation is trying to address the mass-scraping problem, and Cloudflare has released AI Labyrinth, a tool that uses AI-generated content to slow down crawler bots. Cloudflare protects websites from DDoS attacks by absorbing and dispersing traffic so sites don't crash; its AI Labyrinth tool detects AI crawlers and serves them AI-generated content to slow them down, punishing them with garbage data. AI crawlers ignoring robots.txt is what drives these rising costs. Crawlers feeding large language models at companies like Meta have increased websites' bandwidth demands and costs, and companies like OpenAI raise sites' costs while scraping their data, which has upset a lot of people. Websites need solutions for AI crawlers, and going forward they will have to balance how AI agents interact with their sites: allowing customers to use agents to make purchases while preventing abuse. Sites will need to identify which content drives sales and decide accordingly whether to enable tools like AI Labyrinth. Dealing with AI crawlers is an ongoing cat-and-mouse game that requires sites to keep adjusting their strategies.


Transcript


Wikipedia has seen its traffic surge by 50%, and that's just since January of 2024. What's behind this massive surge in usage, you might ask? Oh, maybe they're getting a ton of new users. Maybe everyone's sick of ChatGPT, so they want to go over to Wikipedia.

This is all due to AI models and AI scrapers crawling the website for information and driving up Wikipedia's costs a ton. So today on the podcast, I want to dive into this phenomenon, and not just because of Wikipedia, although it is interesting how it affects one of the biggest websites on the planet.

It's because of how this is going to affect every single website on the planet. Every single business, every single person who has anything online, is going to run into this exact same problem. And some of the solutions are actually pretty hilarious. But let's get into it.

The first thing I want to share is an official statement that Wikipedia published on their blog, detailing a little bit of this problem and pretty much what's happening. They said: "Our infrastructure is built to sustain sudden spikes from humans during high interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs."

So the really interesting thing here is that, yes, Wikipedia is free for anyone to use, and technically even for AI models to scrape. That's just how it was built, right? It's not like they have a big team of journalists who go and write articles; anyone can contribute. So it's kind of fair game for anyone to use this content.

But the problem is the scale at which these AI models are using that content. And the bigger problem is this: even if a website uses a robots.txt file to tell search engines and crawlers not to scrape it (Wikipedia doesn't actually do that, because they want to be indexed by Google), the AI models and the people scraping data for AI have typically just ignored it. They don't really care. It even went so far that, just two weeks ago, Sam Altman talked to the White House and said, hey, you've got to get rid of the copyright rules for AI models, because we want to be able to scrape and suck up data from literally everything.
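To make the robots.txt mechanism concrete, here's a minimal Python sketch of the check a well-behaved crawler is supposed to run before fetching a page. The user-agent name is made up and the URLs are just examples; the scrapers being described here simply skip this step.

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt, which lists the rules
# that polite crawlers are expected to follow.
rp = robotparser.RobotFileParser()
rp.set_url("https://en.wikipedia.org/robots.txt")
rp.read()

# A well-behaved crawler checks before fetching; many AI scrapers don't.
url = "https://en.wikipedia.org/wiki/Special:Random"
if rp.can_fetch("ExampleCrawlerBot", url):
    print("robots.txt allows fetching", url)
else:
    print("robots.txt disallows fetching", url)
```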

The tricky part, though, as we're learning with Wikipedia, is that whether or not there's a case to be made on copyright, it's still going to cost the companies hosting the content money just to have these AI models scraping through it, because their server fees get so high. They're paying for all this hosting, they're paying for all this bandwidth. Somebody is paying for it, and it's not the company going and grabbing the data. So this is where it gets a little bit tricky.

I wanted to read you something interesting. Wikipedia says that almost two thirds, about 65%, of what they're calling their quote unquote most expensive traffic comes from bots. And you might ask, well, why is some traffic more expensive than other traffic? It's a little bit technical, but essentially, content that gets hit very frequently, like the most popular articles on Wikipedia or any website, is stored at a different part of the data center and cached differently, so that it's very easily accessible. These are the web pages with very high traffic.

And so Wikipedia is pretty much set up to say: look, these are our top 10,000 most popular pages, and most of our website traffic goes there. All of the less popular pages, maybe a page that only gets hit once or twice a month, live in a completely different part of the data center that's harder to access. They aren't cached the same way, so it costs more money and bandwidth to actually go and access them.

Essentially, they've set this up in a really smart way: it's cheapest to serve the most frequently requested content, and it's most expensive, or uses the most server bandwidth, to serve the least popular content. And that works out fine, unless they run into a situation where AI models want to cover every single thing, right? Typically, if I'm scrolling through Wikipedia, they'll show me related articles, maybe I'll click on some of them, and that's the bubble of content I'm going to consume.

But if you're a bot, you're going to scrape literally everything: the most popular content, the least popular content, pictures and images that no one ever touches. You're going to suck all of it in. And when that's the case, it gets really, really expensive. What's interesting is that about 35% of the overall page views on Wikipedia now come from bots. So roughly a third of all their page views come from bots, but 65% of their most expensive views come from bots. In other words, while the bots are, well, not a small share, it's still a third of all their page views, they account for an outsized proportion of the cost. Serving the bots is more expensive than serving a lot of the humans, which is not very good for an organization like Wikipedia.
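To see why that asymmetry happens, here's a toy sketch with made-up cost numbers, assuming a simple two-tier setup where the popular pages sit in a cheap cache and everything else has to come from the core data center. This isn't Wikipedia's actual architecture, just an illustration of the effect.

```python
# Toy model (made-up numbers): hot pages are served from an edge cache,
# cold pages come from the core data center and cost far more to serve.
CACHE_COST = 1      # arbitrary cost units per cached (hot) page view
ORIGIN_COST = 20    # arbitrary cost units per uncached (cold) page view

hot_pages = set(range(10_000))  # the popular, cached pages

def serve_cost(page_id: int) -> int:
    return CACHE_COST if page_id in hot_pages else ORIGIN_COST

# A human mostly browses popular pages and their neighbors.
human_views = range(50)
# A scraper bulk-reads across the entire long tail, popular or not.
bot_views = range(0, 1_000_000, 100)

human_total = sum(serve_cost(p) for p in human_views)
bot_total = sum(serve_cost(p) for p in bot_views)
print(f"human: {len(human_views)} views cost {human_total}")  # 50 cheap views
print(f"bot: {len(bot_views)} views cost {bot_total}")        # mostly expensive views
```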

So this is what they said about it: human readers tend to focus on specific topics, while bot crawlers often tend to, quote, "bulk read" larger numbers of pages that are less popular. This is the big conundrum that the Wikimedia Foundation has been trying to deal with, and there are a bunch of different ways they go about it. But there's also a new tool that was recently released by our friends at Cloudflare, and it's called the AI Labyrinth. The AI Labyrinth essentially uses AI-generated content to slow down these crawler bots.

So, Cloudflare is a well-known tool that I use on most of my websites, and a lot of people do. It essentially protects your website from attacks where someone hits it with a flood of traffic, like

a million visitors in two seconds, trying to crash your servers and take them down. That's called a DDoS attack. To protect yourself in that situation, you can sign up with a company like Cloudflare, which essentially sits between the users and your actual website. If they see a massive surge like that, Cloudflare will absorb most of the traffic, disperse it, and not let all million requests hit your website at once. It essentially makes sure that bots aren't crashing your site, and only actual humans get through. So that's what Cloudflare does. It's great; I use it on a lot of my sites for a lot of different things.

They also have free SSL certificates and all sorts of other cool things, but one of the big ones is preventing your servers from being overwhelmed. And the thing they've now done is this: they can detect when a visitor is an AI crawler, and instead of just trying to slow it down or block it, they feed it AI-generated content, just garbage. They call it an AI labyrinth, and they let these AI crawlers absorb all of this junk, which slows them down and keeps them from crashing your website at the same time. It's also kind of funny, because it punishes them beyond just blocking them: it puts crappy data inside their data set. So it's kind of clever, a little bit vengeful, and people can sign up for this and use it today.
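Cloudflare hasn't published how AI Labyrinth works internally, but the general shape of the idea can be sketched as a simple server-side filter. This is a hypothetical Python/Flask sketch: the user-agent signatures and decoy text are placeholders, not Cloudflare's actual detection or generation logic.

```python
from flask import Flask, request

app = Flask(__name__)

# Hypothetical list of user-agent substrings associated with AI crawlers.
AI_CRAWLER_SIGNATURES = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def looks_like_ai_crawler(user_agent: str) -> bool:
    return any(sig.lower() in user_agent.lower() for sig in AI_CRAWLER_SIGNATURES)

def decoy_page(path: str) -> str:
    # Stand-in for AI-generated filler; a real system would generate
    # plausible-looking text plus links that lead to more decoy pages.
    return (f"<html><body><p>Endless filler about {path}...</p>"
            f"<a href='/maze/{hash(path) % 10_000}'>read more</a></body></html>")

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path: str):
    ua = request.headers.get("User-Agent", "")
    if looks_like_ai_crawler(ua):
        # Suspected AI crawlers get routed into the labyrinth of junk.
        return decoy_page(path)
    return f"<html><body><p>Real content for {path}</p></body></html>"

if __name__ == "__main__":
    app.run()
```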

It is kind of interesting. At the moment, it really is a cat and mouse game: people keep finding new ways to make their scrapers look like they're not AI crawlers so they can scrape everything from a website. But this is definitely a problem. Last month, software engineer and open source advocate Drew DeVault complained that these AI crawlers are ignoring the robots.txt files that are supposed to keep automated traffic away.

Gergely Orosz also complained last week that AI scrapers from companies like Meta had driven up the bandwidth demands of his own projects, costing him a ton of money. So it's not just one company. It's OpenAI, it's Meta, it's all of these billion-dollar companies that are

running up costs for a lot of people. I think back when OpenAI was grabbing its first data set, it was probably able to fly under the radar a little bit. But at this point, everybody knows where this traffic is coming from, and it's costing a ton of money. And in the case of OpenAI, which is closed source, they're grabbing the data and charging for it; they're costing you money while they extract your data at the same time. So a lot of people are upset about this.

But overall, there's not a lot you can do unless you start using a tool like Cloudflare's AI Labyrinth or something similar. I'll definitely keep you up to date on this. I think it's important, because every website is currently experiencing, or is going to experience, some version of these problems. People will come up with solutions. But at the end of the day, we have to start looking at what this is all going to look like in the age of agents.

We have to factor that into how this all plays out, because you don't really want to block an agent if, say, a customer is using that agent to come to your website and buy something. That sounds fantastic. But if someone is using an agent just to scrape some data, burn some of your server bandwidth, and then move on without giving you any ad revenue or purchases, then it's sort of useless to you.

So it's going to be interesting. A lot of websites are going to have to figure out which content, which pages, actually drive sales. Maybe your whole blog is just free content on your website; you might turn that off for AI agents and turn the AI Labyrinth on. But on your sales pages or product pages, where you actually want people to buy things, and where an AI agent might actually be helping a user buy something, you want to keep agent access on. So it's going to be a really interesting game to play and a balance to strike.
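As a sketch of what that kind of per-page policy could look like, here's a small hypothetical Python example; the path prefixes and policy names are made up for illustration, not any real product's configuration.

```python
# Hypothetical per-path policy: send crawlers on free content into the
# labyrinth, but let agents through on pages where they might buy something.
LABYRINTH_PREFIXES = ("/blog/",)                       # free content: decoy crawlers
AGENT_ALLOWED_PREFIXES = ("/products/", "/checkout/")  # agents may drive sales

def policy_for(path: str, is_ai_crawler: bool) -> str:
    if not is_ai_crawler:
        return "serve"  # humans always get the real content
    if any(path.startswith(p) for p in AGENT_ALLOWED_PREFIXES):
        return "serve"  # the agent might be buying, so let it in
    if any(path.startswith(p) for p in LABYRINTH_PREFIXES):
        return "labyrinth"  # pure scraping, so feed it decoys
    return "block"

print(policy_for("/blog/free-article", is_ai_crawler=True))   # labyrinth
print(policy_for("/products/widget", is_ai_crawler=True))     # serve
print(policy_for("/blog/free-article", is_ai_crawler=False))  # serve
```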

I'll keep you up to date on everything, and on any other new tools that come out to help with this, because I think this is an absolutely hilarious cat and mouse game. But you don't want to end up on the wrong side of it, because you wouldn't want to block actual customers, or their agents, from buying stuff on your website.

Thanks so much for tuning into the podcast. If you enjoyed it, and if you'd ever like to use AI tools to grow and scale your business, I have an exclusive school community where every single week I publish a video I don't post anywhere else, breaking down the exact tools and products I use to grow and scale my business with AI. There's a link in the description to the AI Hustle School community. We have over 300 members, and it's $19 a month. And if you join now, you'll never have the price raised on you when we increase it in the future. Thanks so much for tuning into the podcast today, and I hope you all have a fantastic rest of your week.