Wikipedia has seen their traffic surge by 50%. And this is just since January of 2024, last year. What's behind this massive surge in usage, you might ask? Oh, maybe they're getting a ton of new users. Maybe everyone's sick of ChatGPT, so they want to go over to Wikipedia. Wrong. This is all due to AI models and AI scrapers crawling their website for information and driving up Wikipedia's costs a ton. So today on the podcast, I want to dive into this phenomenon, but not just because of Wikipedia. While it's interesting how it affects one of the biggest websites on the planet, what matters is how it's going to affect every single website on the planet. Every single business, every single person that has anything online is going to have this exact same problem. And some of the solutions are actually pretty hilarious. But let's get into this.

The first thing I want to point to is an official statement that Wikipedia published on their blog detailing this problem and pretty much what's happening. They said: "Our infrastructure is built to sustain sudden spikes from humans during high interest events. But the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs." So the thing that's really interesting here is, yes, Wikipedia is free for anyone to use, and technically even for AI models to scrape. It's kind of just how it was built, right? It's not like they have a big team of journalists that go and write articles; anyone can contribute. And so it's kind of fair game for anyone to use this content. But the problem is that these AI models are using the content. And the bigger problem is that even when a website uses a robots.txt file to tell crawlers not to scrape it (Wikipedia doesn't actually do this, because they want to be indexed by Google), the people scraping data for AI have typically just ignored it. They don't really care. It even went as far as, just two weeks ago, Sam Altman talking to the White House and saying, hey, you gotta get rid of the copyright rules for AI models because we want to be able to scrape and suck up the data from literally everything.

The tricky part, though, like we're learning with Wikipedia, is that whether or not there's a case to be made on copyright, it's still going to cost these companies money just to have these AI models scraping through all their content, because their server fees are going to get so high. They're paying for all this hosting, they're paying for all this bandwidth. Somebody is paying for it, and it's not the company going and grabbing the data. So this is where this gets a little bit tricky.

I wanted to read you something interesting. Wikipedia says that almost two thirds, that's about 65%, of what they're calling their quote unquote most expensive traffic comes from bots. And it's like, well, why is some traffic more expensive than other traffic? It's a little bit technical, but essentially, content that gets hit very frequently, like the most popular articles on Wikipedia or any website, is stored at a different part of the data center and cached so that it's very easily accessible. These are web pages with very high traffic. And so Wikipedia is pretty much set up to say, look, these are our top 10,000 most popular pages.
And most of our website traffic goes there. All of the less popular pages, maybe a page that only gets hit once or twice a month, live at a completely different part of the data center that's harder to access. They aren't cached the same way, so it just costs more money and bandwidth to actually go and access them. They've essentially set this up in a really smart way: it's cheapest to serve the most frequently requested content, and it's most expensive, or uses the most server bandwidth, to serve the least popular content. Which is not going to cost them a lot of money, unless they run into a situation where these AI models want to cover every single thing, right? Typically, if I'm scrolling through Wikipedia, they're going to show me related articles. Maybe I'll click on some of them, and that's kind of the bubble of content I'm going to consume. If you're a bot, you're going to scrape literally everything: the most popular content, the least popular content, and pictures and images that no one ever touches. They're going to suck all of it in. And when that's the case, it's just really, really expensive.

What's interesting is that about 35% of the overall page views on Wikipedia right now come from bots. So it's about a third of all of their page views, but 65% of their most expensive views are from the bots. So while the bots have a smaller share of overall traffic (not small, exactly, it's still a third of all their page views), it's an outsized proportion of the expense. Serving the bots costs more than serving a lot of the humans, which is not very good for an organization like Wikipedia. This is what they said about it: while human readers tend to focus on specific topics, bot crawlers often tend to, quote, "bulk read" larger numbers of pages that are less popular. So this is the big conundrum that the Wikimedia Foundation has been trying to deal with, and there are a bunch of different ways they're dealing with it.

And there's a new tool that was recently released by our friends at Cloudflare, and it is called the AI Labyrinth. The AI Labyrinth essentially is using AI generated content to slow down these crawler bots. Cloudflare is a famous tool that I use on most of my websites; a lot of people do. It essentially protects your website from attacks where somebody hits your website with, say, a million visitors in two seconds to try to crash your servers and take them down. It's called a DDoS attack. To protect yourself in this situation, you can sign up with a company like Cloudflare that will essentially sit between the users and your actual website. If they see a massive surge like this, Cloudflare will absorb most of this traffic, disperse it, and not let all million requests hit your website at once. It essentially makes sure that bots can't crash your site and only actual humans get through. So this is what Cloudflare does. It's great; I use it on a lot of my different sites for a lot of different things. They have free SSL certificates and all sorts of other cool things. But one of the big ones is preventing this kind of overwhelming of your servers.
And the thing that they've now done is they can detect if it's an AI crawler. And instead of just trying to slow it down or whatever, they're feeding it AI generated content, just garbage, calling it an AI Labyrinth, and letting these AI crawlers absorb all of this crap to slow them down and keep them from crashing your website at the same time. It's also kind of funny because it punishes them beyond just blocking them: it puts crappy data inside of their data set. So it's kind of funny, but people can sign up for this and use it, and others are already doing so. It's kind of clever, a little bit vengeful, but it is interesting. At the moment, it really is a cat and mouse game. People are finding new ways to make it seem like they're not an AI crawler to scrape everything from a website.

But this is definitely a problem. Last month, a software engineer and open source advocate, Drew DeVault, was complaining that these AI crawlers are ignoring the robots.txt files that are supposed to keep away automated traffic. Gergely Orosz was also complaining last week that AI scrapers from companies like Meta had driven up the bandwidth demands for his own projects, costing him a ton of money. So it's not just one company; it's OpenAI, it's Meta, it's all these billion dollar companies imposing costs on a lot of people. And I think back when OpenAI was grabbing their first data set, they were able to fly under the radar a little bit. But at this point, everybody knows where this traffic is coming from, and it's costing a ton of money. And in the case of OpenAI, which is closed source, they're grabbing the data and charging for it; they're costing you money while they extract your data at the same time. So a lot of people are upset about this, but overall there's not a lot you can do unless you start using a tool like Cloudflare's AI Labyrinth or others like it.

I'll definitely keep you up to date on this. I think this is important because every website is currently experiencing, and will continue to experience, some of these problems. There are going to be solutions that people come up with. But at the end of the day, when we start looking at the age of agents, we have to think about how this all plays out. Because you don't really want to block an agent. If, let's say a customer's using an agent to come to your website and buy something, that sounds fantastic. But if a customer is using an agent to come scrape some data, maybe just cause you some server bandwidth usage and then move on, and not give you any sort of ad revenue or purchasing power, then it's sort of useless. So it's going to be an interesting thing. A lot of websites are going to have to play it by ear: figure out what content actually drives sales, what pages actually drive sales. Maybe your whole blog is just free content on your website, so you turn that off for these AI agents; you turn the AI Labyrinth on. But on your sales pages or your product pages, where you actually want people to buy things, and where the AI agents might actually be helping their user buy things, you want to keep access on. So it's going to be a really interesting game to play and a balance to strike.
I'll keep you up to date on everything and any other new tools that come out that help with this, because I think this is an absolutely hilarious cat and mouse game. But you don't want to get on the wrong side of it, because you wouldn't want to block actual customers or actual agents from buying stuff on your website. Thanks so much for tuning into the podcast. If you enjoyed it, and if you would ever like to use AI tools to grow and scale your business, I have an exclusive Skool community where every single week I publish a video I don't post anywhere else, breaking down the exact tools and products I use to grow and scale my business with AI. There's a link in the description to the AI Hustle Skool community. We have over 300 members. It's $19 a month, and if you join now, the price will never be raised on you when we increase it in the future. Thanks so much for tuning into the podcast today, and I hope you all have a fantastic rest of your week.
Release Date: April 26, 2025
Host: The Mark Cuban Podcast
Episode Title: Wikipedia Flooded by AI Bots — What You Should Know
In this episode, Mark Cuban delves into a significant issue facing one of the internet's most prominent platforms: Wikipedia. Since January 2024, Wikipedia has experienced a 50% surge in traffic. Contrary to initial assumptions, this spike isn't due to an influx of new human users but rather the relentless activity of AI models and bots scraping the site for information.
"Wikipedia has seen their traffic surge by 50%. And this is just since January of 2024 last year." [00:00]
Cuban explains that while Wikipedia's open-access model allows AI models to legally scrape its content, this unrestricted access has led to unprecedented traffic generated by bots. This influx poses significant challenges, not just financially but also in terms of infrastructure stress.
"Our infrastructure is built to sustain sudden spikes from humans during high interest events. But the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs." [00:00]
This issue underscores a broader problem affecting every website, business, and individual with an online presence. The costs associated with increased server usage and bandwidth are mounting, and the burden falls on the content providers who aren't directly benefiting from the scraping activity.
Wikipedia has strategically optimized its infrastructure to handle high-traffic pages efficiently. Approximately 65% of their most expensive traffic stems from bots accessing both popular and obscure pages indiscriminately. In contrast, human users typically focus on specific topics, limiting their impact on resources.
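To make the tiering idea concrete, here's a minimal sketch in Python. It assumes a simple in-memory hot cache sitting in front of a slower, bandwidth-expensive origin; the cache size, page names, and the `fetch_from_origin` helper are hypothetical stand-ins, not Wikipedia's actual architecture.

```python
from functools import lru_cache

def fetch_from_origin(page_id: str) -> str:
    # Hypothetical stand-in for the slow, bandwidth-expensive storage tier
    # that holds rarely requested pages.
    return f"<html>content of {page_id}</html>"

@lru_cache(maxsize=10_000)  # roughly "our top 10,000 most popular pages"
def serve_page(page_id: str) -> str:
    # Hot pages are answered from the cache; everything else falls
    # through to the expensive origin fetch.
    return fetch_from_origin(page_id)

# A human browsing related articles keeps re-hitting a small bubble of
# pages, so much of the traffic is cheap cache hits.
for page in ["Python", "AI", "Python", "Wikipedia", "AI", "Python"]:
    serve_page(page)
print(serve_page.cache_info())  # hits=3, misses=3: half the requests were cheap

# A bulk crawler walks the long tail, so nearly every request misses the
# cache and hammers the expensive tier.
serve_page.cache_clear()
for i in range(20_000):
    serve_page(f"obscure-article-{i}")
print(serve_page.cache_info())  # hits=0, misses=20000: all expensive
```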
"Almost two thirds, that's about 65% of what they're ... most expensive traffic ... is from the bot." [Transcript Segment]
This disproportionate usage means that while bots account for a significant portion of page views (35% overall), their access patterns are much more resource-intensive, leading to elevated operational costs.
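Taking the episode's two figures at face value (bots are 35% of page views but 65% of the most expensive traffic), a quick back-of-envelope calculation makes the disparity concrete:

```python
# Figures quoted in the episode; treat these as rough, not exact.
bot_views, bot_expensive = 0.35, 0.65      # bots' share of views / of expensive traffic
human_views, human_expensive = 0.65, 0.35  # humans account for the remainder

# Expensive traffic generated per unit of page views, for each group.
bot_cost_rate = bot_expensive / bot_views        # ~1.86
human_cost_rate = human_expensive / human_views  # ~0.54

print(round(bot_cost_rate / human_cost_rate, 1))  # ~3.4
```

In other words, an average bot page view is roughly 3.4 times as likely as a human one to land in the expensive tier.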
Mark Cuban highlights recent developments where AI leaders, including Sam Altman, have lobbied the White House to eliminate copyright restrictions for AI models. This move aims to facilitate unrestricted data scraping, exacerbating the problem for content-rich websites.
"Hey, you gotta get rid of the copyright rules for AI models because we want to be able to scrape and suck up the data from literally everything." [Transcript Segment]
The legal implications remain murky, but the financial strain on websites from increased server and bandwidth usage is undeniable. Companies end up bearing the costs without any direct revenue benefits from these AI-driven activities.
To combat the surge of AI scraping, Cloudflare has introduced a novel tool called the AI Labyrinth. This solution leverages AI-generated content to inundate bots with irrelevant or "garbage" data, effectively slowing down their scraping activities and reducing their impact on website resources.
"The AI Labyrinth essentially is using AI generated content to slow down these crawler bots." [Transcript Segment]
Cloudflare, renowned for its robust security and DDoS protection services, acts as an intermediary between users and websites. By distinguishing between legitimate human traffic and harmful bots, Cloudflare ensures that only genuine users can access the site seamlessly.
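Cloudflare hasn't spelled out the Labyrinth's internals in this episode, but the core trick is easy to sketch. The toy request handler below, with a purely hypothetical user-agent list and decoy generator, serves suspected AI crawlers an endless chain of cheap filler pages instead of real content:

```python
import random

# Illustrative only: real deployments use far more robust bot detection
# than a user-agent substring match.
SUSPECTED_AI_CRAWLERS = ("gptbot", "ccbot", "claudebot")

def looks_like_ai_crawler(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(bot in ua for bot in SUSPECTED_AI_CRAWLERS)

def decoy_page(depth: int) -> str:
    # Cheap-to-generate filler that links ever deeper into more decoys,
    # so the crawler burns its crawl budget on garbage.
    words = " ".join(random.choice(["lorem", "ipsum", "dolor"]) for _ in range(50))
    return f"<html><p>{words}</p><a href='/maze/{depth + 1}'>more</a></html>"

def serve_real_page(path: str) -> str:
    return f"<html>real content for {path}</html>"

def handle_request(path: str, user_agent: str) -> str:
    if looks_like_ai_crawler(user_agent):
        # Route the bot into the maze; human visitors never see these pages.
        depth = int(path.rsplit("/", 1)[-1]) if path.startswith("/maze/") else 0
        return decoy_page(depth)
    return serve_real_page(path)

print(handle_request("/wiki/Python", "Mozilla/5.0"))             # real content
print(handle_request("/wiki/Python", "GPTBot/1.0")[:40], "...")  # decoy filler
```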
Despite tools like AI Labyrinth, the battle between website defenders and AI bot developers continues. Bot creators constantly evolve their methods to bypass detection, making it a perpetual challenge to safeguard online resources.
"At the moment, it really is a cat and mouse game. People are finding new ways to make it seem like they're not an AI crawler to scrape everything from a website." [Transcript Segment]
Notable voices in the tech community, such as Drew DeVault and Gergely Orosz, have voiced their frustrations over AI scrapers disregarding robots.txt directives, further complicating the issue.
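For context, here's what honoring robots.txt is supposed to look like from a crawler's side, using Python's standard `urllib.robotparser`. The file below is an illustrative example, not Wikipedia's; the catch, as DeVault points out, is that compliance is entirely voluntary, so a scraper can simply skip this check.

```python
import urllib.robotparser

# Example robots.txt asking two AI crawlers to stay away while allowing
# everyone else (illustrative; not any real site's actual file).
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks before fetching; nothing forces it to.
print(parser.can_fetch("GPTBot", "/wiki/Some_Article"))     # False
print(parser.can_fetch("Googlebot", "/wiki/Some_Article"))  # True
```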
The repercussions of AI bot scraping extend beyond Wikipedia. Various developers and content creators are grappling with increased operational costs due to unexpected bandwidth consumption. Large corporations like OpenAI and Meta are often at the center of these concerns, as their extensive data scraping can inadvertently burden smaller projects and businesses.
"It's OpenAI, it's Meta, it's all these billion dollar companies that are causing a lot of people just, you know, costs." [Transcript Segment]
For instance, Gergely Orosz highlighted how AI scrapers from major companies have driven up bandwidth demands for his projects, leading to substantial financial strain.
Looking ahead, Cuban emphasizes the need for websites to develop nuanced strategies to differentiate between beneficial AI interactions and detrimental scraping activities. The challenge lies in allowing AI agents that enhance user experience, such as virtual shopping assistants, while blocking those that merely deplete resources without any reciprocal benefit.
"You don't really want to block an agent. If, let's say a customer's using an agent to come to your website and buy something, that sounds fantastic. But if a customer is using an agent to come scrape some data, maybe just cause you some server bandwidth usage..." [Transcript Segment]
Potential approaches include:
Selective Blocking: Allowing AI agents access to essential areas like sales pages while restricting access to non-critical sections like blogs (see the sketch after this list).
Advanced Detection Tools: Utilizing sophisticated technologies to better identify and manage bot traffic without hindering legitimate user interactions.
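Here's a minimal sketch of the selective-blocking idea in Python, assuming a hypothetical path-based policy table; the routes, the `policy_for` helper, and the default-deny choice are all illustrative, not any real product's API:

```python
from fnmatch import fnmatch

# Hypothetical policy: agents are welcome where they might help a customer
# buy, and blocked on free content that only costs bandwidth to serve.
AGENT_POLICY = {
    "/products/*": "allow",
    "/checkout/*": "allow",
    "/blog/*": "block",   # no ad revenue or purchases from a scraper here
    "/docs/*": "block",
}

def policy_for(path: str, is_ai_agent: bool) -> str:
    if not is_ai_agent:
        return "allow"  # human visitors always get through
    for pattern, action in AGENT_POLICY.items():
        if fnmatch(path, pattern):
            return action
    return "block"  # default-deny anything unlisted

assert policy_for("/products/widget-42", is_ai_agent=True) == "allow"
assert policy_for("/blog/how-we-scaled", is_ai_agent=True) == "block"
assert policy_for("/blog/how-we-scaled", is_ai_agent=False) == "allow"
```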
Mark Cuban wraps up by reiterating the importance of staying informed and adaptive in this evolving landscape. As AI technologies continue to advance, websites must remain vigilant and proactive in implementing solutions that safeguard their resources without compromising user experience.
"I'll keep you up to date on everything and any other new tools that come out that helps in this because I think this is an absolutely hilarious cat and mouse game..." [Transcript Segment]
AI Bots Are a Growing Threat: The surge in AI-driven scraping activities is significantly impacting website operations and costs, as exemplified by Wikipedia's experience.
Infrastructure Challenges: Optimizing for human traffic doesn't necessarily safeguard against the resource drain caused by relentless bot scraping.
Innovative Defenses: Solutions like Cloudflare's AI Labyrinth offer promising methods to mitigate the impact of AI bots by feeding them irrelevant data.
Legal and Ethical Considerations: The push to remove copyright restrictions for AI models raises critical questions about the balance between data accessibility and content protection.
Future-Proofing Strategies: Websites need to develop sophisticated, selective strategies to allow beneficial AI interactions while curbing harmful scraping activities.
[00:00] "Wikipedia has seen their traffic surge by 50%. And this is just since January of 2024 last year."
[00:00] "Our infrastructure is built to sustain sudden spikes from humans during high interest events. But the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs."
[Transcript Segment] "Hey, you gotta get rid of the copyright rules for AI models because we want to be able to scrape and suck up the data from literally everything."
[Transcript Segment] "Almost two thirds, that's about 65% of what they're ... most expensive traffic ... is from the bot."
[Transcript Segment] "The AI Labyrinth essentially is using AI generated content to slow down these crawler bots."
[Transcript Segment] "At the moment, it really is a cat and mouse game. People are finding new ways to make it seem like they're not an AI crawler to scrape everything from a website."
[Transcript Segment] "You don't really want to block an agent. If, let's say a customer's using an agent to come to your website and buy something, that sounds fantastic. But if a customer is using an agent to come scrape some data, maybe just cause you some server bandwidth usage..."
This comprehensive summary encapsulates the critical discussions from the episode, providing listeners with a clear understanding of the challenges posed by AI bots to major online platforms like Wikipedia and the broader digital ecosystem. It also highlights the innovative solutions being developed to address these issues and the ongoing efforts required to maintain a balanced and functional web environment.