Have you heard of the dead internet theory? While it’s a relatively recent conspiracy theory that came about somewhere between the late 2010s and early 2020s, it’s been gaining more and more interest lately with the rise of AI.
The dead internet theory posits that bot activity and auto-generated content have taken over the internet to the point where real people no longer create most of its content. Proponents estimate that sometime between 2016 and 2017, the internet “died” as it became more bot than human.
Taken to its extreme, people have argued that this content, generated to manipulate algorithms and search results, also manipulates consumers. While we know corporations do this to an extent—after all, they badly want your data so they can serve you targeted ads—some conspiracy theorists also believe the government is in on it. One journalist cited a Reddit thread on the topic, where people claimed that “these bots attempt to influence public perception on just about any political topic, or else keep you constantly distracted and buying products, to keep you from questioning the elites.”
As with any conspiracy theory, it’s easy to go down a rabbit hole and end up with some misguided ideas. But how much of the dead internet theory is true?
According to an article from The Atlantic that first brought widespread attention to the theory, it’s a bit of a mixed bag of truth and “wacky conspiracy.” Much of the initial evidence is anecdotal and stems from the idea of recycled content. One example is a post with a specific structural style, manufactured to be relatable, that acts as an easily swapped-out template. A bot can take the template, tweak it, and rack up thousands of likes and reposts, though it’s hard to say whether those come from actual humans or from more bots.
Reused content is prominent on social media, where entire accounts are dedicated to aggregating galleries of stolen memes, posts, and viral videos. Similar operations, often called content farms, produce mass amounts of auto-generated web content to maximize SEO and drive traffic for ad revenue. When you Google something, you may get billions of hits, but if you take the time to scroll to the “end” of the results, you will often see a message about omitted entries that are “very similar” to those already displayed. As one writer put it, “results like this suggest that the internet might seem like an expansive forest, but it’s often just a hall of mirrors, reflecting the same content in different forms.”
Most importantly, the fear of bots is genuine. When Elon Musk took over Twitter in 2022 and later rebranded it as X, one of his stated goals was to rid the site of rampant bot activity, a problem that has yet to be solved. In fact, some sources report that bot activity is worse than ever. It’s difficult to say precisely how many users might be fake: one study estimated 15%, while another put the figure between 25% and 68%. And it’s not just X that has a bot problem. At one point, YouTube had such an issue with fake views that it feared its fraud-detection systems might invert and start classifying real viewers as anomalies. According to Imperva’s 2023 Bad Bot Report, 47.4% of all internet traffic turned out to be bots.
The Atlantic published its pivotal article in 2021, before the generative AI boom. Now, more than ever, the dead internet theory might sound less like a conspiracy theory and more like reality. In my next blog, I’ll explore how AI is changing the internet landscape.