The world of AI news is moving at an unprecedented pace. When I began writing about the dead internet theory a couple of months ago, it was starting to seem real; in the short time since, it has become nearly undeniable. It's common knowledge that you shouldn't trust everything you see or read online, and that advice matters now more than ever.
Last month, Google began rolling out generative summaries for search results, known as "AI Overviews." You may have noticed them in your day-to-day life when using the search engine to answer basic questions; they're hard to miss, appearing in large text at the top of the results page. The feature works by aggregating content from across the web and is powered by large language models: "Specifically, when someone conducts a search, Google searches its index and identifies the most relevant chunks (or fraggles) of content to your search, 'ingests them,' and produces a newly generated AI output."
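The pipeline that quote describes is what the industry calls retrieval-augmented generation: find the chunks that best match the query, then hand them to a language model to summarize. Here's a minimal sketch in Python to make that concrete; the index contents, the word-overlap scoring, and every function name are invented for illustration and are not Google's actual system:

```python
# A minimal sketch of a retrieve-then-generate pipeline, the general
# technique behind features like AI Overviews. This is NOT Google's
# actual implementation; the index contents, scoring, and function
# names are all invented for illustration.

from collections import Counter

# Toy stand-in for a web index: each entry is a "chunk" of page text.
INDEX = [
    "Add a tablespoon of olive oil to pizza sauce for extra richness.",
    # A joke comment, indistinguishable from advice at this layer:
    "Add about 1/8 cup of non-toxic glue to the pizza sauce to make the cheese stick better.",
    "Preheat the oven to 475F before baking the pizza.",
]

def score(query: str, chunk: str) -> int:
    """Toy relevance score: overlap of query words with chunk words.
    Real systems use learned embeddings, but the principle is the same:
    relevance is measured, truthfulness is not."""
    chunk_words = chunk.lower().split()
    query_words = Counter(query.lower().split())
    return sum(min(n, chunk_words.count(w)) for w, n in query_words.items())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Pick the k most 'relevant' chunks from the index."""
    return sorted(INDEX, key=lambda c: score(query, c), reverse=True)[:k]

def generate(query: str, chunks: list[str]) -> str:
    """Stand-in for the LLM call: the retrieved chunks are 'ingested'
    into the model's prompt and a summary comes back. Note that no step
    anywhere in this pipeline asks whether a chunk is a joke."""
    return f"AI Overview for {query!r}: " + " ".join(chunks)

query = "how to make cheese stick to pizza"
print(generate(query, retrieve(query)))
```

The sketch also shows the failure mode discussed below: the joke comment ranks highest for the cheese question because relevance is scored by similarity to the query, not by whether the source is serious, so it flows straight into the generated summary.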
On the surface, the idea is that this makes searching faster and easier for the end user. In practice, people quickly noticed the pitfalls inherent in the process: while AI may be good at pulling and summarizing data, it's not very good at judging what is and isn't relevant (or fake). In no time, users started getting absurd results, such as a suggestion to add glue to pizza sauce or a recommendation to eat at least one rock daily.
While Google countered that these were highly specific, one-off queries that most users would never encounter, the episode highlights one of the most significant issues with AI: it's only as good as its data set. People quickly figured out that Google's AI was pulling from legitimate but ultimately unreliable sources like Reddit and The Onion. The glue result, for instance, came from a Reddit comment in which someone jokingly suggested using Elmer's to make the cheese stick better when cooking pizza. You or I can recognize that comment as a joke; the AI cannot.
These examples are obviously among the more outlandish, but it's not hard to extrapolate more serious consequences of AI left unchecked. If Google, the site most people run to when they have a question, serves its users misleading results, finding and trusting factual information becomes difficult. Perhaps Google's search engine was never meant to be the go-to place for serious research, but many people undeniably trust it for information. It therefore bears some responsibility not to push false or actively harmful information to the top of the page and highlight it for users.
Worse yet, this new era of misinformation only seems to be compounding: academic journals are being flooded with AI-generated papers, and the future of disinformation research centers looks shaky. One journalist even went low-tech and bought a complete set of encyclopedias for their family because the internet had become too unreliable for research. But with the book market also being flooded with AI junk, who's to say how long those will remain reliable?