Warning: Don't Believe Everything AI Search Says
AI Search Is Broken. That's A Problem.
A recent Note detailing some of the shortcomings of the current generation of artificial intelligence search engines warrants some quick commentary on the state of AI search: namely, that it is quite atrocious.
This may come as a shock, but it turns out that an astounding proportion of AI search results are flat-out incorrect, according to a new study published by the Columbia Journalism Review. We hope you were sitting down.
Conducted by researchers at the Tow Center for Digital Journalism, the analysis probed eight AI models including OpenAI's ChatGPT search and Google's Gemini, finding that overall, they gave an incorrect answer to more than 60 percent of queries.
According to the study, the most accurate search engine, Perplexity, was still wrong 37% of the time, but overall, AI searches are almost universally bad.
Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
Premium chatbots provided more confidently incorrect answers than their free counterparts.
Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences (a sketch of what compliance with that protocol looks like follows this list).
Generative search tools fabricated links and cited syndicated and copied versions of articles.
Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
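On the Robots Exclusion Protocol point above: here is a minimal sketch of what compliance looks like, assuming a crawler that actually consults robots.txt before fetching a page. The domain, article URL, and user-agent string are hypothetical placeholders, not details taken from the study.

```python
# Minimal sketch: how a compliant crawler consults robots.txt before fetching.
# The domain, path, and user-agent string below are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()  # download and parse the site's robots.txt

user_agent = "ExampleAIBot"  # hypothetical crawler identifier
article_url = "https://example.com/news/some-article"

if robots.can_fetch(user_agent, article_url):
    print(f"{user_agent} may fetch {article_url}")
else:
    print(f"{user_agent} is disallowed from {article_url}; a compliant crawler stops here")
```

The study's finding is that several chatbots appeared to retrieve and summarize content even from publishers whose robots.txt asked crawlers to stay away, which suggests a check like the one above was skipped or ignored.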
This is especially problematic given the finding by management consultancy Bain & Company that roughly 60 percent of searches now end without a click, with users settling for the AI summary of the results.
On traditional search engines, the zero-click trend is accelerating across demographics, with our research showing that about 60% of searches now end without the user progressing to another destination site. Even among those who say they are skeptical of generative AI, about half say that most of their queries are answered on the search page without a click.
Also, the push for “generative AI” in search appears to be driving a massive consolidation of search results into a small handful of sources. A 2023 study of 23 websites found that Google’s Search Generative Experience caused traffic to drop by 18-63%.
Our study focused on websites in the technology industry, with traffic mainly from informational keywords. There is large variance inside our sample, with some websites gaining as much as 219% in traffic while others are losing as much as 95%.
This is causing some to speculate that AI search could be “the end” for information-driven content.
According to Wikander, resources based on informational content like guides and how-to tutorials are hit hardest. With AI Overviews providing in-depth answers within search results, the need to click through to these resources has significantly declined. This shift is particularly damaging for content marketers who have built strategies on top-of-funnel informational content to create awareness.
With findings such as these out there, it has become my contention that the presumed future dominance of AI amounts to a violation of the Second Law of Thermodynamics.
By distorting search results and squelching a large majority of potential results, AI is pushing what amounts to a structural information disequilibrium, and there is no system in nature that sustains disequilibria for any extended period of time. In all systems, the natural trend is always toward equilibrium.
My hypothesis is that a point will come when AI precipitates a catastrophic information failure in which people are seriously hurt, physically or financially, to an extent that product liability concerns force AI vendors to change their business models in order to limit their exposure.
What I suspect will happen in that circumstance is that AI developers will alter their technology base to give the end user greater algorithmic control. Instead of the LLM chatbot being a black box with unknown processing rules, interfaces will be developed to expose far more granular control over internal LLM functions.
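To make that idea concrete, here is a purely illustrative sketch of what user-facing “algorithmic control” over an AI search assistant might look like. Every name, field, and default below is hypothetical; no current vendor exposes such an interface.

```python
# Purely hypothetical illustration of user-adjustable controls for an AI
# search assistant; none of these names correspond to an existing vendor API.
from dataclasses import dataclass, field

@dataclass
class SearchControls:
    allowed_domains: list[str] = field(default_factory=list)  # only cite these sources
    require_working_citations: bool = True   # refuse to answer without a verifiable link
    abstain_below_confidence: float = 0.8    # answer "I don't know" under this threshold
    obey_robots_txt: bool = True             # never surface content a publisher disallowed

# A user could tighten or loosen these knobs per query instead of trusting a black box.
controls = SearchControls(allowed_domains=["reuters.com", "apnews.com"])
print(controls)
```

The particular knobs matter less than the shift they represent: once the user sets the thresholds, the vendor's liability exposure shrinks, which is exactly the pressure I expect to drive the change.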
That’s the only way I can see AI proceeding in a sustainable fashion over the long term.




This is a hugely important insight, Peter! The world’s economy is setting itself up to be based on a technology that will end up being regarded as inaccurate, biased, unstable, lacking credibility, and generally laughable. You’ve also mentioned that the data input required for LLMs is so huge as to be unsustainable.
What will happen to the $1.2 trillion investment in AI tech that Saudi Arabia has promised to the U.S. economy? Will the failure or cancellation of this investment precipitate a financial collapse? (And this is just one scenario!)
Another implication is that analysts such as yourself who are fact-based, evidence-based, highly intelligent, knowledgeable and wise will be needed to sort fact from AI-generated nonsense. I hope you’ve got an extra 347 million hours, Peter.