...or is it just another means of reset?
![[Image: lMf8G8Qb_o.jpg]](https://images2.imgbox.com/d9/1c/lMf8G8Qb_o.jpg)
AI isn't just hallucinating... we're hallucinating with it.
In her new paper, philosopher Lucy Osler redefines "AI hallucinations" not as isolated machine errors but as shared delusions between humans and AI. Using the lens of distributed cognition, she warns that when people rely heavily on generative AI to think, remember, or narrate, they may begin to hallucinate with AI.
Hallucinating with AI: AI Psychosis as Distributed Delusions
Quote:October 10, 2025 / Joseph P. Farrell
Even as I sat down last Sunday afternoon to comb through my emails and pull stories for blogging and honourable mentions, a story shared by A.F. popped into my inbox, and it intrigued me so much that I postponed my originally scheduled blog - which I had intended to title "I am Leo, Hear Me Roar!: The Papal Icecapades" - to blog about this story instead. The reason? I have recently advanced yet another of my crazy high octane speculative scenarios: that the whole Artificial Intelligence meme might be being pushed by the American government and deep state as the latest economic rope-a-dope, much like Ronald Reagan's Star Wars Strategic Defense Initiative was simultaneously a real project and a Potemkin village, designed to ensnare the Soviet Union - and its economy - in a non-existent arms race it could not win, by getting it to invest money trying to keep pace with an American project that was more bluster than reality.
Similarly, my "artificial intelligence" scenario, I urged, might be just such a project. For the relatively small investment of a few hundred million or even a few billion dollars in a few "megadata" centers - and for a momentary drain on the country's power grid and a little "suffering" on the part of a people trying to absorb skyrocketing energy costs - nations such as China might be fooled into committing massive monetary and energy resources to an artificial intelligence "race" and busting their economies. I even quipped that something had to be done to soak up all that excess Chinese energy production, and to syphon off the electrical output for useless artificial intelligence projects.
And then along came this paper shared by A.F. (with our thanks):
The AI Bubble and the U.S. Economy: How Long Do “Hallucinations” Last?
Why the AI, as in Artificial Information, bubble looks primed to pop soon despite it being a top bipartisan project in the US.
Now there's much in this paper to ponder, but what caught my eye is the relevance of the scenario of a "bubble" being outlined in the paper, to my recently advanced "break the tablets" scenario. The statement that concerns me occurs near the beginning of the paper:
Quote:The U.S. is undergoing an extraordinary AI-fueled economic boom: The stock market is soaring thanks to exceptionally high valuations of AI-related tech firms, which are fueling economic growth by the hundreds of billions of U.S. dollars they are spending on data centers and other AI infrastructure. The AI investment boom is based on the belief that AI will make workers and firms significantly more productive, which will in turn boost corporate profits to unprecedented levels. But the summer of 2025 did not bring good news for enthusiasts of generative Artificial Intelligence (GenAI) who were all hyped up by the inflated promise of the likes of OpenAI’s Sam Altman that “Artificial General Intelligence” (AGI), the holy grail of current AI research, would be right around the corner.
Let us more closely consider the hype. Already in January 2025, Altman wrote that “we are now confident we know how to build AGI”. Altman’s optimism echoed claims by OpenAI’s partner and major financial backer Microsoft, which had put out a paper in 2023 claiming that the GPT-4 model already exhibited “sparks of AGI.” Elon Musk (in 2024) was equally confident that the Grok model developed by his company xAI would reach AGI, an intelligence “smarter than the smartest human being”, probably by 2025 or at least by 2026. Meta CEO Mark Zuckerberg said that his company was committed to “building full general intelligence”, and that super-intelligence is now “in sight”. Likewise, Dario Amodei, co-founder and CEO of Anthropic, said “powerful AI”, i.e., smarter than a Nobel Prize winner in any field, could come as early as 2026, and usher in a new age of health and abundance — the U.S. would become a “country of geniuses in a datacenter”, if ….. AI didn’t wind up killing us all.
For Mr. Musk and his GenAI fellow travelers, the biggest hurdle on the road to AGI is the lack of computing power (installed in data centers) to train AI bots, which, in turn, is due to a lack of sufficiently advanced computer chips. The demand for more data and more data-crunching capabilities will require about $3 trillion in capital just by 2028, in the estimation of Morgan Stanley. That would exceed the capacity of the global credit and derivative securities markets. Spurred by the imperative to win the AI-race with China, the GenAI propagandists firmly believe that the U.S. can be put on the yellow brick road to the Emerald City of AGI by building more data centers faster (an unmistakenly “accelerationist” expression). (Italicized and boldface emphases added)
Ponder carefully that emphasized statement and the context in which it occurs. The artificial intelligence scheme currently being pursued would soak up more credit than the current credit market could sustain.
And therewith my high octane speculation of the day, for if this be true, and if Morgan Stanley's evaluation of the situation is correct, then we are confronted with yet another scenario: might an artificial intelligence bubble simply be one means of soaking up all the bad paper sloshing around in the system? Is one bubble (artificial intelligence) being created to soak up a previous bubble (the derivatives-crisis hangover from 2008)?
Well... maybe.
But there's a huge fly in the ointment, according to the article, and that fly is so huge that it implies the exact reverse of my artificial intelligence-Potemkin-village scenario. After rehearsing a number of "letdowns" in the artificial intelligence world - from the increasing failures of large language models to the fact that 95% of generative artificial intelligence companies fail - the author closes in on the fundamental problem of the American economy and the current artificial intelligence bubble: it's all hallucination, and completely unreal:
Quote:America’s High-Stakes Geopolitical Bet Gone Wrong
The AI boom (bubble) developed with the support of both major political parties in the U.S. The vision of American firms pushing the AI frontier and reaching AGI first is widely shared — in fact, there is a bipartisan consensus on how important it is that the U.S. should win the global AI race. America’s industrial capability is critically dependent on a number of potential adversary nation-states, including China. In this context, America’s lead in GenAI is considered to constitute a potentially very powerful geopolitical lever: If America manages to get to AGI first, so the analysis goes, it can build up an overwhelming long-term advantage over especially China (see Farrell).
That is the reason why Silicon Valley, Wall Street and the Trump administration are doubling down on the “AGI First” strategy. But astute observers highlight the costs and risks of this strategy.
Prominently, Eric Schmidt and Selina Xu worry, in the New York Times of August 19, 2025, that “Silicon Valley has grown so enamored with accomplishing this goal [of AGI] that it’s alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists. In being solely fixated on this objective, our nation risks falling behind China, which is far less concerned with creating A.I. powerful enough to surpass humans and much more focused on using the technology we have now.”
Schmidt and Xu are rightly worried. Perhaps the plight of the U.S. economy is captured best by OpenAI’s Sam Altman, who fantasizes about putting his data centers in space: “Like, maybe we build a big Dyson sphere around the solar system and say, ‘Hey, it actually makes no sense to put these on Earth.’” For as long as such ‘hallucinations’ on using solar-collecting satellites to harvest (unlimited) star power continue to convince gullible financial investors, the government, and users of the “magic” of AI and the AI industry, the U.S. economy is surely doomed. (emphasis added)
What is highly intriguing about these paragraphs is the geopolitical context in which all this artificial intelligence development is couched. Indeed, the statement that China is focused on optimizing current artificial intelligence capabilities while the USA is focused on developing next-generation capabilities is a misreading of the situation: one need only recall all those stories of recent years, emanating from China, of this or that breakthrough in quantum computing, or entanglement encryption, and so on. And that suggests, quite strongly, that if anyone has been erecting Potemkin villages to force an opponent to waste vast amounts of money on projects with no immediate or mid-term benefit, it is not the United States doing so to China, but rather the opposite: China doing so to the United States.
Well, once again, that's possible in our usual high-octane-speculation way. The problem, however, is that all that bad paper is sloshing around not just in the American financial system, but in China's as well, and that means that this financial game and the artificial intelligence Potemkin villages accompanying it may be a much more complex and complicated game than meets anyone's eye.
Even so, it's worth remembering a significant fact that I pointed out way back in my book Babylon's Banksters (see pp. 21-23): the man responsible for creating the mathematical model that led to the creation of all that bad paper - the derivatives mess, with its mortgage-backed securities and credit default swaps and bundles of bundles of securities - was a mathematician by the name of Dr. David Li.
And, yes, he was Chinese, and after a stint with the Canadian Imperial Bank of Commerce, he returned to China, where he headed up a department - and no, I am not kidding here - evaluating economic risk and risk-assessment strategies. Just the sort of thing one might specialize in if one wanted, perhaps, to create Potemkin village projects...
See you on the flip side...