AI is a Trillion-Dollar Time-Bomb

February 26, 2026 · Software
There is a particular kind of madness that only becomes visible in hindsight. The participants are too close to it, too invested, too intoxicated by the possibility of being right. Everyone around them is doing the same thing, which feels like validation. The money keeps flowing, which feels like proof. We are living inside that madness right now. And we are calling it progress.


The Numbers Don't Add Up

Microsoft, Google, Meta, and Amazon have collectively committed over a trillion dollars to AI infrastructure. Data centres the size of small cities. Power grids rerouted to feed GPU clusters. Entire organisations restructured around a single bet that artificial general intelligence is imminent and that whoever arrives first takes everything. The revenue side of this equation is conspicuously absent from the conversation.

OpenAI loses billions every year. Anthropic loses billions every year. Google is burning its search margins to fund a product that cannibalises its search margins. The killer application — the one that converts AI capability into durable, defensible revenue at the required scale — has not arrived. What exists instead is a collective agreement to keep spending and hope someone figures it out before the money runs out. That is not a business model. That is a prayer. And trillion-dollar prayers have a poor track record.


The Billionaires and the Politicians Are Not Planning. They Are Posturing.

What is remarkable about the people steering this moment is not their intelligence. Many of them are genuinely brilliant. What is remarkable is their shortsightedness.

Sam Altman tours the world asking governments for trillions in infrastructure investment while his company has yet to demonstrate a path to profitability. Elon Musk simultaneously warns that AI will destroy humanity and races to build the most powerful AI he can. Politicians hold hearings, nod gravely, and pass nothing. The European AI Act, the most serious regulatory attempt so far, was lobbied into irrelevance before the ink dried.

The incentive structure explains everything. Billionaires benefit from the hype directly — in valuation, in political influence, in the narrative that they are building the future. Politicians benefit from association with technological optimism. Nobody in a position of power benefits from asking the hard questions out loud. So the hard questions go unasked. And the bet gets bigger.


Nobody Has a Plan for the Workforce

Here is a question that should be at the centre of every AI policy discussion and is instead treated as a footnote: what happens to the people whose jobs disappear?

The honest answer from the industry is some variation of "new jobs will be created." This is historically true in the long run. It is historically useless in the short run, which is where actual human beings live their actual lives. The agricultural revolution displaced millions over generations. The industrial revolution displaced millions over decades. AI is being positioned to displace millions over years, possibly faster.

There is no retraining programme at scale. There is no universal basic income on the table in any major economy. There is no serious political coalition forming around workforce transition. What exists are think pieces, pilot programmes, and the quiet assumption that the market will sort it out. It will not. Markets are efficient at allocating resources toward profit. They are not efficient at managing civilisational transitions in ways that distribute the costs fairly. That requires political will, which requires politicians who are willing to challenge the people funding their campaigns. We are not getting that. We are getting photo opportunities at data centre openings.


They Are Ignoring Model Collapse

There is a technical problem at the heart of the AI expansion that receives almost no mainstream attention: model collapse. Initially, large language models were trained on human-generated data such as text, code, and images produced by people over decades. That data is finite and increasingly polluted. As AI-generated content floods the internet, future models will be trained on the outputs of previous models, which were trained on the outputs of previous models, in a recursive loop that degrades quality with each iteration. The signal gets noisier. The errors compound. The hallucinations multiply.
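The mechanism can be sketched with a toy simulation. This is illustrative only, not a claim about any real training pipeline: treat a "model" as the empirical distribution of its training tokens, and train each generation on samples drawn from the previous generation. A token that fails to appear in any generation can never come back, so the distribution's support shrinks over time, a toy analogue of losing the tails of human-generated data.

```python
import random

def next_generation(data, k):
    """Train the next 'model' on the previous model's output: here,
    simply resample k tokens from the empirical distribution of the
    previous generation. A token absent from any generation is gone
    for good -- a model that never saw it cannot emit it."""
    return random.choices(data, k=k)

random.seed(42)
VOCAB, K = 100, 200

# Generation 0: "human" data covering the full vocabulary exactly.
data = list(range(VOCAB)) * (K // VOCAB)
support = [len(set(data))]  # number of distinct tokens per generation

for _ in range(50):
    data = next_generation(data, K)
    support.append(len(set(data)))

print("distinct tokens, gen 0 :", support[0])
print("distinct tokens, gen 50:", support[-1])
```

Because each generation can only redistribute mass over tokens it has actually seen, the support size is non-increasing by construction; run long enough, the simulation collapses toward a handful of tokens, which is the toy version of "sophisticated mediocrity".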

Researchers have been raising this problem for years. The industry response has been, essentially, to keep scaling and hope the problem solves itself. This is not an engineering solution. It is an act of faith dressed in the language of optimism. And it suggests that the long-term trajectory of these systems may be not toward greater intelligence, but toward a sophisticated, convincing, and expensive kind of mediocrity.


They May Have Overestimated the Ceiling

And then there is the possibility that nobody on a trillion-dollar investment committee wants to seriously entertain: what if this is as good as it gets?

Current AI systems are genuinely impressive. They are also genuinely limited in ways that their proponents consistently understate. They cannot reason reliably. They cannot plan over long horizons. They confabulate with confidence. They are brittle in novel situations. They have no understanding of the world — only statistical patterns extracted from representations of the world.

The leap from "impressive pattern matching" to "general intelligence" is not a matter of more compute or more data. It may require a fundamentally different approach that does not yet exist. Several serious researchers — not fringe voices, but people who have worked at the frontier of this field — believe we are approaching the limits of what current architectures can achieve. If they are right, the trillion-dollar bet is not just overpriced. It is aimed at the wrong target entirely.


What Happens When the Bubble Pops

At some point the gap between AI's costs and its revenues will become impossible to paper over. The correction will be sharp. Some companies will not survive it. The infrastructure overhang will depress investment for years.

What is less certain is whether the correction arrives before or after we have embedded these systems so deeply into healthcare, legal systems, financial markets, and military decision-making that unwinding them becomes its own catastrophe. The people most exposed to that risk are not the ones making the bet. They are the ones who had no say in it.

The AI delusion is not that the technology is useless. It is that the people in charge have convinced themselves that moving fast enough excuses the absence of a plan. That scale substitutes for wisdom. That the market will clean up whatever mess they leave behind. It will not. It never does. But by the time that becomes obvious, the billionaires will have moved on to the next thing. The politicians will have retired. And the rest of us will be left holding the consequences of a bet we were never asked to make.