AI Crash Report: What Triggers the Collapse

What three physicists, a Serbian teenager, and seven men in a circle can teach us about the coming AI collapse

March 17, 2026

Why Does It Always Collapse at That Particular Moment?

In the summer of 1987, three physicists at Brookhaven National Laboratory conducted one of the most boring experiments in the history of science. Per Bak, Chao Tang, and Kurt Wiesenfeld dropped grains of sand onto a table, one at a time, in a computer model built for exactly that purpose.

That’s it. They just dropped sand.

The pile grew. The slopes steepened. They kept dropping. And then, after some unremarkable grain landed—a grain that weighed exactly the same as every grain before it, that looked identical in every measurable way—the pile collapsed.

They ran it again. Same result. The triggering grain was never special. It was never heavier or sharper or dropped from a greater height. The avalanche wasn’t caused by something unusual happening to the pile. It was caused by what the pile had become.

Bak, Tang, and Wiesenfeld had discovered something fundamental about how complex systems fail. They called it “self-organized criticality.” The rest of us might call it the answer to a question that has haunted every bubble in financial history: Why then? Why did it collapse at that particular moment and not some other moment?

The sandpile doesn’t care which grain falls last. It only cares that it has reached the critical state—as steep as it can possibly get. After that, collapse isn’t a matter of if. It’s a matter of which grain happens to land next.
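Their rule is simple enough to sketch in a few lines. Below is a minimal version of the Bak-Tang-Wiesenfeld sandpile (the grid size, drop count, and random seed are my own arbitrary choices): identical grains land on random cells, and any cell that accumulates four grains topples, sending one grain to each neighbor and possibly setting off a chain.

```python
import random

def avalanche(grid, n, x, y):
    """Drop one grain at (x, y); topple until stable. Returns toppling count."""
    grid[x][y] += 1
    topples = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4          # four grains topple, one to each neighbor
        topples += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # grains at the edge fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return topples

random.seed(0)
n = 20
grid = [[0] * n for _ in range(n)]
sizes = [avalanche(grid, n, random.randrange(n), random.randrange(n))
         for _ in range(20000)]

late = sizes[10000:]   # drops made after the pile has reached criticality
print(max(late), sum(s == 0 for s in late))
```

Every grain is identical, yet once the pile self-organizes to its critical state, most drops do nothing and a few set off avalanches that sweep the whole grid. Which grain triggers which is unpredictable in principle.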

This is how the dot-com bubble collapsed. This is how the housing market collapsed in 2008. This is how every complex system at criticality collapses.

The math doesn’t care what’s being traded—tulips, mortgage-backed securities, or GPU compute.

The mechanism is always the same.

How Does an Ordinary Event Trigger a Catastrophe?

On June 28, 1914, a nineteen-year-old Serbian nationalist named Gavrilo Princip stood on a street corner in Sarajevo. He was not an important person. Sarajevo was not an important city. The Archduke Franz Ferdinand, whose motorcade happened to take a wrong turn directly in front of Princip, was not particularly important either—a presumptive heir to a throne that had been declining in relevance for decades.

Princip fired two shots. Within six weeks, Europe was at war. Within four years, twenty million people were dead, four empires had collapsed, and the map of the world had been redrawn.

Every historian who has studied the assassination agrees on one point: it shouldn’t have mattered. The Austro-Hungarian Empire had survived assassinations before. Serbia was a minor regional actor. The network of alliances that pulled Britain, France, Russia, Germany, and the Ottoman Empire into the conflict was so convoluted that even the diplomats who designed it didn’t fully understand how it worked.

But that was precisely the problem. Europe in 1914 had spent decades constructing an elaborate system of mutual defense treaties, interlocking financial obligations, and mobilization timetables that were designed to prevent war by making the consequences of war unthinkable. Instead, they had built a sandpile. The slopes had been steepening for years. All that was missing was the final grain.

Gavrilo Princip was that grain.

He was ordinary in every way that mattered. The catastrophe wasn’t caused by his bullets. It was caused by what Europe had become.

What Keeps Everyone Frozen?

I want you to imagine seven men standing in a circle. Each one is holding a gun. Each gun is pointed at one or more of the other men.

This is the configuration that Quentin Tarantino made famous in Reservoir Dogs—the Mexican standoff. The logic of the arrangement is perverse but stable: no one can move without triggering a bloodbath. So no one moves. The system holds because everyone understands that breaking it means mutual destruction.

Now I want you to imagine those same seven men, guns still raised, but standing on a pile of sand.

And the sand is still falling.

Who Are the Men in the Circle?

Let me introduce you to the men in the circle.

There’s Jensen Huang, the CEO of Nvidia. In a normal industry, Huang would simply be a successful chipmaker. In the AI industry, he is something closer to a deity. Nvidia controls roughly eighty percent of the market for AI chips. More importantly, Nvidia sells those chips at gross margins approaching eighty percent—a figure that would be considered obscene in almost any other hardware business.

Huang’s gun is pointed at the hyperscalers: Microsoft, Google, Amazon. His message is simple. Keep buying my chips at these prices. If you slow down, my stock price collapses—and since Nvidia has become one of the most valuable companies on Earth, that collapse drags the entire S&P 500 down with it. Your pension funds. Your 401(k)s. Your index funds. All of it.

Then there’s Sam Altman, the CEO of OpenAI. Altman has built the most famous AI company in the world on a simple bet: that investors will keep funding him at ever-increasing valuations until artificial general intelligence arrives and makes all previous investments look like pocket change. In late February 2026, he closed the largest private funding round in history—$110 billion at an $840 billion valuation.

Altman’s gun is pointed at the venture capitalists and the big tech companies that funded him. Keep writing checks, he says. If you don’t, I collapse. And if I collapse, your investments zero out, your cloud revenue evaporates, and you have to explain to your shareholders why you lit a hundred billion dollars on fire.

Satya Nadella runs Microsoft. Microsoft has bet more on OpenAI than any other company—over a hundred billion dollars in compute commitments. Nadella’s gun is pointed directly at Altman. Deliver AGI, he says. Or at least deliver enough enterprise profit to justify what I’ve spent. Otherwise, I cut your compute and find another partner.

Dario Amodei runs Anthropic, the company that positions itself as the “safe” alternative to OpenAI. Amodei’s gun is pointed at everyone—he’s competing for the same talent, the same compute, the same customers. But Amodei just made a decision that may have sealed his fate: when the Pentagon demanded that Anthropic remove safety restrictions so the military could use Claude for mass surveillance and autonomous weapons, Amodei refused. Within days, Anthropic was officially blacklisted as a “national security risk.”

Amodei kept his principles. The government cut his lifeline.

Sundar Pichai runs Google. He’s watching his search monopoly erode in real-time while spending tens of billions of dollars trying to catch up in AI. His gun is pointed at everyone, but his hand is shaking.

Larry Ellison runs Oracle. Of everyone in the room, Ellison is the most exposed. Oracle is carrying $56 billion in debt and $248 billion in lease obligations. The company has bet its future on building massive AI data centers. Its only major tenant for those data centers is OpenAI. If OpenAI can’t pay its bills, Oracle may not survive.

And then there’s Masayoshi Son, the CEO of SoftBank. Son is the wildcard. He has been the wildcard in every tech bubble for thirty years. To fund his OpenAI commitments, Son has already had to sell his Nvidia stock and take out a $15 billion margin loan against his holdings in ARM. He’s paying for his equity stake in installments. As one analyst put it: he’s buying OpenAI on Klarna.

Seven men. Seven guns. And standing by the door, shotgun aimed at the whole room: Wall Street. Show us real returns soon, the analysts say. Or we sell.

What Makes These Grains Hollow?

Now here is the part that everyone misses.

In a normal standoff, the participants are standing on solid ground. The danger is that someone’s finger will twitch. But the ground doesn’t move.

In the AI standoff, the ground is moving.

Every dollar these companies pump into the system to keep the game alive lands on the pile beneath their feet. They are building the collapse by trying to prevent it.

Let’s look at how the money actually moves.

Tech giants invest billions in AI labs. But not as cash—as cloud credits. Or with contractual requirements that the money must be spent on the investor’s own infrastructure.

The AI labs “spend” those credits renting server space back from the tech giants.

The tech giants report “cloud revenue growth” to Wall Street.

Wall Street enthusiasm justifies buying tens of billions more in Nvidia chips.

Nvidia takes its profits and invests back into AI labs and cloud companies—to ensure they keep buying Nvidia chips.

Nvidia has committed $27 billion over six years to purchase compute from its own customers. It backstopped $3.5 billion in leases for CoreWeave, a cloud company, and bought $2 billion of CoreWeave stock so CoreWeave could secure loans to buy more Nvidia GPUs.

When OpenAI closed that $110 billion round, who wrote the checks? Amazon put in $50 billion. Nvidia put in $30 billion. SoftBank put in $30 billion. The exact same companies that need OpenAI to survive so they can keep renting their servers and selling their chips.

This is an $840 billion valuation built on an industry paying itself.

In every previous bubble, the grains were at least real capital. Real money flowing from investors to companies to products to customers. The AI bubble’s grains are hollow. They’re circular. The same dollars flowing around and around, getting counted multiple times, building the pile steeper with every pass.
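The multiple counting is easy to see with toy numbers. The sketch below pushes a single pool of outside cash around the loop and adds up the revenue booked along the way; the amounts, lap count, and leakage rate are all made up for illustration, not drawn from any company's filings.

```python
# Toy model of circular financing: one pool of outside cash cycles
# through the loop while each participant books its leg as revenue.
# All numbers are illustrative, not actual company figures.

outside_cash = 10.0      # real money entering the circle from outside
booked_revenue = 0.0
laps = 4                 # investment -> cloud rent -> chip order -> reinvest

cash = outside_cash
for _ in range(laps):
    booked_revenue += cash   # cloud provider books the lab's rented compute
    booked_revenue += cash   # chipmaker books the provider's GPU order
    cash *= 0.9              # a little leaks out each lap (salaries, power)

print(f"outside cash in: {outside_cash:.2f}")
print(f"revenue booked:  {booked_revenue:.2f}")
```

Four laps of ten outside dollars produce nearly seventy dollars of reported revenue. The money is real at every step; the aggregate is not.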

And what’s not inside that circle? Actual end-user revenue. Actual productivity gains. When the National Bureau of Economic Research surveyed 6,000 corporate executives about their AI investments, ninety percent reported zero measurable impact. On average, the executives expect productivity gains of just 1.4 percent over three years.

Does that mathematically justify any of this?

What Does the Trigger Look Like?

Every bubble has its cast of characters. In 2008, it was Lehman Brothers, Bear Stearns, AIG, Countrywide. In 2000, it was Pets.com, Webvan, WorldCom, the telecom giants building fiber no one needed.

The names change. The structure doesn’t.

Neither does the nature of the trigger.

Everyone who lived through 2008 remembers Lehman Brothers. The September weekend when the firm collapsed. The chaos that followed. But Lehman wasn’t the trigger. Lehman was the explosion.

The actual trigger happened a year and a half earlier, when a subprime lender called New Century Financial filed for bankruptcy and a French bank named BNP Paribas froze three obscure investment funds because, as they put it, the math had stopped working. Nobody on Main Street noticed. The stock market went on to hit record highs. But the grain had already fallen. The avalanche had already begun.

In the AI bubble, the triggers are already lining up.

Oracle’s credit outlook was just downgraded by S&P Global. Blue Owl Capital walked away from a $10 billion data center deal with Oracle, citing “unfavorable economics.” CoreWeave—with Microsoft and OpenAI as its main customers—just reported operating margins of negative six percent. They made more money than ever before, and their margins got worse.

In the past six weeks, 41 data centers have been canceled, shelved, or delayed. The industry is building ten-building campuses that are only two buildings in. The GPUs have been ordered but not delivered. The power connections that were supposed to take months are taking years.

Any of these could be the grain.

An auditor looks at a funding round and refuses to sign off. A CEO misspeaks on an earnings call. A quarterly guidance mentions that chip orders have “slightly softened.” A consumer boycott spreads faster than the PR team can contain it.

When Sam Altman signed the Pentagon deal that Anthropic refused, over 1.5 million users joined a “Quit GPT” campaign. ChatGPT uninstalls surged 300%. OpenAI’s head of robotics resigned in public protest. It’s hard to sustain an $840 billion valuation when your top talent is walking out and your customers are deleting your app in disgust.

The trigger will be mundane. It always is.

What Gets Stranded When This One Bursts?

There is one more thing that makes this different from every collapse in living memory.

When the dot-com bubble burst, the overbuilt infrastructure was fiber optic cable. That cable depreciated slowly—over fifteen to twenty-four years. Much of the “dark fiber” from the 1990s eventually lit up. It found uses. The stranded assets became valuable again.

GPUs don’t work that way.

GPUs depreciate in three to six years. They become obsolete even faster. When this bubble bursts, the data centers won’t sit idle waiting for demand to catch up. They’ll be filled with chips that are already worthless, bought with loans that won’t be paid off for a decade.
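The mismatch between asset life and loan term can be put in numbers. Here is a straight-line depreciation sketch; the five-year GPU life, ten-year loan, and round principal are illustrative assumptions, not figures from any filing.

```python
def remaining(value, life_years, year):
    """Book value after straight-line depreciation to zero over life_years."""
    return max(0.0, value * (1 - year / life_years))

principal = 100.0            # borrowed to buy the hardware
gpu_life, loan_term = 5, 10  # the asset dies in half the loan's term

for year in range(0, 11, 2):
    asset = remaining(principal, gpu_life, year)
    debt = remaining(principal, loan_term, year)  # principal paid down evenly
    print(f"year {year:2d}: asset {asset:6.1f}, debt outstanding {debt:6.1f}")
```

By year six the hardware is worth nothing while forty percent of the principal is still owed. Stretch the asset life to fiber's fifteen-plus years and that inversion never happens inside the loan's term.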

The scale of what gets stranded: $178 billion in data center deals signed in the US alone in 2025. The actual market for renting AI compute—outside the circular financing? Less than one billion dollars.

That’s not a rounding error. That’s multiple orders of magnitude.

The infrastructure cannot be repurposed. The assets decay faster than the debt. The stranding is permanent.

Where Is the AI Industry’s Gavrilo Princip?

In early 1914, Europe was at peace. The great powers had not fought each other in over forty years. Trade was booming. Technology was advancing. The system of interlocking alliances was supposed to guarantee stability by making war unthinkable.

Instead, they had built a machine that no one fully understood, with dependencies that no one had fully mapped, primed to convert any small shock into a catastrophic cascade.

All it needed was a nineteen-year-old with a pistol on a street corner in Sarajevo.

Somewhere out there right now is the AI industry’s Gavrilo Princip. A failed funding round. A bankruptcy filing. A single company that can’t make payroll. Something nobody is watching.

It won’t look important when it happens. It will look like any other grain.

By the time you read this, it may have already happened. The name of the trigger doesn’t matter. What matters is that you can see the structure now. You understand the physics. You can see the seven men in the circle, guns raised, sand falling beneath their feet.

The pile is as steep as it can get.

The next grain is already in the air.

---

Julian Whatley spent 35 years in Hollywood and Madison Avenue learning how to manufacture perception. AI Crash Report is his analysis of the gap between what the AI industry is selling and what the physics actually allow.
