Tick, Tick…Boom!



Andrew Ross Sorkin; illustration by Lorenzo Gritti

In most episodes of speculative excess, the prevailing mood has been one of denial. From Amsterdam in 1636 to Silicon Valley in 2000, investors have typically insisted they were witnessing not a bubble but a revolution: a new paradigm, the dawn of an era. When challenged on whether prices had become irrational, they reached for the evergreen defense: This time it’s different.

The current boom in artificial intelligence stands apart for its lack of denial. The notion that we are in the frothy, hype-driven phase of technological speculation has become conventional wisdom. Venture capitalists and technologists openly acknowledge that valuations are inflated, expectations are overblown, and vast sums of capital are chasing both promise and illusion. Rather than contesting the bubble’s existence, they embrace it as not only inevitable but perhaps even essential to the breakthroughs ahead. This marks a subtle but significant evolution: where previous bubbles were about believing in the impossible, the current one seems to involve believing in the bubble itself.

The belief that bubbles can be useful—wasteful in the short run but transformative over time—is not entirely new. Among certain economists and historians of technology, it has become the dominant view. In her influential book Technological Revolutions and Financial Capital (2002), the economist Carlota Perez argued that major bubbles often accompany the early stages of general-purpose technologies. Speculative fevers can drive investment far beyond the potential short-term returns, but the overbuilt infrastructure eventually becomes the backbone of entire industries. The British railway boom of the 1840s laid far more track than demand justified but helped create the logistics system for industrial capitalism. The fiber-optic mania of the 1990s left behind excess capacity that later enabled the rise of the Internet.

The dot-com bubble that burst in 2000 is often cited in this light. While many start-ups failed, speculative funding helped build the infrastructure, workforce, and culture that would fuel Amazon, Google, and the digital economy as a whole. The libertarian economist Tyler Cowen, a guru to the Silicon Valley crowd, says that we shouldn’t worry about unprecedented levels of investment in AI because the benefits greatly outweigh the potential harm. “In fact, what we are seeing right now is a shortage in the AI sector’s capacity to meet demand,” Cowen recently wrote.

Major tech companies are investing in more computing capacity, but they still cannot serve all the customers who want access to AI systems. That augurs well for the future of the sector, even if there are dips and spills along the way.

Yet there are growing reasons to doubt whether the AI bubble—if that is in fact what it is—will leave behind anything as enduring as nation-spanning rail tracks or fiber-optic cables. Skeptics including the investor Michael Burry, made famous by The Big Short, and the technologist Paul Kedrosky argue that analogies to earlier industrial bubbles are dangerously misleading. The largest share of capital expenditure in the AI boom is going toward buying Nvidia chips that power large language models. But Nvidia releases dramatically more efficient chips every two years. If and when AI revenue materializes at scale, the data centers will still be standing and their cooling systems should still be working, but the chips that make up the bulk of their cost will be obsolete. That suggests not only that OpenAI may soon run out of money, as the economics writer Sebastian Mallaby recently argued in The New York Times, but that the infrastructure being created is more like single-use scaffolding than underground fiber, which waited patiently for future enterprises to light it up and put it to use.

These critiques challenge the soothing assumption that even failed AI ventures will leave behind the building blocks of the future. If the lion’s share of investment is being used to buy hardware that becomes outdated every two years, then we are not in a productive infrastructure bubble akin to 2000. Instead we may be incubating something far more dangerous to the economy: a financial bubble, with no upside to speak of. The rise of circular investment structures and the growing use of off-balance-sheet financing are among the worrisome signals.

If skeptics like Burry and Kedrosky are right, we may come to see this era not as one of creative destruction but as a strange moment of self-aware self-deception. The AI bubble could end up looking less like the dot-com crash of 2000 and more like the asset bubble and systemic financial collapse of 2008—or 1929. That makes understanding past bubbles all the more urgent.

At the center of questions like these, one inevitably finds Andrew Ross Sorkin. While he is best known as a New York Times journalist—and a very good one—that label hardly captures the breadth of his presence in financial discussions. He’s also a coanchor of CNBC’s Squawk Box and the founder of DealBook, a digital franchise he launched inside the Times in 2001. In addition to its daily newsletter, DealBook hosts an annual summit where Sorkin conducts a full day of interviews with some of the most powerful figures in finance, politics, and technology.

Sorkin’s reporting speaks the language of markets without gratuitous moralizing. His interviews are incisive and disarmingly fair, and his rare combination of affability and rigor has made him one of the few journalists respected by both Wall Street and Silicon Valley—industries that often flaunt their loathing of the press.

It was Sorkin’s insider access that enabled him to write Too Big to Fail (2009), the best layperson’s account of the 2008 financial crisis. The book (and its HBO adaptation, for which Sorkin served as coproducer) dramatized the terror inside the rooms where the crucial decisions were made and turned some of the world’s least charismatic figures—bankers, bank regulators, institutional investors—into complex, often unexpectedly sympathetic characters. You don’t forget the image of Treasury Secretary Henry Paulson puking into a trash can from sheer anxiety as the financial system teetered. The book made clear that 2008 was the Cuban missile crisis of global finance: we came perilously close to an economic collapse on the scale of the 1930s. And even with a successful policy response, the damage wasn’t averted, just prevented from turning into something even worse.

If Too Big to Fail was journalism as a first draft of history, Sorkin’s new book, 1929, is an attempt to turn history back into journalism. Writing about the more distant past, Sorkin relies on more or less the same method: choosing a cast of characters and evoking scenes of high-stakes drama filled with juicy detail. The problem with this approach is that everyone is dead—there’s no one to interview, and few primary sources for the human drama of it all. The Great Depression was extensively memorialized through documentary photography and oral history interviews, both at the time and in the decades afterward. Not so the great crash. Sorkin draws on letters, speeches, newspaper stories, and bank archives to try to animate the starched collars. It may not be his fault that it only intermittently works.

His curtain opens on the high-stress predicament of Charles Mitchell, president of National City Bank. (New York gala-goers will know its old headquarters as what is now Cipriani Wall Street.) Mitchell’s dream was to create the world’s largest financial institution by taking over the Corn Exchange Bank. National City intended to pay for the acquisition primarily with its own stock—if you owned a share of Corn Exchange, you could trade it for either $360 in cash or four fifths of a share of National City, which at the time the deal was announced would have been worth around $397. But as markets faltered in October 1929, Mitchell was faced with the threat of his bank’s share price falling too low for him to consummate the transaction. He tried to support National City’s price by buying back shares—and accidentally bought so many that he risked triggering a bank run.
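The deal terms explain why the share price mattered so much to Mitchell. A minimal sketch of the arithmetic, using only the figures given above (the $450 breakeven is an inference from those terms, not a number from the book):

```python
# Corn Exchange holders could choose $360 in cash or 4/5 of a
# National City share for each share they held.
CASH_OPTION = 360.0
EXCHANGE_RATIO = 0.8  # four fifths of a National City share

def stock_option_value(national_city_price: float) -> float:
    """Value of the stock alternative at a given National City price."""
    return EXCHANGE_RATIO * national_city_price

# At announcement the stock option was worth about $397, implying
# National City traded near $397 / 0.8, roughly $496 a share.
implied_price = 397.0 / EXCHANGE_RATIO

# Inferred breakeven: once National City fell below $360 / 0.8 = $450,
# the stock option would be worth less than the cash, and holders would
# demand payment in cash -- hence Mitchell's scramble to prop up the price.
breakeven = CASH_OPTION / EXCHANGE_RATIO

print(round(implied_price, 2))  # 496.25
print(round(breakeven, 2))      # 450.0
```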

The tactic not only failed but led to the public exposure of Mitchell’s shady personal dealings. During the 1933 Pecora hearings into the causes of the Great Depression, Mitchell admitted to selling several million dollars’ worth of his own shares to his wife—to offset his income tax bill without further damaging confidence in his bank. This was something of a gray area under the law—Mitchell was eventually acquitted of tax fraud charges—but he was forced to resign from National City. His reputation never recovered. Mitchell’s disastrous empire building would have its echo eighty years later, when National City, forced to spin off its securities business by the Glass–Steagall Act of 1933 and recreated after many mergers as Citigroup, became a byword for slipshod banking practices in the crisis of 2008.

Like most other characters we meet in Sorkin’s book—including Thomas Lamont of JP Morgan, Treasury Secretary Andrew Mellon, and John Raskob, the developer of the Empire State Building—Mitchell was an unquestioning adherent of laissez-faire doctrine. These men were cheerleaders for speculation on margin, opponents of regulation, and bitter adversaries to reformers like Senator Carter Glass, who wanted to rein in credit-fueled gambling on the market. They were backed by their own paid propagandists and favored prognosticators. In October 1929—two weeks before Black Tuesday—the Yale economist Irving Fisher famously declared that stock prices had reached “what looks like a permanently high plateau.” Another economist, Royal Meeker, went further, warning, according to a New York Times reporter, that any attempt to regulate speculation would “choke the arteries of modern economic life and reduce civilization to cannibalism.”

The real threat to civilization came, of course, from not reining in speculation that threatened the entire financial system. In a world gone mad, the most appealing figure is sometimes the most openly cynical. Among Sorkin’s cast of nearly interchangeable monopoly men, the one who vividly stands out is Jesse Livermore. Known as the “Boy Plunger,” Livermore was a celebrity speculator who made and lost and made and lost vast fortunes shorting the market. The object of popular fascination, he was the basis for the 1923 novel Reminiscences of a Stock Operator by Edwin Lefèvre, a roman à clef about market manipulation.

While he participated in the (then entirely legal) pump-and-dump enterprises known as stock pools, Livermore wasn’t much of a market manipulator, at least as judged by the standards of the era. He was a high-stakes gambler and what we would today call a momentum trader, famous for his feel for the ticker tape. Livermore got his start as a teenage habitué of bucket shops: stock market betting parlors where customers placed wagers on the movement of stock prices, rather than buying and selling actual securities.

Livermore’s gut sense of which way prices were going was so good that he repeatedly got banned from bucket shops for winning too much. Everyone wanted to know where he thought stocks were going, and Livermore encouraged the fascination by his slyness in parceling out morsels of opinion. In 1929 he placed his biggest short, earning $100 million in the crash, only to eventually lose nearly all of it. His story offers a cinematic arc: a man addicted to the stock market whose yachts, mansions, wives, and girlfriends couldn’t bring him happiness and whose suicide in the cloakroom of the Sherry-Netherland in 1940 felt like the coda to a decade of excess and ruin. If there’s a human face to 1929, it may be his. One wishes Sorkin had spent even more time with him.

Beyond the intrinsic difficulty of revivifying the top-hatted dead, Sorkin’s rendition is limited by his desire to frame 1929 as a story about people. His focus on individuals comes at the expense of analysis—particularly of the deeper economic forces that made the crash likely, if not inevitable. Sorkin is more interested in how the crisis felt than why it happened. He has little to say about why the government failed to take any meaningful steps to prevent it—or why, unlike in 2008, its responses failed so spectacularly.

There’s nothing wrong with framing a popular history as a human drama, as opposed to exploring reasons and causes. But it’s instructive to compare 1929 with a much shorter book that manages to do both, John Kenneth Galbraith’s classic The Great Crash, 1929. Galbraith’s book, first published in 1955, is wry and epigrammatic, and remains unmatched as a cultural interpretation of the crisis. Bubbles, as all economics students learn, form when credit expands rapidly and collapse when that credit dries up. By this measure, the 1920s were ripe for disaster: easy credit, enabled by a rapidly expanding banking system and minimal regulation, fueled stock buying on ten-to-one margin (putting up a tenth of the price of the stock and borrowing the rest). The result was asset inflation run amok. But as Galbraith points out, credit expands and contracts all the time without causing stock markets to blow up. The great crash required a broader set of societal illusions, including what he describes as “a pervasive sense of confidence and optimism and conviction that ordinary people were meant to be rich.”
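The mechanics of ten-to-one margin are easy to sketch, and they show why a modest decline could force waves of selling. An illustrative example with hypothetical dollar figures, not drawn from the book:

```python
def margin_position(stock_value: float, margin_fraction: float):
    """Buy stock_value of stock, putting up margin_fraction as equity
    and borrowing the rest from a broker."""
    equity = stock_value * margin_fraction
    loan = stock_value - equity
    return equity, loan

def equity_after_move(stock_value: float, loan: float, pct_change: float):
    """Investor's remaining equity after the stock moves by pct_change."""
    return stock_value * (1 + pct_change) - loan

# Ten-to-one margin: $100 of the investor's money controls $1,000 of stock.
equity, loan = margin_position(1000, 0.10)

# A 10% rise doubles the investor's money...
print(equity_after_move(1000, loan, 0.10))   # 200.0
# ...while a 10% fall wipes the equity out entirely, prompting margin
# calls and forced selling that push prices down further.
print(equity_after_move(1000, loan, -0.10))  # 0.0
```

The asymmetry is the point: leverage turns an ordinary market dip into a solvency event for the borrower, which is why a 17 percent decline could cascade the way it did.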

Stock markets can also collapse without causing depressions. So why did the crash in 1929—a year in which the market went down only 17 percent—trigger the big one? Galbraith describes a series of conditions, including economic insecurity, a lack of policy tools, and what he calls a determination on the part of policymakers to make the crisis worse. Until 1933 both political parties were intent on maintaining a balanced budget, which meant they were unwilling to increase spending to counteract economic decline. Central banks around the world were slow and timid in their monetary interventions: raising interest rates in the face of a collapsing economy intensified deflation, bank failures, and unemployment instead of counteracting them. The Smoot–Hawley Tariff Act, which raised tariffs on imports in an attempt to protect struggling manufacturers and farmers, and a commitment to defending the dollar’s convertibility into gold made the patient sicker. Herbert Hoover was a proficient technocrat, as well equipped as any president to understand and respond to an economic crisis, but he lacked the vision to think beyond the gold standard or the dogma of fiscal discipline. He was also a terrible communicator whose bright idea was to rebrand the market panic as a “depression.” (The “great” came later.)

These dynamics are difficult for Sorkin’s narrative method to capture. The slow-motion failure of the gold standard and the delusions of global monetary policy in the 1920s (elegantly evoked in Liaquat Ahamed’s 2009 book Lords of Finance) do not readily lend themselves to the projection of internal monologue, scenes with dialogue, or the clash of business titans. The preexisting agricultural depression, the weakness of wage growth amid soaring industrial productivity, and the maldistribution of wealth—all crucial elements of the economic story—tend to recede behind Sorkin’s personalities. But those personalities are more symptoms than causes. They may dramatize the systemic frailty, but they do not explain it.

This is not just a fault in Sorkin’s book. The scapegoating exercise that followed the crash was to some extent pointless. There was, of course, plenty of chicanery and manipulation on Wall Street to help turn the pre-crash heroes into post-crash villains. But economic policymakers before Keynes were like doctors trying to treat infections before the invention of penicillin. Ultimately, 1929 was a failure of an economic system whose complexity had overwhelmed the theories that supported it. The dominant financial ideology of the 1920s, like that of the 1980s, the 2000s, and the first half of the 2020s, was faith in self-regulating markets. It won popular support through disingenuous calls for the democratization of investing and was encouraged by government officials who declined, on free market principle, to address systemic risk.

The world’s central banks and financial regulators have become much more adept at responding to the aftermath of bubbles. In the early 1930s policymakers were hemmed in by rigid orthodoxy. The result was a global depression marked by mass unemployment, deflation, and rising political extremism. By contrast, the response to the 2008 financial crisis was economically effective—even if anger about bank bailouts and the lack of prison sentences for supposed malefactors fueled populist politics in a similar fashion. Central banks understood the need to inject unprecedented liquidity; governments bailed out systemically important institutions and ultimately ushered in reforms to strengthen capital requirements and expand oversight. As a result, there was no repeat of the 1930s. The system, though shaken, held.

But what have we learned since 1929 about preventing bubbles? Practically nothing, it seems. Indeed, the cultural shift toward accepting bubbles as either inevitable or beneficial may be making them harder to prevent. The dominant view today is not just that industrial bubbles aren’t all that bad but also that no one can confidently declare a market “wrong.” If market excesses are impossible to identify in real time, intervention to stop them risks distorting the price signals that markets rely on to allocate capital. Former Fed chair Ben Bernanke, whose academic work was on the Great Depression, insists that preventing all bubbles is neither feasible nor desirable—a view shared by most of his peers. The better strategy, he argues, is to reduce systemic vulnerability and improve resilience, through increased capital requirements for banks, stress tests that simulate their potential losses in crisis scenarios, enhanced deposit insurance, and so on.

But if bubbles are to be not just tolerated but welcomed—on the assumption that they can be managed after the fact and will leave behind societal benefits—then we must be prepared for the consequences of the next one. The danger is not just that the AI bubble might burst—though analysts like Kedrosky expect that to happen within a few years—but that it could metastasize into a broader financial crisis. Unlike 2008, when the risks were concentrated in large, regulated banks, today’s speculation flows primarily through shadow finance. Venture capital funds, private credit markets, and off-balance-sheet vehicles fall mostly outside the purview of both Depression-era banking regulation and the Dodd–Frank Act, passed in 2010. Financial regulation tends to address the last crisis even as breakneck financial innovation is sowing the seeds of the next one.

Shadow finance has helped funnel extraordinary amounts of capital into AI start-ups, data centers, and chip manufacturers, with limited transparency and increasingly loose lending standards. The result is an ecosystem of interlocking exposures, echoing pre-2008 conditions under slightly different names. Today’s AI-focused special purpose vehicles have similarities with the structured investment vehicles that concealed mortgage risk in the early 2000s. Many AI ventures are backed by enormous loans structured to keep risk off their balance sheets, which raises concerns about the potential for contagious defaults. If the AI boom ends suddenly, there is a risk that these loosely connected, highly leveraged structures could propagate distress through private markets that lack adequate regulatory safeguards.

A related concern is circularity. Tech giants increasingly invest in one another’s AI initiatives—bolstering valuations, driving demand for infrastructure, and inflating the appearance of market consensus. This pattern is reminiscent of “round-tripping” revenue schemes from the dot-com era: feedback loops in which valuation is built not on user demand but on interfirm financial engineering. These risks are amplified by the fact that many of the entities now exposed—venture funds, family offices, and retail AI investment vehicles—do not benefit from the protections of insured deposits or lender-of-last-resort support from the federal government, as regulated banks do. If losses cascade, there may be nothing to stop them.

Galbraith, for one, disagreed with the view that we can deal with bubbles only by cleaning up the mess afterward. He thought that central bankers in the 1920s should have increased margin requirements and raised interest rates in order to slow the market’s rise. But as he concedes, that would have been politically almost impossible, just as it was in the late 1990s and may be today. The Fed’s mandate is to control inflation and pursue full employment, not to burst bubbles. Politicians who take action against frothy markets will never get public credit for preventing disasters that don’t occur and are likely to be blamed if their interventions cause the market to drop.

Sorkin’s book reminds us that systemic fragility is just as persistent as speculative manias—it, too, reflects our cognitive biases and limitations. The crash of 1929 was a failure not just of market regulation but of political imagination: policymakers could not see how interdependent, leveraged, and fragile the system had become, because nothing quite like it had ever happened before. For a bubble to build requires only the elements that are always present or just around the corner: human avarice, narratives about transformative technology, easy credit, and complex financial engineering, whose implications even the engineers may not fully understand.

The central insight of 1929, though it is not always made explicit, is that bubbles are not simply periodic accidents of the free market. Those who benefit from unsustainable economic trajectories can be counted on to develop the justifications that make them feel warranted and to fight any government intervention that would cut the party short. It’s never hard to sell those explanations to an investing public that, at least in the short term, gets richer by accepting them. That bubbles are good for us is only the newest spin on the old rationalizations we always hear.
