AI Spending
AI Capital Wave
$320B/yr capex
Source: AgentsPop Analysis. As of: 2025

AI spending is no longer niche. If the spending wave outruns viable revenue, private companies will still have to meet investor expectations—and that pressure reaches workers.

The views, thoughts, and opinions expressed on this website are solely my own and do not reflect the views, policies, or positions of my employer or any affiliated organization.

The AI Spending Boom—And the Bust Scenario Workers Are Exposed To

The risk isn’t that AI fails; it’s that the spending thesis fails

The most important AI story is no longer about what models can do. It is about what capital markets believe they will do—and what happens if that belief proves wrong.

Across the AI economy, spending now looks like an infrastructure cycle: hyperscalers are committing hundreds of billions to data centers and compute capacity; chip suppliers are booking record data center revenue; venture and private equity have poured extraordinary sums into AI companies; and enterprises are being sold “AI everywhere” as a near-term productivity unlock.

This is the upside narrative: build the stack now, monetize later, and win a global race for capability.

But the downside narrative—rarely stated plainly—is not that AI disappears. It is that returns arrive slower than the financing structure demands. If the spending wave outruns viable revenue, private companies will still have to meet investor expectations. That pressure does not stay in boardrooms. It reaches workers through hiring freezes, layoffs, wage compression, vendor cuts, and—if the shock is large enough—a broader pullback that can amplify a downturn.

This is the bust scenario: a capital-intensive boom meets a monetization bottleneck, and labor becomes the adjustment valve.

The “war” framing that drives risk-taking

The current investment intensity is not only the product of spreadsheets; it is also driven by rhetoric. Leaders and policymakers have increasingly framed AI as an “arms race” or “race” dynamic, often implying that slowing down is not an option.

A detailed timeline from the AI Now Institute tracks how “AI arms race” rhetoric has been deployed and institutionalized in policy discourse, noting that it has also been used to push back against regulatory interventions targeting large technology companies.

This is the psychological mechanism that makes the bubble risk worse: if every major player believes competitors will sprint regardless of economics, the rational move becomes to sprint too—even if the ROI is uncertain.

That logic helps explain why capex has surged to levels that even prominent financial observers have started describing as potentially dangerous. Reuters reported Bridgewater warning that Big Tech’s growing reliance on external capital to fund the AI boom is “dangerous,” with spending rising far faster than internal cash generation.

If private companies are wrong, how does it hit workers?

The worker impact depends on where the “wrongness” is. There are at least four ways the thesis can fail—and each has a different labor footprint.

1) Overbuild: too much capacity, too little demand (the classic infrastructure bust)

What fails: data center capacity is built ahead of paying demand.

Worker pathway: construction and skilled-trade jobs pause mid-project; data center operations hiring slows or freezes; equipment and network vendors cut staff as orders stall.

This is the dot-com pattern: the internet still wins long-term, but a generation of workers lives through the reset.

The IMF’s chief economist has explicitly warned of a dot-com-style bust following the AI investment boom—while suggesting it may be less likely to become a systemic crisis that “craters” the whole economy.

That’s not reassurance for workers inside the blast radius. “Not systemic” can still mean brutal sectoral job losses.

2) Monetization mismatch: revenues lag capex, and margins must be defended

What fails: spending ramps faster than durable revenue.

This is where the most direct worker risk lives. When revenue growth can’t carry the story, companies protect margins by cutting costs—and labor is the largest controllable cost for many firms.
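A back-of-envelope way to see the gap, using the ~$320B/yr capex figure from this article's header. The depreciation horizon and gross margin below are illustrative assumptions, not reported numbers; the sketch only shows why "revenue lagging capex" forces margin defense.

```python
# Illustrative monetization-gap arithmetic. The capex figure comes from
# this article's header; every other parameter is an assumption.
annual_capex_b = 320     # $B/yr industry-wide AI capex (article header)
depreciation_years = 6   # assumed useful life of compute/data-center assets
gross_margin = 0.50      # assumed gross margin on AI revenue

# Annual AI revenue needed just to cover depreciation at that margin:
required_revenue_b = (annual_capex_b / depreciation_years) / gross_margin
print(f"Revenue needed to cover depreciation alone: ~${required_revenue_b:.0f}B/yr")

# If actual AI revenue runs below that level, the shortfall is what
# cost cuts, including labor cuts, have to absorb.
```

Under these assumptions the industry needs roughly $107B/yr of AI revenue just to offset depreciation, before any return on capital; changing the assumed margin or asset life moves the number, but not the structure of the problem.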

A revealing tension already exists in public discourse: spending forecasts keep climbing (Gartner projects worldwide GenAI spending of $644 billion in 2025), while Gartner itself predicts that over 40% of agentic AI projects will be canceled by the end of 2027 over escalating costs and unclear business value.

Worker pathway: margin defense arrives as layoffs, backfill freezes, and wage compression, because labor is the largest controllable line item for many firms.

3) Financial tightening: external funding dries up

What fails: investors stop underwriting the buildout at current terms.

Bridgewater’s warning about external capital reliance matters because it points to a classic bust trigger: a shift in credit conditions or investor sentiment forces an abrupt slowdown.

Worker pathway: planned expansions convert abruptly into cancellations; venture-backed AI firms that cannot raise on prior terms cut headcount quickly or shut down; contractors and vendors absorb the first wave of cuts.

4) Regulatory or reputational shocks: adoption slows

What fails: legal, safety, or public trust issues slow deployment, reducing the revenue ramp.

This can hit workers indirectly: if product rollouts slow, sales forecasts miss, and cost cuts follow.

How much of the invested capital would “hit jobs” in a bust?

There is no single conversion rate from dollars of capex to jobs lost. But the channels are clear: direct payrolls at builders and operators, vendor and supplier employment, headcount at venture-backed AI firms, and the local service economies that grow up around data center hubs.

The job impact therefore tends to be asymmetric: capital losses are spread across diversified portfolios, while job losses concentrate on specific workers, occupations, and regions.

This is one reason economists treat these cycles as socially destabilizing: labor absorbs the volatility created by capital’s expectations.
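Because there is no single conversion rate, any capex-to-jobs estimate is scenario arithmetic. The sketch below is purely illustrative: every parameter (the pullback size, the labor-linked share of spending, the cost per job) is an assumption chosen only to show how the channels compound, not an empirical estimate.

```python
# Purely illustrative mapping from a capex pullback to jobs exposed.
# As the text notes, no single conversion rate exists; all parameters
# here are assumptions for the sketch.
pullback_b = 80        # $B/yr hypothetical cut to AI capex
labor_share = 0.30     # assumed share of capex flowing to labor-heavy
                       # channels (construction, vendors, local services)
cost_per_job = 150_000 # assumed fully loaded annual cost per job, $

jobs_at_risk = pullback_b * 1e9 * labor_share / cost_per_job
print(f"~{jobs_at_risk:,.0f} jobs exposed per year under these assumptions")
```

The point of the exercise is the sensitivity, not the level: a pullback measured in tens of billions implies labor exposure measured in the tens or hundreds of thousands of jobs across the channels above.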

Does a bust imply a recession?

Not automatically—but it can raise recession risk through concentration.

If AI investment has become a meaningful driver of growth, a sudden pullback becomes more macro-relevant. J.P. Morgan Asset Management argued that in the first half of 2025, AI-related capital expenditures contributed 1.1 percentage points to GDP growth, outpacing the U.S. consumer as an engine of expansion in that period.
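A rough way to size that macro relevance, taking the J.P. Morgan Asset Management estimate as given. The pullback scenarios are hypothetical, and the arithmetic is first-order only, excluding the multiplier effects through suppliers, consumption, and financial conditions.

```python
# Illustrative: first-order GDP-growth drag from an AI capex pullback,
# given the cited estimate that AI capex contributed ~1.1 percentage
# points to H1-2025 growth. Pullback sizes are hypothetical scenarios.
contribution_pp = 1.1  # pp of GDP growth attributed to AI capex (cited)

for pullback in (0.25, 0.50, 1.00):  # hypothetical cuts to AI capex
    drag_pp = contribution_pp * pullback
    print(f"{pullback:.0%} pullback -> ~{drag_pp:.2f}pp direct growth drag")
```

Even a partial pullback removes a large slice of the growth contribution, which is why a sector correction in a concentrated driver is more macro-relevant than its payroll share alone suggests.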

That kind of concentration is what turns a sector correction into something larger:

Layoffs reduce consumption. Job losses and hiring freezes hit household spending quickly, especially for workers without large savings buffers.

A capex pullback hits industrial suppliers. Data centers rely on construction, electrical gear, cooling systems, and network equipment; when projects pause, the contraction spreads beyond tech payrolls.

Equity drawdowns tighten financial conditions. A sharp repricing in the firms most associated with the AI narrative can spill into broader risk appetite.

Sentiment shifts cause broader investment caution. When CEOs and CFOs perceive markets turning, the default response is to preserve cash and delay commitments.

Several analysts have drawn parallels to dot-com dynamics. IMF Chief Economist Pierre-Olivier Gourinchas said the AI investment boom could be followed by a dot-com-style bust, while arguing it is less likely to become a systemic financial crisis because the boom is not primarily financed by debt.

A useful framing is that a bust can be “not systemic” and still be devastating for workers who are systemic to their own households. That is an interpretive judgment, but it follows directly from how downturns transmit: sectoral corrections can remain contained in aggregate measures while producing severe local labor shocks.

“Private companies can induce a recession” — what’s true, what’s not

A recession is a macro outcome produced by millions of decisions, policy conditions, credit dynamics, and shocks. It is not something a single private company can reliably “choose” like a product launch.

That said, there are two important guardrails.

1) There are laws against certain intentional market harms

If firms collude, manipulate markets, or engage in anti-competitive conduct, antitrust and market-integrity frameworks can apply. The U.S. Department of Justice has explained that antitrust enforcement often turns on proof of market power and exclusionary conduct.

Separately, legal scholarship has described how market manipulation can undermine the allocation of capital and impair trust—mechanisms that can matter for broader financial stability.

2) “Inducing a recession” is not a standard legal category

Even if private actions contributed to a downturn, proving intent and causality at the macro level would be extraordinarily difficult. Policy typically targets conduct (collusion, manipulation, fraud), not macro labels.

So the stronger policy question is not “make it illegal to induce a recession.” It is: should policy reduce the chance that a private capex arms race creates public economic downside?

Should public policy slow the investment race until products are viable?

This is the heart of the debate, and it demands practical answers rather than slogans.

Directly ordering private companies to “slow down” is politically and legally difficult in most market economies. But there are tools that can force discipline, expose risk, and price externalities—without banning innovation. (The argument that these tools are preferable is, unavoidably, a normative position; the mechanisms below are factual.)

Policy Response 1: Require AI infrastructure risk disclosure that matches the scale

For publicly traded firms, regulators can push for clearer disclosure around: capex commitments relative to internal cash generation, the utilization and revenue assumptions behind the buildout, and exposure to external financing if conditions tighten.

This would not prohibit spending. It would raise the cost of vague narratives, particularly in a cycle where Bridgewater has already called Big Tech's reliance on external capital to fund the AI boom "dangerous," arguing that spending is outrunning internal cash generation.

Policy Response 2: Stress-test local and regional exposure

Because data center buildouts are geographically concentrated, states and municipalities can stress-test: the tax-revenue assumptions behind abatements, the grid and water commitments made on a project's behalf, and the employment projections used to justify incentives.

This is public-finance hygiene—applied to infrastructure-like private investment.

Policy Response 3: Make externalities expensive (impact fees and grid cost recovery)

If AI infrastructure creates incremental public costs (grid upgrades, interconnection buildouts, water usage), policy can require impact fees or negotiated contributions tied to those burdens.

This is not inherently “taxing innovation.” It is charging for public capacity consumed.

Policy Response 4: Tie subsidies and permits to demonstrated usage and community benefit

Where public incentives exist (tax abatements, expedited permitting), eligibility can be conditioned on: demonstrated utilization milestones, verified local hiring, and clawback provisions if commitments are missed.

This is a direct counterweight to “race to overbuild.”

Policy Response 5: Competition policy that targets race-driven consolidation

Race rhetoric often becomes a justification for market power. The AI Now Institute’s timeline documents how “AI arms race” framing has been used to push back against regulatory intervention targeting large technology firms, including antitrust, privacy, and algorithmic accountability.

Competition policy matters here because concentrated markets change how adjustment happens in a bust. When a small number of firms control the infrastructure, cost cutting tends to arrive as layoffs and vendor compression rather than competitive reallocation.

The “AI war” origin story—what it is and why it matters

The “AI race” framing has multiple roots, and it shapes investment behavior by changing the default posture from “prove value” to “spend now or fall behind.”

1) Geopolitical framing

Policy discourse has repeatedly cast AI development as a U.S.–China competition. AI Now’s timeline shows how that rhetoric became institutionalized and deployed by key stakeholders.

2) Corporate competition

Corporate narratives convert “race” language into justification for speed: rapid deployment, massive capex, and sometimes thinner proof of ROI. Evidence of that tension shows up in the market itself—Gartner predicted that over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls.

3) The race narrative can distort incentives

Researchers have argued that “arms race” metaphors can be misleading and can incentivize corner-cutting. A widely cited paper by Cave and Ó hÉigeartaigh assessed risks created by AI race narratives, including pressures that may reduce attention to safety and governance.

Separately, the Center for a New American Security argued that perceptions of an “AI arms race” can themselves create risks by encouraging competitors to cut corners, even if the “race” framing is overstated.

More recent academic work has argued that the arms-race metaphor does not accurately capture the dynamics of global competition in AI, proposing an “innovation race” framing instead.

The implication is not that competition is fake—it is that the metaphor can become self-fulfilling. When every player believes survival requires sprinting, each becomes willing to accept higher financial risk to avoid being the one who slows down first.

Closing: the worker question is the macro question

The AI boom may still deliver long-run productivity and new categories of work. But if the thesis is wrong in the short run, the adjustment will not be shared evenly: investors can diversify and exit; workers cannot. The losses will concentrate in specific payrolls, trades, and towns.

Public policy does not need to pick a side on AI’s ultimate promise to address this asymmetry. The practical aim is narrower: reduce the probability that an arms-race investment cycle produces public downside—and ensure the public captures a modest dividend from infrastructure-scale private buildout. That conclusion is a policy judgment, but it follows from the documented concentration of spending and the documented use of race rhetoric to accelerate deployment.

Sources

  1. Reuters Breakingviews — “Capital intensity will reprogram Big Tech values”
  2. NVIDIA Newsroom — “NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2025”
  3. CB Insights — “Artificial Intelligence Report 2024 (State of AI 2024)”
  4. Gartner — “Gartner Forecasts Worldwide GenAI Spending to Reach $644 Billion in 2025”
  5. Gartner — “Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027”
  6. International Data Corporation (IDC) — “Worldwide Spending on Artificial Intelligence Forecast to Reach $632 Billion in 2028”
  7. Networking and Information Technology R&D (NITRD) — “FY2025 NITRD and NAIIO Supplement to the President’s Budget”
  8. Reuters — “AI investment boom may lead to bust, but not likely systemic crisis, IMF chief economist says”
  9. J.P. Morgan Asset Management — “Is AI already driving U.S. growth?”
  10. Reuters — “Bridgewater warns Big Tech’s reliance on external capital to fund AI boom is ‘dangerous’”
  11. AI Now Institute — “Tracking the US and China AI Arms Race”
  12. U.S. Department of Justice (Antitrust Division) — “Monopoly Power and Market Power in Antitrust Law”
  13. Duke Law (Law & Contemporary Problems) — “Macroeconomic Consequences of Market Manipulation”
  14. Cave, S., & Ó hÉigeartaigh, S. — “An AI Race for Strategic Advantage: Rhetoric and Risks” (AIES 2018)
  15. Center for a New American Security / Texas National Security Review — “Debunking the AI Arms Race Theory”
  16. Schmid (2025) — “Arms Race or Innovation Race? Geopolitical AI Development”
