Johns Hopkins Just Proved That $100 Billion in AI Training Might Be Wasted

A peer-reviewed study shows untrained neural networks can nearly match trained ones at predicting brain activity. Here's what it means for investors.

The $100 Billion Question

Right now, Microsoft is spending $80 billion on AI data centers. Google, Amazon, and Meta are each pouring tens of billions more. The logic is simple: more compute, more data, more training = better AI.

But what if that logic is wrong?

A new study from Johns Hopkins University, published in Nature Machine Intelligence, suggests that for certain tasks, architecture matters more than training. Untrained neural networks—with completely random weights—can predict human brain activity almost as well as networks trained on millions of images.

What the Researchers Actually Found

The study, led by cognitive scientist Mick Bonner, tested three types of neural network architectures with ZERO training:

  1. Convolutional Neural Networks (CNNs) — the architecture behind image recognition
  2. Transformers — the architecture behind ChatGPT
  3. Fully Connected Networks — a simpler, older design

Result: The untrained CNN approached the performance of AlexNet, a network trained on millions of images. The transformer and fully connected networks performed much worse.
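How do you compare an untrained network to a brain in the first place? The standard approach in this line of research is to feed a set of images through the network, record its internal activations, and fit a linear regression that predicts measured neural responses to the same images from those activations. Below is a minimal sketch of that pipeline; the choice of AlexNet, the layer used, and the synthetic "brain" data are illustrative assumptions, not the paper's exact setup.

# Sketch: predicting (synthetic) neural responses from an UNTRAINED CNN's features.
# Assumptions: a randomly initialized torchvision AlexNet stands in for the
# untrained CNN; "brain_responses" is a synthetic placeholder for recorded data.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

torch.manual_seed(0)
rng = np.random.default_rng(0)

# 1. Untrained CNN: weights=None means random initialization, no ImageNet training.
net = models.alexnet(weights=None).eval()

# 2. Placeholder stimuli and "brain" data: 200 images, 50 recorded units.
images = torch.rand(200, 3, 224, 224)
brain_responses = rng.standard_normal((200, 50))

# 3. Extract activations from an intermediate conv layer via a forward hook.
feats = {}
def hook(_module, _inp, out):
    feats["acts"] = out.flatten(start_dim=1).detach().numpy()

net.features[8].register_forward_hook(hook)  # one of AlexNet's conv layers
with torch.no_grad():
    net(images)
X = feats["acts"]

# 4. Fit a linear (ridge) mapping from the random features to neural responses,
#    then score how well held-out responses are predicted.
X_tr, X_te, y_tr, y_te = train_test_split(X, brain_responses, test_size=0.25, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))

With real neural recordings in place of the placeholder array, a higher held-out score means the architecture's representations align better with the brain, even though the weights were never trained.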

The Key Quote

"The AI field is throwing data at models and building compute the size of small cities. That requires hundreds of billions of dollars. Meanwhile, humans learn to see using very little data."

Mick Bonner, Lead Author

Investment Implications

1. Architecture Innovation May Be Undervalued

Watch for: Mamba, RWKV, state-space models, neuromorphic computing. Companies focused on novel architectures may have an edge.

2. The "Scale Everything" Play Has Limits

If architecture alone gets models most of the way toward brain-like visual representations, diminishing returns on training may hit sooner than scaling maximalists expect.

3. Biology May Hold Untapped Alpha

Evolution already solved visual processing. Brain-inspired architectures may offer advantages that pure scaling cannot.

The Caveats

  • This is about vision, not language. LLMs may still need massive training.
  • Brain matching ≠ practical performance on real tasks.
  • Sample size: 2 monkeys, 8 humans. More replication needed.

The Bottom Line

The AI industry spends $100+ billion per year assuming scale is everything. This peer-reviewed study in Nature Machine Intelligence suggests architecture may matter more than training for some applications.

For VCs: Look for architecture innovation, not just bigger models.

For founders: Efficiency gains through design may beat brute force.

For everyone: The AI race might not be won by whoever spends the most.


Source: Kazemian et al. "Convolutional architectures are cortex-aligned de novo." Nature Machine Intelligence (2025).
