Pattern Recognition

I've been in semiconductors for 20 years. I've watched this exact pattern play out before—just at a different scale, with different players, but the same fundamental forces.

We're watching the AI infrastructure consolidation unfold right now. And if you know the semiconductor playbook, you can see the parallels.

This isn't speculation. This is pattern recognition—and the outcome isn't predetermined, but the forces are familiar.

The Foundry Moment

In semiconductors, there's a moment when the game shifts from "who has the best technology" to "who can afford to keep building."

The semiconductor industry shows this pattern clearly. For decades, integrated device manufacturers (IDMs) like Intel dominated—designing chips and manufacturing them in-house. The model worked brilliantly when process leadership and vertical integration reinforced each other.

Then the economics shifted. At leading-edge nodes (7nm, 5nm, 3nm), building a new fab costs $20 billion. TSMC's pure-play foundry model—manufacturing for everyone (Apple, AMD, Nvidia, Qualcomm)—could amortize that cost across dozens of customers. An IDM manufacturing primarily for its own products faced brutal per-unit economics at that scale.
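To make the amortization point concrete, here's a toy back-of-envelope sketch (all numbers are illustrative assumptions, not actual TSMC or Intel figures): spread a $20B fab's depreciation over the wafers it actually produces, and the capital cost per wafer depends heavily on keeping the fab full—which is exactly what a foundry with dozens of customers can do and a single-customer IDM often can't.

```python
# Toy model: per-wafer capital cost of a leading-edge fab.
# All numbers are illustrative assumptions, not real industry figures.

FAB_COST = 20e9          # up-front fab cost, USD
DEPRECIATION_YEARS = 5   # straight-line depreciation period (assumed)
CAPACITY = 100_000 * 12  # wafer starts per year (assumed 100k/month)

def capital_cost_per_wafer(utilization: float) -> float:
    """Capital cost amortized over wafers actually produced."""
    annual_depreciation = FAB_COST / DEPRECIATION_YEARS
    wafers_per_year = CAPACITY * utilization
    return annual_depreciation / wafers_per_year

# A pure-play foundry serving many customers can keep the fab full;
# an IDM filling it with only its own products may not.
for util in (0.95, 0.60):
    print(f"utilization {util:.0%}: ${capital_cost_per_wafer(util):,.0f}/wafer")
```

The absolute numbers are made up, but the shape is the point: the half-empty fab pays roughly 60% more capital cost per wafer than the full one, on identical hardware.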

AMD saw this coming. In 2008, they made a strategic bet: spin off their manufacturing into GlobalFoundries and become a fabless design company (the spin-off closed in 2009). They had acquired ATI in 2006 for graphics capabilities; now they divested the fabs—keeping design while outsourcing manufacturing. That decision, controversial at the time, let them compete on chip design without the capital burden of $20B fabs. Today, AMD designs industry-leading CPUs and GPUs, manufactured primarily at TSMC.

Rock's Law caught up with Moore's Law. Moore's Law is the capability curve: transistor counts double roughly every two years. Rock's Law is the economics: fab costs double every ~4 years. Advanced fabs now cost $10-20 billion, and a single high-NA EUV scanner runs around $400 million.
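Rock's Law is just compounding. A quick sketch using the rule-of-thumb figures above (a $20B starting point and a 4-year doubling period—an extrapolation, not a forecast):

```python
# Rock's Law sketch: fab cost doubles roughly every 4 years.
# Base cost and doubling period are rule-of-thumb assumptions.

BASE_COST = 20e9       # leading-edge fab cost today, USD (assumed)
DOUBLING_YEARS = 4.0   # Rock's Law doubling period

def projected_fab_cost(years_from_now: float) -> float:
    """Extrapolated fab cost after `years_from_now` years."""
    return BASE_COST * 2 ** (years_from_now / DOUBLING_YEARS)

for years in (0, 4, 8, 12):
    print(f"+{years:>2} yrs: ${projected_fab_cost(years) / 1e9:.0f}B")
```

Run the curve out twelve years and a $20B fab becomes a $160B fab. That's the dynamic that made vertical integration untenable for all but the largest players.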

Intel has publicly acknowledged this challenge and is investing heavily in its foundry business (Intel Foundry Services) to compete with TSMC and Samsung. Their Client Computing division remains massive—comparable to AMD's entire business. But the competitive dynamic has fundamentally shifted: the pure-play foundry model proved more capital-efficient at leading-edge nodes than vertically integrated manufacturing for a single customer base.

This is the Foundry Moment: when capital becomes the moat, and vertical integration becomes a liability.

The AI Infrastructure Parallel

We're watching the same pattern in AI infrastructure right now.

Training a frontier model costs $100M+ today. Public estimates suggest the next generation will run $500M, and the one after that $1B+. Google, OpenAI, Anthropic, Meta, xAI—it feels like healthy competition.

But look closer at the infrastructure layer. Who's actually building the compute?

  • Google: Building their own data centers, their own TPUs, vertically integrated

  • Microsoft: Spending $80B on Azure AI infrastructure

  • Amazon: $75B on AWS AI capacity

  • Meta: Building their own AI infrastructure for Llama

This mirrors what happened in cryptocurrency mining: GPUs dominated until purpose-built ASICs emerged. Once the economics favored specialization, general-purpose hardware became "good enough" rather than the only option. We're watching the same shift begin in AI inference.

But here's what makes this decade interesting: Nvidia isn't standing still. They've been the platform for general-purpose compute since CUDA—gaming, workstations, crypto, now AI. They have some of the best engineering talent in the world and have successfully pivoted through multiple platform shifts. The next ten years won't be boring, and anyone expecting a single winner is missing the complexity of what's coming.

And then there's everyone else. Startups renting compute. Mid-size companies begging for GPU allocations. Even well-funded AI labs are compute-constrained.

And in wars of attrition, capital wins. But we're not talking about incremental scaling anymore. We're watching order-of-magnitude shifts—when your $10B/year capital roadmap becomes a $100B/year tsunami.

This isn't just about spending more—it's Rock's Law returning with a vengeance in a new domain. When sovereign-scale capital becomes the only path forward, the semiconductor pattern repeats: TSMC dominated fabs, and now Google, Microsoft, and Amazon will dominate AI infrastructure.

This week, Google's AI infrastructure lead revealed they need to double compute capacity every 6 months just to meet demand. In capital-intensive manufacturing, there's a brutal reality: "Too much capacity is career limiting. Not enough capacity is career ending." This isn't unique to any company—it's the physics of building at scale.
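Doubling every six months compounds brutally fast. Taking that 6-month doubling claim at face value, here's what the demand curve implies:

```python
# Compound growth when capacity must double every 6 months
# (taking the reported 6-month doubling claim at face value).

DOUBLING_MONTHS = 6

def capacity_multiplier(months: int) -> float:
    """Required capacity after `months`, relative to today."""
    return 2 ** (months / DOUBLING_MONTHS)

for months in (12, 24, 36):
    print(f"after {months} months: {capacity_multiplier(months):.0f}x today")
```

That's 4x in a year and 64x in three. No procurement process, and very few balance sheets, are built for that slope.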

The smaller players are already showing cracks. Inflection AI, once valued at $4B, effectively sold itself to Microsoft for acqui-hire prices. Adept, another well-funded startup, is "exploring strategic options." The models work. The technology is real. But they can't afford the infrastructure to compete.

And the hyperscalers building their own silicon (Google TPU, Microsoft Cobalt, Amazon Trainium) aren't just cost-optimizing—they're vertically integrating the entire stack. When your customers become your competitors, the moat narrows fast.

This is the Foundry Moment for AI: when the cost of compute becomes the primary constraint, and only the hyperscalers can afford to play.

The Cisco Story (Both Were True)

I was there for the Cisco story. March 2000: Cisco briefly became the most valuable company on Earth (~$555 billion market cap). The networking revolution was real—businesses needed routers, switches, infrastructure for the internet. But the stock price? That assumed perfect execution, infinite growth, no competition, no economic cycles.

Both were true: the revolution was real AND the valuation was insane.

By 2002, Cisco's stock had dropped 86%. But here's what mattered: Cisco survived. The infrastructure got built. The internet didn't disappear.

The bubble narrative focused on the crash. The revolution narrative focused on what got built. Both were true.

What This Looks Like Now

What does this look like? You see the headlines about an "AI bubble"—and yes, there's froth. For every AI company that reaches $10B valuation, several others will fail to productize or scale into their early expectations. But simultaneously, the AI capabilities are real. I use Claude, ChatGPT, and other models daily for work that would've taken me 10x longer a year ago.

Both can be true. The froth will pop. Some startups will fail. But the revolution—the genuine capability shift—will compound. The infrastructure will get built (by someone). The question isn't "Is AI overhyped?" The question is "Who survives the consolidation to build the post-bubble infrastructure?" That's the game.

What This Means for You

Here's where this gets personal. If you're building your career around AI tools, you need to understand this consolidation because it determines which skills will matter in 3 years.

If the infrastructure consolidates to 3-4 hyperscalers:

  • Access becomes the constraint (not capability)

  • Integration matters more than innovation (connecting systems > inventing new ones)

  • Vendor relationships become critical (knowing who to call when APIs break)

If AI capabilities commoditize (the more likely path):

  • The models become table stakes (everyone has access to "good enough")

  • Differentiation shifts to application (what you build with the tools)

  • Domain expertise matters more than AI expertise (understanding your problem > understanding transformers)

My bet? The capabilities commoditize. The models get cheaper, faster, and more accessible. And the professionals who win are the ones who understand their domain deeply enough to apply AI effectively—not the ones who understand AI deeply enough to build models.

And as AI moves from cloud APIs to local agents running on edge devices, the professionals who understand end-user workflows will matter more than those who understand training infrastructure.

This is why Episode 015 (next week) focuses on the "Fabless Professional" strategy. You don't need to own the foundry. You need to know how to design for it.

My Check-In

Applying my own starfish framework: I'm swimming (moving forward, but it takes effort). The new role is exciting but demanding. Learning new systems (JIRA, Confluence, the team's workflows) while also trying to contribute value. It's the right kind of challenge, but I'd be lying if I said it wasn't tiring.

The newsletter remains my anchor—a way to process what I'm seeing, connect patterns, and stay engaged with this community. Thank you for being here.

Looking Ahead

Next Week (Episode 015): "The Fabless Professional"—How to thrive when you don't own the infrastructure.

Week After (Episode 016): "Not Agentifying the World"—What I'm actually doing in my first 90 days (and why it's not what you'd expect).

Until then—keep paddling beneath the surface. Together.

Joseph

Deeper Dives / Further Reading

This week's analysis draws from:

  • Cisco's Bubble and Recovery: "Cisco stock price crash 25-year recovery AI dotcom bubble" - Business Insider (link)

  • AI Infrastructure Wars: "The evolution of neoclouds and their next moves" - McKinsey (link)

  • Semiconductor Foundry Economics: "From Moore's Law to market rivalry: The economic forces that shape the semiconductor manufacturing industry" - Law & Economics Center (link)
