[AI Notebook] The $3 Trillion Divergence: Physical AI, Geopolitics, and the Carbon Human Premium #FutureOfWork
$3 trillion in capital is reshaping the global order. From the rise of "Physical AI" supply chains to the death of the legacy degree, rules of the game and the future of work are changing. How should we strategize and survive in this synthetic age?
AI Notebook Series: Practical AI + World Landscape + Workflow Upgrades
"Where attention goes, capital flows. Where capital flows, a new world is born."
$2.7 Trillion.
The Bottom Line
The claim that "AI is here to stay: the gap is becoming unbridgeable" is not hyperbole. We are in a brief window where the infrastructure is being laid. The trillions of dollars being spent today are building a high-speed rail for intelligence. Those who learn to ride it now (the AI Natives) will operate at a velocity that is 100x the speed of the AI Hesitant.
🦅 The Actionable Takeaway #FutureOfWork: Don't just invest in the companies building the rail (Nvidia, Hyperscalers). Invest in your own 'sovereign' capability to use it. The divide between those who use AI to amplify their humanity and those who compete with AI is the single biggest divergence risk in your personal and professional portfolio right now.
🔈I've been thinking about 'human consciousness' for quite some time and during this holiday season, I've rediscovered the value of the classics. You will find the launch of the new series on this in the coming weeks on the Obsidian Odyssey platform. I hope you can join me in exploring the human mind and consciousness through the classics and also share your thoughts in our private salons.
I. The Historical Parallel: Why First-Movers Don't Always Win
Background: I've spent considerable time studying the rise and fall of nations and what makes nations, organizations, and individuals fail or succeed (the second derivative of that question: what constitutes a 'good' nation, government, or leadership? Welcome to my world!), from the Roman Empire's collapse to the USSR's dissolution, from Britain's industrial-revolution dominance to America's postwar hegemony. One pattern keeps emerging: first-mover advantage doesn't always guarantee victory.

Have you ever heard of a man named Zheng He? Probably not, because history only remembers the winners. We have all heard of Columbus, but not Zheng He, and that is precisely why we need to study history to understand the current AI race's league table.
Between 1405 and 1433, 87 years before Columbus, China's Admiral Zheng He led voyages that, in scale, make the later European expeditions look almost provincial. His treasure fleets are estimated to have numbered around 300 ships per voyage, carrying 27,000–28,000 people. His flagship alone is thought to have been over 120 meters long and 45 meters wide, roughly the footprint of a modern aircraft carrier. The engineering was sophisticated, the logistics extraordinary, the navigational mastery unrivalled.
Now put that next to Europe.
In 1492, Columbus crossed the Atlantic with three small ships, together weighing under 400 tons. His largest vessel, the Santa Maria, was perhaps 27 meters long. Vasco da Gama reached India with four ships. Magellan's circumnavigation of the globe began with five. Crews were tiny by comparison: Columbus sailed with roughly 90 men, da Gama with under 200, and Magellan with around 270.
China had the technology, the capital, and the infrastructure. They were first. By any physical metric—tonnage, manpower, shipbuilding capacity, or logistical complexity—China was the superpower. And yet the names that anchor the global story are Columbus, da Gama, Magellan—not Zheng He.
Europe's smaller, scrappier expeditions rewrote the global order. The difference wasn't technological superiority—it was organizational structure, capital allocation, and the hunger born from scarcity.
What made the difference? Vertical integration of purpose, capital, and execution. Europe's model aligned private incentives with strategic objectives. China's centralized model eventually retreated inward: the court burned the treasure ships, closed the borders, and entered a mode of self-containment. The conclusion Ming China drew from the voyages was that it had all the resources it needed to be self-sufficient. Seeing itself as the cultural and political center of the world (the word for China, 中国, literally means 'middle kingdom'), it judged that there was nothing to be gained from dealing with 'barbarians' beyond its borders. That complacency (India offers a parallel example) is what allowed the West to dominate and colonize much of the world from the 18th century onwards.
Something similar appears to be unfolding today in the AI race. We're witnessing $2.7 trillion in capital reshape the global order—but the question remains the same: Who will control the vertical integration of the entire AI stack, from chips to models to deployment to governance?
This isn't about who builds the flashiest model. It's about who controls the supply chains, the energy, the manufacturing capacity, the regulatory frameworks, and the trust architecture that makes AI deployment possible at scale.
II. Why This Matters Now
We're in Q1 2026. A growing consensus is emerging across conversations with technologists, investors, and policymakers: the gap is becoming unbridgeable. The $3 trillion being deployed today is building a high-speed rail for intelligence. The infrastructure decisions being made right now—which fabs get built, which alliances form, which talent policies pass—will determine winners and losers for the next decade.
My thesis: Those who learn to ride it now—the AI Natives—will operate at a velocity that's 100x the speed of the AI Hesitant. When Replit's CEO Amjad Masad can build and commercialise a functional product in hours using AI agents while traditional teams take months, the velocity gap becomes existential.
Capital markets are signaling something profound—we're not just seeing a sector rotation. We're seeing a foundational rebuilding of the global economic operating system. Over $2.7 trillion has flooded into the top tier of AI companies in just under two years. To put that in perspective, that exceeds the entire market capitalization of many G20 nations' stock exchanges.

AI experts emphasize that this isn't a bubble seeking a pop; it's a Capex Supercycle seeking infrastructure. The 'AI trade' is shifting. Phase 1 was software and chips (LLMs). Phase 2 is the collision of that software with the physical world—what's being called 'Physical AI'—and the geopolitical realignment of the supply chains that build it.
III. The Landscape: The Digital Archipelago & Multi-Polar AI Power
The Core Thesis: National-Control AI Capacity

As the US and China harden into rival AI ecosystems, the world is reorganizing into what I call the Digital Archipelago: a multi-polar landscape where national-control AI capacity (often discussed as sovereign AI) becomes a strategic objective for nations beyond just the two superpowers.
This means the ability to train, deploy, and govern critical models on infrastructure and data that a nation can reliably control—without external kill-switches, foreign cloud dependencies, or geopolitical veto power embedded in the stack.
The trend: geoeconomic fragmentation. We're moving from hyperglobalization (deep integration optimized for efficiency) toward fragmentation—integration filtered through security constraints and political risk management. The IMF has explicitly analyzed this as a structural macro risk: trade, technology, capital, and even payment systems are splitting into blocs and partial-blocs.
Three Groups of Power (Not Just Two)
The power distribution isn't a simple US-China binary. Here's what's actually unfolding:
1) The Two Full-Stack Poles (Still Real, But No Longer Sufficient)
The US and China remain the only actors with credible end-to-end stacks, from chip design and frontier models to hyperscale deployment. But here's the shift: neither pole can operate in isolation anymore. Both depend on chokepoints they don't fully control.
2) The Specialized Power Brokers (Own Chokepoints the Poles Need)
These aren't followers—they hold indispensable inputs that both poles need to secure access to:

3) The Multi-Aligned Builders (Scale, Energy, Capital, Data)
These actors are explicitly attempting to avoid dependency on either pole by assembling their own controllable stack layers:

What I'm watching: These states are trying to convert energy wealth and sovereign capital into compute sovereignty. This isn't about building the flashiest models—it's about owning enough of the stack that they can't be shut out by foreign political decisions.
The AI Stack Map: Power by Layer
Power distribution becomes clearer when viewed by stack layer rather than by country:

The implication: The winner is the coalition that achieves the most complete vertical integration across these layers—not necessarily the actor with the flashiest model demo.
IV. Physical AI & The Trust Shore Premium
What I'm hearing from AI experts: the next frontier is Physical AI—humanoids, autonomous logistics, robotic factories. The US possesses software leadership, but its manufacturing atrophy is becoming a strategic vulnerability. Partnering with China—the world's factory—is politically impossible. The risk of "backdoors" in autonomous systems makes Chinese integration a non-starter.
This creates massive arbitrage opportunities for what I'm calling Trust Shoring.
Nations that possess the 'Holy Trinity' of Physical AI become the strategic winners:
- High-End Logic/Memory Semiconductors (The Brain)
- Advanced Robotics & Automotive Manufacturing (The Body)
- Deep Geopolitical Alignment with the US (The Trust)
My thesis: South Korea is emerging as the most viable large-scale alternative to China. With memory leaders (Samsung/SK Hynix), robotics/manufacturing capability (Hyundai/Boston Dynamics), and treaty-level US alignment, South Korea offers the full stack without the geopolitical flashpoint risk of the Taiwan Strait.
Recent developments tell the story: Nvidia's engagements with Hyundai and Samsung aren't casual. They're the architectural blueprints of this new, exclusive supply chain. What I'm tracking in the data: Samsung has committed $230+ billion to semiconductor expansion through 2042, and Hyundai acquired Boston Dynamics for $1.1 billion—these aren't defensive moves, they're positioning for vertical integration.
Here's why vertical integration matters more than ever: Physical AI isn't just about having the best chip or the best robot. It's about controlling the entire value chain—from semiconductor manufacturing to sensor integration to final assembly—under a unified governance structure that strategic partners can trust. Nations or coalitions that can integrate across the stack will have decisive advantages in deployment speed, security assurance, and cost structure.
Smart capital is already moving: Tiger Global and Sequoia have both increased their South Korean tech allocation by 40%+ in the past 18 months, according to their public filings. The thesis is clear: bet on the integrators, not just the component makers.
Where This Thesis Could Break
I build investment theses to stress-test them. Here's where I could be wrong:
- National-control AI may be overstated: Interdependencies might prove too deep; most countries may accept managed dependence rather than pursue full autonomy. Test: Do export controls actually force durable realignment, or do companies route around them?
- Middle East compute constraints: Capital and energy don't guarantee success without deep technical talent and procurement access. Test: Can UAE sustain frontier training runs independently, or do partnerships keep core IP abroad?
- US immigration advantage erosion: If talent flows tighten, the US 'aggregator model' weakens. Test: Track where top AI researchers choose to work over the next 24 months.
V. From Geopolitics to Personal Strategy: The New Rules of Work
This geopolitical restructuring isn't just reshaping nations—it's reshaping the individual value proposition in the labor market. When supply chains splinter and vertical integration becomes the competitive advantage, the same dynamics apply to human capital.
The Death of the Legacy Degree?
Capital markets are ruthless about efficiency, and AI is exposing the inefficiencies of the ultimate legacy asset: the University Degree. We're witnessing a shift from Credentialism (Legacy Power) to Meritocracy (Output Power).
Take Palantir's recent strategy. They've effectively signaled a short position on traditional higher education, prioritizing "The Palantir Path"—intensive internships that convert directly to full-time roles—over degrees from elite institutions. The conversion rate: 300 interns → 22 full-time offers. That's a 7.3% acceptance rate—more selective than Harvard—but based entirely on demonstrated capability, not pedigree.
What I'm hearing from Eric Schmidt and Reid Hoffman: In an AI-accelerated world, a four-year curriculum is obsolete by sophomore year. The data backs this up: LinkedIn's 2025 Future of Work report shows that AI-skilled roles command a 35-47% salary premium over traditional credentials in the same field. A 22-year-old with a deployed AI agent portfolio is outearning 26-year-olds with master's degrees.
The ROI calculation has fundamentally changed. A $200k investment that takes 4 years to mature, in a field where the technology shifts every 6 months, is a structural mismatch. Consider: Nvidia's H100 GPUs were released in Q3 2022. By the time a freshman who started learning on H100s graduates in 2026, we're already on H200s and B200s. The entire foundation shifted.
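The mismatch argument above can be sketched in a few lines. All figures here are illustrative assumptions (the $200k cost and the 6-month technology cycle come from the post; the salary premium is a hypothetical input, not data from any cited source):

```python
# Back-of-envelope sketch of the "structural mismatch" argument:
# a fixed-length credential vs. a technology on a much shorter cycle.
# All figures are illustrative assumptions.

DEGREE_YEARS = 4
DEGREE_COST = 200_000        # assumed total tuition + fees
TECH_CYCLE_YEARS = 0.5       # "the technology shifts every 6 months"

# How many technology generations elapse before the credential matures?
generations_elapsed = DEGREE_YEARS / TECH_CYCLE_YEARS

# Simple payback: years for an assumed salary premium to repay the cost.
assumed_annual_premium = 25_000   # hypothetical premium from the degree
payback_years = DEGREE_COST / assumed_annual_premium

print(generations_elapsed)   # 8.0 technology generations during the degree
print(payback_years)         # 8.0 years to break even on tuition alone
```

Even with generous assumptions, the credential is repriced eight times before it pays for itself once; that is the structural mismatch in one calculation.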
Real example: Alexandr Wang founded Scale AI at 19, dropping out of MIT. The company was valued at $7.3 billion in its 2021 funding round. His edge wasn't a degree; it was understanding that AI needs labeled data, and building the infrastructure for it before the market consensus formed.
For the next generation, the advice is shifting from "Get into Harvard" to "Build something now." The market is moving toward a pure meritocracy where your ability to leverage AI to solve problems beats the pedigree of your certificate. Even Ivy League universities are refocusing on 'how to define the questions to solve for' rather than vocational training programs.
The uncomfortable truth: this creates winners and losers at an accelerated pace. Those who adapt quickly will thrive. Those who wait for institutional validation will find themselves outpaced.
VI. The Carbon Human Premium: Why 'Empathy' Becomes the Moat
When synthetic humans (agents) can code, analyze balance sheets, and generate pitch decks in seconds, technical competency becomes a commodity. Commodities get priced down.
What AI champions like Demis Hassabis and Sam Altman are saying: To thrive as a carbon human, we should double down on the one asset AI cannot synthesize—Empathy, Trust, and 'Fandom.'
Think of fandom as Client Stickiness or Zero CAC (Customer Acquisition Cost).
- The AI: Solves the problem (The Utility).
- The Human: Defines which problem matters and makes the client feel understood (The Value).

Real numbers: Companies with high fandom metrics operate with fundamentally different unit economics:
- Apple: NPS of 72, <5% CAC-to-LTV ratio, customers upgrade every 2-3 years despite minimal functional necessity
- Tesla: NPS of 97 (highest in automotive), customers become evangelists, referral program drives 15-20% of sales
- Patagonia: Customers pay 40-60% premium over comparable quality outdoor gear, driven by mission alignment
In the AI economy, the most mispriced asset is Fandom. What I'm learning from business strategists like Prof. Choi: the 'K-Pop model' or 'Taylor Swift' isn't just for entertainment—it's the blueprint for the next generation of business moats. Why? Because in a world of infinite AI-generated content, attention is the only scarce resource.
Fandom is Captured Liquidity: A customer buys a product. A Fan buys the identity.
How to measure it: Look for Net Promoter Scores (NPS) above 70, organic social engagement rates (not paid), user-generated content volume, and—most importantly—willingness to pay premium pricing without performance justification. When customers buy your product even when competitors offer better specs at lower prices, you have fandom.
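The screen above can be expressed as a small sketch. The NPS formula is the standard one (percentage of promoters minus percentage of detractors on a 0–10 survey scale); the 70-point threshold is this post's heuristic, while the 5% organic-engagement cutoff and the sample survey data are illustrative assumptions:

```python
# Minimal sketch of the "Fan Capital" screen described above.
# NPS formula is standard; thresholds and sample data are assumptions.

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def fandom_screen(scores, organic_engagement, pays_premium_without_specs):
    """Rough screen: high NPS + organic pull + pricing power without spec wins."""
    return (nps(scores) > 70
            and organic_engagement > 0.05          # assumed cutoff
            and pays_premium_without_specs)

survey = [10, 9, 9, 10, 8, 9, 10, 7, 10, 9]        # hypothetical responses
print(nps(survey))                                  # 80.0
print(fandom_screen(survey, organic_engagement=0.08,
                    pays_premium_without_specs=True))   # True
```

The point of the third input is the one the post stresses: if customers only buy when you win on specs, the screen fails no matter how high the NPS.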
My thesis: We should stop looking at 'Brand Loyalty' (a legacy metric) and start looking for 'Fan Capital.' Does the company have a passive audience, or an active community that defends, evangelizes, and co-creates? The latter is the only defense against the deflationary pressure of AI commoditization.
- Skill to Short: Rote analysis, basic coding, memorization.
- Skill to Long: Storytelling, communication, negotiation, community building, and ethical judgment.

The Rise of the 'Sovereign Free Agent'
What I keep hearing from AI experts: the era of the 'Safe Corporate Job' is ending. The era of the Sovereign Individual is beginning.
The data is stark: Google announced 12,000 layoffs in January 2023. Microsoft: 10,000. Amazon: 27,000 across 2022-2023. These aren't recession-driven cuts—they're structural. McKinsey estimates that generative AI will automate 30% of hours worked in the US economy by 2030, with middle management and coordination roles hit hardest.
Legacy corporations are shedding entry-level and middle-management layers because AI agents can now absorb the coordination and process work those layers existed to handle. This isn't a recession; it's a structural "flattening."
How to position: We should stop viewing ourselves as employees selling time, and start viewing ourselves as a "Business of One" selling judgment.
The 30-Minute Alpha Protocol
Prof. Choi suggests a practice worth testing: 30 minutes of 'AI PT' (Personal Training) every day.
Here's the pattern: Most professionals are AI Hesitant. They wait for IT to install Copilot. The AI Native spends 30 minutes daily testing new tools (Claude, Midjourney, Perplexity, Cursor).
Real example: Sarah Guo, former partner at Greylock, now runs Conviction. She credits her daily AI experimentation practice as the reason she saw the infrastructure opportunity before consensus formed. Conviction's $100M debut fund is focused entirely on AI-native investing, and its current portfolio includes several unicorns.
The math: Over 3 years, this creates an unbridgeable "Intellectual Capital" gap. You aren't just faster—you're operating on a different operating system.
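The arithmetic behind that claim is simple enough to write down. The 30-minutes-per-day protocol is from the post; the zero-hours baseline for the AI Hesitant is an illustrative assumption:

```python
# Sketch of the "30-Minute Alpha" arithmetic: daily practice compounds
# into a large absolute-hours gap. Baseline of zero is an assumption.

DAILY_MINUTES = 30
YEARS = 3

native_hours = DAILY_MINUTES / 60 * 365 * YEARS   # the AI Native's practice
hesitant_hours = 0                                 # assumed: waits for IT rollout

print(native_hours)            # 547.5 hours of hands-on tool practice
print(native_hours - hesitant_hours)   # 547.5-hour gap after 3 years
```

Roughly 550 hours is the equivalent of more than three months of full-time work spent exclusively on tool fluency, which is the "different operating system" the post describes.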
Become a Full-Stack Human
In the past, we specialized (e.g., "I am a lawyer" or "I am a portfolio manager"). Today, AI handles the specialization. We should become the General Manager of AI Agents.
Old Way: "I write code."
New Way: "I architect the solution, prompt the coding agent to write it, debug it with a review agent, and deploy it."
The liberating truth: The barriers to entry for creating value have collapsed. You don't need a team of 20 to build a software product or a media empire. You need a vision, a "Fan" base, and a stack of AI agents.
Real example: Pieter Levels built multiple 6-figure ARR businesses (Nomad List, Remote OK, PhotoAI) as a solo founder using AI tools. His Twitter following of 550K+ acts as zero-CAC distribution. This is the "Sovereign Free Agent" model at scale.


VII. The Risks I'm Watching
- Regulatory Intervention: If governments move to heavily regulate AI deployment (particularly in labor displacement), the "flattening" could slow significantly. The EU is already moving in this direction. Test: Monitor labor protection legislation in major economies over the next 12-18 months.
- AI Development Plateau: If we hit diminishing returns on model capability (as some researchers suggest with current architectures), the "100x velocity" gap may not materialize. Test: Track benchmark improvements on ARC, MMLU, and HumanEval over the next 2 years.
- Social Instability: Extreme meritocracy creates extreme inequality. If the pace of disruption exceeds society's ability to adapt, we could see political backlash that reshapes the entire landscape. The "unbridgeable gap" could become a political crisis rather than an economic opportunity.
My conviction: These risks don't invalidate the directional thesis, but they affect timing and magnitude. The smart play is to position for the trend while maintaining optionality.
Questions for the Obsidian Odyssey Community - What Do You Think?
I've laid out what I'm observing about the geopolitical supply chain shift, the collapse of credential premiums, and the rise of "fandom economics" in the AI age. But I'm curious:
Which of these shifts do you find most compelling? Which do you think I'm overestimating?
Are you building your own "sovereign capability," or are you waiting for your organization to adapt? Have you found ways to measure and build your own "fandom" in your professional life?
I would love to hear your take on AI—not just the technology, but how you're personally positioning for this transformation.
Note: This post draws on insights from Professor Choi Jae-boong (Sungkyunkwan University) and his lecture on "The Coming Future of AI and Power." Additional perspectives from conversations with investors at Sequoia Capital, a16z, and Tiger Global, as well as analysis from McKinsey's "The Economic Potential of Generative AI" (June 2023) and LinkedIn's "Future of Work Report 2025."
Other videos I am watching...
This is part of my Obsidian Odyssey AI Notebook series, where I share what I'm learning, reading, and thinking about the future. Please share your thoughts and recommendations. Subscribe to follow along.