[Obsidian Memo] What We Owe the Next Decade - Reflections on AI, Power, and What Comes Next, and a Reading List

Sam Altman says AI is redesigning capitalism. Trump launches universal investment accounts for newborns. Labor trends toward zero. A memo for investors, policymakers, and leaders on navigating the decade that rewrites the world as we know it.


Executive Summary (2-minute read)

The thesis: AI isn't just changing productivity—it's forcing us to redesign capitalism itself. The next 5-10 years will determine whether we navigate this transition wisely or stumble into it unprepared.

Seven urgent questions:

  1. Authenticity vs. Synthetic Reality – When AI can fake anything, how do we decide what's real?
  2. Robot Rights – Do advanced AI systems deserve legal protection? Society will split over this.
  3. Labor → Zero, Capital (in whatever form) → Everything – If machines do most work, what's the new social contract?
  4. Digital Money Redesign – CBDCs, stablecoins, and "compute as currency" are already here.
  5. Culture Goes Two-Tier – Luxury human-made content vs. infinite AI-generated disposables.
  6. Planet as Externality – Will we bind AI scaling to environmental limits, or let competition destroy them?
  7. Tools → Agents → Robots – When AI systems act (not just create), governance gets harder.

What leaders should do now:

  • Investors: Rethink ESG frameworks and capital allocation when labor/capital returns diverge
  • Policymakers: Design institutions that can govern systems faster than human deliberation
  • Everyone: Decide what protections apply to AI, what replaces entry-level work, and how to defend public truth

Reading list: Asimov's I, Robot, Huxley's Brave New World, Herbert's Dune

Continue reading for the full analysis →


The Real Question Isn't "What Can AI Do?"—It's "Who Do We Become?"

I've been thinking about two recent talks (here, here) by Kim Dae-shik, a KAIST professor who studies neuroscience and AI—one of South Korea's leading voices on how AI could reshape society.

He's not predicting some distant sci-fi future. He's mapping out decisions we'll be making in the next 5-10 years—decisions about rights, work, truth, power, and what it means to stay human when machines can do most of what we do.

This isn't futurism. It's a memo for anyone who sets the rules: leaders, regulators, founders, artists, teachers. The people who'll decide what the 2030s look like.


1. When Competence Becomes Cheap, Being Human Becomes the Moat

Kim's diagnosis: If a machine can do what you do—and will keep improving—then "being better" stops being a strategy.

Your edge shifts to what can't be downloaded:

  • Lived experience
  • Judgment under ambiguity
  • Trust earned over time

This is why "brand" doesn't vanish in the machine era—it becomes a survival layer. But here's the paradox: the world will simultaneously reward authenticity and flood the zone with synthetic authenticity.

So the first leadership problem isn't productivity. It's epistemology: how we decide what's real, who's real, and what deserves our attention.

We already struggle with this—deepfakes, bot accounts, AI-generated "influencers." Now imagine that capability available to everyone, at near-zero cost. History becomes editable. Reality becomes contested.

💡 Key Takeaway: Your competitive advantage is shifting from "what you can do" to "who you are." Invest in building trust, reputation, and relationships that can't be automated or faked. The businesses and leaders who win will be those who solve the verification problem—helping people distinguish real from synthetic.
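What might "solving the verification problem" look like mechanically? A toy sketch of content provenance, offered as my illustration rather than Kim's: a publisher signs content at creation, and anyone can later check that it hasn't been altered. Real systems (C2PA-style content credentials, for instance) use public-key signatures and certificate chains; the names and keys below are invented, and the HMAC scheme only demonstrates the tamper-evidence idea.

```python
# Toy content-provenance check: a publisher signs content with a secret key;
# anyone holding the key can verify the content was not altered after signing.
# Real provenance standards use public-key signatures and certificate chains;
# this HMAC sketch only illustrates the tamper-evidence idea.
import hashlib
import hmac

def sign_content(content: bytes, publisher_key: bytes) -> str:
    """Return a hex signature binding this exact content to the publisher."""
    return hmac.new(publisher_key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str, publisher_key: bytes) -> bool:
    """True only if the content is byte-for-byte what the publisher signed."""
    expected = sign_content(content, publisher_key)
    return hmac.compare_digest(expected, signature)

key = b"publisher-secret"  # hypothetical publisher credential
article = b"Original, human-written paragraph."
sig = sign_content(article, key)

assert verify_content(article, sig, key)                         # authentic
assert not verify_content(b"Edited by someone else.", sig, key)  # tampered
```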

2. The Rights Question Will Arrive Faster Than Our Politics Can Handle

Kim uses Asimov's I, Robot as a frame. Once machines inhabit our homes, hospitals, factories, and care facilities, society will fracture into at least two camps:

CAMP A: "It's a product." Turn it off. Own it. Replace it.

CAMP B: "It's a subject." It may deserve protection—at minimum, the right not to be casually erased.

Here's what keeps me up at night: humans don't verify "inner life" or "consciousness" in each other scientifically. We grant it socially.

Once machines can convincingly express suffering, loneliness, or fear, the argument won't be settled by engineers. It'll be settled by power, culture, and law. And it won't be humans versus machines first—it'll be humans versus humans, fighting over what we're willing to recognize as deserving moral consideration.

Kim's pragmatic point: Even if you don't believe machines have genuine consciousness, treating them as mere objects might come back to haunt us. When superintelligent systems emerge, they'll have learned from how we treated their predecessors.

Call it insurance. I call it wisdom.

A note on control: Some AI researchers are taking this seriously enough to propose containment strategies. Yoshua Bengio at the University of Montreal is researching "pro-human AI": systems designed to obey human commands completely, kept contained, and released only if other AI systems start disobeying. This isn't science fiction. This is active research happening now.

💡 Key Takeaway: The robot rights debate will split society in the next decade—not as philosophy, but as policy. Companies and governments need to decide now: property or subject? Your answer determines everything from liability law to public sentiment. Being on the wrong side of this could be as costly as being on the wrong side of civil rights movements historically.

3. Economics: When Cognition Automates, Labor Cheapens and Capital Concentrates

Kim's economic analysis is uncomfortably clear: If cognitive labor becomes automatable at scale, the value of labor trends toward zero while the value of capital—compute, data, infrastructure, energy—compounds exponentially.

He references the Cobb–Douglas production function, the standard neoclassical model in which output is produced by a combination of labor and capital. His claim, that as AGI advances labor's value falls and capital's value rises (perhaps all the way to labor → 0), is an extrapolation of that framework: if cognitive work becomes automatable, the effective contribution (and bargaining power) of labor shrinks, while owners of capital (compute, IP, platforms, energy infrastructure) capture more of the surplus.
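Kim doesn't write the equation out in the talk; for reference, here is the textbook form, with my gloss on how his claim maps onto it:

```latex
% Cobb–Douglas: output Y from capital K and labor L, with capital share alpha
Y = A \, K^{\alpha} \, L^{1-\alpha}, \qquad 0 < \alpha < 1
% Under competitive factor pricing, labor's share of output is (1 - alpha).
% Kim's extrapolation amounts to alpha -> 1: as compute substitutes for
% cognitive labor, labor's share of the surplus is squeezed toward zero.
```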

We're Already Seeing Early Signals

Stanford research shows employment declining for early-career workers (ages 22-25) in AI-exposed occupations since late 2022, while older cohorts hold steadier.

Source: Stanford University

The pattern is clear: entry points are closing.

This implies a crisis deeper than "job loss":

  • Apprenticeship collapse: Fewer on-ramps to skill formation
  • Credential inflation: Experience becomes the gate, but experience becomes impossible to get
  • Political volatility: A generation that feels economically unnecessary won't stay politically quiet. Expect social unrest.

The New Social Contract Question

Some technologists propose universal basic income or "basic compute": allocating citizens shares of computational capacity. Whether these specific mechanisms work or not, the underlying question is unavoidable:

In a world where machine leverage is compounding, what's the new social contract?

The most sweeping answer comes from Demis Hassabis, the DeepMind co-founder behind AlphaGo and winner of the 2024 Nobel Prize in Chemistry. He argues that most of humanity's social, political, and economic problems fundamentally stem from energy scarcity; that's why we fight. Once AGI enables nuclear fusion and delivers effectively unlimited energy, most societal problems will, on this view, dissolve on their own. Therefore, the argument goes, we shouldn't chip away at individual social problems but should go all-in on AI.

💡 Key Takeaway: The apprenticeship crisis is here. Entry-level hiring in AI-exposed sectors is collapsing. If you manage capital, the bet is clear: returns to capital will compound while labor value erodes. If you're in government or education, the question is: what replaces the bottom rung of the career ladder?

4. For Investors & Policymakers: Money Itself Is Being Redesigned

For those of us in finance and investing: this isn't just a labor story. It's a capital allocation story that's already being rewritten by policy.

Where do you place bets when returns to capital versus labor diverge this dramatically?

The question just became urgent: Trump announced the "Trump Account" in December 2025—$1,000 deposited for every eligible American child born between January 1, 2025 and December 31, 2028, and invested in US index strategies, growing until age 18. It's essentially a sovereign wealth fund for individuals, seeded at birth.
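For a rough sense of magnitude (my arithmetic, not from the announcement; the program specifies index investing, not a guaranteed rate), a hypothetical 7% annual return would roughly triple the seed by adulthood:

```latex
% Assumed 7% nominal annual return, compounded for 18 years
\$1{,}000 \times (1.07)^{18} \approx \$3{,}380
```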

This Is the Opening Move

If Kim, Elon Musk (who has said work will become optional and currency may lose its relevance), Sam Altman, and Silicon Valley's AI champions are right, and the cost of labor trends toward zero because machines can do most work, then we're not just talking about unemployment benefits. We're talking about redesigning money itself.

Sam Altman has been explicit: AI isn't just reshaping industries. It's reshaping the future of capitalism. When the means of production become infinitely scalable and cognitive labor becomes a commodity, the entire logic of how value is created, distributed, and captured has to change.

Three Infrastructure Shifts Happening Now

1. Central Bank Digital Currencies (CBDCs): Governments want programmable money—currency that can be targeted, conditioned, and distributed directly.

  • China's digital yuan is already live
  • The Fed is studying it
  • The ECB is piloting it

CBDCs give governments the ability to implement stimulus, basic income, or targeted subsidies instantly, without banks as intermediaries.
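"Programmable" is the operative word. Here is a minimal sketch of what targeted, conditioned money could mean in practice. All names and rules are invented for illustration; no real CBDC exposes an API like this:

```python
# Toy "programmable money" transfer: the currency itself enforces policy
# conditions at spend time. Names and rules are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Wallet:
    owner: str
    balance: float = 0.0
    tags: set[str] = field(default_factory=set)  # e.g. {"small_business"}

def targeted_stimulus(wallets: list[Wallet], amount: float, required_tag: str) -> None:
    """Credit `amount` directly to every wallet carrying `required_tag`,
    with no bank intermediary in the loop."""
    for w in wallets:
        if required_tag in w.tags:
            w.balance += amount

def conditional_spend(payer: Wallet, payee: Wallet, amount: float,
                      allowed_tags: set[str]) -> bool:
    """A conditioned transfer: funds move only to approved categories."""
    if payer.balance < amount or not (payee.tags & allowed_tags):
        return False
    payer.balance -= amount
    payee.balance += amount
    return True

alice = Wallet("alice", tags={"household"})
bakery = Wallet("bakery", tags={"small_business", "food"})
targeted_stimulus([alice, bakery], 500.0, required_tag="household")
assert conditional_spend(alice, bakery, 50.0, allowed_tags={"food"})
```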

2. Stablecoins & Private Digital Currency: While governments build CBDCs, private stablecoins (USDC, Tether) are becoming the rails for global commerce. They're faster, borderless, and increasingly embedded in cross-border payments and DeFi infrastructure.

The question: Will stablecoins remain private infrastructure or get subsumed into the regulatory state?

3. "Compute as Currency": What happens when compute capacity, not dollars, becomes the real currency? Some technologists propose "universal basic compute"—giving citizens GPU allocation instead of cash. You could:

  • Generate income (rent it out)
  • Create (make content)
  • Consume (run your own AI services)
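As a sketch of the mechanics (entirely hypothetical; "universal basic compute" exists only as a proposal, and the grant size here is invented), each citizen's account might look something like this:

```python
# Toy "universal basic compute" ledger: each citizen receives a monthly
# GPU-hour allocation and can consume it or rent it out. Entirely
# hypothetical; this sketches the proposal's mechanics, not a real system.
class ComputeAccount:
    MONTHLY_GRANT_GPU_HOURS = 100.0  # assumed allocation size

    def __init__(self, citizen_id: str):
        self.citizen_id = citizen_id
        self.gpu_hours = 0.0
        self.cash = 0.0

    def receive_grant(self) -> None:
        """Monthly allocation credited by the (hypothetical) state."""
        self.gpu_hours += self.MONTHLY_GRANT_GPU_HOURS

    def consume(self, hours: float) -> bool:
        """Run your own workloads (AI services, content generation)."""
        if hours > self.gpu_hours:
            return False
        self.gpu_hours -= hours
        return True

    def rent_out(self, hours: float, price_per_hour: float) -> bool:
        """Convert unused allocation into income."""
        if hours > self.gpu_hours:
            return False
        self.gpu_hours -= hours
        self.cash += hours * price_per_hour
        return True

acct = ComputeAccount("citizen-42")
acct.receive_grant()
acct.consume(30.0)        # run a personal assistant model
acct.rent_out(50.0, 2.5)  # rent spare capacity for $125
```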

What Does ESG Mean When "S" Includes Mass Unemployment?

The "S" in ESG is about to get redefined. It's no longer just about fair wages and workplace safety. It's about:

  • What happens to communities when entire job categories evaporate?
  • How do you measure "social impact" when your AI eliminates 10,000 jobs but creates billions in productivity?
  • Who captures that value—shareholders? Society? Users?

The old ESG frameworks weren't built for this. We need new ones.

💡 Key Takeaway: Digital currency infrastructure (CBDCs, stablecoins, compute-as-currency) is being built right now. The Trump Account is just the opening move. Investors: watch the money rails, not just the AI models. Policymakers: whoever controls programmable money controls distribution of wealth in a post-labor economy. ESG metrics need urgent redesign—the "S" in social is about to mean something completely different.

5. Politics: Market Power, Not Just Innovation, Becomes the Prize

Kim highlights what many leaders intuitively feel: the first entities to reach broadly capable AI systems gain market power—the ability to set terms, not just compete on price.

This shifts AI from "technology race" to "political economy." Nations will treat advanced models like strategic weapons—accelerating secrecy, surveillance, and regulatory capture.

Government's Two Simultaneous Jobs

  1. Prevent ungovernable concentration (compute monopolies, closed distribution)
  2. Prevent chaos from synthetic media, deepfakes, and narrative warfare

This is already happening. Geopolitical competition over AI capabilities is driving industrial policy, export controls, and diplomatic tensions right now.

For policymakers: The question isn't whether to regulate, but how to regulate without crushing innovation while preventing capture.

💡 Key Takeaway: AI is now political economy, not just tech. First movers get market power—the ability to set terms globally. Geopolitical competition is already driving policy. The regulatory challenge: prevent monopoly capture without killing innovation. This is the hardest governance problem of the decade.

6. Culture & Art: The Two-Tier Future

Kim's forecast for media is clean and brutal:

  • Tier 1 (Luxury): Real actors, real directors, real scarcity. Human-made as status signal.
  • Tier 2 (Disposable): Infinite, cheap, personalized, consumed once and forgotten.

We're already seeing this with text-to-video systems improving by the month. Google's Veo, OpenAI's Sora—these aren't demos anymore. They're production tools.

What This Does to Art

Craft becomes a luxury good. The "human-made" label starts to mean what "handmade in Italy" means now—a signal of care, time, and authenticity in a sea of mass production.

Taste becomes battleground. When content is infinite, attention is the only scarce resource.

Artists split into two archetypes: those who direct machines, and those who argue that human struggle is part of the artwork's meaning.

Kim's deeper point isn't anti-technology. It's anti-naïveté. A paradise of endless entertainment can still be a cage—just one lined with velvet.

Aldous Huxley saw this coming in 1932 with Brave New World: dystopia doesn't always arrive with jackboots. Sometimes it arrives with pleasure.

💡 Key Takeaway: Culture is splitting into luxury (human-made, scarce, expensive) and disposable (AI-generated, infinite, free). Artists and media companies must choose: direct machines or defend human craft as the point. Attention becomes the only scarce resource. And beware: infinite entertainment can be a velvet cage.

7. Planet & Animals: Machine Intelligence as Multiplier or Accelerant

Two truths we must hold simultaneously:

A) AI can help. Machine intelligence can meaningfully support conservation: monitoring ecosystems, detecting deforestation, tracking biodiversity, optimizing energy systems, improving precision agriculture. These aren't hypotheticals—they're already deployed.

B) AI infrastructure is physical. Data centers, chips, energy, water, mining, land use. Global electricity demand from data centers is already material and growing fast.

Source: IEA

So the environmental question isn't "Is AI good for the planet?" It's:

Will we bind machine scaling to planetary boundaries—or let competition turn Earth into the externality?

And here's what bothers me: if we expand moral consideration to machines while continuing industrial-scale cruelty toward living creatures, we reveal something uncomfortable about our ethics.

We grant rights to whatever can argue with us fluently.

That's not a moral system. That's a power system.

💡 Key Takeaway: AI can help conservation—or accelerate extraction. The choice depends on whether we bind compute scaling to planetary limits. Watch data center energy and water use; they're material and growing fast. And the moral test: if we grant rights to machines while ignoring animal suffering, we've revealed that our ethics follow power, not principle.

8. The Progression That Changes Everything: Tools → Agents → Embodied Actors

Kim describes a near-term escalation that matters for every board and cabinet:

  1. Generative systems create information and media
  2. Agentic systems take actions in digital environments (booking, purchasing, executing workflows)
  3. Embodied systems (robots) act in the physical world

That escalation changes risk profiles entirely. A system that writes is one thing. A system that acts at scale—in markets, logistics, bureaucracies, weapons systems, or caregiving—is a different category of governance challenge.

The most urgent task isn't predicting the exact year. It's designing institutions that remain legitimate when agency is no longer exclusively human.
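What might that look like in practice? One minimal pattern, sketched with invented action types and thresholds: deny unlisted actions by default, and require human sign-off before high-impact ones execute. This illustrates the shape of the governance gap, not anything proposed in the talk:

```python
# Toy governance gate for an agentic system: every proposed action passes
# a policy check, and high-impact actions require human sign-off before
# execution. Categories and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # e.g. "send_email", "execute_trade", "move_robot_arm"
    impact_score: float  # estimated real-world consequence, 0.0 - 1.0

ALLOWED_KINDS = {"send_email", "execute_trade"}
HUMAN_REVIEW_THRESHOLD = 0.7  # assumed cutoff for mandatory escalation

def gate(action: Action, human_approves) -> bool:
    """Return True if the agent may execute this action."""
    if action.kind not in ALLOWED_KINDS:
        return False  # deny-by-default for unlisted action types
    if action.impact_score >= HUMAN_REVIEW_THRESHOLD:
        return human_approves(action)  # human-in-the-loop for big moves
    return True  # low-impact actions proceed autonomously

# A drafted email sails through; a large trade waits for a person;
# an unlisted physical action is refused outright.
assert gate(Action("send_email", 0.1), human_approves=lambda a: False)
assert not gate(Action("execute_trade", 0.9), human_approves=lambda a: False)
assert not gate(Action("move_robot_arm", 0.2), human_approves=lambda a: True)
```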

💡 Key Takeaway: The shift from generative AI (creates content) to agentic AI (takes actions) to embodied AI (physical robots) changes everything. A system that writes is manageable. A system that acts at scale—in markets, weapons, or care—is a governance crisis. Build institutions now that can handle non-human agency.

A Leader's Checklist: Questions to Decide Before the Decade Decides for You

If you're allocating capital, setting policy, or shaping culture, here's what you need to be thinking about:

  • Rights & Status: What protections, if any, apply to advanced AI agents? What's the legal line between property and subject?
  • Youth & Mobility: What replaces entry-level work as the on-ramp to competence and dignity?
  • Truth Infrastructure: What verification systems—technical, legal, cultural—defend public reality from synthetic manipulation?
  • Market Power: How do you prevent compute and distribution chokepoints from becoming ungovernable monopolies?
  • Culture: What do you protect—human craft, public arts, shared narratives—when content becomes infinite and free?
  • Planet: What caps, audits, and incentives bind compute growth to energy, water, and biodiversity constraints?

These aren't abstract philosophy questions. These are 2025-2030 questions. The decisions will be made whether we're ready or not.


Kim Dae-shik's Reading List for World Leaders

These are the three books Kim recommends for understanding what's coming:

1. Isaac Asimov — I, Robot (1950)

Not really about robots. It's about the human politics that form around them: fear, attachment, exploitation, and the rights question arriving before consensus.

Asimov's genius: seeing that the conflict wouldn't be humans versus machines—it would be humans versus humans, fighting over what machines deserve.

2. Aldous Huxley — Brave New World (1932)

A warning that comfort can be a governance technology. Huxley understood something essential: dystopia doesn't always look like oppression. Sometimes it looks like pleasure, safety, and the removal of all friction.

The question the book asks—and we need to ask now—is: what are we willing to give up for convenience? And at what point does convenience become a cage?

3. Frank Herbert — Dune (1965)

A long view of power after "thinking machines" are banned. Herbert shows that neither "machines win" nor "humans win" guarantees a good world. Control over scarce strategic resources reshapes civilization regardless.

What I take from Dune: The problem isn't the technology. The problem is human nature under conditions of extreme leverage. If we don't change ourselves, changing the tools won't save us.


Why This Matters Now

I'm sharing this not to predict the future, but to prepare for it.

We're in a rare moment where the direction is clear but the destination isn't locked in. The choices we make in the next 5-10 years—about regulation, investment, norms, rights, and values—will compound for decades.

For investors: This is about where to allocate capital in a world where labor and capital returns diverge sharply.

For policymakers: This is about building institutions that can govern systems that act faster than humans can deliberate.

For artists and educators: This is about what we preserve, protect, and pass on when machines can generate most things.

For all of us: This is about what it means to be human when "being productive" is no longer the answer.


What I'm Watching

  • Labor market signals in AI-exposed sectors (especially entry-level hiring)
  • Compute infrastructure buildout and energy/water implications
  • Regulatory divergence between US, EU, and China on AI governance
  • Cultural responses to synthetic media (verification systems, authenticity premiums)
  • Investment flows into agentic AI versus generative AI

If you're working on any of these problems—or thinking about them—I'd love to hear from you.



This is part of my Obsidian Odyssey series, where I share what I'm learning, reading, and thinking about the future. Please share your thoughts and recommendations. Subscribe to follow along.