I recently watched fantasy author Brandon Sanderson’s keynote “We Are The Art,” and it crystallized something I’d been thinking about for some time. His argument was simple: AI can now produce output that people can’t distinguish from human-made art - songs that land at the top of the charts, text passages that pass blind tests, images that win competitions. But, Sanderson argued, the piece was never the only product. The product is also how the act of creation changes you as a person. The struggle, the dead ends, the breakthroughs - that’s what transforms an amateur into a professional. AI-generated art skips this process, and in doing so, it “steals the opportunity for growth.”

He’s talking about art, but I see the same argument being made for other domains where humans create things.

The Hidden Product

In education, consider the essay. Any student can now prompt an AI to produce a competent paper on almost any topic. The output is indistinguishable from - maybe better than - what most students would write. But every educator knows the essay was never really the product. The product was the transformation: the student who had to wrestle with contradictory sources, construct an argument, discover that their first thesis was wrong, and rebuild it. The struggle is where the learning happens. As the Fordham Institute put it: “The more we rely on AI to summarize, paraphrase, or interpret the world for us, the less capable we become of making sense of the world ourselves.”

Now consider software. In 1985, computer scientist Peter Naur published a paper called “Programming as Theory Building,” which argues that the primary product of software development is not the code but the theory - the mental model, in the minds of the engineers, that maps the system to the real world. I discovered this paper through a post by Facundo Olano, who argued that this is why legacy software is so painful - not because the code is bad, but because the team’s understanding of it has been lost. The code tells you what and how, but rarely why.

Over more than two decades of building systems, I came to take pride in a specific feeling: when a client pointed to a bug or proposed a new feature, I knew exactly what that meant for the codebase. Not because I had memorized every line, but because the act of building had given me a theory of the entire system - its constraints, its trade-offs, its capabilities. The code was an artifact. The understanding was the real product. This is also why we often hear about software companies being acquired together with their teams (which usually have incentives to stay) - the code without the team often has no value.

Roots and Fruits

In 1990, C.K. Prahalad and Gary Hamel published “The Core Competence of the Corporation” in Harvard Business Review. Their central metaphor was that a corporation is like a tree. The visible leaves and fruit are the products customers buy. The trunk and branches are the core products. And the roots, often invisible, are the core competencies - the collective learning, integrated skills, and institutional knowledge that take years to develop.

They warned that executives who focus only on the fruit and cut the roots to save money will get a good harvest this year and a dead tree the next. Companies that outsourced their core competencies for short-term savings lost the ability to innovate and became dependent on suppliers who retained the knowledge they had surrendered.

Replace “outsourcing manufacturing” with “replacing developers with AI” and the argument maps perfectly.

When a company replaces its engineers with AI, the visible output - shipped features, resolved tickets, merged pull requests - may look fine, perhaps even better. The ‘fruit’ is abundant, but the roots are dying. The institutional knowledge, the domain understanding, the architectural judgment that accumulated through years of struggling with the problem - that pipeline is severed.

Jason Gorman coined a term for this: comprehension debt. Technical debt is the code you chose to write poorly. Comprehension debt is the code you never understood to begin with. It’s a ticking time bomb. When something inevitably breaks - and it will - nobody in the team will know why. The only option is to ask the AI to fix it, which adds another layer of code nobody understands on top of the existing layer of code nobody understands.

One might argue that this is no different from assigning work to interns or junior developers. But there’s a crucial distinction: people take responsibility for what they produce. A developer who cut a corner knows they cut a corner. When the issue surfaces, they say “I know what happened” and can fix it. They carry context. AI carries none. Each interaction starts from zero¹.

The Outsourcing

The pattern of replacing internal capabilities with external output isn’t new - it’s just wearing a different hat. From the perspective of knowledge building, AI replacement and traditional outsourcing are functionally equivalent. Both produce the artifact while extracting the understanding from the organization.

| | Traditional Outsourcing | AI Replacement |
| --- | --- | --- |
| Output | Delivered | Delivered |
| Domain understanding | Builds in the outsourced team, not yours | Builds nowhere |
| Accountability | Diffused across contract boundaries | Diffused into a black box |
| When things break | “Call the vendor” | “Re-prompt the model” |
| Long-term effect | Hollowed-out capability | Hollowed-out capability |

There’s a critical difference, though, that makes AI replacement potentially worse: when you outsource to humans, at least someone builds the understanding. Those people develop domain expertise. They can be brought in-house. Their knowledge persists somewhere in the world. When you outsource to AI, the understanding evaporates entirely. Nobody learns. It’s a knowledge sink.

Internal and External Goods

The philosopher Alasdair MacIntyre, in After Virtue (1981), drew a distinction that ties all of this together.

Every practice - painting, carpentry, medicine, software engineering - produces two kinds of goods. External goods are things like money, status, and power. They can be achieved through many means, including shortcuts: if someone else paints the painting and you take credit, you still get paid. Internal goods are the excellence, understanding, and character transformation that can only be achieved by engaging in the practice yourself. There is no shortcut. You cannot outsource learning to see like a painter. The skill, the judgment, the way it changes how you perceive - these are available only to the person doing the work.

Crucially, MacIntyre’s point isn’t about individual self-improvement - it’s about communities of practice. A practice has shared standards, a history, and a culture of mentorship. In software, that’s code review norms, architectural conventions, the senior engineer who explains why we don’t do it that way. When those communities erode, the practice itself loses coherence.

MacIntyre’s argument: when institutions prioritize external goods over internal goods, the practice degrades and eventually dies.

This is the frame that unifies Sanderson’s argument about art, the educator’s argument about essays, and the software argument about code. Shipped code is an external good - it can be produced by humans, AI, contractors, anyone. The understanding that develops through building is an internal good - available only to the person who struggled with it.

Companies optimizing for AI-generated output are systematically maximizing external goods while destroying the pipeline for internal goods. In MacIntyre’s terms, they are corrupting the practice.

But Haven’t We Heard This Before?

A fair objection: every wave of automation has triggered the same fears. Weavers said hand-crafting built irreplaceable understanding. Assembly programmers said compilers would destroy systems knowledge. And in each case, the understanding didn’t vanish - it shifted up a level of abstraction. The weaver became the textile engineer. The assembly programmer became the systems architect.

Why is this time different?

Because previous tools were deterministic shortcuts - they did faster what we already knew how to do, following principles we designed and could inspect. A compiler optimizes register allocation using heuristics vetted by the community. A query planner chooses join order based on statistics you can examine. A spell-checker catches your typos but doesn’t write your argument. These tools compressed effort while preserving legibility. You could always trace the tool’s reasoning, disagree with its choices, and override them - because you shared a principled framework with the tool. They augmented the hidden product by eliminating cognitive friction while keeping you on a path you understood.
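To make the legibility point concrete, here is a minimal sketch using SQLite through Python’s standard library - the tables and query are made up purely for illustration. The point is that a deterministic tool like a query planner will state exactly which scans and indexes it intends to use, and you can read that plan, disagree with it, and change the schema or query to override it.

```python
import sqlite3

# A deterministic tool that exposes its reasoning: SQLite's query planner.
# The schema and query below are hypothetical, chosen only to illustrate.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    CREATE INDEX idx_orders_user ON orders(user_id);
""")

# Ask the planner how it would execute the join - before running it.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT u.name, o.total
    FROM users u JOIN orders o ON o.user_id = u.id
    WHERE u.id = 42
""").fetchall()

for row in plan:
    # Each row describes one step of the plan, e.g. a primary-key lookup
    # on users and an index search on orders via idx_orders_user.
    print(row)
```

The specific output doesn’t matter; what matters is that the plan is there to be read, and a developer who dislikes it can drop or add an index, rerun the command, and watch the plan change for reasons they can follow.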

AI code generation is different in kind. It produces solutions derived from statistical patterns in training data we didn’t curate, can’t fully inspect, and don’t share a principled framework with. Small changes in how you phrase a request can produce structurally different solutions - making it hard to build a mental model not just of the code but of the tool itself. You provide the goal and AI provides a solution that emerged from patterns you have no access to. The developer risks becoming a system integrator connecting black boxes of code they never wrote and don’t fully understand. The opportunity to build a mental model through struggle and discovery is bypassed.

Previous abstractions moved the struggle up. AI threatens to remove it entirely.

There is an optimistic version of this story. Barbara Oakley’s Learning How to Learn introduced me to what learning scientists call desirable difficulties - training conditions that feel harder in the moment but produce superior long-term retention. Struggle is not a bug in skill development but the mechanism through which durable learning happens. A recent study confirmed this directly: programmers who used AI assistance scored 17% lower on a mastery quiz than those who struggled without it.

That being said, AI-generated code could become a teaching substrate - reviewing AI output with a junior and asking “Why this pattern? What are the failure modes?” But in practice, almost nobody is doing this. The incentive structure points the other way - toward speed, toward shipping, toward measuring the external goods while the internal ones quietly atrophy.

The Rational Bet

None of this is lost on the executives making these decisions - I assume many of them understand the risk perfectly well. Their calculation is different: it’s a bet on timing.

The logic goes as follows: if AGI² arrives within 2-3 years - as [Dario Amodei](https://www.forbes.com/sites/anishasircar/2026/01/28/anthropic-ceo-warns-superhuman-ai-could-arrive-by-2027-with-civilization-level-risks/) and Sam Altman predict - then companies that already restructured around AI will be positioned to dominate. Human understanding becomes a legacy asset. Better to move early and take the transition costs now.

The timeline, though, spans a breathtaking range. Optimists in the AI industry say 2026-2028. The AI Futures Model places a fully automated coder - AI that can independently handle large software projects - at a median of 2032. Yann LeCun says “years, if not decades.” Stanford’s AI institute declared that in 2026 “the era of AI evangelism is giving way to evaluation.” The Atlantic asked in February 2026: “Do you feel the AGI yet?” - with the implied answer being not really.

The problem with the rational bet isn’t the logic - it’s the asymmetry.

Knowledge destruction and knowledge rebuilding are not symmetric operations. You can fire 700 people in a week, but you cannot rebuild the institutional knowledge those 700 people carried by hiring 700 new people, no matter how experienced they are. The knowledge is gone, and the hires start from zero. This is Prahalad and Hamel’s warning - roots take years to grow but can be killed in a quarter.

I see the above scenarios as follows:

| | AGI arrives (2-3 years) | AGI doesn’t arrive (5-10+ years) |
| --- | --- | --- |
| Replaced humans | Positioned for the new world | Permanent damage - institutional knowledge destroyed, can’t rebuild |
| Augmented humans | Slightly slower to adapt | Deep human capital intact, AI amplifies existing understanding |

The augmentation strategy has a better worst case. The replacement strategy is a bet-the-company gamble on a timeline that nobody agrees on.

On second thought, if the techno-optimists are right and AGI arrives on schedule, “slightly slower to adapt” might be an understatement. A company that spent years nurturing human understanding could find itself structurally uncompetitive against AI-native startups that never hired the teams in the first place - companies with radically lower cost structures and no legacy processes to unlearn. The honest framing is that augmentation is the safer bet, not the obviously correct one.

And there’s a second-order effect that some people are starting to notice: the talent pipeline dries up. The commodity work that AI handles best - boilerplate, simple features, test scaffolding - is exactly the work where junior engineers traditionally build their foundations. Automate the entry-level work and you stop producing the next generation of senior engineers. You don’t just hollow out the current team; you destroy the mechanism by which the team regenerates. Five years from now, those companies will be desperately bidding for a shrinking pool of pre-AI seniors who still carry deep understanding. As HBR reported in January 2026, companies are laying off workers based on AI’s potential, not its performance.

The Uncomfortable Question

There’s a question I can’t stop thinking about - what if the hidden product genuinely loses all economic value?

Not today. Not next year. But eventually. If AI reaches a level where it can own the full loop - understanding, building, debugging, adapting - then human comprehension of the system becomes unnecessary for business purposes. The IMF argues that as AI prediction advances, human judgment becomes the scarce resource. But what happens when judgment itself is automated?

In that world, Sanderson’s argument still holds - but for a different reason. The transformation that happens through creation would matter not because it has a price tag, but because it’s essential to what it means to live a fully human life. MacIntyre’s internal goods don’t derive their deepest value from economics. They never did. The fact that understanding-through-building currently has economic value is a happy coincidence of our technological moment, not an eternal truth.

This is the fork in the road:

If AI plateaus or advances gradually, the companies that invested in human capital - the hidden product - will outperform those that didn’t. The roots will matter. The understanding will be the competitive advantage. Everything I’ve argued above holds.

If AI achieves full autonomy, the hidden product still matters - but for human flourishing, not for quarterly earnings. We’d need to answer a much harder question - what is the purpose of human effort in a world where effort has no economic necessity?

I don’t know which scenario we’re heading towards. Nobody does. And anyone who tells you they do is selling something - probably an AI product.

Footnotes

  1. Context persistence and memory in agents are improving but remain shallow compared to human mental models.

  2. There’s disagreement about what AGI actually is, but for the sake of argument let’s assume AGI is an AI that can fully replace a human worker - doing any task that worker can do, from specialized work to answering client emails to collaborating with the team on projects over Slack.