If you walked into the home office of a cutting-edge software engineer today, you might see something that looks… distinctly unproductive.
On one screen, an AI is writing code, running tests, fixing its own errors, and deploying applications. It is doing the work that, just two years ago, would have required a team of three people.
And on the other screen? The human engineer is waiting. Watching Instagram Reels. Occasionally intervening, but mostly just… monitoring.
To an outsider, this looks like slacking off. But in reality, this is what the collapse of the cognitive labor market looks like. We are witnessing a phenomenon called “Silent Automation.” It isn’t happening with giant robotic arms in a factory. It is happening quietly, behind the screen, where the execution of work is being decoupled from the human worker entirely.
And while software engineers are the first to experience this shift, they are essentially the beta test for the entire economy. The logic that is currently rewriting Silicon Valley is coming for every white-collar job that relies on what we used to call “intelligence.”
For the last decade, the economic narrative was simple. We were told that the “Physical World” was dangerous—prone to automation by robots—and the “Digital World” was safe. The advice given to an entire generation was clear: escape the factory, get a degree, and land a white-collar job. Learn to code. Learn to analyze data. Work with your mind.
We assumed that automation would climb the ladder from the bottom up. First, it would take the muscle power. Then, the repetitive, routine tasks. And only in the distant future would it touch creativity or logic.
The narrative was wrong. Automation isn’t hitting the bottom; it’s now hollowing out the middle. And it’s happening because of a fundamental misunderstanding of what artificial intelligence actually is.
We thought AI was just a tool to help us work faster. We didn’t realize it was becoming a replacement for the work itself.
And this brings us to the core thesis of this post: We are witnessing the “Death of Medium Intelligence.”
I don’t mean that humans are getting stupider. I mean that the economic value of “average” cognitive labor is collapsing. Writing a standard brief, organizing a spreadsheet, coding an app—these tasks are no longer skills. They are now features of the software we use.
“Medium Intelligence” was the engine of the middle class for decades. But in 2026, being “competent” at processing information is no longer a competitive advantage. It is a commodity. And the first industry to face this reality is the one that built the tools in the first place: Software Engineering.
In 2025, 84% of developers reported using AI tools. Autonomous agents are now estimated to handle 90% of execution tasks in modern codebases. We are seeing major open-source companies lay off 75% of their engineering staff because the “implementation layer” has collapsed, and solo founders are replacing 50-person departments with single-person, agent-driven workflows. This isn’t a “future prediction.” It is the new reality.
Why Software Engineering? Why is this specific field being automated faster than accounting, or law, or medicine?
For years, we were told this was the “untouchable” profession—that writing code was a domain of pure logic and complexity that was impossible to automate.
But the irony is that this “pure logic” is exactly what makes it vulnerable. The field isn’t being automated because the job is easy. It is being automated because the domain is “Context Ready.”
To understand what “Context Ready” means, you have to understand the one major limitation of Large Language Models: The Context Window.
Think of an AI like a brilliant intern with severe amnesia. It has a “Working Memory”—called a Context Window. It can only “see” what you fit into that window. If a problem relies on information outside that window—like a conversation you had by the water cooler three weeks ago—the AI is useless. It hallucinates. It fails.
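To make that limitation concrete, here is a minimal Python sketch. It is illustrative only: the word-count token estimate is a deliberately crude stand-in for a real tokenizer, and the window size is an arbitrary round number.

```python
def build_context(documents: list[str], window_tokens: int = 200_000) -> list[str]:
    """Pack documents into a fixed-size context window, in order.

    Anything that doesn't fit is simply invisible to the model, no matter
    how relevant it is. (Word count is a rough proxy for token count.)
    """
    context: list[str] = []
    used = 0
    for doc in documents:
        cost = len(doc.split())
        if used + cost > window_tokens:
            break  # the model will never "see" past this point
        context.append(doc)
        used += cost
    return context
```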
And this is where Software Engineering is unique: Code is explicit. It is text files. It is logic. The entirety of a program—the variables, the functions, the history—sits in a single folder. Hand that folder to the AI, and it can see all of it at once.
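That property is easy to demonstrate. Here is a sketch of packing a whole repository into the window, reusing the same rough word-count estimate as above. Real agent harnesses do something more sophisticated, but the principle is the same:

```python
from pathlib import Path

def pack_codebase(repo_root: str, window_tokens: int = 200_000) -> str:
    """Concatenate a repo's source files into one prompt-sized string.

    This is the "Context Ready" property in miniature: the entire program
    is explicit text on disk, so it can be handed to the model wholesale.
    """
    chunks: list[str] = []
    used = 0
    for path in sorted(Path(repo_root).rglob("*.py")):  # or *.ts, *.go, ...
        text = path.read_text(encoding="utf-8", errors="ignore")
        cost = len(text.split())
        if used + cost > window_tokens:
            break
        chunks.append(f"# --- {path} ---\n{text}")
        used += cost
    return "\n\n".join(chunks)
```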
Compare that to a lawyer. A lawyer’s work relies on implicit context: a judge’s mood, a client’s unstated fears, the history of a handshake deal. That data isn’t digital. It’s in the air. It doesn’t fit in the window.
And that is why software fell first. It wasn’t about intelligence. It was about how easily the work fits into the machine’s memory.
It is this structural advantage that allowed tools like Anthropic’s Claude Code to evolve beyond simple autocomplete. Instead of just suggesting text, they became agents—capable of “seeing” the entire codebase and interacting with it directly.
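The core loop behind such agents is surprisingly small. Here is a skeleton, reusing `pack_codebase` from the sketch above. `propose_edit` and `apply_edit` are hypothetical stand-ins for the model call and the file writer, and `pytest` stands in for whatever test runner the project uses:

```python
import subprocess
from typing import Callable

def agent_loop(
    task: str,
    propose_edit: Callable[[str, str], str],  # (task, codebase) -> patch; the LLM call
    apply_edit: Callable[[str], None],        # writes the patch to disk
    max_iterations: int = 10,
) -> bool:
    """See the code, edit it, run the tests, feed failures back. Repeat."""
    for _ in range(max_iterations):
        codebase = pack_codebase(".")             # the agent "sees" the whole repo
        apply_edit(propose_edit(task, codebase))  # model proposes, harness applies
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True                           # tests pass: done
        task += f"\n\nTests failed:\n{result.stdout[-2000:]}"  # feed errors back in
    return False
```

Notice that the human appears nowhere inside the loop. That is the "monitoring" engineer from the opening scene.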
Former Tesla AI Director Andrej Karpathy calls this the move to “Software 3.0”. We are no longer writing the code; we are prompting the system in English, and the system manages the logic.
Karpathy calls this “Vibe Coding”. You don’t need to know the syntax. You only need to know the “vibe” of what you want—“make it pop,” “fix the bug,” “wire up Stripe payments.” The barrier to execution has dropped to near zero.
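The shift is easiest to see side by side. In Karpathy’s framing, Software 1.0 is logic a human writes; Software 3.0 is logic a model supplies from an English description. (`llm` below is a hypothetical text-completion client passed in by the caller, standing in for any chat-completion API.)

```python
from typing import Callable

# Software 1.0: the human encodes the logic explicitly.
def is_positive_v1(review: str) -> bool:
    return any(word in review.lower() for word in ("great", "love", "excellent"))

# Software 3.0: the human writes English; the model supplies the logic.
# `llm` is a hypothetical client: prompt in, completion out.
def is_positive_v3(review: str, llm: Callable[[str], str]) -> bool:
    prompt = f"Is the sentiment of this review positive? Answer yes or no.\n\n{review}"
    return llm(prompt).strip().lower().startswith("yes")
```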
So, if an AI agent can act as a “mid-level” engineer—if it can write unit tests, fix bugs, and refactor code faster than a human—what happens to the actual junior engineers?
They simply… aren’t hired.
We are seeing a phenomenon known as the “Hollowing Out.” Companies are freezing entry-level hiring because the return on investment of a junior employee is now negative. Why pay a graduate to learn on the job when an AI costs $20 a month and knows every library ever written?
But here is the paradox. Senior engineers—the ones currently orchestrating these AI agents—don’t just magically appear with twenty years of experience. They are formed. Formed by doing the “grunt work.” By struggling with syntax, by debugging broken code.
If we automate the “middle”—the struggle, the learning, the execution—we destroy the pipeline. We are burning the ladder while we are standing on it.
This is the “Junior Void.” And it creates a terrifying question for the future of the workforce: Who will audit the AI when the last generation of humans who actually know how the machine works retires?
Now, if you work in Law, or Finance, or Strategy, you might be feeling safe right now. You might be thinking: “My job is safe. The law isn’t written in a GitHub repository. My client meetings aren’t recorded. The nuance of a negotiation happens in the room, not on a server.”
This is a dangerous illusion. The reason these fields haven’t been automated yet isn’t because they can’t be. It’s simply because they aren’t digital enough yet.
Software engineering fell first because code is text. It is structured. It is clean. A lawsuit, a medical diagnosis, a strategy meeting—that is messy. It involves unrecorded conversations and intuition.
But that “messiness” is being solved.
Right now, billions of dollars are being poured into digitizing that intuition. Companies are recording thousands of hours of senior partners and doctors—not just to transcribe what they say, but to capture how they think. They are turning the “implicit” world of human interactions into the “explicit” world of training data.
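One plausible shape for that pipeline, sketched in Python: pair each client statement in a recorded session with the expert’s response, turning an implicit conversation into explicit (prompt, completion) training records. Everything here, from the field names to the file format, is an assumption for illustration, not a description of any specific company’s system.

```python
import json

# Toy transcript; real pipelines would ingest thousands of hours of audio.
session = [
    {"speaker": "client", "text": "I'm worried the other side will walk away."},
    {"speaker": "expert", "text": "Then we anchor low, but leave them a face-saving exit."},
]

def transcript_to_examples(transcript: list[dict]) -> list[dict]:
    """Pair each client statement with the expert reply that followed it."""
    return [
        {"prompt": prev["text"], "completion": curr["text"]}
        for prev, curr in zip(transcript, transcript[1:])
        if prev["speaker"] == "client" and curr["speaker"] == "expert"
    ]

# Write supervised training records: implicit judgment made explicit.
with open("sessions.jsonl", "w") as f:
    for example in transcript_to_examples(session):
        f.write(json.dumps(example) + "\n")
```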
Once that data is captured, the moat dries up. And the same “Death of Medium Intelligence” that hit coders will hit everyone else.
So, if “execution”—the actual doing of the thing—is becoming free, what is left to sell?
The first scarce resource is Agency.
If you can build a clone of Uber or a Flappy Bird game in 30 seconds with a single prompt, the app itself has zero value. The bottleneck isn’t building the app; the bottleneck is knowing why you are building it, who it is for, and having the drive to push it through the friction of the real world.
In the old world, you were paid to execute. In the new world, you are paid to decide what is worth executing.
The second scarce resource is Taste.
When AI can generate 1,000 marketing slogans, 1,000 logos, or 1,000 lines of code in a second, the skill isn’t creating them. The skill is curation. It is the ability to look at the “slop” produced by the machine and say, “That one. That is the one that connects with other humans.”
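As a workflow, taste looks like a funnel: the machine generates and pre-ranks, and the human makes the final call. A sketch, where `generate` and `score` are hypothetical stand-ins for a model call and a cheap automatic ranking heuristic:

```python
from typing import Callable

def curate(
    brief: str,
    generate: Callable[[str], str],  # model call: brief -> candidate
    score: Callable[[str], float],   # cheap automatic pre-filter
    n: int = 1000,
    shortlist_size: int = 10,
) -> list[str]:
    """Generate n candidates, machine-rank them, return a human-sized shortlist.

    The scarce step -- choosing "that one" -- happens after this function
    returns, and it happens in a human's head.
    """
    candidates = [generate(brief) for _ in range(n)]
    return sorted(candidates, key=score, reverse=True)[:shortlist_size]
```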
So where does this leave us? I think we are heading for a split, but not the one people usually talk about.
On one side, we have the “Techno-Optimist” vision, pitched by leaders like Sam Altman. It sounds compelling: For billions of people, life will become frictionless. You get world-class medical advice for free. You generate apps to solve your daily problems. You create entertainment just for yourself. It is a world of abundance.
But there is a flaw in this vision: You cannot make a living just by consuming. This is why figures like Elon Musk increasingly talk about Universal Basic Income. They know that if the “medium” labor that supports the middle class hits zero value, the only way to keep the engine running is to pay people to exist.
But we have to be honest: Is that actually going to work? Can we really patch a fundamental collapse in labor value with a monthly check? There is a real risk that our current economic operating system—capitalism, free markets—is fundamentally incompatible with a world where “average” work has zero value. We might be heading for a level of friction that even ideas like UBI cannot resolve.
On the other side of the split, we will see the rise of the “Human Premium.”
As synthetic intelligence becomes infinite and cheap, “Human Made” becomes a luxury good. Programming, writing, and designing might follow the path of baking sourdough bread or throwing pottery: once survival necessities, now crafts we practice for passion, for status, and for the sheer joy of mastery.
It is easy to look at the exponential charts and assume we are heading for a straight line to the Singularity. But the history of technology is rarely that clean. There is a very real debate happening right now about whether this pace can be sustained. We might hit a wall in data scaling. We might find that integrating these tools into the physical world takes decades, not months. The future is rarely as linear as the marketing suggests.
However, even if the timeline is debatable, I believe the incentives have permanently shifted.
Personally, I think this “Orchestrator” concept—the idea of the super-powered human directing an army of agents—is absolutely real. But I also think it’s a trap.
Don’t get me wrong, in the next few years, the people who can command these agents are going to be incredibly sought after. They will build unicorns alone. They will make a fortune. And if you can do this, you should do this.
But I suspect this is just a temporary station. Because why would the abstraction stop there?
The “Orchestrator” is really just the final layer of middle management. And like all middle management, it is destined to be automated away once the workers—in this case, the agents—are smart enough to self-organize.
So, if the “doing” is gone, and eventually the “managing” is gone… what is left?
This is where I think the “Human Element” starts to shine. And I don’t mean “Human Element” in the corporate sense of “soft skills.” I mean it in a much more fundamental way.
There is something profound about the fact that, as machines become perfect, we might start valuing imperfection. We might start valuing effort for effort’s sake.
Think about Chess. Computers beat humans at chess decades ago. The machine is “better” than us in every quantifiable way. And yet, almost nobody watches computers play chess against each other. We watch humans. We watch Magnus Carlsen because we want to see the struggle. We want to see the pressure. We want to see the psychology.
I think work is going to follow a similar path. We won’t write code or essays because we need the output. The machine can do the output. We will do it because the act of doing it—the struggle, the learning, the expression—is what makes us human.