According to Business Insider, former Tesla AI director and OpenAI founding member Andrej Karpathy says AI coding agents produced a “phase shift in software engineering” around December 2025. In extensive notes posted on Monday, he described how his personal workflow flipped in about a month from 80% manual coding and 20% AI agents to the reverse: 80% agents and 20% manual editing. He credited improvements in Anthropic’s Claude Code and OpenAI’s Codex, noting that Claude Opus 4.5 launched in late November. Karpathy admitted he now programs “mostly in English,” sheepishly telling the LLM in words what code to write, and has noticed his ability to write code manually starting to atrophy. Engineers from xAI and Anthropic, including Claude Code’s creator Boris Cherny, quickly chimed in on the social media thread, with Cherny saying his team writes “pretty much 100%” of its code using Claude Code.
The new reality is English, not code
Karpathy’s post isn’t just another “AI is cool” tweet. It’s a first-person account from a top-tier architect watching his own fundamental skills change in real time. When the guy who helped build the AI models says he’s mostly “programming in English,” you have to listen. It’s a profound shift. The cognitive load moves from syntax and library memorization to high-level system design and, crucially, the ability to articulate intent with extreme precision. You’re not just a coder anymore; you’re a director, a spec writer, a reviewer. And honestly, that “hurts the ego,” as Karpathy put it. There’s a real pride in crafting elegant code manually that’s hard to let go of.
What happens to the 10x engineer?
The reactions from the xAI and Anthropic engineers are maybe even more telling than Karpathy’s original post. Ethan He from xAI said this turns a “10x engineer” into a “one-man army.” Charles Weill, also at xAI, compared a founder using agents to a VC spreading capital across a portfolio. The implication is staggering: leverage. The ceiling for individual output isn’t just raised; it’s shattered. But here’s the flip side Karpathy hints at: what’s the new floor? If manual skills atrophy, does a new kind of “code illiteracy” become a risk? Can you effectively direct and debug an AI if you’ve lost the muscle memory for the underlying logic?
The AI loop: AI writes, AI reviews
Perhaps the most meta insight comes from Boris Cherny at Anthropic. He openly acknowledges the quality problems: AI-written code can be overcomplicated and littered with dead code. His solution? Have AI review the AI-written code. We’re building a fully autonomous loop where the human is the initial prompt and the final sanity check. This is where the industry is sprinting. It’s not about replacing one human task with an AI; it’s about creating entirely new, AI-native workflows. The goalposts for “developer productivity” are being moved so fast we can barely see them.
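To make that loop concrete, here is a minimal sketch of what an AI-writes, AI-reviews pipeline could look like. This is not Cherny’s or Anthropic’s actual tooling: the ask_model function is a hypothetical stand-in for whatever LLM API you use, and the prompts and APPROVED convention are illustrative assumptions only.

```python
# Hypothetical sketch of an "AI writes, AI reviews" loop.
# ask_model() is a placeholder, not a real Anthropic or OpenAI API call;
# wire it to your provider's client of choice.

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a code-capable LLM."""
    raise NotImplementedError("connect this to a real LLM client")


def write_and_review(task: str, max_rounds: int = 3) -> str:
    """Generate code for `task`, then have a second model pass critique and revise it.

    The human supplies the task (the initial prompt) and inspects the final
    result (the final sanity check); everything in between is model output
    reviewing model output.
    """
    code = ask_model(f"Write a small, well-documented implementation of: {task}")

    for _ in range(max_rounds):
        review = ask_model(
            "Review the following code for overcomplication, dead code, and bugs. "
            "Reply APPROVED if it is fine, otherwise list concrete fixes.\n\n" + code
        )
        if review.strip().startswith("APPROVED"):
            break
        code = ask_model(
            "Revise the code to address this review. Return only the revised code.\n\n"
            f"Review:\n{review}\n\nCode:\n{code}"
        )

    return code  # a human still reads this before it ships
```

The interesting design choice in any such loop is where the human sits: in this sketch they only set the task and read the final output, which is exactly the division of labor the thread describes.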
A tectonic shift for software
So, is this good? It’s incredibly powerful, that’s for sure. The ability to prototype, iterate, and build complex systems is accelerating at a pace we haven’t seen since maybe the move to high-level languages. But Karpathy’s note about atrophy is a blinking red warning light. It feels like we’re outsourcing not just grunt work, but core understanding. The risk is creating a generation of architects who can design a skyscraper in English but can’t read the blueprints for the foundation. The “phase shift” is real and it’s here now, as Karpathy’s full notes argue. The question isn’t whether to use these agents—they’re too powerful to ignore. The question is how we preserve the deep, intuitive knowledge of the craft while riding this wave. That’s the next big challenge for software engineering.
