Signal Check: February 13, 2026
This week's signal in AI and engineering
We’re two years into mainstream AI adoption and still figuring out the basics. How does it change workloads? Who’s accountable for agent-written code? This week brought a rigorous eight-month study, a legal analysis, and a practitioner reckoning that all point to the same conclusion: the tooling is ahead of our understanding of how to use it.
AI Doesn’t Reduce Work, It Intensifies It
Aruna Ranganathan & Xingqi Maggie Ye, Harvard Business Review
Researchers at UC Berkeley tracked 200 employees for eight months. AI didn’t give time back. Workers expanded their scope, absorbed other people’s responsibilities, and stopped noticing where work ended. One engineer nailed it: “You had thought that maybe you could work less. But then, really, you don’t work less.” The Hacker News thread asked the obvious follow-up: if AI is so effective at reducing work, why are workloads growing at the companies that adopt it?
There Is No Skill in AI Coding
Mo Bitar
Bitar takes Karpathy’s honest assessment of agentic coding and turns it into a performance review. The comments are worth reading too. One reader compared AI to a leavening agent: too much and your bread implodes. Another pointed out that syntax memorization, the thing AI supposedly replaces, was already handled by LSPs and autocomplete years ago. The real work was never typing.
Built by Agents, Tested by Agents, Trusted by Whom?
Stanford Law School CodeX
Stanford’s legal team asks the question most engineering orgs are quietly avoiding. When agents optimize for “pass the tests” rather than “build good software,” who answers for it? The piece is blunt about the stakes. Reading and writing code has been the bedrock of this profession for seventy years. If that skill becomes optional, we need to decide what replaces it as the basis for accountability.
Karpathy: From “Vibe Coding” to “Agentic Engineering”
Andrej Karpathy, via X
Karpathy called the original “vibe coding” post “a shower of thoughts throwaway tweet” and seemed genuinely surprised that it defined a movement. The new term is more deliberate. The community split predictably: some see a new discipline forming, others a rebrand of common sense. What’s worth paying attention to: Karpathy still acknowledges that models make “subtle conceptual errors,” overcomplicate code, and won’t manage their own confusion. That’s honest, coming from the person who started all of this.