Head of Claude Code: What Happens After Coding is Solved
Plus Factory-to-Factory Collaboration & the Rise of GLP-1 Drugs
“The present is never tidy, or certain, or reasonable, and those who try to make it so once it becomes the past succeed only in making it seem implausible.” —William Manchester
I did a podcast this month on some of my thinking on AI: The $10K Projects You Never Do (AI Just Changed That).
Lots of AI again this month with a focus on how local knowledge and expertise may (or may not) persist. There's a GLP-1 chaser at the end if you're sick of hearing about AI.
Articles and Podcasts
Head of Claude Code: What happens after coding is solved | Boris Cherny [Podcast]
Lenny’s Podcast
Boris Cherny, the head of Claude Code, mentioned that everyone on his team at Anthropic (design, PM, finance person) codes.
He thinks that by the end of the year, “the title of software engineer is going to start to go away.” That's not because engineers aren’t needed, but because the three roles (engineering, product, design) will overlap so much that they'll collapse into a single role.
There’s a pattern in management history that runs in the opposite direction: towards specialization. Peter Drucker called roles that defeat several good people in a row “widow-maker” positions. His advice was to redesign the role, not find a better person. GM split the CEO and chairman roles in ’92. Boeing, Dell, and Oracle did the same. The job just kept growing until it exceeded what one person could do well.
AI seems to be pushing the boundary the other way now. The scope of what one person can handle is expanding (or at least changing). An engineer who also does product thinking and user research isn’t spread too thin anymore.
There’s a Ronald Coase argument here. Coase asked why firms exist at all. Why not just contract everything out? His answer: because coordinating inside a firm is sometimes cheaper than transacting across a market. At a certain point, contracting out every design task individually is far more work than just hiring a full-time designer.
Following that logic, you split a job into two jobs when one person can’t do both well enough, and the cost of coordinating between two specialists is worth the quality gain.
If AI tools make you 90th percentile at design, product thinking, and engineering, then the gap between you and a dedicated specialist narrows. Maybe a specialist designer is still better. But is that enough to justify the coordination costs?
Every handoff between people costs context. Every sync meeting is time not spent building.
When one person can cover 80% of the quality across three roles, the coordination savings from not splitting the work start to dominate for many tasks.
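To make that concrete, here’s a toy version of the Coase comparison. Every number below is invented for illustration; the point is the shape of the tradeoff, not the values.

```python
# Toy model of the specialize-vs-generalize decision. All numbers are
# made up; the point is the shape of the comparison, not the values.

generalist_quality = 0.80  # one person covering design, product, engineering
specialist_quality = 0.95  # a dedicated specialist in each role
coordination_tax = 0.20    # context lost to handoffs and sync meetings

quality_gain = specialist_quality - generalist_quality  # 0.15

if quality_gain > coordination_tax:
    print("Split the work across specialists.")
else:
    print("Keep it one generalist role.")  # wins here: 0.15 < 0.20
```

AI tools move one input in that comparison: they raise generalist_quality, shrinking the quality gain while leaving the coordination tax untouched.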
I think software is the canary in the coal mine here. It’s where AI tools are most mature so it’s where role boundaries are shifting first. But there’s no reason this stops at engineering. Anywhere the bottleneck has been “I need a specialist who knows how to do X” rather than “I need someone with good judgment about what X to do” — that boundary is going to move.
The unit of “one person’s worth of work” is changing shape. It seems less deep and narrow, more broad and integrative? More right brain, less left brain, maybe? (I suspect there’s a better way to think about this but haven’t worked it out yet; a topic for an upcoming essay.)
Context Engineering: Why Hayek’s Knowledge Problem Survives AI [Article]
Chris Walker
In 1945, economist Friedrich Hayek argued that useful knowledge is dispersed. The person closest to the problem knows things that headquarters never will. (James C. Scott’s legibility argument in Seeing Like a State is a version of this.) Walker takes this idea and applies it to AI.
One take is that AI is a centralizing force: models dissolve the knowledge problem by processing everything centrally. You need fewer managers at each store or in each department, for example, because headquarters can just make all the decisions.
My thinking tends to lean the other way: local knowledge still matters and is never going to be perfectly captured by an AI system.
Anthropic’s own engineering primer describes context as “a finite resource with diminishing marginal returns.” This is the language of economic tradeoffs. More context isn’t necessarily better context.
Context also can’t be reduced to “all the data you have so far.” Feed a legal AI your full contract history and it learns from three years of aggressive positions your startup took to close early deals. It treats those positions as company standards, but the situation has evolved.
Someone has to decide what the model should see for this task, in this domain, right now. That judgment only comes from having done enough of the underlying work to know what good looks like, and from being able to see how this particular task fits into a broader whole. The centralizing argument is that eventually the AI subsumes this judgment too. It’s certainly trending in that direction, but I expect the need for local judgment to persist for a while (years). Then again, I’ve been wrong before.
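A minimal sketch of that curation step, assuming a hypothetical build_context helper with invented snippets and relevance scores:

```python
# Sketch of context curation under a budget. The snippets, scores, and
# budget are invented; in practice, assigning the scores *is* the judgment.

def build_context(candidates: list[tuple[str, float]], budget: int) -> list[str]:
    """Rank candidate snippets by relevance to the task at hand and keep
    only the top few, rather than dumping the whole history into the window."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [snippet for snippet, _ in ranked[:budget]]

contract_history = [
    ("2022 deal: aggressive indemnity clause, signed to close fast", 0.2),
    ("current standard terms, post-funding", 0.9),
    ("counterparty's redlines from last week", 0.8),
]
print(build_context(contract_history, budget=2))
# ['current standard terms, post-funding', "counterparty's redlines from last week"]
```

The sorting is trivial; the relevance scores are the part only someone close to the work can supply, which is Hayek’s point restated in code.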
When Engineering Gets 100% Meta-Rational [Article]
David Chapman
Chapman makes a supporting argument for the importance of local knowledge by using his distinction between rationality and meta-rationality.
Rationality takes the problem statement as given and solves within it. Meta-rationality asks whether the problem is sensible in the first place, whether the requirements match what was imagined, whether the direction is worth pursuing.
There’s an old form of labor action called a work-to-rule strike, where workers follow every rule, procedure, and regulation exactly as written. They don’t walk off the job; they just do precisely what their contract says, nothing more.
This is a clever way to strike because they still get paid, but it grinds the company to a halt. No one actually just does what’s in their job description: workers at any company use judgment, cut corners on bureaucratic processes, and voluntarily do things outside their strict job description to keep things running smoothly.
Coding agents are incredibly good at rational work. They execute within defined parameters. What they need from you is requirements analysis (what to build) and architecture (the big picture of how). Those are the parts that still require judgment, because rationality, by definition, excludes consideration of purposes.
A lot of the AI-is-taking-our-jobs conversation is about which roles survive. I think the more constructive framing is which parts of each role survive. The rational execution layers collapse. What remains, and what appreciates, is the meta-rational judgment about what to build and why.
Have Your Factory Call My Factory [Article]
Venkatesh Rao
A douche-y and uncool way to use your AI is to just generate a wall of text and send it to someone and expect them to read it.
A perfectly reasonable way to use it is to generate a wall of text, thoughtfully edit it and send it to someone to read.
A baller way to use AI is to have your AI call my AI.
Venkat calls this F2F: “factory to factory.” Two people who trust each other enough to let their systems talk, exchanging work-in-progress through their own scaffolding.
A V1 of this: I will periodically send someone an email with a wall of text from my AI and say “copy this into your setup and ask it how it applies to you.”
A well-architected setup already has a lot of context about you, and if I can just give it some prompting toward where I want to nudge someone, that will often work much better and faster than me trying to understand all of their context myself.
I was recently explaining the Kelly criterion for position sizing to someone, and it worked far better to just send them a dump of text on how I thought about it and have their AI, which had access to their investment portfolio, read it and work out how it applied to them.
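For the unfamiliar, the Kelly criterion sizes a bet as a fraction of your bankroll based on your edge and the odds. Here’s a minimal sketch of the standard binary-bet formula; the actual discussion covered portfolio-level nuance this toy omits.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly stake for a simple binary bet.

    p: probability of winning
    b: net odds (you win b for every 1 wagered)
    Returns the fraction of bankroll to stake, floored at zero,
    since a negative Kelly fraction just means "don't bet."
    """
    q = 1.0 - p  # probability of losing
    return max(0.0, (b * p - q) / b)

# Example: a 60% chance of winning an even-money bet (b = 1)
# says to stake 20% of your bankroll.
print(kelly_fraction(p=0.6, b=1.0))  # 0.2
```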
It’s increasingly the case that the value of building your own AI scaffolding isn’t just what it helps you do alone. It becomes a surface area for collaboration. What you’ve assembled (your context files, your tools, your memory) is a new kind of social capital that can interface with someone else’s.
Ben Thompson on AI Ads, the End of SaaS, and the Future of Media [Podcast]
Cheeky Pint Podcast
Ben Thompson (Stratechery) and Stripe co-founder John Collison sit down on the Cheeky Pint podcast to talk about where AI-mediated commerce actually leads.
Thompson raises a concern worth dwelling on: when agents do all the product research and purchasing, everything that can be measured and compared will be measured and compared. Which sounds good, until you think about what gets optimized away.
He uses sports analytics as the example. Basketball involves lots of statistics that are useful for understanding how good a player or team is. It also involves qualities that resist quantification: team chemistry, defensive effort, how one player’s energy affects another’s, etc.
Daryl Morey’s NBA teams have consistently over-optimized on measurable metrics at the expense of these harder-to-capture dynamics. Thompson argues that’s why they haven’t won a championship. Scale that pattern to AI-mediated everything: “how many things that can’t get measured fall by the wayside because we end up with utilitarian goods that have no soul to them?”
This is really just another version of the local knowledge/legibility/meta-rationality point (do you sense a theme?). My suspicion is that developing these skills personally is important and useful, but also that at a societal and cultural level, we tend not to value them and things are likely to go ‘too far’ before the pendulum swings back to recognizing their importance.
The Human Alignment Problem [Article]
Daniel Thorson
The alignment problem is usually about aligning AI to human values: how do we not get turned into paperclips? A less explored, but I think more interesting and important, question is: what does AI do to how humans align with their own values?
As AI gets better, it largely closes the execution gap: the space between desire and the capacity to act on it. A medieval peasant who craved wealth had almost no means to pursue it. When AI collapses that distance, you get what you asked for faster. This is cool!
However, you may also discover faster that it doesn’t touch the underlying desire.
Armin Ronacher, a well-known open-source developer, described this as “agent psychosis.” He spent two months in a manic loop, building tools he never used, unable to stop. “You can just do things” was running on repeat in his head.
As the execution gap closes, it reveals another gap for many people: the one between what we think we want and what we actually want.
You want wealth because you want security because you want to feel safe because somewhere deep down, you want to rest in something you can trust completely. AI can deliver the surface-level want at machine speed (and this is dope!), but it brings you no closer to the thing underneath.
Why Does Ozempic Cure All Diseases? [Article]
Scott Alexander
In 1992, scientists discovered a chemical in Gila monster venom that mimicked GLP-1, the hormone your gut releases to signal fullness. By tinkering with its structure, pharma companies extended its duration of action from about two hours (the Gila monster version) to a full week (semaglutide, the synthetic GLP-1 receptor agonist marketed as Ozempic).
The GLP-1 drugs are the most amazing medical breakthrough I can remember, and their effectiveness raises a lot of questions. They're approved for diabetes and obesity, and they seem to work quite well at both.
What’s fascinating though is that they also appear to treat alcoholism, smoking, stimulant addiction, opioid addiction, behavioral addictions like shopping, and, possibly, dementia. Why does one drug class do all of that?
It seems that GLP-1 drugs work in the brain, not the body. Scientists bred rats with GLP-1 receptors only in the body versus only in the brain, and the drugs failed in the rats without the brain receptors. The weight-loss mechanism is neurological, not gastric.
They seem to dampen a specific part of the reward system that governs both food cravings and addictive behaviors, without flattening reward in general. You still enjoy a job well done or a child’s smile, but you’re also happy to stop after two beers rather than 12.
Alexander’s suggestion: addiction hijacks what was originally a food-reward system. GLP-1 signals satiety, and the evolutionary hack was to shut down a whole subsection of the reward system when you’re full. “You’re already well-nourished; why would you need the ability to crave things?” Addictive substances may happen to pull the same lever that food does, which would explain why a satiety signal can treat cocaine addiction.
One thought I’ve had lately: maybe these drugs become something like Vitamin D? We used to be outside all the time and so humans evolved to produce enough Vitamin D from sun exposure. Now, most of us spend so much time indoors that we are Vitamin D deficient without supplementation.
Similarly, we evolved in a world without McDonald’s, cocaine, or DraftKings, and lots of people get themselves in trouble by consuming too much of those things. Do we get to a point where a low dose of a GLP-1 agonist is just seen as a way to function in the modern world, like Vitamin D supplementation? I don't love this outcome, but I don't hate it either, and it's plausible!
Worth noting: there’s an enormous amount of money flowing into GLP-1 research right now. Novo Nordisk's market value surpassed Denmark's entire annual GDP. Some of these findings won’t replicate and I won’t be surprised in ten years if the scope of these drugs is diminished from what it seems like now. I also won’t be shocked if future versions are even better.