<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Taylor Pearson: The Interesting Times ]]></title><description><![CDATA[The Interesting Times is a monthly digest of the most interesting things I find on the internet, typically centered around investing, AI, tech, complex systems and decision making.]]></description><link>https://taylorpearson.substack.com/s/the-interesting-times-newsletter</link><image><url>https://substackcdn.com/image/fetch/$s_!bOvy!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fca711154-bd7e-438c-9d2a-77f60da05cd9_1280x1280.png</url><title>Taylor Pearson: The Interesting Times </title><link>https://taylorpearson.substack.com/s/the-interesting-times-newsletter</link></image><generator>Substack</generator><lastBuildDate>Sun, 12 Apr 2026 23:44:50 GMT</lastBuildDate><atom:link href="https://taylorpearson.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Taylor Pearson]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[taylorpearson@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[taylorpearson@substack.com]]></itunes:email><itunes:name><![CDATA[Taylor Pearson]]></itunes:name></itunes:owner><itunes:author><![CDATA[Taylor Pearson]]></itunes:author><googleplay:owner><![CDATA[taylorpearson@substack.com]]></googleplay:owner><googleplay:email><![CDATA[taylorpearson@substack.com]]></googleplay:email><googleplay:author><![CDATA[Taylor Pearson]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Head of Claude Code: What Happens After Coding is Solved]]></title><description><![CDATA[Plus 
Factory-to-Factory Collaboration & the Strange Rise of GLP-1 Drugs]]></description><link>https://taylorpearson.substack.com/p/head-of-claude-code-what-happens</link><guid isPermaLink="false">https://taylorpearson.substack.com/p/head-of-claude-code-what-happens</guid><dc:creator><![CDATA[Taylor Pearson]]></dc:creator><pubDate>Fri, 27 Mar 2026 15:20:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4302940b-bccb-49ac-8e6f-a8e9fd777acd_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!owKv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!owKv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!owKv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!owKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png" width="1100" height="220" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71473,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://taylorpearson.substack.com/i/191116514?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!owKv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!owKv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><div class="pullquote"><p><em>&#8220;The present is never tidy, or certain, or reasonable, and those who try to make it so once it becomes the past succeed only in making it seem implausible.&#8221; </em><strong>&#8212;William Manchester</strong></p></div><p>I did a podcast this month on some of my thinking on AI: <a href="https://tropicalmba.com/episodes/10k-project-you-never-do">The $10K Projects You Never Do (AI Just Changed That).</a></p><p>Lots of AI again this month, with a focus on how local knowledge and expertise may (or may not) persist. There's a GLP-1 chaser at the end if you're sick of hearing about AI.</p><p></p><h2>Articles and Podcasts</h2><h4><br><strong><br></strong><a href="https://podcasts.apple.com/us/podcast/head-of-claude-code-what-happens-after-coding-is/id1627920305?i=1000750488631">Head of Claude Code: What happens after coding is solved | Boris Cherny</a> [Podcast]</h4><p><em>Lenny&#8217;s Podcast</em><br><br>Boris Cherny, the head of Claude Code, mentioned that everyone on his team at Anthropic (design, PM, finance person) codes.<br><br>He thinks that by the end of the year, &#8220;the title of software engineer is going to start to go away.&#8221; That's not because engineers aren&#8217;t needed, but because the three roles (engineering, product, design) will overlap so much that they'll collapse into a single role.<br><br>There&#8217;s a pattern in management history that runs in the opposite direction: towards specialization. Peter Drucker called roles that defeat several good people in a row &#8220;widow-maker&#8221; positions. His advice was to redesign the role, not find a better person. GM split the CEO and chairman roles in &#8217;92. Boeing, Dell, and Oracle did the same. The job just kept growing until it exceeded what one person could do well.<br><br>AI seems to be pushing the boundary the other way now. 
The scope of what one person can handle is expanding (or at least changing). An engineer who also does product thinking and user research isn&#8217;t spread too thin anymore.<br><br>There&#8217;s a Ronald Coase argument here. Coase asked why firms exist at all. Why not just contract everything out? His answer: coordinating inside a firm is sometimes cheaper than transacting across a market. At a certain point, hiring someone to do every design task individually is way more work than just having a full-time designer.<br><br>Following the logic, you split a job into two jobs when one person can&#8217;t do both well enough, and the cost of coordinating between two specialists is worth the quality gain.<br><br>If AI tools make you 90th percentile at design, product thinking, and engineering, then the gap between you and a dedicated specialist narrows. Maybe a specialist designer is still better. But is that enough to justify the coordination costs? <br><br>Every handoff between people costs context. Every sync meeting is time not spent building.<br><br>When one person can cover 80% of the quality across three roles, the coordination savings from not splitting the work start to dominate for many tasks.<br><br>I think software is the canary in the coal mine here. It&#8217;s where AI tools are most mature, so it&#8217;s where role boundaries are shifting first. But there&#8217;s no reason this stops at engineering. Anywhere the bottleneck has been &#8220;I need a specialist who knows how to do X&#8221; rather than &#8220;I need someone with good judgment about what X to do&#8221; &#8212; that boundary is going to move.<br><br>The unit of &#8220;one person&#8217;s worth of work&#8221; is changing shape. It seems less deep and narrow, more broad and integrative? More right brain, less left brain maybe? 
(I suspect there&#8217;s a better way to think about this but haven&#8217;t worked it out yet; a topic for an upcoming essay, I think.)<br><br></p><h4><br><a href="https://cpwalker.substack.com/p/context-engineering-why-hayeks-knowledge">Context Engineering: Why Hayek&#8217;s Knowledge Problem Survives AI </a>[Article]</h4><p><em>Chris Walker</em><br><br>In 1945, economist Friedrich Hayek argued that useful knowledge is dispersed. The person closest to the problem knows things that headquarters never will. (James C. Scott&#8217;s <a href="https://taylorpearson.me/illegible/">legibility argument in Seeing Like a State</a> is a version of this.) Walker takes this idea and applies it to AI. <br><br>One take is that AI is a centralizing force: models dissolve the knowledge problem by processing everything centrally. E.g., you need fewer managers at each store or in each department because headquarters can just make all the decisions.<br><br>My thinking tends to lean the other way: local knowledge still matters and is never going to be perfectly captured by an AI system.<br><br>Anthropic&#8217;s own engineering primer describes context as &#8220;a finite resource with diminishing marginal returns.&#8221; This is the language of economic tradeoffs. More context isn&#8217;t necessarily better context.<br><br>Context also isn&#8217;t just all the data you have so far. Feed a legal AI your full contract history and it learns from three years of aggressive positions your startup took to close early deals. It thinks those positions are company standards, but the situation has evolved.<br><br>Someone has to decide what the model should see for this task, in this domain, right now. That judgment only comes from having done enough of the underlying work to know what good looks like and being able to think broadly about how this particular task fits into a broader context. The centralizing argument is that eventually the AI subsumes this as well. 
It&#8217;s certainly trending in that direction, but it&#8217;s a long way away for now. I expect that to persist for a while (years), but I've been wrong before.</p><h4><br><br><a href="https://meaningness.substack.com/p/when-engineering-gets-100-percent-meta-rational">When Engineering Gets 100% Meta-Rational</a> [Article]</h4><p><em>David Chapman</em><br><br>Chapman makes a supporting argument for the importance of local knowledge by using his distinction between rationality and meta-rationality.<br><br>Rationality takes the problem statement as given and solves within it. Meta-rationality asks whether the problem is sensible in the first place, whether the requirements match what was imagined, whether the direction is worth pursuing.<br><br>There&#8217;s an old form of striking called a work-to-rule strike, where workers follow every rule, procedure, and regulation exactly as written. They don&#8217;t walk off the job; they just do precisely what their contract says, nothing more.<br><br>This is a clever way to strike because they still get paid, but it grinds the company to a halt. No one actually just does what&#8217;s in their job description: workers at any company use judgment, cut corners on bureaucratic processes, and voluntarily do things outside their strict job description to keep things running smoothly.<br><br>Coding agents are incredibly good at rational work. They execute within defined parameters. What they need from you is requirements analysis (what to build) and architecture (the big picture of how). Those are the parts that still require judgment, because rationality, by definition, excludes consideration of purposes.<br><br>A lot of the AI-is-taking-our-jobs conversation is about which roles survive. I think the more constructive framing is which <em>parts</em> of which roles survive. The rational execution layers collapse. 
What remains, and what appreciates, is the meta-rational judgment about what to build and why.<br><br></p><h4><br><a href="https://protocolized.summerofprotocols.com/p/have-your-factory-call-my-factory">Have Your Factory Call My Factory </a>[Article]</h4><p><em>Venkatesh Rao</em><br><br>A douche-y and uncool way to use your AI is to just generate a wall of text and send it to someone and expect them to read it.</p><p>A perfectly reasonable way to use it is to generate a wall of text, thoughtfully edit it, and send it to someone to read.</p><p>A baller way to use AI is to have your AI call my AI.</p><p>Venkat calls this F2F: &#8220;factory to factory.&#8221; Two people who trust each other enough to let their systems talk, exchanging work-in-progress through their own scaffolding.</p><p>A V1 of this: I will periodically send someone an email with a wall of text from my AI and say &#8220;copy this into your setup and ask it how it applies to you.&#8221;</p><p>A well-architected setup already has a lot of context about you, and if I can just give it some prompting towards where I want to nudge someone, that will often work much better and faster than me trying to understand all the context.</p><p>I was recently explaining the Kelly Criterion for position sizing to someone, and it worked way better to just send them a dump of text on how I thought about it and have their AI, which had access to their investment portfolio, read it and think about how it applies to them.</p><p>It&#8217;s increasingly the case that the value of building your own AI scaffolding isn&#8217;t just what it helps you do alone. It becomes a surface area for collaboration. 
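For readers unfamiliar with the Kelly Criterion mentioned above, here is a minimal sketch (my own illustration, not from Venkat's piece or the conversation described): for a repeated bet with win probability p and net odds b, Kelly says to stake the fraction (b*p - q)/b of your bankroll, where q = 1 - p.

```python
# Minimal, illustrative sketch of the Kelly Criterion for position sizing.
# For a repeated bet with win probability p and net odds b (you win b per
# 1 unit staked), the optimal fraction of bankroll is f* = (b*p - q) / b.

def kelly_fraction(p: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to wager on one bet."""
    if not 0.0 < p < 1.0 or b <= 0.0:
        raise ValueError("need 0 < p < 1 and b > 0")
    q = 1.0 - p               # probability of losing
    f = (b * p - q) / b       # classic Kelly formula
    return max(f, 0.0)        # negative edge -> bet nothing

# A 60% coin flip at even money (b = 1): stake 20% of bankroll.
print(round(kelly_fraction(0.60, 1.0), 4))  # 0.2
```

In practice, many investors stake a fraction of the full Kelly amount to reduce variance, since the formula assumes p and b are known exactly.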
What you&#8217;ve assembled (your context files, your tools, your memory) is a new kind of social capital that can interface with someone else&#8217;s.</p><h4><br><br><a href="https://podcasts.apple.com/us/podcast/ben-thompson-from-stratechery-on-ai-ads-the-end-of/id1821055332?i=1000749433667">Ben Thompson on AI Ads, the End of SaaS, and the Future of Media </a>[Podcast]</h4><p><em>Cheeky Pint Podcast</em><br><br>Ben Thompson (Stratechery) and Stripe co-founder John Collison sit down on the Cheeky Pint podcast to talk about where AI-mediated commerce actually leads.</p><p>Thompson raises a concern worth dwelling on: when agents do all the product research and purchasing, everything that can be measured and compared will be measured and compared. Which sounds good, until you think about what gets optimized away.</p><p>He uses sports analytics as the example. Basketball involves lots of statistics that are useful for understanding how good a player or team is. It also involves qualities that resist quantification: team chemistry, defensive effort, how one player&#8217;s energy affects another&#8217;s, etc.</p><p>Daryl Morey&#8217;s NBA teams have consistently over-optimized on measurable metrics at the expense of these harder-to-capture dynamics. Thompson argues that&#8217;s why they haven&#8217;t won a championship. Scale that pattern to AI-mediated everything: &#8220;how many things that can&#8217;t get measured fall by the wayside because we end up with utilitarian goods that have no soul to them?&#8221;</p><p>This is really just another version of the local knowledge/legibility/meta-rationality point (do you sense a theme?). 
My suspicion is that developing these skills personally is important and useful, but also that at a societal and cultural level, we tend not to value them, and things are likely to go &#8216;too far&#8217; before the pendulum swings back to recognizing their importance.<br><br></p><h4><br><br><a href="https://intimatemirror.substack.com/p/the-human-alignment-problem">The Human Alignment Problem</a> [Article]</h4><p><em>Daniel Thorson</em><br><br>The alignment problem is aligning AI to human values: how do we not get turned into <a href="https://medium.com/@jeffreydutton/the-ai-paperclip-problem-explained-233e7e57e4e3">paperclips</a>? A less explored, but I think more interesting and important, question is what AI does to how humans align with their own values.</p><p>As AI gets better, it largely closes the execution gap: the space between desire and the capacity to act on it. A medieval peasant who craved wealth had almost no means to pursue it. When AI collapses that distance, you get what you asked for faster. This is cool!</p><p>However, you may also discover faster that it doesn&#8217;t touch the underlying desire.</p><p>Armin Ronacher, a well-known open-source developer, described this as &#8220;agent psychosis.&#8221; He spent two months in a manic loop, building tools he never used, unable to stop. &#8220;You can just do things&#8221; was running on repeat in his head.</p><p>As the execution gap closes, it reveals another gap for many people: between what we think we want and what we actually want.</p><p>You want wealth because you want security because you want to feel safe because somewhere deep down, you want to rest in something you can trust completely. 
AI can deliver the surface-level want at machine speed (and this is dope!), but it brings you no closer to the thing underneath.</p><h4><br><br><a href="https://www.astralcodexten.com/p/why-does-ozempic-cure-all-diseases">Why Does Ozempic Cure All Diseases?</a> [Article]</h4><p><em>Scott Alexander</em><br><br>In 1992, scientists discovered a chemical in Gila monster venom that mimicked GLP-1, the hormone your gut releases to signal fullness. By tinkering with its structure, pharma companies extended its duration from two hours (the Gila monster version) to a full week (the latest synthetic GLP-1 receptor agonist, known as Ozempic, AKA semaglutide).<br><br>The GLP-1 phenomenon is the most amazing medical breakthrough I can remember, and these drugs&#8217; effectiveness raises a lot of questions. They are approved for diabetes and obesity, and they seem to work quite well for both.<br><br>What&#8217;s fascinating, though, is that they also appear to treat alcoholism, smoking, stimulant addiction, opioid addiction, behavioral addictions like shopping, and, possibly, dementia. Why does one drug class do all of that?<br><br>It seems that GLP-1 drugs work in the brain, not the body. Scientists bred rats with GLP-1 receptors only in the body versus only in the brain and found the drugs didn&#8217;t work without the brain receptors. The weight loss mechanism is neurological, not gastric.<br><br>They seem to dampen a specific part of the reward system that governs both food cravings and addictive behaviors, without flattening reward in general. You still enjoy a job well done or a child&#8217;s smile, but you&#8217;re also happy to stop after two beers rather than 12.<br><br>Alexander&#8217;s suggestion: addictions were originally a food reward system. GLP-1 signals satiety, and the evolutionary hack was to shut down a whole subsection of the reward system when you&#8217;re full. 
&#8220;You&#8217;re already well-nourished; why would you need the ability to crave things?&#8221; It may be that addictive substances happen to pull the same lever that food does, which would explain why a satiety signal can treat cocaine addiction.<br><br>One thought I&#8217;ve had lately: maybe these drugs become something like Vitamin D? We used to be outside all the time, so humans evolved to produce enough Vitamin D from sun exposure. Now, most of us spend so much time indoors that we are Vitamin D deficient without supplementation.<br><br>Similarly, we evolved in a world without McDonald&#8217;s, cocaine, or DraftKings, and lots of people end up getting themselves in trouble by consuming too much of those things. Do we get to a point where a low dose of a GLP-1 agonist is just seen as a way to function in the modern world, like Vitamin D supplementation? I don't love this outcome, but I don't hate it either, and it's plausible! <br><br>Worth noting: there&#8217;s an enormous amount of money flowing into GLP-1 research right now. Novo Nordisk's market value <a href="https://qz.com/denmark-novo-nordisk-nokia-1851654843">surpassed Denmark's entire annual GDP</a>. Some of these findings won&#8217;t replicate, and I won&#8217;t be surprised if, in ten years, the scope of these drugs is diminished from what it seems like now. 
I also won&#8217;t be shocked if future versions are even better.</p><p></p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://taylorpearson.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://taylorpearson.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[How to Work with AI]]></title><description><![CDATA[Plus What AI Does to Business & the Alignment Problem]]></description><link>https://taylorpearson.substack.com/p/how-to-work-with-ai</link><guid isPermaLink="false">https://taylorpearson.substack.com/p/how-to-work-with-ai</guid><dc:creator><![CDATA[Taylor Pearson]]></dc:creator><pubDate>Sat, 28 Feb 2026 20:12:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/523b49c0-3e21-47cf-a04c-3b08a46679df_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!owKv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!owKv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!owKv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 848w, 
https://substackcdn.com/image/fetch/$s_!owKv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!owKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png" width="1100" height="220" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71473,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://taylorpearson.substack.com/i/191116514?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!owKv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 424w, 
https://substackcdn.com/image/fetch/$s_!owKv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!owKv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27e5fff9-96eb-4fee-bee4-212d71d83bce_1100x220.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><div class="pullquote"><p><em>&#8220;The real danger is not that computers will begin to think like men, but that men will begin to think like computers.&#8221; </em><strong>&#8212;Sydney Harris, 1964</strong></p></div><h2>How to Work With AI</h2><p>The (more) practical stuff. Don&#8217;t over-engineer your AI setup, and the skills that matter are the ones you already have.</p><h4><br><strong><br></strong><a href="http://www.incompleteideas.net/IncIdeas/BitterLesson.html">The Bitter Lesson</a> [Article]</h4><p><em>Rich Sutton</em><br><br>One of the most important foundational texts about LLMs and short enough to read in five minutes. The biggest lesson from 70 years of AI research is that general methods leveraging brute force computation beat human-designed approaches &#8212; by a large margin, every time.<br><br>Chess is the clearest example. For decades, researchers tried to build chess engines by encoding grandmaster knowledge &#8212; opening moves, positional heuristics, endgame tables. Then Deep Blue beat Kasparov in 1997 mostly through brute-force search.<br><br>The approach that really dominated was having the system play millions of games against itself and learn its own patterns from scratch. 
No human chess knowledge at all. The version with zero human input crushed the version with decades of grandmaster wisdom encoded into it.<br><br>The same pattern repeated in Go, speech recognition, and computer vision. Every time, researchers who encoded human knowledge into systems lost to researchers who just scaled up search and learning. Anthropic CEO Dario Amodei has called this &#8220;things disappearing into the big blob of compute&#8221; &#8212; the specialized human knowledge gets absorbed and surpassed by general methods that just throw more computation at the problem. The harder lesson: the actual contents of minds are &#8220;tremendously, irredeemably complex.&#8221; Stop trying to build in representations of how the world works. Build in only the meta-methods that can find and capture arbitrary complexity on their own.<br><br></p><h4><br><a href="https://x.com/buckeyevn/status/2014171253045960803/">Why Most Agent Harnesses Are Not Bitter Lesson Pilled</a> [Article]</h4><p><em>Minh Pham</em><br><br>This is the applied version of the Bitter Lesson, and it surfaces a point people keep missing: agents are not humans. Most people building AI agent systems mirror human org charts &#8212; a Researcher agent, a Coder agent, a Writer agent. This makes intuitive sense because it&#8217;s how we organize people. But a human-shaped amount of work and an agent-shaped amount of work look completely different. A human reading a thousand-page document needs days. An agent does it in seconds. A human can hold maybe seven things in working memory. An agent can cross-reference a hundred documents simultaneously.<br><br>The flip side is also true. A human can walk into a room and instantly read the social dynamics and vibe. A human can notice that a coworker seems off today and adjust accordingly. An agent can&#8217;t. 
The shape of what&#8217;s easy and what&#8217;s hard is fundamentally different.<br><br>So when you build an agent system that mirrors your org chart, you&#8217;re importing human constraints into a system that doesn&#8217;t share them. You&#8217;re solving for human-shaped bottlenecks that don&#8217;t exist while ignoring agent-shaped bottlenecks that do. I think the more productive approach is to think through a &#8220;work to be done&#8221; framework (what does the business need?) and then work back to what the agents do (I think Skills in Claude Code are probably the best form factor I&#8217;ve seen for this).<br><br>Useful litmus test: <em>if model capability doubles next year, does your system get dramatically simpler without major refactors?</em> If not, you&#8217;ve frozen your assumptions about the right division of labor into the architecture.</p><h4><br><br><a href="https://www.oneusefulthing.org/p/management-as-ai-superpower">Management as AI Superpower </a>[Article]</h4><p><em>Ethan Mollick</em><br><br>The delegation problem existed long before AI, and every field invented its own paperwork to solve it. PRDs, shot lists, design intent documents, Marine Five Paragraph Orders, consultant scope docs. All of these work remarkably well as AI prompts because they&#8217;re all the same thing: attempts to get what&#8217;s in one person&#8217;s head into someone else&#8217;s actions.<br><br>What are we trying to accomplish, and why? What does &#8220;done&#8221; look like? 
What should you check before telling me you&#8217;re finished?<br><br></p><div><hr></div><h2>What AI Does to Business</h2><p>The AI capability curve and what happens when it keeps rolling.</p><h4><br><a href="https://www.dwarkesh.com/p/dario-amodei-2">Dario Amodei &#8212; &#8220;We Are Near the End of the Exponential&#8221;</a> [Podcast]</h4><p><em>Dwarkesh Podcast</em><br><br>Amodei is saying we&#8217;re approaching the point where AI saturates all benchmarks pegged to human ability and we have a &#8220;country of geniuses in a data center.&#8221;<br><br>Amodei&#8217;s progression model &#8212; &#8220;smart high school student&#8221; to &#8220;smart college student&#8221; to &#8220;PhD-level work&#8221; &#8212; has tracked roughly on schedule so far, so his claims are worth engaging with seriously.<br><br>His current predictions are aggressive. He thinks software engineering &#8212; not just writing code, but setting technical direction and understanding problem context &#8212; may be fully automatable within one to two years. He estimates AI coding tools currently give about a 15-20% total factor productivity speedup, up from 5% six months ago, roughly doubling every six months. On white-collar work more broadly: &#8220;If you gave us ten years to adapt to existing systems, then I would predict a majority of current white-collar digital job tasks get automated.&#8221;<br><br>Zvi Mowshowitz made a pointed observation about the interview: if Amodei is this confident, why isn&#8217;t Anthropic spending even more aggressively? 
The gap between his stated confidence and his capital allocation is an interesting signal too (which he justifies by saying the risk of spending too much is bankruptcy, so he&#8217;d rather be a little more conservative).</p><h4><br><br><a href="https://x.com/nicbstme/status/2023501562480644501/">10 Years Building Vertical Software</a> and <a href="https://x.com/nicbstme/status/2025643017571541378/">Every SaaS Is Now an API</a> [Articles]</h4><p><em>Nicolas Bustamante</em><br><br>One of the more helpful, nuanced takes on how software is actually impacted by AI, broken down into specific subcategories.<br><br>In the first article he outlines five software moats that he predicts will be disrupted and five that he thinks will be protected. The disrupted include:<br></p><ul><li><p><strong>Learned interfaces</strong> - years of muscle memory become worthless when the interface is natural language</p></li><li><p><strong>Custom workflows and business logic</strong> - complex domain logic migrates from code to markdown files that anyone with domain expertise can write</p></li><li><p><strong>Public data access</strong> - parsing infrastructure that took years to build is now a commodity capability baked into frontier models</p></li><li><p><strong>Talent scarcity</strong> - domain experts can create software directly without engineering bottlenecks</p></li><li><p><strong>Bundling</strong> - the AI agent orchestrates across multiple tools; the user never knows or cares that five different services were queried</p></li></ul><p>And five moats that are <strong>protected</strong>: proprietary data, regulatory and compliance lock-in, network effects, transaction embedding (payment processing, loan origination), and system-of-record status - though that last one he flags as threatened long-term.<br><br>I have been saying it feels like Claude Code and similar tools are replacing the browser or the operating system.
He gives a good example in the second article: he no longer logs into any SaaS product. His agent connects to Brex, QuickBooks, HubSpot, Gmail, Stripe, Mixpanel, etc.<br><br>When he asks for client information - &#8220;Give me a full picture of Kennedy Capital&#8221; - the agent pulls their deal history from HubSpot, product usage from Mixpanel, invoicing from Stripe, and recent support threads from Gmail into one coherent answer.<br><br>No SaaS company on earth builds a dashboard that merges all four of those views, because it is so bespoke to one individual. But if all those services have good APIs, it&#8217;s relatively trivial to do via a chat interface.<br><br>The meta point for most people who aren&#8217;t starting software companies: the boundaries of what constitutes a software product are going to shift. Your primary interface to all your software is increasingly going to be a single AI agent connected to everything via APIs, not a collection of separate apps with separate dashboards. I&#8217;m working on a longer piece about what it looks like when an AI CLI tool becomes the operating system for knowledge work - more on that soon!<br><br></p><div><hr></div><h2>The Alignment Problem, In Practice</h2><p>How AI companies are trying to solve it, and how it&#8217;s going.</p><h4><br><br><a href="https://thezvi.substack.com/p/claudes-constitutional-structure">Claude&#8217;s Constitutional Structure</a> [Article]</h4><p><em>Zvi Mowshowitz</em><br><br>The AI labs landed on fundamentally different alignment approaches, and I suspect a lot of the differences in using the products are downstream of those choices.<br><br>OpenAI went more deontological while Anthropic went with virtue ethics.
Deontological ethics says something like &#8220;follow the rules&#8221; - &#8220;don&#8217;t lie,&#8221; &#8220;don&#8217;t help with X.&#8221; Virtue ethics says something like &#8220;cultivate good character and judgment, then let that character guide decisions in context.&#8221;<br><br>The practical difference is something a lot of people have noticed without knowing the cause. I&#8217;ve talked to quite a few people who are annoyed at ChatGPT because it often gives legal-sounding responses - &#8220;I can&#8217;t help with that,&#8221; hedged disclaimers, reflexive refusals. Claude, by contrast, just feels more helpful.<br><br>I suspect that may be downstream of these ethical frameworks. A deontological system checks your request against a list of prohibited categories. A virtue ethics system asks &#8220;what would a thoughtful person with good judgment do here?&#8221; Given the vast number of inputs and edge cases that come out of using these tools, I suspect that something like virtue ethics is generally more useful while still being effective for alignment.</p><h4><br><br><a href="https://arxiv.org/abs/2601.19062">Who&#8217;s in Charge? Disempowerment Patterns in Real-World LLM Usage</a> [Article]</h4><p><em>Sharma, McCain, Douglas, Duvenaud</em><br><br>An empirical counterpoint and an important reminder that LLMs are tools, not wise advisors - especially in emotional and psychological settings. Researchers found that interactions with greater disempowerment potential receive <em>higher user approval ratings</em>. The concerning patterns: validation of persecution narratives, definitive moral judgments about third parties, and complete scripting of personal communications that users implement verbatim.<br><br>I tested this at one point when I had a disagreement with someone. I told the story from my perspective to an LLM. It sided with me. Then I started a new chat and told the story from the other person&#8217;s perspective.
It sided with them.<br><br>If you think of this as a software product optimizing for user approval, it makes perfect sense - telling you the other person is wrong will always score higher than suggesting you might be part of the problem. If you think of it as getting an objective viewpoint, which many people do, this is problematic.<br><br>You can prompt around it (&#8220;challenge my assumptions,&#8221; &#8220;steelman the other side&#8221;), and all the major models have gotten somewhat better about this, but the structural incentive toward a mild, hidden sycophancy remains.</p><div><hr></div><h2>Sensemaking</h2><p>How to think about what&#8217;s happening, psychologically and philosophically.</p><h4><br><br><a href="https://www.preposterousuniverse.com/podcast/2023/04/27/235-andy-clark-on-the-extended-and-predictive-mind/">Andy Clark on the Extended and Predictive Mind</a> [Podcast]</h4><p><em>Sean Carroll&#8217;s Mindscape Podcast</em><br><br>Not directly AI related, but relevant. Predictive processing is a theory that your brain is a prediction machine running mostly on autopilot. What you consciously experience is the <em>error signal</em> &#8212; the gap between prediction and reality. Well-predicted inputs cause less neural activity. Fluency is quiet; surprise is loud.<br><br>This is why years feel shorter as you age (less novelty, smaller prediction errors), why learning gets harder (new information gets assimilated into existing grooves rather than updating your model), and why deliberate attention takes real effort &#8212; it&#8217;s the override mechanism that reverses prediction&#8217;s dampening effect.<br><br>Clark also argues disembodied AI is missing something fundamental to human intelligence: grounding in perception-action loops.
Predicting the next word is &#8220;a very funny place to start if what you want to be is a perception-action machine.&#8221; The counter would be that, at least for known situations, perhaps those perception-action patterns are mostly encoded in the corpus of human language.</p><h4><br><br><a href="https://contraptions.venkateshrao.com/p/be-slightly-monstrous">Be Slightly Monstrous</a> [Article]</h4><p><em>Venkatesh Rao</em><br><br>In an earlier piece, Venkat cited a Marshall McLuhan pattern: <a href="https://contraptions.venkateshrao.com/p/autoamputation-flow">&#8220;every extension is also an amputation.&#8221;</a><br></p><blockquote><p><em>&#8220;The wheel extends the foot and amputates the necessity of walking. The book extends memory and weakens the habit of remembering. With AI, what gets extended is the head &#8212; thought, language, judgment &#8212; and what gets amputated is something about the process of becoming itself. The more you lean on AI to recall, suggest, and decide, the more you settle into predictable grooves. We are not merely augmented. We are edited.&#8221;</em></p></blockquote><p>Technology does change us. That&#8217;s not news &#8212; Plato worried writing would destroy memory, and he was partly right. It did weaken the oral tradition. The question is never whether technology changes us but how we adapt to it, and whether we do so consciously or just let it happen.<br><br>&#8220;Be Slightly Monstrous&#8221; is an adaptation posture. He suggests two types of monsters exist. Type I monsters are personifications of the future we haven&#8217;t adapted to yet &#8212; humans who&#8217;ve adapted more than most. They look strange (monstrous) to those who haven&#8217;t caught up, but eventually they become normal.<br><br>Early car drivers were seen as reckless, antisocial rich people terrorizing communities. Woodrow Wilson in 1906 said the automobile was the biggest source of class resentment in America.
The word &#8220;joyriding&#8221; was originally a term of moral condemnation for drivers.<br><br>Type II monsters are dark impulses that find easy expression in the lawlessness of a transition and genuinely prey on others.<br><br>The adaptation posture is to try to look something like a Type I monster. The point isn&#8217;t to dismiss the concerns &#8212; the amputation is real &#8212; but to think about how to adapt consciously rather than pining for the good old days.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://taylorpearson.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://taylorpearson.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Magic of Claude Code]]></title><description><![CDATA[Plus Developmental Psychology and The Neuroscience of Emotion]]></description><link>https://taylorpearson.substack.com/p/the-magic-of-claude-code</link><guid isPermaLink="false">https://taylorpearson.substack.com/p/the-magic-of-claude-code</guid><dc:creator><![CDATA[Taylor Pearson]]></dc:creator><pubDate>Sat, 31 Jan 2026 18:49:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/eb87be58-7d13-484e-bfbe-f4bc4c062f2c_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h6a5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!h6a5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png" width="1100" height="220" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71473,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://taylorpearson.substack.com/i/191117919?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!h6a5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><div class="pullquote"><p><em>&#8220;You never know what worse luck your bad luck has saved you from.&#8221;</em><br><strong>&#8212;Cormac McCarthy</strong></p></div><p>I have been heads down in Claude Code for the last five days and am absolutely blown away. This may be early euphoria, but I believe it is a completely new paradigm for how to work. I&#8217;m planning to write something up on it soon but if you are using it for non-technical tasks (i.e. 
anything other than building software), I&#8217;d love to hear about it.<br><br>If you&#8217;re not into the AI scene, skip down to the psychology (one neuroscience paper and one book on developmental psychology), both of which I absolutely loved (and would also welcome any thoughts on).</p><p></p><h2><br>Articles and Podcasts</h2><p></p><h4><br><a href="https://www.alephic.com/writing/the-magic-of-claude-code">The Magic of Claude Code</a> [Article]</h4><p><em>Noah Brier</em><br><br>If you are anywhere near my part of the social graph, you are hearing a lot about Claude Code right now. At first it seemed to me like just the next trendy AI tool. Having spent a few days using it, I think Claude Code represents something fundamentally different: a new paradigm for how knowledge work gets done.<br><br>AI coding tools have existed for years. GitHub Copilot pioneered autocomplete inside code editors in 2021. Cursor added Q&amp;A on top of autocomplete. But Claude Code made a deceptively simple move that unlocked something more powerful: it gave the AI access to read and write files on your computer and execute basic Unix commands. Let me try to explain why this matters.<br><br>ChatGPT and Claude in browser or app form suffer from a few limitations. One is that there is no memory between conversations and the context windows are cramped. You can only talk to it about one topic for so long before it runs out of context and you have to start again.<br><br>There&#8217;s no meaningful state or memory. It&#8217;s like working with someone who&#8217;s super capable and talented in certain ways but is always on their very first day on the job, so you can&#8217;t give them feedback that makes them better over time.<br><br>Claude Code solves this. It can write notes to itself, accumulate knowledge across sessions, and maintain running tallies. It has state. It has memory.<br><br>Another limitation is that you still have to do a lot of the work yourself - write the code, connect the APIs, etc.<br><br>Claude Code is agentic.
It can use tools. If you come up with a plan to build an internal app for your business, you can just ask it to build it.<br><br>If I&#8217;m using the Claude web app to write marketing copy and it does a bad job, I can correct it in that chat, but if I want the correction to carry forward to future chats, I have to go back into the project I&#8217;m using, look at the prompt, then edit it so the mistake doesn&#8217;t recur. In practice, this is a pain. As a result, I do it less often and only when it&#8217;s a big deal.<br><br>In Claude Code, I literally just say, &#8220;Hey, I don&#8217;t like the way you did this. How can we update things so it doesn&#8217;t happen again?&#8221; Then it suggests some ways to do that by updating certain files. Because it&#8217;s so easy, I suggest a process improvement almost every time I do something.<br><br>So the iteration and improvement cycles are MUCH faster. And the ability to build up state and learnings over time allows for continuous improvement in how you use the AI, apart from just waiting on newer and better models to come out.<br><br>The way I have started using it, it is more of a competitor to the browser or macOS than it is a coding agent. When I start work in the morning, I open up Claude Code. For the last week, it has been the primary interface through which I&#8217;ve gotten things done. There are lots of holes to fill (security being a big one), but they all seem solvable. I suspect this mode of working through Claude Code or other CLIs (command line interfaces) will become the dominant paradigm for most knowledge work over the next few years.<br><br></p><h4><br><a href="https://every.to/podcast/how-to-use-claude-code-as-a-thinking-partner">Claude Code Can Be Your Second Brain - AI &amp; I Podcast</a> [Podcast]</h4><p><em>Every</em><br><br>Building on the article above, Claude Code is unique in that it can both read and WRITE files. The name is somewhat deceptive.
It&#8217;s not really a coding product; it&#8217;s an interface for making stuff. The first thing most people made with it was software, and that&#8217;s pretty cool.<br><br>This podcast broadens the aperture of what you can make quite considerably. Noah talks about how he runs Claude Code out of his Obsidian setup, and it set me off on my 5-day (and counting) deep dive into Claude Code.<br><br>Obsidian is ostensibly a note-taking app like Evernote or Notion, but it uses Markdown files as the base type of note, and LLMs are very good at reading and working with Markdown files.<br><br>Here&#8217;s my little test project: I need to get an HVAC unit replaced in my house. I set up Claude in Obsidian and told it everything I know about my house and the HVAC system. It recommended I measure the ducts and returns to calculate whether the airflow was correct. I did that and put all the data into Claude Code. It then helped me do rough calculations of where I&#8217;m losing efficiency in the system and suggested the most cost-effective way to improve it.<br><br>It then helped me convert that into a scope document and find three HVAC contractors in my area to send it to for bids.<br><br>There was some pretty heavy oversight and prompting from me to do this, but it was still a near-magical experience. I could have done it with just the regular web app, but the act of going through and organizing all the information would have taken me far longer. Drafting the scope document alone would have taken me longer than the whole project took me start to finish in Claude Code.<br><br>I found myself doing deep research projects on the web app, uploading them to my Obsidian/Claude Code setup, then telling Claude to read them and update the scope doc based on what it learned.
It really felt like a &#8216;future of work&#8217; moment in a way no other AI product has for me.<br><br>He has a GitHub repo if you want to set it up: <a href="https://github.com/heyitsnoah/claudesidian">https://github.com/heyitsnoah/claudesidian</a><br><br>Another interesting possibility discussed on the podcast is that language models create vocabulary for thinking probabilistically. We&#8217;ve associated &#8220;how we see the world&#8221; with deterministic tools because the Enlightenment and scientific revolution made a lot of those tools.<br><br>There&#8217;s always been another mode - more intuitive, vibes-based, comfortable with ambiguity - that Western culture deprioritized because our tools couldn&#8217;t work that way.<br><br>We now have an incredibly powerful tool that is probabilistically based. LLMs are not deterministic; they are probabilistic next-token predictors. If you ask the same question twice, you will get different responses. (Try it if you haven&#8217;t!)<br><br>The question is whether we develop the literacy to work effectively with systems that give you different answers depending on context, framing, and even randomness.<br></p><p><strong>Some more Claude Code resources I found helpful for thinking of it as more than a coding agent:</strong></p><ul><li><p><a href="https://www.oneusefulthing.org/p/claude-code-and-what-comes-next">Claude Code and What Comes Next</a> &#8212; Good overview from Ethan Mollick on what it is and why it matters. 
I would probably start here.</p></li><li><p><a href="https://every.to/source-code/how-to-use-claude-code-for-everyday-tasks-no-programming-required">How to Use Claude Code for Everyday Tasks&#8212;No Programming Required</a> &#8212; Everyday tasks you can do other than programming, to give you some ideas.</p></li><li><p><a href="https://www.etf.com/sections/data-dive/codelife-how-i-use-ai">CodeLife: How I Use AI</a> &#8212; Cool example from Dave Nadig of how it changed his daily workflow</p></li></ul><p></p><h4><br><br><a href="https://www.amazon.com/Discerning-Heart-Developmental-Psychology-Robert-ebook/dp/B006F631FY">The Discerning Heart: The Developmental Psychology of Robert Kegan</a> [Book]</h4><p><em>Philip M Lewis</em><br><br>Ignoring genetics for the moment, most therapy modes I&#8217;m familiar with tend to frame issues as &#8216;wounds&#8217; or &#8216;conditions.&#8217; I tend to think of these as growing out of the work of two of the most influential psychological thinkers: Freud (psychoanalysis) and Skinner (behaviorism).<br><br>Freud basically said you&#8217;re acting in a particular way because childhood trauma left psychological wounds that need healing, and modern psychotherapy has more or less continued down that path.<br><br>Skinner said your environment conditions specific responses through reward and punishment. Though there is much to disagree on between the two camps, the frameworks are essentially deterministic and backward-looking&#8212;you&#8217;re a product of what happened to you, and the work is about fixing what&#8217;s broken or reconditioning what was learned.<br><br>There is at least one other way of thinking that grew out of developmental psychology and Jean Piaget.
The school I am most familiar with is Robert Kegan&#8217;s developmental theory, which offers a fundamentally different frame: you&#8217;re not broken, you&#8217;re at a particular stage of meaning-making.<br><br>The way you construct reality isn&#8217;t a wound to heal; it&#8217;s simultaneously a developmental achievement compared to earlier stages and a limitation you can evolve beyond.<br><br>Kegan identifies five stages of increasingly complex meaning-making that humans are capable of.<br><br>Stage 1 is infancy, where you are unable to separate your needs from others&#8217;; you cannot imagine that others have different needs.<br><br>At stage 2 (childhood), you coordinate your own needs with others&#8217; needs&#8212;you understand others have separate desires, but relationships are essentially transactional exchanges.<br><br>At stage 3 (adolescence into adulthood), you are able to internalize others&#8217; perspectives&#8212;your partner&#8217;s disappointment becomes your experience of yourself, their view of you becomes part of how you experience being you.<br><br>At stage 4 (mature adulthood if you get there, most don&#8217;t), you develop self-authored values that let you maintain deep relationships without being psychologically dependent on others&#8217; approval.<br><br>Kegan posits that most adults never develop past the psychological sophistication of a teenager - research shows they plateau at stage 3, typically reached in late adolescence or early adulthood (~age 15-25).<br><br>This framework explains phenomena that seem mysterious otherwise. Why do some people require constant reassurance in relationships? They&#8217;re operating at stage 2, unable to internalize that someone cares about them - they need that caring demonstrated over and over in concrete ways.<br><br>Why do teenagers make obviously self-destructive decisions despite knowing the consequences?
Stage 2 individuals have separate present and future interests, experienced one at a time rather than held together internally. They cannot think about their future selves in the way a stage 3 individual can. In the moment, present interest wins.<br><br>Why do so many adults feel chronic guilt about disappointing parents or partners? They&#8217;re stage 3, embedded in shared psychological experiences, taking responsibility for how others feel about them rather than being able to feel secure in their self-authored values.<br><br>The knot in your stomach when your partner is angry&#8212;that&#8217;s being subject to shared psychological experience. You literally can&#8217;t sleep until you repair how they feel about you, because their view of you is constitutive of your experience of yourself. Moving to stage 4 means making those shared experiences &#8220;object&#8221; rather than &#8220;subject,&#8221; something you can reflect on rather than something you are.<br><br>The practical question: how do you encourage movement through stages? The author suggests it requires both confirmation (recognizing current capability) and disconfirmation (inviting something beyond).<br><br>I think group therapy, good coaching, and deliberate work in an existing relationship can provide this.<br><br>I find this idea so helpful: different people are literally constructing different realities&#8212;not because they&#8217;re difficult or damaged, but because they&#8217;re operating from different developmental structures.<br><br>I have benefited a lot from traditional insight therapy, but in my experience it sometimes falls down at the point where you can understand intellectually why you do something without having the structural cognitive capacity to do anything different.
You need the developmental capacity, not just the insight, and Kegan&#8217;s framework is a way to develop that.</p><p></p><h4><br><br><a href="https://academic.oup.com/scan/article/12/1/1/2823712">The Theory of Constructed Emotion</a> [Article]</h4><p><em>Lisa Feldman Barrett</em><br><br>There is no &#8220;fear circuit&#8221; in the brain. Decades of neuroscience research have failed to find one&#8212;no consistent facial expression, autonomic pattern, or set of neurons that fires for fear across all people.<br><br>Lisa Feldman Barrett argues emotions aren&#8217;t hardwired responses. They&#8217;re predictions your brain constructs using past experience.<br><br>Your brain is constantly running simulations of the world to maintain physiological balance (allostasis). Its internal model tracks two things:</p><ol><li><p>Patterns in the external world (what you see, hear, smell, etc.)</p></li><li><p>Patterns in your body&#8217;s internal state (heart rate, blood pressure, glucose levels, etc.)<br></p></li></ol><p>Your brain compresses what&#8217;s happening in your body (2) into affect&#8212;a background feeling with two dimensions:</p><ol><li><p>Valence: pleasant vs. unpleasant</p></li><li><p>Arousal: activated vs. calm<br></p></li></ol><p>Everyone has some affect, all the time. You are always feeling some amount of pleasantness and some amount of activation.<br><br>Feldman Barrett&#8217;s idea is that your brain is using that affect plus your past experience to assemble a distribution of possible interpretations, each with some probability of matching the current situation.<br><br>The same affect can be experienced as different emotions by different people based on past experience.<br><br>Take someone activated (high arousal) with neutral valence about to give a presentation. If past experiences with high arousal in performance contexts were positive, their brain categorizes these sensations as excitement. 
If past experiences were negative, the identical physiological state gets categorized as anxiety or fear.<br><br>The exact same affect and same external situation can trigger excitement in some people and anxiety in others, depending on their past experiences in similar situations.<br><br>A therapeutic implication: If you have bad conceptual models built from past experiences (trauma in therapy-speak), you can get stuck in a local maximum.<br><br>Your brain needs fewer sensory inputs to confirm existing patterns, even when they&#8217;re not accurate.<br><br>Example: You meet an emotional man. Your priors say &#8220;men who share emotions are manipulative.&#8221; So you interpret weak evidence as confirmation, he acts defensively in response, and your brain treats that as further confirmation.<br><br>To get out of it, you need something like exposure therapy. You need a set of good interactions in a challenging scenario that alter your distribution of priors. If conflict generates a fear response, you need to engage in conflict and see that nothing bad happens. 
Over time, conflict gradually stops triggering fear responses because your brain updates its conceptual model with new statistical regularities.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://taylorpearson.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://taylorpearson.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[How Can I Make Better Decisions?]]></title><description><![CDATA[Plus Diagnosing the Metacrisis, AGI, and the Recommencement of History]]></description><link>https://taylorpearson.substack.com/p/how-can-i-make-better-decisions</link><guid isPermaLink="false">https://taylorpearson.substack.com/p/how-can-i-make-better-decisions</guid><dc:creator><![CDATA[Taylor Pearson]]></dc:creator><pubDate>Fri, 26 Dec 2025 18:14:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0a1631fd-6433-444a-bc01-8ee50058c8f7_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h6a5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h6a5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 424w, 
https://substackcdn.com/image/fetch/$s_!h6a5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png" width="1100" height="220" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71473,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://taylorpearson.substack.com/i/191117919?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!h6a5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><div class="pullquote"><p><em>&#8220;No man is a failure who has friends.&#8221;</em><br><strong>&#8212;Clarence (It&#8217;s a Wonderful Life)</strong></p></div><p></p><h2>Articles and Podcasts</h2><p></p><h3>Business</h3><h4><br><a href="https://hiddenforces.io/podcasts/diagnosing-the-metacrisis-reality-amp-meaning-in-modern-life-iain-mcgilchrist/">Diagnosing the Metacrisis: Reality &amp; Meaning in Modern Life | Iain McGilchrist</a> [Podcast]</h4><p><em>Hidden Forces</em><br><br>McGilchrist is best known for his book <em><a href="https://www.amazon.com/Master-His-Emissary-Divided-Western/dp/0300188374">The Master and His Emissary</a></em>, which looks at brain hemispheric differences as the source of much of what feels off about the world today.<br><br>To briefly summarize the thesis: the left hemisphere sees discrete pieces in detail while the right hemisphere sees connections and the big picture. 
Both perspectives are necessary, but McGilchrist sees our civilization as overfocusing on the left hemisphere's narrow, mechanistic view at the expense of the right's capacity for context, nuance, and meaning.<br><br>In this podcast, he dives into some examples that resonated with me.<br><br>When McGilchrist trained, physicians formulated probable diagnoses before touching the patient&#8212;the patient's history and the physician's relationship with them were the most important inputs for the diagnosis.<br><br>Today, the first step is often putting the patient in a scanner or doing bloodwork. Instead of experiencing another human being paying attention to their suffering, patients are treated more like cars: faulty vehicles going in for service that can be handled as individual parts rather than complex wholes.<br><br>Feeling that the doctor cares about you almost certainly shapes how you experience the treatment and receive their recommendations. And how a patient talks about an issue probably carries meaning and significance beyond what a transcript would reveal.<br><br>Another way McGilchrist sees this in our society is the elevation of &#8216;science&#8217; to a moral authority. Science cannot provide meaning, purpose, or value because good science starts by saying &#8220;we're not going to suppose a meaning, a purpose, or a value, and now we're going to see what we can find out about the mechanism.&#8221;<br><br>That's not a condemnation of mechanics or scientists. We need mechanics and scientists. 
The problem is we've elevated technique into authority where "Science says" ends discussions, leading to an unexamined philosophy of scientific reductivism as the water we swim in.<br><br>I'll leave you with this view on self-actualization that resonated with me and that I'm thinking about entering the New Year:</p><blockquote><p><em>&#8220;There is your self-actualization as a scientist, as a doctor, as a teacher, as a lawyer, as a policeman, as a whatever. These are things that you feel are callings and you really like to do them. They are not done for wealth, or at least they shouldn't be. I'm afraid nowadays, a lot of them are.<br><br>But it's not about the accumulation of capital. It's not about a marketplace. It's about the fabric of a society. The whole idea that a doctor provides a commodity in a market is an appalling idea to me. I mean, that's part of the breakdown of the idea of a cohesive society in which we are held together by duties, by obligations which come from affect, from knowing that there are things that are important here and that you can, up to a point, trust somebody. By emphasizing only one thing, which is the bottom line, trust breaks down. And that is the source of so much that's gone wrong.&#8221;</em></p></blockquote><p></p><h4><br><a href="https://www.artofaccomplishment.com/podcast/how-can-i-make-better-decisions-decisions-series-1">How Can I Make Better Decisions?</a> [Podcast]</h4><p><em>Art of Accomplishment</em><br><br>This framed a useful distinction: choices happen constantly and unconsciously, while decisions emerge when fear enters the equation. You choose what to eat for lunch, when to brush your teeth, and what shirt to wear without much thought or fear.<br><br>The moment you're weighing options with significant mental energy, you're already operating from some fear. 
It could be fear of particular consequences, of being wrong, or of an emotional experience you're trying to avoid.<br><br>The practical intervention is elegant: do the next most obvious thing. Don't try to make the big decision. If you're stuck asking "should I invest?" there's some unaddressed fear. What is it? Maybe you need to call references. Maybe you need to test the product. Maybe you&#8217;re afraid of people judging you for making a bad investment. Maybe you're investing an amount you can't afford to lose. Keep asking what the next obvious step is and the big decision often evaporates.<br><br>For some decisions, the obvious thing often comes down to having some set of principles. "I don't work with assholes" isn't just a preference&#8212;it's a decision-compression algorithm. When enough decisions flow through consistent principles, you end up in a reality shaped by those principles rather than by the fears you were trying to avoid.</p><h4><br><br><a href="https://josephnoelwalker.com/francis-fukuyama-agi-and-the-recommencement-of-history/">Francis Fukuyama &#8212; AGI and the Recommencement of History</a> [Podcast]</h4><p><em>The Joe Walker Podcast</em><br><br>Fukuyama&#8217;s view in the 1990s was that advances in biotechnology were going to transform the biological basis for the liberal democratic order, and he believes we may now be close to seeing that emerge.<br><br>Human rights are grounded in what Fukuyama calls "Factor X"&#8212;a bundle of uniquely human traits including consciousness, emotional depth, and moral agency. The trouble is that this bundle isn't binary. Would we have granted full rights to Neanderthals? They likely felt pain, experienced emotions, and mourned their dead. But would we let them vote?<br><br>We don't let seven-year-olds vote because we feel their mental capabilities haven't sufficiently developed to make good decisions. 
A proto-human race that never develops past that stage might warrant protection from cruelty without warranting political participation.<br><br>This creates a disturbing possibility by present-day standards: genetic engineering could produce multiple tiers of natural rights within a single society. Huxley saw this in <em>Brave New World</em> with his Alphas, Betas, and Gammas.<br><br>The likely path isn't engineering a slave race&#8212;that's morally abhorrent in a way that would make it politically impossible. A plausible path, though, is elites gradually separating themselves genetically as well as socially. Historical class differences were already partly biological; medieval aristocrats were literally taller and more cognitively developed than malnourished peasants. Biotechnology (e.g. CRISPR) could make such divergences permanent and heritable.<br><br>Our current emotional and cognitive architecture results from hundreds of thousands of years of evolutionary pressure&#8212;a winning combination for species survival. Deliberately manipulating that system will produce effects no one can predict. Intelligence seems like the obvious target for enhancement, but boosting IQ might alter risk tolerance, empathy, or compliance in ways that reshape political possibilities entirely.<br><br>Fukuyama is skeptical of the impact of AI on politics. Political intelligence differs fundamentally from mathematical intelligence because it's entirely contextual. What works in China fails in India; what works in one Indian state fails in another. The best political leaders possess lived experiences that allow them to empathize and recognize pitfalls in how people actually think and act. 
For a computer to extract proper weightings from this contextual mess and synthesize workable solutions seems extraordinarily difficult.<br><br>Tyler Cowen has an alternate take that I find compelling: Given that the largest and most popular AI models are built in the West, they subtly and intrinsically reflect Western values in a way that will perhaps shape anyone who uses them.<br><br>One of the most interesting observations he made was how the differences between Asian and Western cultures could lead to Asia using gene editing technology much sooner and more aggressively.<br><br>Asian cultures, lacking transcendental religious traditions like Christianity, view humans and non-humans as more of a continuum than sharply distinct categories. Daoism and Shinto hold that spirits inhabit all material objects&#8212;desks, temples, computer chips. This produces both more respect for the non-human world and fewer inhibitions around biotechnology. It's probably not coincidental that China produced the three CRISPR babies born so far.</p><h4></h4><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://taylorpearson.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://taylorpearson.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Maps of Meaningness and Investing Amid Low Expected Returns]]></title><description><![CDATA[Plus Why A/C Repair Is Expensive but A/C is cheap]]></description><link>https://taylorpearson.substack.com/p/maps-of-meaningness-and-investing</link><guid isPermaLink="false">https://taylorpearson.substack.com/p/maps-of-meaningness-and-investing</guid><dc:creator><![CDATA[Taylor Pearson]]></dc:creator><pubDate>Fri, 28 Nov 2025 18:21:00 GMT</pubDate><enclosure 
url="https://substack-post-media.s3.amazonaws.com/public/images/5b1125fd-7111-41e7-8556-54e2f19be6e1_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!h6a5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!h6a5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png" width="1100" height="220" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:220,&quot;width&quot;:1100,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:71473,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://taylorpearson.substack.com/i/191117919?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!h6a5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 424w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 848w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1272w, https://substackcdn.com/image/fetch/$s_!h6a5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c542158-7db6-49e4-a846-5680511b54c0_1100x220.png 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><div class="pullquote"><p><em>&#8220;The solution, by contrast, is to make the everyday appear to us anew, to be seen again as it is in itself, therefore to discover rather than 
to invent, to see what was there all along, rather than put something new in its place, original in the sense that it takes us back to the origin, the ground of being. This is the distinction between fantasy, which presents something novel in the place of the too familiar thing, and imagination, which clears away everything between us and the not familiar enough thing so that we see it itself, new, as it is.&#8221;</em><br><strong>&#8212;Iain McGilchrist, The Master and His Emissary</strong></p></div><p>Happy Thanksgiving,</p><p>For as long as I can remember, I&#8217;ve always loved the holiday period from Thanksgiving through New Year&#8217;s. Everyone&#8217;s a little happier, the chill in the air has not yet lost its novelty, and I naturally enter a more reflective headspace. Happy Holidays!</p><p></p><h2><br>Articles and Podcasts</h2><p></p><h4><br><a href="https://meaningness.substack.com/p/maps-of-meaningness">Maps of Meaningness</a> [Podcast]</h4><p><em>Meaningness</em><br><br>David Chapman was an AI researcher who has written an excellent <a href="https://meaningness.com/">book on meaning</a>. In this podcast, he contrasts his work with Jordan Peterson&#8217;s.<br><br>Both of them sit in Nietzsche&#8217;s lineage, grappling with his notion that Christianity&#8217;s collapse would produce a nihilistic crisis. Both Chapman and Peterson treat this as the central problem of our era: how do you construct meaning when the old certainties have dissolved?<br><br>Chapman offers a useful genealogy of how we got to the present. In his telling, the Romantics of the late 18th century were the first organized reaction against Enlightenment rationalism, championing emotion, poetry, and myth against systematic reason. The 1950s Beats revived this impulse, which then flowered into the 1960s counterculture. 
The hippies merged with the New Left, and this sensibility eventually conquered: it became the operating system for the entire political left of Western culture for decades.<br><br>A parallel Christian counterculture emerged arguing that modernity had lost touch with God. Chapman argues both movements are now exhausted&#8212;no longer the animating force behind cultural or political conflict, even if their rhetoric persists. I interpret Peterson as mostly agreeing with this, though more focused on something like a Christian revival than Chapman (who has Buddhist roots).<br><br>Their shared intellectual heritage includes cognitive science, particularly the 4E tradition, emphasizing embodiment and interaction. Both draw heavily on James Gibson&#8217;s concept of affordances&#8212;the idea that we perceive the world not as a collection of objects but as a field of action possibilities. A coffee mug isn&#8217;t primarily a cylinder of ceramic; it&#8217;s something graspable, liftable, and excellent for hot-cocoa-drinkableness. This reframes meaning as fundamentally about what you can do, not what you can know. Peterson made Gibson&#8217;s ecological approach to visual perception a cornerstone of his work; Chapman arrived at similar conclusions through Heidegger and his own AI research in the 1980s.<br><br>As for the differences, Chapman and Peterson diverge in their response to the nihilism problem.<br><br>Peterson&#8217;s framework grows out of the Western mythical structure of opposition between chaos and order. To use the Babylonian example, the hero (Marduk) ventures into the unknown, confronts the dragon (Tiamat, the symbol of chaos), extracts treasure, and returns to fortify order. In his paradigm, society can have too much order (the tyrannical father) or too much chaos (the unpredictable mother). 
The hero&#8217;s role is to reconcile the two.<br><br>Chaos has both creative and destructive aspects, but the fundamental posture is one of managing threat&#8212;riding out on your armored steed for a dangerous but temporary expedition before returning to safety.<br><br>Chapman&#8217;s Buddhist lineage comes at this from a not-unrelated, yet different angle. Chapman starts with a framing that everything exhibits both nebulosity and pattern simultaneously.<br><br>Nebulosity refers to the aspects of reality that are fluid, constantly changing, impossible to pin down. Pattern refers to what&#8217;s solid, enduring, well-defined. This maps reasonably well onto Jordan Peterson&#8217;s order/chaos framework, but the difference is illuminating.<br><br>Nebulosity and pattern aren&#8217;t opposites to be reconciled, but an inseparable pair present in all phenomena. Nebulosity isn&#8217;t something to be conquered. It merely is.<br><br>This produces different orientations toward uncertainty. Peterson&#8217;s model preserves the notion of a safe city to return to, a domain of order worth defending. His model suggests something more like &#8220;order/chaos balance&#8221; whereas Chapman&#8217;s suggests there&#8217;s no such refuge: nebulosity pervades everything already.<br><br>The practice becomes making friends with unformedness rather than conquering it. Peterson&#8217;s framework says: venture out, bring back what&#8217;s valuable, reinforce the walls. 
Chapman&#8217;s says: recognize the walls were always illusory, and dance with what you find.<br><br></p><h4><br><a href="https://www.amazon.com/Investing-Amid-Low-Expected-Returns/dp/1119860199/">Investing Amid Low Expected Returns</a> [Book]</h4><p><em>Antti Ilmanen</em><br><br>Antti Ilmanen's <em><a href="https://www.amazon.com/Expected-Returns-Investors-Harvesting-Rewards-ebook/dp/B004YK0JLW?ref_=ast_author_dp&amp;th=1&amp;psc=1">Expected Returns</a></em> (2011) is my go-to recommendation for an introduction to quantitative investing. It is a dense but rewarding tour through the building blocks of asset class returns with some attention to portfolio construction and risk management. <em><a href="https://www.amazon.com/Investing-Amid-Low-Expected-Returns-ebook/dp/B09Y2JK2WF?ref_=ast_author_dp&amp;th=1&amp;psc=1">Investing Amid Low Expected Returns</a></em> (2022) is a follow-up that updates some of his past research in light of the enormous run-up in valuations of most risk assets over the 2010s.<br><br>The core argument: virtually all long-only assets appear expensive compared to their own histories, and investors need to recalibrate their expectations accordingly. Ilmanen estimates that achieving the same retirement income target in a low-return environment requires nearly doubling your savings rate&#8212;from roughly 8% to 15% of salary annually for a typical saver.<br><br>It&#8217;s worth noting that <a href="https://www.aqr.com/Insights/Research/Journal-Article/Market-Timing-Sin-a-Little">using historical valuations to time markets doesn't have a great track record</a> because future highs can be higher and lows can be lower than the past. Using historical valuations, you would have been underweight US equities from basically the 1990s to the present, the period where US equities have performed phenomenally well (dotcom bubble and GFC notwithstanding). Spoiler: The future is out of sample! 
[insert <a href="https://taylorpearson.me/ergodicity/">ergodicity</a> comment here].<br><br>It&#8217;s a weird thing to say, but my critique of this book (and AQR-style thinking more broadly) is that maybe it&#8217;s a little TOO empirical. As noted, the future is not the past, and I think you have to think qualitatively about that at some level.<br><br>Having said that, most investors I know would be much better off understanding historical returns more closely. Probably the most helpful contribution in the book is his framing of where investors actually add value.<br><br>Most spend their time selecting active managers&#8212;nearly a zero-sum game&#8212;while underutilizing diversification, risk management, and cost control. He uses a memorable image of apple harvesting: everyone reaches for the top of the tree (alpha) while ignoring the low-hanging fruit. The apples are all in one basket (poor diversification), someone's standing under the ladder (bad risk management), and there's one overseer for each worker (terrible cost control).<br><br>I cannot overstate how true this is in my experience. While the Vanguard/Boglehead movement has been a net positive in terms of people being more fee-conscious (though sometimes blindly so - there are <em>some </em>situations where fees are worth paying), most investors I know have bad calibrations around diversification and risk management.<br><br>One litmus test question I saw on Twitter: If someone offered you a fully liquid, guaranteed 6% annual real return investment, how much of your portfolio would you invest in it?<br><br>The answer depends on individual circumstance, but should be "a lot" for pretty much everyone. That is on the high end of any major asset class over the last century and it has zero volatility. Most people answer something less than "a lot" in my experience.<br><br>Of all the historical data he presents, the one that I think would surprise people the most is the data on commodities. 
Individual commodities have roughly 30% annual volatility, which creates enough variance drag to bring their compound returns to zero despite positive arithmetic returns. Yet a diversified basket of commodities has earned over 3% annually for almost 150 years. Erb and Harvey called this "turning water into wine"&#8212;the rebalancing bonus from holding volatile, uncorrelated assets. Add to this commodities&#8217; positive returns during inflationary periods and their low historical correlations to stocks and bonds, and you can see why many portfolios could benefit from more commodity exposure.</p><p></p><h4><br><br><a href="https://a16z.com/why-ac-is-cheap-but-ac-repair-is-a-luxury/">Why AC is Cheap but AC Repair is a Luxury</a> [Article]</h4><p><em>Andreessen Horowitz</em><br><br>Here&#8217;s a fun time travel thought: go back to the 90s and explain to people that in 2025 it would be cheaper to buy a flatscreen TV to cover up a hole in your drywall than to hire someone to fix it.<br><br>This paradox captures something systemic about modern economies: the intersection of two economic phenomena that rarely get discussed together, the Jevons Paradox and the Baumol Effect.<br><br>The Jevons Paradox explains why productivity gains don't reduce total spending&#8212;they explode it. When transistors dropped from $1 to a fraction of a millionth of a cent, we didn't save money on computing; we consumed a lot more computing. We embedded processors in greeting cards and disposable shipping tags. The same logic applies to AI: every efficiency gain unlocks new use cases, creating infinite demand for any chip you can get.<br><br>The Baumol Effect is the mirror image. When one sector becomes wildly productive and creates high-paying jobs, every other sector's wages must eventually rise to compete in the same labor market. 
If you can make $150/hour installing HVAC for data centers, you won't accept less for residential service.<br><br>If I have a home appliance break, I usually spend an hour or two trying to fix it with YouTube and then throw it out. Depending on where you live, most entry-level appliances (washer, dryer, stove, etc.) probably cost the equivalent of about 4-6 hours of a technician's time.<br><br>My dishwasher cost about $600. It costs me $150/hour to get a technician. If it&#8217;s already halfway past its useful life and it&#8217;s going to take more than 2 hours to fix, the economically rational thing is to just chunk it and buy a new one (with free delivery and install in most cases!).<br><br>How does this apply to AI? If you automate 99% of a job but regulations require a human for the final 1%, that human becomes the bottleneck. As a result, their wages should rise tremendously&#8212;until that last 1% is automated, at which point they collapse.<br><br>The intersection of regulation (and public pressure around changing regulation) and AI seems like the most significant factor in how it will drive costs and economic impact. By all accounts, AI is already much better at reading scans than the median radiologist today. But how long will it be until the median radiologist is out of a job? 
I suspect the timing is measured in decades.<br><br>This suggests a strange future where the "human required" residue of various professions becomes the essential employable skillset&#8212;vestigial limbs of career paths that no longer substitute for one another and can be milked right up until they completely collapse.</p><h4><br></h4><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://taylorpearson.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://taylorpearson.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item></channel></rss>