What jumps out is how quickly 'managing AIs' becomes another room where we're expected to be coherent and composed before we've built any shared norms for doing that. We're reshaping work structures faster than we're reshaping the stories people tell themselves about their own agency inside those structures. That gap, the curve outpacing the human narrative, is where a lot of the coming whiplash will live. Great insights!
"Uncertainty is not the same as helplessness." Amen. From our research: what paralyses people about AI isn't the tool(s) or hockey sticks, it's the unresolvable consequences. And the antidote isn't expertise or seriousness; it's curiosity. Play turns uncertainty from a threat into a threshold. There's a professional case for fun that most organisations haven't figured out yet, or are tripping over by trying to take all this too seriously!
Check out my version of similar idea if you’d like; good stuff.
I recognize it is early days, but customer service chatbots, presumably powered by AI, have proliferated to the point that everyone I deal with as a customer has one. And with a single exception, they all suck. You expect the DMV to suck, but Amazon? Why aren't they better?
To be fair, only 99.99% suck!
My positive experience was with SimpliSafe which is an alarm company. Their installation contractor sucks however.
The amazon bot I talked with was awesome. It gave me the refund I wanted almost immediately, with no hassle. But yeah, all the other bots are terrible. I can imagine this is because anybody can sell a customer service "AI solution" and the company buying that solution has lots of incentive to buy it because it promises less work for them. But there's less incentive to have it actually work well for actual customers, since the customers using the chatbot have almost no power in this situation.
If you understand context, a high school student can easily confuse Rufus (Amazon's chatbot). If a high school student can do that, a senior black hat hacker can do some serious damage.
My experience was different. Fortunately, it is possible to talk to a person at Amazon who solved my problem. Took some effort to get there.
Eli Lilly’s chatbot was beyond great. I wonder how much of this is driven by elevenlabs…
I recently called Strickland Bros (10 min oil changes) and was caught off guard by how good their system is.
Two thoughts come to mind:
On the topology of disruption. A fellow writer, CP, makes the point that AI bites hardest where economies are already legible — standardized workflows, clean data, SaaS everywhere. The Software Factory works because software is the most legible domain. In places where work still moves through relationships, paper, face-to-face judgment, disruption arrives first as assist, then maybe replacement, then maybe never. America's white-collar economy has been software-eaten for two decades. OTOH China's information economy still runs partly on offline meetings and walled gardens. This suggests the "rolling disruption" won't roll evenly. Same exponential, different surfaces, different outcomes.
On what new jobs appear in the cracks. While the Factory automates the middle, edge cases multiply. Waymo is paying DoorDash drivers to close robotaxi doors left open by passengers — autonomy creates edge cases, edge cases create occupations. RentAHuman.ai now lets agents hire humans for physical errands. Meanwhile, organizations with hybrid workforces (humans + agents + soon humanoids) will need people who debug norms, not just code. Amanda Askell at Anthropic is an example — shaping character and ethics. Arnold Kling suggests every sizable organization will need an AI "keeper-upper" who separates the useful from the mirage. These aren't roles that existed two years ago.
Your closing is spot on: the window to shape the Thing is here now. I'd add that part of shaping it is noticing the new work that appears, not just the old work that disappears.
We're good at writing eulogies. We're less good at spotting new beginnings.
"In places where work still moves through relationships, paper, face-to-face judgment, disruption arrives first as assist, then maybe replacement, then maybe never."
I totally agree. My first computer job in the '80s was a yearlong internship at Bank of America, where I had a front row seat to the spectacle of trying to reduce costs and increase productivity using tech, while at the same time taking advantage of the liberalization of banking regulations to offer new kinds of products. They were closing branches and installing ATMs, but they knew there were customers who would never use them. They also had the problem of how to design, roll out, offer, and support these more complex products as the mainframe and peripheral industry was starting its consolidation and decline. California was in a tug of war with the feds over bank regulation, so whatever picture we could put together on the upcoming regulatory environment left us feeling uneasy about building systems to comply with it, because we didn't know how long it would last, and the projected ROI on the projects could be way off. With paper, we could just modify the brochures and the forms, train up the branch and backroom staff on some new procedures, and roll out the change.
My bottom line: if it involves something super important--e.g., safety critical, financial, criminal law--tech is going to penetrate only at the speed the environment allows. Case in point: DOGE. Elon Musk and his impatient young tech heads were no match for the mind-boggling complexity of the federal machine.
Chris, this is exactly the texture that gets lost in abstract AI discourse. Your banking experience in the 80s is apt: the technology was ready, but the environment set the pace. Regulatory uncertainty, customer behavior, staff training, the sheer messiness of organizational change—these aren't bugs, they're the terrain.
Your framing: "tech penetrates only at the speed the environment allows" is cleaner than anything I wrote. That's the core of what CP calls topology-dependent disruption.
The DOGE point is the inverse proof. You can have all the technical capability and political will in the world, but if you're hitting a system that was never made legible—layers of paper, institutional memory, complexity that exists because it was never rationalized—you bounce off. The federal machine isn't slow because it's stupid. It's slow because it's thick.
I'm starting a monthly series starting next Monday—Future Tense—exploring exactly these questions. Where does AI land next, and what new work appears in the cracks? Would welcome your perspective if you read along.
Thank you for your kind reply. Nearly my entire software engineering career was spent in high-stakes domains: handling people's money safely, legally, and profitably, administering IV drugs, and running flight-critical software in UAVs.
The fundamental topology or texture in those environments is to not replace anything that works. Again at the bank, we had applications that had been written in assembler language in the 1950s. The code was 30 years old when I started there! The applications were hundreds of thousands of lines of assembler language for a mainframe whose maker no longer existed. I asked them what they would do if their mainframe burned up. They said they already had an emulation of it validated on another computer architecture, and that they would rather deploy that emulated implementation than rewrite it! I asked, "Don't you have specs for it? Flowcharts? Design docs?" The systems people looked at each other and then laughed.
Chris, thanks - the emulation story is priceless. "We'd rather run 30-year-old assembler on a ghost of a dead mainframe than rewrite it." That's institutional thickness beautifully described. The specs aren't in the documentation. They're in the running system, and no one wants to learn what undocumented assumptions break when you touch it.
I'm publishing something later today that circles a related question — what happens when we hand over negotiation, filtering, even judgment to personal AI agents? We've already delegated memory and navigation to our phones without quite noticing. The next handover goes further.
If you're curious: https://rajeshachanta.substack.com/
Would welcome your perspective. You've seen more of these transitions from the inside than most.
What strikes me is how this LLM revolution is highlighting the worst aspects of the human animal: the commodification of human beings, the acceptance of AI work product that looks right but isn’t (slop), the failure to make distinctions, the lure of “getting rich quick”, the uncritical acceptance of the capability hype (like doubling human life expectancy in the next five years), the appalling ignorance and gullibility of journalists, and many others.
LLMs have hit a wall on reliability and safety—it’s time for everyone to take a deep breath, and look critically at these things that we built but don’t actually understand.
We're building up to your 2027 climax post of "One Useful Thing" aren't we? 🤨
I come at this with my own presuppositions and biases, to be sure, but it feels much more important than you allow for here that all three of the destabilizing incidents that happened that week were wholly about human beliefs about the imminent power of AI, and not in any way about the capabilities of AI. In the Citrini case, humans believing other humans' speculations about the future impacted human behaviors in the markets; in the Block case, Dorsey's choice of explanation for the layoffs worked (to whatever extent it worked) because it seemed to confirm widespread belief in imminent huge impacts from AI on employment; and in the Anthropic case, the standard Trump playbook for government relations with businesses took over – given urgency, presumably, because the human actors in the situation feel so confident about the potential military power of AI systems. This is definitely "AI disrupting the world" in a sense, but only through the filter of the extraordinary ferment of belief in what it will surely soon – soon! – be capable of.
My request for a future post: Deep dive into the StrongDM "software factory" and look for implications for white collar non-coding knowledge work.
For example, can we create a digital twin of our target customers so we can test which website variants will work best?
It's possible to kinda do this today ... asking Claude to set up a group of AI subagents to act as a group of reviewers, each with its own lens ... but there's got to be a more advanced method, right?
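The subagent-reviewer setup described above can be sketched in plain code. This is a minimal, hypothetical illustration: the persona names and prompt wording are invented, and the function only builds the per-reviewer prompts — each resulting dict would then become one separate chat request (system prompt + user message) to whatever model you use, with the answers compared across website variants.

```python
# Hypothetical sketch of a "panel of AI reviewers": each persona is a
# system prompt with its own lens, and the same artifact (e.g., a
# landing-page draft) is paired with every persona. Persona names and
# wording below are invented for illustration.

PERSONAS = {
    "budget-conscious buyer": (
        "You are a price-sensitive shopper. Judge copy purely on whether "
        "it convinces you the product is worth the money."
    ),
    "skeptical engineer": (
        "You are a technical reader. Flag vague claims and anything that "
        "sounds like marketing hand-waving."
    ),
    "first-time visitor": (
        "You have never heard of this company. Note anything confusing "
        "or missing on a first read."
    ),
}


def build_review_prompts(artifact: str, personas: dict[str, str]) -> list[dict]:
    """Pair each persona's lens with the artifact: one prompt per reviewer."""
    return [
        {
            "persona": name,
            "system": lens,
            "user": (
                "Review this page draft. Would it persuade you? "
                f"Explain why or why not:\n\n{artifact}"
            ),
        }
        for name, lens in personas.items()
    ]


# Each dict would be sent as its own LLM call; run the same panel over
# variants A and B of the page and compare the verdicts.
prompts = build_review_prompts(
    "Variant A: 'Save 20% when you sign up today.'", PERSONAS
)
```

This isn't a true "digital twin" of a customer base — the personas reflect your guesses, not measured behavior — but it makes the lenses explicit and repeatable, which is most of what today's tooling can offer.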
I am not really sure what the wise-ass ByteDance Otter meant by that snarky "Back to the drawing board, Humans!"
Maybe he feels that we need a little Recursive Self Improvement ourselves? Looking around, it is hard to disagree.
@Dov, since the ByteDance AI wasn't prompted to judge, it went beyond (violated) the documentary form by providing that overtly intrusive closing comment. If I thought AI had already become sentient, I would think the tool decided for itself to judge the mere human silly enough to ask it to document human silliness. And perhaps it is a comment on our inability even now to create something that would accurately present otters on a plane using WiFi.
The tension between exponential benchmarks and the fact that "remarkably little has changed in most organizations" is the most interesting part of this piece. The StrongDM Software Factory is wild - but it's also three people at a company that lives and breathes infrastructure tooling. The gap between "AI can pass expert-level tests" and "my company still can't figure out how to use it" feels like the real story of 2026 so far.
Concentrated in the hands of a few unconstrained actors, artificial intelligence could be the neutron bomb of hyper‑capitalism — destroying lives and livelihoods while leaving buildings standing.
Great post, as always, Ethan. Thank you. I know this might not be your specialty, but one thing I couldn't help thinking of while reading this is the insane amount of power infrastructure all of this development requires (and is going to need in the future). Many people are going about this whole AI thing with an underlying assumption that electricity is unlimited and that we'll somehow just magically figure out a way to deal with that.
I strongly recommend (if you haven't) that everyone read this article: https://tscsw.substack.com/p/the-datacenter-bible-from-layman?r=18kxbm&utm_campaign=post&utm_medium=web
The article's title, "There is no cloud," effectively underscores the fact that all of this requires galactic amounts of concrete, copper, water, land, and countless other physical materials. The article discusses the actual wait times for many of these fundamental elements, which currently sit at YEARS (not days or months). And there is no sign of any of this moving faster, because PHYSICS and FINITE RESOURCES.
If there was any doubt that AI affects capital markets, well, there it is.
Debating whether this is real or perceived misses the larger frame: something has changed, and our job is to harness it instead of avoiding it.
It’s here. Time to stare into the abyss because it’s already staring back at us.
I guess there go Asimov's laws, without so much as a blink.
And what about data and surveillance?
And what about quality of human life when data centers' use of water takes priority over human needs?
Zero human input and zero human oversight.
The window to be a precedent-setter for AI is real. And it's starting to feel like it's getting shorter by the day.
The factory is really interesting. I'd never heard about this until now.