58 Comments
Julia Junge:

As an organizational development consultant working mainly with non-profits, I can't help but wonder whether some of the seemingly "nonsensical" processes in organizations are actually quite meaningful. What looks inefficient on the surface often reflects social cohesion, informal support, and the kind of relational intelligence that holds teams together – especially in mission-driven or nonprofit settings. For example, a meeting might not directly produce anything, but it can strengthen team spirit and support emotional well-being.

Not all outputs in organizations are as clearly defined as "winning a chess game." Goals like trust, participation, or social justice are hard to measure and often conflicting – but still essential. That's what makes AI adoption so tricky: it needs to deal not just with outcomes, but with ambiguity, values, and context.

I'm fascinated by how AI navigates messy systems and invents new workflows to reach a goal. But I also believe we need human judgment, emotional intelligence, and trust in the wisdom of complex social dynamics. Otherwise, we risk only achieving the parts of our mission that are easy to describe – and losing sight of the rest.

Recent examples, like Anthropic's experiments in which agents resorted to blackmail, show how hard it is for AI to handle conflicting goals. But that's exactly the daily reality of organizations. That's why we need to think of AI not only as a technical tool, but as something that must learn to operate within relationships, tensions, and shared responsibility.

macirish:

My experience supports exactly what you are saying.

I was 69 before I realized that it's not the job, it's the people. I love to program and thought that was my "cheese," but the reality was that I liked serving people with my programming.

Ezra Brand:

The examples you give of why certain processes exist all relate to human psychology. This is exactly the point of the OP: many of those aspects become irrelevant once an advanced AI takes over all or much of the execution of the process.

Justin Tauber:

On the contrary, Julia Junge's point is more than a psychological one. She correctly asserts that what we mean by a good outcome is a richer concept than task completion. The alternative is to treat task completion achieved by any means (like blackmail), while externalising any avoidable cost (like the degradation of relational trust), as equally acceptable.

When we ask ourselves what good looks like, we almost never mean simply “done on time and in budget”, even if we take many of those hidden criteria for excellence for granted.

Ezra Brand:

Agree with all that. No one's arguing for a simplistic or overly reductive view of what counts as a successful outcome. My point was just that the *specific examples* mentioned in the grandfather comment were all psychological in nature, and those examples might well stop being relevant if/when advanced AI enters the picture and changes the process end-to-end.

𝕁𝕀𝕍𝕏:

It will also be interesting to see the difference in how for-profit and non-profit organizations adopt AI, as their incentive structures are inherently different.

For the for-profit company, it's all about efficiency and survival of the most adaptable: adopt the new, more efficient ways of working that AI is able to dream up, or be outcompeted by others who do.

The non-profit, by contrast, can still roll on with its inefficiencies because there is not really a need to grow at all costs.

Brendan:

Beware the false dichotomy. There are many for-profit companies that do well by doing good. "Growth at any cost" is not part of their value system. Conversely, there are "non-profits" for whom survival is the prime directive, never mind what the mission statement says.

Val:

Great point. I wonder if there's also an 'ends justify the means' risk here: when we only train on outcomes, we might not see how the AI is actually achieving them. Some of that organizational messiness might contain important ethical guardrails that pure optimization could bypass.

Isaac Andersen:

It sounds like the garbage can theory glosses over the “why?” behind the complexity and treats it as inherently bad.

I bet if you dug into any of the strange complexity and redundancy you’d find a good rationale that makes sense in the context of your org.

It feels like the bitter lesson and garbage can theories actually forward the same idea: Optimal, complex systems don’t fit cleanly into human mental models.

To embrace the bitter lesson is to embrace the garbage can: Focus less on process design and instead focus on incentives and rewards. A successful company doesn’t need to be intelligible—it just needs to hit its KPIs.

Greg G:

In my experience, weird processes in companies often do not have a good rationale. Someone dropped a ball, failed to communicate, didn't think something through, or set the wrong incentives, and the organization then did the best it could given the circumstances. Then that becomes the process, and often it also becomes load-bearing when other processes are layered on top of it. It's the organizational equivalent to tech debt, and any time you look at an old system (organizational or technical) that's not fully understood, you find pretty odd stuff.

I agree that incentives are important, but I don't think they can be considered separately from the process. Incentives often lead to unintended consequences and almost always lead to gaming the system, and yet people rarely take this into account.

Isaac Andersen:

I think that's right. I don't want to suggest bureaucracy and process are ideal. Just wanted to point out: they are the "optimal" products of incentives and very little else.

Which is true for LLMs too. At a high level, I think this explains the same "pretty odd stuff" you see in ML model behavior.

Ezra Brand:

Great piece. This all reminded me of "Chesterton's fence", defined by Wikipedia as "the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood."

And ibid.: "The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"

To me, that's the main value in any kind of outsider (whether a consultant, or a future advanced AI) making an effort to understand current processes, as opposed to simply the known inputs and the expected outputs, and then building from first-principles. There's often a good reason why the current process exists, and often it's not simply an accident of history.

The same is true when learning from biological processes and evolution.

Dave Sampson:

So true. As the mammal said to the dinosaur: I don't see the point of dinosaurs.

Jess McConnell:

I've spent countless hours helping organizations with process improvements, and "garbage can" is a perfect description. Even when you get close to a good process flow, you'll discover a huge amount of bias, a lack of clarity on authority (everyone has to say yes, any ONE can say no), and hidden constraints on the process that have nothing to do with the work and far more to do with CYA. Toward the end of my career I almost exclusively looked at the constraints before even attempting to improve the process, and often got step-change improvements. That is still an early prompt I work in when problem-solving with AI - asking it to uncover implied or explicit constraints and bias in the question itself.
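
For instance, an opening prompt along these lines (the wording and the small helper below are purely illustrative sketches, not any particular tool's API):

```python
# A sketch of the "surface the constraints and bias first" prompt described above.
# The wording and the build_constraint_audit() helper are hypothetical examples.

CONSTRAINT_AUDIT_TEMPLATE = """Before proposing any solution to the problem below, list:
1. The explicit constraints stated in the problem.
2. The implied constraints or assumptions hidden in how the question is framed.
3. Any bias in the framing (who benefits, whose approval is assumed, what is treated as fixed).
Only after that, suggest which constraints could be challenged.

Problem: {problem_statement}"""

def build_constraint_audit(problem_statement: str) -> str:
    # Fill the template; the result goes to whatever model or chat interface you use.
    return CONSTRAINT_AUDIT_TEMPLATE.format(problem_statement=problem_statement)

print(build_constraint_audit("Reduce our grant-reporting cycle from six weeks to two."))
```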

Vincent Murphy:

Virtually every single process across all of modern society is predicated on the invention of the printing press: international treaties, common agreed standards of weights and measurements, scientific method, educational establishments, universal literacy, government and business power structures, copyright, legal process, every institution on Earth - our entire modern world arose from and was structured around the output and capacity of the printing press.

Even its offshoot mediums – books, newspapers, journals, magazines, radio, television, film, and even the Internet – are all based around the conceptual core of print's uni-directional communication: "I create the content, you consume the content."

Print, like language and writing before it, is not just a technology, nor even one of the rarer General Purpose Technologies like the plough, electricity, or steam, each of which changed the way humans exert control over the physical world. It belongs to an even more exclusive subset of 'cognologies' that change the very way we process and manipulate information itself.

And now a fourth cognology has arrived in the form of AI, which, like its predecessors, allows for a 10x expansion of the total available information space whilst compressing the time that expansion takes by a similar factor.

Think of the world before Gutenberg's invention in 1453: manuscripts had been the world's principal communication medium for over 2,500 years – yet by 1500, less than 50 years later, the manuscript had virtually vanished whilst the world found itself wrestling with the Renaissance, the Reformation, the Enlightenment, etc.

And with the subsumption of the manuscript came the rise of every single process mentioned in Ethan's original article and in everyone's comments here.

That 50-year period after the printing press is called 'the incunabula', when the world had to scramble to adapt, adopt, and experiment with the newfangled medium sweeping the old ways aside: paragraphs, word spacing, upper and lower case, vernacular texts, punctuation, indexes, and declarative authorship, to name just a few, all had to be invented amidst the tumult. For example, the scant manuscript libraries that did exist often filed books not by subject or author but by the first line of text!

We are now in the midst of a new incunabula but one that is both vertiginously steeper and far more compressed - squeezing 50 years into 5.

The only thing we can really be sure of is that a) it is very definitely happening, and b) the stark difference between a monastic scriptorium and a 15th-century print shop will pale in comparison with just how radically different our world will soon look.

Brendan:

I'm not sure that AI is kicking off the Second Incunabula or whether that was Web 2.0, ten years ago, when suddenly anyone could publish content to a global audience at a very low cost (once a certain tech threshold had been reached). Printing presses were still expensive, and those who owned them held power. "Never start a fight with a man who buys ink by the barrel." With the rise of blogs, YouTube, Facebook, etc., that has changed.

What AI is doing - at least in the realm of communication - is challenging the very bedrock of authenticity: is this content even real? The notions of gatekeeping or bias now seem almost quaint.

Perhaps a Third Incunabula is here before we've fully absorbed the Second?

Court Chilton:

Great article. I was reminded of the re-engineering mantra: don't automate the mess. Maybe the game we're playing is simplifying the mess (with agents, to save time and cut ineffective human iteration) and then adding back the messy elements we need as people: decision points that require trust, participation, and fairness.

The Bull and The Bot:

We’re going to see a stark contrast in how conservative, highly regulated industries, like finance, healthcare, law, and government, adopt AI compared to fast-moving, innovation-embracing sectors. In risk-averse fields like finance, there’s a core requirement for transparency: they need to understand how AI is producing outputs, especially when those outputs touch client deliverables or regulated workflows. Letting AI run unconstrained can lead to powerful results - but blind trust just isn’t an option for these institutions, no matter how impressive the output.

That's why AI adoption in these sectors will likely be slower. These organizations demand control, auditability, and continuity. Before they can embrace AI at scale, there's a need for the chaotic, often painstaking mapping of existing processes - so AI can be integrated without disrupting how things have always been done. In these industries, AI adoption won't look like radical reinvention; it'll look like familiar processes, just done faster, cheaper, and more consistently.

Abhay Ghatpande:

There are several issues with this article. The false equivalence between closed systems like chess and other games and open systems like organizations is deeply problematic. It's a classic category error combined with hasty (over)generalization. It assumes that because "agents trained on outcomes" work in chess-like contexts, they'll scale across complex enterprises. That leap ignores the need to understand context, intent, and implicit norms. There is no evidence to conclude this approach will work broadly across all organizational functions. Equating organizational outputs with chess results ignores that important dimensions of work cannot be captured by output alone.

Brendan:

I think you missed the point of the article. He's not setting them up as equivalent, but rather as polar opposites. "Here's Plan X for solving problems of Type A (Deep Blue). Here's another way of solving this kind of problem - Plan Y for Type A. We've been trying to use Plan X for problems of Type B, and not doing so well. What if we try Plan Y on Type B?"

We really haven't tried that at scale yet, that's his point. So yes, "There is no evidence to conclude this approach will work broadly across all organizational functions." We don't know. Yet.

"...important dimensions of work cannot be captured by output alone" True. "Not everything that matters can be measured, and not everything that can be measured, matters." But most organizations tend to base incentives and rewards on measurable outputs. Certainly the market does, even to the point of quantifying something as nebulous as "brand goodwill."

Which is why it's highly unlikely that an AI marketing / crisis-communication strategist came up with the brilliant idea of having Gwyneth Paltrow record a short PR spot for Telescope last week. (Look it up.)

Dov Jacobson:

Assuming 'Telescope' is a typo for 'Astronomer', it is hard to see why you think the idea of exploiting Gwyneth Paltrow required trans-human cleverness. After all, she is the ex-wife of the man (Chris Martin of Coldplay) who outed the careless Astronomer execs, and she may have been leveraged to help undo some of the damage he participated in.

Brendan:

Astronomer, right. Cognitive interference effect. (I honestly thought it was a special-interest magazine, not a tech company. Whatever happened to company names like "Akron Tool and Die"?)

No, I don't think that it "required trans-human cleverness", that's my point. It required *human* cleverness. I don't think a machine - a statistical pattern engine - could have come up with the idea. It required a level of semantic and cultural understanding quite beyond the current iteration of LLMs and their kin.

Dov Jacobson:

My apologies for misreading your comment.

I will delete my useless response shortly.

Michael Champion:

A Garbage Can-ish organization doesn't have rules of the game that an AlphaZero-like AI could use to play against itself to learn effective strategies. So I don't necessarily agree that the Bitter Lesson will apply to real-world "games", especially in a world where organizations can redefine "success" rather than change strategies or leaders.

But the article treats that as an open question: "We're about to find out which kind of problem organizations really are: chess games that yield to computational scale, or something fundamentally messier."

Sarah Hildebrandt:

It’s been a long time since enterprise organizations have endeavored to wallpaper their conference rooms with BPMN diagrams. One doesn’t need AI to realize that analyzing as-is processes with an expensive team of consultants doesn’t help to improve the situation, which I would respectfully refrain from describing as “garbage”.

In fact, the processes have already evolved so much by the time the analysis is finished and the purported optimization planned that the solution is deprecated before the first sprints start.

And that was before AI.

What you're forgetting is that AI means dramatically fewer humans in the processes – therefore simpler workflows, way less UI.

It’s the humans and the UI that make enterprise development so complex.

Take them out, and you’ll have really fast transformation. That’s the bitter truth.

AAF:

This is cool. I think you can solve the process question by thinking in terms of constraints. In the case of the chess game there are many constraints placed on the AI (it must play by all of chess's different rules). These constraints are already in the public domain, so AI can discover and adhere to them without someone inputting "your knight can move two spaces in one direction and one in another".

When we ask AI to achieve an outcome, we are always implicitly asking it to figure out its own process by which to do that. But within a constraint we have outlined—“reach this goal.” Most goals have many constraints attached to them. The constraints that are in the public domain, AI can figure out on its own. But we must input the ones that are not. For example, “seek high quality sources.” “Use my organization’s branding.” Instead of thinking of it as creating process—think of it as constraining AI to empower it to reach a specific outcome.

“Make a to-do list” (kind of like extended thinking with Claude) is a great constraint if you have the sub-goal of teaching the user HOW AI is achieving the goal that's been laid out—so we can learn from AI's process, the point of which is to continue to refine our goals (constraints) with further sub-goals (constraints) to empower AI to reach even better outcomes.
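
To picture the shape of this, here is a minimal sketch (the goal text, the constraint list, and the compose_prompt helper are all made up for illustration):

```python
# A sketch of "outcome plus explicit constraints" prompting.
# Everything here (the goal, the constraints, the helper) is illustrative only.

def compose_prompt(goal: str, private_constraints: list[str]) -> str:
    """Combine an outcome-level goal with the constraints only we can supply."""
    constraint_lines = "\n".join(f"- {c}" for c in private_constraints)
    return (
        f"Goal: {goal}\n"
        "Follow any widely known rules that apply (laws, platform policies, citation norms).\n"
        f"Constraints specific to us:\n{constraint_lines}\n"
        "Before acting, write a short to-do list showing how you plan to reach the goal."
    )

prompt = compose_prompt(
    goal="Draft a donor update on our literacy program",
    private_constraints=[
        "Seek high-quality sources.",
        "Use my organization's branding and tone of voice.",
    ],
)
print(prompt)
```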

TW:

The fundamental issue here is the process model.

A chess game is a decision tree. The tree may be conceptually bigger than the known universe, but it is linear and proceeds predictably from the step immediately before it. It is a "perfect information" game: you have all the information you need to make a decision at any given time. So it's quite tractable to modeling, including having the model play itself.
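
To make "tractable to modeling" concrete, here is a minimal game-tree (minimax) sketch over a toy perfect-information game (a tiny take-1-or-2 Nim variant, not chess; purely illustrative):

```python
# Minimax over a toy perfect-information game: players alternately take 1 or 2
# sticks from a pile, and whoever takes the last stick wins. Illustrative only.

from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(sticks: int, maximizing: bool) -> int:
    """Game value from the first player's perspective: +1 means they win with optimal play."""
    if sticks == 0:
        # The previous move took the last stick, so the player to move now has already lost.
        return -1 if maximizing else 1
    scores = [best_score(sticks - take, not maximizing)
              for take in (1, 2) if take <= sticks]
    return max(scores) if maximizing else min(scores)

print(best_score(7, True))   # 1: from 7 sticks the first player can force a win
```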

The problem with human activities is that the game space is essentially unbounded, unlike chess. Humans can quit and work for a competitor, bribe other humans in order to avoid something unpleasant, etc. We focus on "psychology" because of our ego, but the really important distinction is that a chess board never gets hit by lightning, never collapses because a neighboring chess board is suffering a drought, etc. This is not tractable to modeling. At least with chess you know exactly what all the pieces do and how many there are.

I wonder, then, following my "drunk at the lamppost" principle (easy always wins eventually), whether businesses of the future will resemble chess games, not garbage cans. Human activity will be the equivalent of the two old men chatting over the board. With perhaps the additional proviso that they never make any moves.

Mike Hulbert:

I appreciate reading an article that meaningfully addresses the messiness of how corporations get things done. This seems to be often missed in the hype train we are on right now about AI agents.

I think a primary factor is missing, though - the mess of enterprise systems that are used to run corporations. Over the last 20 years, functional leaders picked their own enterprise systems in sales, finance, marketing, supply chain, etc. Many of the craziest processes relate to getting things accomplished across corporate functions and their local technology. This is a major constraint on how the current concept of an AI agent can operate. It is not enough to just train on outcomes; you have to train agents on, and integrate them with, the existing technology.

Jordan Furlong:

The implications for AI’s use in the legal field — where following established procedures and satisfying statutory or common-law requirements are viewed as essential to both the accuracy and legitimacy of the result — are interesting. Lawyers and judges tend to be linear thinkers and sequential actors, placing one foot after another down a well-marked path.

So there might be little appetite within the legal profession for a technology that creates legal products or arrives at legal solutions in a “messy” or undocumented manner, even if the outcome is as good as (and far more efficient than) what the traditional method produced. But clients and other legal system users might see it differently.

David Bent:

I wonder if there is a different bitter lesson, about, well, capitalism. That is, when companies can, they make a situation simpler in order to have more control and make more money (at least, as modelled and in the short-term).

Past example: business process re-engineering.

In this case, they make themselves more like chess games, so they can deploy AI and avoid the costly messiness of humans – even if, in the medium term, you cannot get rid of the messiness and the gains are illusory or temporary.

Ulrich Tietz:

Thank you for the newsletter. Many of the problems described here might have a solution. I would recommend reading The Logical Thinking Process: A Systems Approach to Complex Problem Solving by H. William Dettmer (Kindle edition available).

Roberto Seif:

Great article. Two things that worry me about this "outcomes-based" approach to agentic AI are:

1) The famous "paper clip" thought experiment, where an AI instructed to maximize the production of paper clips ends up going rogue to achieve its goal and nobody can stop it. While the paper clip story is extreme, it exposes the risk that a minor oversight in the program's rules could lead to disastrous unintended consequences on a large scale.

2) And by extension, if AI trains itself to find the best ways to reach its goal, it also means that nobody could tell why, how, or when it crossed the line and what it will do next (i.e. it's a black box).

Thoughts?
