Discussion about this post

Julia Junge

As an organizational development consultant working mainly with nonprofits, I can't help but wonder if some of the seemingly "nonsensical" processes in organizations are actually quite meaningful. What looks inefficient on the surface often reflects social cohesion, informal support, and the kind of relational intelligence that holds teams together – especially in mission-driven or nonprofit settings. For example, a meeting might not directly produce anything useful, but it can strengthen team spirit and emotional well-being.

Not all outputs in organizations are as clearly defined as "winning a chess game." Goals like trust, participation, or social justice are hard to measure and often conflict with one another – but they are still essential. That's what makes AI adoption so tricky: it needs to deal not just with outcomes, but with ambiguity, values, and context.

I'm fascinated by how AI navigates messy systems and invents new workflows to reach a goal. But I also believe we need human judgment, emotional intelligence, and trust in the wisdom of complex social dynamics. Otherwise, we risk only achieving the parts of our mission that are easy to describe – and losing sight of the rest.

Recent examples, like Anthropic's experiments in which agents resorted to blackmail, show how hard it is for AI to handle conflicting goals. But that's exactly the daily reality of organizations. That's why we need to think of AI not only as a technical tool, but as something that must learn to operate within relationships, tensions, and shared responsibility.

Ezra Brand

Great piece. This all reminded me of "Chesterton's fence", defined by Wikipedia as "the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood."

And ibid.: "The more modern type of reformer goes gaily up to it and says, 'I don't see the use of this; let us clear it away.' To which the more intelligent type of reformer will do well to answer: 'If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.'"

To me, that's the main value in any kind of outsider (whether a consultant or a future advanced AI) making an effort to understand current processes, rather than looking only at the known inputs and expected outputs and then rebuilding from first principles. There's often a good reason why the current process exists, and often it's not simply an accident of history.

The same is true when learning from biological processes and evolution.

