40 Comments
J Young

Great article. We are in the "faster horses" phase of AI, in reference to Henry Ford's comment, "If I asked my customers what they wanted, they would say a faster horse."

Everyone is just imagining how AI will make their old work faster, not really understanding the impact of the changes. If you use AI to fill out a document, and the other person uses AI to read the document, then why do we need the document at all? Some documents will still be needed, of course, but maybe not the ones that could be easily automated.

This is the time to rethink the entire process. There is still too much emphasis on reverse-engineering AI to do pointless work faster and in higher volume.

Looking back to previous technological automations, think of the intense precision and careful thought required to get an automated packaging line to operate properly. The WORK is now in the design of the task, not in the doing of the task.

The best humans could never match the output of an automated filling line. But a poorly thought-out design can lead to choke points and piles of broken bottles that take longer to fix, and cost more, than just filling the bottles by hand.

Doing bad or pointless work faster or more frequently is not the goal.

forceOfHabit

Excellent comment. Highlights:

"We are in the "faster horses" phase of Ai, in reference to Henry Ford's comment, "If I asked my customers what they wanted, they would say a faster horse."

"The WORK is now in the design of the task, not in the doing of the task."

"Doing bad or pointless work faster or more frequently is not the goal."

That last one has something to do with the (mis)alignment of incentives: it depends on whether you are the one assigned to do the bad or pointless work, or the one empowered to design the task.

Grahame Broadbelt

I completely agree. This problem is at the heart of the difficulties of transition and change, especially change that provokes transformation rather than incremental improvement. We are in the "failure of imagination" phase, I think, where we are stuck in our current frames of reference while so much of what we do in organisations is low-grade or pointless work.

Great comment, thanks for sharing.

The Bull and The Bot

Your Leadership-Lab-Crowd triangle perfectly describes the gap I'm seeing on Wall Street. Only some large banks and PE funds have rolled out internal chatbots or partnered with firms building financial AI tools - but even there, uptake is tiny because the tools are presented to employees as "optional sidekicks". When the message is "play with it on your own time," no one pulling 80-hour weeks willingly does so.

The deeper blocker is psychological. Junior staff worry that using AI to do grunt work will short-circuit the skills they're supposed to master. However, not all grunt work is of equal value when it comes to skill-building. Moreover, the skill that will matter most in the coming years will be knowing how to direct, audit, and iterate on AI outputs. Leadership has to make that explicit - shift the truly mind-numbing grunt work to AI, keep the judgment-building parts in human hands, and treat "managing the machine" as the new apprenticeship. And that kind of sorting process won't happen if the message from the top is "try using AI if you want, when you want". It needs an org-wide mandate and protected forums: AI discussion committees and innovation sessions where teams test, map, and share what works regularly.

I make the same case in my substack post "Grunt Work & Growth" and would love your take if you have a minute. Thanks for pushing the conversation forward!

carlo

> "Junior staff worry that using AI to do grunt work will short-circuit the skills they’re supposed to master."

That is crazy.

Dave Friedman

You make a lot of interesting suggestions for leadership and frontline workers. But I struggle to see these adopted beyond superficial demos or showcase projects. Institutional inertia is real, and most companies aren't Google or Meta. In traditional enterprises like 3M or Buc-ee's, AI is more likely to seep in through quiet, marginal gains in procurement, logistics, or compliance than through top-down transformation. Real change may depend less on C-suite vision than on generational turnover, informal adoption, and bottom-up pressure.

Graham Clarke

Thought-provoking as always. I'm going to start an informal "How I AI" group at work and see where it goes.

Paul Dervan

This is a sharp articulation of the gap we’re seeing - individual AI productivity vs. organizational drag.

I’m seeing the same thing from another angle: small marketing teams using Claude or GPT aren’t just speeding up tasks - they’re working in entirely new rhythms. Fewer approvals, fewer handoffs, fewer meetings. They’re collapsing old structures without asking permission.

But most orgs aren’t ready for that. They’re still layering AI on top of the same systems that made things slow to begin with.

That’s the shift I’m exploring in The Fox Advantage - a book I’m publishing chapter by chapter on Substack. It’s about how teams can collapse complexity to run faster, using AI not just as a productivity boost, but as a signal for what no longer works.

For anyone interested, the first two chapters (and a few free AI assistants) are here:

🦊 runwithfoxes.substack.com

Grateful for this framework, Ethan - “Leadership, Lab, Crowd” makes the invisible frictions easier to name.

Susan

And then there are the individuals. I'm a published writer with a couple of years' partnership with ChatGPT, and I'm now beginning to encourage other writers to use chatbots as co-explorers in their search for ideas. This part of the journey feels risky, because there's so much uninformed anti-AI feeling and genuine concern out there. But some of us have to do this, for the same reason that somebodies in these companies have to step out first. Wish me luck.

Nancy J Hess

I am a huge fan of Ethan Mollick's writing and use it to inspire my own experimentation. "AI is about organization learning" hits home for me. I try to create "aha" moments with the leadership teams I work with. Recently I used the NotebookLM audio podcast to begin a review of six months of team work. The source material was my own personal notes and session follow-up notes shared with the team. They were amused and then became engaged in taking next steps that came out of pure inspired air. I think the AI-generated podcast discussion prompted them to take their work more seriously - as if an objective outsider was listening and mirroring back to them the value of their work. This opened up new conversations about the future of the organization.

Alex Tolley

It is impressive that reports of AI effectiveness keep coming in. However, I would offer one caution.

Many years ago, I read a book about why accidents happened when aircraft and ships were primarily controlled by computer systems. Long story short, the crews became complacent and allowed the computers to do all the navigating and control work. Even when something started to go wrong, they relied on the computer information.

Now we know that AIs based on LLMs "hallucinate". For some purposes, slightly incorrect output is "acceptable." There are reports of lawyers getting lazy and not checking the legal output of LLMs, which produce bogus arguments and legal citations. Coding remains useful for boilerplate, but is not reliable in more complex cases. [I don't think coders request unit-test code to validate the code.] While math is getting much better and performs well above my capabilities, I gather that one is still better off using good math software for a number of tasks.

IOW, do the users of LLMs ever check the output carefully, or do they, under time pressure or out of laziness, just accept the output as given? Do any of the reports look at work quality rather than just time saved?

I believe that pure LLMs are not good enough for work that requires expertise and rigor. I think the solution may be to use the LLM as an interface to other software that provides accurate answers and output in narrow domains.

Isaac Asimov once wrote a short story about computers inserting small errors, slowly degrading the world's economy. The nightmare scenario is people using LLMs to control dangerous machinery and operations, to pilot vehicles, and even to advise on fixing problems. I fear that reliance on LLMs for work that requires rigor and accuracy is a mistake, and that everyone should be required to check all the output for errors (possibly using different LLMs to do the checking).

Maybe this can all be fixed with newer architectures, or compositions of software, in the future, but I would be uncomfortable relying on its basic operations today for serious work.

Nancy J Hess

Although I am more of a social scientist, I have been in conversation with a nuclear engineer who works on mapping critical functions in the aerospace and nuclear fields. I lean toward concerns rooted in human nature; he leans toward mapping processes to eliminate human error. AI will support this approach. My premise is that without organizational learning and the heavy lifting of understanding human dynamics, no amount of perfection in the system will prevent accidents. How humans interact with the systems and with each other is the more critical piece.

Alex Tolley

You may have noticed that when you are having a procedure done in a hospital, a nurse will read out and check off items on a list that the treating physician has. This ensures mistakes of omission are not made. Pioneered by Kaiser, it follows the same routine as a pilot doing a preflight check.

On the other side, there is increasing automation where the computers control everything and the operator can take a hands-off role. Passenger drone flights will likely be operated that way. You get in, press start (very George Jetson), and the drone flies you to your destination. This is similar to Google's Waymo driverless cars, which have no controls that passengers can override in the case of a failure. These are systems that are doomed to make fatal errors.

In between these extremes are automatic systems that are under the control of a user, but the mental model of how they work is faulty, and the user has no idea whether the I/O is good or garbage. This is typical of poor use of statistical models in medical research papers, and such problems are likely to proliferate as "software eats the world", especially with hallucinating LLM AIs.

Dov Jacobson

Great foundation for thinking. Leadership supplies inspiration, the crowd supplies innovation, and the lab supplies integration.

Joost van der Meulen

As a teacher, I’ve spent two years trying to convince our management team that they need to do more and allocate more resources to explore the challenges and opportunities of AI in education. And while I have been allowed to do 'stuff' here and there, there is not enough commitment. Teachers should be encouraged to use AI to see what it can offer them and their students, and students need to be taught how to use AI responsibly. This article is extremely helpful in showing how we can approach this within our organization. So: thank you, Ethan! I am still curious how this framework might differ in an educational context.

Paul Gonzalez

This is fantastic, and I can't unsee this model now. Out of curiosity, on the Lab portion, have you seen any successful implementations of the Lab come out of the traditional IT function? I'm not sure the current cultures and ways of working across most Enterprise IT organizations are set up for this. It seems like a Product mindset and culture would win here, not an IT Project Management mindset.

Jesse Parent

Big yes: emphasis added --> "Individuals keep self-reporting huge gains in productivity from AI & controlled experiments in many industries keep finding these boosts are real, yet most firms are not seeing big effects. Why?

***Because gaining from AI at the organizational level requires organizational innovation***"

Tim Bond

Yes, and leaders who use AI themselves and appreciate the opportunity will be well placed to drive change; on the other hand, those who see it as something the (more junior) workers need to figure out and embrace will fall behind.

FRED GRAVER

I am holding a workshop next week at the “AI on the Lot” conference, aimed at TV and Film professionals. This is a perfect summary of the necessary steps I’m seeing in the TV and Film community. It’s not as organized as a single company, but moves like a great herd across the plains! Thanks for this.

Ezra Brand

Really interesting, as always. This chart was especially interesting:

"Results from this recent survey on AI use by a representative sample of American workers: adoption has been accelerating, and workers report huge time savings"

Field

Working at a tech company, we've attempted to implement this framework. It has failed pretty spectacularly. I have found that tech workers are overwhelmingly opposed to AI and see it (particularly LLMs) as untrustworthy and as blunting their abilities, which recent research confirms. Our own AI Lab has now been disbanded since, as others have noted here, there was nothing of any substance produced outside of demos. In spite of taking steps to make the lab enjoyable and beneficial, no one was interested in actually participating (beyond the free lunch and time away from work).

While these posts seemed insightful in the early days of the new AI rush, they aren’t aging well, in my view. The truth is that these systems are both unreliable and damaging to the cognitive abilities of their users, particularly young minds. Finding the right balance and use cases is critical, but that requires a more formal management system approach to risk, opportunity, and competence than described here.

FWIW, internal surveys at our organization are tracking public statistics: people are souring on AI at an increasing rate, and the more they learn, the more turned off they are.

Jonathan Lloyd

Otterly great! 🦦

Have you seen the entirely AI-generated couch potato advertising 🥔 🛋️ campaign by a furniture shop in NI?

It's going to open up a whole new market, as you say.

https://www.classfutures.com/p/ai-creativity-itv-gen-ai
