Your Leadership-Lab-Crowd triangle perfectly describes the gap I'm seeing on Wall Street. A few large banks and PE funds have rolled out internal chatbots or partnered with firms building financial AI tools - but even there, uptake is tiny because the tools are presented to employees as "optional sidekicks". When the message is "play with it on your own time," no one pulling 80-hour weeks willingly does so.
The deeper blocker is psychological. Junior staff worry that using AI for grunt work will short-circuit the skills they're supposed to master. But not all grunt work is equally valuable for skill-building, and the skill that will matter most in the coming years is knowing how to direct, audit, and iterate on AI output. Leadership has to make that explicit - shift the truly mind-numbing grunt work to AI, keep the judgment-building parts in human hands, and treat "managing the machine" as the new apprenticeship. That kind of sorting won't happen if the message from the top is "try using AI if you want, when you want". It needs an org-wide mandate and protected forums - AI discussion committees and innovation sessions where teams regularly test, map, and share what works.
I make the same case in my Substack post "Grunt Work & Growth" and would love your take if you have a minute. Thanks for pushing the conversation forward!
Great article. We are in the "faster horses" phase of AI, in reference to Henry Ford's comment, "If I asked my customers what they wanted, they would say a faster horse."
Everyone is just imagining how AI will make their old work faster, not really understanding the impact of the changes. If you use AI to fill out a document, and the other person uses AI to read the document, then why do we need the document at all? Some documents will still be needed, of course, but maybe not the ones that could be easily automated.
This is the time to rethink the entire process. There is still too much emphasis on reverse engineering AI to do pointless work faster and in higher volume.
Looking back to previous technological automations, think of the intense precision and careful thought required to get an automated packaging line to operate properly. The WORK is now in the design of the task, not in the doing of the task.
The best humans could never match the output of an automated filling line. But a poorly thought-out design can lead to choke points and piles of broken bottles that take longer, and cost more, to fix than simply filling the bottles by hand.
Doing bad or pointless work faster or more frequently is not the goal.
Excellent comment. Highlights:
"We are in the "faster horses" phase of Ai, in reference to Henry Ford's comment, "If I asked my customers what they wanted, they would say a faster horse."
"The WORK is now in the design of the task, not in the doing of the task."
"Doing bad or pointless work faster or more frequently is not the goal."
That last one has something to do with the (mis)alignment of incentives: it depends on whether you are the one assigned to do the bad or pointless work, or the one empowered to design the task.
You make a lot of interesting suggestions for leadership and frontline workers. But I struggle to see these adopted beyond superficial demos or showcase projects. Institutional inertia is real, and most companies aren't Google or Meta. In traditional enterprises like 3M or Buc-ee's, AI is more likely to seep in through quiet, marginal gains in procurement, logistics, or compliance than through top-down transformation. Real change may depend less on C-suite vision than on generational turnover, informal adoption, and bottom-up pressure.
It is impressive that reports of AI effectiveness keep coming in. However, I would offer one caution.
Many years ago, I read a book about why accidents happened when aircraft and ships were primarily controlled by computer systems. Long story short, the crews became complacent and let the computers do all the navigating and control work; even when something started to go wrong, they kept trusting the computer's information. Now we know that AIs based on LLMs "hallucinate". For some purposes, slightly incorrect output is "acceptable". But there are reports of lawyers getting lazy and not checking LLM legal output, which produces bogus arguments and citations. LLM coding help remains useful for boilerplate, but is not reliable in more complex cases. [I suspect few coders even ask for unit tests to validate the generated code - see the sketch just below.] And while LLM math is getting much better and performs well above my capabilities, I gather one is still better off using good math software for many tasks. In other words: do users of LLMs ever check the output carefully, or do they, under time pressure or out of laziness, just accept it as given? Do any of the reports look at work quality rather than just time saved?
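One cheap guardrail, if anyone wants to try it: gate model-written code behind ordinary tests before trusting it. A minimal sketch - pytest, with a made-up `slugify` helper standing in for whatever the model drafted:

```python
# Illustrative only: treat model-written code as untrusted until it passes
# tests you wrote yourself. `slugify` stands in for any LLM-drafted helper.
import re

def slugify(title: str) -> str:
    """Hypothetical LLM-drafted helper: lowercase, keep letters/digits, hyphenate."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_typical_title():
    assert slugify("Hello, World!") == "hello-world"

def test_edge_cases_models_often_miss():
    assert slugify("") == ""
    assert slugify("!!!") == ""
    assert slugify("  spaces   and---dashes ") == "spaces-and-dashes"
```

Run it with pytest; if the draft fails the edge cases, that is exactly the unreliability I mean.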
I believe that pure LLMs are not good enough for work that requires expertise and rigor. I think the solution may be to use the LLM as an interface to other software that provides accurate answers and output in narrow domains.
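As a rough sketch of what I mean (the `ask_llm` function is a stand-in for whatever chat API you use, and the JSON "plan" format is my own assumption, not any vendor's): the model only chooses which deterministic tool to call, and the answer the user sees comes from ordinary, auditable code.

```python
# Sketch of "LLM as interface": the model routes the request; a deterministic
# tool produces the answer. `ask_llm` is a hypothetical placeholder - swap in
# a real client. The JSON plan format is an assumption, not any vendor's API.
import json
from fractions import Fraction

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call. Assume it returns JSON such as:
    {"tool": "exact_arithmetic", "args": {"a": "1/3", "b": "1/6", "op": "+"}}"""
    raise NotImplementedError("wire up your LLM client here")

def exact_arithmetic(a: str, b: str, op: str) -> str:
    """Narrow-domain tool: exact rational arithmetic, nothing to hallucinate."""
    x, y = Fraction(a), Fraction(b)
    return str({"+": x + y, "-": x - y, "*": x * y, "/": x / y}[op])

TOOLS = {"exact_arithmetic": exact_arithmetic}

def answer(question: str) -> str:
    # The LLM decides *which* tool to call and with what arguments;
    # the figure the user sees is computed by checkable software.
    plan = json.loads(ask_llm(f"Choose a tool and arguments for: {question}"))
    return TOOLS[plan["tool"]](**plan["args"])
```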
Isaac Asimov once wrote a short story about computers inserting small errors and slowly degrading the world's economy. The nightmare scenario is people using LLMs to control dangerous machinery and operations, pilot vehicles, and even advise on fixing problems. I fear that relying on LLMs for work that requires rigor and accuracy is a mistake, and that everyone should be required to check all the output for errors (possibly using different LLMs to do the checking).
Maybe this can all be fixed with newer architectures, or compositions of software, in the future, but today I would be uncomfortable relying on LLMs alone for serious work.
I am holding a workshop next week at the “AI on the Lot” conference, aimed at TV and Film professionals. This is a perfect summary of the necessary steps I’m seeing in the TV and Film community. It’s not as organized as a single company, but moves like a great herd across the plains! Thanks for this.
Really interesting, as always. This chart especially stood out:
"Results from this recent survey on AI use by a representative sample of American workers: adoption has been accelerating, and workers report huge time savings"
Great foundation for thinking. Leadership supplies inspiration, the Crowd supplies innovation, and the Lab supplies integration.
It's true that it's not enough for leadership to mandate the use of AI; most folks still don't know how to use it effectively. The companies I've seen embrace AI have taken time to train their teams on the wide variety of available tools and how to use them well: sharing examples, walking through tutorials, and making people feel it's okay to admit they don't know how to use AI.
For those of us who use it a lot, it's easy to forget that it's scary and new to most folks. After a while it becomes natural to think AI-first, but you need to get your team comfortable enough to start on the learning curve before you can even set out on that path.
This is a sharp articulation of the gap we’re seeing - individual AI productivity vs. organizational drag.
I’m seeing the same thing from another angle: small marketing teams using Claude or GPT aren’t just speeding up tasks - they’re working in entirely new rhythms. Fewer approvals, fewer handoffs, fewer meetings. They’re collapsing old structures without asking permission.
But most orgs aren’t ready for that. They’re still layering AI on top of the same systems that made things slow to begin with.
That’s the shift I’m exploring in The Fox Advantage - a book I’m publishing chapter by chapter on Substack. It’s about how teams can collapse complexity to run faster, using AI not just as a productivity boost, but as a signal for what no longer works.
For anyone interested, the first two chapters (and a few free AI assistants) are here:
🦊 runwithfoxes.substack.com
Grateful for this framework, Ethan - “Leadership, Lab, Crowd” makes the invisible frictions easier to name.
And then there are the individuals. I'm a published writer with a couple of years' partnership with ChatGPT, and I'm now beginning to encourage other writers to use chatbots as co-explorers in their search for ideas. This part of the journey feels risky, because there's so much uninformed anti-AI feeling and genuine concern out there. But some of us have to do this, for the same reason that somebody in these companies has to step out first. Wish me luck.
Thought-provoking as always. I'm going to start an informal "How I AI" group at work and see where it goes.
Your Letters ALWAYS make my day :)
Thank you.
How right you are, "We are all figuring this out together!"
Cheers Prof,
Blessings Always
Nix G
Cape Town