30 Comments

Terrific post. It’s exciting to see people doing more than just generating silly images and fake term papers.


Your first principle suggests that one consequence of rebuilding organizations along these lines is that managers will have less power. Conflicts over remote work and recent successes by labor unions are other signs of a general realignment between managers and workers, especially knowledge workers. What other signs should we watch out for in terms of organizational or economic restructuring?


Curious where you got the impression that that post speaks to "managers having less power"? My read was that managers here still have power, but are utilizing AI as another member of the team in order to make work more efficient. It is the manager who sets where and how AI is utilized in the new workflow.


"We already can see a world where autonomous AI agents start with a concept and go all the way to code and deployment with minimal human intervention. This is, in fact, a stated goal of OpenAI’s next phase of product development. It is likely that entire tasks can be outsourced largely to these agents, with humans acting as supervisors"

Imagine meetings where AI sits in while changes to the current system or new developments are discussed. At the moment, the way this usually goes is that conclusions are wrapped into tasks, and tasks are passed on to, say, software developers to execute. A week or two later you get running software.

If you have AI listening in, in theory you get running software at the end of the meeting. So you can certainly cut out the software developer (currently a well-paid job).
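The meeting-to-software loop described above can be sketched as a tiny pipeline. Everything here is hypothetical: `extract_tasks` is a toy stand-in for an LLM pass over the transcript, and the `worker` callable stands in for whatever agent (or developer) turns a task into code.

```python
import re

def extract_tasks(transcript: str) -> list[str]:
    """Toy stand-in for an LLM pass that turns meeting notes into tasks.
    Here we just pick up lines flagged as action items."""
    return [m.group(1).strip()
            for m in re.finditer(r"(?im)^action:\s*(.+)$", transcript)]

def dispatch(tasks: list[str], worker) -> list[str]:
    """Hand each task to a worker (today a developer, tomorrow perhaps a
    code-generating agent) and collect the results."""
    return [worker(t) for t in tasks]

transcript = """Discussed the billing bug.
Action: add retry logic to the invoice job
Action: write a regression test for duplicate charges
"""

# In the supervised version, a human reviewer would sit between
# dispatch() and deployment.
results = dispatch(extract_tasks(transcript),
                   worker=lambda t: f"PR drafted: {t}")
```

The interesting design question is exactly where the human review step goes in that pipeline, not whether the pipeline can exist.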


I’m aware, and agree it is entry- and mid-level roles first at risk. At the moment, AI agent output still requires significant review, as it is often correct less than 50% of the time. While we’re in these early innings, the judgment that good human managers possess will still be needed -- to determine how to utilize GenAI, to review output (and teach others how to review), etc.


If "teams develop their own methods" and "guidelines will need to be much clearer for employees to feel free to experiment" then managers are going to have to give up power. The alternative is "secret cyborgs" and knowledge workers looking for opportunities that don't require pretending that nothing has changed. I think the insight here is that the habit of thinking about tech as "external IT solutions imposed by management" won't work for generative AI because it will be a collaborator within the team, rather than a method of control/increased efficiency. If managers want efficiency from LLMs, they have to give up power. This is the first mass technology adoption where workers are bringing the tool to the office instead of having management provide it. And this is happening when other trends, such as workers unionizing and demands for remote work, are pushing against managerial control.


Definitely not the first time workers have brought tools to do work. PCs, smartphones, all types of SaaS tools -- this is the definition of shadow IT. Smart managers have long embraced their teams' ability to tell them the future, and communicated that to IT to implement properly. AI will be no different.


Point taken, Alex. I think the extent of this depends on the context and when the tech was adopted. I was fortunate to have my first mobile device (a BlackBerry) and my first PC (an IBM) provided by my employer, but I am betting that smartphones in particular are work tools that are typically paid for by the worker and always have been.


Not always, but often. But that's irrelevant IMHO. What matters is whether something is used to do work. Smartphones have been for a while; the reaction is the BYOD and device-management space. BlackBerries were different b/c they were corporate from the get-go -- they have never been a true consumer tool. In contrast, today, consumer technology drives most user-facing enterprise technology: UX, functionality, etc. Slack is AOL IM (which we used at work, without IT asking or monitoring, back in the day). Document sharing -- people work around SharePoint constantly b/c it is so cumbersome. I don't really think in terms of ownership, I think in terms of how the job is getting done. And the best teams will figure out the best way to get the job done with technology while incorporating acceptable risks. We didn't chat about financials or anything super critical on IM. We talked about how to complete projects. That's just my .02 of course. :)


Workers bringing the tool to the office through unauthorized channels pose a risk to the company and clients (i.e., exposing company and client data to consumer versions of chatbots). As such, it is managers who have to provide guidelines on how to use the tools or, preferably, provide enterprise versions of the tools that retain data confidentiality. Agree with you that managers banning/prohibiting these tools will not work, but I think there is a role for good managers in enabling responsible use of them.


The emergence of DAOs (decentralized autonomous organizations) as a common standard of startups and public entities.


This is great and very timely for me. I'm about to write an article/essay (for an EdD) looking at the impact of AI on the professions, particularly HE academics. Do you know of any useful methodologies for systematically analysing current processes? I think this would be a useful starting point for considering potential changes.


I definitely would like to see a post about how to integrate the ideas discussed here into K-12 education and classrooms. Unfortunately, I am skeptical that most schools will be able to pivot successfully in time to really take advantage. It seems like outside companies and edtech startups are where most of the exciting work is being done, as opposed to within schools themselves. There are isolated cases of teachers experimenting on their own, but I would love to see a consortium or some more formalized group sharing practices, organizing conferences, and in general communicating more effectively.


I'm opening discussions with organizations about how to solve business problems strategically with AI. We apply a framework to reveal the process and then work to remove or optimize steps. With the SOP, it then shows where roles and responsibilities align with KPIs, so there is no exclusion or overlap. My gut was telling me that as soon as we started doing this we were changing the organization. My concern is having a strategic framework.


Long time reader. Was fun to see the tool I built (Screenshot to code) featured in this post :)


I'm really excited by the discussions around human/machine teaming. There is still so much to experiment with when it comes to mixed teams of humans and agents. When I was working on some projects focused on this at Philosophie (~2018), we found a strong need for cross-collaboration between people and particular, focused agents. I've talked about how this might play out in the context of a smart home here:

https://uxdesign.cc/a-smart-home-is-one-that-talks-to-itself-58bb9222d893

Also, I wonder what we might learn from some earlier projects in this field like Orchestra from B12:

https://github.com/b12io/orchestra

The idea of building known workflows vs. dynamic goals will be interesting (and will depend on how "meta" the goals are).
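To make the known-workflow vs. dynamic-goal distinction concrete, here is a minimal sketch -- my own illustration, not Orchestra's actual API. A fixed workflow runs steps in a predeclared order; a goal-driven loop lets an agent keep choosing the next step until some goal test passes.

```python
# Known workflow: the steps are fixed in advance, pipeline-style.
def run_workflow(steps, data):
    for step in steps:
        data = step(data)
    return data

# Dynamic goal: keep choosing and applying a next step until the goal
# test passes (with an iteration cap as a safety valve).
def run_until_goal(choose_step, goal_reached, data, max_iters=10):
    for _ in range(max_iters):
        if goal_reached(data):
            return data
        data = choose_step(data)(data)
    return data

# Toy example: clean up a string with a fixed pipeline...
draft = run_workflow([str.strip, str.capitalize], "  hello world ")

# ...then let a "goal" (ends with a period) drive further edits.
polished = run_until_goal(
    choose_step=lambda s: (lambda t: t + "."),
    goal_reached=lambda s: s.endswith("."),
    data=draft,
)
```

The "how meta are the goals" question from the comment shows up here as how much intelligence you put into `choose_step` and `goal_reached` versus into the fixed step list.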


Another great post, Ethan, thank you.

This ‘sneaking’ AI into our workflow reminds me of years ago, when some of us had calculators we hid in our desks. Others had something called a PC running DOS at home, so they would do the hard stuff at night or on weekends and bring back the results. Businesses were SO SLOW to adopt the idea of giving a PC to employees. Only ‘select’ employees were allowed one ...

Managers were afraid PCs would have a negative impact at work due to people wasting time on them.

So here we are again with AI. I welcome it.


Wow, what brilliant foresight into the world of work! If we don't act now-now, it feels, it's going to be too late! And if over 50% "claim" to use AI at work, imagine what the actual behavior would be in reality :).


With GPTs easily incorporating custom data of all kinds -- effectively adding the ability to form memories, which LLMs have lacked up to now -- and with new models a year or two (or less) away that will be multisensory in ways that extend a lot of what AIs can do, I think we are overcoming some of the big initial hurdles that have kept ChatGPT a bit of a parlor trick for most uses and businesses. We are entering a period of very serious societal changes resulting from large productivity gains (centered, I think, around job losses). One thing that occurred to me is that smart introverted people, who have had a good few decades, will find things much rougher going from the medium term onward. It's the extroverts (and physical trades) that I think will emerge less impacted.


Your article brilliantly captures the essence of organizational transformation in the AI era, drawing a fascinating parallel between past and present. It's intriguing to see how AI is not just a tool but a team member, reshaping workflows and decision-making.

However, I wonder if we risk oversimplifying AI's role by not acknowledging the potential pitfalls, such as ethical dilemmas and over-reliance on technology. Could this lead to a new kind of 'AI-driven bureaucracy' where human creativity is stifled?

It's a controversial thought, but perhaps the future of AI in organizations isn't just about efficiency, but also about balancing innovation with humanity. Keep the posts coming!

Also, we have a cool AI tool we’ve built that automates newsletters with a direct integration to Substack (so you can directly copy/paste). Would love your feedback on it (free to trial): https://neuralnewsletters.com/


I’m glad that you are working on this; however, we are taking a slightly different approach. We have a program that can be tailored to any group, any age, any educational background. We designed it for elementary school students, so we know anyone can learn it. It’s called Accelerator HAI. We teach both human intelligence and artificial intelligence at the same time. This gives individuals and companies the foundation to really use their full potential.


So helpful. AI as teammate. Too early too late. Thanks.


I love the comment about treating AI as a team member rather than an external IT app. Great perspective to adopt.


As a UX designer and researcher I'm a bit bemused about the statement "Well, one thing that AI is quite good at doing is providing feedback".

Yes, it can synthesise patterns of previous feedback, however, if "nobody has built the kinds of complex educational games we are developing" where have you sourced your dataset?

As in any creative build, you can certainly start with the simulacrum of a design framework, but that leap would probably benefit more from just doing the primary research. ;-)


Great post. I am done with LLMs. It's time for personal LMs: a different approach -- memory, truth, grounding, cost-effectiveness, for all people, controlled and owned.


Excellent post... wow. Thank you. As the best posts do -- it got me thinking...

And I caught myself thinking of the Coase Theorem, the Theory of the Firm and transaction costs, and the idea that LLMs collapse the transaction costs of working with language... and this is powerful because language is action in many contexts: program code, legal filings, a signature. So collapsing the cost of working with language lowers the cost of a whole class of actions. And in doing so, if firms are organized around high-transaction-cost transactions, then per Coase it should re-arrange the lines between firms.
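One way to see the Coase point with numbers (purely illustrative figures of my own, not from the post): the make-vs-buy decision flips when the cost of transacting over language collapses.

```python
def in_house_is_cheaper(internal_cost, market_price, transaction_cost):
    """Coase's rule of thumb: keep work inside the firm when buying it on
    the market (price plus the cost of transacting) is more expensive."""
    return internal_cost < market_price + transaction_cost

# Hypothetical numbers for, say, drafting a routine legal filing.
before = in_house_is_cheaper(internal_cost=500, market_price=300,
                             transaction_cost=400)  # 500 < 700: keep in-house
after = in_house_is_cheaper(internal_cost=500, market_price=300,
                            transaction_cost=20)    # 500 < 320 fails: buy it
```

If LLMs cut the transaction cost of language-mediated work, activity that used to be cheaper inside the firm moves across the firm boundary, which is exactly the re-arranging of lines between firms that Coase predicts.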

Coase and transaction costs are an old model... so I hadn't connected it up. But it feels like part of what has been worrying me isn't just that we will all need to redesign our processes -- it's that the economic value-add of the "human in the loop" will change in ways that will require re-architecting the value proposition of the business at the same time we are redesigning processes.

The processes I can kind of wrap my head around a little bit... but the transaction cost effect feels like what's got us all anxious, and I would appreciate anyone's thoughts on applying Coase and the Theory of the Firm to what's coming from AI.
