40 Comments

The typeface is too small to read in many of your example inserts (your subscribers may be reading emails on a smallish laptop or tablet). Perhaps you could ask your AI assistant to insert them as attachments, or to format them in a single column rather than side by side.

Thanks


Any links to this Primer software? Is this an existing thing or something the lab will start working on?

> "We are going to be releasing software like Primer, the AI-agent based educational system, open source, for anyone to build on."


I can't wait to see this!


We have a studio that creates the exact game-based learning that Ethan plans to produce with AI. Our games are extremely effective learning tools, but they require time and money. Time and money well spent: in the last two decades, our studio has provided well-paid and profoundly meaningful work to hundreds of warm, complex, living human beings.

A pain deeper than mere sentiment accompanies the disappearance of these jobs as clients inevitably turn to AI products that are cheaper, faster, and almost as good. Not just at my studio, of course, and not just in my industry, but throughout all of our species' creative and intellectual work. (Even the work of prompt-crafting.)


I'm pretty sure that your warm, complex living human beings can learn to use generative AI to leverage their expertise in the sort of ways that the author is advocating for.

Will it require effort on their part? Certainly, yes. But lamenting that their old, comfortable way of making a living has to change makes as much sense as a gold miner lamenting that the vein he struck 10 years ago is played out and he has to go back to prospecting. At least your employees have the advantage of going forward into the future, not back.


Thanks for responding, Force.

I'm not worried about skill adoption. I am worried about economics.

Real-world example: Last year, an art director would call a freelance illustrator when we needed concept art for a pitch. She would describe her concept to the artist, pay him a few hundred bucks per image, and wait a few days. This year, she describes it to Midjourney, pays $20/month, and waits seconds. (And next year, the Art Director GPT comes for her.)


Hi Dov,

I hear you. I see similar things happening in education. I can feed my lecture notes into ChatGPT and it will instantly generate a coherent summary. Then I can ask it to generate either true/false or multiple-choice questions at a level suitable for the material, both of which it does entirely adequately. Education is in for some major upheavals driven by the arrival of generalized AIs.
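To make that concrete, here is a minimal sketch of the same workflow done against the API rather than the chat interface, assuming the OpenAI Python SDK; the prompt wording and the lecture_notes placeholder are mine, purely illustrative:

```python
# A sketch of the quiz-generation workflow via the API, assuming the
# OpenAI Python SDK; the prompt and lecture_notes are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

lecture_notes = "..."  # placeholder: load your own notes here

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a teaching assistant who writes assessment items."},
        {"role": "user",
         "content": "Summarize these lecture notes, then write five "
                    "multiple-choice questions at a level suitable for "
                    "the material:\n\n" + lecture_notes},
    ],
)
print(response.choices[0].message.content)
```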

But as I suggested above (in parallel with Ethan), the teachers who will not just survive but thrive in this new environment are the ones who learn to leverage the new technology. Some aspects of their job will get easier, but helping students learn (not just regurgitate) will, I suspect, remain as challenging as ever. New technologies and new tools may mean new pathways (faster? more direct? requiring more self-discipline? easier to be led astray?), but I suspect the goal of real understanding (of pretty much any topic) will remain as elusive and as rewarding as it currently is.


I really like the term 'co-intelligence'. I've also heard AI referred to as 'augmented intelligence'. I like that too!


I've recently been trying to use GPT-4o to code, both to develop a tool I would like to create and to understand the limits and potential of the technology. I'm really surprised that there is not more discussion of the limitations. I consistently run into problems where the interaction leads to results that are surprisingly error-prone and limiting. I have to believe this must be a big obstacle for others trying to create any kind of dynamic models.


I too have experienced those limitations. I have found that staying in the same chat can lead to more problems, as the LLM doesn't see where it is causing the error. I have had some success opening a totally new chat (or GPT) and having it analyze the code; that brings a fresh perspective to the errors.
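In API terms, that trick amounts to starting a conversation whose history contains nothing but the code, so none of the earlier, error-laden context carries over. A minimal sketch, assuming the OpenAI Python SDK (the function name and prompt are mine, purely illustrative):

```python
# Fresh-context review: each call opens a brand-new "chat" whose history
# contains only the code, so earlier mistakes don't bias the model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fresh_review(code: str, model: str = "gpt-4o") -> str:
    """Ask for a code review in a new conversation with no prior history."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a careful code reviewer. Identify bugs "
                        "and explain how to fix them."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

# Usage:
# print(fresh_review(open("my_module.py").read()))
```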


I'll give that a try. I have also been switching between LLMs to see what different answers I get, and I'm discovering that on any given day the same LLM will give a different answer.
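Worth noting that some of that day-to-day variation is ordinary sampling randomness rather than the model itself changing. A minimal sketch of how to reduce it (not eliminate it; providers do update models) on OpenAI's API, with illustrative parameter values:

```python
# Lower-variance sampling: temperature=0 picks the most likely token at
# each step, and seed requests best-effort reproducibility across calls.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,
    seed=42,
    messages=[{"role": "user",
               "content": "Explain binary search in one paragraph."}],
)
print(response.choices[0].message.content)
```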


Same experience for me. The best use cases so far are documenting code, idea generation, summaries of topics, and in some cases process/guideline drafts.


Stuff like this concerns me: “Yet it is clear that they do have expertise hidden in their latent space - they outperform most doctors at diagnosing diseases, and many more in providing empathetic replies to patients, for example.” When I follow your link, it’s a single study about responding to questions on social media. Two problems. First, this (diagnosing on social media) is not what doctors do (depending on the context, it may even be dangerous and unethical). Second, to make this claim, you’d need to replicate it, and ideally experts would produce a meta-analysis of multiple studies. Why keep making claims like this based on such flimsy evidence?

(Author)

I am certainly not saying this is what doctors do in their entirety, or that they should or could be replaced with AI. And it was not my intention to imply that.

But on your specific question, there are other studies on the specifics of LLM diagnoses and empathy that use other methods: http://research.google/blog/amie-a-research-ai-system-for-diagnostic-medical-reasoning-and-conversations/


Thanks for the reply! That's still based on a simulation that does not reflect a real doctor-patient interaction (the authors state this in the study). We need stronger evidence to claim that AI is outperforming doctors, even in a narrow case. At best, these studies tell us that further study is merited.

(Author)

I agree. And I altered the text to make it clearer.


There are entire medical assistant and diagnosis product lines with embedded AI that already exist for doctors in various fields. They use AI transcription, summaries, and diagnostic analysis to speed up patient care and improve its accuracy. This is happening with generalists and specialists, and in fields such as psychology and psychiatry. To put a fine point on it: the experts who use and give feedback on AI tools today are refining the use cases of tomorrow, which may include some job replacement. That replacement is not to be decried, because as we discover the new areas in which AI is capable, human experts will once again dominate and thrive in the domains that remain theirs. The cycle of improvement will be accelerated by AI.


Interesting piece, but I'm kinda puzzled about how it coheres.

The first half of the piece argues that "we need to learn the lesson of the industrial revolution", and "the actual use cases will come from workers and managers who find opportunities to use AI to help them with their job."

The second half of the piece is all about how experts should build bespoke tools for specific use cases.

I think there's value in both points, but they're very different, parallel approaches.


Doesn't Ethan mean that the "workers and managers" are the experts who can start to experiment, and that through this process the specific tools for specific use cases get built? Or at least the ideas for them?


Just an update for anyone coding with an LLM: the new Claude 3 is much better, but it has limitations. Even with the paid Pro version you are limited to 45 queries per 5 hours. Also, there is a cap on the size of the result returned, which is really odd. I don't understand why Anthropic is putting these caps on their paying users, but it is what it is.
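For what it's worth, the cap on result size also shows up in the API as the required max_tokens parameter, so long outputs have to be requested in pieces. A minimal sketch with Anthropic's Python SDK (the model name and limit match the Claude 3 generation as I understand it, but treat them as assumptions):

```python
# The output cap corresponds to the required max_tokens argument in
# Anthropic's Messages API; replies are truncated beyond this limit.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-opus-20240229",  # assumed Claude 3 model name
    max_tokens=4096,                 # ceiling on the size of the reply
    messages=[{"role": "user",
               "content": "Write a Python helper that parses a CSV file."}],
)
print(message.content[0].text)
```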


Just wanted to say this newsletter was perfect timing. I had a conversation with 4o in which it hallucinated, made stuff up, and then apologized; if I hadn't been enough of an expert on the topic, I wouldn't have noticed.

My point: misinformation is going to skyrocket, because people are going to put too much trust in these AIs when there is no easy way to vet the info.

Which is what we already have skyrocketing… we need an AI that fact-checks itself ;)

But, as the AI itself pleads in its own defense, it's just algorithms and training; meanwhile the people who invent and train it have no accountability except the 'market' forces that exist ;)


Excellent article! Two points.

1. I work with large companies. IMO, the reason to focus first on cost savings is to generate capital to fund innovation. I recognize that too many companies don't get to step 2, but that's often the intent.

2. To your point about not always knowing how the LLMs work, I asked GPT4o to summarize your article and pull a compelling quote from it to jumpstart a LinkedIn post. The quote was:

"The most important work for your organization is probably being done by people whose job descriptions don’t even mention it." - Ethan Mollick

Not only was that not in the article, it turns up zero results on Google. And yet it captures one of the points you make, which is that companies need to foster the curious; they are the ones who will hack something together and push the boundaries of what can be done with AI, just as happened with the internet, computers, etc., back to the industrial revolution.
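One cheap guard against that failure mode, since the source text is in hand, is to mechanically check that a "quote" actually appears in it before reusing it. A minimal sketch in plain Python (the normalization rules and file name are illustrative):

```python
# Guard against hallucinated quotes: verify the quote appears in the
# source text, ignoring case, curly punctuation, and whitespace.
import re

def quote_in_text(quote: str, source: str) -> bool:
    def normalize(s: str) -> str:
        s = (s.lower()
              .replace("\u2019", "'")   # curly apostrophe
              .replace("\u201c", '"')   # curly opening quote
              .replace("\u201d", '"'))  # curly closing quote
        return re.sub(r"\s+", " ", s).strip()
    return normalize(quote) in normalize(source)

article = open("article.txt").read()  # hypothetical local copy of the post
quote = ("The most important work for your organization is probably being "
         "done by people whose job descriptions don't even mention it.")
print(quote_in_text(quote, article))  # False: the model invented the quote
```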


I believe there is also an issue around risk and uncertainty, and the impact these have on investment decisions. Taking costs out of an organization through automation is a relatively low-risk, simple process, so it is easy to say yes to funding. Innovation has many unknowns, especially for revolutionary technologies where we are still in the R&D phase, so it is easy to say no (or wait). What's interesting about the application of LLMs is that innovation can be very inexpensive, so the risk of waiting is actually quite high, something many leaders don't realize.


I'd love a readable copy of the tutor prompts. We're testing out customer-facing GPTs and I really think there's good inspiration in there!


Ethan,

We heard you speak at the Penn 50th Reunion, Class of 1974, and were so impressed. How do we reach you to engage you as a speaker for a professional group in Philadelphia?


Best thing I’ve read in a while on AI….


Love the AI/Industrial Rev analogy. Really resonates with where we are in UK local government.

We have a UK local government AI user group of over 600 reps (200-plus councils). A lot of us have spent the first 12 months getting our heads around how AI works. For us, AI is primarily MS Copilot and Copilot 365. Roll-out approaches vary, ranging from giving it to front-line teams (my vote) to a more considered roll-out to a relatively small number of staff and then building out. We've all seen the personal "first wave" productivity gains, and some of those have the potential to be truly transformational: for example, with a critical shortage of social workers, can AI free up capacity for our SWs to focus more on care and less on admin?

Ethan, I'm afraid that given the pressure on public finances, AI is directly linked with financial savings here across the pond. The personal productivity box is well and truly ticked, but true cashable savings remain very much a work in progress.

What year 1 has also told us is that this isn't some tech you just roll out. When Covid hit, many of us had a workforce that literally overnight was working from home. We then did an emergency roll-out of tech platforms like MS Teams to facilitate virtual meetings and collaboration. What we saw is that, with very limited training, staff soon picked it up and off they went. AI tech isn't like that.

Our year 2 will be about reallocating resources so they can be embedded in teams to educate them on how AI works, what the tools do, and so on, with the aim of turning them into AI Champions. They know their service area best, so they are best placed to understand how adapting the steam engine for their purposes will make a real difference.

Somewhere in here are wider themes, e.g. making us an employer of choice in a very competitive job market.

We're constantly learning (hence championing Ethan's book and articles in the user group). It remains one truly fascinating journey.


This is the second time I've read this article, and as with most things, fresh eyes tend to produce perspective.

Since I'm far from a lot of this work, much of the language and many of the models are new to me, but I may not take this one out of my inbox for a while, because of the power of understanding what you're saying about using AI now, near, and far.

It's not just that the learning curve is so steep (which it is), but that you need to keep iterating, moving through the assimilation long enough to see each level of adoption and adaptation required to reveal what the capabilities are.

I built a system of inquiry used in our leader work, and I can see on many levels how what "I" do with that inquiry model as an expert could be ported to AI. I've intuited this for years, always knowing it would work; now it's a matter of slugging through the learning curve to make the connections with my expertise that I have to make as I learn how to improve my "slugging percentage."

The "meeting people where they are" gambit is something I have stressed for three decades, and I then had to learn what that meant and all the models involved in humaning it, so again, thanks for the note and guidance.


Come for the AI insights, stay for the economic history comparisons and Meisenzahl & Mokyr (2012).
