The observation about people treating AI like Google resonates with me deeply.

It seems like the real challenge isn’t just teaching prompt engineering but reframing how people think about AI entirely. What if AI education focused less on technical skills and more on developing an experimental mindset—treating AI as a partner to learn from through trial and error?

That could bridge the gap between what these models can do and how they’re actually used.

I like this, but how would you prevent people from using AI as a crutch by offloading demanding tasks to it via our System 1 thinking heuristics? Also, automation bias really makes it hard to keep AI as just a partner instead of treating it like an oracle, imo, even after using AI for over 10 hours like Ethan suggests.

Just ask it harder questions; then it gets shoddy :)

In my experience, it is getting harder and harder to ask questions that stump gen AI models.

You're right, but they can help with that too, even when they make things up every now and again. I think they do that more when they don't know. Also, I often work with comparatively low-benchmark models rather than state-of-the-art ones.

Fully agree. Instead of keywords, complete sentences should be used, and using voice input to enter them made it easier for me:

https://news.aidful.net/i/151718262/voice-powered-chatbot-search-a-game-changer-for-information-discovery

Very important point:

“people treat AI like Google, asking it factual questions. But AI is not Google, and does not provide consistent, or even reliable, answers.”

Relevant to critiques I often hear from people.

Thanks, Ethan. That was brilliant.

In February this year, I wrote a piece called "Minimum Viable Prompt: Your Cure for AI Overwhelm."

It had a lot of similar ideas and was all about having the right mindset of "just getting started."

Love to see a more fleshed out version of the concept from the great Ethan Mollick himself.

Very good piece! I wanted to push back on some aspects of AI adoption in knowledge work that I have not seen covered much in your blog posts.

I appreciate you motivating people to use AI through the 10-hours-of-use principle, but frankly, there are people in companies who are content with their workflows and are not interested in keeping up with the latest and greatest AI products, or in being forced by corporate to use them in their work. Not speaking for myself, but for some of my co-workers!

From a knowledge worker's perspective in the corporate world, having capitalism and company leadership shove AI products down our throats and force us to use them in our workflows is going to create some resistance/stress/anti-productivity (as it should).

Here are some nice articles pushing back on the idea that AI is going to make us more productive or happier in our jobs.

Will AI make work burnout worse? [BBC] https://www.bbc.com/news/articles/c93pz1dz2kxo

The Ethical Implications of Artificial Intelligence (AI) For Meaningful Work [Springer] https://link.springer.com/article/10.1007/s10551-023-05339-7

"Historically technological advancements have, since at least the first industrial revolution, significantly changed opportunities for meaningful work by altering what workers do, the nature of their skills, and their feelings of alienation from or integration with the production process."

I think this misses the expertise of our most important teachers - our mothers. I'd like to see an experiment where kindergarten through sixth grade teachers taught AI. I've read AI is biased toward those who recorded history and the suggested approach repeats that problem. Mothers have long experienced teaching their children repetitively and with infinite patience. AI needs a woman's touch.

Love the concept of Algorithmic Aversion! I’m going to embed an emotional aspect and call it algorithmophobia, analogous to homophobia.

Ethan, not sure if you'll read this comment because you've got a lot on your plate, but if you do, here's my ask: Would you write a post that addresses the energy issues of using LLMs as an individual?

Many people in my network are vehemently opposed to using AI for environmental reasons (energy & water usage, mostly) and I don't have a good sense of the impact of *my personal use* of AI on these fronts. I use Claude for all sorts of tasks and find it genuinely useful in many contexts. However, I don't know how concerned I should be about the environmental impact of my choice to use an LLM for daily tasks.

I trust your expertise and would appreciate a post from you on this topic! Thank you!

From Kevin Bryant: “Frontier LLM inference is ~0.005 kWh at most. 100 q/day is ~energy of a single lightbulb on for a few hours.” Doesn’t mean it doesn’t matter at the level of societies, but individual use is not huge.
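
As a rough sanity check on those numbers (the 60 W bulb wattage is my own assumption, not Kevin's):

```python
# Back-of-the-envelope check, assuming ~0.005 kWh per frontier-model
# query (the upper bound quoted above) and a 60 W incandescent bulb.
ENERGY_PER_QUERY_KWH = 0.005
QUERIES_PER_DAY = 100
BULB_WATTS = 60

daily_kwh = ENERGY_PER_QUERY_KWH * QUERIES_PER_DAY   # 0.5 kWh/day
bulb_hours = daily_kwh / (BULB_WATTS / 1000)         # ~8.3 hours

print(f"{QUERIES_PER_DAY} queries/day ~= {daily_kwh} kWh, "
      f"a {BULB_WATTS} W bulb on for {bulb_hours:.1f} hours")
```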

It would be useful to include a comparison to Google searches, since AI and Google both use data center energy.
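
For a rough sense of scale, the number most often cited for a Google search is Google's own old estimate of ~0.3 Wh per query; both figures are dated and contested, so treat this as illustrative only:

```python
# Illustrative comparison using widely cited but dated figures:
# ~0.3 Wh per Google search (Google's own old estimate) vs. the
# ~5 Wh (0.005 kWh) per-LLM-query upper bound quoted above.
GOOGLE_SEARCH_WH = 0.3
LLM_QUERY_WH = 5.0

ratio = LLM_QUERY_WH / GOOGLE_SEARCH_WH
print(f"One LLM query ~= {ratio:.0f}x one Google search")  # ~17x
```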

When I first saw the potential of AI:

I had a complex data set on how badly my boat rolled, created by a smartphone app, and absolutely no idea how to analyse it, or even what it meant or what I would be looking for.

So I just asked “analyse this” - and it did!

Daniel Kahneman showed this in the 1960s and 1970s. He asked nine expert radiologists to create an algorithm to diagnose gastric pathology and then asked both laypeople and the radiologists who created the algorithm to review a set of gastric X-rays. The laypeople outperformed the radiologists. This was a clear example of physician bias.

I wanted to share an anecdote about probing it on your own area of expertise to understand its strengths and weaknesses.

In March 2023, when I was first using these models in earnest, I had a conversation with GPT-3.5 about my research field from grad school (extragalactic astronomy). I found that it initially regurgitated a lot of what one would consider the consensus of the field, but as I drilled in on certain ideas, it was able to see the holes in some of those consensus ideas and explained them well. Shortly after, I subscribed to ChatGPT Plus, which I used for a year before switching to Poe so I could try all the models and take advantage of each of their unique strengths. For example, I spent hours trying to figure out a Google API issue with Claude and GPT-4o, but Gemini Pro got it instantly.

I feel like my approach to AI has now been validated. Of course, I learned a lot through reading you, so that is partially how I developed my approach.😀 When training people on AI, I try to push my clients away from actual “how do I prompt?” to “how do I think about the prompt?” and even “should I even use AI for this?” I find that critical and design thinking make a huge difference in just what someone might ask for, as opposed to “Googling the ChatGPT.” I often use the analogy to the movie “50 First Dates,” where every day is a new day with no memory of the one before.

Using the AI as a partner to practice stakeholder conversations, learner personas, and interactive focus groups really resonates with my corporate clients. It’s funny how people are amazed by this, yet it’s about changing your approach to AI. It’s not just a tool, but a transformation.

Michele, I keep a running dialogue on a particular issue, and ChatGPT and Claude will build on my ideas over time. For example, they will reference an earlier purpose statement (could be from days ago) and say this will help you accomplish xyz. You and @EthanMollick refer to this forgetting, but would you agree that going back to one running dialogue over time allows the LLM to focus?
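
If it helps to see why that works: chat models are stateless, and the app resends the whole transcript every turn, so an earlier purpose statement stays "in focus" only because it keeps being re-included. A minimal sketch of what the client does under the hood, using the OpenAI Python library (the model name and messages are illustrative, not a claim about how ChatGPT itself is wired):

```python
# Why a running dialogue "remembers": the model is stateless, so the
# client resends the entire transcript on every turn. Earlier messages
# (e.g. a purpose statement from days ago) stay in focus only because
# they are re-sent inside the context window each time.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user",
            "content": "Purpose statement: help me redesign onboarding."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o",       # illustrative model name
        messages=history,     # full running dialogue, not just the question
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```

The flip side is that once the transcript outgrows the context window, the oldest turns get truncated or summarized away, which is the "50 First Dates" forgetting Michele describes.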

Great advice! We at BBVA keep telling people to complete the "10-hour phase," and we are trying to help them get there. I thought you might be interested in having a look at the case study OpenAI published about us: https://openai.com/index/bbva/

It's encouraging to know that the best way to make use of AI is to start making use of it!

Can you explain what 'GPT alone' means in the first experiment mentioned? I don't understand this, since GPT cannot just walk up and diagnose a patient. Does it actually mean GPT as used by the authors?

I also wondered about this, so I read the cited paper. The methodology for creating the 'GPT alone' results was given as follows:

"In a secondary analysis, we included a comparison arm using the LLM alone to answer the cases. Using established principles of prompt design, we iteratively developed a 0-shot prompt; the same language was used along with the clinical vignette questions for each case.27 The researcher physician inputting prompts to the model did not alter model responses. eTable 4 in Supplement 2 gives an example prompt. These prompts were run 3 times in separate sessions, and the results from each run were included for blinded grading alongside the human outputs before unblinding or data analysis."

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2825395#:~:text=In%20a%20secondary,or%20data%20analysis.
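
In other words, roughly this protocol (the prompt text, the `query_llm` helper, and the case strings below are placeholders of mine, not the study's actual materials):

```python
# Hedged sketch of the "LLM alone" arm described above: one fixed
# 0-shot prompt, each case run 3 times in independent sessions (no
# shared chat history), and all outputs collected for blinded grading.
ZERO_SHOT_PROMPT = "You are a physician. Work through this clinical vignette..."

def query_llm(prompt: str) -> str:
    """Stand-in for one fresh, single-turn call to the model."""
    raise NotImplementedError

def run_llm_alone_arm(cases: list[str], runs: int = 3) -> list[dict]:
    results = []
    for case in cases:
        for run in range(1, runs + 1):
            # Each run is a separate session: nothing carries over.
            answer = query_llm(f"{ZERO_SHOT_PROMPT}\n\n{case}")
            results.append({"case": case, "run": run, "answer": answer})
    return results  # graded blind alongside the clinicians' answers
```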

Hmm, a bit too jargony for me... they basically just gave the LLM the exact prompt given to the doctors, repeated 3 times? Is that it?

Essentially. The best analogy is giving three students a single written exam question instead of having a clinically realistic conversation.

As a health professional and academic using AI in teaching & research, I can say that you've hit the nail on the head with this post, Ethan. Great content, great opinion - well done.
