29 Comments
Oct 22, 2023 · Liked by Ethan Mollick

Ethan has put his finger on a huge challenge for us -- how do we get our colleagues (in higher ed, in my case) to start USING generative AI & experimenting with it? If people don't use it, they can't really understand what it can do. Yet, I'm surrounded by people who seem to be waiting for others to try it first! What are they waiting for? Please lobby your colleagues, whatever your business/industry, and ask them to try it out. As Ethan says, moving in incremental steps, you're not really going to break anything. But the cost of NOT experimenting is growing daily, as the inexperienced fall further behind.

I am actively explaining that AI raises the baseline level of understanding of a subject or topic, but doesn't replace the experts for now, which is a very similar message to BAH.

In my area of expertise, accounting, it is very easy for people to chuck some data into ChatGPT, for example, and ask it to recommend the most tax-efficient option. But that is subjective: which tax? Are you planning for now or for 10 years out? Have you taken other implications into account?

This is where the BAH beats AI with the general public at this moment in time, but that is only because the prompts aren't good enough, since people don't use it enough in their chosen topic to know how to chat with it.

The responses do give a good place to start a conversation with an expert though.

Thank you so much for shedding light on this. I think one of the reasons people shy away from using AI, especially in my industry (education), is because of the stigma that still surrounds it, particularly when it comes to AI-generated content. It is often regarded as unoriginal or unethical, and people who use AI are seen as lazy. But this couldn't be further from the truth. In fact, people who embrace AI often do so to increase their productivity, allowing them to focus on higher-order tasks that require uniquely human qualities like empathy, creativity, and critical thinking. AI is a tool in our toolbox, and I think using it smartly is a testament to adaptability and resourcefulness. Technology empowers us to do more, not less. I will write an article about this in the coming weeks.

The thing that fascinates me about all of this research and prompting on top of ChatGPT is how dramatically it could improve the training set for further iterations on the service. There are now millions of phenomenal researchers and practitioners creating detailed, annotated examples of working with AI on everything from consulting analyses to cognitive behavioral therapy to lesson planning and individual tutoring. I expect AI to get much better at these types of reasoning soon. It'll be interesting to see whether that becomes a sustainable data moat for companies like OpenAI, or whether startups and open source developers can go through a similar progression.

Very well done post on the pragmatic side of AI optimism. I especially agree with how LLMs add considerable value in two areas:

1. Waste work and important-but-not-urgent work - I think most professionals wish they had better note-taking, email-writing, and summarizing skills, but it takes too much time even when you have the skills. Having an LLM that you can talk to like a high-impact secretary, and which then generates functional copy, is life changing. The same goes for handling too many reports/documents to read, process, and categorize.

2. Mental health and coaching - one of the most impactful applications of LLMs is their ability to act as a positive career, professional, or skill coach. This is something almost anyone can benefit from, but coaches are expensive in time and money, and it's incredibly difficult to find a coach that "fits" you. LLMs won't replace the work that professional coaches already do, but they will enable the 90% of the professional world that can't afford or find a suitable coach to get advice.

It's just amazing to consider all the areas where this is having, and will have, a positive impact. There will be abuse and risks, but the net benefits will outweigh the costs. Plus, I agree completely with your leading statements - we don't have a choice. Since it's here and will stay here, let's make the best of it.

Isn’t “hallucinate” more of a cool-sounding marketing term used instead of the more accurate “confabulate”?

Your idea of using AI as a "best available coach" is interesting. It is clear that in an ideal world everyone would have a coach, but even a "subpar" AI coach could be enough to move many people forward with their aspirations.

I'm reminded of something I think about often: an imperfect solution implemented consistently over the long term is much better than a perfect solution implemented inconsistently.

Sometimes I get the feeling that the user's biases are connecting with the latent biases in the LLMs and then coming back to the user via the AI. More use would give a back-and-forth continuity... This would make root cause analysis of "how'd it all go wrong?!" a matter of tricky triangulation, and I HOPE it never comes to that.

It's clear from a series of studies that AI is better than the average healthcare professional at empathy when raters are blinded as to the source of the empathic message. After they find out that the message came from an AI, raters often downgrade their evaluations, or feel betrayed by being deceived. But depriving patients of effective empathy causes real harm. So there's an ethical dilemma: the principle of beneficence suggests we should give access to the most effective source of empathy, but the principle of respect for persons suggests we have to be honest with people about where the empathy is coming from. Some human-AI hybrid empathy is going to become normal relatively quickly, I predict.

Ethan,

Thank you, thank you, thank you! I am regularly posting your posts to my campus's repository of AI resources. You are quickly becoming a touchstone for those of us interested in the power of generative AI. PS: Have you thought of writing a response to John Warner's piece in Inside Higher Ed?

Best,

Rachel

BAH! HUMBUG!

Actually, if you put that into an AI Imagemaker, you get this:

Well, okay it won't let me paste it to this comments column (Bet the kid in Uganda could do it). If you want to see it, go to Wadeeli.Substack.com and it'll be up for the Christmas column.

Yes, there will always be pushback on new ideas, as there was on cell phones, the internet, rock music, and whatnot. AI is a new and powerful tool, and while the BAH concept may promote laziness in people who don't want to take the time to THINK about a solution, it does indeed surface ideas not so obvious to the casual observer. (For example, searching for a girlfriend without my wife finding out...) (Tried to strike that out, but couldn't. Hey, kid!)

Yes, that's what I tell my college professor colleagues. They SHOULD know this stuff & spot the obvious hallucinations, for example.

Thank you for your perspective and your clarity. More, please.

Excellent article as always. I am loving this AI journey, and all the questions it raises and opportunities it presents.

Interesting perspective and points on the use of AI.

This isn't so much a comment as a follow-on idea I thought of because of this post. I know that AIs use information they have been fed to do all these things. How can we add information to them? For example, I have a resume and a job description. How could I get it to write a cover letter that touches on all the requirements of the job description?
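One minimal way to do this is simply to paste both documents into the prompt, since a chat model only "knows" what you include in the conversation (plus its training data). Below is a rough sketch, not an endorsed method from the post, assuming the OpenAI Python client and hypothetical local files named resume.txt and job_description.txt:

```python
# Illustrative sketch: give a chat model a resume and a job description,
# then ask for a cover letter that covers every listed requirement.
# Assumes the OpenAI Python client (openai >= 1.0) and an OPENAI_API_KEY
# in the environment; file names are placeholders.
from openai import OpenAI

client = OpenAI()

resume = open("resume.txt").read()
job_description = open("job_description.txt").read()

prompt = (
    "Write a one-page cover letter that addresses every requirement in the "
    "job description below, drawing only on details from the resume.\n\n"
    f"RESUME:\n{resume}\n\n"
    f"JOB DESCRIPTION:\n{job_description}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same idea works without any code: paste the resume and the job description directly into the ChatGPT window along with the instruction.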
