32 Comments
Howard Aldrich

Ethan has put his finger on a huge challenge for us -- how do we get our colleagues (in higher ed, in my case) to start USING generative AI & experimenting with it? If people don't use it, they can't really understand what it can do. Yet I'm surrounded by people who seem to be waiting for others to try it first! What are they waiting for? Please lobby your colleagues, whatever your business or industry, and ask them to please try it out. As Ethan says, moving in incremental steps, you're not really going to break anything. But the cost of NOT experimenting is growing daily, as inexperienced people fall further behind.

The Digital Accountant

The best examples I can find are personal use cases, where the risk of not spotting an error is low because you know the information so well: fantasy draft picks, meal planning, etc. Start small.

Scott Meyer

Exactly why we started Chipp.ai. We need to make it easy for anyone to build and leverage their expertise.

The Digital Accountant

I am actively explaining that AI raises the floor of knowledge on a subject or topic but doesn't replace the experts, for now, which is a very similar message to BAH.

In my area of expertise, accounting, it is very easy for people to chuck some data into ChatGPT, for example, and ask it to recommend the most tax-efficient option. But that is subjective: which tax? Are you planning for now or for 10 years out? Have you taken other implications into account?

This is where the BAH beats AI with the general public at this moment in time, but that is only because the prompts aren't good enough, as people don't use it enough in their chosen topic to know how to chat with it.

The responses do give a good place to start a conversation with an expert, though.

Greg G

The thing that fascinates me about all of this research and prompting on top of ChatGPT is how dramatically it could improve the training set for further iterations on the service. There are now millions of phenomenal researchers and practitioners creating detailed, annotated examples of working with AI on everything from consulting analyses to cognitive behavioral therapy to lesson planning and individual tutoring. I expect AI to get much better at these types of reasoning soon. It'll be interesting to see whether that becomes a sustainable data moat for companies like OpenAI, or whether startups and open source developers can go through a similar progression.

Connor Clark Lindh

Very well done post on the pragmatic side of AI optimism. I especially agree that LLMs add considerable value in supporting two areas:

1. Wastework and important-but-not-urgent work - I think most professionals wish they had better note-taking, email-writing, and summarizing skills, but it takes too much time even when you have the skills. Having an LLM that you can talk to like a high-impact secretary, and which then generates functional copy, is life-changing. The same goes for assessing too many reports/documents to read, process, and categorize.

2. Mental health and coaching - one of the most impactful applications of LLMs is their ability to act as a positive career, professional, or skills coach. This is something almost anyone can benefit from, but coaches are expensive in time and money, and it's incredibly difficult to find a coach that "fits" you. LLMs won't replace the work that professional coaches already do, but they will enable the 90% of the professional world that can't afford or find a suitable coach to get advice.

It's just amazing to consider all the areas where this is having, and will have, a positive impact. There will be abuse and risks, but the net benefits will outweigh the costs. Plus, I agree completely with your leading statements - we don't have a choice. Since it's here and will stay here, let's make the best of it.

Susan Keitel

Re your comment about professionals wishing for an AI equivalent of a high-impact secretary, etc.: have you checked out Personal.AI? That's exactly what it does for you once it is trained. It can assume your persona and respond in your stead... and much else.

Connor Clark Lindh

Thanks for the recommendation. I've heard of it but not used it. There are lots of these apps around now. So far I prefer the "research partner" approach to using LLMs that Tyler Cowen talks about. I don't have a use case where I'd want an AI to mimic me; I prefer that it's different from me. This is maybe because I don't handle enough transactional email to want an automated responder. Maybe it would work in a sales or recruiting role dealing with a large volume of inbound, low-quality leads.

Jason

Isn’t “hallucinate” more of a cool-sounding marketing term used instead of the more accurate “confabulate”?

Sean

Yes. The danger is that it's not a reliable narrator, and you have to keep up your guard. The problem is how we really know it's being accurate in its confident and well-informed responses to our queries, even about our own supplied data. If, for instance, I take survey results from a project and ask it to analyze the data and produce a summary, key observations, and a critique of possible next steps suggested by the respondents, how do I know whether all the plausible responses I get from it are actually true, short of going into the data and spending hours doing the analysis the traditional, pre-ChatGPT way?
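
One partial safeguard, sketched below only as an illustration and not as anything from the post: ask the model to state its key observations as concrete, checkable numbers, then spot-check a sample of those claims directly against the raw data, which is far quicker than redoing the full analysis. The file and column names (survey.csv, department, nps_score, free_text) are hypothetical placeholders.

```python
# Minimal sketch: spot-check an LLM's numeric claims about survey data
# instead of redoing the entire analysis by hand. All names here
# ("survey.csv", "department", "nps_score", "free_text") are hypothetical.
import pandas as pd

df = pd.read_csv("survey.csv")

# Suppose the model's summary claimed:
#   "Engineering respondents averaged an NPS of about 8, and roughly a
#    quarter of all respondents mentioned onboarding as a pain point."
eng_mean = df.loc[df["department"] == "Engineering", "nps_score"].mean()
onboarding_share = (
    df["free_text"].str.contains("onboarding", case=False, na=False).mean()
)

print(f"Engineering mean NPS: {eng_mean:.2f}")           # compare to the claimed ~8
print(f"Mentioning onboarding: {onboarding_share:.0%}")  # compare to the claimed ~25%
```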

Arbituram

This is my biggest concern as well; as the mistakes tend to mimic plausible outcomes, it's very difficult to spot the errors quickly without doing the in-depth work myself...

Ruben Ugarte

Your idea of using AI as a "best available coach" is interesting. It is clear that in an ideal world everyone would have a coach, but even a "subpar" AI coach could be enough to move many people forward with their aspirations.

I'm reminded of something I think about often: an imperfect solution implemented consistently over the long term is much better than a perfect solution implemented inconsistently.

dan mantena

Regarding the Best Available Human (BAH) standard, how will we use this for fuzzy tasks that have subjective measures of success? Like "help me with this personal problem with my partner," to which Claude usually replies that I need to take care of my own mental health and leave my partner, lol.

Sensus Miner

"AI is extremely capable in ways that are not immediately clear to users"

It is worth being more specific here. One day I wanted to see how well the AI could write a story following my guidelines, and the result was poor. Also, often, the more multi-faceted and voluminous a task becomes, the less capable it gets: it starts forgetting, simplifying, losing the context, etc. It is a very, very effective tool for some tasks, yes, but not in general, not everywhere.

Sensus Miner

"AI is ubiquitous" - no, it is not. I used OpenAI about a year ago, and it was a powerful tool. Then, suddenly, something changed. It could not answer the same questions anymore, became less specific, less powerful, etc. I am convinced, no doubt, that it was adjusted to become much more limited. And I am sure some users do not have those limitations. As simple as that.

Kara Owens

Sometimes I get the feeling that the user's biases are connecting with the latent biases in the LLMs and then coming back to the user via the AI. More use would give a back-and-forth continuity... This would actually make a root-cause analysis of "how'd it all go wrong?!" a matter of tricky triangulation, and I HOPE it never comes to that.

Bruce Lambert

It's clear from a series of studies that AI is better than the average healthcare professional at empathy when raters are blinded as to the source of the empathic message. After they find out that the message came from an AI, raters often downgrade their evaluations or feel betrayed by being deceived. But depriving patients of effective empathy causes real harm. So there's an ethical dilemma: the principle of beneficence suggests we should give access to the most effective source of empathy, but the principle of respect for persons suggests we have to be honest with people about where the empathy is coming from. Some human-AI hybrid empathy is going to become normal relatively quickly, I predict.

Rachel R

Ethan,

Thank you, thank you, thank you! I am regularly posting your posts to my campus's repository of AI resources. You are quickly becoming a touchstone for those of us interested in the power of generative AI. PS: Have you thought of writing a response to John Warner's piece in Inside Higher Ed?

Best,

Rachel

Wade Chabassol

BAH! HUMBUG!

Actually, if you put that into an AI Imagemaker, you get this:

Well, okay, it won't let me paste it into this comments column (bet the kid in Uganda could do it). If you want to see it, go to Wadeeli.Substack.com and it'll be up for the Christmas column.

Yes, there will always be pushback on new ideas, as there was with cell phones, the internet, rock music, and whatnot. AI is a new and powerful tool, and while the BAH concept may promote laziness in people who don't want to take the time to THINK about a solution, it does indeed facilitate ideas not so obvious to the casual observer. (For example, searching for a girlfriend without my wife finding out...) (Tried to strike that out, but couldn't. Hey, kid!)

Howard Aldrich

Yes, that's what I tell my college professor colleagues. They SHOULD know this stuff & spot the obvious hallucinations, for example.

Susan Keitel

Thank you for your perspective and your clarity. More, please.

Steve Fox

Excellent article as always. I am loving this AI journey and all the questions it raises and opportunities it presents.
