Ethan has put his finger on a huge challenge for us -- how do we get our colleagues (in higher ed, in my case) to start USING generative AI & experimenting with it? If people don't use it, they can't really understand what it can do. Yet, I'm surrounded by people who seem to be waiting for others to try it first! What are they waiting for? Please lobby your colleagues, whatever your business/industry, and ask them to try it out. As Ethan says, moving in incremental steps, you're not really going to break anything. But the cost of NOT experimenting grows daily, as the inexperienced fall further behind.
The best examples I can find are personal use cases, where the risk of not spotting an error is low because you know the information so personally: fantasy draft picks, meal planning, etc. Start small.
Exactly why we started Chipp.ai. We need to make it easy for anyone to build and leverage their expertise.
I am actively explaining that AI raises the floor of understanding on a subject or topic but doesn't replace the experts, for now. It's a very similar message to BAH.
In my area of expertise, accounting, it is very easy for people to chuck some data into ChatGPT, for example, and ask it to recommend the most tax-efficient option. But that is subjective: which tax? Are you planning for now or for 10 years out? Have you taken other implications into account?
This is where the BAH beats AI with the general public at this moment in time, but that is only because the prompts aren't good enough: people don't use it enough within their chosen topic to get good at chatting with it.
The responses do give a good place to start a conversation with an expert though.
Thank you so much for shedding light on this. I think one of the reasons people shy away from using AI, especially in my industry (education), is the stigma that still surrounds it, particularly when it comes to AI-generated content. It is often regarded as unoriginal or unethical, and people who use AI are seen as lazy. But this couldn't be further from the truth. In fact, people who embrace AI often do so to increase their productivity, allowing them to focus on higher-order tasks that require uniquely human qualities like empathy, creativity, and critical thinking. AI is a tool in our toolbox, and I think using it smartly is a testament to adaptability and resourcefulness. Technology empowers us to do more, not less. I will write an article about this in the coming weeks.
The thing that fascinates me about all of this research and prompting on top of ChatGPT is how dramatically it could improve the training set for further iterations on the service. There are now millions of phenomenal researchers and practitioners creating detailed, annotated examples of working with AI on everything from consulting analyses to cognitive behavioral therapy to lesson planning and individual tutoring. I expect AI to get much better at these types of reasoning soon. It'll be interesting to see whether that becomes a sustainable data moat for companies like OpenAI, or whether startups and open source developers can go through a similar progression.
Very well done post on the pragmatic side of AI optimism. I especially agree that LLMs add considerable value in supporting two areas:
1. Wastework and important-but-not-urgent work - I think most professionals wish they had better note-taking, email-writing, and summarizing skills, but these tasks take too much time even when you have the skills. Having an LLM that you can talk to like a high-impact secretary, and which then generates functional copy, is life-changing. The same goes for triaging too many reports/documents to read, process, and categorize.
2. Mental health and coaching - one of the most impactful applications of LLMs is their ability to act as a positive career, professional, or skill coach. This is something almost anyone can benefit from, but coaches are expensive in time and money, and it’s incredibly difficult to find a coach that “fits” you. LLMs won’t replace the work that professional coaches already do, but they will enable the 90% of the professional world that can’t afford or find a suitable coach to get advice.
It’s just amazing to consider all the areas where this is having, and will have, a positive impact. There will be abuse and risks, but the net benefits will outweigh the costs. Plus, I agree completely with your leading statements - we don’t have a choice. Since it’s here and will be here, let’s make the best of it.
Re your comment about professionals wishing for an AI equivalent of a high-impact secretary: have you checked out Personal.AI? That's exactly what it does for you once it is trained. It can assume your persona and respond in your stead... and much else.
Thanks for the recommendation. I’ve heard of it but not used it. There are lots of these apps around now. So far I prefer the “research partner” approach to using LLMs that Tyler Cowen talks about. I don’t have a use case where I’d want an AI to mimic me; I prefer that it’s different from me. This is maybe because I don’t handle enough transactional mail to want an automated responder. Maybe it would work in a sales or recruiting role dealing with a large volume of inbound, low-quality leads.
Isn’t “hallucinate” more of a cool-sounding marketing term used instead of the more accurate “confabulate”?
Yes. The danger is that it’s not a reliable narrator, and you have to keep up your guard. The problem is how we really know it’s being accurate in its confident and well-informed responses to our queries, even about our own supplied data. If, for instance, I take survey results from a project and ask it to analyze the data and produce a summary, key observations, and a critique of possible next steps suggested by the respondents, how do I know whether all the plausible responses I get from it are actually true, short of going into the data and spending hours doing the analysis the traditional, pre-ChatGPT way?
This is my biggest concern as well; because the mistakes tend to mimic plausible outcomes, it's very difficult to spot the errors quickly without doing the in-depth work myself...
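One pragmatic middle ground is to spot-check rather than redo: re-compute a handful of the concrete numbers the AI asserts (counts, means, group breakdowns) directly from the raw data. Below is a minimal sketch of that idea in Python with pandas; the survey.csv file, the satisfaction column, and the claimed figures are all hypothetical placeholders, not from any real project.

```python
# Minimal spot-check sketch (hypothetical file, column, and figures).
# Rather than redoing the full analysis, verify a few numbers the AI asserted.
import pandas as pd

df = pd.read_csv("survey.csv")  # placeholder: raw survey responses

# Figures copied from the AI's summary (illustrative values).
claimed = {"respondents": 412, "mean_satisfaction": 3.8}

actual = {
    "respondents": len(df),
    "mean_satisfaction": round(df["satisfaction"].mean(), 1),
}

for key, value in claimed.items():
    status = "OK" if value == actual[key] else "MISMATCH"
    print(f"{key}: claimed={value}, actual={actual[key]} -> {status}")
```

If the easy-to-verify numbers hold up, the qualitative observations earn some trust; if they don't, you know to dig deeper before relying on the summary.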
Your idea of using AI as a "best available coach" is interesting. It is clear that in an ideal world everyone would have a coach, and even a "subpar" AI coach could be enough to move many people forward with their aspirations.
I'm reminded of something I think about often: an imperfect solution implemented consistently over the long term is much better than a perfect solution implemented inconsistently.
Sometimes I get the feeling that the user's biases are connecting with the latent biases in the LLM and coming back to the user through the AI. More use would give a back-and-forth continuity... which would make root-cause analysis of "how'd it all go wrong?!" a matter of tricky triangulation, and I HOPE it never comes to that.
It’s clear from a series of studies that AI is better than the average healthcare professional at empathy when raters are blinded as to the source of the empathic message. After they find out that the message came from an AI, raters often downgrade their evaluations, or feel betrayed by the deception. But depriving patients of effective empathy causes real harm. So there’s an ethical dilemma: the principle of beneficence suggests we should give access to the most effective source of empathy, but the principle of respect for persons suggests we have to be honest with people about where the empathy is coming from. Some human-AI hybrid empathy is going to become normal relatively quickly, I predict.
Ethan,
Thank you, thank you, thank you! I am regularly posting your posts to my campus' repository of AI resources. You are quickly becoming a touchstone for those of us interested in the power of generative AI. PS: Have you thought of writing a response to John Warner's piece in Inside Higher Ed?
Best,
Rachel
BAH! HUMBUG!
Actually, if you put that into an AI Imagemaker, you get this:
Well, okay, it won't let me paste it into this comments column (bet the kid in Uganda could do it). If you want to see it, go to Wadeeli.Substack.com; it'll be up in the Christmas column.
Yes, there will always be pushback on new ideas, as there was with cell phones, the internet, rock music, and whatnot. AI is a new and powerful tool, and while the BAH concept may promote laziness in people who don't want to take the time to THINK about a solution, it does indeed surface ideas not so obvious to the casual observer. (For example, searching for a girlfriend without my wife finding out...) (Tried to strike that out, but couldn't. Hey, kid!)
Yes, that's what I tell my college professor colleagues. They SHOULD know this stuff & spot the obvious hallucinations, for example.
Thank you for your perspective and your clarity. More, please.
Excellent article, as always. I am loving this AI journey, and all the questions it raises and opportunities it presents.
Interesting perspective and points on the use of AI.
This isn't so much a comment as a follow-on idea I thought of because of this post. I know that AIs use information they have been fed to do all these things. How can we add information to them? For example, I have a resume and a job description. How could I get it to write a cover letter that touches on all the requirements of the job description?
Just paste it in. Or upload it, for the tools that read PDFs.
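For anyone who'd rather do this programmatically than through the chat window, here's a minimal sketch using the OpenAI Python client (an assumption on my part; the file names and model choice are placeholders). It works exactly like the "just paste it in" advice above: the resume and job description simply become part of the prompt you send.

```python
# Minimal sketch: feed a resume and job description to an LLM as prompt context.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# file names and model choice are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("resume.txt") as f:
    resume = f.read()
with open("job_description.txt") as f:
    job_description = f.read()

prompt = (
    "Write a one-page cover letter that addresses every requirement in the "
    "job description, using only experience found in the resume.\n\n"
    f"RESUME:\n{resume}\n\nJOB DESCRIPTION:\n{job_description}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same pattern works with any chat-style API: the model only "knows" your documents because they are included in the context you send it.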