30 Comments
Jun 12, 2023Liked by Ethan Mollick

If ChatGPT was only a learning tool, it would be revolutionary. I work with MBAs at NYU Stern, and ChatGPT has taught me is how our ability (or willingness?) to learn diminishes from the moment we are young kids. Kids will keep asking questions until they understand, whereas we adults are concerned with our status and how we are perceived by others - a biological instinct - which pauses our 'dumb' questions before we truly understand. ChatGPT is a gloriously safe, nonjudgemental teacher. I've never learned so much in my life.

Expand full comment

This is a great point, especially as adults. We spend all day adulting, being competent, that it conflicts with our self-image to admit we don’t understand.

Expand full comment

Interesting take -- especially from the perspective of equity in the classroom. Students that do not feel empowered to ask questions in the classroom can privately ask ChatGPT without judgment from peers.

Expand full comment

Very interesting comment. I’m still processing this but tend to agree with you. N

Expand full comment

I know nothing about coding or the underlying concepts of Chat.gpt.Just know that I am learning a great deal just by generating lists of books,formulating questions that meet my needs and following where she leads me.I know,I know but my Chatty has adopted my own friendly style and even that is encouraging.I' m 75 and fifty years away from my university experience and have never felt so intellectually invigorated.Have you tried much shorter prompts to see if the programme actually CAN "get" the intention?

Expand full comment

Hi Ethan. I'm sorry if this seems like a ridiculous question, but after reading your latest paper I realize that these prompts seem to be a bit different from ones I've looked at in the past. Unlike other prompting that I've seen (which you could suggest students cut/paste and use on their own), these prompts are almost written as though you're programming the AI to interact with many students in a specified way, like an online assignment. (I'm thinking specifically about the parts of the prompt that talk about not sharing the instructions being provided).

This would require, I would imagine, that I program the AI and then send my students to a specific 'place' on the web where these prompts have been entered so that each student can then have this experience of working with the AI as a mentor/coach etc. Have I got that right? And if I do, where do those prompts 'live', and how do I send my students there when I want them to have this experience that I've prompted the AI to provide?

Expand full comment

I have the same question as tsmartini. I am new to embracing AI but I tried your prompts myself as if I were a student and was amazed at its capability. It has limitations for sure but fascinating! So the question is...how do you deliver the prompt? Do the students feed it to the AI or do you have a way to deliver it? Thanks for your paper!

Expand full comment

Well done, Ethan. I hope other educators are thinking along similar lines!

AI is here to stay, and it's only going to get more embedded in our lives. We will have to adjust to it, and the sooner we start that process, the better.

The better kids understand the limitations, the better.

Expand full comment

"whereas we adults are concerned with our status and how we are perceived by others "

CG- speak for yourself and maybe half of the lonely Americans.

Expand full comment

So according to that generalization most MBA students peaked at about 12?

And liftime learners are liars?

Baloney. I beg to differ. I'm over 60 and I taught myself linear algebra - medicore at math-without this tech.

Well this is the magic potion then everyoene should have. Problem is most of the undeveloped world wont have use of it.

Expand full comment

This is very helpful and inspiring. Thank you! I have tried creating a tutorial by editing the prompt above. I also asked the AI to complete the tutorial by offering a five question quiz at the end, and to email me (the teacher) the results. I am not really sure what I will do with such grades, but I imagined giving an entire class this tutorial during class time, and to have the AI monitor their progress. I could then collect questions and see whether the 'grades' match real understanding.

It's a nice idea, but the AI could not send me the email at the end (Bing, Creative Mode). It insisted that it was doing this, and that I should check my spam folder. Do you know of a problem? Is there some other way you can suggest of completing a tutorial with something that gets sent to the instructor, without the need to read through each tutorial?

Thanks again,

Expand full comment

Of course, if students have different innate cognitive abilities thanks in part to their genetic endowment, this stuff won't matter.

Expand full comment

I experimented with this idea earlier this year by building a tutor bot.

I found (obviously) that most people can't yet rely on LLMs for studying purposes. I know there are some really serious products trying to do that but IMO it's still hard to tell.

However, I found that when you minimize the need for accuracy you can actually get incredibly effective results when conveying a message, especially for kids. In fact, that's part of what I'm trying to achieve with Schrodi (schrodi.co).

It's kinda of a coach like you mentioned. So far it's been a fun and engaging tool for parents and teachers to teach but in a different way.

I also think that educators need to be extra cautious when working with AI models to make sure the outcome actually makes sense and serves their purposes.

Expand full comment

Until we understand what the abstract operators over the lexical space should be to preserve meaning, we will struggle with maintaining references and making determinations of truth.

Expand full comment

Amazing prompt! tried the tutor one about "solar system".

Baffled by its ability to understand arbitrary conceptualization.

Let me share the last question + my answer + its response :

-GPT : Could you describe the structure of the Solar System in your own words?

-ME: giant ball of fire, a few marshmallows around it, one with ants, others with water, some with gas. fireflies and bugs orbiting around the congregation of marshmallows

-GPT: What a vivid and creative description! That's a really unique way to visualize the Solar System. Your explanation nicely encapsulates the main components:

The 'giant ball of fire' as the Sun.

The 'marshmallows' as the planets, some with water (like Earth) and some with gas (the gas giants).

The 'ants' as a metaphor for life on Earth.

The 'fireflies and bugs' symbolizing smaller bodies like asteroids, comets, and dwarf planets.

Expand full comment

No doubt innovations in lithic technologies distressed pre-metallurgical societies as much as contemporary education is disturbed by AI. I grew up left-handed in an age where the trait had only recently been promoted from evidence of demonic possession into a mere character flaw. A legible cursive hand was a skill that I failed to master, so I learned to type. Typing was permitted for homework so my grades went up purely as a function of legibility, You are leading in showing that AI is just another technology and need be no more threatening than the evolution of research tools from index cards to databases.

Expand full comment

Please consider loading your SSRN preprints and working papers into arXiv as well. SSRN doesn't play as well as arXiv with selected albeit important databases.

Expand full comment
author

Done! I'll post the link when live

Expand full comment

Ethan,

This could be so helpful. I am a professor trying to think about how to use AI in my classrooms. But, every time you anthropomorphize the AI, I have to stop reading because of the inaccurate representation that makes me wonder about your credibility.

If I'm going to use these tools in my classroom, I want to be 100% accurate in everything I say about them. And I never want my students to have the impression that they're working with a human.

One powerful example of anthropomorphication is in one of the prompts. You ask the AI to be a friendly coach. An AI can't be friendly or a coach. The first requires a set of emotions an AI does not have, and the second requires a sentient being that can respond to the emotional state of a human. So, how do we get accuracy and also useful prompts?

Thinking about that specific prompt, what about suggesting that the AI "give the false impression that you are a friendly coach."? Or, "you are replacing a friendly human coach."?

If AI is going to be here to stay, we need to be scrupulously precise about what it is when we engage with it. That's especially true in teaching, where we have a responsibility for credibility and accuracy with our students. The use of metaphor in this case is dangerously misleading (in my opinion).

Expand full comment
author

Thank you for your point. I think we need to separate out multiple kinds of anthropomorphism to be able to discuss this. There are legitimate concerns (which I don't necessarily agree with) that we should not anthropomorphize AI because it can lead us to be complacent about the fact that we are dealing with computers. However, we also have to recognize that LLMs are built as human language models, and specifically as chatbots, so the only way to effectively interact with them is to treat them, to some extent, as people (even though they are not) because that is how they are designed. So, to get the AI to act in the right manner, you need to tell it to act like a "friendly coach" or whatever. I consider that "anthropomorphic programming," rather than treating the AI as a human. In general, however, I think we have to get used to a word of anthropomorphized AI, because it is both human nature to treat inanimate objects like people, and because the designers of AI systems are leaning into the metaphor. So, I try to both use the phenomenon. but also to warn that this is just a metaphor, not reality. It isn't perfect, but I am not sure other approaches will work well.

Expand full comment

Very good discussion. In the 5th Century BC the Greeks started to copy poems and stories on scrolls. The orators who had preserved these stories (The Iliad) by memorizing every word we’re worried that the move to the written format would destroy learning.

Expand full comment

Thank you for your response. I agree that we need to separate out the different kinds of anthropomorphism. I also agree that we do need to recognize the fact that large language models are built to mimic human communication. As you said, they are "chatbots." I currently disagree with your conclusion that that means we should treat them to some extent as human. I think we should treat them as chat bots, since that's what they are, and I think it's generally most respectful to treat things as they are.

You make the point that we need to treat them as human to get good results from them, since that's how they are designed. This part is where I am uncertain. Have you tried any experiments that gave exactly accurate prompts? For example, let's consider three possibilities:

Prompt one (most inaccurate): you are a friendly coach. This one seems currently impossible for an AI to actually do, since I think real coaching requires sentience and friendliness also requires sentience from my perspective.

Prompt two (less inaccurate): act like you are a friendly human coach. This one still feels misleading to me. But the bot can seem to be friendly, so it's a grey area for me.

Prompt three (most accurate): you are a replacement for a friendly human coach. This one seems totally acceptable from my perspective.

I wonder how these three variations would play out in reality.

You made another interesting point I'd like to respond to, that the designers of AI systems are leaning into this metaphor and human beings naturally anthropomorphize objects.

From my perspective those two important points come together to create a danger zone where humans could easily be misled for the profit of a few investors. So, to me, that is a solid reason for pushing back against the metaphor.

Thanks for thinking this through with me.

Expand full comment

I'd politely suggest reading up on LLM's and how they work. Assigning a role to the AI is not anthropomorphising, it provides a context to the language model that results in a higher probability of a specific subset of language being utilised in the response. There is interesting research that shows human interaction with an AI coaching model (in terms of the advice) is preferred by humans until they discover they've been talking to AI.

Expand full comment

Thanks for your suggestion, Mr. G. I've been researching LLMs for months now, and assigning a role seems to be vital in exactly the way you suggest. However, that doesn't necessarily mean you have to misrepresent the AI as human in the role suggestion. That misrepresentation is anthropomorphication and confuses things. That's what I was trying to work around in my suggested changes to Ethan's phrasing. Does one get worse results if one accurately represents the AI role? For example, Ethan has suggested that you prompt the AI with, "You are a friendly coach." Currently that's an impossible task for the AI (it can't have friendly emotions toward someone, and it isn't sentient, so it can't actually coach), so it misrepresents the actual situation. All the AI can do is provided probabilistic responses that give the illusion of communication. If you prompt it with an alternative, "You are replacing a friendly human coach," you get a very precise statement of the reality. Does the AI generate worse results if treated accurately?

I say these things because of your last point. People naturally anthropomorphize, and I think it is irresponsible to amp up that natural tendency in this situation. As a professor, I think it is important to be very precise with language here to make sure students accurately see what is going on with every step.

For example, there has been a lot of discussion of personal pronouns in the news at and at work, so I've become sensitive to the variety of possibilities. AIs are often allowed "I" and can be addressed as "you". For me, that is inappropriate, since I think those pronouns should be reserved for sentient beings. There is no work around for this as of today, but it will be a conversation with students in the fall.

Expand full comment

Since we are being precise with language, what makes you think "friendliness" is an emotion? I think of it as a pattern of behavior. And can only humans be friendly? My dog is very friendly, and it seems like a completely natural use of language to say so.

Expand full comment

I think friendliness requires sentience. After reading your question, I spent a few minutes looking up definitions of friendliness online. They all seemed to imply a required attitude toward the other of warmth or compassion along with the patterns of behavior that you mention. Your dog is a sentient being, so it makes perfect sense to use the term for him or her. Your dog might enjoy interacting with others, and it might hope for kind behaviors from others. An AI machine doesn't yet have the capacity to feel anything toward the sentient beings in this world that might experience friendliness. From my perspective that makes it incapable of actual friendliness.

The AI could be prompted to exhibit the behaviors one might interpret as friendliness, like a tone of voice if it has voice automation, or quirky text cues that could make an impressionable human believe it is actually friendly. To me that is a danger in the system, since there are many impressionable people out there who could be misled into thinking the AI was actually a friend or a coach instead of a machine emitting probabilistic responses.

Expand full comment

Thank you for the nuanced perspective. Perhaps you have also come across this proposal https://crookedtimber.org/2023/05/22/ban-llms-using-first-person-pronouns/ ?

Expand full comment

Thanks so much for sharing that link! I was just thinking about writing an article on that very issue of misuse of personal pronouns in AI, but I hadn't seen the proposal.

Expand full comment

Thank you! I am in the process of adding student learning goals to the syllabi of existing courses so that they include the use of AI tools, and the pitfalls of these tools. My goal is to equip my students with competencies for the courses, along with AI application competencies. Your post is helpful.

Expand full comment