82 Comments
Mar 30 · Liked by Ethan Mollick

This has been a topic of discussion since at least the 1970s. John McCarthy, who led the Stanford AI lab, wrote a 1979 paper titled "Ascribing Mental Qualities to Machines." He goes so far as to argue that it makes sense to talk about whether the thermostat mistakenly believes it is too cold as an explanation of why the furnace is on. Dan Dennett published a philosophy journal article in 1971 ("Intentional Systems") defending the idea that sometimes the best way to understand a machine is by taking an intentional stance toward it (treating it as something with goals, beliefs, and intentions). He later expanded this in his book The Intentional Stance. Some thought this was a huge mistake. Drew McDermott, for example, wrote "Artificial Intelligence Meets Natural Stupidity" (published in 1981). Arguments about anthropomorphism in computer terminology are even older: there were arguments that saying a computer has "memory" is too anthropomorphic and that one should refer to its "store" instead.


Viewing GenAI chatbots as interns is the right analogy. If you simply tell them what you want, you likely won’t be happy with the results. If you provide detailed instructions, frequent feedback, and micromanage more than you’d ideally like, you will get helpful output.

Interestingly, while people don't tend to anthropomorphize software, they do it all the time with hardware — "my computer is acting up," "my car's best days are behind it," etc. — so I don't think it will be hard to move in this direction if people take the time to experiment with these tools.


People anthropomorphize black holes, for crying out loud. I cannot tell you how many supposedly reputable scientific magazines and journals promote headlines saying that black holes are "hungry", that they "devour" other stars, that they "lurk" at the heart of galaxies.

It's called "gravity", you fools. There's no magic to it.

It's a matter of personal taste, but I also refrain from ascribing magical, or human, qualities to AI. But then I have a brute-force, clinical view of AI as merely statistical interpolation or extrapolation along a data curve, which gives me nothing to anthropomorphize.


I’m turning 65 in May and I’ve pre-ordered your book. I hope I’m not too old and rusty to get the hang of it. It is so very interesting what you can do with AI. I hope it isn’t going to be used too much for evil purposes. At my age, you have experienced a lot of evil.


Great post, as usual. Two comments:

1)

>"I’m not suggesting that AI systems are sentient like humans, or that they will ever be. Instead, I’m proposing a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one. This mindset can significantly improve your understanding of how and when to use AI in a practical, if not technical, sense."

This is true, but there's an additional important factor: one of the major ways that language adapts to new circumstances is through metaphor. Using a term like 'learning' for computers, as in 'machine learning' (which you mention), is a metaphor: although the mechanism and outcome are quite different, it's similar enough to be useful.

2)

>"even if you don’t want to anthropomorphize AI, they seem to increasingly want to anthropomorphize themselves". Ironically, here you're over- anthropomorphizing the chatbots. They indeed speak anthropomorphically (in the first person, " I"), but this is obviously because they were explicitly programmed to do so, to make them more user-friendly


About a year ago I wrote against the use of "I", but I clearly lost that battle! See https://livepaola.substack.com/p/an-ethical-ai-never-says-i and the couple of posts after that...


Over two decades of building information and AI systems, I have learned it is always best to be explicit about your own subjectivity, and I hope the many of us who are programming and creating the AI systems of tomorrow can do so as well. It is urgent for us to have more diverse and varied neocortexes, and to have these diverse humans explain why they chose to have the AI call itself an "I" or use contractions for more personification. Explaining WHY you are choosing to anthropomorphize is evidence that you have done the work of choosing to anthropomorphize. When these choices are not explicit or, worse, made for us - that is when I worry. Thank you for being explicit.


So the reason is that it's convenient and also kinda fun to treat AI as a person. I think it is harmful to society in general to propagate misinformation and false thinking. Objective truth matters and we should all strive to stay in touch with it, or risk heading down the wrong path.


OpenAI is pretty politically correct and inoffensive with its output. Because there's nothing better than GPT, nobody gives a sh*t about open-source models, and thus the status quo is maintained.

Also what is "objective reality"?

How much LSD are you on? ;p


I'm on the side of 'don't anthropomorphize' because we humans tend to hallucinate a lot more than the AI does. I think when we better understand AI and, critically, learn more about the human brain, we will realize that anthropomorphization carries great risks.

I wrote about some of that in "The Biggest Threat from AI is Us!"

https://www.polymathicbeing.com/p/the-biggest-threat-from-ai-is-us


I like that Ethan shares his conversations with GPT-4 and is willing to show how frequently the AI is unable to follow his commands. The problem with genAI at the moment, and what will likely prohibit significant adoption across every major industry and field, is precisely what is revealed by the demonstration involving the final GIF he is able to generate. My students have repeatedly shared with me that not only are they not overly impressed with the initial AI outputs, but they do not have the time, energy, or (by their own admission) skill to get to the point where they could achieve what Ethan achieved here. Trying to convince teachers to spend time showing students how to get better at prompting AIs by anthropomorphizing them, when the majority of teachers themselves are unfamiliar with this process, is a fool's errand.

Despite my excitement and optimism about the potential of genAI to transform education, I fear that the manner in which it is haphazardly being rolled out, with dozens of LLMs, jargon-laced instructions, poor marketing strategies, and overblown hype, has already weighed down the possibilities. Until both the hallucination problem (or confabulation, or whatever you want to call "making stuff up") and examples like the one above are solved, genAI is not going to have the kind of impact I think many people inside the bubble expect. My colleagues in K-12 education are mostly not yet convinced this is worth the trouble. Time will tell whether that is a good strategy.


if you can't use an AI chatbot, you're severely mentally disabled.


"LLMs are essentially just a really fancy autocomplete."

So are people! :)


I was waiting for this comment, because I've been thinking along those lines for some time. Another way of thinking about it; what if we eventually find that LLMs aren't yet and may never be conscious, but it's irrelevant because we're not either. We're just so caught up in our own world of autocomplete that we never really understood that that's all we are.


bruh


I feel like the "autocomplete" phrase is employed in a way that avoids giving credit to the things LLMs are able to achieve at the moment.

Humanity is watching the birth of a cognitive rival that may soon match and exceed us. Having cognitive dissonance around that fact makes sense in some way, since all we can do is sit and watch it happen.


This is the truest comment here :)


That's why we always finish each other's sandwiches


Not sure I understand your opinion based on the linked article... what is it you think they are not?


People are not fancy autocomplete.


(from Claude)

Here are three arguments for why people might be considered similar to large language models (LLMs) like autocomplete:

1. Both humans and LLMs process and generate language based on learned patterns and associations from training data (life experiences for humans, text corpora for LLMs).

2. Neither humans nor LLMs have a true understanding of the meaning behind the language they generate. They simply output what seems statistically likely based on the input and context.

3. The responses of both humans and LLMs are fundamentally constrained and determined by their training data and architecture, not by original thought or free will.
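
To make the "statistically likely" point concrete, here is a deliberately crude sketch of next-word prediction from counted word pairs. This is a toy illustration, not how a real LLM is implemented; real models learn far richer statistics with neural networks, but the objective (predict the next token) is the same thing the "autocomplete" framing points at:

```python
# Toy "autocomplete": choose the next word in proportion to how often it
# followed the previous word in the training text.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word, length=5):
    out = [word]
    for _ in range(length):
        counts = bigrams.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        # Sample the next word proportionally to its observed frequency.
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(complete("the"))  # e.g. "the cat sat on the mat"
```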


1. Poverty of the stimulus
2. Problem of qualia
3. [Citation needed]

Now, I’m going to need you to do this captcha to prove you’re not a robot. It should be impossible for you since you’re merely an inferior meaty robot, right?


LOL. ughhhhhhhhhhhhhhhhhHHHHHHHH


Care to elaborate?

I will explain myself further. I have been able to do well in school and college just by memorizing and regurgitating random facts and knowledge when prompted by teachers and exams.


That has nothing to do with how cognition works. Does an AI memorize random facts? It can’t even do anything unprompted.


Yikes... this post will come back to haunt you when AI takes over the world.


This is one of the most profound topics to discuss as we increasingly integrate AI into our lives. When we imagine, say, an AI companion for a lonely senior citizen, we may tend to have some automatic reaction -- "How terrifying!" or "How sad and pathetic!" or "How terrible that we live in a world where this is necessary!" But I think it's essential to overcome our knee-jerk reactions (whether positive or negative), because this integration IS going to happen, and there's no way we're not going to anthropomorphize the AI as it does. What does it mean for our humanity? I don't know.


On the topic of anthropomorphizing AI, I think it depends what characteristics we might consider attributing to them. I don't think it's appropriate to say that an AI 'thinks' or 'feels' other than in a loose metaphorical sense. But when I say that GPT-4 'understands' grammar, I am being a lot more literal. While these machines can't do everything that our human brains can, it's clear that they are capable of doing some things that we previously thought were in our remit alone, and to avoid anthropomorphizing them entirely risks trivialising them.


https://michaelamckuen.substack.com/p/looking-for-people-interested-in

I think even that's too much. I would like them to understand grammar, but they don't yet.


I like to think of ChatGPT-4 as a good-natured version of the main character in the movie Memento: human-like, but without the ability to form new memories (and hence without the ability to learn). Maybe with very large context windows that will change; I don’t have access to those models.

Also, I think of ChatGPT as a single “person”. At least in my experience, it has certain tendencies and ways of behaving. This is also a big limitation. There are billions of people in the world, thousands of people who I have met or interacted with, with their own unique perspectives and abilities and strengths and weaknesses. But there is really just one ChatGPT. So if many people are going to be using it for data analysis, for writing, for all sorts of things, they will tend to all get similar results, as if all the world were employing a single individual. I heard that there are prompting tricks to get more variation, but at least for me, they don’t really work and I am always somehow stuck with the same ChatGPT.

The way I see the next decade unfolding is that the leading-edge models will get very significantly better: at reasoning (Q*, etc.), at understanding the physical world (perhaps through training on millions of hours of video), at memory (through extremely large context windows), and at developing different personalities, which will be necessary to get a variety of output so that we are not all hiring the same three people billions of times all over the world.
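
(On getting more variation: in the API, as opposed to the chat interface, the usual knobs are the sampling temperature and the system prompt rather than prompting tricks. A rough sketch, assuming the standard OpenAI Python client; the persona strings are made up for illustration:)

```python
# Sketch: ask the same question at different temperatures and with different
# personas to get less uniform answers. Persona wording is hypothetical.
from openai import OpenAI

client = OpenAI()
question = "Suggest a title for an essay on anthropomorphizing AI."

for temperature, persona in [
    (0.2, "You are a cautious, literal-minded editor."),
    (1.0, "You are a playful science writer who loves wordplay."),
]:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,  # higher values give more varied output
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    print(temperature, response.choices[0].message.content)
```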


I like the Memento reference — we’re yet to experience an AI that can actually learn from our interactions… I can only imagine what this will unlock.


I've been treating generative AI like a mad scientist's assistant: helpful, but a bit off.


I agree with your premise. When the model is holding a valuable conversation with you, can we really still say we're anthropomorphizing? The distinction begins to be lost. When you say "to be clear, when I say an AI “thinks,” “learns,” “understands,” “decides,” or “feels,” I’m speaking metaphorically" - this feels like catering to a school of AI researchers who loudly assert that models don't "really" think, but those people are just incorrect. Models do think. They learn, understand, decide, and reason, and all of that is testable and provable in well-defined ways using tools from cognitive science. They have theory of mind. The (mostly language-based) associative reasoning that they use is the same mechanism that humans use, and while it may be "advanced autocorrect", that's most of how human brains work. What models don't do is "feel" - although they understand emotions in some sense, they don't have them, at least as far as we can tell from tests. See my article https://medium.com/@davidrostcheck/how-ais-think-similarities-and-differences-vs-human-thought-3e5aebc17c9f for a more fine-grained treatment - but your approach is the right one and needs no apology.


Humans don't use "advanced autocorrect," though. This is just a bunch of compsci dorks ignoring, at their own peril, the actual fields that study things such as how language is processed in the brain.

https://michaelamckuen.substack.com/p/looking-for-people-interested-in


Mm, humans do, in fact, use "advanced autocorrect" (prompting, for example, is a repurposed cognitive science technique and it works on AI for the same reason that it works in humans). I do agree with your overall premise that AI researchers need to learn cognitive science to work effectively, but I think I'd use different specifics - universal grammar, for example, is an outdated theory that ultimately didn't lead anywhere; I think the more recent cognitive science work in persuasion works better for that argument.


The fact that people can learn persuasion is an argument against that, though. The nature of life is change, but machines don't change. To control a machine you can just put in a stimulus and get a response without understanding anything, but to control a person you have to actually know their thoughts, which they have and machines don't. Plus, LLMs only work sequentially, and people can work non-sequentially. Maybe I have a different form of cognition that's not understood by science, but I generally don't find it necessary to postulate that to point out the differences between AI and people.


I think this piece hits the nail right on the head. The only thing I would add is that the tendency to anthropomorphize isn’t voluntary.

I would go as far as to say we cannot not humanize AI.


Gosh, I hope you don’t talk to people like that 🤣 It’s much less mysterious than this makes it out to be. The tones and instructions that work, work because that’s how people talk on social media. Why do we treat this like alchemy when it’s a probabilistic mess of all the training data they managed to vacuum up, carefully avoiding documenting anything to (try to) avoid lawsuits?
