80 Comments

This has been a topic of discussion since at least the 1970s. John McCarthy, who led the Stanford AI Lab, wrote a paper in 1979 titled “Ascribing Mental Qualities to Machines.” He goes as far as arguing that it makes sense to talk about whether a thermostat mistakenly believes it is too cold as an explanation for why the furnace is on. Dan Dennett published a philosophy journal article in 1971 (“Intentional Systems”) defending the idea that sometimes the best way to understand a machine is by taking an intentional stance, treating it as something with goals, beliefs, and intentions. He later expanded this in his book The Intentional Stance. Some thought that this was a huge mistake. Drew McDermott, for example, wrote “Artificial Intelligence Meets Natural Stupidity” (published in 1981). Arguments about anthropomorphism in computer terminology are even older: there were arguments that saying a computer has “memory” is too anthropomorphic and that one should refer to its “store” instead.

Viewing GenAI chatbots as interns is the right analogy. If you simply tell them what you want, you likely won’t be happy with the results. If you provide detailed instructions, frequent feedback, and micromanage more than you’d ideally like, you will get helpful output.
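To make the intern analogy concrete, here is a minimal sketch of the difference between a vague request and a detailed, micromanaged one, assuming the OpenAI Python client; the model name, prompts, and report placeholder are purely illustrative, not a prescription.

```python
# A minimal sketch of the "intern" workflow, assuming the OpenAI Python client
# (pip install openai). Model name and prompt text are illustrative only.
from openai import OpenAI

client = OpenAI()

# Vague request: like telling an intern "write me a summary" with no context.
vague = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this report."}],
)

# Detailed, micromanaged request: audience, length, format, and tone spelled out.
detailed = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a careful analyst writing for busy executives."},
        {"role": "user",
         "content": (
             "Summarize the report below in 5 bullet points, each under 20 words, "
             "plain language, no jargon, and end with one recommended action.\n\n"
             "<report text here>"
         )},
    ],
)

print(detailed.choices[0].message.content)
```

In practice you would then respond to the output with corrections, the same way you would give an intern feedback on a first draft.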

Interestingly, while people don’t tend to anthropomorphize software very often, they do it all the time with hardware: “my computer is acting up,” “my car’s best days are behind it,” etc. So I don’t think it will be hard to move in this direction if people take the time to experiment with these tools.

I’m turning 65 in May and I’ve pre-ordered your book. I hope I’m not too old and rusty to get the hang of it. It is so very interesting what you can do with AI. I hope it isn’t going to be used too much for evil purposes. At my age, you have experienced a lot of evil.

Great post, as usual. Two comments:

1)

>"I’m not suggesting that AI systems are sentient like humans, or that they will ever be. Instead, I’m proposing a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one. This mindset can significantly improve your understanding of how and when to use AI in a practical, if not technical, sense."

This is true, but there's an additional important factor: one of the major ways that language adapts to new circumstances is through metaphor. Using a term like 'learning' for computers, as in 'machine learning' (which you mention), is a metaphor: although the mechanism and outcome are quite different, the process is similar enough for the term to be useful.

2)

>"even if you don’t want to anthropomorphize AI, they seem to increasingly want to anthropomorphize themselves". Ironically, here you're over- anthropomorphizing the chatbots. They indeed speak anthropomorphically (in the first person, " I"), but this is obviously because they were explicitly programmed to do so, to make them more user-friendly

Over two decades of building information and AI systems, I have learned that it is always best to be explicit about your own subjectivity, and I hope the many of us who are programming and creating the AI systems of tomorrow can do so as well. It is urgent for us to have more diverse and varied neocortexes, and to have these diverse humans explain why they chose to have the AI call itself an "I" or use contractions for more personification. Explaining WHY you are choosing to anthropomorphize is itself evidence that you have done the work of making that choice. When these choices are not explicit or, worse, are made for us, that is when I worry. Thank you for being explicit.

So the reason is that it's convenient and also kinda fun to treat AI as a person. I think it is harmful to society in general to propagate misinformation and false thinking. Objective truth matters and we should all strive to stay in touch with it, or risk heading down the wrong path.

I'm on the side of 'don't anthropomorphize' because we humans tend to hallucinate a lot more than the AI does. I think, when we better understand AI and, critically, learn more about the human brain, we will realize that anthropomorphization carries great risks.

I wrote about some of that in "The Biggest Threat from AI is Us!"

https://www.polymathicbeing.com/p/the-biggest-threat-from-ai-is-us

I like that Ethan shares his conversations with GPT-4 and is willing to show how frequently the AI is unable to follow his commands. The problem with genAI at the moment, and what will likely prohibit significant adoption across every major industry and field, is precisely what is revealed by the demonstration involving the final GIF he is able to generate. My students have repeatedly shared with me that not only are they not overly impressed with the initial AI outputs, but they do not have the time, energy, or (by their own admission) skill to get to the point where they could achieve what Ethan achieved here.

Trying to convince teachers to spend time showing students how to get better at prompting AIs by anthropomorphizing them, when the majority of teachers themselves are unfamiliar with this process, is a fool's errand. Despite my excitement and optimism about the potential of genAI to transform education, I fear that the haphazard manner in which it is being rolled out, with dozens of LLMs, jargon-laced instructions, poor marketing strategies, and overblown hype, has already weighed down the possibilities.

Until both the hallucination problem (or confabulation, or whatever you want to call "making stuff up") and examples like the one above are solved, genAI is not going to have the kind of impact that many people inside the bubble think it will. My colleagues in K-12 education are mostly not yet convinced this is worth the trouble. Time will tell whether that is a good strategy.

"LLMs are essentially just a really fancy autocomplete."

So are people! :)
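To ground the "fancy autocomplete" description, here is a minimal sketch of what a language model actually does at inference time: predict the next token, append it, and repeat. It assumes the Hugging Face transformers library and the small GPT-2 model purely as an illustration; production chatbots add sampling strategies, instruction tuning, and far larger models on top of this same loop.

```python
# A minimal sketch of "fancy autocomplete": repeatedly predict the next token.
# Assumes Hugging Face transformers and the small GPT-2 model, for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Treating an AI like a person is"
ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                       # generate 20 more tokens
        logits = model(ids).logits            # scores for every possible next token
        next_id = logits[0, -1].argmax()      # greedy pick: the single most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```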

This is one of the most profound topics to discuss as we increasingly integrate AI into our lives. When we imagine, say, an AI companion for a lonely senior citizen, we may tend to have some automatic reaction: "How terrifying!" or "How sad and pathetic!" or "How terrible that we live in a world where this is necessary!" But I think it's essential to overcome our knee-jerk reactions (whether positive or negative), because this integration IS going to happen, and there's no way we're not going to anthropomorphize the AI as it happens. What does it mean for our humanity? I don't know.

On the topic of anthropomorphizing AI, I think it depends on what characteristics we might consider attributing to them. I don't think it's appropriate to say that an AI 'thinks' or 'feels' other than in a loose metaphorical sense. But when I say that GPT-4 'understands' grammar, I am being a lot more literal. While these machines can't do everything that our human brains can, it's clear that they are capable of doing some things that we previously thought were in our remit alone, and to avoid anthropomorphizing them entirely risks trivialising them.

I like to think of ChatGPT-4 as a good-natured version of the main character in the movie Memento: human(-like), but without the ability to form new memories (and hence without the ability to learn). Maybe with very large context windows that will change; I don’t have access to those models.

Also, I think of ChatGPT as a single “person”. At least in my experience, it has certain tendencies and ways of behaving. This is also a big limitation. There are billions of people in the world, thousands of people who I have met or interacted with, with their own unique perspectives and abilities and strengths and weaknesses. But there is really just one ChatGPT. So if many people are going to be using it for data analysis, for writing, for all sorts of things, they will tend to all get similar results, as if all the world were employing a single individual. I heard that there are prompting tricks to get more variation, but at least for me, they don’t really work and I am always somehow stuck with the same ChatGPT.

The way I see the next decade unfolding is that the leading-edge models will get very significantly better: at reasoning (Q*, etc.), at understanding the physical world (perhaps through training on millions of hours of video), at memory (through extremely large context windows), and also at developing different personalities, which will be necessary to get a variety of output so that we are not all hiring the same three people billions of times all over the world.

I've been treating generative AI like a mad scientist's assistant: helpful, but a bit off.

I agree with your premise. When the model is holding a valuable conversation with you, can we really still say we're anthropomorphizing? The distinction begins to be lost. When you say "to be clear, when I say an AI “thinks,” “learns,” “understands,” “decides,” or “feels,” I’m speaking metaphorically," this feels like catering to a school of AI researchers who loudly assert that models don't "really" think, but those people are just incorrect. Models do think. They learn, understand, decide, and reason, and all of that is testable and provable in well-defined ways using tools from cognitive science. They have theory of mind. The (mostly language-based) associative reasoning that they use is the same mechanism that humans use, and while it may be "advanced autocorrect," that's most of how human brains work. What models don't do is "feel": although they understand emotions in some sense, they don't have them, at least as far as we can tell from tests. See my article https://medium.com/@davidrostcheck/how-ais-think-similarities-and-differences-vs-human-thought-3e5aebc17c9f for a more fine-grained treatment. But your approach is the right one and needs no apology.

I think this piece hits the nail right on the head. The only thing I would add is that the tendency to anthropomorphize isn’t voluntary.

I would go as far as to say we cannot not humanize AI.

Gosh, I hope you don’t talk to people like that 🤣 It’s much less mysterious than this makes it out to be. The tones and instructions that work do so because that’s how people talk on social media. Why do we treat this like alchemy when it’s a probabilistic mess of all the training data they managed to vacuum up, carefully avoiding documenting anything in order to (try to) avoid lawsuits?
