Discussion about this post

Ken Kahn:

This has been a topic of discussion since at least the 1970s. John McCarthy, who led the Stanford AI lab, wrote a 1979 paper titled "Ascribing Mental Qualities to Machines." He goes so far as to argue that it makes sense to ask whether the thermostat mistakenly believes it is too cold as an explanation for why the furnace is on. Dan Dennett published a philosophy journal article in 1971 ("Intentional Systems") defending the idea that sometimes the best way to understand a machine is by taking an intentional stance toward it, treating it as something with goals, beliefs, and intentions. He later expanded this in his book The Intentional Stance. Some thought this was a huge mistake; Drew McDermott, for example, wrote "Artificial Intelligence Meets Natural Stupidity" (published in 1981). Arguments about anthropomorphism in computer terminology are even older: there were arguments that saying a computer has memory is too anthropomorphic and that one should refer to its "store" instead.

D R:

Viewing GenAI chatbots as interns is the right analogy. If you simply tell them what you want, you likely won't be happy with the results. If you provide detailed instructions and frequent feedback, and micromanage more than you'd ideally like, you will get helpful output.

Interestingly, while people don't often anthropomorphize software, they do it all the time with hardware: "my computer is acting up," "my car's best days are behind it," and so on. So I don't think it will be hard to move in this direction if people take the time to experiment with these tools.
