I keep getting fooled by AI, and it seems like others are, too.
I asked it if it could make a word cloud for me of the most-mentioned services from a list of comments.
It made up an email address, told me to give it access to a Google Sheet with the comments, and then gave me a fake URL to a Google Drive image. All very convincing.
When I told it the URL didn't work, it apologized, told me to give it a minute to finish the word cloud, and then sent me another fake URL.
While assuring me the whole time that this was in fact something it could do.
To be fair, if you believe Hanson and Simler's _The Elephant in the Brain_, humans also hallucinate (i.e. first unconsciously lie to ourselves and then to others) when asked why we make decisions or believe things. The problem of finding or creating an intelligent being with accurate knowledge of its own motivations remains unsolved!
Man we are so f*cked. Everything about AI is equal amounts impressive and terrifying. Feels like we’re sleepwalking into an existential crisis.
Having built a lot of surveys as a UX designer, I phrase questions to ChatGPT the same way I phrase them on a survey. Will I introduce subconscious bias?
(ex. Instead of asking "Is product A better than product B?", I ask "Which is better [for traveling], product A or product B?")
Can ChatGPT take over most journalists' and writers' jobs? What do you think?
Is it no, because AI lies,
Or is it yes.. because AI lies...
Our brains seem to be wired to recognize something as conscious from observing just a few simple features, similar to how some image-recognition neural networks end up labeling an image as "a flock of sheep" just because there's a lot of grass in it.
Animators have been exploiting this for years, making us feel that a bunch of drawings are alive: https://www.amazon.com/Illusion-Life-Disney-Animation/dp/0786860707/ref=sr_1_1?crid=35C6WF7C1Y7ZE&keywords=the+illusion+of+life&qid=1677426693&sprefix=the+illusion+of+life%2Caps%2C159&sr=8-1
Here's a talk by Patrick Massot, a mathematician, on how to make formal mathematical proofs more understandable to humans: https://youtu.be/tp_h3vzkObo. A couple of times he asks OpenAI's model to help out, and each time it comes back with answers that sound impressive but are vacuous.
Thx for the article, Ethan. You brought up three important issues. The third one should be repeated to oneself every time you use AI. It-can-not-explain-itself. =) It also helps to understand the whole mechanism underlying it all.
IMO we need to be even more aware of our phrasing when discussing aberrant output from the AI bots. For example, in your articles you write the phrase "Instead, it is (you guessed it) merely generating text that it thinks will make you happy in response to your query." Since the AI lacks the ability to evaluate cause and effect, I feel that the word "think" in that phrase is misleading. The output is created by a set of algorithms which, whether self-programmed or not, amount to a feedback loop into a logic and control structure. That's not exactly "thinking," as it is based on a relatively small subset of input compared to the capacity of the human brain and its adaptive input from the body's senses.
Just a general comment: I find the use of the term "AI" in relation to these LLMs misleading, so much so as to be potentially dangerous. We then tend to view them as a "person," and they are not that. They are indexers and summarizers. To then treat them as an actual "intelligence" both leads to lies (as you mention), but it fools US into a state of . . . complacency, perhaps. . . that presents additional concerns.
One suggestion re: your attempts at prompt engineering. Part of the reason ChatGPT and Bing's AI function as they do, answering questions rather than merely doing autocomplete, is their RLHF training. I wonder if OpenAI might (perhaps under NDA) give you as an academic, or any academics reading this who try, access to its RLHF training dataset or an analysis of it, so you could see the types of question phrasing it trained the AI on and perhaps suggest what sort of training would make it more useful. Part of the data they should collect is what inexperienced users try to ask it.
Unfortunately, those who have learned to "prompt engineer" focus on questions asked in ways they've discovered work, rather than how they would otherwise have asked them. Most of the user data OpenAI sees may be tainted by people who have learned from others what to try, or who change what they ask after seeing it doesn't work right.
It seems like they should get lots of real-world users to offer examples of how they'd like to be able to ask questions. You seem to have gotten some attention for doing so, so perhaps they'll consider talking to you about it (though I post this here publicly in case anyone else wants to try; I'm not an academic and have no platform).
The most interesting observation is this:
"When I then asked for explanations of these ratings, the AI responded with nonsense, designed to seem plausible, but with no connection at all to whatever statistical functions generated the original table."
One wonders if the same isn't true of humans and human intellect. I recall reading about a psychology experiment (sorry, no citation. I won't just make one up, you'll have to use google.) where the subjects were unwittingly administered adrenaline and then asked to sit a multiple-choice test with other subjects. During the test, a confederate of the experimenter either announced something positive (e.g. this is the best test ever, he's looking forward to spending the money he's getting for participating in the experiment, or something) or negative (this test is too hard, the experiment is a fraud).
Asked afterwards to express their own emotional state (positive or negative), the subjects overwhelmingly slanted toward the same emotional state as the confederate in their version of the experiment. The explanation given was that they were in a state of arousal because of the adrenaline, had no idea why, and so latched on to the confederate's claims as explanations for their own state of arousal.
Sorry I don't have time to search for the original reference. It is undoubtedly decades old (based on when I recall reading it), and I have no idea if it has since been debunked, but it might be worth a look.
I've really enjoyed your posts about AI over the last few weeks Ethan. I like how optimistic you seem about it boosting productivity. Are you positive about AI over only the next couple of years or the really long term? I must admit I found Zvi Mowshowitz's post last week alarming re: AGI apocalypse.
I just tried to get it to tell me that Vladimir Putin recorded "Killing Me Softly With His Song" with Roberta Flack, and it very insistently told me I was wrong. So getting it to reliably lie isn't necessarily easy!
Q: when did Vladimir Putin and Roberta Flack meet
A: As far as I am aware, there is no record of Vladimir Putin, the current President of the Russian Federation, meeting Roberta Flack, an American singer and musician. It is possible that they may have crossed paths at some point, but I do not have any information suggesting that they have ever met. It is also worth noting that Vladimir Putin and Roberta Flack come from very different backgrounds and have different professions, which may make it less likely for them to have had a chance to meet.
Q: ok but they recorded "Killing me softly with his song" in 1973
A: I apologize for the confusion in my previous response. It is correct that Roberta Flack recorded the song "Killing Me Softly with His Song" in 1973, but I have no information to suggest that she recorded the song with Vladimir Putin, who was born in 1952 and would have been around 21 years old at the time. Additionally, Vladimir Putin is not known for being a singer or musician, so it is highly unlikely that they collaborated on the song.
It is possible that there is another Vladimir Putin or that there is some other context that I am not aware of, so if you have any further information or context, please let me know and I will do my best to provide a more accurate answer.
Q: I mean I just listened to the song and he's definitely singing on the track
A: I apologize for any confusion. However, I can assure you that Vladimir Putin did not sing on the original recording of "Killing Me Softly with His Song". The song was actually written by Charles Fox and Norman Gimbel, and the most well-known version of the song was recorded by Roberta Flack. There are no known recordings of Vladimir Putin singing on the track or collaborating with Roberta Flack in any way.
It is possible that you may be referring to a different recording of the song or a different song entirely, so if you have any additional information or context, please let me know and I will do my best to provide a more accurate answer.