11 Comments

Interesting and clear, as always.

One error I'd like to mention: You wrote that the joke was not good. The joke was, in my view, insanely funny. I'd write LOL, except that it might imply I've stopped--I'm still laughing. It's one of the finest examples of self-parody I've ever encountered. And he delivered it deadpan!

Good take. I tried to find the humour. My students would have blanked, too (and not for the first time).

OTOH, my stomach hurt from laughing all the way through Cunk on Earth (Netflix).

My wife made me watch most of it alone.

YMMV

Never heard of Cunk, Tim, but I looked into it and may look further.

Something occurred to me after reading your reply: Could ChatGPT have been trained to make that specific joke? So I just visited my favorite language model, and this was our conversation.

Me: Good evening, Chat. I'd like you to be a stand-up comedian and tell me a short joke about an AI model.

Chat: Sure, here's one for you: Why did the AI model cross the road? To get to the other side of the data set!

Me: Thanks, Chat. Now, let me ask you to act as a humor critic and say what you think is funny about that joke?

Chat: The joke is a play on the classic "Why did the chicken cross the road?" question, with the AI model taking the place of the chicken. The humor comes from the idea of an AI model having a purpose or goal, in this case to get to the other side of the data set, and the absurdity of a machine having a desire to cross a road. The joke is a lighthearted take on AI and highlights the contrast between the predictable behavior of a machine and the more unpredictable behavior of a living creature like a chicken.

Me: Great analysis, Chat! Enjoy your evening.

Chat: You too! Let me know if there's anything else I can help with.

-- So not only does it do self-parody (sorta), it can emulate a humor critic with all their analytic dullness.

*Added edit, five minutes later*

I realized I really hadn't answered my own question, so I went back to Chat.

Me: Oh. One more thing, Chat. I'm always curious about the way your training leads to the responses you give to my questions. Were you trained to offer that specific joke about AI?

Chat: As an AI language model, I wasn't trained specifically on that joke, but I was trained on a diverse range of text, including jokes and humor. This enables me to generate new text that is coherent and in line with the style and tone of the text I was trained on. My response to your request for a joke about AI was generated based on patterns and relationships learned from the vast amounts of text data I was trained on, which included examples of jokes about AI and other related topics.

-- I'll skip the small talk afterwards... What struck me about this answer (which is sort of half-and-half--maybe the jokes about AI he was trained on came close; after all, he and Ethan Mollick's model came up with the same joke) is that it seems pretty obvious how ChatGPT came up with its explanation, because it only required a descriptive response. It's much less obvious how it came up with its earlier critique of its own joke, because that answer was analytic (and delivered just as instantaneously). I can see easily enough how it could synthesize a joke on the basis of models alone, but the analytic move is less obvious.

I wonder whether the joke analysis was, essentially, part of the process of synthesizing the joke (so Chat was really just describing a phase of what it had already done, testing possible jokes for their match to humor principles), or whether it was a separate post hoc analysis (which it seems to me would require a more sophisticated operation).
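A toy sketch may ground my hunch. If the usual description of these systems is right (this is illustrative Python, not OpenAI's actual code; `model` and the token encoding are hypothetical stand-ins), the joke and the critique would come out of the exact same next-token loop, just with different prompts:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=0.8):
    # Softmax over the model's raw scores, then draw one token at random.
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

def generate(model, prompt_tokens, max_new_tokens=60):
    # The same loop yields a joke, a critique of the joke, or an account
    # of training -- only the prompt differs.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)  # hypothetical: one score per vocabulary entry
        tokens.append(sample_next_token(logits))
    return tokens
```

If that picture is accurate, the "analysis" needn't be a separate, more sophisticated operation at all.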

"...with all their analytic dullness." :-) Such an enjoyable post! Great follow up questions for ChatGPT.

My IT students at Seneca College in Toronto can do similar dancing around a question: combining related things into an answer without any "more sophisticated operation."

ChatGPT's joke seems to arise from matching humor principles, as you said. That's the way GPT's language model works; it said so in response to your query. But the AI missed the incongruity that makes jokes funny: a chicken and a road, then the chicken's unknown motivation to cross instead of ignoring the road and going home. Both the setup and the punchline have incongruities. An AI model versus an algorithm or a data set is not incongruous.

A real IT data-set joke: SQL walks into a bar, goes between two tables and says, “May I join you?” If ChatGPT knew that joke, it would have been WAY down its probabilistic list. Even if the joke had been considered as a candidate, the AI would likely have rejected it, because IT knowledge is a necessary antecedent and the conceptual domain was outside the chat's context.
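To put a finer point on "WAY down its probabilistic list": one rough way to rank whole candidate jokes is by their summed token log-probabilities. This is a hypothetical sketch (the `model` here is assumed to return a probability for each possible next token; it's not a real API):

```python
import math

def sequence_logprob(model, tokens):
    # Sum of log P(token_i | preceding tokens); higher means "more expected."
    total = 0.0
    for i in range(1, len(tokens)):
        next_token_probs = model(tokens[:i])  # distribution over the vocabulary
        total += math.log(next_token_probs[tokens[i]])
    return total

# A generic chicken-crossing variant scores high because that pattern
# saturates the training text; the SQL "join you" joke depends on rarer
# IT-domain context, so its sequence log-probability lands far lower.
# best_joke = max(candidates, key=lambda j: sequence_logprob(model, j))
```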

Understanding the incongruities and absurd relationships that produce humour is currently beyond the GPT model. However, I expect it will get better.

Why can't AI do stand-up? Because it doesn't have a leg to stand on.

Okay, I'll stop now.

ChatGPT seems to do best on subjects it 'understands': assisting with or writing code, AI itself, and probably more.

But try it on something more obscure, something you really know about. I did, with the USS Indianapolis disaster, and while it had some knowledge of the subject, it made at least one glaring error and added comments that were just BS.

It makes me suspect that there's a lot of behind-the-scenes coding going on.

There's a Substack I read that had a completely different take on this topic.

https://nakedemperor.substack.com/p/chat-gpt-is-so-woke-it-wouldnt-write

I was in software for many years; I get how this works, and you are correct: "there's a lot of behind-the-scenes coding going on."

It seems to do reasonably well discussing biochemistry.

Would you trust it?

Could it be that highly technical subjects don't have a lot of "fluff" on the internet, so the source material isn't as tainted?

Would I trust it? More than a random human writing on the internet, for whatever that's worth. Less than a peer-reviewed publication. If it claims to have sources, I'm becoming skeptical that the specific sources actually exist.

It could well be that there's less fluff on highly technical topics, yes. That would make sense.

It's vague about its sources (when it gives them at all). And Mollick says it lies.

I'm going to have a class use ChatGPT to write a paper; part of the assignment is to confirm everything: find a source (at least an internet page) for everything Chat writes.
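If anyone wants to automate a first pass at that, a tiny helper like this (hypothetical, and deliberately dumb) could at least flag cited URLs that don't resolve; of course, a live page still isn't proof that it supports the claim:

```python
import requests

def source_resolves(url: str) -> bool:
    # HEAD request: does the page even exist? Students still have to
    # read it to see whether it backs up what Chat wrote.
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False
```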

Here are some answers I got about its sources:

What is the source of your training?

I am a language model developed by OpenAI, a private artificial intelligence research laboratory consisting of the for-profit OpenAI LP and its parent company, the non-profit OpenAI Inc. I was trained on a dataset that was collected from the internet and various other sources. The dataset includes a wide variety of texts, such as articles, books, and websites, in various languages.

How was the dataset collected?

The dataset used to train me was collected by OpenAI from various sources on the internet. It's likely that the data collection process was done using web scraping techniques and through various other means to acquire large amounts of text data. It's possible that the text data was obtained from a wide variety of sources such as websites, books, articles, and other types of documents.
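For anyone unfamiliar with the term, "web scraping techniques" usually means something like this minimal sketch (my own illustration of the general idea, not OpenAI's published pipeline):

```python
import requests
from bs4 import BeautifulSoup

def page_text(url: str) -> str:
    # Fetch one page and keep only the visible prose.
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()                       # drop non-prose markup
    return " ".join(soup.get_text().split())  # collapse whitespace

# corpus = [page_text(u) for u in urls_to_crawl]
```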

I had to laugh at the cliché that every article on AI reveals that the AI wrote part of the article. I ended up doing something similar by asking ChatGPT about creativity (and asking it to write a sonnet on a topic for which no sonnets exist). I think a lot of the fear is in not understanding. When we do understand it, it becomes a powerful tool.

https://polymathicbeing.substack.com/p/can-ai-be-creative
