30 Comments

Also, I feel like plenty of people could be described as word-prediction machines, but they're still people

We, as a collective and as a society, are not ready

As Ray Kurzweil alluded to, we're now in the second half of the chessboard. Our existing mental models of how the world works are going to be fundamentally altered in a short amount of time.
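
For anyone unfamiliar with the metaphor, here is a minimal sketch of the arithmetic behind it (the classic rice-and-chessboard parable: one grain on the first square, doubling on every square after it):

```python
# The parable behind "the second half of the chessboard": one grain on
# square 1, doubling on each subsequent square.
grains_first_half = sum(2**i for i in range(32))        # squares 1-32
grains_second_half = sum(2**i for i in range(32, 64))   # squares 33-64

print(f"first half:  {grains_first_half:,}")    # ~4.3 billion grains
print(f"second half: {grains_second_half:,}")   # ~18.4 quintillion grains
print(f"ratio: {grains_second_half / grains_first_half:,.0f}")  # exactly 2**32
```

The second half alone holds over four billion times the grains of the entire first half, which is the point: growth that felt manageable suddenly doesn't.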

"There is no instruction manual for the current crop of LLMs. You can only learn through trial-and-error." An underrated insight! Context and content are intertwined. At this stage only through my direct interactions with LLMs I can spot where my "unlearning" needs to happen. It feels like difference between my familiar experience with search engines that are out now, is that there’s not one ChatGPT/Bing/AI. It is the human's in-session prompts that are guiding which ChatGPT/Bing/AI "persona" answers in the next line of text. Whether the human is aware or not, the human is giving the context. It's the LLMs job to interpret human's words as context. Each of your post highlights the type of metacognition required to effectively communicate with LLMs need to be explicitly taught: the learner's mind needs to be selectively focused to attend to the relevant and importantly ignore the irrelevant information (e.g., unlearn how to "ask for factual information"). This is a great opportunity to shape how we introduce LLM interactions.

One obvious use case is an AI that could help with writing proposals or building PowerPoints. I work at one of the Big 4 consulting companies, and this takes up an incredible amount of time.

Very interesting. Thank you. The Vonnegut example is particularly impressive. With many of these I have a slight jolt of anxiety when I read them, like when you think you have a special skill and then you meet someone better at your skill, except in this case “you have” is “humans have” and “someone” is a robot. It’s remarkable. You mentioned several other AIs are coming. As someone who is still on the Bing waiting list and itching to get off, can you let us know when those are coming? I want to be first in line next time.

Dear Professor Mollick,

I'm a retired rugby player from Australia, and I want to thank you for your guidance on using AI via your Substack and Twitter.

When I retired, I was overwhelmed and felt a long way behind everyone else when I joined the workforce. But your insights about the impact AI will have on how we work and how to use it effectively have been tremendously helpful.

I now feel prepared for the AI wave and back on level pegging with people of my age, and I can’t thank you enough.

Thanks again for sharing your knowledge, and I would love to shout you a beer if you're ever in Australia.

Cheers,

Ben

Anyone asked the chatbot to write an instruction manual for itself?

These are two salient points:

"We are not ready for the future of Chatbots" and "We are not ready for the future of Analytic Engines".

One of the reasons organisations are not ready is a lack of data literacy and competency training. As you said, there's no instruction manual. However, we haven't trained our workforce in even a basic understanding of data. How can we expect them to use and assess these generative AI tools? Yes, a very weird world ahead indeed.

It seems obvious to me that prompt engineering will soon be a table-stakes skill for most white-collar jobs. It’s interesting, then, to note that Anthropic is hiring for a position that will be exclusively “prompt engineer”. Maybe the path is: people are hired for this expertise in order to suffuse their knowledge throughout their organization. But as you say in this piece, these tools are improving very quickly, and knowledge of how to use them is spreading nearly as fast.

If I had access to the training inputs of an Analytic Engine - could I train it on my writings and obtain a similar analysis? (A sketch of what that might look like follows below.)

Perhaps, like most critiques, it would be uncomfortable - but I've always found it hard to get well-thought-out comments/critiques.

Hmmmm.... What if you could link to a news article and ask for a fact check? Now that might be really uncomfortable.

But this might be very valuable - might even make a business out of providing it.
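
You can't retrain Bing itself, but fine-tuning APIs do accept your own text. As one hedged illustration, here is a minimal sketch of what training data built from your own writings might look like, assuming OpenAI's JSONL chat fine-tuning format; the prompts, answers, and file name are invented:

```python
# Hypothetical sketch: packaging your own writings as fine-tuning data in
# OpenAI's JSONL chat format. All examples here are made up.
import json

my_writings = [
    ("What do I argue about deadlines?",
     "Deadlines, as I have written before, are promises to your future self."),
    ("Summarize my view on critique.",
     "A good critique is uncomfortable by design; comfort is rarely useful."),
]

with open("my_writing.jsonl", "w") as f:
    for prompt, completion in my_writings:
        record = {
            "messages": [
                {"role": "system", "content": "Answer in the author's own voice."},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        f.write(json.dumps(record) + "\n")
```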

Microsoft is constantly updating what prompts are considered fair use violations. Following Ethan's prompts, Bing responded to me that it would be a copyright violation to summarize my own academic papers.

"On every dimension, Bing’s AI, which does not actually represent a technological leap over ChatGPT, far outpaces the earlier AI - which is less than three months old!"

Is this accurately stated? The assumption, based on its capabilities, is that Bing AI is GPT-4 based, at least an early / in-progress model, which IS a technological leap over ChatGPT-3/3.5, considering the exponential growth in parameter counts.

On another note, Bing's habit of ending paragraphs / thought sequences with emojis is a nice touch for about 5 seconds, then becomes irritating and trite with every subsequent response. It diminishes the quality of interaction and trust. As does the seemingly Hemingway-style brevity of its sentences. It feels like they capped responses at a middle-school level of readability (confirmed with a Flesch-Kincaid score; a sketch of the check follows below).

Perhaps in the future AI will adjust its response mode based on the prompter's level / preference.
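
For anyone who wants to replicate that readability check, here is a minimal sketch of the standard Flesch-Kincaid grade-level formula; the syllable counter is a crude heuristic, so treat the output as approximate:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = "Readability formulas reward short sentences and short words."
print(f"FK grade level: {flesch_kincaid_grade(sample):.1f}")  # ~12.6 with this crude counter
```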

re: "Can people be manipulated by AIs?"

Early work on the topic is just starting to appear:

https://arxiv.org/abs/2302.00560

" Co-Writing with Opinionated Language Models Affects Users' Views

...Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey"

Many didn't realize they were being led. Presumably future AIs could more subtly lead people who are more prone to spotting bias. AIs might be politically biased, or biased regarding various cultural viewpoints, or biased regarding what products to buy. Will companies pay for AI to subtly steer people their way rather than through explicit advertising? Will the public accept this since the AI is so useful otherwise?

Even bias regarding product choices might have been inadvertently trained into the AI through the content of its training set rather than being paid for. LLMs don't reason about what they are trained on, so they don't weight neutral, objective reviews more heavily than subjective comments.

Some people complain about biased search engines, or Twitter bias either before or after Musk's buyout, but the vast majority keep using them rather than the smaller alternatives that arose due to those concerns.

I’m curious about the difference between sentience and the illusion of sentience. Is the issue a matter of agency? Interiority? If the latter, why does interiority matter?

Expected - yes.

Troubling? Only if you don't know how they work; other than that, the "rush to market" w/o a good test cycle is textbook new tech.

So maybe ethics and legal issues are important to emerging disruptive tech?

IP is the big stick here - remember Napster?

If I were an MBA student thinking about innovation, the Chinese Room problem would be a good topic to discuss, along with its business/strategy implications.

I wonder what part of this article was created by AI. And even if it is 99%, what does that change for me as a reader?
