30 Comments

Also, I feel like plenty of people could be described as word prediction machines, but like they’re still people

Exactly! If this applies to the NYT/Sydney chat: "They are basically word prediction machines and are merely reacting to prompts, completing the next sentences in response to what you write", then it definitely also applies to every conversation I have ever had with a real person.

Indeed ... what if we average people are not so far removed in our thinking and actions from this associative LLM procedure steered by hints and prompts, and only the creative minority among us non-silicons still contributes out-of-the-box thought impulses to the overflowing, overwhelmingly redundant data universe? ...

We, as a collective and as a society, are not ready.

As Ray Kurzweil alluded to, we're now in the second half of the chessboard. Our existing mental models of how the world works are going to be fundamentally altered in a short amount of time.

"There is no instruction manual for the current crop of LLMs. You can only learn through trial-and-error." An underrated insight! Context and content are intertwined. At this stage only through my direct interactions with LLMs I can spot where my "unlearning" needs to happen. It feels like difference between my familiar experience with search engines that are out now, is that there’s not one ChatGPT/Bing/AI. It is the human's in-session prompts that are guiding which ChatGPT/Bing/AI "persona" answers in the next line of text. Whether the human is aware or not, the human is giving the context. It's the LLMs job to interpret human's words as context. Each of your post highlights the type of metacognition required to effectively communicate with LLMs need to be explicitly taught: the learner's mind needs to be selectively focused to attend to the relevant and importantly ignore the irrelevant information (e.g., unlearn how to "ask for factual information"). This is a great opportunity to shape how we introduce LLM interactions.

One obvious use case is if an AI could help with writing proposals or doing PowerPoints. I work at one of the big 4 consulting companies and this takes up an incredible amount of time.

Imagine this project:

1) implement a tokenization scheme for .pptx files: in one direction, turn a PowerPoint into a sequence of tokens (even lossily; fine formatting details could be sacrificed); in the other, take a token stream and generate a PowerPoint file (see the sketch after this list)

2) get a very large corpus of big4 powerpoints, tokenize them

3) finetune GPT-N on them, maybe with appropriate prefix text saying "this is a powerpoint file"

4) ask the fine-tuned model to generate powerpoints, and it can.

5) you could even hook up the token-to-pptx decoding as an API service, train the model to call it Toolformer-style, and visualize the resulting presentation in the chat interface.

Someone at these companies should be doing this if they aren't already
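
Step 1 alone is surprisingly tractable with off-the-shelf tooling. A minimal, illustrative sketch, assuming the python-pptx library; the <slide>/<title>/<body> markers are an invented toy scheme, and a real pipeline would need something much richer:

    # Lossy .pptx <-> token round trip (pip install python-pptx).
    # The token markers are invented for illustration only.
    from pptx import Presentation

    def pptx_to_tokens(path):
        """Flatten a deck into a token stream, discarding fine formatting."""
        prs = Presentation(path)
        chunks = []
        for slide in prs.slides:
            chunks.append("<slide>")
            for shape in slide.shapes:
                if not shape.has_text_frame:
                    continue  # images, charts, etc. are sacrificed (lossy)
                is_title = shape.is_placeholder and shape.placeholder_format.idx == 0
                tag = "<title>" if is_title else "<body>"
                chunks.append(tag + " " + shape.text_frame.text)
        return "\n".join(chunks)

    def tokens_to_pptx(stream, out_path):
        """Rebuild a bare deck from a token stream (last <body> per slide wins)."""
        prs = Presentation()
        layout = prs.slide_layouts[1]  # "Title and Content" in the default template
        slide = None
        for line in stream.splitlines():
            if line == "<slide>":
                slide = prs.slides.add_slide(layout)
            elif line.startswith("<title>") and slide is not None:
                slide.shapes.title.text = line[len("<title>"):].strip()
            elif line.startswith("<body>") and slide is not None:
                slide.placeholders[1].text = line[len("<body>"):].strip()
        prs.save(out_path)

Once decks round-trip through text like this, steps 2 and 3 reduce to ordinary fine-tuning on a text corpus.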

Very interesting. Thank you. The Vonnegut example is particularly impressive. With many of these I have a slight jolt of anxiety when I read them, like when you think you have a special skill and then you meet someone better at your skill, except in this case "you have" is "humans have" and "someone" is a robot. It's remarkable. You mentioned several other AIs are coming. As someone who is still on the Bing waiting list and itching to get off it, can you let us know when those are coming? I want to be first in line next time.

Dear Professor Mollick,

I'm a retired rugby player from Australia, and I want to thank you for your guidance on using AI via your Substack and Twitter.

When I retired, I was overwhelmed and felt a long way behind everyone else when I joined the workforce. But your insights about the impact AI will have on how we work and how to use it effectively have been tremendously helpful.

I now feel prepared for the AI wave and back on level pegging with people of my age, and I can’t thank you enough.

Thanks again for sharing your knowledge. I'd love to shout you a beer if you're ever in Australia.

Cheers,

Ben

Anyone asked the chatbot to write an instruction manual for itself?

I have. Or at least aspects of itself

Did it generate useful content?

These two are salient points:

"We are not ready for the future of Chatbots" and "We are not ready for the future of Analytic Engines".

One of the reasons organisations are not ready is the lack of data literacy and competency training. As you said, there's no instruction manual. But we haven't trained our workforce in even a basic understanding of data, so how can we expect them to use and assess these generative AI tools? Yes, a very weird world ahead indeed.

Well, I think one obvious point is that the people who are presently learning how to interrogate AIs (i.e., "prompt engineering") will fare better than those who are not. But I *also* think that any such advantage will be fleeting, as knowledge about prompt engineering seems to spread quickly. And for ~most white collar work, the skills required for good prompt engineering aren't that hard to acquire.

Prompt engineering tools will spread and level the playing field. Ability today is a fleeting and illusory competitive advantage, purely temporary.

It's possible that these advantages are short-lived. Yet I've observed that it takes a certain amount of curiosity and self-drive, even now, to do a useful Google search. Having worked in the variety of places that I have, I'm always stunned by how few people possess even the googling skills required to find basic answers.

I'm not certain that this group of people will ever be motivated to learn the skills of prompt engineering. Maybe it will be made easier. But I'm guessing that the skills of critical reading, analysis, and revising your prompt to get a better generated response will be a thing that is beyond a sizeable portion of the population.

I don't think it's settled either way, though. It remains to be seen.

It seems obvious to me that prompt engineering will soon be a table-stakes skill for most white collar jobs. It’s interesting, then, to note that Anthropic is hiring for a position which will be exclusively “prompt engineer”. Maybe the path is: people are hired for this expertise in order to suffuse their knowledge throughout their organization. But as you say in this piece, these tools are improving very quickly, and knowledge of how to use them is spreading nearly as fast.

If I had access to the training inputs of an Analytic Engine, could I train it on my writings and obtain a similar analysis?

Perhaps, like most critiques, it would be uncomfortable - but I've always found it hard to get well-thought-out comments/critiques.

Hmmmm.... What if you could link to a news article and ask for a fact check? Now that might be really uncomfortable.

But this might be very valuable - someone might even make a business out of providing it.
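
You can't retrain Bing itself, but a rough stand-in for the experiment is already possible: fine-tune a base model on your own writing. A sketch, assuming the OpenAI fine-tuning API as it exists in early 2023 and an invented my_essays dataset; expect it to mimic your voice and themes rather than deliver a true Bing-style analysis:

    # Fine-tune a base model on hypothetical personal-writing data
    # (pip install openai; openai.api_key must be set; early-2023 API).
    import json
    import openai

    # Hypothetical: (prompt, completion) pairs drawn from my own essays.
    my_essays = [("Critique my argument about X:", "The argument overreaches because ...")]

    with open("my_writing.jsonl", "w") as f:
        for prompt, completion in my_essays:
            f.write(json.dumps({"prompt": prompt, "completion": " " + completion}) + "\n")

    uploaded = openai.File.create(file=open("my_writing.jsonl", "rb"), purpose="fine-tune")
    job = openai.FineTune.create(training_file=uploaded.id, model="davinci")
    print(job.id)  # poll until the job finishes, then query the fine-tuned model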

Microsoft is constantly updating which prompts are considered fair-use violations. Following Ethan's prompts, Bing responded to me that it would be a copyright violation to summarize my own academic papers.

"On every dimension, Bing’s AI, which does not actually represent a technological leap over ChatGPT, far outpaces the earlier AI - which is less than three months old!"

Is this accurately stated? The assumption, based on capabilities, is that Bing's AI is built on GPT-4, or at least an early/in-progress version of it, which IS a technological leap over ChatGPT's GPT-3/3.5, considering the exponential growth in parameter count.

On another note, Bing's habit of ending paragraphs / thought sequences with emojis is a nice touch for about 5 seconds, then becomes irritating and trite with every subsequent response. It diminishes the quality of interaction and trust. So does the seemingly Hemingway-esque style of overly brief sentences. It feels like they capped responses at a middle-school level of readability (confirmed with a Flesch-Kincaid score).
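
Anyone can verify the readability claim in a couple of lines; a sketch assuming the textstat package, with an invented sample reply:

    # Quick readability check (pip install textstat).
    import textstat

    reply = "Bing keeps answers short. It adds emojis. It stays friendly."  # invented sample
    print(textstat.flesch_kincaid_grade(reply))  # roughly 6-8 = middle-school level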

Perhaps in the future AI will adjust its response mode based on the prompter's level / preference.

re: "Can people be manipulated by AIs?"

There is just starting to be early work on the topic:

https://arxiv.org/abs/2302.00560

" Co-Writing with Opinionated Language Models Affects Users' Views

...Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey"

Many didn't realize they were being led. Presumably future AIs could more subtly lead even people who are prone to spotting bias. AIs might be politically biased, or biased regarding various cultural viewpoints, or biased regarding what products to buy. Will companies pay for AI to subtly steer people their way rather than through explicit advertising? Will the public accept this since the AI is so useful otherwise?

Even bias regarding product choices might have been inadvertently trained into the AI by the content of its training set rather than being paid for. LLMs don't reason about what they are trained on, so they don't weight neutral, objective reviews more heavily than subjective comments.

Some people complain about biased search engines, or Twitter bias either before or after Musk's buyout, but the vast majority keep using them rather than the smaller alternatives that arose due to those concerns.

I should note that the LLM's core mass of training text isn't weighted towards neutral, objective reviews, etc., but the RLHF training (the training on human feedback) may weight various viewpoints. Still, there are many things RLHF won't deal with that just come from the core training.

I’m curious about the difference between sentience and the illusion of sentience. Is the issue a matter of agency? Interiority? If the latter, why does interiority matter?

Expected - yes.

Troubling? Only if you don't know how they work; otherwise, the "rush to market" without a good test cycle is textbook new tech.

So maybe ethics and legal issues are important to emerging disruptive tech?

IP is the big stick here; remember Napster?

If I were an MBA student thinking about innovation, the Chinese Room problem would be a good topic to discuss, along with its business/strategy implications.

I wonder what part of this article was created by AI. And even if it's 99%, what does that change for me as a reader?
