
A complementary strategy I figured out yesterday is to preface your prompt with the instruction that ChatGPT ought to ask you clarifying questions before it generates its output. Below is an example of this strategy. (I tried to format this as clearly as possible in spite of Substack's commenting limits; let me know if anything is unclear, and I will try to clarify in further comments.)

Example: "I have a complicated project for you. I'm going to provide you the instructions, but before you act upon them, I want you to ask me some clarifying questions. Do you understand this?"

Its response: "Yes, I understand. Please provide the instructions for your complicated project, and I will ask you clarifying questions to ensure I fully understand the task before taking any action."

Prompt: I want you to provide an analysis of how New York City's municipal finances work. Focus especially on the revenue side, i.e., taxes. The question at hand is this: given the rise of remote work, and the concomitant decline in office use, will taxes from offices and commercial real estate decline? And, if those taxes do decline, how will New York City make up the shortfall? Use NYC's 1970s municipal finance crisis as a comparison for your analysis.

Its clarifying questions:

Thank you for providing the details of your project. To ensure I fully understand the scope and requirements, I have a few clarifying questions:

1. Would you like me to focus on specific types of taxes, such as property taxes or income taxes, or should I provide a broader overview of all taxes related to offices and commercial real estate in New York City?

2. Should I consider any specific geographic areas within New York City or analyze the city as a whole?

3. Are there any timeframes or specific years you would like me to use for comparison, or should I just focus on the general trends of remote work and its impact on office use?

4. In terms of the 1970s municipal finance crisis, would you like a brief overview of the crisis itself, or just a comparison between the potential decline in taxes from offices and commercial real estate and the financial challenges faced during that time?

5. Do you have any preferences for the format of the analysis, such as a structured report, a list of key points, or a more conversational approach?

Please let me know if there's any additional information or context you'd like me to consider in my analysis.
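For anyone who wants to script this flow rather than paste it into the chat window, here is a minimal sketch of the same two-turn pattern. It assumes the openai Python package (pre-1.0 ChatCompletion API) with an API key in the environment, and the prompt text is abbreviated:

```python
# Minimal sketch of the "clarifying questions first" flow.
# Assumes the openai package (pre-1.0) and OPENAI_API_KEY set in the env.
import openai

MODEL = "gpt-3.5-turbo"  # placeholder; any chat model works

messages = [{
    "role": "user",
    "content": (
        "I have a complicated project for you. Before you act on my "
        "instructions, ask me clarifying questions. Instructions: "
        "analyze how NYC's municipal finances work..."  # abbreviated
    ),
}]

# Turn 1: the model should reply with clarifying questions.
resp = openai.ChatCompletion.create(model=MODEL, messages=messages)
questions = resp.choices[0].message.content
messages.append({"role": "assistant", "content": questions})

# Turn 2: answer the questions, then ask for the final output.
answers = input(questions + "\n\nYour answers: ")
messages.append({"role": "user",
                 "content": answers + "\n\nNow produce the analysis."})
resp = openai.ChatCompletion.create(model=MODEL, messages=messages)
print(resp.choices[0].message.content)
```

The point of the second turn is that the model's questions go back into the conversation history, so the final answer is conditioned on your clarifications rather than on its initial guesses.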


This is brilliant! Coupled with the 'reflection' strategy, this can give remarkably good results.
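For the curious, the 'reflection' step is just a second round trip: have the model critique its own draft and then revise it. A rough sketch under the same assumptions as above (openai package, made-up prompts):

```python
# Rough sketch of the 'reflection' pattern: draft, critique, revise.
# Assumes the openai package (pre-1.0) with OPENAI_API_KEY set.
import openai

def ask(messages):
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                        messages=messages)
    return resp.choices[0].message.content

history = [{"role": "user", "content": "Draft an answer to: <your prompt>"}]
draft = ask(history)
history += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content":
     "Critique your draft for errors and gaps, then rewrite it."},
]
print(ask(history))
```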


This is an excellent discovery! For comparison, here are the clarifying questions Bing asked me:

What is the purpose and scope of this analysis? Who is the intended audience and what is the desired outcome?

How long and detailed should this analysis be? Do you have a word count or a page limit?

What are the main sources of revenue for New York City’s municipal finances? How much do taxes from offices and commercial real estate contribute to the city’s budget?

How do you define remote work and office use? Do you have any data or estimates on how these have changed due to the COVID-19 pandemic and its aftermath?

What are the main factors that led to New York City’s 1970s municipal finance crisis? How did the city cope with the crisis and what were the long-term consequences?

How do you want me to structure and format this analysis? Do you have any specific guidelines or preferences for citation style, headings, tables, charts, etc.?


Excellent example. Thanks for sharing. Did you get the reply you were expecting or hoping for?


OK, I wrote a post which demonstrates this tactic. Note that the post uses a different prompt than what I provide above. https://davefriedman.substack.com/p/mastering-the-chatbot-a-foolproof


Yes, the reply was pretty good. I’ll try to write something up about it soon.


Nice write-up! I can envision a browser plugin that reviews sentences I type and offers me suggestions when I pause to think.

For example, if I am making a logic mistake, it suggests a better version. If I am rambling, it proposes a simpler version that is easier to read (playing the role of my audience).

Maybe it's time to start an experiment on GitHub. :-)
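The core of such a plugin could be tiny; the hard part is the browser plumbing. Here is a sketch of just the critique call, with a hypothetical CRITIC_PROMPT and the same openai package assumed:

```python
# Sketch of the suggestion loop a writing plugin might call on pause.
# CRITIC_PROMPT is a made-up prompt; assumes the openai package (pre-1.0).
import openai

CRITIC_PROMPT = (
    "You are this text's intended reader. If a sentence contains a logic "
    "mistake, suggest a corrected version. If it rambles, suggest a "
    "simpler one. If it is fine, reply 'OK'."
)

def suggest(sentence: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": CRITIC_PROMPT},
                  {"role": "user", "content": sentence}],
    )
    return resp.choices[0].message.content

print(suggest("Me am thinking this sentence are fine."))
```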


Ethan, as usual, tremendous. I’m the Dean of MBA students at Stern, and have gone deep on human use cases on LinkedIn (and I’m meeting Allie Miller next week; fun!). Your examples highlight the best use cases to me. I think the mistake we make is thinking ChatGPT is a microwave, when it’s actually an Instant Pot. It will take 30 min, not 5 seconds, but it will produce a perfect 10-hour pulled pork. Thanks for your insights and would love to connect on LinkedIn or elsewhere. -Conor Grennan


Every time I read one of these, I think to myself: "This is the best one and I need to send it to a bunch of people."

Every. Time.

Well done as always. Sending this to a bunch of project managers I know...


"I think it is worth concluding with a final (human) throught."

I'm assuming you intentionally threw in a misspelled word to prove that it was a human writing this final paragraph. Clever.


@JGP I notice the same thing, but (confusingly) Ethan says that "the above paragraph is, verbatim, the text" suggested by the bot.


Current 4/6/2023 version:

" but I think it is worth concluding with a final (human) thought... "

Astonishingly, now the typo is...corrected!

I'm beginning to think that Ethan Mollick *is* ChatGPT. {8^0)

Author: I pasted in the two sections together and accidentally added the extra "s" - figured I would delete it after everyone pointed it out. And I am still human.


Great series of posts. It's very useful and refreshing to read about how we can leverage ChatGPT and improve our workflows today, as opposed to long rants about AI safety and alignment.


I would be really interested to see ChatGPT's response if you explored how realistic its assertion of 2% inflation is, and whether it would in fact come to a different conclusion if it used more realistic inflation estimates.


Wait, did the AI make the 'ass' instead of 'as' typo?


How elaborate (as in, how many sequential prompts) do your prompt chains get? Do they ever form trees, DAGs, or cycles of thought completion?
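For concreteness, here is what I mean by a chain; a tree or DAG would just feed one step's output into several downstream prompts instead of one. A toy sketch, assuming the openai package and with invented step templates:

```python
# Toy linear prompt chain: each step consumes the previous step's output.
# Step templates are invented; assumes the openai package (pre-1.0).
import openai

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

steps = [
    "List the key open questions about: {x}",
    "Answer each of these questions briefly:\n{x}",
    "Condense these answers into a short memo:\n{x}",
]

out = "the effect of remote work on NYC tax revenue"
for template in steps:
    out = ask(template.format(x=out))
print(out)
```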


Let's just hope people see it as a tool that gives advice rather than basing every single decision on AI. It's not a good idea to make decisions based solely on what an AI says.


I have used AI to handle DM's writer's block, so I know this sort of thing can be useful. However, I must add an important caveat: the AI itself has biases, and using it naively will only pull the user closer to those biases.

https://arxiv.org/pdf/2302.00560.pdf

EDIT: Or in the words of Emad Mostaque: "not your weights, not your brain."


Good read.


Regarding the “Abilene paradox,” take a look at the book Collective Illusions. It starts with that very effect, but doesn’t call it that. Quite good.


Many of these thinking companion use cases make a lot of sense. However, in your first example about the AI blog post, I think the ChatGPT responses actually introduce biases more than they resolve them, partly as a result of how careful ChatGPT is to present "reasonable" opinions on AI. ChatGPT seems to repeat talking points about how AI will only be complementary to human jobs, but it's not at all clear that some jobs won't be replaced, even in the short term. There's also a part about transparency being crucial in AI systems, but current LLMs are decidedly not transparent. We wouldn't even be having these conversations if all of the LLM providers weren't already releasing systems that are not transparent or accountable.

This is probably a topic where AI help is particularly unreliable. More generally, LLMs are something like the average of their training sets. They can certainly help you generate more ideas or check your existing ideas, but on any given point they are likely to have a bias towards the median and/or acceptable opinion. That's often correct, but sometimes decidedly not.


Proof Positive. Thanks for letting us have some fun at your expense.


Great LLM application.

Busting out of cog biases is really tough.

We made a cognitive bias mitigation game for IARPA with 9 (count 'em) Subject Matter Experts.

The decisions of these SMEs exhibited plenty of cognitive biases.* 😒

We would have loved a 'neutral' machine to identify bias, hopefully to monitor ourselves, but more so to resolve disagreements. It would be no less valuable when cognitive bias is not the explicit subject of discussion, but instead the hidden enemy of reason.

*Yes, there is a bias for that. It's called the Bias Blind Spot.
