37 Comments

Thanks as always for a well-grounded and practical post.

I personally found that introducing an initial back-and-forth into any interaction with LLMs drastically improves most outcomes. I wrote about this in early January.

The way it works is you write your starting prompt as you wish, in natural language, then you append something along these lines to it:

“Before you respond, please ask me any clarifying questions you need to make your reply more complete and relevant. Be as thorough as needed.”

ChatGPT (GPT-4) will usually ask very pertinent, structured questions that force you to think more deeply about your request and what you're trying to achieve. Once you respond to the questions, ChatGPT will give you something that's much better than if you'd stuck to a one-off request with no follow-up.
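
If you work through the API rather than the chat interface, a minimal sketch of the same idea might look like this (OpenAI Python SDK assumed; the model name and the exact wording of the suffix are just placeholders):

```python
# Minimal sketch: append a "clarify first" suffix to any starting prompt.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

CLARIFY_SUFFIX = (
    "\n\nBefore you respond, please ask me any clarifying questions you need "
    "to make your reply more complete and relevant. Be as thorough as needed."
)

def start_with_clarification(user_prompt: str, model: str = "gpt-4") -> str:
    """Send the prompt plus the clarify-first suffix; returns the model's questions."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_prompt + CLARIFY_SUFFIX}],
    )
    return response.choices[0].message.content

print(start_with_clarification("Help me plan a workshop on AI for teachers."))
```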

This is a very good approach. It's like making the LLM responsible for ensuring it provides the most accurate version of its answer. Thanks for sharing.

Do you think you could do something like this on a prompt you repeat often in order to refine it? Like, after seeing what info is missing via a back-and-forth a few times, you could then pre-emptively add that info to get a more successful one-shot prompt?

Definitely. I always end up tweaking the prompts to fix what doesn't work or add details, etc.

But what I found is that it's a bad idea to ask ChatGPT to do it on your behalf. If, for example, I tell ChatGPT that I don't like the image because there's a Santa-Yeti creature in it, it'll tell DALL-E 3 something like "Make sure there is no Santa-Yeti anywhere in the image!", which actually increases the chance of DALL-E 3 drawing a Santa-Yeti, because it doesn't do well with negative directions.

So what I do is look at the initial prompt ChatGPT gives DALL-E 3 and then edit it myself to keep only the parts that matter.

Makes sense, thanks.

I found the same thing with Dream Studio (Stable Diffusion) until they introduced negative prompts. With those, you could put "Santa" as a negative prompt, and *poof*, no more Santa Claus.
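
(If anyone wants to try the same thing outside Dream Studio, here's a rough sketch using the Hugging Face diffusers library; the model ID and prompts are purely illustrative.)

```python
# Rough sketch of a negative prompt with Stable Diffusion via diffusers.
# Model ID and prompts are illustrative; requires a GPU plus the torch and diffusers packages.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a snowy mountain village at dusk, warm window lights",
    negative_prompt="Santa",  # steer the model away from Santa instead of asking in-prompt
).images[0]
image.save("village.png")
```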

Oops, I just realized I replied to you with a comment from a completely unrelated conversation (that one was about image prompts). My bad.

Now that I can see the context, absolutely: If you're finding yourself using the same prompt frequently for the exact same purpose, starting out with the "Ask me questions" approach lets you get ChatGPT (or any other LLM) to flesh out the details, at which point you can build a more robust prompt for the future. (Then you can maybe even turn it into a custom GPT.)

The beauty of the "ask me questions" approach though is that you don't need to aim for a full, exhaustive prompt from the get-go. You let the chatbot do the heavy lifting for you, and it also adapts its questions based on your request.

(I covered all of this in more detail here: https://www.whytryai.com/p/two-methods-ai-chatbots-prompt-themselves)

Thanks, will check it out!

"For most people, worrying about optimizing prompting is a waste of time. They can just talk to the AI, ask for what they want, and get great results without worrying too much about prompts. In fact, almost every AI insider I speak to believes that “being good at prompting” is not a valuable skill for most people in the future, because, as AIs improve, they will infer your intentions better than you can."

Interesting to see this perspective; I suspect there's a lot of truth to it, and some further implications to consider. I often see commentary on 'prompt engineering' or 'tricks' to get the best response from models; less often do I see any recognition that the user's proficiency with and command of language are important for achieving good results. I suspect the ability to use natural language with precision, clarity, and nuance will increasingly be an important skill on a broader basis than specific knowledge of 'prompt engineering'. I wonder whether this also implies that education may switch focus back from STEM towards language and the humanities. The familiar pattern of students being comfortable with failing English (or at least performing relatively weakly in the subject), happy in the knowledge that they can progress to a good STEM career on the back of strong science/maths skills, may not be viable for much longer.

I find this conversation fascinating and could not agree with you more. Prompt engineering will push our human brains to actually think about what we're trying to achieve and explain it in detail for a better outcome. One of the biggest challenges today in relationships (whether a business transaction, dealing with work colleagues, friends, family, loved ones…) is communication. We make assumptions to fill in the gaps of our understanding; that will not work with an LLM. So I agree with you: proficiency and command of the language will be one of the competitive advantages in mastering AI, at least in the short and medium term.

I've thought this for a while and have always wondered what all the fuss was about with prompt engineering. The only exception may be image prompting (e.g. Stable Diffusion), but even there the newer models seem to be incorporating a "plain English" preprocessing step of some kind.

Really appreciate the linked resources and have pre-ordered the book, but it doesn't look like any of the links for an audio version go anywhere (at least in the UK). Is that on the way? As you can probably tell, I like to be able to listen to my reading.

Here is this post, AI-narrated:

https://askwhocastsai.substack.com/p/captains-log-the-irreducible-weirdness

Thanks Ethan! Fabulous, as always. I have found the best way for me to generate useful prompts consistently is to build my own prompting custom GPT in ChatGPT. I simply took the most recent and credible research papers I could find on prompting to use as my knowledge base, then used the GPT itself to steadily iterate a custom instructions prompt until I got one that both illustrated the primary principles and consistently generated prompts reflecting them. As a bonus, I built into the prompt a request to describe the principles it was illustrating in each delivered prompt, which allows less experienced members of my team to learn the principles while getting advanced prompts. The prompter isn't necessary for simple tasks, but I find it is vastly superior for complex tasks, and it was easy to build.

Brad, care to share your reference list of "credible research papers...on prompting"? Curious if you used only papers relevant to a given LLM or...

I'd be interested in seeing that GPT and how you built it. I just did a tutorial for my team (I'm a school administrator), and helping them create good prompts is going to be my biggest task.

I was intrigued by the greater idea diversity of the prompts that assigned a Steve Jobs persona. It got me thinking about the scope of his legacy. Simply put, Jobs' legacy dwarfs Musk's.

The truth is that few visionaries have embodied and helped shape the digital age quite like Steve Jobs, both in terms of the range of revolutionary products and services, and in terms of his iconoclasm, which sought above all to empower artistic creativity, challenge the status quo, and inspire people with innovations that astonished both functionally and aesthetically.

Reflecting on that legacy, I found the higher idea diversity of prompts using his persona less surprising and more understandable, if not entirely explicable.

But to be fair to Elon Musk, it's probably too early to think about a legacy that's still unfolding and very much in the making, rendering any assessment of Musk's legacy premature. Excuse me for thinking out loud.

There was a time when knowing how to search on Google made a difference, with tricks like using quotes for an exact match. These days, search works well enough without them. We will see a similar shift for prompting soon.

One concrete way to get there is through cheaper and faster LLM inference, so that each query (prompt) generates multiple responses, with the user choosing the one that best fits their intentions, which is what we already do when we google.
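
Something like that is already easy to sketch against today's APIs, for example (OpenAI Python SDK assumed; the model name and the number of candidates are placeholders):

```python
# Sketch: generate several candidate responses per prompt and let the user pick.
# Assumes the OpenAI Python SDK (openai >= 1.0); model name and n are placeholders.
from openai import OpenAI

client = OpenAI()

def candidates(prompt: str, n: int = 3) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=n,                # ask for n independent completions
        temperature=1.0,    # keep some diversity between them
    )
    return [choice.message.content for choice in response.choices]

for i, text in enumerate(candidates("Suggest a name for a budgeting app."), start=1):
    print(f"--- Option {i} ---\n{text}\n")
```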

I'm active on Twitter and have built 2 AI apps after experimenting with a lot of prompts.

Yet I'm tired of tech bros suggesting we just need to use this one special word or trick to unlock all the capabilities of AI.

Exactly as you said, not all prompts work for everyone, every time. It's an evolving beast no one has any real grasp on. We're all just muddling along!

This is so real. The notion of framing AI prompts within the context of Star Trek episodes or political thrillers to enhance mathematical problem-solving seems unconventional and challenges traditional methods, prompting a reevaluation of how we approach leveraging AI capabilities effectively.

I find that, rather than presenting a good INITIAL prompt to chatbots, a CONVERSATION works so much better. E.g., just ask for some ideas with no special prompting, then respond to them or ask for some more that are very different. After a few exchanges, I've accomplished what I want in a natural manner.

The problem is that one can run experiments with initial prompts and their responses, but it is very tricky to quantitatively evaluate conversations. So everyone is looking for their keys under the streetlight.

Hey Ethan! This was really interesting. I especially like what you're doing with More Useful Things! I built a Pickaxe studio out of the Prompts for Instructors section and wanted to hear what you think of this presentation.

https://studio.pickaxeproject.com/STUDIOYE69TXL994MPZ9H

Is anyone here aware of people making conceptual art out of prompting LLMs in very strange ways? The Captain's Log prompt doesn't look that strange to me, though it is a bit funny that it works so well. It makes me wonder what possibilities lie in wait by using prompts unlike any string of words we've seen before.

Sadly, www.moreusefulthings.com is blocked by our security team as a risk. :(

I find your work very valuable. I'm glad you have this site. You're the first place I go to understand the current state of AI and what it can do.

This is really helpful. Thank you! I've been working with GPT-4 a lot, as well as a few of the open-source models. Chain of thought is the approach I've stumbled upon that gives me the most consistent results, but I haven't really tried the other prompts that you mention as giving consistent results. Things are still pretty hit or miss for me across the board.
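
(For anyone who hasn't tried it, the chain-of-thought nudge can be as simple as the following; the wording is just one illustration, not a canonical formula.)

```python
# Minimal chain-of-thought nudge; the question and wording are illustrative only.
question = "A train leaves at 3:40 pm and arrives at 6:15 pm. How long is the trip?"
cot_prompt = (
    f"{question}\n\n"
    "Work through this step by step, showing your reasoning, "
    "and only state the final answer at the end."
)
print(cot_prompt)  # paste into ChatGPT or send through whichever API you use
```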

This weekend I worked with GPT-4 to try to build a PowerPoint presentation. I spent much of the last two days trying to get it to do so, but it either told me that it doesn't have the capability, consistently gave errors when analyzing and never generated a PPT file, or generated a PPT file that was literally just the prompt I gave it in black text on a white background.

Yet back in November I used it to generate a PPT that had pretty decent content and was visually pretty good, complete with stock photos, varied layouts, and consistent colors. I had to replace the stock photos with ones that were more context-appropriate and do some wordsmithing on the text... but all in all not bad, and it saved me hours compared to starting from a blank slate. This week I would have saved hours if I hadn't tried to use AI at all.

Ethan, your exploration into the "irreducible weirdness of prompting AIs" is both enlightening and fascinating! It's intriguing how the effectiveness of AI prompts can hinge on such imaginative scenarios, like Star Trek episodes or political thrillers for solving math problems. This unexpected twist not only adds a layer of creativity to our interactions with AI but also challenges us to think outside the box when it comes to prompting techniques. Your efforts in creating a companion website, More Useful Things, as a resource hub is commendable and undoubtedly a valuable tool for many.

Your dive into the necessity of experimentation in prompt crafting, alongside the realisation that there's no one-size-fits-all solution, is incredibly insightful. It's especially interesting how the study you mentioned showcases the potential of Chain of Thought prompting in generating diverse ideas. This not only underscores the importance of structured prompting but also highlights the nuanced understanding required to effectively communicate with AI. It's fascinating how these interactions blend science and art, encouraging users to develop an intuition for AI "personalities." Your work is a testament to the endless possibilities that thoughtful and imaginative prompting can unlock, making AI more accessible and effective for a wide array of applications.

I'm pretty sure this was written by AI. Why, though?

Wait a second, 10 hours to get good at prompting a given LLM, and 10,000 hours to get good at concert piano playing? I know what I would choose. ;)

Anecdote 1:

I asked Claude 3 Opus to write an "unreserved recommendation" for a student who had gotten a subpar grade in one of my classes. Claude said, "Are you sure about that? Shouldn't you write a more nuanced recommendation, taking into account that low grade?" I was initially put off: "What is this uppity AI?" And then I realized that I was getting a clue about the AI's personality without even noticing it, and, like Ethan writes, knowing an AI's personality will help inform your prompts. I acquiesced but, pushing back, said that this student got a better grade in the next class of mine that they took. At which point Claude got back on task and drafted a recommendation to both of our liking. [What just happened?]

Anecdote 2:

Sometimes my spouse struggles with phrasing, words, etc. We were talking the other day about an ordeal we have been through over the past year, and I was bringing AI into the conversation: "how should we feel about", "how could we better promulgate our interests", "what is the breakdown in communication here". To which my wife said, "I don't care what you are typing into AI, I want to know what you think. How you feel!" I pushed back: "By typing my interpretation of your thoughts, I get perspective from the AI, and by round-tripping back to you, we are honing our communication." It's treating the AI like an impartial observer, checking whether the message sent is the one that is received, and re-wording it when it isn't. That was amazing, as long as we didn't give up. [Did that really happen?]
