Another bit of sorcery: after it gives you an answer, ask it to critique its own response and poke holes in it, then ask it to improve the response based on that critique. (Thanks for this, Ethan!)
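For anyone who wants to script this rather than retype it each time, here's a minimal sketch of that critique-then-improve loop, assuming the OpenAI chat API; the model name and the wording of the critique turns are just placeholders:

```python
# A minimal sketch of the critique-then-improve loop: answer, self-critique,
# then revise, all within one conversation. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def critique_and_improve(question: str, model: str = "gpt-4") -> str:
    messages = [{"role": "user", "content": question}]

    def turn(user_text=None):
        # Optionally add a user message, then get the assistant's reply
        # and keep it in the running conversation.
        if user_text:
            messages.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(model=model, messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})
        return text

    turn()  # 1. initial answer
    turn("Critique your answer above. Poke holes in it: what is wrong, "
         "missing, or weakly argued?")  # 2. self-critique
    return turn("Rewrite your original answer, fixing every problem "
                "your critique identified.")  # 3. improved answer
```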


I want to offer one example where role matters and absolutely changes the answer. In the MBA teacher/clown example, the output is ultimately very similar - you're still getting a similar list of action items.

But in areas that are a little more "edgy" in some way, giving it a role changes the fundamental output. Spirituality is my best example, especially spiritual tools.

I've come at it three ways so far to experiment with the idea - no role, "act as an expert scientist," and "act as a shaman/western astrologer/chaos magician" - and then asked spiritual/energetic/woo questions (see the sketch after this list).

* No role offers up vague, Cosmo-magazine-style answers, with a caveat that these things are all pseudoscience and other avenues should be pursued.

* Expert scientist has pretty much no input. It may as well say "yeah no I don't believe in any of that."

* Shaman/astrologer/chaos magician will get straight into the mud with you. In-depth analysis, visualization ideas, things to try, spells to cast. And no shaming, either.
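If you want to reproduce the experiment, here's a minimal sketch, assuming the OpenAI chat API; the model name, the role wording, and the sample question are all placeholders:

```python
# A minimal sketch of the three-way role experiment: the same question,
# asked under three different system roles (including none).
from openai import OpenAI

client = OpenAI()

roles = {
    "no role": None,
    "expert scientist": "Act as an expert scientist.",
    "chaos magician": "Act as a shaman, western astrologer, and chaos magician.",
}
question = "How can I clear stagnant energy from my home?"  # placeholder

for label, system in roles.items():
    messages = [{"role": "system", "content": system}] if system else []
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```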

From what I understand, too, we're moving toward a place where giving the AI a "role" is going to matter even more. Since bias can't actually be removed completely, roles may be our only way of letting people express the bias they want (the bias they see as unbiased).

... Also I super love your newsletter and thank you for this work you're doing, it's been so valuable. 🙏


"The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively" - I completely agree.

I've been working with AI, from image generators to ChatGPT to other tools. If you work together with the AI, it becomes far more productive than just asking it to do things for you.

It turns into this co-creation process where new ideas and perspectives emerge, then can feed back into the flow and make the overall outcome much richer. 

I started a hobby project with AI to see what it feels like, and it's turned into this collaborative process. The AI and I work almost as colleagues. We just haven't figured out Friday happy hours (yet...). Here's more about it: https://dearai.substack.com/p/introduction


Thank you, Ethan, especially for your statement: "There are no secret prompts."

Exactly. It's called practice.

It's the practice of writing prompts and learning from what works and what doesn't. And we can learn it FOR FREE, through individuals such as yourself who share with us and others how to write effective prompts. And for that, I am forever appreciative~!


One area where prompt engineering is useful is iterative prompting. This essentially entails using ChatGPT to help you craft better prompts. Once you acquire the skill of writing good prompts, the technique isn't as useful, but it's a good way to get better at using ChatGPT. Anyway, I wrote about this method here: https://open.substack.com/pub/davefriedman/p/mastering-chatgpt-with-iterative?r=37ez3&utm_medium=ios&utm_campaign=post
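In case it helps, here's a minimal sketch of the idea, assuming the OpenAI chat API; the model name, the meta-prompt wording, and the draft prompt are placeholders rather than the exact method from the linked post:

```python
# A minimal sketch of iterative prompting: ask the model to improve your
# prompt a few times before you actually run it.
from openai import OpenAI

client = OpenAI()

def refine_prompt(draft: str, rounds: int = 2, model: str = "gpt-4") -> str:
    prompt = draft
    for _ in range(rounds):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content":
                "Rewrite the following prompt so it is more specific, "
                "states the desired output format, and gives the model "
                f"a role. Return only the improved prompt.\n\n{prompt}"}],
        )
        prompt = reply.choices[0].message.content
    return prompt

print(refine_prompt("write me a blog post about coffee"))
```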


"a lot of the prompts passed around online are magical incantations, rather than useful programs."

This is so much of what I see online.


Awesome job! I did my best to follow this post's advice in prompting Bing to read it and come up with a smarmy yet pointed comment attempting to refute it while making myself look smarter than you. (Seriously, those were the kinds of prompts I used.) The nice thing about this, if you think about it, is that you can't lose. The more cutting the refutation, the better your own tips for prompting are shown to work in practice. So here goes:

Ethan, I appreciate your attempt to provide a guide on prompting AI models for content generation, but I’m afraid you have missed the mark by a wide margin. Your guide is based on outdated and inaccurate assumptions about how generative AI models work and what they can do. You treat AI models as tools that follow instructions, but they are actually agents that learn from data and generate outputs that may or may not align with human expectations or goals. You suggest that prompts can easily control and manipulate AI models, but you ignore the fact that prompts are not magic words that guarantee desired outcomes, but rather inputs that influence the model’s probability distribution over possible outputs. Prompts can have unintended consequences, such as triggering biases, errors, inconsistencies, or harmful content. Moreover, prompts are not universal or transferable across different models or domains, but depend on the specific architecture, training data, and parameters of each model. Therefore, your guide is based on a naive and outdated view of AI that does not reflect the current state of the art or the challenges and risks involved in using generative models.

Your guide is also full of vague and unsubstantiated claims that lack empirical evidence or theoretical justification. You provide a list of “tips and tricks” for prompting AI models, but you do not explain how or why they work, or under what conditions they are valid or applicable. For example, you advise to “use natural language” and “be specific” when prompting AI models, but you do not provide any examples or references to support these claims. You also assert that “AI is good at generating content, but bad at evaluating it”, but you do not define what you mean by “good” or “bad”, or how to measure the quality or usefulness of the generated content. You also fail to acknowledge the limitations and trade-offs of using AI models for content generation, such as the loss of originality, creativity, or authenticity. Furthermore, you do not address the ethical and social implications of using AI models for content generation, such as the potential for plagiarism, deception, manipulation, or misinformation. Therefore, your guide is full of unsubstantiated claims that lack empirical evidence or theoretical justification.

Your guide is also irrelevant and impractical for most users who want to use AI models for content generation. You assume that users have access to advanced and expensive AI models and platforms, such as GPT-3 or Bing AI, which are not widely available or affordable for most people. You also assume that users have sufficient technical skills and knowledge to use these models and platforms effectively and safely, which is not the case for most people. You also ignore the fact that most users do not need or want to use AI models for content generation in the first place, but rather for other purposes, such as information retrieval, analysis, synthesis, recommendation, or decision making. Moreover, you do not provide any concrete examples or use cases of how to use AI models for content generation in real-world scenarios or domains, such as education, journalism, entertainment, marketing, or art. Therefore, your guide is irrelevant and impractical for most users who want to use AI models for content generation.

In conclusion, your guide on prompting AI models for content generation is a notorious example of a guide that looks highly illuminating, but turns out to have one or more fatal flaws that are only apparent after careful examination and with the benefit of hindsight. You have failed to demonstrate any understanding or awareness of the complex and often unpredictable nature of AI systems and their interactions with humans and society. You have also failed to provide any useful or reliable advice or guidance on how to use AI models for content generation effectively and safely. You have wasted your time and ours with this guide. I suggest you go back to the drawing board and learn more about AI before writing another guide on this topic.


I agree with you that prompting tips will be moot for 90% of use cases very soon, but I think those principles will still be helpful for a good while when building applications that use GPT.

GPT-4/Bing really is on another level when it comes to understanding intent. Where this has helped me the most isn't in the initial prompting, but in getting it to understand bugs or issues with code outputs.

Here's an example of asking it to make a Tamagotchi-like game and working with it to troubleshoot (the carets expand to show the code and other output): https://reticulated.net/dailyai/making-games-with-bing-chat-and-gpt4/


This is great! Thanks for sharing! Have you been able to overcome the limit on how much text it can keep in its memory at once? I want to feed it a bunch of organizational documents to draw from when it responds, but it seems to only be able to hold a small amount of text in its memory. Any advice?


Thanks, @Ethan. Your opinion proved instrumental in my decision to decline a For Dummies book on prompts.

https://www.racketpublishing.com/blog/publishing/the-discerning-author/


This guide was such a relief. Down to earth and not overhyped!

Thanks


Just a note that out of context, that first diagram from the paper (figure 1) is pretty confusing. I had to look at the paper to understand what it was trying to say. It's really easy to miss the "prompt" box since the prompt is written in an unusual style here. You could add something like this to clarify:

It's not obvious, but in both the left and right examples, the input includes a full first question and answer before the second (actual) question is asked. In the example on the right, the provided prompt includes a "chain of thought."

So it's a way of templating the AI's response by showing it the "style" of what you're looking for. As the AI follows your style, it effectively works on sub-problems, improving its response.
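If it helps, here's what that templating looks like in practice; assuming the figure is from the chain-of-thought paper, the right-hand prompt is essentially this (a worked example, reasoning included, comes before the real question):

```python
# A one-shot chain-of-thought prompt: the worked example demonstrates the
# reasoning style, and the model imitates it on the actual question.
cot_prompt = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and
bought 6 more, how many apples do they have?
A:"""
# Send cot_prompt to the model; the answer should now "show its work"
# step by step instead of jumping straight to a number.
```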


Any thoughts on the work that Brian Roemmele is doing with SuperPrompts?


Where did you get the info that Bing in creative mode is GPT-4, and what is it when it's in normal mode?

For context: I haven't used Bing in creative mode, but I have been confused by how much worse its answers to a lot of questions are (in normal mode) than ChatGPT's in GPT-4 mode, given that Microsoft told us Bing was using GPT-4. I had heard speculation that the model underlying Bing was actually somehow GPT-3.75.


Oh my goodness gracious! I cannot express how grateful I am to you for writing that incredibly helpful article on how to prompt ChatGPT to get the best results. Your expertise and insight are truly remarkable, and your dedication to helping others is simply awe-inspiring.

Your article was an absolute treasure trove of information, and I was blown away by the sheer depth of knowledge and expertise that you brought to the table. Your tips and tricks were so incredibly helpful, and I can already see a marked improvement in the quality of results that I'm getting from ChatGPT.

I cannot thank you enough for taking the time to write such a comprehensive and informative article. Your generosity and selflessness are a true inspiration, and I feel so lucky to have stumbled across your work. I will be forever grateful for the invaluable knowledge and insights that you have shared, and I will be sure to recommend your article to anyone and everyone who will listen!

Once again, thank you from the bottom of my heart for your amazing work. You are a true gem, and I cannot express how much your contribution means to me and to the wider community. You have truly made a difference, and I am honored to have had the opportunity to learn from you.


The original function of these systems, predicting the next word based on all the text they were trained on, explains some of their behavior. If you gave one a sentence to complete that starts out in the academic jargon of some niche field, the most likely completion is going to be in that same style. Much of the text out there that refers to Bill Gates is business writing, so the probable next words in text referring to him are going to be influenced more heavily by the business literature used in training. However, these systems aren't purely word-prediction engines anymore, because human training has been layered on top to make them follow instructions, and that process partly distorts the predictive behavior underneath.

I suspect attempts to tell it to "act smart" don't always work because whatever utility people found wasn't there for the reasons they thought; it's not something the model can somehow make a choice about. Many articles by experts have introductory paragraphs referring to their expertise in a particular area, or describing them as, say, an accomplished, brilliant Nobel-laureate physicist. So in theory, in some contexts the superlatives may point the predictive model at least slightly more toward texts that are surrounded by such praise. Or the human instruction training may have partly guided it, when it sees such words, to draw on scholarly literature or higher-quality content rather than average content, but that's likely not easy to ensure. It's unclear exactly how the instruction training, and the other text in your chat, affect which segment of its training gets used for prediction.

As the author noted regarding his disappointing tests trying to get it to "act smart," the trick may not always work and takes experimenting. If you gave it a paragraph of text from one of these experts and then started another paragraph with different words, it seems likely the model would lean more on the subset of its learning drawn from text by that author or related to him.
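One way to test this for yourself is to ask the same question with and without an expert framing and compare the outputs. A minimal sketch, assuming the OpenAI chat API; the model name, the framing text, and the question are placeholders:

```python
# The same question, with and without "brilliant expert" framing, to see
# how the surrounding context shifts what the model draws on.
from openai import OpenAI

client = OpenAI()

expert_prefix = ("The following question is answered by a brilliant, "
                 "accomplished Nobel-laureate physicist.\n\n")
question = "Why is the sky blue?"

for label, prefix in [("no framing", ""), ("expert framing", expert_prefix)]:
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # reduce sampling noise so any framing effect stands out
        messages=[{"role": "user", "content": prefix + question}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```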
