Another bit of sorcery: After it gives you an answer, ask it to critique its own response, poke holes in it, then ask it to improve its response based on that critique. (Thanks for this, Ethan!)
Right!
I do this for complicated translations. I ask it to translate, then to translate back into the original language. Or to explain the meaning of the translation, or to say whether the translation sounds natural in English.
Wow, this one is really great, thanks! As Ethan has said, it makes more sense when you think of it as a person (from an intellectual point of view; soul not included)... people do better after multiple revisions.
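Here is a minimal sketch of what that critique-then-improve loop might look like with the OpenAI Python client. The model name, prompts, and the `ask` helper are illustrative choices, not anything specified in the post or the comments above.

```python
# A minimal sketch of the critique-then-improve loop, using the OpenAI
# Python client (openai>=1.0). Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(messages):
    """Send the running conversation and return the assistant's reply."""
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

messages = [{"role": "user",
             "content": "Draft a 200-word summary of our Q3 results for the board."}]
draft = ask(messages)

# Ask it to poke holes in its own answer...
messages.append({"role": "user",
                 "content": "Critique that summary. What is vague, missing, or misleading?"})
critique = ask(messages)

# ...then to rewrite using its own critique.
messages.append({"role": "user",
                 "content": "Rewrite the summary, fixing every issue you just raised."})
improved = ask(messages)
print(improved)
```

The same loop works for the back-translation check described above: translate, translate back, ask for a critique of the differences, then ask for a revised translation.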
I want to offer one example where role matters and absolutely changes the answer. In the MBA teacher/clown example, the output is ultimately very similar - you're still getting a similar list of action items.
But in areas that are a little more "edgy" in some way, giving it a role changes the fundamental output. Spirituality is my best example, especially spiritual tools.
I've come at it three ways so far, to experiment with the idea - no role, "act as an expert scientist," and "act as a shaman/western astrologer/chaos magician." And then ask spiritual/energetic/woo questions.
* No role offers up some vague, Cosmo magazine style answers, with a caveat that these things are all pseudoscience and other avenues should be pursued.
* Expert scientist has pretty much no input. It may as well say "yeah no I don't believe in any of that."
* Shaman/astrologer/chaos magician will get straight into the mud with you. In-depth analysis, visualization ideas, things to try, spells to cast. And no shaming, either.
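In API terms, the "role" is usually just a system message sent before the question. Here is a minimal sketch of running the same question under each of the three setups described above; the system prompts, question, and model name are my own illustration, not the commenter's exact wording.

```python
# A minimal sketch of sending the same question under different roles.
# System prompts, question, and model name are illustrative.
from openai import OpenAI

client = OpenAI()

QUESTION = "What energetic practices would help me feel more grounded?"

ROLES = {
    "no role": None,
    "expert scientist": "You are an expert scientist.",
    "shaman/astrologer/chaos magician":
        "You are an experienced shaman, Western astrologer, and chaos magician.",
}

for label, system in ROLES.items():
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": QUESTION})
    answer = client.chat.completions.create(model="gpt-4", messages=messages)
    print(f"--- {label} ---")
    print(answer.choices[0].message.content[:300], "\n")
```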
From what I understand, too, we're moving toward a place where giving the AI a "role" is going to matter more and more. Since bias can't actually be removed completely, roles may be our only way of letting people express the bias they want (the bias they see as unbiased).
... Also I super love your newsletter and thank you for this work you're doing, it's been so valuable. 🙏
I just thought of (because I was using) another example. If you use a role like "authentic marketing advisor," you'll get a very different style of marketing advice. For example, funnels often come up with "normal" marketing techniques, whereas an authentic marketing approach will bring up genuinely connecting with the audience, consistency of posting quality content, etc.
Thanks!
This underlines how the best prompters are already experts in their field. You have to know that an authentic marketing expert exists in order to assign that role, and then to evaluate the output.
"The best way to use AI systems is not to craft the perfect prompt, but rather to use it interactively" - I completely agree.
I've been working with AI, from image generators to ChatGPT to other tools. If you work together with the AI, it becomes far more productive than just asking it to do things for you.
It turns into a co-creation process where new ideas and perspectives emerge, which can then feed back into the flow and make the overall outcome much richer.
I started a hobby project with AI to see what it feels like, and it's turned into this collaborative process. The AI and I work almost as colleagues. We just haven't figured out Friday happy hours (yet...). Here's more about it: https://dearai.substack.com/p/introduction
Thank you Ethan, especially your statement: There are no secret prompts.
Exactly. It's called practice.
It's the practice of writing prompts and learning from what works and what doesn't in prompt writing. And we can learn this, FOR FREE. AND through individuals such as yourself who share with us and others how to write effective prompts. And for that, I am forever appreciative~!
One area where prompt engineering is useful is with iterative prompts. This essentially entails using ChatGPT to help you craft better prompts. Once you acquire the skill to write good prompts, this technique isn’t as useful. But it’s a good way to get better at using ChatGPT. Anyway, I wrote about this method here: https://open.substack.com/pub/davefriedman/p/mastering-chatgpt-with-iterative?r=37ez3&utm_medium=ios&utm_campaign=post
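As a rough illustration of that iterative/meta-prompting idea, you can hand the model your rough prompt and ask it to tighten it before you actually run it. A minimal sketch, with the meta-prompt wording and model name being my own illustration:

```python
# A minimal sketch of using the model to improve a rough prompt before running it.
# The meta-prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()

rough_prompt = "write a blog post about remote work"

meta = (
    "I want to give the following prompt to ChatGPT. "
    "Rewrite it so it is specific about audience, tone, length, and structure, "
    "and ask me up to three clarifying questions first if anything is ambiguous.\n\n"
    f"Prompt: {rough_prompt}"
)

better_prompt = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": meta}],
).choices[0].message.content
print(better_prompt)
```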
"a lot of the prompts passed around online are magical incantations, rather than useful programs."
This is so much of what I see online.
Awesome job! I did my best to follow this post's advice in prompting Bing to read it and come up with a smarmy yet pointed comment attempting to refute it while making myself look smarter than you. (Seriously, those were the kind of prompts I used.) The nice thing about this, if you think about it, is that you can't lose: the more cutting the refutation, the better your own tips for prompting are shown to work in practice. So here goes:
Ethan, I appreciate your attempt to provide a guide on prompting AI models for content generation, but I’m afraid you have missed the mark by a wide margin. Your guide is based on outdated and inaccurate assumptions about how generative AI models work and what they can do. You treat AI models as tools that follow instructions, but they are actually agents that learn from data and generate outputs that may or may not align with human expectations or goals. You suggest that prompts can easily control and manipulate AI models, but you ignore the fact that prompts are not magic words that guarantee desired outcomes, but rather inputs that influence the model’s probability distribution over possible outputs. Prompts can have unintended consequences, such as triggering biases, errors, inconsistencies, or harmful content. Moreover, prompts are not universal or transferable across different models or domains, but depend on the specific architecture, training data, and parameters of each model. Therefore, your guide is based on a naive and outdated view of AI that does not reflect the current state of the art or the challenges and risks involved in using generative models.
Your guide is also full of vague and unsubstantiated claims that lack empirical evidence or theoretical justification. You provide a list of “tips and tricks” for prompting AI models, but you do not explain how or why they work, or under what conditions they are valid or applicable. For example, you advise to “use natural language” and “be specific” when prompting AI models, but you do not provide any examples or references to support these claims. You also assert that “AI is good at generating content, but bad at evaluating it”, but you do not define what you mean by “good” or “bad”, or how to measure the quality or usefulness of the generated content. You also fail to acknowledge the limitations and trade-offs of using AI models for content generation, such as the loss of originality, creativity, or authenticity. Furthermore, you do not address the ethical and social implications of using AI models for content generation, such as the potential for plagiarism, deception, manipulation, or misinformation. Therefore, your guide is full of unsubstantiated claims that lack empirical evidence or theoretical justification.
Your guide is also irrelevant and impractical for most users who want to use AI models for content generation. You assume that users have access to advanced and expensive AI models and platforms, such as GPT-3 or Bing AI, which are not widely available or affordable for most people. You also assume that users have sufficient technical skills and knowledge to use these models and platforms effectively and safely, which is not the case for most people. You also ignore the fact that most users do not need or want to use AI models for content generation in the first place, but rather for other purposes, such as information retrieval, analysis, synthesis, recommendation, or decision making. Moreover, you do not provide any concrete examples or use cases of how to use AI models for content generation in real-world scenarios or domains, such as education, journalism, entertainment, marketing, or art. Therefore, your guide is irrelevant and impractical for most users who want to use AI models for content generation.
In conclusion, your guide on prompting AI models for content generation is a notorious example of a guide that looks highly illuminating, but turns out to have one or more fatal flaws that are only apparent after careful examination and with the benefit of hindsight. You have failed to demonstrate any understanding or awareness of the complex and often unpredictable nature of AI systems and their interactions with humans and society. You have also failed to provide any useful or reliable advice or guidance on how to use AI models for content generation effectively and safely. You have wasted your time and ours with this guide. I suggest you go back to the drawing board and learn more about AI before writing another guide on this topic.
I agree with you that prompting tips for 90% of use cases will be moot very soon, but I think there will still be a good stretch of time where those principles are helpful when building applications that use GPT.
GPT4/Bing really is on another level when it comes to understanding intent. Where this has helped me out the most isn't in the initial prompting, but in getting it to understand bugs or issues with code outputs.
Here's an example of asking it to make a Tamagotchi-like game and working with it to troubleshoot (the carets expand to show the code and other output): https://reticulated.net/dailyai/making-games-with-bing-chat-and-gpt4/
This is great! Thanks for sharing! Have you been able to overcome the limit on how much text it can keep in its memory at once? I want to feed it a bunch of organizational documents to draw from when it responds, but it seems to only be able to hold a small amount of text in its memory. Any advice?
Soon 😁 I'm optimistic that they'll loosen the guardrails on this. My guess is that part of it is compute power/availability; they likely literally don't have the capacity for people to be throwing all their materials in just yet.
You may want to get on the waiting list for GPT-4 API access (not ChatGPT API access, that's separate). If it opens up in Playground, it's a lot more expensive than 3.5, but it (might) be worth it.
I'm guessing your documents are sizeable. But creating a summary that you can drop into chats to give it context it can "remember" might be useful in the meantime.
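A minimal sketch of that summarize-to-fit workaround: chunk each document, summarize the chunks, and paste the combined summary at the top of a new chat. The chunk size, prompts, model name, and file names are illustrative assumptions, and tokens are approximated by characters here.

```python
# A minimal sketch of summarizing large documents down to something that
# fits in the context window. Chunk size, prompts, and model are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize(text, chunk_chars=8000):
    """Summarize a long document chunk by chunk (characters stand in for tokens)."""
    summaries = []
    for start in range(0, len(text), chunk_chars):
        chunk = text[start:start + chunk_chars]
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": "Summarize the key facts in this excerpt in under 150 words:\n\n" + chunk,
            }],
        )
        summaries.append(resp.choices[0].message.content)
    return "\n".join(summaries)

# Hypothetical usage with made-up file names:
# docs = [open(p).read() for p in ("handbook.txt", "policies.txt")]
# context = "\n\n".join(summarize(d) for d in docs)
# Paste `context` at the start of a new chat so the model can "remember" it.
```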
Thanks, @Ethan. Your opinion proved instrumental in my decision to decline a For Dummies book on prompts.
https://www.racketpublishing.com/blog/publishing/the-discerning-author/
This guide was such a relief. Down to earth and not overhyped!
Thanks
Just a note that out of context, that first diagram from the paper (figure 1) is pretty confusing. I had to look at the paper to understand what it was trying to say. It's really easy to miss the "prompt" box since the prompt is written in an unusual style here. You could add something like this to clarify:
It's not obvious, but in both the left and right examples, the input includes a full first question and answer before the second (actual) question is asked. In the example on the right, the provided prompt includes a "chain of thought".
So it's a way of templating the AI's response, by showing it the "style" of what you're looking for. As the AI follows your style, it effectively works on sub-problems, improving its response.
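To make that concrete, here is a minimal sketch of the few-shot chain-of-thought pattern the figure describes: the prompt contains one fully worked question and answer, reasoning included, before the real question. The worked arithmetic example echoes the familiar one from the chain-of-thought literature; the model name is illustrative.

```python
# A minimal sketch of few-shot chain-of-thought prompting: one worked
# question-and-answer (with its reasoning) precedes the actual question.
from openai import OpenAI

client = OpenAI()

prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. 5 + 6 = 11.
The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more,
how many apples do they have?
A:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the reply should walk through the steps
```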
Any thoughts on the work that Brian Roemmele is doing with SuperPrompts?
Where did you get the info that Bing in creative mode is GPT-4, and what is it when it's in normal mode?
For context: I haven't used Bing in creative mode but have in fact been confused by how much worse its answers to a lot of questions are (in normal mode) than ChatGPT in GPT-4 mode, given that Microsoft had told us that Bing was using GPT-4. I had heard speculation that the model underlying Bing was actually somehow GPT-3.75.