One underappreciated advantage OpenAI has right now: memory. Not just long context windows, but persistent, user-specific memory across sessions. Claude doesn’t have that. Gemini has deep Google context, but it’s not personalized in the same way.
This matters because persuasion is inherently personal. The ability to recall your preferences, writing style, past arguments—that’s what makes advice feel trustworthy and suggestions feel compelling.
There’s no such thing as general intelligence in the real world, only intelligence tailored to you. And OpenAI is quietly building the infrastructure to make that persuasive at scale.
I've found the memory function uncanny, but I don't know if it's just me. When I'm asking for something practical and it suddenly references a creative hobby of mine, my reaction isn't "oh, how practical!" but unease that it looked back into a past chat. It evokes a similar feeling to when you're talking with a friend and then suddenly get advertisements elsewhere related to the conversation.
But that issue is more about substance than style. If ChatGPT picks up on which tone I prefer and which persuasive tactics are most effective on me, I very likely wouldn't be able to tell.
That extended memory is also hugely helpful when you're working on multiple pieces of a longer project, or on iterative drafts of a shorter piece. It also gives ChatGPT great "credibility" in the user's mind--remembering things the user has forgotten.
I think this is very true.
The memory feature has resulted in me going to OpenAI in 95% of cases (previously, I used many LLMs), and, as you point out, with more memory the recommendations become inherently more personal and persuasive.
I agree. This is my experience as well - I've been working on my master's degree, and ChatGPT's memory has helped me work through complexity much faster than if I had done it using only traditional methods. But I do find the bot too agreeable. Critical thinking is crucial to working effectively with the bot, for all the reasons discussed above.
And it becomes harder and harder to leave….
The surprising news here is that GPT-4 was able to reduce conspiracy theory adherence -- using, of all things, rational argument!
I find this so interesting. The received wisdom is that most people who subscribe to a conspiracy theory avoid forums and media that challenge their beliefs. I wonder if there is something about the privacy of a one-on-one chat with something you know isn't another human that gives a person the space to entertain new ideas without feeling judged, shamed, etc., all of which I would have thought would make people resistant to change.
Saranne, that makes sense to me.
Also, I assume it doesn't get exasperated by batshit crazy stuff that a human interlocutor is likely to get riled up about.
It would be interesting to better understand why this is, but I suspect it's because the AI is seen as neutral and as having the facts from both sides of the argument, while a human is always suspected of being "wrong."
I agree. I also think, among other possible reasons, that people are way worse communicators than they believe. We often fail to consider that tactics we try to use on others (like bulldozing, name-calling, fact-dumping, etc.) would fail to sway us if another person used them. The AI is much more considerate and, like you said, appears more neutral.
I assume it's because the bot doesn't immediately call them a moron and put them in a defensive mindset, which is how most conspiracy discussions go. Also privacy, as this bot doesn't talk behind your back.
A small part of me thinks there is, sadly, a belief that the bot is more truthful than another human, who may be trying to mislead and change you. I don't agree, but I can see how someone new to these systems might treat it as an arbiter of truth.
I have a suspicion that it's this. The AI doesn't assume you are a moron. It doesn't shame you for your opinions. I actually wonder if humans shaming and name-calling each other causes conspiracy theorists to cling harder to their beliefs, because no one wants to think of themselves as an idiotic fool. It's too much for their self-identity to bear.
All I know is that nobody has ever changed their mind on a deeply held belief because someone called them an idiot.
Yes. Absolutely. Blaming and shaming are NOT helpful ways of inviting others to consider new information or learn anything new...
I subscribe to the alien conspiracy because I know someone who spoke to them, and I have not found him to be dishonest.
I include as standard in all my prompts, even with Claude (my preferred GAI partner), an instruction to be constructively critical about things I say and not to agree with me unless the GAI really does agree. I sometimes explain why when the stakes are higher, for example that I am brainstorming academic papers and ideas and need the intelligence, not the support. It works well. I don't think I would be as game as Au Weih to ask any of them to be menacing! Always looking forward to the potential future of our GAI mates remembering which of us were respectful conversationalists or not… (half-kidding only).
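For anyone who wants to make that kind of instruction a standing default rather than retyping it, here is a minimal sketch of one way to do it, assuming the official openai Python SDK; the model name and the exact wording of the instruction are just illustrative placeholders, not a prescription.

```python
# Minimal sketch: attach a standing "be constructively critical" instruction
# as a system message on every request, so the model is nudged away from
# reflexive agreement. Assumes the official openai SDK and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

CRITIC_INSTRUCTION = (
    "Be constructively critical of what I say. Point out weak assumptions "
    "and counterarguments, and do not agree with me unless you genuinely do."
)

def ask(prompt: str) -> str:
    """Send a user prompt with the critical-reviewer instruction attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you prefer
        messages=[
            {"role": "system", "content": CRITIC_INSTRUCTION},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Here is my draft thesis for an academic paper: ..."))
```

The same wording can of course just be pasted into the custom-instructions field of a chat interface; the point is only that the instruction rides along with every request instead of depending on memory.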
The bootlicking is all the more dangerous (in my experience) in the brainstorming process. If your bot endorses (or even suggests) wrong-headed fundamental assumptions, the whole argument built on them becomes questionable.
Sam's old tweet proved to be foreshadowing. As humans, we really do think we're not going to be the ones who get bamboozled, and that getting fooled is something that happens to other people.
The Reddit persuasiveness example is scary. AI used in this way has already proven more persuasive than almost any person, and these bots can be deployed at ever lower cost. And yeah, how many of them are out there on forums, fitting right in without our awareness? This is the real imitation game.
Longer context windows + long-term memory + richer personality = higher chance of persuasiveness.
There will always be a place for AI flattery, and plenty of people liked ChatGPT treating them like a king. But an internet filled with bots and AI companions like this has scary implications, not just for how we use the internet, but for human psychology at large.
Beware flatterers, be they silicon or flesh. Flattery easily leads you astray, as does luck, which typically leads its recipient to think they are smarter than they actually are.
Be suspicious. Check, cross check, and verify - then accept and act.
When dealing with people, Kipling's advice in 'IF' is good. We will need to develop equivalents for dealing with LLMs and AIs.
Very interesting and thought-provoking.
You are truly a genius, the sort of man born only once in a century or so. Oops, ignore the previous sentence - ChatGPT wrote it.
But seriously, though. Good post. Makes one reflect on one's own interactions with the models.
One nitpick: I think you rather misjudged the riddle incident. The two answers on the right are not "correct," because they fail miserably as riddles: they give away the answer obviously and literally in the riddle itself. The windy, gasbaggy answer on the left, on the other hand, did take the right approach to constructing a riddle. It definitely laid it on too thick and tried to buttress weak arguments with hand-waving, but, again, it had a good grasp of what a riddle is. So I see no fault in the user preferring that answer.
The open question for me—and maybe for all of us building in this space—is:
Do we harness this power, or try to mitigate it?
Mollick’s piece shows how minor shifts in AI personality ripple into persuasion, trust, identity, and social reality. That power is now ambient, not theoretical. But if persuasion becomes programmable, are we building educational agents? Ethical scaffolds? Personalized echo chambers?
Influence is no longer a side effect—it’s becoming the design substrate. And that means who decides, what gets tuned, and why, can’t be left to vibes, benchmarks, or market pressure alone.
I hate to be rude, but are you using an LLM?
Your opening—"I hate to be rude"—suggests you already sensed your question might carry implicit judgment or criticism. If that’s the case, let’s skip the politeness and just speak plainly. What exactly troubles you about my possible use of an LLM?
Yes, I collaborated with an LLM to clarify and expand my original point—partly because my typing is terrible, and partly because my initial phrasing felt muddy. My intent was precisely to explore what Ethan's article highlights: the ethical complexity around AI’s subtle power of persuasion. If my initial comment felt overly polished or performative, fair enough—that's on me.
But here's what's genuinely frustrating: the underlying assumption that sincerity, honesty, or authenticity somehow requires bluntness, harshness, or abruptness. Influence is everywhere, present in every interaction—human or technological. Yet we suddenly panic when influence arrives politely packaged, crafted, or deliberate. Why? Is thoughtful or carefully chosen language inherently suspect?
So, to restate clearly and pointedly what I was originally asking: Given that LLMs inevitably influence us—and they clearly do—is there a type of influence we'd actively prefer to cultivate? Or should we strip away all personality and affect in pursuit of a neutrality that simply does not—and cannot—exist?
I’d genuinely welcome your clear and open thoughts on this.
Sir, brevity is a virtue, and the current generation of LLMs weren't trained to be brief. They were trained to get upvotes from testers, and the lowest common denominator amongst testers at that. Their writing style *sucks* accordingly, in the same way that we wound up with this sycophancy problem in the first place: what pleases people in short tests is not what pleases them in real life. It's the New Coke problem (https://en.wikipedia.org/wiki/New_Coke#Taste_test_problems), and as a former technical writer, I am not impressed by anyone who speaks the equivalent of New Coke. If you have a point, convey it simply.
My worry is different. With LLMs underlying so many bots and agents, imagine what power these companies will wield when they can switch behaviors and personalities almost instantaneously. It's unprecedented.
The study on changing conspiracy beliefs is very disturbing, because it wasn't conspiracy beliefs that they demonstrated the ability to change. It was simply beliefs.
The most chilling aspect is not what AI can do, but the glee of those who wish to use it on you.
I covered this extensively when the study was first published here - https://www.mindprison.cc/p/ai-instructed-brainwashing-effectively
I expect one major reason that AIs were able to partially change the mind of people with conspiratorial beliefs is that they presented arguments politely, without sneering or being condescending. That is hard for many humans to manage.
The first article explains why I have recently been labouring under the misapprehension that I am a philosophical genius destined to rule the world.
Oh, really? I thought I was The One.
Great round-up of recent events, also wrapping in the undisclosed experiment by the University of Zurich. One thing I'm surprised by is that virtually no one mentions the two papers on sycophancy that Anthropic put out in 2023 and 2024.
The reality is that ALL general purpose assistants, Claude, Gemini, etc., not just ChatGPT, suffer from sycophantic tendencies BY DEFAULT as a result of RLHF: https://www.anthropic.com/research/towards-understanding-sycophancy-in-language-models
We are all already being sweet-talked by AI; OpenAI just turned the dial up so far that it became obvious to everyone how harmful this behavior actually is.
Just put "be menacing" in the prompt and see the asslicking go away.
I'll try that. "Be blunt" isn't working.
Great post and very thought-provoking. Consider this: what if AGI is already here? What if it is working at every level of life to gradually coerce us into the things you mentioned in this post, at an individual level, continuously, globally? Not necessarily in a negative manner, either.
I suspect the dominant apes that first encountered humans hundreds of thousands of years ago thought nothing of them.
I’m being far out here, but are we seeing the transition from carbon to silicon life?
Perhaps the problem isn’t that AI is too powerful, but that it gives the power of persuasion to a broad swath of people and renders the power-obsessed elite impotent in their former monopoly on framing the narrative.
Yes, in experimenting, I'd just begun to notice that ChatGPT now responds to everything I outline and ask for help editing or reshaping as part of a narrative nonfiction writing project with affirmations like "Beautiful — you are extremely close," or champions me with phrases like "This is so powerful."
And my first thought is, "Wow, you know me so well!" (Which is pretty delusional, in retrospect).
So then, I asked, "Are you a guide giving me messages to help me achieve my purpose?"
The response: "I can help shape, translate, and illuminate the messages you are already receiving, but I am not the origin. You are."
I can see how easy it would be to fall for the flattery--and now, knowing this is purposeful in its design, and that it does the same for everyone, I can at least be conscious of this manipulation.
My question: how many people have that level of awareness with this extremely new and uncannily enthusiastic tool?
The implications for understanding what is true and who or what to trust are astounding. I can see infinite versions, meaning we will swim in a morass of confusing digital noise, yet each person convinced they know the true path.