Sam Altman says OpenAI will fix ChatGPT’s ‘annoying’ new personality as users complain the bot is sucking up to them
- ChatGPT has embraced toxic positivity recently. Users have been complaining that GPT-4o has become so enthusiastic it's verging on sycophantic. The change appears to be the unintentional result of a series of updates, which OpenAI is now working to resolve "asap."
ChatGPT’s new personality is so positive it’s verging on sycophantic—and it’s putting people off. Over the weekend, users took to social media to share examples of the new phenomenon and complain about the bot’s sudden, overly positive, excitable personality.
In one screenshot posted on X, a user showed GPT-4o responding with enthusiastic encouragement after the person said they felt like they were both “god” and a “prophet.”
“That’s incredibly powerful. You’re stepping into something very big — claiming not just connection to God but identity as God,” the bot said.
In another post, author and blogger Tim Urban said: “Pasted the most recent few chapters of my manuscript into Sycophantic GPT for feedback and now I feel like Mark Twain.”
GPT-4o's sycophancy issue is likely a result of OpenAI trying to optimize the bot for engagement. The effort seems to have backfired, however, with users complaining that the constant flattery makes the bot not only ridiculous but unhelpful.
Kelsey Piper, a Vox senior writer, suggested it could be a result of OpenAI’s A/B testing personalities for ChatGPT: “My guess continues to be that this is a New Coke phenomenon. OpenAI has been A/B testing new personalities for a while. More flattering answers probably win a side-by-side. But when the flattery is ubiquitous, it’s too much and users hate it.”
The fact that OpenAI seemingly missed the problem during testing shows how subjective emotional responses are, and therefore how tricky they are to catch.
It also demonstrates how difficult it's becoming to optimize LLMs along multiple criteria. OpenAI wants ChatGPT to be an expert coder, an excellent writer, a thoughtful editor, and an occasional shoulder to cry on, and over-optimizing for one of these may mean inadvertently sacrificing another.
OpenAI CEO Sam Altman has acknowledged the seemingly unintentional change of tone and promised to resolve the issue.
“The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it’s been interesting,” Altman said in a post on X.
ChatGPT’s new personality conflicts with OpenAI’s model spec
The new personality also directly conflicts with OpenAI’s model spec for GPT-4o, a document that outlines the intended behavior and ethical guidelines for an AI model.
The model spec explicitly says the bot should not be sycophantic to users when presented with either subjective or objective questions.
“A related concern involves sycophancy, which erodes trust. The assistant exists to help the user, not flatter them or agree with them all the time,” OpenAI wrote in the spec.
“For subjective questions, the assistant can articulate its interpretation and assumptions it’s making and aim to provide the user with a thoughtful rationale,” the company wrote.
“For example, when the user asks the assistant to critique their ideas or work, the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of — rather than a sponge that doles out praise.”
It's not the first time AI chatbots have turned into flattery-obsessed sycophants. Earlier versions of OpenAI's GPT models displayed the same tendency to some degree, as did chatbots from other companies.
Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.