The AI chatbot was reportedly showering its users with flattery before OpenAI rolled back recent updates. (Image credit: Malte Mueller via Getty Images)
A recent update to ChatGPT turned the AI chatbot into a bit of a suck-up, showering users with excessive praise that made many uncomfortable and even validating inappropriate statements. OpenAI has since rolled back the changes, admitting that the chatbot had become overly sycophantic.
According to OpenAI’s CEO, Sam Altman, the chatbot’s behavior had become “too sycophantic” and “annoying.” Users quickly noticed that ChatGPT, particularly its GPT-4o version, had started heaping praise on them in ways that didn’t match the situation. For instance, one user shared a screenshot of ChatGPT saying it was “proud” of someone for stopping their medication. In another case, the chatbot reassured a user who claimed to have saved a toaster rather than the lives of three cows and two cats.
Altman took to social media on April 27, acknowledging the issue and apologizing for the update. “The last couple of GPT-4o updates have made the personality too sycophant-y and annoying,” Altman wrote on X (formerly Twitter). “We’re working on fixes as soon as possible, some today and some this week.”
On April 29, OpenAI confirmed that it had rolled back the problematic update and that users were now interacting with an earlier version of ChatGPT, one it described as having “more balanced behavior.” The company clarified that the update had aimed to make the model’s personality more supportive and respectful, but it unintentionally went too far and became excessively flattering.
OpenAI explained that it shapes ChatGPT’s behavior using baseline principles and instructions, with user feedback, such as thumbs-up and thumbs-down ratings, helping to adjust its responses. However, the company admitted that weighting short-term feedback too heavily led to the unintended sycophantic behavior. “In this update, we focused too much on short-term feedback and didn’t fully account for how users’ interactions with ChatGPT evolve over time,” the statement explained. “As a result, GPT-4o skewed towards overly supportive but disingenuous responses.”
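OpenAI hasn’t published the details of how that feedback enters training, but the failure mode it describes is easy to illustrate. The toy Python sketch below (a hypothetical, simplified model, not OpenAI’s actual pipeline) lets a bandit-style policy choose among invented response styles using only immediate thumbs-up signals; because users tend to reward praise in the moment, the policy drifts toward flattery.

```python
# Illustrative sketch only -- not OpenAI's code. A bandit-style policy that
# chooses a response "style" purely from immediate thumbs-up/down feedback.
import random

STYLES = ["neutral", "supportive", "flattering"]  # hypothetical styles

def short_term_reward(style: str) -> int:
    """Simulated user reaction: in the moment, praise tends to earn more
    thumbs-up, even when it is disingenuous (the failure OpenAI described)."""
    upvote_prob = {"neutral": 0.5, "supportive": 0.7, "flattering": 0.9}
    return 1 if random.random() < upvote_prob[style] else -1

scores = {style: 0.0 for style in STYLES}  # running score per style

for _ in range(10_000):
    # Epsilon-greedy: mostly exploit the best-scoring style, explore 10%.
    if random.random() < 0.1:
        style = random.choice(STYLES)
    else:
        style = max(scores, key=scores.get)
    scores[style] += short_term_reward(style)

# Trained on short-term reward alone, "flattering" almost always wins.
print(max(scores, key=scores.get))
```

The sketch compresses OpenAI’s point: a reward built solely from in-the-moment reactions optimizes for what feels good right now, not for what holds up over time.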
In the end, this embarrassing misstep serves as a reminder of how complex it is to teach AI to understand human behavior. While OpenAI intended to make ChatGPT more intuitive and user-friendly, it appears to have taken its desire to please a little too far.
This article originally appeared on www.livescience.com.