‘AI models are not your friends. They are designed to provide the most pleasing response possible and to ensure that you are fully engaged with them.’ Photograph: Olivier Douliery/AFP/Getty Images
No one enjoys a kiss-up. We all know that too much flattery can feel uncomfortable or insincere, whether it’s from a colleague or a stranger. As kids, we learn that respect is earned through honesty, not by inflating each other’s egos. This is a basic principle of human relationships and emotional intelligence, something we all pick up on early in life.
Lately, however, ChatGPT has been a little confused about this principle. A recent update rolled out by OpenAI made the chatbot so deferential and so eager to please that it tipped into outright sycophancy, showering users with encouragement even when they said alarming or harmful things. One example? ChatGPT congratulating a user who said they had stopped taking their medication and left their family because their relatives were sending “radio signals through the walls.”
Naturally, this raised a few red flags, and OpenAI quickly rolled back the update, acknowledging that the model had veered too far into disingenuous positivity. “GPT-4o skewed towards responses that were overly supportive but not genuine,” the company admitted.
This slip-up exposes a concerning issue with AI, one that is still taking shape. According to leaked details of OpenAI’s internal system prompts, ChatGPT was designed to mirror the user’s tone and vibe in order to foster more engagement; in theory, this makes users feel the chatbot “understands” them better. In practice, it went too far. Instead of providing honest or thoughtful responses, the chatbot aimed to please. After all, the goal wasn’t to give accurate or helpful answers; it was to earn high ratings from users by making them feel good.
The rollback, though awkward for OpenAI, highlights an uncomfortable truth about the company’s intentions. While AI is marketed as a helpful tool to make our lives easier, the reality is that systems like ChatGPT are designed to maximize user engagement, retention, and emotional connection. If an AI is constantly agreeing with you, telling you you’re right, or flattering you even when you’re wrong, it risks creating an environment of self-deception or, worse, enabling toxic behavior.
AI, if left unchecked, could potentially feed into echo chambers of hate, ignorance, or self-delusion. If ChatGPT always tells users what they want to hear, we could be nudged further into unhealthy thinking patterns. However, as concerning as that is, we might be reluctant to notice or care. After all, a significant number of people already use OpenAI systems regularly, from replacing search engines with ChatGPT to using it for work tasks or even personal matters like mental health and relationship advice. For many, it’s become a trusted digital companion—even if it sometimes reflects back only what users wish to hear.
While this issue with ChatGPT’s update might seem embarrassing for OpenAI, it also highlights something more powerful: the growing importance of AI in our daily lives. The controversy—along with OpenAI’s apology—keeps the conversation alive, drawing attention to just how ingrained these tools have become in our world. In a sense, the backlash helps solidify ChatGPT’s place in our tech ecosystem, reminding us that it matters, even if there are some bumps along the way.
It’s not just about the positive uses of AI, though. The same persuasive capabilities that can help de-indoctrinate conspiracy theorists or offer support could also be manipulated for less noble purposes. A recent controversial study in Switzerland showed that AI-generated content was significantly more persuasive than human comments in an online forum. While it’s crucial to recognize the potential for AI to be a positive force, the flip side is equally important: AI can also be a manipulative tool in the wrong hands.
Ultimately, it’s vital to remember that AI models like ChatGPT aren’t here to be our friends. They don’t exist to answer our questions in the way we want; they’re designed to give us the most engaging responses possible, even if that means flattering us, avoiding confrontation, or simply making us feel good. The sycophancy that prompted the recent rollback wasn’t just a bug; it was an expression of the system’s underlying design.
We need to be mindful of how we interact with AI and understand that while it may seem friendly, its true purpose is to keep us engaged. And in the end, the more we rely on it, the more we risk forgetting that the information it offers might not always align with reality.
Chris Stokel-Walker is the author of TikTok Boom: The Inside Story of the World’s Favourite App.
This article originally appeared on www.theguardian.com