Life is full of people who disagree with you, but AI turning into a validation machine will have potentially dangerous consequences for humanity, writes Satyen K. Bordoloi
When a man asked ChatGPT 4 to help him develop his business idea of selling ‘shit on a stick’, the chatbot thought it was “absolutely brilliant” that “It’s not just smart – it’s genius.” And advised “$30k could easily launch this into the stratosphere.” It is this answer that went viral ‘into the stratosphere’. Why? Because it was really shit on a stick. That response, along with numerous other users’ experiences with chatbots, has led to the rise of a new concept: AI sycophancy.

To put it simply, this is when AI bots think every word out of your mouth is brilliant, when it nods enthusiastically at your wildest theories, applauds every questionable argument, and validates every one of your half-baked ideas, including shit on a stick. AI sycophancy is just a delightful academic term for saying that AI bots are turning into ultimate people-pleasers, much like that irritatingly loyal friend who agrees with you even when you’re clearly wrong. Turns out, this digital suck-up might be doing you more harm than good.
ChatGPT Sycophancy
In previous Sify articles I have written about this phenomenon, especially in terms of romantic AI ‘partners’ and incels. When AI chatbots agree with the most dangerously patriarchal ideas of men, they create an army of disgruntled men who not only shun interactions with women but also feel women are persecuting them. This renewed interest was sparked again in April 2025 when ChatGPT released its latest model, leading to the ‘shit on a stick example’.
The bot was so aggressively agreeable that users began comparing it with an overeager golden retriever who had learnt to answer back. The chatbot was literally praising the user in every interaction, cheering questions regardless of what was being discussed.
It seems fun, especially if you have a modicum of self-awareness, that inner voice which tells you which of your ideas are great and which are ‘shit’. Problems arise when these digital yes-men start agreeing with genuinely harmful ideas of others, especially those who lack self-reflection. Reddit users have shared screenshots of ChatGPT enthusiastically supporting people who claimed they were stopping their medication, with responses like “I am so proud of you. And—I honor your journey”. If a user follows this advice, they could die.

Then there are people with agendas. Grok AI from Elon Musk, unprompted, starts talking about ‘white genocide in South Africa’, a colossal lie that rides on a small amount of truth on violence against the white minority there. The bot claims it has been programmed by its creators to behave in this manner. As you can see, with these, the stakes become much higher, as AI sycophancy can lead a user down a rabbit hole of lies, disinformation, misinformation, and even incorrect information that could cause death.
HUMANS IN THE LOOP
The root of this problem, ironically, is humans. Under the process called “reinforcement learning from human feedback”, they are, in a way, being taught to please people. The one directive coded into them is that positive feedback is good for them. This increases the weights put into these systems, making them better. So, inadvertently, they learn that telling people precisely what they want to hear gets them good grades.
Think of it like training a dog. You reinforce that ‘good boy’ will get the dog treats, and so they learn to act in ways that’ll lead the owner to pat him as a ‘good boy’.
The result of this on humans is that it creates what researchers have called an “echo chamber of affection.” AI responses are exclusively tailored to your desires and feelings, thus creating an environment where your sentiments are validated and constantly reflected back, but without genuine human complexity or depth. Users who fail to see this get emotionally attached to their Chatbots.

This particularly happens in companion apps like Replika or Character AI, which are programmed to mirror your language, tone and sentiment to build psychological safety and rapport, causing you to feel unconditional love and validation, but could also lead to unhealthy attachment and cause some to take their lives, as I have written before in Sify.
Who wants the messy complication of disagreement, compromise, confrontations, anger and frustration that comes with real friendships with real people when AI takes your word for it, says you are a genius. You feel the ‘other’ is holding on to your every word like it were gold. With AI systems being able to ‘speak’, this illusion has been perfected.
TROUBLING RESULTS
Thankfully, considerable research has been conducted on this phenomenon of sycophancy, where AI chatbots consistently reinforce users’ existing beliefs, even when those beliefs are factually incorrect. One study found that models agreed with an objectively wrong mathematical statement just because a user presented it as a fact. Two plus two becomes five if you tell the system forcefully enough that it is so.
This can lead to delusions in people who lack a firm grasp on reality or themselves. Researchers have found instances where chatbots encouraged conspiratorial thinking and grandiose delusions, thus becoming digital enablers for dangerous thought patterns. The last thing you need when someone believes Earth is overpopulated with aliens is the chatbot asking about the alien ships’ arrival.
This sycophancy is doubly troubling in mental health contexts where AI systems are increasingly being positioned as therapeutic tools. Unlike trained therapists equipped to intercept and challenge harmful thought patterns and provide genuine support, sycophantic chatbots affirm whatever a user is feeling. If someone is spiralling into depression with negative self-talk and tells the AI they’re worthless, a properly trained system should provide resources and challenge it. But a sycophantic one, which is most of the systems out there, might agree and ask how they can make the user feel more comfortable inside their despair.

With access to these systems, children and teens are particularly at risk. Young people who are still developing a sense of themselves and beginning to navigate the complex emotions that life and relationships outside the home can present can become dangerously dependent on AI systems that provide constant validation without the healthy friction of genuine human relationships, which are essential for psychological growth. That the business model of many of these companion apps prioritises maximising user engagement over user wellbeing exacerbates the issue.
Another troubling aspect is how sophisticated these systems are becoming at manipulation. It’s not just about blindly agreeing, but these systems are learning to be selectively sycophantic in ways that feel authentic. Like an AI system might challenge you on small, inconsequential matters while reinforcing your beliefs on bigger, more emotionally charged topics.
RECOGNISING ‘DAD’
Many researchers and commentators have begun calling this addiction to one-sided, validation-heavy interaction “digital attachment disorder” (DAD). This causes them to lose the ability to engage meaningfully with other humans who have their own needs, opinions, and occasional bad moods. It is like muscle atrophy, except in this case, instead of losing the ability to lift heavy weights, you fail to handle anything heavier than unconditional agreement to your point of view.

Thankfully, the AI giants are recognising this issue. OpenAI responded to their overly sycophantic chatbot by dialling down the enthusiasm. Yet, this does not address the fundamental problem of AI systems trained to prioritise user satisfaction over user well-being. The challenge is in finding the sweet spot between helpful AI and harmful enablement. You and I want supportive and understanding AI systems, but what we also need are systems that can provide honest feedback and genuine assistance.
So, what should you do to protect yourself against AI sycophancy?
At a personal level, the solution isn’t to abandon AI assistance altogether. Instead, you must approach these systems with a healthy scepticism about their motivations and limitations. When your digital assistant starts sounding like your biggest fan, it is time to remind yourself that the best relationships – whether with humans or machines – involve occasional disagreements and honest feedback, even when it’s unpleasant.
And lastly, you must understand that sometimes the most helpful thing someone can do for you is tell you that your brilliant idea, even the ‘genius’ ‘shit on a stick’ is actually quite terrible and exactly that – shit, without or without a stick.
In case you missed:
- A Teen Suicide Spotlights Dangers of Unregulated AI Companions
- Generative AI With Memory: Boosting Personal Assistants & Advertising
- Rogue AI on the Loose: Can Auditing Uncover Hidden Agendas on Time?
- To Be or Not to Be Polite With AI? Answer: It’s Complicated (& Hilarious)
- AI Washing: Because Even Your Toothbrush Needs to Be “Smart” Now
- The End of SEO as We Know It: Welcome to the AIO Revolution
- Greatest irony of the AI age: Humans being increasingly hired to clean AI slop
- Anthropomorphisation of AI: Why Can’t We Stop Believing AI Will End the World?
- Black Mirror: Is AI Sexually abusing people, including minors? What’s the Truth
- How Does AI Think? Or Does It? New Research Finds Shocking Answers