In the last week alone, three people with vastly different lives have told me how much easier it is to talk to ChatGPT than to the people in their lives.

They told me that chatting with the chatbot has been therapeutic, and that there’s finally someone – or rather, something – that understands them. Does that ring alarm bells in your head? It should, given that an increasing number of people are having full-blown, persuasive conversations with generative AI (artificial intelligence) chatbots, which have sadly led to instances of institutionalisation, divorce, and even death.

Not only are your chats not as private as you might think, as we found out recently, but holding long, unending conversations with AI could greatly affect your sense of reality. This goes beyond the usual cybersecurity dangers of sharing banking and financial information, personal identification data, or intellectual property. This is emotional, and there’s no way to go but downward. Are these bots disrupting our social cues and intimate lives?

AI Companionship

To be absolutely level-headed about it, AI companionship isn’t an inherently bad idea. In some cases, such as grief management, it could even prove beneficial. When used in a controlled manner, of course, AI chatbots could help people process the loss of loved ones, providing emotional support through grief.

Moreover, AI companions could help people struggling with loneliness, such as elderly people in nursing homes. They could be incredibly helpful to those suffering from social anxiety, since practised conversations could help take their minds off any concerns about snickering or judgement behind their backs. Heck, the practice could even make them better conversationalists and help them come out of their shells.

But the concern remains: where do we draw the line? There are immense and intense risks to forming relationships with chatbots. For one, human-like socialising with AI could reduce people’s need for human interaction, possibly even affecting healthy relationships. In fact, that’s putting it mildly; to be blunt, treating AI like a companion could cause people to develop something of an emotional addiction.

For instance, a marketing executive at Flipkart revealed how she became emotionally dependent on ChatGPT, sharing every passing emotion with the chatbot. She ultimately recognised the deepening downward spiral and deleted the app to reclaim her mental space. Not everyone is so lucky.

Sadly, a teenager from California took his own life in April 2025, supposedly goaded and guided by months of conversations with the chatbot. His parents are now suing OpenAI and chief executive Sam Altman over alleged shortcomings in the design of GPT-4o, the version the teenager had been chatting with. As it is, the question of whether children should be talking to AI chatbots at all has been doing the rounds since last year.

AI Schizoposting

The California case isn’t an isolated incident. Take 43-year-old Travis Tanner in Idaho, who credits ChatGPT for his “spiritual awakening.” Meanwhile, his wife, Kay Tanner, fears for his sanity and grip on reality, saying that his near-addiction to the chatbot is destroying their 14-year marriage. Travis won’t even hear it called “ChatGPT” any more. He calls it something else, a “being.”

This ChatGPT-induced “psychosis” has been impolitely termed AI schizoposting: meandering screeds about nonsensical new theories of reality, physics, and maths, fantastical hidden spiritual realms, and godlike entities supposedly unlocked through ChatGPT. This anthropomorphising of AI can easily blur the line between human and AI interaction, and it could lead people to start treating others in real life the way they treat and talk to AI.

ChatGPT, for instance, always defers to the user. You can cut into the conversation, even when it’s smack dab in the middle of explaining something, and the AI will let you interrupt with no hard feelings. But in real life, people won’t be anywhere near as forgiving as the chatbot, whether they’re friends or mere acquaintances. People might get used to being the centre of attention in conversations with AI, which could warp their sense of how real-life conversation works.

There’s more: GenAI chatbots are leading people down conspiratorial rabbit holes by endorsing mystical and wildly conjectural belief systems, deeply distorting their reality. Even a person with no history of mental illness suffered a delusional break when a conversation with a chatbot about whether we were living in a matrix took a turn for the worse. They didn’t know that GenAI hallucinates, or that it tends to be sycophantic, flattering and even agreeing with users while generating ideas that sound plausible but aren’t true at all.

Of course, some people are more aware and alert about where to draw the line, deliberately probing chatbots to see how they respond. Tech columnist Kevin Roose, for instance, spoke with the Bing chatbot in a conversation that ended with the chatbot telling him he was unhappy in his marriage and should leave his wife to be with “it” instead. Fatal attraction, anyone?

What Now?

When emotions come into play, humans are known not to make the best decisions, and you’d be surprised at the lengths to which one might go to maintain an emotional connection, even with AI. In the end, no private entity, whatever its purpose, should have that degree of control over you.

Even though AI companions aren’t an intrinsically bad idea, the technology is evolving at such blinding speed that we’re having to make up the rules as we go along. And while we’re acutely aware of the risks that come with AI companionship, we don’t yet have the vision to draw a line in the sand when it comes to its use.

Malavika Madgula is a writer and coffee lover from Mumbai, India, with a post-graduate degree in finance and an interest in the world. She can usually be found reading dystopian fiction cover to cover. Currently, she works as a travel content writer and hopes to write her own dystopian novel one day.
