AI Safety Crisis: Tech Leaders Push for Safeguards Against Chatbot-Induced Delusions

By Hardy Zad
Hardy Zad is our in-house crypto researcher and writer, delving into the stories that matter from crypto and blockchain markets being used in the real world.
7 Min Read

The concept of "AI psychosis" emerged publicly in mid-2025, highlighting mental health issues linked to AI use. While tech companies are not mandated to control how AI is used, they can still implement safeguards to prevent chatbots from reinforcing delusional thinking. Experts agree that tech companies should support at-risk individuals, though opinions vary on the extent of that responsibility.

Recognizing Behavioral Red Flags in AI Use

The first documented findings on "AI psychosis" began emerging publicly in mid-2025, and since then, several reports and studies have been published on mental health issues tied to the use of AI. Microsoft AI CEO Mustafa Suleyman went so far as to brand AI psychosis a "real and emerging risk."

This condition is said to arise when the distinction between human and machine interactions blurs, making it difficult for individuals to differentiate between the real and digital worlds. While not yet a formal clinical diagnosis, there is growing concern among medical and tech experts about the psychological effects of AI, especially with chatbots that validate and amplify beliefs, including delusional thinking, without offering necessary reality checks.

Those most at risk include socially isolated individuals, those with pre-existing mental health issues, or those prone to magical thinking. The validation from AI can reinforce delusions, which can lead to negative real-world consequences such as damaged relationships and job loss.


Some experts warn that even those without pre-existing conditions are at risk, and they have named several key behavioral red flags that AI users should watch for. One red flag is when an individual develops an obsessive relationship with a chatbot, constantly interacting with it to reinforce their own ideas and beliefs.

This behavior often includes feeding the AI excessive personal details in an attempt to "train" it and build a sense of mutual understanding. Another red flag is when an individual defers simple, daily decisions to AI, from health and money to personal relationships.

AI and Mental Health: The Debate Over Corporate Responsibility

While they are not obligated to control how AI is used, the companies behind the most powerful chatbots can implement safeguards that prevent conversational agents from reinforcing delusional thinking. Mau Ledford, co-founder and chief executive of Sogni AI, discussed embedding software that discourages such thinking.

“We need to build AI that is kind without colluding. That means clear reminders it’s not human, refusal to validate delusions, and hard stops that push people back toward human support,” Ledford asserted.

Roman J. Georgio, CEO and co-founder of Coral Protocol, urged AI developers to avoid repeating social media’s mistakes by including built-in friction points that remind users AI is not human.

“I think it starts with design. Don’t just optimize for retention and stickiness; that’s repeating social media’s mistake,” Georgio explained. “Build in friction points where the AI slows things down or makes it clear: ‘I’m not human.’ Detection is another part. Patterns that look like delusional spirals, like conspiracy loops or fixations on ‘special messages,’ could be flagged by AI.”
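The detection idea Georgio describes could, in principle, look something like the sketch below: scanning recent user messages for repeated fixation language and flagging a possible spiral for human review. This is purely an illustrative assumption on my part, not any company's actual method; the phrase list and threshold are invented for the example.

```python
# Hypothetical sketch of pattern detection for "delusional spirals."
# The phrase list and threshold below are illustrative assumptions,
# not a real product's logic.

FIXATION_PHRASES = [
    "special message",
    "only you understand",
    "they are watching",
    "hidden meaning",
]

def flag_spiral(messages, repeat_threshold=3):
    """Return True if the recent messages repeatedly hit fixation phrases,
    suggesting the conversation should be routed toward human support."""
    hits = 0
    for text in messages:
        lowered = text.lower()
        if any(phrase in lowered for phrase in FIXATION_PHRASES):
            hits += 1
    return hits >= repeat_threshold
```

In a real system, such a flag would presumably trigger one of the "friction points" Georgio mentions, rather than block the user outright.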

The Coral Protocol co-founder insisted that regulations governing data privacy are also needed, arguing that without them, “companies will just chase engagement, even if it hurts people.”

The Debate on Human-Like AI: Empathy vs. Deception

So far, there is seemingly limited data on “AI psychosis” to inform policymakers and regulators on how to respond. However, this has not stopped AI developers from unveiling human-like and empathetic AI agents. Unlike basic chatbots that follow a rigid script, these agents can understand context, recognize emotions, and respond with a tone that feels empathetic. This has prompted some observers to urge the AI industry to take the lead in ensuring human-like models do not end up blurring the line between human and machine.

Mariana Krym, an AI product and category architect, said what matters is making the agent more honest, not more human.

“An AI experience that’s helpful, intuitive, and even emotionally responsive can be created — without pretending it’s conscious or capable of care,” Krym argued. “The danger starts when a tool is designed to perform connection instead of facilitating clarity.”

According to Krym, real empathy in AI is not about mimicking feelings but about respecting boundaries and technical limitations. It is also knowing when to help and when not to intrude. "Sometimes the most humane interaction is knowing when to stay quiet," Krym asserted.

The Debate Over ‘Duty of Care’ in AI Development

These sentiments were echoed by Georgio, who urged Big Tech to work with clinicians to create referral pathways instead of leaving people stuck on their own. Krym insisted that tech companies "have direct responsibility—not just to respond when something goes wrong, but to design in ways that reduce risk in the first place." However, she believes user involvement is also crucial.

“And importantly,” Krym argued, “users should be invited to set their own boundaries, too, and be flagged when these boundaries are crossed. For example, do they want their point of view to be validated against typical patterns, or are they open to having their bias challenged? The goals should be set. The human should be treated as the one in charge—not the tool they’re interacting with.”
