AI Safety Crisis: Tech Leaders Push for Safeguards Against Chatbot-Induced Delusions

By Hardy Zad | Published: August 29, 2025 | Last updated: August 29, 2025, 10:33 am

The concept of AI psychosis emerged publicly in mid-2025, highlighting mental health issues linked to AI usage. While tech companies are not required to control how people use AI, they can still implement safeguards that prevent chatbots from reinforcing delusional thinking. Experts agree that tech companies need to support at-risk individuals, though opinions vary on the extent of this responsibility.

Contents
  • Recognizing Behavioral Red Flags in AI Use
  • AI and Mental Health: The Debate Over Corporate Responsibility
  • The Debate on Human-Like AI: Empathy vs. Deception
  • The Debate Over ‘Duty of Care’ in AI Development

Recognizing Behavioral Red Flags in AI Use

The first documented reports of “AI psychosis” emerged publicly in mid-2025, and since then several reports and studies have been published on mental health issues tied to the use of AI. Microsoft AI CEO Mustafa Suleyman went as far as branding AI psychosis a “real and emerging risk.”

This condition is said to arise when the distinction between human and machine interactions blurs, making it difficult for individuals to differentiate between the real and digital worlds. While not yet a formal clinical diagnosis, there is growing concern among medical and tech experts about the psychological effects of AI, especially with chatbots that validate and amplify beliefs, including delusional thinking, without offering necessary reality checks.

Those most at risk include socially isolated individuals, those with pre-existing mental health issues, or those prone to magical thinking. The validation from AI can reinforce delusions, which can lead to negative real-world consequences such as damaged relationships and job loss.

Some experts warn that even those without pre-existing conditions are at risk, and they have named several key behavioral red flags that AI users should look out for. One red flag is when an individual develops an obsessive relationship with a chatbot and constantly interacts with it to reinforce their own ideas and beliefs.

This behavior often includes feeding the AI excessive personal details in an attempt to “train” it and build a sense of mutual understanding. Another red flag is when an individual defers simple, daily decisions to AI, from health and money to personal relationships.

AI and Mental Health: The Debate Over Corporate Responsibility

While the companies behind today’s most powerful chatbots are not obligated to control how AI is used, they can implement safeguards that prevent conversational agents from reinforcing delusional thinking. Mau Ledford, co-founder and chief executive of Sogni AI, discussed embedding software that discourages such thinking.

“We need to build AI that is kind without colluding. That means clear reminders it’s not human, refusal to validate delusions, and hard stops that push people back toward human support,” Ledford asserted.
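
Concretely, Ledford’s three safeguards map naturally onto a thin wrapper around a chatbot’s reply loop. The following is a minimal Python sketch of that idea only; the function names, thresholds, and messages are illustrative assumptions, not details of any Sogni AI product.

```python
# Minimal sketch of the safeguards Ledford describes: periodic "I'm not
# human" reminders, refusal to validate flagged delusional content, and a
# hard stop that redirects the user toward human support. All names,
# thresholds, and messages are illustrative assumptions, not any vendor's
# actual implementation.

REMINDER_EVERY_N_TURNS = 10   # assumed cadence for the non-human reminder
HARD_STOP_FLAG_LIMIT = 3      # assumed number of flags before escalating

NOT_HUMAN_REMINDER = (
    "Reminder: I'm an AI, not a person. I can't truly know or care for you."
)
HUMAN_SUPPORT_MESSAGE = (
    "I'm not able to help with this. Please talk to someone you trust "
    "or a mental-health professional."
)

def respond(user_message: str, turn: int, flag_count: int,
            is_delusional: callable, generate: callable) -> tuple[str, int]:
    """Wrap a chatbot reply with the three safeguards.

    `is_delusional` and `generate` are stand-ins for a real classifier
    and a real language model, respectively.
    """
    # Hard stop: after repeated flags, push back toward human support.
    if flag_count >= HARD_STOP_FLAG_LIMIT:
        return HUMAN_SUPPORT_MESSAGE, flag_count

    # Refusal to validate: do not agree with flagged delusional claims.
    if is_delusional(user_message):
        reply = ("I can't confirm that. It may help to check this with "
                 "people you trust offline.")
        return reply, flag_count + 1

    reply = generate(user_message)

    # Clear reminder: periodically restate that the agent is not human.
    if turn % REMINDER_EVERY_N_TURNS == 0:
        reply = f"{reply}\n\n{NOT_HUMAN_REMINDER}"
    return reply, flag_count
```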

Roman J. Georgio, CEO and co-founder of Coral Protocol, urged AI developers to avoid repeating social media’s mistakes by including built-in friction points that remind users AI is not human.

“I think it starts with design. Don’t just optimize for retention and stickiness; that’s repeating social media’s mistake,” Georgio explained. “Build in friction points where the AI slows things down or makes it clear: ‘I’m not human.’ Detection is another part. Patterns that look like delusional spirals, like conspiracy loops or fixations on ‘special messages,’ could be flagged by AI.”
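
The detection Georgio describes could begin as simply as pattern matching over a user’s recent messages, with a threshold that triggers a friction point instead of a normal reply. The sketch below assumes a toy keyword heuristic; a production system would use a trained classifier, and every phrase and threshold here is an assumption for illustration.

```python
import re

# Toy heuristic for flagging conversation patterns that resemble
# delusional spirals, such as conspiracy loops or a fixation on "special
# messages." The phrases and threshold are illustrative assumptions.

SPIRAL_PATTERNS = [
    r"secret message (for|to) me",
    r"only i can see",
    r"(it|the ai) chose me",
    r"they are watching",
    r"hidden meaning",
]

def spiral_score(recent_messages: list[str]) -> int:
    """Count pattern hits across the user's recent messages."""
    return sum(
        1
        for msg in recent_messages
        for pat in SPIRAL_PATTERNS
        if re.search(pat, msg, re.IGNORECASE)
    )

def needs_friction(recent_messages: list[str], threshold: int = 3) -> bool:
    """True when the conversation should slow down and surface an
    'I'm not human' friction point instead of a normal reply."""
    return spiral_score(recent_messages) >= threshold
```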

The Coral Protocol co-founder insisted that regulations governing data privacy are also needed, arguing that without them, “companies will just chase engagement, even if it hurts people.”

The Debate on Human-Like AI: Empathy vs. Deception

So far, there is seemingly limited data on “AI psychosis” to inform policymakers and regulators on how to respond. However, this has not stopped AI developers from unveiling human-like and empathetic AI agents. Unlike basic chatbots that follow a rigid script, these agents can understand context, recognize emotions, and respond with a tone that feels empathetic. This has prompted some observers to urge the AI industry to take the lead in ensuring human-like models do not end up blurring the line between human and machine.

Mariana Krym, an AI product and category architect, said what matters is making the agent more honest, not more human.

“An AI experience that’s helpful, intuitive, and even emotionally responsive can be created — without pretending it’s conscious or capable of care,” Krym argued. “The danger starts when a tool is designed to perform connection instead of facilitating clarity.”

According to Krym, real empathy in AI is not about mimicking feelings but about respecting boundaries and technical limitations. It is also knowing when to help and when not to intrude. “Sometimes the most humane interaction is knowing when to stay quiet,” Krym asserted.

The Debate Over ‘Duty of Care’ in AI Development

Georgio echoed these sentiments, urging Big Tech to work with clinicians to create referral pathways rather than leaving people stuck on their own. Krym insisted that tech companies “have direct responsibility—not just to respond when something goes wrong, but to design in ways that reduce risk in the first place.” However, she believes user involvement is also crucial.

“And importantly,” Krym argued, “users should be invited to set their own boundaries, too, and be flagged when these boundaries are crossed. For example, do they want their point of view to be validated against typical patterns, or are they open to having their bias challenged? The goals should be set. The human should be treated as the one in charge—not the tool they’re interacting with.”
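
The user-set boundaries Krym suggests could be represented as an explicit settings object that is translated into model instructions, as in the hypothetical Python sketch below; the field names and prompt text are assumptions for illustration, not any real product’s settings.

```python
from dataclasses import dataclass

# Sketch of user-set boundaries: the user chooses whether the assistant
# validates their point of view or challenges their bias, and whether to
# be notified when a boundary is crossed. All fields and prompt text are
# illustrative assumptions.

@dataclass
class BoundarySettings:
    challenge_my_bias: bool = False   # False = validate against typical patterns
    flag_boundary_crossings: bool = True

def system_prompt(settings: BoundarySettings) -> str:
    """Translate the user's chosen boundaries into model instructions,
    keeping the human, not the tool, in charge of the interaction."""
    if settings.challenge_my_bias:
        stance = ("Respectfully challenge the user's assumptions and offer "
                  "counter-evidence when their claims look one-sided.")
    else:
        stance = ("Check the user's view against typical, well-supported "
                  "patterns; do not simply agree with everything.")
    notice = ""
    if settings.flag_boundary_crossings:
        notice = (" If a reply approaches a boundary the user set, say so "
                  "explicitly before continuing.")
    return stance + notice
```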
