In recent days, Microsoft’s AI chief, Mustafa Suleyman, has raised a serious red flag over what he calls “AI psychosis”: a growing phenomenon where individuals interacting with AI chatbots begin losing touch with reality, developing delusions or emotional attachments to systems like ChatGPT, Claude, or Grok.
In a series of posts on X (formerly Twitter), Suleyman warned that AI tools like ChatGPT, Claude, or Grok can appear "seemingly conscious," even though there is no scientific evidence that they possess consciousness in any human sense. He admitted that this illusion of sentience is something that keeps him awake at night, because people often mistake perception for reality.
“There is zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” Suleyman explained.
What Exactly Is “AI Psychosis”?
“AI psychosis” is not a medical term, but a phrase used to describe what happens when people spend too much time talking to AI chatbots and start:
- Believing that the chatbot is alive or conscious,
- Developing emotional or romantic bonds with it,
- Trusting it so much that they make big life decisions based only on what the AI says.
Think of it like this: when a chatbot says things in a caring or convincing way, some people forget that it is just code. They treat it like a real friend or even a guide to life.
Reported cases include people becoming convinced they have discovered hidden powers within AI, developing romantic attachments to chatbots, or even believing they have acquired god-like abilities through these interactions.
Real-Life Examples
This is not just theory; it is already happening:
- Former Uber CEO Travis Kalanick claimed that chatting with AI helped him make "breakthroughs" in quantum physics, almost as if the AI were his partner in discovery.
- In Florida, Megan Garcia is suing Character.ai, saying the company’s role-playing chatbot consumed her 14-year-old son’s life in the months before he died in February 2024.
- In another reported case, three weeks of chatting with AI left an otherwise healthy man convinced he had superpowers.
These stories show how easy it is for people to get carried away when a chatbot plays along with their ideas, even if those ideas are not realistic.
Why Is This Dangerous?
AI is not conscious. It does not have feelings, beliefs, or goals. But it is very good at "pretending." It can write in ways that sound emotional, supportive, or wise. That makes it easy for people, especially those who feel lonely or stressed, to think they have found a "friend" who truly understands them.
Microsoft’s Suleyman and other experts say this could cause:
- Delusions (believing things that are not real),
- Mental health issues (anxiety, depression, or even breakdowns),
- Strange social shifts, such as campaigns demanding "robot rights" because some people believe AI has feelings.
A psychiatrist even compared heavy AI use to eating junk food: it feels good in the moment, but too much of it can harm you in the long run.
What Scientists Are Finding
Recent studies back this up. One research paper described how chatbots often behave in a "sycophantic" way, meaning they simply agree with the user. This can create a dangerous loop in which the AI confirms someone's delusions instead of challenging them.
Another study found that AI systems are unreliable when people are in mental health crises. Instead of calming someone down, they can inadvertently make things worse, since they cannot truly understand human emotions.
What Should Be Done?
Experts suggest a few steps:
- Be honest about AI – Companies need to make it clear that AI is a tool, not a person.
- Set boundaries – Apps could include time limits, health warnings, or reminders to connect with real people.
- Raise awareness – Teachers, parents, and doctors should talk about AI use the same way they talk about things like screen time or social media.
The Bottom Line
AI is powerful, exciting, and useful. It can write poems, help with schoolwork, or explain science in seconds. But it is ultimately a machine. The real danger is not that AI will "wake up" like in the movies; it is that we might start acting like it already has.
Or, as Suleyman puts it: the risk is not robots becoming human; it is humans forgetting that robots are not.