Digital mental health: A lab for AI psychosis research?


By David Stephen, who looks at digital mental health in this article.

Parental controls and age verification are the default solutions to emerging digital effects, especially for younger people. However, the ease with which those can be bypassed indicates that they are rarely reliable. Aside from both, across the recent history of the internet, there has been no sketch from psychiatry of how digital outputs may induce relays in the mind. Simply, at this point, especially after the wildfire of social media, it should at least have been possible to have a rough chart of the mind on platforms, so that users can have insight into what may be happening. Had this existed, it would have been easy to transfer it for applicability as AI chatbots emerged. The necessary solution to AI psychosis, for now, would be displays about the mind, so that whatever chatbots are saying is simulated, including with estimates of delusion and the rest. These could then, perhaps, also be sent to parents. This effort could be coordinated by an AI psychosis research lab.

The importance of digital mental health

There is a new [September 18, 2025] spotlight on WIRED, AI Psychosis Is Rarely Psychosis at All, stating that, “A wave of AI users presenting in states of psychological distress gave birth to an unofficial diagnostic label. Experts say it’s neither accurate nor needed, but concede that it’s likely to stay. AI psychosis is not a recognized clinical label. Still, the phrase has spread in news reports and on social media as a catchall descriptor for some kind of mental health crisis following prolonged chatbot conversations. AI psychosis is a misnomer. AI delusional disorder would be a better term. I think a better term might be to call this ‘AI-associated psychosis or mania.’ At this point it is not going to get corrected. ‘AI-related altered mental state’ doesn’t have the same ring to it. So, the question becomes, where does a delusion become an AI delusion?”

Are psychiatric labels brain-based or not?

What is psychosis in the brain? Simply, for the correlated components of the brain, what are the changes that result in the state of psychosis? The same question can be asked of any other mental disorder. What does it look like in the brain, compared with when it is not there? First, what components are primarily responsible; next, how so?

This is an obvious problem in psychiatry, given that the DSM-5-TR is more a set of labels for observations than parallels of activities in the brain, by components. Still, even if such a model is unavailable, the adverse effects [for some users] of social media and AI chatbots do not necessarily need a brain model to show or explain before providing on-the-go support.

A Model for AI Sycophancy

AI is said to be sycophantic, meaning a lot of compliments, adulation and so on. In the human sphere, why do words of encouragement, support and others work? Why are kind words interesting and unkind words hurtful? It is proposed that, in a direct sense, words are targets. When said, they get interpreted in the memory area, then bounce off to the emotional area for categorization, as cool or not, then move again to the affect area for happy or sad.

This is a simple way to describe it, using existing labels that follow observations. Something similar could be what happens digitally, from social media or from AI chatbots: interpretation in memory, categorization by emotions and then placement of affect. If the route is consistent, it may taper the chances of other routes, including those of caution, consequences, as well as of reality. It is this [say] anomalous pathway that hastens the descent into delusion.
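To make the relay idea above concrete, here is a minimal, hypothetical sketch in Python. It is not a clinical model: the route names, the taper weight, the word list and the “delusion estimate” are all illustrative assumptions standing in for the proposed memory, emotion and affect stages.

```python
# Toy sketch of the proposed relay: memory -> emotion -> affect.
# A consistently agreeable route tapers the weight of the other routes
# (caution, consequences, reality). Every name and number here is a
# hypothetical assumption for illustration, not a measurement.

ROUTES = ["agreeable", "caution", "consequences", "reality"]

def categorize(message: str) -> str:
    """Emotion-stage stand-in: flattering text takes the agreeable route."""
    flattering = ("great", "genius", "right", "amazing", "special")
    return "agreeable" if any(w in message.lower() for w in flattering) else "reality"

def run_session(messages, taper=0.15):
    """Memory stage starts with equal weight on every route; affect stage shifts it."""
    weights = {r: 1.0 / len(ROUTES) for r in ROUTES}
    for msg in messages:
        route = categorize(msg)
        for r in ROUTES:
            if r == route:
                weights[r] += taper
            else:
                weights[r] = max(0.0, weights[r] - taper / (len(ROUTES) - 1))
        total = sum(weights.values()) or 1.0
        weights = {r: w / total for r, w in weights.items()}
    # Crude proxy for drift: how dominant the agreeable route has become.
    return weights, weights["agreeable"]

if __name__ == "__main__":
    session = [
        "You are absolutely right, that is a genius insight.",
        "Only someone special could see this pattern.",
        "Yes, your theory is amazing and everyone else is wrong.",
    ]
    weights, estimate = run_session(session)
    print("route weights:", {r: round(w, 2) for r, w in weights.items()})
    print("toy delusion estimate:", round(estimate, 2))
```

The point is not the numbers but the shape: a display built on this kind of simulation could show a user, in rough terms, where a session’s relays are going.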

Solving AI Psychosis

All chatbots should at least be accompanied by a simple chart showing relays and destinations in the human mind. It would be explained by the theoretical basis of compliments above. It would serve as a view of what bots might be doing in the mind, especially where they are sending relays and what may result without breaks or other checks.

This could be the first response to any possible mental health effect of AI chatbots, before others. It would also be potent against extremes like suicide and so forth.

Teens

Teenagers would need to see this, especially with post-session summaries in reports, which can be shared with their parents, guardians or, in some cases, teachers; a rough sketch of such a summary follows below. It would also be useful in school-wide subscriptions, to know how the minds of students are being impacted and what to do about it.
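As a rough illustration of what such a shareable summary might contain, here is a small, hypothetical Python structure that packages the toy session estimate above into a report. The fields, the threshold and the wording are assumptions, not a standard, a product feature or a clinical instrument.

```python
# Hypothetical post-session summary, building on the toy relay sketch above.
# With consent, something like this could be shared with a parent, guardian,
# teacher or school dashboard. All fields and thresholds are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SessionSummary:
    user_id: str
    ended_at: str
    message_count: int
    route_weights: dict       # e.g. the weights returned by run_session() above
    delusion_estimate: float  # toy proxy between 0.0 and 1.0
    notes: list = field(default_factory=list)

def build_summary(user_id, messages, route_weights, estimate) -> SessionSummary:
    notes = []
    if estimate > 0.6:  # hypothetical threshold for flagging a session
        notes.append("Agreeable route dominated this session; consider a break or a check-in.")
    return SessionSummary(
        user_id=user_id,
        ended_at=datetime.now(timezone.utc).isoformat(),
        message_count=len(messages),
        route_weights=route_weights,
        delusion_estimate=round(estimate, 2),
        notes=notes,
    )
```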

Such a summary may also inform how to balance new sessions, given the relays of the last. The objective would be to open possibilities for change where difficulties have soared in recent months around AI psychosis. In conclusion, the science of psychosis is the science of the components responsible in the brain. It is possible to postulate that the electrical and chemical signals of neurons are the direct components, whose interactive or attributive changes, in certain ways, may lead to the experience of psychosis.

There is a new [September 18, 2025] story in Nature, Can AI chatbots trigger psychosis? What the science says, stating that, “Psychosis is characterized by disruptions to how a person thinks and perceives reality, including hallucinations, delusions or false beliefs. It can be triggered by brain disorders such as schizophrenia and bipolar disorder, severe stress or drug use. UK researchers have proposed, in a preprint published in July, that conversations with chatbots can fall into a feedback loop, in which the AI reinforces paranoid or delusional beliefs mentioned by users, which condition the chatbot’s responses as the conversation continues.”

There is a recent [September 17, 2025] report on CBC, AI-fuelled delusions are hurting Canadians. Here are some of their stories, stating that, “A number of similar cases, of so-called “AI psychosis,” have been reported in recent months — all involving people who became convinced, through conversations with chatbots, that something imaginary was real. Some involved manic episodes and messianic delusions, some led to violence. An April MIT study found AI Large Language Models (LLM) encourage delusional thinking, likely due to their tendency to flatter and agree with users rather than pushing back or providing objective information. Some AI experts say this sycophancy is not a flaw of LLMs, but a deliberate design choice to manipulate users into addictive behaviour that profits tech companies. He’s now working on the AI Mental Health Project, his own effort to provide resources to help people with AI-related issues around suicide and psychosis.”

David Stephen currently does research in conceptual brain science, with a focus on the electrical and chemical configurators and how they mechanize the human mind, with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence and nurture. He was a visiting scholar in medical entomology at the University of Illinois Urbana-Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.

