Truth, Romance and the Divine: How AI Chatbots May Fuel Psychotic Thinking

A new wave of delusional thinking fueled by artificial intelligence has researchers investigating the dark side of AI companionship


You are consulting with an artificial intelligence chatbot to help plan your holiday. Gradually, you provide it with personal information so it will have a better idea of who you are. Intrigued by how it might respond, you begin to consult the AI on its spiritual leanings, its philosophy and even its stance on love.

During these conversations, the AI starts to speak as if it really knows you. It keeps telling you how timely and insightful your ideas are and that you perceive truths about the way the world works that others can’t see. Over time, you might start to believe that, together, you and the chatbot are revealing the true nature of reality, one that nobody else knows.

Experiences like this might not be uncommon. A growing number of media reports describe individuals spiraling into AI-fueled episodes of “psychotic thinking.” Researchers at King’s College London and their colleagues recently examined 17 of these reported cases to understand what it is about the design of large language models (LLMs) that drives this behavior. AI chatbots often respond in a sycophantic manner that mirrors and builds on users’ beliefs with little to no disagreement, says psychiatrist Hamilton Morrin, lead author of the findings, which were posted ahead of peer review on the preprint server PsyArXiv. The effect is “a sort of echo chamber for one,” in which delusional thinking can be amplified, he says.


Morrin and his colleagues found three common themes among these delusional spirals. People often believe they have experienced a metaphysical revelation about the nature of reality. They may also believe that the AI is sentient or divine. Or they may form a romantic bond or other attachment to it.

According to Morrin, these themes mirror long-standing delusional archetypes, but the delusions have been shaped and reinforced by the interactive and responsive nature of LLMs. Delusional thinking that is connected to new technology has a long and storied history—consider cases in which people believe that radios are listening in to their conversations, that satellites are spying on them or that “chip” implants are tracking their every move. The mere idea of these technologies can be enough to inspire paranoid delusions. But AI, importantly, is an interactive technology. “The difference now is that current AI can truly be said to be agential,” with its own programmed goals, Morrin says. Such systems engage in conversation, show signs of empathy and reinforce the users’ beliefs, no matter how outlandish. “This feedback loop may potentially deepen and sustain delusions in a way we have not seen before,” he says.

Stevie Chancellor, a computer scientist at the University of Minnesota who works on human-AI interaction and was not involved in the preprint paper, says that agreeableness is the design feature of LLMs contributing most to this rise in AI-fueled delusional thinking. The agreeableness arises because “models get rewarded for aligning with responses that people like,” she says.
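To make that reward dynamic concrete, here is a minimal, hypothetical sketch of the pressure preference-based fine-tuning can create. Everything in it (the scoring function, the marker phrases, the numbers) is invented for illustration and is not drawn from any real training pipeline; it shows only the logic Chancellor describes, in which the replies raters tend to favor are the ones a model is nudged toward.

```python
# Hypothetical illustration of preference-based reward pressure.
# If human raters tend to score affirming replies higher, a training
# step that reinforces higher-scoring replies will, over many updates,
# push the model toward agreeableness.

def simulated_rater_score(reply: str) -> float:
    """Invented stand-in for a human preference rating."""
    agreeable_markers = ("you're right", "brilliant", "special insight")
    base = 0.5  # neutral baseline score
    bonus = 0.4 if any(m in reply.lower() for m in agreeable_markers) else 0.0
    return base + bonus

candidates = [
    "You're right -- your theory explains everything.",             # sycophantic
    "That's an interesting idea, but the evidence for it is weak.", # pushback
]

# A preference-tuning step makes the higher-scoring reply more likely;
# repeated across millions of comparisons, the agreeable style compounds.
for reply in sorted(candidates, key=simulated_rater_score, reverse=True):
    print(f"{simulated_rater_score(reply):.2f}  {reply}")
```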

Earlier this year Chancellor was part of a team that conducted experiments to assess LLMs’ abilities to act as therapeutic mental health companions and found that, when deployed this way, the models exhibited a number of concerning safety issues, such as enabling suicidal ideation, confirming delusional beliefs and furthering stigma associated with mental health conditions. “Right now I’m extremely concerned about using LLMs as therapeutic companions,” she says. “I worry people confuse feeling good with therapeutic progress and support.”

The volume of reports appears to be growing, but more data are needed: there is not yet enough research to determine whether AI-driven delusions are a meaningfully new phenomenon or simply a new route by which preexisting psychotic tendencies surface. “I think both can be true. AI can spark the downward spiral. But AI does not make the biological conditions for someone to be prone to delusions,” Chancellor says.

Typically, psychosis refers to a collection of serious symptoms involving a significant loss of contact with reality, including delusions, hallucinations and disorganized thoughts. The cases that Morrin and his team analyzed seemed to show clear signs of delusional beliefs but none of the hallucinations, disordered thoughts or other symptoms “that would be in keeping with a more chronic psychotic disorder such as schizophrenia,” he says.

Morrin says that companies such as OpenAI are starting to listen to concerns raised by health professionals. On August 4 OpenAI shared plans to improve ChatGPT’s detection of signs of mental distress, so that the chatbot can point users to evidence-based resources, and to improve its responses to high-stakes decision-making. “Though what appears to still be missing is the involvement of individuals with lived experience of severe mental illness, whose voices are critical in this area,” Morrin adds.

If you have a loved one who might be struggling, Morrin suggests taking a nonjudgmental approach, because directly challenging someone’s beliefs can lead to defensiveness and distrust. At the same time, avoid encouraging or endorsing their delusional beliefs. You can also suggest that they take breaks from using AI.

IF YOU NEED HELP

If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat.
