COMMENTARY

Can Artificial Intelligence Chat Bots Help Prevent Suicide?

Arthur L. Caplan, PhD


March 06, 2023

This transcript has been edited for clarity.

Hi. I'm Art Caplan. I'm at the Division of Medical Ethics at New York University Grossman School of Medicine.

Can you use a robot or an algorithm to treat mental illness and prevent suicide? What role should these new intelligent programs and algorithms, such as ChatGPT, which people are already using to write articles and for many other purposes online, play in mental health?

This issue came up recently in a very important way. One of the social media platforms, Discord, has subgroups built around common interests; some people are there because they have problems with mental illness or feel suicidal. Discord offered to work with a platform called Koko, which was experimenting with an artificial intelligence chat bot that could interact with people online and, in a way, provide them support, counseling, or help for their suicidal ideation or mental illness problems.

I think this is not yet ready for prime time, and there are some really serious ethical issues we have to pay attention to before we assign, if you will, robots using artificial intelligence to become our mental health care providers or, for that matter, our primary care providers.

The platform basically said they were going to take people, possibly as part of research, although they weren't very clear about that, and dump them into one group that interacted with an experimental suicide hotline program and another group that received the standard human response, as I understand it. In any event, it's clearly an attempt to refine the program and do research.

Right away, there's trouble, because the standard of consent for someone with a mental illness or suicidal ideation has to be extremely high. You don't want them given over to a program when they don't understand that it isn't a human being at the other end. They always have to understand who's giving them care.

Worse, the programs haven't been certified or signed off on by specialty groups, psychologists, psychiatrists, social workers, or mental health providers. Nobody's vetted these programs as being adequate to care for someone who's vulnerable.

Yes, it may be important to experiment and to refine artificial intelligence so that it could reasonably respond to a person with a mental health problem, whether that's suicidal ideation, depression, or whatever it might be. But you have to exercise extraordinary care in recruiting such people into research intended to refine and perfect the programs. I don't think that happened in this case.

On the research side, I have a large amount of ethical heartburn about asking people, with only vague consent, to help perfect the program when they're vulnerable and at risk.

Moreover, let's say some company has a program it's going to deploy, and it's cheaper to use artificial intelligence to handle a mental health inquiry and provide support, and maybe it can do a pretty good job. I'm not saying that a robot using an artificial intelligence algorithm will never be involved in the provision of health care, but you have to be sure that the fact that it's a robot or artificial intelligence is disclosed to anybody who's going to use it. You have to make sure that it's protected with proper privacy and confidentiality safeguards so that no one can hack their way into it. You have to make sure that no one is reselling any information about these people to pharmaceutical companies or anybody else.

As I said, you have to make sure that it's been vetted, not only by the company that makes it or the people who have conflicts of interest in trying to push these programs forward, but also by professional societies with the expertise to say, yes, that looks like it's offering reasonable responses, reasonable advice, and reasonable support.

Is there a future for artificial intelligence in medicine as a provider of care? Yes, I suspect there is. Like it or not, we're probably going to see some forms of artificial intelligence start to interact with patients and, at least initially, guide their care.

We're not ready yet. We haven't vetted these programs adequately, we're not really sure about liability and malpractice if they don't do what they're supposed to do, and we haven't set out clear standards for disclosure and consent.

Until we get the ethical infrastructure for robot medicine, I don't think we're ready for prime time.

I'm Art Caplan at the Division of Medical Ethics at the New York University Langone Health Center. Thanks for watching.
