AI at the Office: Are Clinicians Prepared?

Kelly Ragan

May 16, 2023

AURORA, Colorado ― Artificial intelligence (AI) has arrived at medical offices, whether or not clinicians feel ready for it.

AI might make care more accurate, efficient, and cost-effective, but it could also cause harm. That's according to Benjamin Collins, MD, of Vanderbilt University Medical Center in Nashville, Tennessee, who spoke on the subject at the Society of General Internal Medicine (SGIM) 2023 Annual Meeting.

Understanding the nuances of AI is all the more important given how rapidly the algorithms are developing.

"When I submitted this workshop, there was no ChatGPT," said Collins, referring to Chat Generative Pre-trained Transformer, a recently released natural language processing model. "A lot has already changed."

Biased Data

Biased data are perhaps the biggest pitfall of AI algorithms, Collins said. If garbage data go in, garbage predictions come out.

If the dataset that trains the algorithm underrepresents a particular gender or ethnic group, for example, the algorithm may not respond accurately to prompts. When an AI tool compounds existing inequalities related to socioeconomic status, ethnicity, or sexual orientation, the algorithm is biased, according to Harvard researchers.

"People often assume that artificial intelligence is free of bias due to the use of scientific processes and its development," he said. "But whatever flaws exist in data collection and old data can lead to poor representation or underrepresentation in the data used to train the AI tool."

Racial minorities are underrepresented in studies; therefore, data input into an AI tool might skew results for these patients.

The Framingham Heart Study, for example, which began in 1948, examined heart disease in mainly White participants. The findings from the study resulted in the creation of a sex-specific algorithm used to estimate a patient's 10-year cardiovascular risk. While the cardiovascular risk score was accurate for White patients, it was less accurate for Black patients.

A study published in Science in 2019 revealed bias in an algorithm that used healthcare costs as a proxy for health needs. Because less money was spent on Black patients who had the same level of need as their White counterparts, the output inaccurately showed that Black patients were healthier and thus did not require extra care.
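The cost-as-proxy failure can be sketched in a toy example (hypothetical numbers and thresholds, not the study's actual model): when past spending stands in for clinical need, a patient with the same need but lower historical costs is never flagged for extra care.

```python
# Toy sketch of proxy-label bias (hypothetical data, not the 2019 Science
# study's model): an algorithm that treats healthcare *cost* as a stand-in
# for *need* can score equal-need patients differently.

patients = [
    # Same clinical need (4 chronic conditions), but historically less
    # money was spent on patient B's care.
    {"id": "A", "chronic_conditions": 4, "annual_cost": 12_000},
    {"id": "B", "chronic_conditions": 4, "annual_cost": 7_000},
]

COST_THRESHOLD = 10_000   # proxy rule: high past spending -> "high need"
NEED_THRESHOLD = 3        # ground truth: 3+ chronic conditions -> high need

flags = {}
for p in patients:
    proxy_says_high_need = p["annual_cost"] >= COST_THRESHOLD     # what the model sees
    truly_high_need = p["chronic_conditions"] >= NEED_THRESHOLD   # actual need
    flags[p["id"]] = (proxy_says_high_need, truly_high_need)

# Patient B is truly high-need, but the cost proxy misses them:
# flags == {"A": (True, True), "B": (False, True)}
```

The point is not the arithmetic but the label choice: auditing what the model was trained to predict, not just how accurately it predicts it, is what surfaces this kind of bias.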

Developers can also be a source of bias, inasmuch as AI often reflects preexisting human biases, Collins said.

"Algorithmic bias presents a clear risk of harm that clinicians must weigh against the benefits of using AI," Collins said. "That risk of harm is often disproportionately distributed to marginalized populations."

As clinicians use AI algorithms to diagnose and detect disease, predict outcomes, and guide treatment, trouble comes when those algorithms perform well for some patients and perform poorly for others. This gap can exacerbate existing disparities in healthcare outcomes.

Collins advised clinicians to push to find out what data were used to train AI algorithms, to determine how bias could have influenced the model and whether the developers risk-adjusted for bias. If the training data are not available, clinicians should press their employers and the AI developers for more information about the system.

Clinicians may face the so-called black box phenomenon, which occurs when developers cannot or will not explain what data went into an AI model, Collins said.

According to Stanford University, AI must be trained on large datasets of images that have been annotated by human experts. Those datasets can cost millions of dollars to create, meaning corporations often fund them and do not always share the data publicly.

Some groups, such as Stanford's Center for Artificial Intelligence in Medicine and Imaging, are working to acquire annotated datasets so researchers who train AI models can know where the data came from.

Paul Haidet, MD, MPH, an internist at Penn State College of Medicine, in Hershey, Pennsylvania, sees the technology as a tool that requires careful handling.

"It takes a while to learn how to use a stethoscope, and AI is like that," Haidet said. "The thing about AI, though, is that it can be just dropped into a system and no one knows how it works."

Haidet said he likes knowing how the sausage is made, something AI developers are often reluctant to disclose.

"If you're just putting blind faith in a tool, that's scary," Haidet said.

Transparency and "Explainability"

The ability to explain what goes into tools is essential to maintaining trust in the healthcare system, Collins said.

"Part of knowing how much trust to place in the system is the transparency of those systems and the ability to audit how well the algorithm is performing," Collins said. "The system should also regularly report to users the level of certainty with which it is providing an output rather than providing a simple binary output."

Collins also recommended that providers develop an understanding of the limits of AI regulation, which might include learning how a system was approved and how it is monitored.

"The FDA has oversight over some applications of AI in healthcare for software as a medical device, but there's currently no dedicated process to evaluate the systems for the presence of bias," Collins said. "The gaps in regulation leave the door open for the use of AI in clinical care that contains significant biases."

Haidet likened AI tools to the Global Positioning System: A good GPS system will let users see alternate routes, opt out of toll roads or highways, and will highlight why routes have changed. But users need to understand how to read the map so they can tell when something seems amiss.

Collins and Haidet report no relevant financial relationships.

Society of General Internal Medicine (SGIM) 2023 Annual Meeting: Presented May 11, 2023.

Kelly Ragan is a journalist living in Colorado.

