Mayo Clinic Platform’s Halamka on new WHO ethics guidelines for AI in medicine


For some, the idea of artificial intelligence in medicine may conjure images of robot doctors making house calls on hoverboards. But AI is already being used in health care today, and the World Health Organization recently laid out new ethics guidelines for artificial intelligence in medical settings.

The guidelines say humans — not machines — should remain the decision-makers, the technology must do no harm and doctors must be transparent to help patients understand how it's being used.

Dr. John Halamka is president of the Mayo Clinic Platform, an incubator working on technology and data innovations in health care. He told host Tom Crann in an interview this week that artificial intelligence is intended to augment humans, not replace them. One realm where Halamka sees potential for AI is preventative care.

“This is what we hope the future will bring,” he said. “Keep patients healthy. Keep them out of the hospital, out of the clinic, because we’ve predicted disease and treated it before they develop.”

For instance, clinicians might use heart rate data collected by a patient’s Apple Watch to screen for abnormalities. Halamka anticipates that algorithms will someday allow doctors to predict patients’ risk of developing serious illnesses and prevent them.

With better predictive AI, however, come thornier ethical questions. Halamka gave the example of his wife, who was diagnosed with breast cancer in 2011. She’s now cancer-free after chemotherapy, radiation and surgery. Had she been informed of her risk for breast cancer and given the option to take preventative medication, Halamka said it would be up to her and her doctor to weigh the risks and potential side effects — a decision no AI could make for her.

When it comes to the ethics of artificial intelligence, “We need guidelines,” Halamka said. Algorithms, he said, don’t come with an ingredient list or nutrition label. “And so what we need is transparency as to how every algorithm performs, how it was developed.”

He said doctors should know whether algorithms have been tested and applied in ways that will work for their patients.

“It’s that level of transparency — that proof of utility — that we need going forward,” he said.

