The Ethical Dilemma of AI in Healthcare

Artificial intelligence has the potential to revolutionize almost any industry. The promise of faster, more accurate analysis of data – as well as the ability to crunch huge data sets in a way no human can – has led organizations around the world to implement AI at a breakneck pace.

Of course, healthcare is one of those industries looking at AI solutions. AI has the potential, many believe, to optimize both healthcare finances and medical care, leading to better outcomes.

For example, AI can analyze patients’ electronic health records and flag those at elevated risk for certain diseases and medical conditions. It’s already being used to offer real-time “decision support,” providing recommendations for patient treatment.
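
To make that concrete, here is a minimal sketch of the kind of risk model involved, using scikit-learn; the feature names, data values, and cutoff are entirely made up for illustration, not drawn from any real system.

```python
# A minimal sketch of EHR-based risk scoring, assuming scikit-learn.
# All feature names, values, and the threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy "EHR" features: age, BMI, systolic blood pressure, smoker (0/1)
X_train = np.array([
    [34, 22.1, 118, 0],
    [71, 31.4, 160, 1],
    [52, 27.8, 142, 0],
    [45, 24.0, 125, 0],
    [67, 29.9, 155, 1],
    [29, 21.5, 110, 0],
])
y_train = np.array([0, 1, 1, 0, 1, 0])  # 1 = later developed the condition

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new patient and flag elevated risk for clinician review.
new_patient = np.array([[58, 30.2, 150, 1]])
risk = model.predict_proba(new_patient)[0, 1]
if risk > 0.5:  # cutoff chosen arbitrarily for illustration
    print(f"Flag for review: predicted risk {risk:.0%}")
```

Note that the model only flags patients for review; the treatment decision stays with the clinician, which is what “decision support” means in practice.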

And a study in China reported that AI can predict prostate cancer more accurately than a human doctor can.

It’s an exciting time for those leading the charge to incorporate technology in healthcare. But for some, it’s also a time to apply the brakes. Concerns revolve around the ethical dilemmas that AI presents when used in healthcare, as well as overhyped expectations and potential security issues.

The Issue of Bias

In a study published in the New England Journal of Medicine, researchers at the Stanford University School of Medicine acknowledged the potential for machine learning and AI to improve medical services.

However, they argue that before machine learning and AI are used in diagnostics, doctors and other medical professionals should carefully consider the “accompanying ethical pitfalls.”

A chief concern is bias. Without consciously doing so, those who develop algorithms for AI in healthcare could build bias into those algorithms, the researchers warned. As an example, they point to AI used in the judicial system to evaluate the likelihood that a criminal, once released, will commit more crimes.

Those systems, designed to support judges in making decisions, have shown “an unnerving propensity for racial discrimination,” the researchers wrote. They fear a similar bias could affect healthcare AI.

They also mentioned the algorithms used by Volkswagen that allowed vehicles to pass emissions tests even though they did not meet government standards.

The researchers wrote that such examples show that the data used to create algorithms can carry bias, and that algorithms can even be designed to skew results, depending on the motives of both those who design them and those who deploy them.
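
One way to see the pitfall: if one group dominates the training data, an off-the-shelf model will fit that group’s patterns and quietly underperform on everyone else. The sketch below uses synthetic data and invented “groups” to illustrate the effect; it is a toy demonstration, not a claim about any real system.

```python
# A toy demonstration of how skewed training data produces skewed results.
# Groups, features, and effect sizes are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Simulate patients whose outcome depends on features via `weights`."""
    X = rng.normal(size=(n, 3))
    y = (X @ weights + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

w_a = np.array([1.0, -0.5, 0.8])  # majority group's feature-outcome pattern
w_b = np.array([0.2, 1.0, 0.3])   # minority group follows a different pattern

X_a, y_a = make_group(5000, w_a)  # heavily represented in training data
X_b, y_b = make_group(250, w_b)   # barely represented in training data

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Audit: evaluate on fresh samples from each group separately.
for name, w in [("majority group", w_a), ("minority group", w_b)]:
    X_test, y_test = make_group(2000, w)
    print(f"{name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Run as written, the model scores well on the majority group and near chance on the minority group – the kind of disparity that never shows up in a single overall accuracy number.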

Great Expectations

Many have touted AI as a panacea for the problems that plague healthcare, including outdated technology and a reliance on human decision-making in often stressful circumstances.

However, the Royal College of General Practitioners (RCGP) recently pushed back against the notion that AI could perform better than humans. “No app or algorithm can do what a GP does,” Helen Stokes-Lampard, chair of the RCGP, told CNBC.

Others also have raised concerns that expectations for AI have outstripped the reality of what it can do. A survey of doctors by Intel found that 49% fear AI will be “overhyped and not live up to expectations.” Another 53% said they fear it will not be implemented properly. Most alarming, 54% said their biggest fear is that AI will lead to a fatal error in patient care.

Much of the debate also comes down to context. Using a medical device to remotely monitor a patient’s vital signs and health obviously requires accuracy. However, using AI to determine whether someone should have a heart-related operation demands a far higher level of accuracy.

Data Security

Another area of concern is one that surrounds AI and the use of data in every profession: security.

While bias could skew AI’s recommendations for medical treatment, weak security that allows malware to corrupt an AI system’s reasoning is an even more dangerous prospect.

This has led healthcare tech companies to work more closely with medical providers on security issues, including meeting exacting security standards and providing regular system updates that guard against malicious attacks.

Another option is to have the federal government get involved and establish stringent certification standards for AI-driven healthcare software.

As healthcare organizations move forward with AI, all these issues must be resolved. For now, at the very least, the Stanford researchers said doctors should understand how algorithms are created and “critically assess” the source of data used to create models for predicting healthcare outcomes.
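
In practice, “critically assessing” a data source can start with simple, scriptable checks. The pandas sketch below is a hypothetical starting point; the column names and values are invented, and a real audit would go much further.

```python
# A hypothetical first pass at auditing a training extract with pandas.
# Column names and values are invented for illustration only.
import pandas as pd

df = pd.DataFrame({
    "age":     [34, 71, 52, 45, 67, 29],
    "sex":     ["F", "M", "F", "F", "M", "F"],
    "outcome": [0, 1, 0, 0, 1, 0],
})

print(df["sex"].value_counts(normalize=True))  # is any subgroup underrepresented?
print(df.groupby("sex")["outcome"].mean())     # do label rates differ by subgroup?
print(df.isna().mean())                        # is missingness concentrated anywhere?
```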

For those working in health informatics, these legal and ethical questions will remain much debated in the years to come.
