Why AI has not transformed healthcare

By Dennis Tribble posted 10-30-2022 23:00

I just read an interesting article in Politico titled “Artificial intelligence was supposed to transform health care. It hasn’t.”[1] The article highlights the prediction that AI would render human radiologists obsolete and then notes that this clearly has not happened.

While the article does describe some real barriers, I would like to point out some very real issues with AI in healthcare (and other high-risk endeavors) that are fundamentally different from the commercial environments in which AI is clearly flourishing.

Before we dive into that, let us review some high-level thoughts on why AI may be thriving in retail:

  • The sheer volume of transactions from which to extract patterns is immense and growing, and item identification within those transactions is relatively uniform.
  • The behavior of the objects the AI observes is uniform and predictable. This includes both the objects sold and the persons who purchase them. For example, if an online purveyor of goods can look at enough purchases, it can identify patterns of usage that permit it to suggest other items for sale, based on a particular purchase and/or on matching your purchase history with others whose histories are similar (a sketch of this idea follows this list).
  • The risk associated with being wrong is very low. The worst that happens is that the user gets annoyed at the suggestions. The best that happens (from the purveyor’s perspective) is that the user makes an impulse buy.
  • Seasonal variations are readily discernible in the data and can inform the prediction.
  • With enough data points about any individual, a personal assistant can determine that it is time to remind you about a previous purchase you likely would want to make again.
  • While there can be a wide variety of products within the retail supply chain, that supply chain treats them all in the same way; that is, they are widgets to be purchased, inventoried, and sold.
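
To make the purchase-history matching concrete, here is a minimal sketch of the idea described in the list above. The item names, user IDs, and the simple Jaccard-overlap scoring are all invented for illustration; real recommenders are far more sophisticated, but the core move of suggesting what similar purchasers bought is the same.

```python
# Toy recommender: each user is a set of purchased items; similarity between
# users is the Jaccard overlap of their histories; we suggest items bought by
# similar users that the target user lacks. All data here is made up.

def jaccard(a: set, b: set) -> float:
    """Overlap of two purchase histories: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def suggest(target: set, histories: dict, top_n: int = 3) -> list:
    """Rank items bought by similar users but not yet by the target user."""
    scores = {}
    for history in histories.values():
        sim = jaccard(target, history)
        if sim == 0.0:
            continue  # ignore users with nothing in common
        for item in history - target:
            scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

histories = {
    "user1": {"coffee", "filters", "grinder"},
    "user2": {"coffee", "filters", "mug"},
    "user3": {"tea", "kettle"},
}
print(suggest({"coffee", "filters"}, histories))  # ['grinder', 'mug']
```

The instructive point is how little “understanding” is required: with uniform item identifiers and enough transactions, overlap statistics alone produce plausible suggestions.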

AI is also thriving in human recognition (primarily facial recognition) because we quickly figured out that faces generally have one of relatively few shapes and generally include eyes, a nose, and a mouth. The pattern is common enough that a system can readily identify landmark features, take measurements, and develop a model that uniquely identifies a face with some probability of being right. The same can be said for fingerprint recognition. Note that it was not always so for either of these technologies; early instances, trained primarily on Caucasian men, struggled to reliably identify members of other races and genders, and there are still failures[2].
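
As a rough illustration of the landmark-and-measurement idea, here is a toy sketch. The coordinates, landmark names, and tolerance below are invented, and real systems use learned models rather than hand-picked ratios; this only shows the flavor of the approach.

```python
# Toy landmark matching: represent a face as pairwise distances between a few
# landmarks, normalized by the inter-eye distance so the signature is roughly
# scale-invariant, then declare a match if all measurements agree closely.

from itertools import combinations
from math import dist

def signature(landmarks: dict) -> list:
    """Pairwise landmark distances, normalized by the inter-eye distance."""
    scale = dist(landmarks["left_eye"], landmarks["right_eye"])
    return [dist(a, b) / scale for a, b in combinations(landmarks.values(), 2)]

def same_face(f1: dict, f2: dict, tol: float = 0.05) -> bool:
    """Crude match: every normalized measurement agrees within a tolerance."""
    return all(abs(a - b) < tol for a, b in zip(signature(f1), signature(f2)))

face_a = {"left_eye": (30, 40), "right_eye": (70, 40),
          "nose": (50, 60), "mouth": (50, 80)}
# The same face photographed at twice the scale: the ratios do not change.
face_b = {k: (x * 2, y * 2) for k, (x, y) in face_a.items()}
print(same_face(face_a, face_b))  # True
```

Note that nothing in such a scheme is neutral: which landmarks are chosen, and on whose faces the measurements were tuned, determines who it recognizes well, which is exactly the training-population failure described above.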

It is interesting that, outside of healthcare, the risk associated with the decisions an AI system can make is also a barrier. Thus, while there are self-driving cars, they still generally require the presence of a human driver who is responsible for their actions, and they have endured some highly publicized failures[3]. Hopefully, those issues will eventually be overcome, but it is unclear what that success might look like (how much error would we, as a society, tolerate?), what infrastructure might need to be in place for the technology to truly meet expectations[4], and how the ethical issues around such use can be resolved.

In healthcare, risk is a clear issue, in part because therapeutic decisions can have significant consequences and because the delivery of healthcare rests on a combination of objective and subjective information.

The referenced article is correct that part of the problem lies in the highly local data management in healthcare facilities (there is no widely used common vocabulary) and in the regulatory and ethical barriers associated with removing the human from the equation.

There are also other realities:

  • Humans are not uniform; they vary widely in many ways. So, the mass of data needed to drive decision-making is absolutely enormous.
  • Human response to any treatment is widely variable; two individuals who appear to be very similar can react very differently to the same dose of a particular medication. As pharmacogenomics delves into some of these issues, that variability may eventually become understandable, presuming that the cost of acquiring the data and patients’ willingness to supply it do not remain barriers.
  • The patient is a primary source of information about their own health. Patients can be poor historians, tend to adopt impressions about themselves that may not be true, suffer from biases that color their observations, and often lack the vocabulary and knowledge to truly represent their current condition. What would we expect an AI system to do with a patient report of “I am allergic to all ‘mycins’”? (A sketch following this list illustrates the problem.)
  • Much of healthcare practice remains a combination of objective and subjective awareness by caregivers. Can AI systems handle the subjective part?
  • Caregivers have the same limitations as patients, and they are the source of much of what makes up medical data.
  • The consequences of risk are not limited to the patient; caregivers can face daunting regulatory and legal consequences if their use of (or reliance on) AI results in patient harm or death.
  • As much as AI is portrayed in the media as omniscient, it requires training[5], and we know that much of the success of AI systems depends on how, and by whom, they were trained.
  • Humans tend to resist change. Put another way, the radiologists described in the article (or other caregivers, for that matter) will likely not “go gentle into that good night.”[6]
  • Retrospective studies of AI on patient populations have the advantage of hindsight; it seems common for these studies to examine how much more quickly we would have detected a condition that we already know was there. It is quite different to look at a speck on a radiograph when we do not have the benefit of hindsight and assert with confidence that it represents a need for intervention.
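
Returning to the “mycins” report from the patient bullet above, consider what naive software might do with that statement. The drug list and class labels below are illustrative examples, not a clinical reference:

```python
# A sketch of why "I am allergic to all 'mycins'" is hard for software to act
# on: the "-mycin" suffix is a spelling pattern, not a pharmacologic class.
# Naive substring matching flags unrelated drugs and misses related ones.

DRUG_CLASSES = {
    "erythromycin": "macrolide",
    "azithromycin": "macrolide",
    "clindamycin":  "lincosamide",
    "vancomycin":   "glycopeptide",
    "daptomycin":   "lipopeptide",
    "gentamicin":   "aminoglycoside",  # spelled "-micin": substring misses it
    "tobramycin":   "aminoglycoside",
}

flagged = [drug for drug in DRUG_CLASSES if "mycin" in drug]
print(flagged)
# ['erythromycin', 'azithromycin', 'clindamycin', 'vancomycin', 'daptomycin',
#  'tobramycin'] -- five different classes lumped together by spelling alone.
```

The substring match lumps five unrelated pharmacologic classes together while missing gentamicin, an aminoglycoside just like tobramycin. Resolving the patient’s actual allergy still requires human judgment, or knowledge far richer than a name pattern.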

Thus, it appears to me that it may be some time before we have the professional, regulatory, and personal confidence to accept an AI radiologic diagnosis unreviewed by a human. And, because we are medical professionals, that time will need to be spent amassing a wealth of clinical evidence that the AI can be trusted with the task.

At least that is my take on things. What do you think?

As always, the opinions expressed herein are my own, and not necessarily those of ASHP or my employer.

[1] Leonard B, Reader R. Artificial intelligence was supposed to transform health care. It hasn’t. Politico. August 15, 2022. Viewed August 16, 2022.

[2] Magnet SL. When Biometrics Fail: Gender, Race and the Technology of Identity. Duke University Press; November 2011.

[3] Self-driving car accidents. Bing News search results.

[4] Self-driving car. Wikipedia. Last updated August 21, 2022. Viewed August 23, 2022.

[5] Walch K, Schmelzer R. How to build a machine learning model in 7 steps. TechTarget. Viewed at techtarget.com August 23, 2022.

[6] Thomas D. Do not go gentle into that good night. https://poets.org/poem/do-not-go-gentle-good-night. Viewed August 18, 2022.
