
When the AI is wrong

By Dennis Tribble posted 11-30-2023 22:59


Today (6/15/2023) I read an article in Becker's about a case in which a nurse felt compelled to draw blood from a leukemic patient because an AI had concluded that the patient might be septic, even though the nurse disagreed. The nurse felt that the AI did not understand the impact leukemia might have on a patient's presentation, or the risk a blood draw posed to that patient. The blood was drawn, the patient was not septic, and, thankfully, the patient did not acquire an infection.

The article goes on to note several other things:

  • Although the organization indicated that the ultimate decision belongs to the physician and the nurse, caregivers may feel pressured to comply with the AI's recommendations.
  • The article cited one nurse trainer who observed that newer, more digitally native caregivers tend to trust the AI more than their own observations.
  • The quality of the AI's recommendations relates directly to how the AI was trained. In this case, was the AI trained to understand how leukemia interacts with the presentations it associates with sepsis?
  • Organizations that punish providers for exercising clinical judgment and concluding that the AI may be wrong will likely select out the very providers whose skepticism would keep the AI honest.

This made me reflect on the whole notion of quality assurance for AI. It is probably reasonable to expect that an otherwise useful model may not have been trained on all the cases it turns out it needs to handle. What is the process by which such a model can be retrained (e.g., how could the model in this case be retrained to properly treat leukemia as a special case)? Are facilities that deploy these AI models prepared to raise these issues with the supplier of the model? Do those suppliers have systems in place to accept and act on these issues? How long should it take for such a supplier to update such a model? Quicker may not always mean better.

For those of us in pharmacy, I also note (as described in a previous blog) that we seem to have a rather polar relationship with automation: we either trust it implicitly or distrust it entirely. The problem described in the second bullet above is very real. How do we go about learning what AI can teach us while maintaining a healthy skepticism that permits us to apply our knowledge and experience where the AI seems to be incorrect?

What do you think?

As always, the thoughts and opinions in this blog are my own, and do not necessarily reflect the thoughts and opinions of my employer or of ASHP.

Dennis A. Tribble, PharmD, FASHP

Ormond Beach, FL

datdoc@aol.com
