I just got my most recent paper edition of AJHP (July 15, 2024) and read Steven Smoke's commentary entitled "Artificial intelligence in pharmacy: a guide for clinicians." In this article he articulates five principles that seem quite appropriate. His fifth principle, "Pharmacists must use AI responsibly," is one that I think deserves special attention. I agree that responsible use of AI must include appropriate use and citation when generative AI is used to prepare any kind of publication. I was disappointed, however, that the discussion of responsible use stopped there.
As noted in a previous blog, our historical use of automation (in my experience) indicates that we either trust our automation implicitly or distrust it completely. Neither approach will work well with AI. That previous blog, and the blogs to which it points, describe some interesting use cases.
I have some experience working with AI applications, and I note the following:
- AI attempts to model reality through advanced statistical modeling, and the construction of those models is key. In my experience, most of that construction involves iteratively applying a model to a known data set until the model returns the results we expect from that data set, or returns unexpected results that prove to be true on review of the data. That evaluation of model results is done by humans, is mind-numbing in its detail, and is subject to the bias and knowledge limitations of those reviewers. As a result, any AI model is an approximation of reality, and because it is an approximation, it will sometimes produce inappropriate results.
- Not all AI is self-teaching (at least in the beginning). In the projects in which I was involved, we spent years in the "person in the middle" phase of operation, in which every change to the model required human review (a sketch of such a review gate appears after this list). In fact, of the many models with which I worked, only one became reliable enough that we removed the human in the middle and let it rebuild itself; that particular model was relatively limited in scope and, I believe, the developers still keep an eye on it.
- AI that is self-teaching can get off-course - it is not a "set it and forget it" software system.
- It is tempting to use software tools for unintended purposes. Such use adds risk, since the models on which the AI is built may not have (and probably have not) anticipated that unintended use.
- AI ultimately advises humans, who are innate learners. It is part of human nature to learn from experience, so it is tempting to learn to implicitly trust an AI model that is "right" the vast majority of the time and to suspend judgment while using it. We need to consciously avoid that temptation. This means we need a set of fundamental knowledge that we apply to every recommendation we get from an AI system, as a check against the model moving into uncharted territory (the second sketch after this list shows one way such a check might look).
- Having an AI model be "wrong" in one instance is not a reason to reject use of the AI. It is a reason for investigation and, if needed, adjustment of the model or further limitation of its use. Sometimes the AI may be correct, and our impression of reality may be wrong.
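For readers who want to picture that "person in the middle" phase, here is a minimal sketch in Python. It is illustrative only: the accuracy threshold, the toy model, and the reviewer_approves hook are invented for this example, not drawn from any of the projects described above.

```python
# Hypothetical sketch of a "person in the middle" update gate: a retrained
# candidate model is scored against a known data set, and no candidate
# replaces the production model without an explicit human sign-off.
from typing import Callable, List, Tuple

# A "model" here is just a function from an input to a prediction.
Model = Callable[[float], float]
LabeledData = List[Tuple[float, float]]  # (input, known-correct output)

def accuracy(model: Model, known_data: LabeledData, tol: float = 0.1) -> float:
    """Fraction of known cases where the model lands within tolerance."""
    hits = sum(1 for x, y in known_data if abs(model(x) - y) <= tol)
    return hits / len(known_data)

def propose_update(candidate: Model, known_data: LabeledData,
                   reviewer_approves: Callable[[float], bool]) -> bool:
    """Deploy only if the candidate scores well on the known data set AND
    a human reviewer approves; the model never rebuilds itself."""
    score = accuracy(candidate, known_data)
    return score >= 0.95 and reviewer_approves(score)

if __name__ == "__main__":
    known = [(float(x), 2.0 * x) for x in range(1, 11)]  # known-true data

    def candidate(x: float) -> float:  # a near-perfect toy candidate
        return 2.0 * x + 0.01

    # Stand-in for the human review step (auto-approve for this demo).
    print(propose_update(candidate, known, lambda score: True))  # True
```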
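And here is an equally hypothetical sketch of the fundamental-knowledge check: every AI recommendation is screened against hard, human-curated limits before anyone acts on it. The drug names and dose ranges below are invented for illustration; a real system would take its limits from vetted clinical references.

```python
# Hypothetical guardrail: AI advice passes through human-curated limits,
# and anything outside them (or anything we have no rule for) is escalated
# to a human rather than silently trusted.
from dataclasses import dataclass

@dataclass
class Recommendation:
    drug: str
    dose_mg: float
    source: str = "AI model"

# Human-curated sanity limits (invented values for illustration only).
HARD_LIMITS_MG = {
    "drug_a": (5.0, 50.0),
    "drug_b": (100.0, 1000.0),
}

def review_recommendation(rec: Recommendation) -> str:
    """Accept the AI's advice only if it stays inside known-safe bounds."""
    limits = HARD_LIMITS_MG.get(rec.drug)
    if limits is None:
        return f"ESCALATE: no fundamental-knowledge rule for {rec.drug}"
    low, high = limits
    if low <= rec.dose_mg <= high:
        return f"PASS: {rec.drug} {rec.dose_mg} mg is within {low}-{high} mg"
    return f"ESCALATE: {rec.drug} {rec.dose_mg} mg is outside {low}-{high} mg"

if __name__ == "__main__":
    print(review_recommendation(Recommendation("drug_a", 25.0)))   # PASS
    print(review_recommendation(Recommendation("drug_a", 500.0)))  # ESCALATE
    print(review_recommendation(Recommendation("drug_c", 10.0)))   # ESCALATE
```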
So, in my opinion, responsible use of AI requires that we use it with diligence.
Note that many of our most common applications of AI have exceptionally low risk. For example, retail sales applications group customers of similar interests and use that grouping to suggest other purchases. The worst that happens is that a particular customer chooses not to buy, or makes a purchase and returns it as unsuitable. There are uses for AI in our practice that are similarly low risk (e.g., automating as much as possible of our distribution operations).
High-risk AI applications (such as self-driving cars) still require that the driver maintain attention on the road. Our clinical pharmacy practice is similarly a high-risk endeavor.
AI may eventually reach the point at which human intervention adds no value. I believe that point in time is far in the future.
What do you think?
Dennis Tribble, PharmD, FASHP
Retired
Ormond Beach, FL
tribbledennis@gmail.com