
Failure is an Option: How Pharmacists Learn, Adapt, and Lead in an AI-Augmented Workforce

By Susan Flaker

When we were in pharmacy school, the perils of a medication error were drilled into our heads. We were taught that it is the role of the pharmacist to catch these errors and prevent them from reaching the patient. We were trained to be meticulous in everything we do; we were trained to be perfect. In the world of artificial intelligence, perfection is no longer an option. It is through our failures that we learn our greatest lessons and grow our skills more than we ever could with books or theoretical case studies. Never did that feel more true than when I realized that a project I was working on had failed.

We were riding the high of receiving a grant to research the safety of electronic prospective medication order review (EPMOR). EPMOR is a technological functionality that reviews orders before they enter the verification queue. If an order meets all of the rule-based verification criteria established by a group of Subject Matter Experts, it does not need to go to a pharmacist for verification; if it does not, it is routed to the pharmacist along with the reason it did not meet criteria, allowing the pharmacist to evaluate the medication further. We had just finished one project that had been published, Ojha et al, and our first attempt had been a success. We were going to change the world of pharmacy; we were going to help the profession embrace artificial intelligence to handle the time-consuming, regulation-required work of order verification more efficiently.

So we got the team back together, many of whom had worked on the original study. We designed a study, ran it, and at the end realized that we had not designed the project well. I tried to put on a good face and force a research article out of it so that we could keep moving forward. The pressure was mounting; I wanted, no, NEEDED to get this out there. Then, in a meeting where I was trying yet again to shoehorn the information into a paper, one of my colleagues, Diana, went off mute and said, "I'm going to say what I think everyone is thinking: the data is just bad." And she was right. She took what I felt was my failure and put it out there for the world to see: I had designed a bad study, and as a result we had collected bad data that wasn't usable.

No one sets out to design a bad study, but at times it happens. You try to consider all of the variables and think everything through, but sometimes it just doesn't work. In our study we set out to demonstrate the safety of an EPMOR system, and figuring out how to do that proved difficult. The most direct measure of safety is reported safety events, but we quickly realized that reporting is sporadic and depends on the individual and their perceived severity of the event. Since that consistency was not there, we took a different approach. We decided to turn the EPMOR on, with the idea of alerting the pharmacist whenever there was a discrepancy between the preprogrammed rules and the medication order being verified. The focus was on orders that a pharmacist was prepared to verify but that our system would not have verified because it flagged them for further pharmacist review. The hope was that the alert would either encourage the pharmacist to look at the medication more closely or prompt them to help us refine the rule. We would run the study in three parts (a simplified sketch of the rule and alert logic follows the list):

1. Baseline – Rules ran in the background without pharmacist visibility.
2. Pop-up Alerts – When a pharmacist approved an order the system would have rejected, a pop-up asked for a free-text explanation.
3. Sidebar + Pop-up – Pharmacists could see rules in a sidebar before verification. If they ignored alerts, the same pop-up appeared.
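
To make the mechanics concrete, here is a minimal sketch of that logic in Python. The rules, order fields, and function names are illustrative assumptions for this post only; they are not the actual EPMOR implementation or the rules our Subject Matter Experts wrote.

```python
# Illustrative sketch of EPMOR-style rule-based review plus the study's
# feedback prompt. All rules, fields, and names here are invented for
# illustration; they are not the production system's rules.

from dataclasses import dataclass, field

@dataclass
class Order:
    drug: str
    dose_mg: float
    route: str

@dataclass
class ReviewResult:
    auto_verified: bool
    failed_rules: list = field(default_factory=list)

# Example rule set a Subject Matter Expert group might define.
RULES = {
    "dose_within_range": lambda o: o.dose_mg <= 1000,
    "approved_route": lambda o: o.route in {"oral", "IV"},
}

def review_order(order: Order) -> ReviewResult:
    """Auto-verify only if every rule passes; otherwise route the order
    to a pharmacist along with the names of the rules that failed."""
    failed = [name for name, rule in RULES.items() if not rule(order)]
    return ReviewResult(auto_verified=not failed, failed_rules=failed)

def on_pharmacist_verify(result: ReviewResult) -> None:
    """Phase 2/3 behavior: if a pharmacist verifies an order the rules
    would have rejected, prompt for a free-text explanation."""
    if result.failed_rules:
        print(f"Alert: this order conflicts with {result.failed_rules}.")
        print("Please enter an explanation before verifying.")

# Example: an order that fails the dose rule goes to the pharmacist,
# and verifying it anyway triggers the feedback prompt.
result = review_order(Order("acetaminophen", dose_mg=1500, route="oral"))
if not result.auto_verified:
    on_pharmacist_verify(result)
```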

Over the course of the study, 3,227 orders triggered 3,285 alerts. Of these, 1,432 (44%) contained insufficient data for analysis.

The team reflected on the missteps in this project and identified several key lessons to guide future work.

First, building staff buy-in is critical when introducing technology that disrupts established workflows. Pharmacists are highly skilled in the processes they use daily, and asking them to adopt a new tool, especially one that may change or fail, naturally creates uncertainty. Buy-in is further complicated when staff are unsure whether a change is temporary or permanent. As with most innovations, some individuals are early adopters, while others are more hesitant. By engaging too broad a group at the outset, we failed to cultivate a core group of early adopters who were motivated to explore, refine, and champion the tool. In retrospect, starting with a smaller, engaged group would have allowed the system to evolve more effectively. Embedding these early adopters within the broader work group would also enable them to serve as subject matter experts and trusted change agents during future expansion, increasing overall acceptance.

Second, there must be a deliberate balance between collecting meaningful feedback and minimizing staff burden. Data quality is influenced by many factors, including upfront engagement, study duration, workload, and the number of actions required to provide feedback. These considerations must be tailored to each project rather than applied uniformly. In this study, the feedback process required too much effort from pharmacists. While the additional steps were initially thought to be necessary, it became clear that the burden discouraged meaningful participation. A smaller rollout with highly engaged participants could have supported more detailed data collection, whereas a larger rollout would have needed fewer actions per interaction. Determining the appropriate rollout size and feedback design must occur early in the study design process.

Finally, study design must align with the size and readiness of the participant group. Large-scale rollouts can provide broader representation and larger data sets, but they also introduce greater resistance to change and increase the likelihood of incomplete or unusable data. Researchers must design studies around the expected level of engagement from participants, rather than assuming consistent participation across large groups. Aligning study complexity with participant readiness is essential to generating reliable, actionable data.

Despite the failure, we look at this as a huge win. Our team is moving forward with a redesigned study that works with a smaller group of engaged early adopters. These pharmacists will help co-create workflows, refine the alert design, and serve as change agents for future expansion. The new design will include robust, dedicated training sessions with opportunities for dialogue. We will collect more structured feedback, with radio buttons and a space for optional comments (a simple sketch of that structure follows below). Finally, we will have more communication between the staff and the team to talk about the "why" behind the technology. The hope is that as questions arise, we will already know many of the concerns the staff have moving forward.
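
As an illustration of the lighter-weight feedback the redesign calls for, a structured response might be captured like this. The specific reason categories are assumptions made for the sketch, not our actual instrument.

```python
# Illustrative sketch of the redesigned structured feedback: a required
# radio-button selection plus an optional comment. The reason categories
# are invented for this example, not the study's actual options.

REASONS = [
    "Rule is too strict for this patient",
    "Order was clarified with the prescriber",
    "Rule fired in error",
    "Other (see comment)",
]

def collect_feedback(reason_index: int, comment: str = "") -> dict:
    """One required selection plus an optional comment: far less effort
    per alert than the original free-text-only pop-up."""
    return {"reason": REASONS[reason_index], "comment": comment or None}

print(collect_feedback(2))
print(collect_feedback(3, "Dose appropriate per oncology protocol"))
```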

I failed. It's never something that is easy to admit. With my failure, however, I had a wonderful team behind me willing to support me and help redefine the scope of the project. It was through this failure that we learned a great deal, and we hope to have better data moving forward. I think everyone is going into this next phase with the hope that we are able to produce quality data; however, we know that if we fail, we will go back through the process again to make something that works. As long as people on the team are willing to step up like Diana did and admit when a project has produced bad data, we can be successful, because at the end of the day the quality of the data is the most important thing. There is so much unknown in the world of artificial intelligence that failure in these early days is not just an option; it's an inevitability that we must be prepared for.

This work is actively being done at Mayo Clinic by Dr. S. Flaker, Dr. B. Anderson, Dr. E. Draper, T. Le, Dr. D. Schreier, and Dr. H. Teaford.
