
What does a good study regarding automation look like?

By Dennis Tribble posted 04-13-2020 15:50

  
Let me start out by saying that this is all my opinion. I am throwing my thoughts out there in the hope that this will stimulate some discussion about what study design needs to look like and what kinds of evidence are appropriate to represent acceptable evidence that a technology does, or does not, provide improvements.

I periodically see or hear discussions that disparage the evidence for automation as "weak" for lack of study design, with the notion that the only good study is a randomized, double-blinded, placebo-controlled study with powerful statistical significance.

There can be no doubt that such study design is necessary when dealing with the effects of therapeutic drugs in humans:
  • There is a well-known placebo effect - a 1996 study in the British Medical Journal on the effect of pill color on perceived effectiveness is an interesting read.
  • Evaluation of effectiveness by clinicians can be colored by their expectation of (or desire for) results.
  • There can be wide variance in human response to a medication; we can only perceive true effectiveness when enough patients, from a sufficiently wide variety of patients, have tried the medication and it can be shown statistically that those who were treated fared better than those who were not, or who received more standard therapy.

Even then, there can be, and are, constraints on what kinds of studies can be performed. For instance, there are ethical prohibitions on denying patients therapy that might be life-saving, or on providing therapies that might be more harmful than useful.

Further, we can, and should, rely on basic science to inform some of our decisions. I daresay that none of us needs a double-blind, controlled trial on parachutes, nor would any of us agree to participate in such a study. We all experience gravity every day and do not require a study to remind us that it still exists. It is a fundamental, and well-documented, part of our human experience (unless, of course, you are an astronaut).

So, how are studies involving technology different?
  1. There really aren't placebos for most of our technologies. They are large enough, and invasive enough that we cannot blind their use. They are either there or they are not. If we need to demonstrate benefit, we need to measure the involved processes before the application of technology and afterward.
  2. Our technologies either produce specific end points or they do not. In that sense, their outcomes are quite measurable and deterministic. Statistics therefore provide less value in differentiating pre- and post-implementation measurements. The question is less whether the pre- and post-implementation systems are statistically different than whether any difference is meaningful. Most of our technologies can produce reams of data at exceptional levels of detail. So if, for example, the desired outcome for a technology is an improvement in the speed of a process, it matters less that the pre- and post-implementation data are statistically different than whether the improvement in speed is sufficient to produce other benefits. If the improvement in speed is 0.2 seconds, then even if that difference is statistically significant, it is still unlikely to produce a benefit unless the process being measured occurs hundreds of thousands of times a day.
  3. It does turn out that the definition of those endpoints is often where technology studies fail. Automation generally affects processes, and the entire process needs to be measured, not just sub-processes. For example, a study of IV workflow that focuses only on the time it takes to physically prepare the dose would not be able to measure benefits like reducing the number of doses being made in a day, or the reduction in waste from being able to re-purpose doses that were made but no longer needed, or the ability to capture the "dead" time between preparation and checking to reduce the overall transit time of a dose through the preparation process.
  4. It also turns out that good technology can change the way work is performed in ways that require some time to relearn how to perform old processes in new ways. So studies on the adoption of technology need to include some measure of what kinds of work changes were needed and how long it took for people to become proficient in the new, automated processes. And it is critically important that the impact of these technologies be measured after the users of the technology have become proficient with its use.
  5. Technologies may be incompletely designed for the work they are to address. For example, an IV robotic system that can only make doses that have a single active ingredient in a single, commercially available fluid cannot completely replace the laminar air flow hood, because those doses with multiple ingredients still have to be prepared somewhere. There may be enough of those single-agent doses to be prepared that the robotics are still valuable, but it won't eliminate the need for the IV room any time soon. The more things that have to be prepared and dispensed via alternative mechanisms, the less valuable the automation is likely to be.
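The distinction in point 2 between statistical and meaningful difference can be illustrated with a small simulation. This is only a sketch with made-up numbers: hypothetical dose-preparation times, a 0.2-second average improvement, and a simple Welch's t statistic computed by hand.

```python
# Sketch: statistical vs. practical significance in a pre/post technology study.
# All numbers are hypothetical, chosen only to illustrate the argument.
import math
import random
import statistics

random.seed(42)

n = 100_000  # a very high-volume process, so tiny effects become "significant"
pre = [random.gauss(120.0, 5.0) for _ in range(n)]    # mean ~120 s per dose
post = [random.gauss(119.8, 5.0) for _ in range(n)]   # mean ~119.8 s (0.2 s faster)

mean_diff = statistics.mean(pre) - statistics.mean(post)

# Welch's t statistic for two independent samples
var_pre, var_post = statistics.variance(pre), statistics.variance(post)
t = mean_diff / math.sqrt(var_pre / n + var_post / n)

print(f"mean improvement: {mean_diff:.3f} s, t = {t:.1f}")
# With n this large, t far exceeds any conventional significance threshold,
# yet a 0.2-second saving per dose is operationally trivial on its own.
```

With enough observations, almost any nonzero difference will clear a significance threshold; the question that matters is whether the measured difference translates into a concrete operational benefit.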
This list is not meant to be exhaustive; it just contains the things that have been running through my head about what kinds of evidence are needed to properly evaluate automation. It seems pretty clear to me that double-blind, placebo-controlled trials are unlikely to be practical, and that, ultimately, statistical evaluation of results is going to be less meaningful than other, more concrete measures (like reduction in cost, improvement in speed, or improvement in accuracy) which are rather deterministic.

So what do you think?

As always, the contents of this blog represent my own thinking, and not necessarily that of ASHP or my employer, BD.

Dennis A. Tribble, PharmD, FASHP
Ormond Beach, FL
datdoc@aol.com


Comments

07-10-2020 14:48

Dennis,

Thanks for sharing that experience. Of the several areas for improvement you identified there, one that stands out is the need for a data-driven mindset and improved data literacy within the profession. 

I'm now curious, and would like to see if there's a real opportunity, about how our pharmacy students are getting such data education and training (I don't recall having this). Not necessarily anything so sophisticated as applying functional data analytics, but really the ability to achieve operational excellence, or at a minimum to understand and support efforts for quality and process improvement.

But to your last point, having checklists and being data driven doesn't take you far if you don't have the "why" or context behind what you're doing or trying to do. We are good at using critical thinking when it comes to our patient care, but we need to remember to apply that to our operations, as well. 

Thank you again,

Neil

07-06-2020 09:48

Neil,
Thank you for your thoughtful, and well-organized reply.

In general, I find this kind of request coming from people who perceive informatics as "just another practice specialty" like cardiology or infectious disease. When their only tool is a hammer, all tasks look like a nail.

And, in many cases, technology systems can solve problems we have had for so long we no longer think of them or try to measure them. When I first got started in the IV workflow business, I would ask "Are missing medications a problem?" to which the reply was invariably "... horrible problem!!!". But when I would ask about frequency or amount of missing medications, the answer was also invariably "I have no idea". So.. yes... some people need to be convinced that they have a problem that needs to be (and can be) solved.

In general, however, I have seen a troubling trend for our profession to apply checklists rather than reason to problems, and so it becomes easy to dismiss a study that doesn't "check the list", even if doing so would provide no additional value or would be impractical to apply.

Dennis

07-05-2020 22:39

Dennis,


Thank you for this post. It caught my eye, and I’ve thought about it several times over the last couple of months. My initial reaction was that I’m surprised there are such expectations of studies of automation and workflow technologies. I apologize if my reply comes across as rambling, but I am hopeful that it adds to the discussion and others can weigh in. I also don’t want to appear unempathetic or antagonistic to those who require more rigorous studies or analysis of data, so please know that isn’t my intent. I honestly would like to learn more about setting realistic expectations on technology adoption for our fellow pharmacists. If that means providing more hands-on workflow mapping, or having more robust “labs” where our colleagues could see the automation in action to visualize how it may fit at their organization, I’d like to best support them. At this point, I also don’t see the viability of double-blind, controlled trials for this type of technology for the reasons you’ve said.


I wonder if the perceived requirement for statistically significant conclusions over meaningful impact is embedded in our expectations when evaluating a new therapy or guidelines. You mentioned something on the Pharmacy IT and Me podcast that has resonated with me: pharmacists are trained to believe the only acceptable level of risk is zero, so there may be some conflation, as this is how we’re used to operating (and rightly so when it comes to medication therapies!).


A question I have is, do you find this request coming from those you’d consider innovators and early adopters, or more of the early or late majority? Or is it really possible to categorize those who are making the request for stronger evidence?  I’d also ask, do the colleagues you get this feedback from acknowledge they have an area or workflow that needs to be improved or optimized? Their threshold may be higher because they look at the automation as more of a “delighter” instead of a required solution. 


I don’t believe there will be any proper studies that would be able to fully satisfy this group if they don’t already see that they have a gap or problem where automation is a possible solution. I keep thinking back to Everett Rogers’ Diffusion of Innovations book, and they sound like they’re probably the late majority or laggards (the book’s terms, not pejoratives). For most larger-scale automation technology, a randomized controlled experiment would be practically impossible, given the impact to the overall system and the cost to implement, and for the other reasons you stated. To get this group to adopt will take seeing how successful the technology is with those who do implement it earlier (the innovators, early adopters, and possibly the early majority) and the support of opinion leaders.


To be good stewards of the technology, trust is required. This is built through individuals (KOLs, thought leaders, etc.), through organizational promotion (ASHP putting out opinions on automation, but obviously not endorsing a specific vendor), and then through the vendors building market trust, as described in The Speed of Trust, by Stephen M. R. Covey. I also think there is a net benefit to all healthcare technology companies as each one builds more trust with the market around innovative solutions.


The reality of resource scarcity in our healthcare system may also be making our colleagues hyper risk averse. I think of the quote, “perfection is the enemy of progress”, though I’m not suggesting we don’t strive for achieving excellence. If these automation solutions require significant resources and take a relatively long time to achieve the desired ROI, Pharmacy may be staking much of their budget or a significant portion of the health-system’s capital on the solution. They may think they need to be 100% certain it will work.


So, to really respond to what is different about studies or data generated to support implementing automation or technology in our pharmacy workflows, I also believe that we need to steer the focus of our colleagues to the following reiterations of what you said:


  1. As you conveyed in a few of the points, the holistic view of the automation’s impact is more important from an organizational, safety, business, and satisfaction viewpoint than the individual statistics of the data gathered pre- and post-implementation of the automation solution. As an operational leader, you may know that if you are able to reduce the process time of something, you’ll be able to achieve some gains with the time elsewhere. Or, if you are able to reduce the number of errors in a process done by people, you’ll be able to be successful with certain quality KPIs. This is where measuring the entire process, and not just the sub-processes, is important.
  2. Each Pharmacy system (Pharmacy workflow, physical layout, needs) is different, but not so complex as an individual human or a population of humans. We can more easily control the variables and minimize the impact of extraneous variables in the Pharmacy system, so when we introduce the automation intervention, we can reasonably determine that the observed outcomes are causally related to the automation. In human studies, there are so many possible variables that you can’t extrapolate how a therapy may translate from one single person to another in a very different demographic or cohort. I believe our health-system pharmacy cohort is much less diverse than the entire population who could be taking certain medications, meaning we don’t need such statistically powerful studies done.
  3. Lastly, I’d say it’s important to remember the Process Improvement model of “People, Process, and Technology”. When implementing an automation technology, don’t forget the impact to the people and the existing processes. You discuss this in point 4, but I wanted to reinforce how critical it is that any examination of a new technology should include observation and evaluation on the people stakeholders (how much re-training is needed, were staff repurposed, how long until proficiency is achieved) and the changes to the direct and ancillary work activities (will a change in inventory location affect deliveries or other workers, did the technology physically displace another work area).
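The point about measuring the entire process rather than a sub-process can be made concrete with a tiny sketch. The timestamps below are entirely hypothetical, chosen only to show how a narrow focus on preparation time hides the "dead" time between preparation and checking.

```python
# Sketch: whole-process vs. sub-process measurement for one IV dose.
# All timestamps are hypothetical minutes elapsed since order receipt.
dose = {
    "prep_start": 10.0,
    "prep_end": 14.0,     # physical preparation takes 4 minutes
    "check_start": 39.0,  # the dose then waits 25 minutes for a check
    "dispensed": 42.0,
}

prep_time = dose["prep_end"] - dose["prep_start"]
dead_time = dose["check_start"] - dose["prep_end"]
transit_time = dose["dispensed"]  # total time from order receipt to dispense

print(f"prep: {prep_time} min, dead: {dead_time} min, transit: {transit_time} min")
# A study measuring only prep_time (4 min) would conclude little could be
# gained, while most of the 42-minute transit is actually dead time.
```

An automation study that instruments only the preparation step can report no improvement even when the technology substantially shortens the dose's end-to-end transit.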

Again, I hope this has added value and further stimulates an important conversation. Thank you for bringing it up here!