
Generative AI

By Dennis Tribble posted 06-01-2023 00:59

  

While reading an article about what some consider the 10 trends to watch in healthcare, I encountered a term I was unfamiliar with in the #1 item: Generative AI.

According to Techopedia, Generative AI “is a broad label that's used to describe any type of artificial intelligence (AI) that can be used to create new text, images, video, audio, code or synthetic data” where synthetic data is “data input that is generated mathematically from a statistical model. Synthetic data plays an important role in finance, healthcare, and artificial intelligence (AI) when it is used to protect personally identifiable information (PII) in raw data and fabricate massive amounts of new data to train machine learning (ML) algorithms.”

Further, “Synthetic data is created by executing sequential statistical regression models against each variable in a real-world data source. Any new data collected from the regression models will statistically have the same properties as the originating data, but its values will not correspond to a specific record, person, or device.” 
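To make that description concrete, here is a minimal Python sketch of sequential regression synthesis as I read it. The toy variables, the use of ordinary linear regression, and the bootstrapped residuals are all my own illustrative assumptions; production synthetic-data generators use considerably more sophisticated models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Toy "real" dataset: 500 records of three correlated numeric
# variables (age, weight, serum creatinine -- purely illustrative).
n = 500
age = rng.normal(60, 12, n)
weight = 70 + 0.3 * (age - 60) + rng.normal(0, 10, n)
scr = 0.8 + 0.01 * (age - 60) + 0.005 * (weight - 70) + rng.normal(0, 0.15, n)
real = np.column_stack([age, weight, scr])

def synthesize(real, rng):
    """Sequential regression synthesis: regress each variable on the
    variables synthesized before it, then generate new values by adding
    resampled residuals to the regression predictions."""
    n, k = real.shape
    synth = np.empty_like(real)
    # First variable: resample from its empirical distribution.
    synth[:, 0] = rng.choice(real[:, 0], size=n, replace=True)
    for j in range(1, k):
        model = LinearRegression().fit(real[:, :j], real[:, j])
        residuals = real[:, j] - model.predict(real[:, :j])
        # Predict from the *synthetic* predecessors and perturb with
        # bootstrapped residuals, so no real record is copied verbatim.
        synth[:, j] = model.predict(synth[:, :j]) + rng.choice(residuals, size=n, replace=True)
    return synth

synthetic = synthesize(real, rng)
print("real means:     ", real.mean(axis=0).round(2))
print("synthetic means:", synthetic.mean(axis=0).round(2))
```

The statistical properties (means, variances, correlations) carry over, but no synthetic row corresponds to a real record, which is the privacy claim in the definition above.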

I found myself looking at this notion with some degree of ambivalence. 

  • Part of me thought, “Wow! This would be a great way to create a huge fund of de-identified data that could be used for research and analytics”.
  • Another part of me remembered George Box’s observation that “all models are wrong, but some are useful,” and wondered how far synthetic data might deviate from the data from which it was derived, and how far astray those deviations might lead the AI.

A colleague and I recently discussed our tendency in healthcare either to treat data and AI as fundamentally untrustworthy or to accept their outputs blindly, without applying any sanity checks. Neither of these polar opposites serves our use of analytics well. Rather, I reassert that we must approach these kinds of systems with a willingness to learn from them, tempered by a firm grounding in what we know to be real. So how would we go about reassuring ourselves that synthetic data is close enough to the real data from which it was derived that we could trust conclusions drawn from it? If the synthetic data were, say, a radiographic image, could we tell the difference between that image and the original from which it was derived?
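One crude way to start answering that question, continuing the toy example above (and reusing its real and synthetic arrays), is to compare each variable's marginal distribution with a two-sample Kolmogorov–Smirnov test and to compare the correlation matrices. These checks are my own illustration, and they are necessary at best, never sufficient: synthetic data can pass both and still misrepresent higher-order structure.

```python
import numpy as np
from scipy.stats import ks_2samp

def fidelity_report(real, synth, names):
    """Crude fidelity checks: per-variable two-sample KS tests plus the
    largest absolute gap between the two correlation matrices."""
    for j, name in enumerate(names):
        stat, p = ks_2samp(real[:, j], synth[:, j])
        print(f"{name}: KS statistic = {stat:.3f}, p = {p:.3f}")
    gap = np.abs(np.corrcoef(real, rowvar=False) - np.corrcoef(synth, rowvar=False)).max()
    print(f"max |corr(real) - corr(synth)| = {gap:.3f}")

# real and synthetic as produced by the sketch earlier in this post
fidelity_report(real, synthetic, ["age", "weight", "scr"])
```

A large KS statistic or a noticeable correlation gap would be exactly the kind of deviation Box’s warning should make us look for.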

 

Given a recent example in which AI proved less reliable than expected, some caution seems warranted. In a related article, it appears that Duke University has put a system in place to deal with this very issue.

 

What do you think?  

 

As always, the comments in this blog reflect my thoughts and not necessarily those of ASHP or of my employer. 

Dennis A. Tribble, PharmD, FASHP 

Ormond Beach, FL 

datdoc@aol.com 