Q2 2025 Update: Charting the Course Towards Pharmacy General Intelligence (PGI)

By Ben Michaels posted 15 days ago

  

Pharmacy General Intelligence: Q2 2025 Update

Welcome to the second in a series of quarterly updates tracking the evolution of AI through the lens of Pharmacy General Intelligence (PGI). As a reminder, PGI focuses specifically on AI's potential to perform at or beyond the level of a pharmacist, envisioning AI agents seamlessly integrated into pharmacy workflows for tasks like medication verification, dose adjustments, and patient counseling notes.

My goal is to provide a clear overview of the AI landscape, highlighting both the advancements propelling us towards PGI and the remaining hurdles. Expect insights into policy changes, industry trends, and technological breakthroughs. Your feedback is invaluable, so please share your thoughts if this is (or is not) helpful!

Q2 2025: Continued Momentum and Emerging Realities

The second quarter of 2025 maintained the dynamic pace set earlier in the year, with significant developments across policy, industry, and technology that will play into the development of PGI. 

One high-level takeaway now being measured is how fast LLM integration is progressing relative to other new technologies.1 The analysis in the Trends – Artificial Intelligence report calls out some staggering trends in the pace of LLM adoption.1 The comparison of how long it took LLMs versus the internet to build a user base outside North America is one of the biggest shocks: LLM use accomplished in 3 years what took the internet 23 years (reaching 90% of users outside North America).1

Also, if you haven’t checked out the ASHP Artificial Intelligence in Pharmacy Practice Case Studies, I encourage you to do so!2 It is a valuable resource covering implementations of pharmacy-specific AI solutions across different health systems.

Policy Shifts: The regulatory landscape for AI in healthcare continues to evolve rapidly. Three major policy events occurred in Q2.

The first was an executive order on cybersecurity.3 By November 1, 2025, relevant agencies (Commerce/NIST, Energy, Homeland Security/Under Secretary for Science and Technology, National Science Foundation) must make existing cyber defense research datasets accessible to the broader academic community. By the same date, the Department of Defense, Department of Homeland Security, and Director of National Intelligence must incorporate AI software vulnerability and compromise management into their existing processes, including incident tracking, response, reporting, and sharing indicators of compromise for AI systems.3 This may be telling of what future executive orders could look like for healthcare systems, particularly any requirements around sharing healthcare datasets.

The second executive order, "Advancing Artificial Intelligence Education for American Youth," aims to cultivate AI skills and understanding among the nation's youth from K-12 education through postsecondary and lifelong learning opportunities.4 While not directly addressing healthcare, this order could have a long-term impact on the industry by fostering a larger, more AI-competent workforce. These types of mandates may also foreshadow requirements for health systems to maintain a workforce dedicated to AI integration.

Last, a proposed 10-year moratorium, part of a House-passed budget reconciliation bill from May 22, 2025, would bar state and local laws or regulations that limit, restrict, or otherwise regulate AI models, systems, or automated decision systems in interstate commerce.5 This potential legislation is in line with other proposals that aim to eliminate state-specific barriers to the integration and use of AI, and it could effectively prevent states from regulating AI use in healthcare.

Industry Trends:

New roles are emerging in healthcare IT, such as AI/ML specialists, healthcare data scientists, and prompt engineers, while traditional administrative roles and rule-based tasks are seeing reduced human involvement due to automation. This shift necessitates upskilling existing staff and cultivating capabilities like ethical AI stewardship and human-centered design, as highlighted by Mayo Clinic.6,7

The Stanford Health Care Data Science Team's FURM assessments directly address the "AI chasm" – the gap between AI model development and real-world outcomes – by evaluating AI systems beyond mere model performance. These assessments evaluate six AI model-guided solutions, with two, "Screening for Peripheral Arterial Disease (PAD)" and "Improving Documentation and Coding for Inpatient Care," having moved into an implementation phase.8 The latter is now live, demonstrating real-world impact in an operational setting, which is crucial for PGI.8

A recent survey indicates healthcare organizations are primarily focused on using GenAI for administrative efficiencies and workforce stability, with 80% prioritizing workflow optimization and 85% prioritizing recruiting and retaining nursing staff.6 Nurses and pharmacists are enthusiastic about GenAI's potential to reduce burnout by cutting down on repetitive non-clinical tasks and assisting with documentation. They also see GenAI as a tool to expand collaboration with universities for professional development and to combat workforce shortages. Importantly, no health professionals surveyed believed GenAI would measurably reduce the need for physician or nursing staff, alleviating fears of direct replacement.6 A shout-out to the pharmacy personnel in the survey, as they ranked second behind nursing for use of AI at work! This openness to AI integration could allow PGI to be adopted more rapidly, as users are accepting of its impact.

Technological Advancements: The quarter saw a continued push towards smaller, more efficient AI models and improved interoperability.8,9,10,11,12,13,14 Google quietly launched its experimental AI Edge Gallery Android app, enabling users to run sophisticated AI models like Gemma 3 directly on their smartphones without an internet connection.9,13 This on-device processing, optimized by Google's LiteRT and MediaPipe frameworks, is particularly valuable for data-sensitive sectors like healthcare because it enhances data privacy and eliminates network dependencies for core functionality. Microsoft also advanced its small language models (SLMs) with the introduction of Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.11 The ability to create effective, locally run small language models is significant for any use involving patient data: processing data on the local device rather than sending it to a cloud server reduces the risk of data breaches and creates a more secure environment.14
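To make the privacy rationale concrete, here is a minimal, purely illustrative sketch (not from any cited implementation) of the kind of routing rule an on-device deployment enables: prompts that appear to contain patient identifiers stay on a local SLM, while everything else may go to a cloud model. All names here (`contains_phi`, `route_prompt`) and the toy identifier patterns are hypothetical, and real PHI detection is far broader than a few regexes.

```python
import re

# Toy subset of protected health information (PHI) patterns for illustration.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like number
    re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),  # medical record number
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),      # date-of-birth style
]

def contains_phi(text: str) -> bool:
    """Return True if any toy PHI pattern matches the text."""
    return any(p.search(text) for p in PHI_PATTERNS)

def route_prompt(text: str) -> str:
    """Choose an inference target: on-device SLM for PHI, cloud otherwise."""
    return "local-slm" if contains_phi(text) else "cloud-llm"
```

For example, `route_prompt("Verify warfarin dose for MRN 12345")` would return `"local-slm"`, keeping the request on the device, while a generic policy question would be eligible for a larger cloud model.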

The need for agents to communicate and interoperate effectively is gaining traction and will be incredibly important for applications where sharing patient information is crucial.16,17 Anthropic's Model Context Protocol (MCP) and Google's Agent2Agent (A2A) protocol are becoming key contenders for a universal language in the agentic AI ecosystem, with multiple major tech players adopting them as their agent frameworks. Both protocols aim to break down data silos between agents built on different frameworks, enable agent collaboration, and preserve security and intellectual property protection.16,17 MCP, in particular, offers better control and directionality for enterprises compared to traditional APIs, allowing organizations to configure custom instructions on what agents can access.16,17 This interoperability is crucial for PGI, as a pharmacist's workflow involves interacting with various systems and data sources. As mentioned in last quarter's update, one of the biggest roadblocks to healthcare utilization of AI remains the ability to access and share data. Both MCP and A2A could be solutions to this roadblock if EMRs work to integrate the protocols effectively into their designs.
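For readers curious what this looks like on the wire: MCP is built on JSON-RPC 2.0, and a tool invocation is a request with method `tools/call` carrying a tool name and arguments. The sketch below shows the general shape of such a message; the tool name `lookup_medication_orders` and the patient identifier are hypothetical examples of what an EMR-side MCP server might expose, not part of any real product.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical pharmacy tool exposed by an EMR-side MCP server.
msg = make_tool_call(1, "lookup_medication_orders", {"patient_id": "demo-001"})
```

The appeal for enterprises is that the server, not the agent, decides which tools exist and what each one may return, which is the access-control point the VentureBeat coverage highlights.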

The Road to PGI: Challenges and Opportunities

The progress in Q2 2025 shows a continued push toward PGI, largely driven by increasingly capable and efficient small AI models that can run on edge devices, coupled with the development of interoperability protocols. Falling inference costs and rising investment in AI infrastructure should continue to yield better, more secure, and more accessible models.

However, significant challenges remain. Reliably sourced, comprehensive, and timely patient data for LLMs is key. While AI approaches human-level performance on some benchmarks, complex reasoning still presents a challenge, and accuracy issues persist in early experimental applications. Many of the tasks pharmacy personnel handle (verification, clinical decision making, prior authorization submissions, etc.) require not only a high level of data recall and processing, but also reasoning and complex thinking. The "AI chasm" between model performance and real-world clinical impact continues to complicate effective evaluation.

United States policy continues to eliminate barriers to the integration, research, and use of AI, not only in healthcare but also in defense and other sensitive data industries. In addition, the current policy push looks to make AI research and integration a requirement in many industries while encouraging education around its use and development.

Looking Ahead

Q2 2025 has highlighted the accelerating pace of AI development and its deepening integration into healthcare. The advancements in efficient SLMs and agent interoperability protocols are particularly promising for PGI, enabling more localized, private, and precise AI solutions in pharmacy.

I want to call out the recent blog post by Dennis Tribble discussing the "wallpaper" problems that health systems face.18 The idea behind his post is that some problems have become so pervasive that, like wallpaper, they are no longer noticed. This ties into PGI because increasingly capable solutions, up to and including PGI, could finally make these wallpaper problems addressable. Core functions that pharmacy personnel perform today could also shift to PGI in the future.

As a pharmacy leader, it is important to consider what your staff could be redeployed to do and what they don’t have time for today. The two choices health systems will likely weigh are eliminating positions that can be replaced by AI, or offering new services and processes that were never possible in the past. The health systems that can leverage their existing workforce to do what is not possible today will succeed in the future, as they will be able to offer a new level of care for their patients.


References:

  1. Meeker M, Simons J, Chae D, Krey A. Trends – Artificial Intelligence. Bond Capital. Published May 30, 2025. https://www.bondcap.com/report/pdf/Trends_Artificial_Intelligence.pdf
  2. ASHP. AI Case Studies. Accessed June 20, 2025. https://www.ashp.org/pharmacy-practice/resource-centers/digital-health-and-artificial-intelligence/ai-case-studies
  3. Sustaining Select Efforts to Strengthen the Nation’s Cybersecurity and Amending Executive Order 13694 and Executive Order 14144. The White House. Published June 6, 2025. Accessed June 20, 2025.
  4. Advancing Artificial Intelligence Education for American Youth. The White House. Published April 28, 2025. Accessed June 20, 2025.
  5. Samp T, Tobey D, Darling C, Loud T. Ten-year moratorium on AI regulation proposed in US Congress. DLA Piper. Published May 22, 2025. Accessed June 20, 2025. https://www.dlapiper.com/en-us/insights/publications/ai-outlook/2025/ten-year-moratorium-on-ai
  6. Wolters Kluwer. Generative AI: Balancing today’s needs and tomorrow’s vision. Accessed June 20, 2025. https://www.wolterskluwer.com/en/know/future-ready-healthcare
  7. Dyrda L. Health systems add, drop roles with AI. Becker's Hospital Review. Accessed June 20, 2025.
  8. Callahan A, McElfresh D, Banda JM, et al. Standing on FURM Ground: A Framework for Evaluating Fair, Useful, and Reliable AI Models in Health Care Systems. NEJM Catalyst Innovations in Care Delivery. 2024;5(10). Published March 14, 2024.
  9. Nuñez M. Google quietly launches AI Edge Gallery, letting Android phones run AI without the cloud. VentureBeat. Published June 2, 2025.
  10. Wodecki B. Google unveils Gemma 3: The 'world's best' small AI model that runs on a single GPU. Capacity Media. Published March 12, 2025.
  11. Microsoft. One year of Phi: Small language models making big leaps in AI. Microsoft Azure Blog. Published April 30, 2025.
  12. Microsoft. What Is Edge Computing? Microsoft Azure. Accessed June 10, 2025.
  13. Nuñez M. Google quietly launches AI Edge Gallery, letting Android phones run AI without the cloud. VentureBeat. Published June 2, 2025.
  14. Rooney P. IT leaders see big business potential in small AI models. CIO. Published May 1, 2025.
  15. Wodecki B. Google unveils Gemma 3: The 'world's best' small AI model that runs on a single GPU. Capacity Media. Published March 12, 2025.
  16. David E. The interoperability breakthrough: How MCP is becoming enterprise AI's universal language. VentureBeat. Published May 13, 2025.
  17. Google LLC. A2A: An open protocol enabling communication and interoperability between opaque agentic applications. GitHub. Accessed April 20, 2025.
  18. Tribble D. Wallpaper. ASHP Connect. Published May 23, 2025. Accessed June 22, 2025. https://connect.ashp.org/blogs/dennis-tribble/2025/05/23/wallpaper

Comments

11 days ago

Dennis,

Thank you for the comments!  For the observations:

Patient data – Absolutely right here and great call out.  Even what some may consider non-Healthcare data can be relevant also.  Besides the actual patient data in the EMR, being able to consider insurance coverage, hospital formulary, medication availability, etc. all comes into play.

Healthcare data is really dirty – Unfortunately… I also have the scars here. We are leveraging LLMs for exactly what you describe. The ability to group or classify unstructured data is incredibly valuable and something that LLM-based AI excels at.

Pharmacy’s love-hate relationship with technology – I was surprised and excited by the survey results that Pharmacy ranked second behind Nursing for utilization of AI at work.  This was not what I was expecting when I looked at the survey and it was a pleasant surprise.  As you outline, I hope that pharmacy is open to “smart utilization”.  Being able to recognize the advantages and potential risks of any technology is crucial.

Human frailty – This is a rabbit hole, and there are already trials showing that familiarity with technology is correlated with trust.

Pharmacy Practice is a State-Regulated Practice – Yes, and the regulation of adoption and use is going to be extremely interesting to watch at the state level. The proposed 10-year federal policy could guide this in some ways by removing barriers at the state level. With the current federal push to change traditional systems, there may be some large changes to traditional state regulations.

Relax. AI Won’t Be Taking Over The World Anytime Soon. Maybe Ever. – I don’t have any answers here regarding superintelligence or if we will ever achieve singularity, but I can add some background on why Sam Altman (and other tech leaders) are focusing on this idea of superintelligence.  Currently the largest push for the newer models, agents, data frameworks, etc. are all based on increasing the capabilities of the systems to be better at coding. 

The idea behind this is related to the Law of Conservation of Information in Search. The available coding products have moved from assist to execution: the initial use was more of a QA reference in the traditional chat format, while now agents are coding solutions, adjusting code based on errors, and writing documentation. The next level Sam Altman and others hope to achieve is moving from this execution stage to an improvement stage beyond human capabilities. That would be the moment when AI can self-improve at a level exceeding human development and essentially innovate itself to higher and higher capabilities.

Whether this is actually possible with current transformer-based large language models relates to what Apple was exploring in their paper. If it is possible, AI could potentially advance itself to the point where it could code abductive logic and exceed the limits of brute-force computing power. I have no idea if this will happen though!

12 days ago

Ben,

Below is an interesting discussion of limits for LLMs and AGI systems you might find interesting. It comes from Jim Rickards, who is mostly involved with public policy, but it quotes some authoritative sources. Jim is sometimes scary, but is rarely wrong.

I. Relax. AI Won’t Be Taking Over The World Anytime Soon. Maybe Ever.

The best-known figure in the world of AI is Sam Altman. He’s the head of OpenAI, which launched the ChatGPT app a few years ago.

AI began in the 1950s, seemed to hit a wall from a development perspective in the 1980s (a period known as the AI Winter), was largely dormant in the 1990s and early 2000s, then suddenly came alive again in the past ten years.

ChatGPT was the most downloaded app in history over its first few months and has hundreds of millions of users today. Altman was pushed out by the board of OpenAI last year because the company was intended as a non-profit entity developing AI for the good of mankind, and Altman wanted to turn it into a for-profit entity as a prelude to a multi-hundred-billion-dollar IPO. When the top engineers threatened to quit and follow Altman to a new venture, the board quickly reversed course and brought Altman back into the company, although the exact legal structure remains under discussion.

Meanwhile, Altman has charged full-speed ahead with his claims about superintelligence (also known as artificial general intelligence (AGI), with the key word being “general,” which means the system can think like humans, only better). One way to understand superintelligence is the metaphor that humans will be to the computer as apes are to humans. We’ll be considered smart but not smarter than our machine masters. Altman said that “in some ways ChatGPT is already more powerful than any human who ever lived.” He also said he expects AI machines “to do real cognitive work” by 2025 and to create “novel insights” by 2026.

This is all nonsense for several reasons.

The first is that training sets (the materials studied by large language models) are becoming polluted with the output from prior AI models so that the machines are getting dumber not smarter.

The second is what is known as the Law of Conservation of Information in Search. This law (supported by applied mathematics) says that computers may be able to find information faster than humans, but they cannot find any information that does not already exist. In other words, the machines are not really thinking and are not really creative. They just connect dots faster than we do.

A new paper from Apple concludes: “Through extensive experimentation across diverse puzzles, we show that frontier LRMs [Large Reasoning Models] face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.”

This and other evidence point to AI reaching limits of logic that brute force computing power cannot overcome.

Finally, no developer has ever been able to code abductive logic: really, common sense or gut instinct. That is one of the most powerful reasoning tools humans possess. In short, superintelligence will never arrive.

All the above being said, there is still a lot of "connecting the dots" that is impractical for human minds but a great use for AI.

Dennis Tribble

12 days ago

Ben,

A very interesting and comprehensive review of the landscape. I have some observations from 50+ years in practice, over 30 of that in healthcare automation.

  • Patient data - I think what you really mean here is healthcare data since a lot of our opportunities for PGI can be found in operational rather than clinical applications, some of which involves patient data.
  • Healthcare data is really dirty - it's not just access that is a barrier; quality and standardization are serious barriers too. I have war stories. But the short description of the issue is that the people supplying the data do not see supplying that data as their primary job. They see rendering patient care, or supporting that rendering, as their primary job. They do what they must (and only what they must) to get past supplying the data and back to their "real job". This is not just clinical. I spent the last 5 years or so before retirement working on analytical models using ADC transactional data. It was eye-opening. I see a real opportunity here for PGI in cleaning up and standardizing healthcare data so that it can be meaningfully queried and analyzed.
  • Pharmacy's love-hate relationship with technology - technology is a toolset. It is important to understand the benefits and drawbacks of any tool for any particular job. Talk to a plumber, or a carpenter. They can tell you a lot about their tools, and which tools to use for what jobs, and what to look for to be certain of a desirable outcome. In my experience, pharmacy has generally failed to do that with technology. Rather, we either trust it implicitly or discard it entirely. Neither polar response is appropriate for PGI. We have to know what it can do well, and what things we should consider questionable. IMO, this means that our basic training in biochemistry, pharmacology, and physiology has to serve as a basic backcheck on anything we get out of generative AI.
  • Human frailty - briefly, our brains are built to apply general models. Thinking is really hard. Further, we are learning machines, sometimes learning from experience what we might otherwise reject.  The more often generative AI is correct, the less and less likely we will be to question its results.
  • Pharmacy Practice is a State-Regulated Practice - barriers to our adoption and use of technology at the state level tend to be far firmer than any barriers at the national level. Irrespective of where national laws and regulations may go, we have to convince boards of pharmacy, many of whom perceive themselves as protectors of a very old vision of pharmacy practice, and all of whom perceive a need to be able hold someone accountable when things go awry to properly embrace some technologies.

Just some additional things to think about.

Best,

Dennis Tribble

P.S., Thanks for the shout out. I am glad you found that commentary useful.