Quick Take
- NEJM AI places current deskilling fears in a long historical arc, noting that anxiety about technology atrophying clinician skills has existed since the invention of the printing press. However, the operational reality for pharmacy leaders in 2025 is distinct: the shift from deterministic decision support to probabilistic, agentic AI introduces a novel risk of automation complacency that previous tools did not.
- As health systems aggressively expand autoverification to combat workforce shortages, the pharmacist's role is pivoting from a human-in-the-loop verifier to a human-on-the-loop auditor. The primary safety threat is shifting from a knowledge deficit to an attention deficit, requiring leaders to implement rigorous validation standards to satisfy evolving expectations from CMS and The Joint Commission.
Why It Matters
- Governance of attention: Relying on a pharmacist clicking a verify button is no longer a sufficient safety control. Leaders must treat AI not as a static tool but as a clinical product requiring continuous monitoring. Governance must shift from project management to product management, ensuring that models are audited for drift and logic failures post-deployment.
- Targeted friction: Efficiency should not be the only goal. Departments must implement intentional speed bumps for high-risk medications, such as requiring manual weight entry or barcode scans for narrow-therapeutic-index drugs. These forced interactions break the cognitive trance of autoverification and ensure active oversight where it matters most.
- Adversarial validation: To maintain safety, leaders should move from standard testing to adversarial validation. This involves red teaming the workflow by periodically injecting sentinel orders—simulated errors—into the live or test environment to measure if staff are actually catching mistakes or merely rubber-stamping the system's output.
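The sentinel-order audit described above boils down to a simple metric: of the simulated errors injected into the queue, what fraction did staff actually flag? A minimal sketch follows; the `SentinelOrder` structure and `catch_rate` function are illustrative assumptions, not part of any EHR vendor API.

```python
from dataclasses import dataclass

@dataclass
class SentinelOrder:
    """One simulated error injected into the verification queue (hypothetical structure)."""
    order_id: str
    error_type: str       # e.g., "10x overdose", "wrong route"
    caught: bool = False  # set True if a pharmacist flags the order

def catch_rate(sentinels: list[SentinelOrder]) -> float:
    """Fraction of injected simulated errors that staff flagged."""
    if not sentinels:
        return 0.0
    return sum(s.caught for s in sentinels) / len(sentinels)

# Example audit cycle: staff caught 3 of 4 injected errors.
audit = [
    SentinelOrder("S1", "10x overdose", caught=True),
    SentinelOrder("S2", "wrong route", caught=True),
    SentinelOrder("S3", "duplicate therapy", caught=False),
    SentinelOrder("S4", "renal dose", caught=True),
]
print(f"Sentinel catch rate: {catch_rate(audit):.0%}")  # prints 75%
```

Tracking this rate over successive audit cycles gives leaders a concrete signal of vigilance atrophy: a declining catch rate indicates rubber-stamping long before a real error slips through.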
Bottom Line
Automated order approval shifts pharmacists into oversight roles and raises the risk of rare but critical failures; assign a pharmacy AI governance lead to own safety validation and run regular audits to prevent vigilance atrophy.
Key Details
- Autoverification mechanics: EHR rules now automatically verify orders based on hard criteria such as renal function and formulary status. While efficient, this creates a risk of zombie verification: pharmacists overseeing mostly correct queues lose the situational awareness required to spot rare, catastrophic errors.
- Generative features: Emerging agentic tools in the EHR now draft progress notes, summarize patient histories, and propose orders. Unlike previous alerts that flagged specific conflicts, these tools generate content, shifting the pharmacist from author to editor—a role where plausible but incorrect hallucinations are significantly harder to detect than simple transcription errors.
- Evidence of bias: Validated simulation studies show that when technology offers incorrect advice, clinicians override their own correct decisions 6–11% of the time. This phenomenon is best illustrated by the documented Dilantin versus Diltiazem error, where a nurse trusted a digital dispensing cabinet screen over a correct manual record, demonstrating that screen authority often overrides clinical judgment.
- Regulatory stance: Regulators like CMS and The Joint Commission continue to view the human professional as the primary safety mechanism. This creates a potential liability trap where the pharmacist is legally responsible for AI errors they miss, despite the known psychological difficulty of maintaining vigilance in highly automated workflows.
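The autoverification logic and the targeted-friction exception described above can be sketched as a single routing rule. This is a hypothetical illustration: the field names, the narrow-therapeutic-index drug list, and the eGFR threshold are assumptions for the example, not a real EHR rule set.

```python
# Illustrative list of narrow-therapeutic-index drugs that should
# always receive an active pharmacist check (assumption, not policy).
NARROW_TI_DRUGS = {"warfarin", "phenytoin", "digoxin", "vancomycin"}

def route_order(order: dict) -> str:
    """Return 'auto_verify' or 'manual_review' for a medication order."""
    drug = order["drug"].lower()
    # Targeted friction: narrow-TI drugs bypass autoverification
    # entirely, forcing a deliberate human interaction.
    if drug in NARROW_TI_DRUGS:
        return "manual_review"
    # Hard criteria: non-formulary drugs or impaired renal function
    # (illustrative eGFR cutoff) fall out of the autoverified queue.
    if not order.get("on_formulary", False):
        return "manual_review"
    if order.get("egfr_ml_min", 0) < 60:
        return "manual_review"
    return "auto_verify"

print(route_order({"drug": "Cefazolin", "on_formulary": True, "egfr_ml_min": 95}))
# prints auto_verify
print(route_order({"drug": "Warfarin", "on_formulary": True, "egfr_ml_min": 95}))
# prints manual_review
```

Encoding the friction as an unconditional first check, rather than another scored criterion, mirrors the governance intent: for high-risk medications, efficiency never outranks active oversight.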