Quick Take
- Twelve pharmacy regulators (US and Canada) favored non‑binding, principles‑based guidance over prescriptive regulation and identified seven core principles for AI: transparency; redundancy; audit/feedback; quality assurance; privacy/data security; alignment with professional ethics; and interoperability.
- Practical implication: inpatient pharmacy must self‑govern AI — embed the seven principles into local policies, maintain meaningful human‑in‑the‑loop (HiL) oversight, and run continuous audits and QA to prevent deskilling and safety lapses.
Why it Matters
- AI tools are rapidly entering hospital pharmacy workflows (decision support, documentation, inventory), often without formal oversight. 'Human‑in‑the‑loop' (HiL) systems can erode clinician skills and, when approvals become rote, effectively degrade into 'human‑out‑of‑the‑loop' (HoL) failures that threaten medication safety.
- Regulators reported they can regulate people, not machines — creating a governance gap that shifts legal accountability, data‑privacy exposure, vendor lock‑in risk, and the need for redundancy and continuous quality assurance onto hospitals and licensed pharmacists.
- Operational consequence: inpatient pharmacy must fund and run local stewardship, CDS validation, auditing, contingency planning, and workforce training under real resource constraints to preserve safety and professional accountability.
What They Did
- Purposive and snowball sampling identified 12 active pharmacy regulators across the US and Canada; each completed a 45‑minute Zoom interview after informed consent.
- Researchers used a semi‑structured interview guide; interviews were recorded, transcribed verbatim, anonymized, and securely stored.
- Two researchers independently coded transcripts in NVivo v15 using a constant‑comparative method; themes were iteratively refined until saturation, which two additional interviews confirmed.
- Exploratory qualitative design followed COREQ guidance and received institutional REB approval; the study captured regulator perspectives and is context‑specific rather than statistically generalizable.
What They Found
- All 12 regulators preferred non‑binding, principles‑based guidance over new regulation and consistently articulated seven core principles: transparency; redundancy; audit/feedback; quality assurance; privacy/data security; alignment with professional ethics; and interoperability.
- Participants were unanimous that regulatory authority applies to people, not machines — HiL AI draws oversight only when humans retain meaningful, non‑automatic override capability; HoL AI lies outside regulators' current remit.
- There was no consensus on requiring informed consent or an opt‑out for AI use: transparency to patients and staff was endorsed, but most participants judged formal consent operationally impractical.
- Sample detail: participants represented diverse settings (8 urban, 4 rural); 2/12 held pharmacy licensure and 6/12 identified as female.
- Practical implication: regulators’ guidance places responsibility for AI validation, continuous QA, audit trails, and meaningful human oversight squarely on pharmacy teams and licensed pharmacists.
Takeaways
- Establish pharmacy‑led AI governance that operationalizes the seven principles: make AI use visible to staff and patients (transparency); require meaningful pharmacist override or sign‑off for clinical decisions (HiL); define QA metrics and audit cadence; and vet privacy, data security, and interoperability before go‑live.
- Engineer redundancy and enforce real oversight in daily workflows: retain and drill manual downtime procedures for order verification/dispensing; define and test kill switches; capture complete AI activity logs; monitor accuracy, bias, and near‑misses via dashboards; and retrain staff to prevent rote approvals that convert HiL into HoL.
- Instrument continuous monitoring and feedback: require tools to support independent verification and exportable logs, run scheduled reviews of AI performance, and route findings into corrective actions, policy updates, and targeted education.
- Operational metaphor: treat AI like a calculator during a power flicker — use it for speed, but always 'show your work' and be prepared to complete the task manually if needed.
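The log-monitoring idea above can be made concrete with a small sketch. This is an illustration only, not a method from the study: the log schema (`pharmacist_id`, `review_seconds`, `overridden`) and the thresholds are hypothetical assumptions, and any real implementation would need local validation and clinical governance sign-off. The sketch flags a reviewer whose AI sign-offs look automatic (near-zero overrides plus very short review times), i.e., HiL drifting toward HoL.

```python
# Hypothetical audit sketch: detect rote approvals in AI activity logs.
# Schema and thresholds are illustrative assumptions, not from the study.
from dataclasses import dataclass
from statistics import mean

@dataclass
class LogEntry:
    pharmacist_id: str
    review_seconds: float  # time between AI suggestion and pharmacist sign-off
    overridden: bool       # pharmacist changed or rejected the AI recommendation

def flag_rote_approval(entries, min_review_seconds=5.0, min_override_rate=0.01):
    """Return True when sign-offs look automatic over the audit window:
    override rate below min_override_rate AND mean review time below
    min_review_seconds (both thresholds are local policy choices)."""
    override_rate = sum(e.overridden for e in entries) / len(entries)
    avg_review = mean(e.review_seconds for e in entries)
    return override_rate < min_override_rate and avg_review < min_review_seconds

# Usage: feed one pharmacist's entries from a scheduled audit window.
rote = [LogEntry("rph-01", 2.1, False) for _ in range(200)]
print(flag_rote_approval(rote))  # True: 0% overrides, ~2 s reviews
```

A flag like this would feed the corrective-action loop described above (targeted education, workflow redesign), not automated sanctions; the point is routing audit findings back into governance.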
Strengths and Limitations
Strengths:
- Rigorous qualitative methods: COREQ‑guided protocol with REB approval, recorded/anonymized transcripts, NVivo constant‑comparative coding, independent double‑coding, and two confirmatory interviews to establish saturation.
- Novel stakeholder focus: primary interviews with active US and Canadian pharmacy regulators provide direct insight into current regulatory reasoning and priorities.
Limitations:
- Sampling scope: purposive/snowball recruitment of 12 US/Canadian regulators limits representativeness and cross‑jurisdictional generalizability.
- Methodological limits: narrative qualitative design is subject to participant and researcher subjectivity; perspectives from vendors, employers, and empirical tool‑validation were not captured.
Bottom Line
Regulators are shifting to principles‑based guidance rather than new regulation, which effectively transfers AI governance to hospital pharmacy: implement governance, validate tools locally, monitor continuously, and pilot before clinical deployment to preserve safety and professional accountability.