Quick Take
- About 73%–76% of US general acute hospitals reported live EHR‑integrated machine learning (ML) in 2023–2024; combined clinical+operational adoption rose by 10.7 percentage points, with large gains in billing (+19.9 points) and scheduling (+14.3 points) automation and continued use of inpatient risk prediction (sepsis, falls, readmission).
- Oversight is incomplete: among adopters, accuracy checks rose from 62.6% to 70.7%, bias assessments from 45.0% to 56.9%, and only 58.1% reported post‑implementation monitoring in 2024. Require documented provenance, subgroup performance data, and exportable monitoring logs before operational reliance.
Why it Matters
- Unfiltered flags and poor routing increase verification time and alert fatigue, diverting pharmacists from high‑harm medication work.
- Rapid operational automation can change staffing and throughput incentives faster than governance, affecting access, timing of doses, stewardship, and prior‑authorization workflows.
- Vendor dominance and uneven capabilities widen variation in care and complicate standardization and compliance.
What They Did
- Retrospective national analysis linking the American Hospital Association (AHA) Annual Survey (2022–2023) and the AHA IT Supplement (2023–2024); the sample covered 2,562 hospitals and 4,055 hospital‑year observations and assessed live, implemented EHR‑integrated ML across clinical and operational functions.
- Measured function mix (clinical risk, outpatient follow‑up, billing, scheduling), developer source, and self‑reported evaluation practices; pilots were excluded.
- Used inverse‑probability weighting for nonresponse and logistic/multinomial regressions to identify hospital characteristics associated with adoption (a minimal sketch of the weighting step appears below).
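A minimal sketch of the weighting idea, not the authors' code: model each hospital's response propensity, weight respondents by the inverse of that propensity, then fit a weighted logistic regression for adoption. All field names (beds, system_member, responded, adopted_ml) and coefficients are hypothetical stand‑ins for AHA survey variables.

```python
# Sketch: inverse-probability weighting for survey nonresponse, then a
# weighted logistic regression of ML adoption. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
hospitals = pd.DataFrame({
    "beds": rng.integers(25, 800, n),          # hypothetical size field
    "system_member": rng.integers(0, 2, n),    # hypothetical affiliation flag
})
# Simulated response and adoption, for illustration only
p_respond = 1.0 / (1.0 + np.exp(-(-0.5 + 0.002 * hospitals["beds"])))
hospitals["responded"] = (rng.random(n) < p_respond).astype(int)
hospitals["adopted_ml"] = (rng.random(n) < 0.6).astype(int)

# Step 1: model P(response) on observed hospital characteristics
X = sm.add_constant(hospitals[["beds", "system_member"]])
resp_fit = sm.Logit(hospitals["responded"], X).fit(disp=0)
hospitals["ipw"] = 1.0 / resp_fit.predict(X)   # weight = 1 / P(respond)

# Step 2: weighted logistic regression of adoption among respondents
resp = hospitals[hospitals["responded"] == 1]
Xr = sm.add_constant(resp[["beds", "system_member"]])
adopt_fit = sm.GLM(resp["adopted_ml"], Xr,
                   family=sm.families.Binomial(),
                   var_weights=resp["ipw"]).fit()
print(adopt_fit.params)   # log-odds of adoption, nonresponse-weighted
```

Upweighting hospitals that resemble nonrespondents is what lets the study report national penetration figures rather than respondent-only ones.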
What They Found
- Penetration: ~75% reported any ML; the share adopting both clinical and operational ML rose by 10.7 percentage points (2023→2024).
- Operational surge: billing automation rose 19.9 percentage points and scheduling 14.3 points; clinical risk prediction remained common and stable.
- Provenance opacity: 70%–80% of adopters reported models developed by their EHR vendor; the share “unsure” of the developer jumped from 1.0% to 16.8% (2023→2024).
- Evaluation gaps: accuracy checks rose to 70.7%, bias assessments to 56.9%, and 58.1% reported post‑deployment monitoring in 2024; sizable minorities still lack routine validation and monitoring (a subgroup‑audit sketch appears after this list).
- Adoption drivers: health‑system affiliation (+26.8 percentage points), a contract with the leading EHR vendor (+20.6), and large size, 400+ beds (+15.2); critical‑access and for‑profit status were each associated with roughly 8 points lower adoption.
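To make “bias assessment” concrete, here is a minimal local subgroup‑performance audit, assuming a hospital can export alert scores and outcomes. The column names (y_true, y_score, group) and the 0.05 gap tolerance are hypothetical, not from the study.

```python
# Sketch: per-subgroup AUROC audit over an exported alert/outcome log.
import pandas as pd
from sklearn.metrics import roc_auc_score

MAX_AUROC_GAP = 0.05   # locally chosen tolerance for cross-group disparity

def subgroup_auroc(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """AUROC per subgroup; skips groups lacking both outcome classes."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["y_true"].nunique() < 2:
            continue   # AUROC is undefined without both classes
        rows.append({"group": group, "n": len(sub),
                     "auroc": roc_auc_score(sub["y_true"], sub["y_score"])})
    return pd.DataFrame(rows)

log = pd.DataFrame({   # toy stand-in for a vendor-exported log
    "y_true":  [0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0],
    "y_score": [.2, .8, .3, .7, .6, .4, .1, .9, .4, .3, .6, .5],
    "group":   list("AAAAAA") + list("BBBBBB"),
})
audit = subgroup_auroc(log, "group")
print(audit)
gap = audit["auroc"].max() - audit["auroc"].min()
if gap > MAX_AUROC_GAP:
    print(f"FLAG: subgroup AUROC gap {gap:.2f} exceeds tolerance")
```

The point is not the specific metric: any audit of this shape requires the exportable logs and developer transparency that many adopters reported lacking.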
Takeaways
- Role: treat EHR ML as a triage co‑pilot that surfaces candidates for pharmacist review, not as an authoritative decision maker.
- Operational impact: expect more inpatient risk alerts and vendor‑driven changes to queues, scheduling, and prior‑auth workflows; plan pharmacist triage roles and data capture for outcomes.
- Vendor/IT checklist before deployment: documented developer identity and regulatory status; local validation including subgroup performance; bias‑assessment methods and a remediation plan; exportable alert/outcome logs; configurable thresholds, routing, and pause/tune controls; named monitoring owners and SLAs (see the monitoring sketch below).
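A minimal post‑deployment monitoring sketch matching the checklist items above: exportable logs, configurable thresholds, a named owner, and an explicit pause signal. The log schema (week, alerts_fired, true_events) and the precision floor are hypothetical, chosen locally rather than taken from the study.

```python
# Sketch: weekly alert-precision (PPV) check with a pause/tune signal.
import json
import pandas as pd

CONFIG = {               # locally owned, versioned monitoring config (assumption)
    "ppv_floor": 0.15,   # minimum acceptable alert precision
    "min_alerts": 50,    # skip weeks with too few alerts to judge
    "owner": "pharmacy-informatics",  # named monitoring owner per SLA
}

log = pd.DataFrame({     # toy stand-in for a vendor-exported alert/outcome log
    "week": ["2024-W01", "2024-W02", "2024-W03"],
    "alerts_fired": [120, 140, 90],
    "true_events": [22, 24, 9],
})
log["ppv"] = log["true_events"] / log["alerts_fired"]

breaches = log[(log["alerts_fired"] >= CONFIG["min_alerts"])
               & (log["ppv"] < CONFIG["ppv_floor"])]
report = {
    "owner": CONFIG["owner"],
    "weeks_breaching_floor": breaches["week"].tolist(),
    "action": "pause/tune" if not breaches.empty else "continue monitoring",
}
print(json.dumps(report, indent=2))  # exportable evidence for governance review
```

Holding this check outside the vendor's system keeps the pause/tune decision, and its audit trail, with the hospital rather than the developer.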
Strengths and Limitations
Strengths:
- National, multi‑year, organization‑level snapshot of live EHR‑integrated ML covering clinical and operational functions with weighting for survey nonresponse.
Limitations:
- Self‑reported survey data with no model‑level performance metrics, no vendor identities, and no clinical outcomes; observational design precludes causal claims.
Bottom Line
With EHR-integrated ML adoption reaching ~75% and operational automation (billing/scheduling) surging, pharmacists must treat these primarily vendor-developed tools as triage assistants rather than clinical authorities. Because significant gaps remain in local bias assessment and post-implementation monitoring, pharmacy leadership should restrict these models to "human-in-the-loop" workflows and demand local validation data before relying on them for antimicrobial stewardship or medication safety protocols.