Quick Take

  • Across 345,371 QTc‑prolonging medication starts, an electronic health record (EHR)–based XGBoost (XGB) model predicted 1‑year QTc ≥500 ms with AUROC 0.859, outperforming optimized RISQ‑PATH (0.701) and Tisdale (0.770).
  • At matched low alert rates (~2–3% predicted positives), the XGB model raised positive predictive value (PPV) to 61.5% (vs 28.3% RISQ‑PATH) and 54.0% (vs 28.3% Tisdale), enabling more focused pharmacist review and monitoring while reducing alert noise.

Why it Matters

  • Drug‑induced QTc prolongation can precipitate torsades de pointes (TDP), yet withholding QTc‑prolonging therapies may deny effective treatment for cancer, infection, psychiatric disease, or arrhythmia — clinicians need patient‑specific risk at the time of prescribing.
  • Established scores fit poorly into prescribing workflows: Tisdale was derived in a cardiac care unit and typically requires a recent baseline ECG, while RISQ‑PATH is anchored to the outcome ECG rather than to the prescribing decision, limiting generalizability across inpatient, emergency department (ED), and outpatient starts.
  • A prescribing‑time, EHR‑driven risk estimate can triage high volumes of QTc‑prolonging medication starts, target ECG monitoring and stewardship, and concentrate scarce clinical resources on the highest‑risk starts.

What They Did

  • Retrospective study at Geisinger using Epic medication records linked to MUSE ECGs: 345,371 QTc‑prolonging medication starts (2010–2021) with an on‑drug 12‑lead ECG within one year; 182,448 of these starts also had a baseline ECG voltage trace available.
  • Developed three models: an XGBoost model using structured EHR features (demographics, vitals, labs, medication history, structured ECG report flags), a deep neural network (DNN) using 10‑second, 12‑lead ECG voltage traces, and a composite XGB+DNN model that incorporated the DNN risk score.
  • Validated with 5‑fold cross‑validation and compared model performance (AUROC, AUPRC) and operating‑point metrics against Tisdale and an optimized RISQ‑PATH implementation across inpatient, ED, and outpatient subgroups; QTc‑prolonging drugs were defined by the CredibleMeds list, and missing EHR values were median‑imputed (a cross‑validation sketch follows this list).
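
A minimal sketch of this validation setup, assuming a pandas DataFrame with one row per medication start, structured EHR feature columns, and a binary 1‑year QTc ≥500 ms label (the column names, hyperparameters, and helper function are illustrative assumptions, not the study's published pipeline):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier


def cross_validate_xgb(df: pd.DataFrame, feature_cols: list[str],
                       label_col: str = "qtc_ge_500_within_1y") -> tuple[float, float]:
    """5-fold cross-validated AUROC/AUPRC for an XGBoost model on structured EHR features.

    Median imputation is fit on each training fold only, mirroring the study's
    stated handling of missing EHR values.
    """
    X = df[feature_cols].to_numpy(dtype=float)
    y = df[label_col].to_numpy()
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    aurocs, auprcs = [], []
    for train_idx, test_idx in skf.split(X, y):
        imputer = SimpleImputer(strategy="median").fit(X[train_idx])
        model = XGBClassifier(n_estimators=500, max_depth=6, learning_rate=0.05,
                              eval_metric="logloss")  # illustrative hyperparameters
        model.fit(imputer.transform(X[train_idx]), y[train_idx])
        risk = model.predict_proba(imputer.transform(X[test_idx]))[:, 1]
        aurocs.append(roc_auc_score(y[test_idx], risk))
        auprcs.append(average_precision_score(y[test_idx], risk))
    return float(np.mean(aurocs)), float(np.mean(auprcs))
```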

What They Found

  • XGB (EHR) performance in the overall cohort (n=345,371; 5.7% reached QTc ≥500 ms): AUROC 0.859, AUPRC 0.394. In the ECG‑trace subset (n=182,448; 7.7% events), the XGB AUROC was 0.869 and the DNN AUROC 0.864; the combined XGB+DNN model reached 0.874, not a clinically meaningful gain.
  • At matched low alert rates, the XGB model materially improved predictive value: when matching ~2% predicted positives vs RISQ‑PATH, PPV 61.5% (XGB) vs 28.3% (RISQ‑PATH) with sensitivity ~21.4% vs 9.8%; when matching ~3% predicted positives vs Tisdale, PPV 54.0% (XGB) vs 28.3% (Tisdale) with sensitivity 28.9% vs 9.8%.
  • Model performance persisted in outpatient and ED settings but was lower for hospital admissions (AUROC 0.811; admission event rate 12.9%). Per‑drug AUROCs ranged roughly from 0.764 (dofetilide) to 0.865 (citalopram).
  • Operational implication for pharmacy: at a conservative ~2% alert rate, flagged starts would carry >60% risk (PPV), letting pharmacists prioritize initiation‑time verification, expedited on‑drug ECG scheduling, and focused stewardship; the discrimination gain came from rich structured EHR features, including prior structured ECG findings, rather than from raw voltage traces (a worked example of these operating‑point metrics follows this list).
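
A worked sketch of how these matched‑alert‑rate figures fit together, assuming only an array of observed outcomes and model risk scores; the function and the 100,000‑start arithmetic are illustrative, not study data:

```python
import numpy as np


def metrics_at_alert_rate(y_true: np.ndarray, risk: np.ndarray,
                          alert_rate: float = 0.02) -> tuple[float, float]:
    """PPV and sensitivity when alerting on the top `alert_rate` fraction of risk scores."""
    threshold = np.quantile(risk, 1.0 - alert_rate)      # flag roughly the top 2%
    flagged = risk >= threshold
    ppv = y_true[flagged].mean()                         # P(event | flagged)
    sensitivity = y_true[flagged].sum() / y_true.sum()   # flagged events / all events
    return float(ppv), float(sensitivity)


# Illustrative arithmetic (not study data): with a 5.7% event rate among 100,000 starts
# (5,700 events), a 2% alert rate flags 2,000 starts; a PPV of 61.5% implies ~1,230 true
# events among them, i.e. sensitivity of roughly 1,230 / 5,700 ≈ 22%, consistent with the
# reported ~21.4% at an alert rate of about 2%.
```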

Takeaways

  • Embed the XGB model in Epic at order entry for CredibleMeds QTc‑prolonging medications; route flagged starts to a pharmacist workqueue and attach a quick order for on‑drug ECG scheduling and initiation‑time verification.
  • Begin with a low alert rate (~2–3% predicted positives) to concentrate reviews on the highest‑risk starts, then tune thresholds by care setting (outpatient/ED vs inpatient) according to pharmacist capacity and local priorities.
  • Technical requirements: map Epic inputs (demographics, vitals, labs, current QTc‑drug count, prior structured ECG flags, maximum prior QTc) and, if available, MUSE ECG measures; surface the key contributing features in the review workflow and track alert volume and observed PPV by drug and care setting on a dashboard (a threshold‑tuning and monitoring sketch follows this list).
  • Operational approach: treat the model as a triage layer — it queues candidates for additional screening while preserving pharmacist judgment, under governance that includes override rules and periodic calibration checks.
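
A minimal sketch of per‑setting threshold tuning and the drug/setting monitoring summary described above, assuming a scored table of medication starts (all column names, setting labels, and target alert rates are assumptions, not study specifications):

```python
import pandas as pd


def threshold_for_alert_rate(scores: pd.Series, target_rate: float) -> float:
    """Score cutoff that flags roughly the top `target_rate` fraction of starts."""
    return float(scores.quantile(1.0 - target_rate))


def monitoring_summary(starts: pd.DataFrame, thresholds: dict[str, float]) -> pd.DataFrame:
    """Alert volume and observed PPV by care setting and drug.

    Expects columns `risk`, `setting`, `drug`, and `event` (1 if the on-drug QTc
    reached >=500 ms); per-setting thresholds allow stricter alerting where
    pharmacist capacity is tighter.
    """
    flagged = starts[starts["risk"] >= starts["setting"].map(thresholds)]
    return (flagged.groupby(["setting", "drug"])
                   .agg(alerts=("event", "size"), observed_ppv=("event", "mean"))
                   .reset_index())


# Example tuning (illustrative): ~2% alert rate for outpatient/ED starts, ~3% for admissions.
# thresholds = {"outpatient": threshold_for_alert_rate(outpt_scores, 0.02),
#               "ED": threshold_for_alert_rate(ed_scores, 0.02),
#               "inpatient": threshold_for_alert_rate(inpt_scores, 0.03)}
```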

Strengths and Limitations

Strengths:

  • Large prescribing‑time cohort (345,371 medication starts) linking Epic medication records with MUSE ECG data and using 5‑fold cross‑validation to report AUROC/AUPRC and threshold metrics.
  • Rigorous comparator implementation with chart‑review validation of Tisdale and RISQ‑PATH, plus subgroup analyses by care setting, baseline QTc group, individual drugs, and a 7‑day outcome window.

Limitations:

  • Single integrated health system (mostly rural, predominantly White) and retrospective design — external, multi‑site prospective validation is required before broad deployment.
  • Surrogate endpoint (QTc ≥500 ms) and requirement for an on‑drug follow‑up ECG within one year may introduce selection bias and miss true torsades de pointes (TDP) or out‑of‑system events; EHR data gaps and median imputation for missing values could affect calibration and generalizability.

Bottom Line

Ready for targeted pilot deployment: the EHR XGBoost model can prioritize pharmacist initiation‑time reviews and on‑drug ECG monitoring to substantially improve PPV and reduce alert burden, but external and prospective validation are needed before wide implementation.