Clinical decision support (CDS) tools have grown in number and complexity as the electronic health record (EHR) has grown in capability. CDS tools take many forms, including best practice alerts, customized documentation templates, order sets, and systems that warn of potential harm. The promise of these tools is to provide clinicians with appropriate, useful, and actionable information at the point of care. Their implementation follows a framework known as “The Five ‘Rights’ of CDS”: 1) the right information, 2) to the right people, 3) through the right channels, 4) in the right format, and 5) at the right points in the workflow (1). The framework encourages incorporating end-user feedback into the design of CDS tools in the EHR to avoid false positives and “alert fatigue” (2).
Several pragmatic randomized controlled trials have investigated EHR alerts across multiple disease states and settings. Selby et al. (3) found that EHR alerts, combined with a care bundle and an educational program, improved recognition of acute kidney injury (AKI), increased performance of urinalyses, and increased medication review in hospitalized adult patients. In the outpatient Pragmatic Trial of Messaging to Providers About Treatment of Heart Failure (PROMPT-HF), Ghazi et al. (4) demonstrated that EHR alerts linked to an order-set option increased prescription of guideline-directed medical therapy classes in patients with heart failure. Notably, the Electronic Alerts for Acute Kidney Injury Amelioration (ELAIA-1) study found that EHR alerts for AKI increased mortality within a subgroup of non-teaching hospitals (5), underscoring the need for randomized trials of CDS even when the intervention may seem to be “common sense.”
Ultimately, these studies indicate that CDS can be quite effective at changing process outcomes (e.g., medication orders) but has, to date, produced more mixed results for clinical outcomes. To avoid alert fatigue, wherein providers begin to ignore even helpful alerts because of alert proliferation, efforts must be made to minimize the number needed to nudge (NNN), that is, the number of alerts required to elicit a successful response per patient (Figure 1). Overall, CDS interventions seek to promote an established, best-practice process of care that is currently under-utilized. CDS interventions should be robustly evaluated in the context of randomization, where feasible, to show that they can affect the process and, preferably, downstream clinical outcomes. At all stages, end users should be involved.
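As a minimal illustration of the NNN concept (the figures here are hypothetical, not drawn from the cited trials, and assume the simple reading that NNN is the ratio of alerts fired to successful responses), the metric can be worked out analogously to the number needed to treat:

NNN = alerts fired / successful responses = 1 / response rate

For example, if an alert fires 400 times and prompts the intended clinician action 100 times, NNN = 400/100 = 4 alerts per successful response; better targeting of the alert lowers the NNN and thereby reduces the alert burden borne by each patient and provider.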
References
1. Osheroff J, et al. Improving Outcomes with Clinical Decision Support: An Implementer's Guide, Second Edition. CRC Press, 2012.
2. Ancker JS, et al.; with the HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17:36. doi: 10.1186/s12911-017-0430-8 [Erratum in: BMC Med Inform Decis Mak 2019; 19:227. doi: 10.1186/s12911-019-0971-0].
3. Selby NM, et al. An organizational-level program of intervention for AKI: A pragmatic stepped wedge cluster randomized trial. J Am Soc Nephrol 2019; 30:505–515. doi: 10.1681/ASN.2018090886
4. Ghazi L, et al. Electronic alerts to improve heart failure therapy in outpatient practice: A cluster randomized trial. J Am Coll Cardiol 2022; 79:2203–2213. doi: 10.1016/j.jacc.2022.03.338
5. Wilson FP, et al. Electronic health record alerts for acute kidney injury: Multicenter, randomized clinical trial. BMJ 2021; 372:m4786. doi: 10.1136/bmj.m4786