Electronic medical records (EMRs) have revolutionized clinical practice. They facilitate documentation, improve communication and coordination of care among medical providers, provide a means of communication with patients, and offer a platform for storing large quantities of clinical data. These qualities, together with federal incentives for their use, have led to nearly universal adoption. Recently, EMR vendors have attempted to infuse artificial intelligence into their software through algorithms intended to reduce costs and potentially improve care. One example is the “practice advisory,” which appears as a pop-up whenever a patient chart is opened. These advisories are triggered by patient demographics, the use of specific diagnostic codes, the ordering of specific tests or medications, and abnormal laboratory values. The expectation is that early recognition of abnormalities will lead to timely interventions and improved patient outcomes. Detecting most clinically significant events, however, requires algorithms with high sensitivity, and this unfortunately comes at the expense of many false positives.
In accordance with Bayes’ theorem, the probability that a trigger represents reality (the degree of belief) depends on the prevalence of the disease, the so-called “base rate.” It is well known and repeatedly documented in the literature that even tests with extremely high sensitivity and specificity yield abundant false positives when the pretest probability is low. A common example is the measurement of cardiac troponin in the evaluation of chest pain. Troponin T is a highly sensitive and specific marker of damage to cardiac muscle. The widespread availability of this test and a high rate of malpractice litigation over missed myocardial infarction can entice emergency physicians to test routinely for this marker, even in patients without manifestations of acute coronary syndrome. This practice leads to many false-positive results and increases health care costs through hospitalization and unnecessary downstream testing in otherwise low-risk patients.1
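To illustrate the base-rate effect numerically, the short sketch below computes the positive predictive value of a hypothetical test at different prevalences; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not values reported in the article.

```python
# Illustrative sketch (hypothetical numbers): how a low base rate erodes
# the positive predictive value (PPV) of a test or alert, per Bayes' theorem.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive result), computed from Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# An excellent test: 99% sensitivity, 95% specificity.
for prevalence in (0.50, 0.10, 0.01):
    ppv = positive_predictive_value(0.99, 0.95, prevalence)
    print(f"prevalence {prevalence:>4.0%} -> PPV {ppv:.0%}")

# Prints roughly 95%, 69%, and 17%: at a 1% base rate,
# most positive results are false positives.
```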
Because the prevalence of most targeted clinical events is generally very low, a very large number of EMR triggers can be expected to be false positives. Acting on such notifications without taking the clinical scenario into account can increase health care costs (for example, ordering a serum lactate level when the odds of sepsis are very low) and may even lead to patient harm. Furthermore, mandating that nursing staff apprise clinicians of these alerts through unnecessary pages can take clinicians away from their duties and adversely affect the quality and efficiency of patient care.2
Clinical practice remains a sophisticated discipline in which decision-making requires integrating each patient’s individualized history and physical examination, course of illness, and comorbid conditions with scientific data and evidence-based recommendations. Many medical disciplines require decades of education and clinical apprenticeship to acquire the acumen needed to make effective decisions. Replacing the art of medicine with computerized, algorithm-based decision-assisting tools often leads to oversimplification and can be detrimental to patient care. Physicians should always be allowed to override such practice alerts based on their judgment and the clinical context.
There remains a paucity of medical literature on the implications of practice alerts for patient outcomes. A PubMed search using the keywords “best practice alerts” and limited to the “clinical trial” article type retrieves just 16 publications. Most of these studies are small and designed to improve adherence to guidelines. To date, there is no evidence that any of these alerts improve clinically relevant patient outcomes. Even more troubling, early data suggest that these interventions may actually increase costs through subsequent unnecessary testing. Well-conducted studies with rigorous methodology are needed to assess the impact of practice advisories on clinical outcomes. Clinicians should always be reminded that algorithms are best used as decision-supporting rather than decision-making tools.
To read this article in its entirety, please visit our website.
-Haris Riaz, MD, Richard A. Krasuski, MD
This article originally appeared in the March 2017 issue of The American Journal of Medicine.