Reviewed by Vanessa Stevens, PhD, University of Utah and Rebekah Moehring, MD, MPH, FSHEA, Duke University
As the complexity of medical decision-making increases, the development of technological tools to alleviate the burden on medical professionals has accelerated. Despite great enthusiasm for advanced diagnostics, machine learning, and artificial intelligence, significant challenges remain in moving innovations from development to implementation.
In a proof-of-concept study among 224 hospitalized patients with either bacteremia due to Escherichia coli or suspected infections, a case-based reasoning (CBR) artificial intelligence algorithm was applied to recommend antimicrobial treatment decisions through a clinical decision support system (CDSS). The CDSS auto-populated some patient information extracted from the electronic medical record, but several fields required manual data entry from the prescriber. The appropriateness of prescribing decisions by six infectious diseases specialists was compared with recommendations from the algorithm, judged against a combination of final susceptibility results and the likely pathogen and its susceptibilities based on local antibiograms. Spectrum of antimicrobial activity was also measured using a spectrum score and the World Health Organization’s AWaRe classification system. The AWaRe system classifies antibiotics into those that should be preferentially used when appropriate due to narrow spectrum and low risk of resistance (access), those that should be used judiciously due to higher risk of resistance (watch), and those that should not be used except in the treatment of multidrug-resistant organisms (reserve). Most antibiotic choices for included patients were deemed appropriate and were similar between strategies (90% of CBR recommendations and 83% of prescriptions by treating physicians). Compared with decisions made by physicians, CBR recommendations had lower spectrum scores and were more likely to fall into the access category than into the watch or reserve classifications. While initial appropriateness results and spectrum scores are promising, the feasibility of scaling up CBR for wider use remains unknown due to limited generalizability and the lack of acceptability testing results.
A second study, from a large pediatric emergency department (ED) in Colorado, provides another example where implementation of high-tech tools sometimes ends in unexpected results. Rao et al. conducted a randomized controlled trial of 908 children with acute respiratory symptoms in the pre-COVID era. All participants provided nasopharyngeal swabs for the BioFire RP2 panel, a rapid molecular test with a 45-minute turnaround time that includes 22 viral and bacterial targets. Subjects were randomized so that their providers and families either received or did not receive test results. The primary outcome was antibiotic prescription; the investigators hypothesized that test results would lead to avoidance of unnecessary antibiotics for viral infections. Unfortunately, the study did not show a reduction in antibiotic prescribing. In fact, the intention-to-treat analysis showed that intervention children were more likely to receive antibiotics (relative risk [RR], 1.3; 95% CI, 1.0-1.7) and a diagnosis of an infection that would require antibiotics. No significant differences in antiviral prescribing, medical visits, or hospitalizations were seen. Specifically, prescribing was higher in children testing negative for a respiratory pathogen or testing positive for a bacterial pathogen. So, despite a conclusion that the reason for higher prescribing was “unknown,” the data suggest that the extra prescriptions may have been due to “surprise” findings of bacterial pathogens that may or may not have been clinically significant, or due to negative viral testing being interpreted as a pertinent negative for viral infection and an assumption that an unidentified bacterial pathogen must be the cause. Overall, those who promote the idea that high-tech “testing” can reduce our problems with clinical uncertainty should think twice. We’ve already learned this lesson with C. difficile PCRs.
Extended respiratory panels will also require careful clinical evaluations and diagnostic stewardship — or unintended consequences could result.
Advancing technology has the potential to revolutionize the way we diagnose and treat infectious diseases and to reduce the cognitive burden on decision-makers. However, there is a substantial gap between the development of promising tools and their implementation. Acceptability, workflows, and proper interpretation of results are barriers to adoption of new technologies. The rapid assessment of multiple viral and bacterial targets from a single clinical sample clearly represents a significant advance in diagnostic capability. Yet such panels present additional challenges in interpreting surprise findings, including identification of multiple organisms. Similarly, machine learning and other computer-based decision aids represent steps forward in the collation and analysis of data from prior experience. Unfortunately, many of these algorithms suffer from a lack of transparency and interpretability connecting inputs to clinical experience, limiting trust and therefore uptake. For new technologies to truly impact patient care, we must explore the full range of downstream effects, both positive and negative, of implementation.