TY - JOUR
T1 - Applying Machine Learning Across Sites
T2 - External Validation of a Surgical Site Infection Detection Algorithm
AU - Zhu, Ying
AU - Simon, Gyorgy J.
AU - Wick, Elizabeth C.
AU - Abe-Jones, Yumiko
AU - Najafi, Nader
AU - Sheka, Adam
AU - Tourani, Roshan
AU - Skube, Steven J.
AU - Hu, Zhen
AU - Melton, Genevieve B.
N1 - Publisher Copyright:
© 2021 American College of Surgeons
PY - 2021/6
Y1 - 2021/6
N2 - Background: Surgical complications have tremendous consequences and costs. Complication detection is important for quality improvement, but traditional manual chart review is burdensome. Automated mechanisms are needed to make this more efficient. To understand the generalizability of a machine learning algorithm between sites, automated surgical site infection (SSI) detection algorithms developed at one center were tested at another distinct center. Study Design: NSQIP patients had electronic health record (EHR) data extracted at one center (University of Minnesota Medical Center, Site A) over a 4-year period for model development and internal validation, and at a second center (University of California San Francisco, Site B) over a subsequent 2-year period for external validation. Models for automated NSQIP SSI detection of superficial, organ space, and total SSI within 30 days postoperatively were validated using area under the curve (AUC) scores and corresponding 95% confidence intervals. Results: For the 8,883 patients (Site A) and 1,473 patients (Site B), AUC scores were not statistically different for any outcome, including superficial (external 0.804, internal [0.784, 0.874] AUC); organ/space (external 0.905, internal [0.867, 0.941] AUC); and total (external 0.855, internal [0.854, 0.908] AUC) SSI. False negative rates decreased with increasing case review volume and would be amenable to a strategy in which cases with low predicted probabilities of SSI could be excluded from chart review. Conclusions: Our findings demonstrated that SSI detection machine learning algorithms developed at one site were generalizable to another institution. SSI detection models are practically applicable to accelerate and focus chart review.
UR - http://www.scopus.com/inward/record.url?scp=85104406522&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85104406522&partnerID=8YFLogxK
U2 - 10.1016/j.jamcollsurg.2021.03.026
DO - 10.1016/j.jamcollsurg.2021.03.026
M3 - Article
C2 - 33831539
AN - SCOPUS:85104406522
SN - 1072-7515
VL - 232
SP - 963-971.e1
JO - Journal of the American College of Surgeons
JF - Journal of the American College of Surgeons
IS - 6
ER -