Purpose: A key approach to improving safety performance in radiation oncology is the use of prospective safety measures. Here we quantify the statistical patterns of one such tool, failure mode and effects analysis (FMEA), and compare it to process deviation events reported through an in-house reporting system. Materials and Methods: FMEA provides a methodology for prospectively identifying potential failure modes and ranking them by importance. Our analysis consisted of individual blinded scoring of each failure mode by approximately 10 staff members. The results were compared with a group consensus on a subset of failure modes and with data from our departmental deviation reporting system acquired over a three-month period immediately after completion of the FMEA. Results: 159 distinct failure modes were identified. FMEA scoring varied widely between individuals, with standard deviations of 76% and 34% for risk priority number (RPN) and severity scores, respectively. Severity scores were correlated with RPN scores (Pearson r=0.50). Severity and RPN scores from therapists were significantly lower than those from physicists and treatment planners (p<0.001). Mean severity scores for individuals differed significantly from the group consensus scores. Despite the large number of failure modes identified in the prospective analysis, only 10 of the 24 (42%) reported deviation events were identified in the FMEA. Conclusions: FMEA is a widely used tool for prospective safety improvement but is subject to substantial variability in scoring and may not uncover some errors actually occurring in the clinic. Severity-only scoring may provide a simplified alternative to FMEA, and the addition of an error reporting system may prove valuable.
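The RPN ranking mentioned above follows the conventional FMEA convention, in which each rater scores a failure mode on severity, occurrence, and detectability and the risk priority number is their product. The sketch below illustrates that scheme and how inter-rater spread can be summarized as a relative standard deviation; the specific rater scores are hypothetical, not data from the study.

```python
# Illustrative sketch of conventional FMEA scoring (not necessarily the
# authors' exact implementation): each rater assigns severity (S),
# occurrence (O), and detectability (D), and RPN = S * O * D.
from statistics import mean, stdev

def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number for one rater's scores of one failure mode."""
    return severity * occurrence * detectability

# Hypothetical (S, O, D) scores from four raters for a single failure mode.
ratings = [(8, 3, 5), (6, 4, 7), (9, 2, 4), (7, 3, 6)]
rpns = [rpn(s, o, d) for s, o, d in ratings]

# Inter-rater variability expressed as a relative standard deviation,
# analogous to the percentage spreads reported for RPN and severity.
rel_sd = stdev(rpns) / mean(rpns) * 100
print(f"RPNs: {rpns}, relative SD: {rel_sd:.0f}%")
```

Because RPN is a product of three scores, disagreement on any one factor multiplies through, which is one reason RPN spreads can exceed severity-only spreads.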