Predictive models are a key component of forest research, and although much literature has been devoted to exploring appropriate functional forms for a variety of modeling scenarios, far less addresses the best statistical methods for selecting among multiple candidate models. In this study, we compare model-averaged predictors developed from a common set of models, using two approaches for calculating Bayesian posterior model probabilities (exact analytical calculation using reference priors and estimation based on reversible jump Markov chain Monte Carlo), the deviance information criterion (DIC), and the small-sample-corrected form of Akaike’s information criterion (AICc). Our example involves linear variable selection to develop models for predicting outside-bark volume of loblolly pine (Pinus taeda), and the comparison is carried out via cross-validation. We found that analytical calculation of the posterior probabilities, DIC, and AICc resulted in model sets with comparable predictive performance, whereas reversible jump was less consistent and on average provided less accurate results. In the latter case, poor mixing of the reversible jump algorithm may have contributed to biased estimates of posterior probabilities. In general, our results show that the choice of model selection criteria may lead to divergent results in the choice and weighting of candidate models, although in our case study these discrepancies had only small effects on predictive performance. In other analytical scenarios, however, these differences may be more profound. Regardless of how model selection is carried out, predictive models should be carefully assessed, preferably through rigorous evaluation of predictive performance.
- Akaike’s information criterion
- Bayesian model selection
- Deviance information criterion
- Variable selection
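The AICc-based model averaging compared above can be sketched briefly. The following is a minimal illustration, not the study's actual analysis: the three candidate predictor subsets, the simulated data, and the variable names are all hypothetical stand-ins for the loblolly pine volume models. Each candidate linear model is scored with AICc, the scores are converted to Akaike weights, and predictions are averaged across models by those weights.

```python
import numpy as np

def aicc(rss, n, k):
    """Small-sample-corrected AIC for a Gaussian linear model.

    rss: residual sum of squares; n: sample size;
    k: number of estimated parameters (coefficients plus error variance).
    """
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def akaike_weights(scores):
    """Convert AICc scores into normalized model weights."""
    delta = np.asarray(scores) - np.min(scores)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical data: n = 50 "trees" with three illustrative predictors.
rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 3))
y = 2.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)

# Candidate models = subsets of predictors (linear variable selection).
subsets = [[0], [0, 1], [0, 1, 2]]
scores, preds = [], []
for cols in subsets:
    A = np.column_stack([np.ones(n), X[:, cols]])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(((y - A @ beta) ** 2).sum())
    scores.append(aicc(rss, n, A.shape[1] + 1))     # +1 for the error variance
    preds.append(A @ beta)

w = akaike_weights(scores)
y_avg = sum(wi * p for wi, p in zip(w, preds))      # model-averaged predictor
```

In a cross-validation comparison like the one described in the abstract, the weights would be recomputed on each training fold and `y_avg` evaluated on the held-out fold; a Bayesian analogue simply replaces the Akaike weights with posterior model probabilities.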