Inspired by the primate visual system, computational saliency models decompose the visual input into a set of feature maps across spatial scales. In the standard approach, the feature maps of pre-specified channels are summed to yield the final saliency map. We study the feature integration problem and propose two improved strategies: first, we learn a weighted linear combination of the features using a constrained linear regression algorithm; second, we propose an AdaBoost-based algorithm that unifies feature selection, thresholding, weight assignment, and nonlinear integration in a single principled framework. Extensive quantitative evaluations on four public datasets demonstrate improvements in the models' predictive power.
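As a rough illustration of the first strategy (not the authors' implementation; all names and the optimization setup are assumptions), a weighted linear combination of feature maps can be learned under a non-negativity constraint via projected gradient descent on a least-squares objective:

```python
import numpy as np

def learn_weights(feature_maps, fixation_map, iters=500):
    """Learn non-negative weights w so that sum_k w_k * F_k approximates
    a ground-truth fixation map (a toy stand-in for constrained
    linear regression over saliency feature channels)."""
    # Stack each feature map as a column: X has shape (pixels, channels).
    X = np.stack([f.ravel() for f in feature_maps], axis=1)
    y = fixation_map.ravel()
    # Projected gradient descent on ||Xw - y||^2 subject to w >= 0.
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1 / Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = np.maximum(w - step * grad, 0.0)  # project onto w >= 0
    return w

# Toy usage: two synthetic feature maps, ground truth built from known weights.
rng = np.random.default_rng(0)
F1, F2 = rng.random((8, 8)), rng.random((8, 8))
truth = 0.7 * F1 + 0.3 * F2
w = learn_weights([F1, F2], truth)
saliency = w[0] * F1 + w[1] * F2
```

In practice the design matrix would hold feature responses at fixated versus non-fixated locations across many training images; the toy example only shows that the constrained fit recovers the mixing weights of a known linear combination.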