Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, training data can be naturally separated into several groups, and incorporating this group information into learning may improve generalization. Recently, Vapnik proposed a general approach to formalizing such problems, known as Learning With Structured Data (LWSD), along with its SVM-based optimization formulation called SVM+. Liang and Cherkassky [5,6] provided empirical validation of SVM+ for classification and established its connections to Multi-Task Learning (MTL) approaches in machine learning. This paper builds upon this recent work [5,6,9] and describes a new methodology for regression problems that combines Vapnik's SVM+ regression with the MTL classification setting. We also present empirical comparisons between standard SVM regression, SVM+, and the proposed SVM+MTL regression method. Practical implementation of new learning technologies such as SVM+ is often hindered by their complexity, i.e., a large number of tuning parameters (relative to standard inductive SVM regression). To this end, we provide a practical model-selection scheme that combines analytic selection of parameters for SVM regression with resampling-based selection of the model parameters specific to SVM+ and SVM+MTL.
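As a concrete illustration of what "analytic selection of parameters for SVM regression" can look like, the sketch below implements one widely used prescription (due to Cherkassky and Ma): the regularization parameter C is set from the range of response values, and the insensitivity zone ε is set from an estimate of the noise level and the sample size. The specific formulas and the function name `analytic_svr_parameters` are assumptions for illustration, not necessarily the exact scheme used in this paper.

```python
import math
import statistics

def analytic_svr_parameters(y, noise_std):
    """Analytic (data-driven) choice of SVR hyperparameters C and epsilon.

    Assumed Cherkassky-Ma prescription:
      C   = max(|y_mean + 3*y_std|, |y_mean - 3*y_std|)
      eps = 3 * noise_std * sqrt(ln(n) / n)
    where noise_std is an estimate of the additive noise standard deviation.
    """
    n = len(y)
    y_mean = statistics.mean(y)
    y_std = statistics.pstdev(y)
    # C reflects the range of response values, making it robust to outliers
    # relative to simply taking max(y) - min(y).
    C = max(abs(y_mean + 3 * y_std), abs(y_mean - 3 * y_std))
    # epsilon grows with the noise level and shrinks as more samples arrive.
    eps = 3 * noise_std * math.sqrt(math.log(n) / n)
    return C, eps

# Example: ten response values with an assumed noise-std estimate of 0.5.
C, eps = analytic_svr_parameters(list(range(10)), noise_std=0.5)
print(C, eps)
```

Only the SVM+/SVM+MTL-specific parameters then remain for the resampling-based search, which is what keeps the overall model-selection cost manageable.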