Exploiting additional information to improve traditional inductive learning is an active research area in machine learning. In many supervised-learning applications, the data can be naturally separated into several groups, or tasks, and incorporating this structure into learning may improve generalization. Many Multi-Task Learning (MTL) techniques for classification have recently been proposed in the machine learning literature. This paper focuses on the analysis and comparison of two recent SVM-based MTL techniques: regularized MTL (rMTL) and SVM+-based MTL (SVM+MTL). In particular, our analysis shows how these two methods can be implemented using standard SVM software. Further, we present extensive empirical comparisons between the two methods, relating the advantages and limitations of each method to the statistical characteristics of the training data.
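To illustrate the kind of reduction to standard SVM software discussed above, the sketch below (our own illustration, not code from the paper) uses the widely known multi-task kernel construction in the style of regularized MTL: task membership is encoded directly in the kernel, K((x, s), (x', t)) = (1/mu + 1[s = t]) k(x, x'), and the resulting precomputed Gram matrix is passed to an off-the-shelf SVM solver (scikit-learn's SVC). The coupling parameter mu, the helper name multitask_gram, and the toy data are assumptions made for illustration only.

```python
# Minimal sketch: regularized-MTL-style training with a standard SVM solver
# via a precomputed multi-task kernel. Names and parameters are illustrative
# assumptions, not the paper's notation.

import numpy as np
from sklearn.svm import SVC

def multitask_gram(X, tasks, X2=None, tasks2=None, mu=1.0):
    """Gram matrix of the multi-task kernel over a linear base kernel:
    K((x, s), (x', t)) = (1/mu + [s == t]) * <x, x'>."""
    if X2 is None:
        X2, tasks2 = X, tasks
    base = X @ X2.T                                    # base kernel k(x, x')
    same_task = (tasks[:, None] == tasks2[None, :])    # indicator [s == t]
    return (1.0 / mu + same_task) * base

# Toy data: two related tasks with slightly shifted class means.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(20, 2)) for m in (-1.0, 1.0, -0.5, 1.5)])
y = np.array([0] * 20 + [1] * 20 + [0] * 20 + [1] * 20)
tasks = np.array([0] * 40 + [1] * 40)

svm = SVC(kernel="precomputed", C=1.0)
svm.fit(multitask_gram(X, tasks, mu=1.0), y)

# For prediction, the Gram matrix is computed between test and training points.
pred = svm.predict(multitask_gram(X, tasks, X, tasks, mu=1.0))
print("training accuracy:", (pred == y).mean())
```

In this construction, a small mu forces the per-task decision functions to stay close to a shared model, while a large mu lets each task be fit almost independently; the standard SVM solver is used unchanged, with all task coupling expressed through the kernel.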