TY - JOUR
T1 - Large margin semi-supervised learning
AU - Wang, Junhui
AU - Shen, Xiaotong
PY - 2007/8
Y1 - 2007/8
N2 - In classification, semi-supervised learning occurs when a large amount of unlabeled data is available with only a small number of labeled data. In such a situation, how to enhance predictability of classification through unlabeled data is the focus. In this article, we introduce a novel large margin semi-supervised learning methodology, using grouping information from unlabeled data, together with the concept of margins, in a form of regularization controlling the interplay between labeled and unlabeled data. Based on this methodology, we develop two specific machines involving support vector machines and ψ-learning, denoted as SSVM and SPSI, through difference convex programming. In addition, we estimate the generalization error using both labeled and unlabeled data, for tuning regularizers. Finally, our theoretical and numerical analyses indicate that the proposed methodology achieves the desired objective of delivering high performance in generalization, particularly against some strong performers.
AB - In classification, semi-supervised learning occurs when a large amount of unlabeled data is available with only a small number of labeled data. In such a situation, how to enhance predictability of classification through unlabeled data is the focus. In this article, we introduce a novel large margin semi-supervised learning methodology, using grouping information from unlabeled data, together with the concept of margins, in a form of regularization controlling the interplay between labeled and unlabeled data. Based on this methodology, we develop two specific machines involving support vector machines and ψ-learning, denoted as SSVM and SPSI, through difference convex programming. In addition, we estimate the generalization error using both labeled and unlabeled data, for tuning regularizers. Finally, our theoretical and numerical analyses indicate that the proposed methodology achieves the desired objective of delivering high performance in generalization, particularly against some strong performers.
KW - Generalization
KW - Grouping
KW - Sequential quadratic programming
KW - Support vectors
UR - http://www.scopus.com/inward/record.url?scp=35548968410&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=35548968410&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:35548968410
SN - 1532-4435
VL - 8
SP - 1867
EP - 1891
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -