Successive Overrelaxation for Laplacian Support Vector Machine

Semisupervised learning (SSL), which makes use of both a large amount of cheap unlabeled data and a few labeled data for training, has attracted considerable attention in machine learning and data mining in recent years. Exploiting manifold regularization (MR), Belkin et al. proposed a new semisupervised classification algorithm, Laplacian support vector machines (LapSVMs), which has shown state-of-the-art performance in the SSL field. To further improve LapSVMs, we propose a fast Laplacian SVM (FLapSVM) solver for classification.
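To make the manifold regularization idea concrete, here is a minimal sketch of the graph Laplacian it relies on. The construction (k-NN graph with Gaussian weights) is a common choice, not necessarily the exact one used in the paper; all names and default values are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def graph_laplacian(X, k=5, sigma=1.0):
    """Build the graph Laplacian L = D - W used by manifold regularization.

    W is a symmetrized k-NN affinity matrix with Gaussian weights; the
    construction and defaults here are illustrative, not from the paper.
    """
    n = X.shape[0]
    dist = cdist(X, X)                       # pairwise Euclidean distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]  # k nearest neighbors, skipping self
        W[i, nbrs] = np.exp(-dist[i, nbrs] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                   # symmetrize the affinity matrix
    D = np.diag(W.sum(axis=1))               # degree matrix
    return D - W

# The MR term penalizes predictions f that vary across the graph:
#   f^T L f = (1/2) * sum_ij W_ij (f_i - f_j)^2,
# which is small when nearby points (labeled or unlabeled) get similar outputs.
```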

Compared with the standard LapSVM, our method has the following advantages: 1) FLapSVM does not need to handle an extra matrix or bear the computational burden of variable switching, which makes it more suitable for large-scale problems; 2) FLapSVM's dual problem has the same elegant formulation as that of standard SVMs, so the kernel trick can be applied directly to the optimization model; and 3) FLapSVM can be solved efficiently by the successive overrelaxation (SOR) technique, which converges linearly to a solution and can process very large data sets that need not reside in memory. In practice, by combining random scheduling of subproblems with two stopping conditions, the computing speed of FLapSVM is considerably faster than that of LapSVM, and it is a valid alternative to PLapSVM.
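As a rough illustration of point 3, the sketch below applies SOR to a generic box-constrained SVM-type dual, min_a 0.5 a^T Q a - e^T a subject to 0 <= a <= C, with random coordinate scheduling and a simple stopping rule. This is an assumption-laden stand-in for the paper's solver, not the authors' exact algorithm; all names and parameters are illustrative.

```python
import numpy as np

def sor_svm_dual(Q, C, omega=1.0, tol=1e-4, max_iter=1000, seed=None):
    """Projected SOR sweep for: min_a 0.5 * a^T Q a - e^T a, 0 <= a <= C.

    Assumes Q is symmetric with strictly positive diagonal (true for kernel
    matrices with k(x, x) > 0). omega in (0, 2) is the relaxation factor.
    Generic sketch only; not the paper's exact FLapSVM solver.
    """
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    a = np.zeros(n)
    for _ in range(max_iter):
        a_old = a.copy()
        for i in rng.permutation(n):         # random scheduling of subproblems
            grad_i = Q[i] @ a - 1.0          # i-th component of Q a - e
            step = a[i] - omega * grad_i / Q[i, i]
            a[i] = min(max(step, 0.0), C)    # project back onto the box [0, C]
        if np.linalg.norm(a - a_old) < tol:  # stopping condition on progress
            break
    return a
```

Note that each coordinate update touches only one row of Q, which is consistent with the abstract's claim that SOR can process data sets too large to reside in memory: rows can be streamed or recomputed on demand rather than held as a full n-by-n matrix.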