Support Vector Machines (SVMs) have become one of the most popular machine learning tools for classification, pattern recognition, and object detection. The growing demand for fast SVM processing has motivated hardware implementations of the algorithm. Typically, the training phase is performed in software, and the parameters it produces are then used to implement the SVM on hardware. Hence, training time and hardware overhead are two key metrics to consider when improving SVM. In this paper, we propose a novel SVM model for binary classification called Highly Parallel SVM (HPSVM), which reduces both training time and hardware overhead while maintaining good classification accuracy. The idea behind HPSVM is to apply the newly proposed Concurrent Gaussian Selection to pick significant training data and then learn an ensemble of linear classifiers that approximates the complex nonlinear classifier. In this way, training time and hardware cost are greatly reduced. Experimental results show that, compared to a previously proposed parallel SVM, the Ensemble of Exemplar-SVMs, HPSVM achieves a 3x reduction in training time and about a 6x reduction in hardware cost while slightly improving classification accuracy.
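To make the overall scheme concrete, the following is a minimal sketch of the two-stage idea the abstract describes: select a subset of significant training points, then train an ensemble of linear classifiers that votes to approximate a nonlinear decision. The selection rule shown here (scoring each point by its Gaussian-kernel affinity to the opposite class) is a hypothetical stand-in for the paper's Concurrent Gaussian Selection, whose details are not given in the abstract; the least-squares linear classifier and bootstrap resampling are likewise illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: two Gaussian blobs (a stand-in for real training data).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

def gaussian_selection(X, y, k, sigma=1.0):
    """Hypothetical stand-in for Concurrent Gaussian Selection: score each
    point by its summed Gaussian-kernel affinity to the opposite class
    (points near the class boundary score high) and keep the top k."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    affinity = np.exp(-d2 / (2 * sigma ** 2))
    opposite = y[:, None] != y[None, :]
    scores = (affinity * opposite).sum(axis=1)
    idx = np.argsort(scores)[-k:]
    return X[idx], y[idx]

def train_linear(X, y):
    """One ensemble member: a least-squares linear classifier with bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def ensemble_predict(ws, X):
    """Majority vote over the ensemble of linear classifiers."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    votes = np.sign(Xb @ np.array(ws).T)
    return np.sign(votes.sum(axis=1))

# Stage 1: pick significant training data.
Xs, ys = gaussian_selection(X, y, k=80)

# Stage 2: train an ensemble of linear classifiers on bootstrap resamples.
ws = []
for _ in range(5):
    idx = rng.integers(0, len(Xs), len(Xs))
    ws.append(train_linear(Xs[idx], ys[idx]))

acc = (ensemble_predict(ws, X) == y).mean()
print(f"ensemble accuracy: {acc:.2f}")
```

Each linear member needs only a dot product and a sign, which is why an ensemble of such classifiers maps far more cheaply to hardware than a kernel SVM evaluating Gaussian functions over many support vectors.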