TY - GEN
T1 - Low-energy architectures for Support Vector Machine computation
AU - Ayinala, Manohar
AU - Parhi, Keshab K.
PY - 2013/1/1
Y1 - 2013/1/1
N2 - This brief presents a novel architecture for Support Vector Machines (SVMs), a machine learning algorithm that performs classification tasks. SVMs achieve very good classification accuracy at the cost of high computational complexity. We propose a low-energy architecture based on approximate computing that exploits the inherent error resilience of SVM computation. We present two design optimizations, a fixed-width multiply-add unit and a non-uniform look-up table (LUT) for the exponent function, to minimize power consumption and hardware complexity while retaining classification performance. A novel non-uniform quantization scheme is proposed for implementing the exponent function, which reduces the size of the look-up table by 50%. The proposed non-uniform look-up table reduces power consumption by 35% with 10-bit quantization. The proposed architecture is programmable and can evaluate three different kernels: linear, polynomial, and radial basis function (RBF). The proposed design consumes 31% less energy on average than a conventional design. We estimate that SVM computation with the RBF kernel can be performed in 382.2 nJ for 36 features and 5000 support vectors in 65-nm technology.
UR - http://www.scopus.com/inward/record.url?scp=84901257447&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84901257447&partnerID=8YFLogxK
U2 - 10.1109/ACSSC.2013.6810693
DO - 10.1109/ACSSC.2013.6810693
M3 - Conference contribution
AN - SCOPUS:84901257447
SN - 9781479923908
T3 - Conference Record - Asilomar Conference on Signals, Systems and Computers
SP - 2167
EP - 2171
BT - Conference Record of the 47th Asilomar Conference on Signals, Systems and Computers
PB - IEEE Computer Society
T2 - 2013 47th Asilomar Conference on Signals, Systems and Computers
Y2 - 3 November 2013 through 6 November 2013
ER -