This brief presents a novel architecture for Support Vector Machines (SVMs), a machine learning algorithm for classification tasks. SVMs achieve high classification accuracy at the cost of high computational complexity. We propose a low-energy architecture based on approximate computing that exploits the inherent error resilience of the SVM computation. We present two design optimizations, fixed-width multiply-add units and a non-uniform look-up table (LUT) for the exponential function, to minimize power consumption and hardware complexity while retaining classification performance. The proposed non-uniform quantization scheme reduces the LUT size by 50% and reduces its power consumption by 35% at 10-bit quantization. The architecture is programmable and can evaluate three different kernels: linear, polynomial, and radial basis function (RBF). The proposed design consumes 31% less energy on average than a conventional design. We estimate that an SVM classification with the RBF kernel, 36 features, and 5000 support vectors can be performed in 382.2 nJ in 65 nm technology.
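
To make the idea behind the non-uniform LUT concrete, the following Python sketch approximates the exponential needed by the RBF kernel with a table that uses fine steps where exp(-x) changes fastest and coarse steps in its slowly varying tail. This is a minimal software model under assumed parameters: the breakpoint, entry counts, and the helper names (build_nonuniform_lut, lut_exp, rbf_decision) are illustrative, not the exact quantization scheme or hardware used in the brief.

```python
import numpy as np

def build_nonuniform_lut(x_max=8.0, n_fine=512, n_coarse=256, split=2.0):
    # Hypothetical non-uniform LUT for exp(-x) on [0, x_max):
    # fine spacing on [0, split) where the function varies quickly,
    # coarse spacing on [split, x_max) where it is nearly flat.
    fine = np.linspace(0.0, split, n_fine, endpoint=False)
    coarse = np.linspace(split, x_max, n_coarse, endpoint=False)
    grid = np.concatenate([fine, coarse])
    return grid, np.exp(-grid)

def lut_exp(x, grid, table):
    # Nearest-lower-entry lookup, mimicking a hardware address decoder
    # (no interpolation).
    idx = np.clip(np.searchsorted(grid, x, side="right") - 1, 0, len(table) - 1)
    return table[idx]

def rbf_decision(x, support_vectors, alphas, bias, gamma, grid, table):
    # SVM decision value: sum_i alpha_i * exp(-gamma * ||x - sv_i||^2) + bias,
    # with the exact exponential replaced by the approximate LUT.
    d2 = np.sum((support_vectors - x) ** 2, axis=1)
    return np.dot(alphas, lut_exp(gamma * d2, grid, table)) + bias

# Toy-sized check matching the abstract's problem size: 5000 support
# vectors with 36 features each (random data, purely for illustration).
rng = np.random.default_rng(0)
grid, table = build_nonuniform_lut()
sv = rng.standard_normal((5000, 36))
alphas = rng.standard_normal(5000)
x = rng.standard_normal(36)
print("approximate decision value:", rbf_decision(x, sv, alphas, 0.1, 0.05, grid, table))
```

A uniform table covering the same range at the fine step size would need roughly twice as many entries, which is the kind of saving the 50% LUT-size reduction in the brief refers to; the actual hardware scheme may allocate steps differently.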