Near-threshold voltage computing can significantly improve energy efficiency; however, both operating speed and resilience to parametric variation degrade as the operating voltage approaches the threshold voltage. To prevent a loss of throughput, more cores must be devoted to computation. The corresponding expansion in chip area, however, is likely to further exacerbate the already heightened vulnerability to variation. Variation results in slower cores and large speed differences among cores. Worse, variation-induced slowdown can prevent operation at the designated clock speed, leading to timing errors. In this article, the authors exploit the intrinsic error tolerance of emerging Recognition, Mining, and Synthesis (RMS) applications to mitigate variation. RMS applications can tolerate errors emanating from data-intensive program phases, but not from control phases. Accordingly, the authors reserve reliable cores for control and execute error-tolerant, data-intensive phases on error-prone cores. They also provide a design-space exploration of decoupled control and data processing to mitigate variation at near-threshold voltages.
- Approximate computing
- Decoupled control-data execution
- Error tolerance
- Near-threshold voltage computing
- Process variation
- Timing errors
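The core of the decoupled control-data scheme can be illustrated with a minimal scheduling sketch. The following Python snippet is an illustrative assumption, not the authors' implementation: core names, pool sizes, the error rate, and the perturbation model are all hypothetical. It only shows the mapping rule the abstract describes, namely that control phases run exclusively on reliable (nominal-voltage) cores while data-intensive phases run on error-prone (near-threshold) cores, where occasional timing errors merely perturb the output of an error-tolerant RMS kernel.

```python
import random

# Hypothetical core pools (names and error rate are illustrative assumptions).
RELIABLE_CORES = ["R0", "R1"]            # nominal voltage: no timing errors
ERROR_PRONE_CORES = ["N0", "N1", "N2"]   # near-threshold: occasional timing errors

def assign_core(phase_kind):
    """Decoupled mapping: control phases go only to reliable cores;
    data-intensive phases go to the error-prone near-threshold pool."""
    if phase_kind == "control":
        return random.choice(RELIABLE_CORES)
    return random.choice(ERROR_PRONE_CORES)

def run_phase(phase_kind, compute, inputs, error_rate=0.01):
    """Run one program phase and model variation-induced timing errors.

    Errors are only ever injected on error-prone cores, so control phases
    always produce exact results, while data phases may see a small
    perturbation that an error-tolerant RMS kernel can absorb.
    """
    core = assign_core(phase_kind)
    result = compute(inputs)
    if core in ERROR_PRONE_CORES and random.random() < error_rate:
        result *= 1.001  # small output perturbation, tolerated by the kernel
    return core, result
```

For example, `run_phase("control", sum, [1, 2, 3])` always returns an exact result from a reliable core, whereas `run_phase("data", sum, [1, 2, 3])` may occasionally return a slightly perturbed value from a near-threshold core.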