This work presents a framework for dynamic energy reduction in hardware accelerators for convolutional neural networks (CNNs). The key idea is to predict early which features are likely to be important, deactivate the computations related to unimportant features, and combine this with static bitwidth reduction. The former is applied in the late layers of the CNN, while the latter is more effective in the early layers. The procedure includes a methodology for automated threshold tuning to detect feature activation. For various state-of-the-art neural networks, the results show that energy savings of up to about 30% are achievable, after accounting for all implementation overheads, with a small loss in accuracy.
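The abstract does not spell out the prediction mechanism, but the idea can be illustrated with a minimal sketch: a cheap partial accumulation acts as a preview of the output feature, and if it falls below a tuned threshold the feature is predicted to be zeroed by ReLU, so the remaining multiply-accumulates are skipped. The function name, the "preview over the first few input channels" heuristic, and the parameters below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def predict_and_compute(x, w, threshold, preview_channels=4):
    """Hypothetical sketch of early feature-importance prediction.

    x: input patch, shape (C, K, K); w: filter weights, same shape.
    threshold: layer-specific value, tuned on calibration data so that
    skipped features are rarely significantly positive.
    """
    # Cheap preview: accumulate only the first few input channels.
    partial = np.sum(x[:preview_channels] * w[:preview_channels])

    if partial < threshold:
        # Predicted unimportant: the remaining multiply-accumulates are
        # deactivated (in hardware, the corresponding PEs would be gated).
        return 0.0

    # Predicted important: finish the accumulation over all channels.
    full = partial + np.sum(x[preview_channels:] * w[preview_channels:])
    return max(full, 0.0)  # ReLU
```

In such a scheme, energy savings come from the skipped accumulations, while accuracy loss is controlled by how conservatively the per-layer thresholds are tuned.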
|Original language||English (US)|
|Journal||IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems|
|State||Accepted/In press - 2020|
- Biological neural networks
- Computer architecture
- Deep learning
- Energy optimization
- Feature extraction
- Low-power design
- Neural network