TY - GEN
T1 - Power-Efficient Double-Cyclic Low-Precision Training for Convolutional Neural Networks
AU - Kim, Sungrae
AU - Kim, Hyun
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Owing to the rapid development of deep learning, there has been remarkable growth in the field of computer vision, including image classification. However, because recent deep learning models require many parameters and computations, it is essential to reduce power consumption through weight reduction for practical use on embedded platforms such as mobile devices. In particular, attempts to train deep learning models on edge/mobile devices have been increasing, both to obtain models customized to user environments and to address privacy issues. However, because batteries and hardware resources are limited in the edge/mobile environment, the need for low-precision training has grown. In this study, we propose a power-efficient double-cyclic low-precision training method that uses two different precision cycles for weights and activations during training. Verification of the proposed method on various ResNet models shows an average accuracy improvement of 0.25% over the existing low-precision training method and an approximately 25% reduction in power consumption. Consequently, a 92.8% reduction in hardware resources is achieved with negligible performance degradation compared to full-precision training.
AB - Owing to the rapid development of deep learning, there has been remarkable growth in the field of computer vision, including image classification. However, because recent deep learning models require many parameters and computations, it is essential to reduce power consumption through weight reduction for practical use on embedded platforms such as mobile devices. In particular, attempts to train deep learning models on edge/mobile devices have been increasing, both to obtain models customized to user environments and to address privacy issues. However, because batteries and hardware resources are limited in the edge/mobile environment, the need for low-precision training has grown. In this study, we propose a power-efficient double-cyclic low-precision training method that uses two different precision cycles for weights and activations during training. Verification of the proposed method on various ResNet models shows an average accuracy improvement of 0.25% over the existing low-precision training method and an approximately 25% reduction in power consumption. Consequently, a 92.8% reduction in hardware resources is achieved with negligible performance degradation compared to full-precision training.
KW - convolutional neural network
KW - hardware implementation
KW - image classification
KW - low-power
KW - low-precision training
KW - quantization
UR - http://www.scopus.com/inward/record.url?scp=85139030964&partnerID=8YFLogxK
U2 - 10.1109/AICAS54282.2022.9869948
DO - 10.1109/AICAS54282.2022.9869948
M3 - Conference contribution
AN - SCOPUS:85139030964
T3 - Proceedings - IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2022
SP - 344
EP - 347
BT - Proceedings - IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 4th IEEE International Conference on Artificial Intelligence Circuits and Systems, AICAS 2022
Y2 - 13 June 2022 through 15 June 2022
ER -
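
The abstract names the core mechanism, two independent precision cycles for weights and activations, but does not give the schedule itself. Below is a minimal Python sketch of double-cyclic bit-width scheduling under an assumed cosine-shaped cycle; the function names, cycle periods, and bit ranges are illustrative assumptions, not the authors' specification.

import math

def cyclic_bits(epoch, period, b_min, b_max):
    # Position within the current cycle, in [0, 1).
    phase = (epoch % period) / period
    # Cosine shape: bit-width sweeps b_min -> b_max -> b_min over one period.
    frac = 0.5 * (1.0 - math.cos(2.0 * math.pi * phase))
    return int(round(b_min + (b_max - b_min) * frac))

def fake_quantize(values, bits):
    # Uniform symmetric fake quantization of a list of floats to `bits` bits.
    qmax = 2 ** (bits - 1) - 1
    scale = max(max(abs(v) for v in values), 1e-8) / qmax
    return [max(-qmax, min(qmax, round(v / scale))) * scale for v in values]

# Two different cycle periods realize the "double-cyclic" idea: weights and
# activations follow independent precision cycles during training.
for epoch in range(90):
    w_bits = cyclic_bits(epoch, period=30, b_min=3, b_max=8)  # weight precision (assumed range)
    a_bits = cyclic_bits(epoch, period=10, b_min=4, b_max=8)  # activation precision (assumed range)
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}: w_bits={w_bits}, a_bits={a_bits}")
    # In a real training loop, each layer would apply fake_quantize with
    # w_bits to its weights and a_bits to its activations before the forward pass.

Lowering the average bit-width of weights and activations reduces multiply-accumulate energy and datapath width, which is the general mechanism behind the power and hardware-resource savings the abstract reports.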