TY - JOUR
T1 - Efficient Convolutional Processing of Spiking Neural Network With Weight-Sharing Filters
AU - Song, Seunghwan
AU - Jeon, Bosung
AU - Kim, Munhyeon
AU - Kim, Jae Joon
N1 - Publisher Copyright:
© 1980-2012 IEEE.
PY - 2023/6/1
N2 - The importance of implementing efficient convolutional neural networks (CNNs) is increasing. A weight-sharing spiking CNN inference system (WS-SCNN) employing efficient convolution layers (ECLs) is proposed and modeled to enable compact convolutional processing for spiking neural network (SNN) inference. The proposed ECL efficiently maps convolutional features between inputs and filter weights. The ECL does not replicate the synaptic filter array as the input window slides, which minimizes the number of synaptic devices required to implement hardware SNNs. The four-bit weight quantization capability of a fabricated charge-trap flash synaptic device is used to verify accurate multiplication and summation of weights in the ECL. Moreover, a nine-layer WS-SCNN consisting of multiple ECLs is modeled, and its benefits in terms of area and energy are evaluated. Simulation results show that the WS-SCNN achieves 5.68 and 103.5 times higher energy and area efficiency, respectively, than conventional SCNN systems.
KW - Charge trap flash (CTF)
KW - efficient convolutional processing
KW - spiking neural network (SNN)
UR - https://www.scopus.com/pages/publications/85153348614
DO - 10.1109/LED.2023.3265065
M3 - Article
AN - SCOPUS:85153348614
SN - 0741-3106
VL - 44
SP - 1007
EP - 1010
JO - IEEE Electron Device Letters
JF - IEEE Electron Device Letters
IS - 6
ER -