TY - JOUR
T1 - Implementation of Convolutional Neural Networks in Memristor Crossbar Arrays with Binary Activation and Weight Quantization
AU - Park, Jinwoo
AU - Kim, Sungjoon
AU - Song, Min Suk
AU - Youn, Sangwook
AU - Kim, Kyuree
AU - Kim, Tae Hyeon
AU - Kim, Hyungjin
N1 - Publisher Copyright:
© 2024 American Chemical Society
PY - 2024/1/10
Y1 - 2024/1/10
N2 - We propose a hardware-friendly convolutional neural network architecture based on a 32 × 32 memristor crossbar array with an overshoot suppression layer. Gradual switching characteristics in both set and reset operations enable 3-bit multilevel operation across the whole array, which can be utilized as 16 kernels. Moreover, a binary activation function mapped to the read voltage and ground is introduced, with training evaluated using a decision boundary of 0.5 and its estimated gradient. Additionally, we adopt a fixed kernel method in which inputs are applied sequentially to the crossbar array with a differential memristor pair scheme, reducing the number of unused cells. The binary activation is robust against device state variations, and a neuron circuit is experimentally demonstrated on a customized breadboard. Owing to the analogue switching characteristics of the memristor device, accurate vector-matrix multiplication (VMM) operations are experimentally demonstrated by combining sequential inputs with weights obtained through tuning operations in the crossbar array. In addition, the feature images extracted by VMM during hardware inference on 100 test samples are classified, and the classification performance of off-chip training is compared with software results. Finally, inference results as a function of tuning tolerance are statistically verified over several tuning cycles.
KW - binary activation function
KW - convolutional neural network
KW - memristor crossbar array
KW - neuromorphic computing
KW - weight quantization
UR - https://www.scopus.com/pages/publications/85181823482
U2 - 10.1021/acsami.3c13775
DO - 10.1021/acsami.3c13775
M3 - Article
C2 - 38163259
AN - SCOPUS:85181823482
SN - 1944-8244
VL - 16
SP - 1054
EP - 1065
JO - ACS Applied Materials & Interfaces
JF - ACS Applied Materials & Interfaces
IS - 1
ER -