TY - JOUR
T1 - VFF-Net
T2 - Evolving forward–forward algorithms into convolutional neural networks for enhanced computational insights
AU - Lee, Gilha
AU - Shin, Jin
AU - Kim, Hyun
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/10
Y1 - 2025/10
N2 - In recent years, significant efforts have been made to overcome the limitations inherent in the traditional back-propagation (BP) algorithm. These limitations include overfitting, vanishing/exploding gradients, slow convergence, and a black-box nature. To address these limitations, alternatives to BP have been explored, the most well-known of which is the forward–forward network (FFN). We propose a visual forward–forward network (VFF-Net) that significantly improves FFNs for deeper networks, focusing on enhancing performance in convolutional neural network (CNN) training. VFF-Net utilizes a label-wise noise labeling method and a cosine-similarity-based contrastive loss, which directly uses intermediate features to solve both the input-information-loss problem and the performance drop caused by the goodness function when applied to CNNs. Furthermore, VFF-Net employs layer grouping, which groups layers with the same output channel for application to well-known existing CNN-based models; this reduces the number of minima that need to be optimized and facilitates transfer to CNN-based models by exploiting the effects of ensemble training. VFF-Net improves the test error by up to 8.31% and 3.80% on a model consisting of four convolutional layers compared with the FFN model targeting a conventional CNN on CIFAR-10 and CIFAR-100, respectively. Furthermore, the fully connected layer-based VFF-Net achieved a test error of 1.70% on the MNIST dataset, which is better than that of existing BP. In conclusion, the proposed VFF-Net significantly reduces the performance gap with BP by improving the FFN and shows the flexibility to be ported to existing CNN-based models.
KW - Backpropagation-free
KW - Convolutional neural network (CNN)
KW - Forward–forward network
UR - https://www.scopus.com/pages/publications/105008126443
U2 - 10.1016/j.neunet.2025.107697
DO - 10.1016/j.neunet.2025.107697
M3 - Article
C2 - 40526999
AN - SCOPUS:105008126443
SN - 0893-6080
VL - 190
JO - Neural Networks
JF - Neural Networks
M1 - 107697
ER -