인공신경망의 연결압축에 대한 연구 (A Study on Compression of Connections in Deep Artificial Neural Networks)

Translated title of the contribution: A Study on Compression of Connections in Deep Artificial Neural Networks

Research output: Contribution to journal › Article › peer-review

Abstract

Recently, deep learning, that is, technologies using large or deep artificial neural networks, has shown remarkable performance, and the increasing size of the network contributes to its performance improvement. However, the increase in the size of the neural network leads to an increase in the amount of computation, which causes problems such as circuit complexity, price, heat generation, and real-time constraints. In this paper, we propose and test a method that reduces the number of network connections by effectively pruning redundant connections while keeping the performance difference from the original neural network within a desired range. In particular, we propose a simple method that improves performance through re-learning and guarantees the desired performance by allocating an error rate per layer in order to account for the differences between layers. Experiments were performed on typical neural network structures such as the FCN (fully connected network) and CNN (convolutional neural network) structures, and confirmed that performance similar to that of the original neural network can be obtained with only about 1/10 of the connections.
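As a rough illustration of the kind of connection pruning the abstract describes, the sketch below zeroes out the smallest-magnitude weights of a layer so that only a chosen fraction of connections survives. This is a minimal sketch assuming the paper's method resembles standard per-layer magnitude pruning; the function name `prune_layer` and the parameter `keep_ratio` are illustrative, not from the paper, and the re-learning (fine-tuning) step that would follow pruning is omitted.

```python
import numpy as np

def prune_layer(weights, keep_ratio):
    """Zero out all but the largest-magnitude connections of one layer.

    keep_ratio: fraction of connections to keep, e.g. 0.1 for ~1/10
    (the compression level reported in the abstract).
    """
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_ratio))
    # Threshold = magnitude of the k-th largest weight in this layer;
    # choosing it per layer mirrors the per-layer error allocation idea.
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))       # a stand-in fully connected layer
pruned, mask = prune_layer(w, keep_ratio=0.1)
print(mask.mean())                    # fraction of surviving connections
```

In a full pipeline, each pruning pass would be followed by re-training the remaining weights, and the allowed per-layer error rate would control how aggressively each layer is pruned.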
Original language: Korean
Pages (from-to): 17-24
Number of pages: 8
Journal: 한국산업정보학회논문지
Volume: 22
Issue number: 5
DOIs
State: Published - Oct 2017
