Abstract
This paper addresses audio declipping, the restoration of the original sound from a clipped audio signal, and proposes a new method based on a deep neural network. The technique first detects clipped frames according to the number of clipped audio samples in each frame. A deep neural network is then trained with the magnitude spectrum of the clipped frame as its input and the magnitude spectrum of the original frame as its target output. In experiments on a speech database that compared the RMSE and log-spectral distance (LSD) between the original sound and the reconstructed signal, the proposed method outperformed the existing method.
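As a rough illustration of the pipeline the abstract outlines (clipped-frame detection by counting clipped samples, a DNN mapping clipped-frame magnitude spectra to clean ones, and RMSE/LSD evaluation), a minimal Python sketch follows. The frame length, clipping threshold, detection count, and network size below are assumed values chosen for illustration, not the settings reported in the paper.

```python
# Illustrative sketch of a DNN-based declipping pipeline.
# All numeric parameters here are hypothetical, not the authors' configuration.
import numpy as np
import torch
import torch.nn as nn

FRAME_LEN = 512           # assumed analysis frame length
CLIP_LEVEL = 0.5          # assumed clipping level (normalized amplitude)
MIN_CLIPPED_SAMPLES = 10  # assumed per-frame detection threshold

def is_clipped_frame(frame, clip_level=CLIP_LEVEL, min_count=MIN_CLIPPED_SAMPLES):
    """Flag a frame as clipped when enough samples sit at or above the clipping level."""
    return np.sum(np.abs(frame) >= clip_level) >= min_count

def magnitude_spectrum(frame):
    """One-sided FFT magnitude spectrum of a single frame."""
    return np.abs(np.fft.rfft(frame, n=FRAME_LEN))

class DeclipNet(nn.Module):
    """Fully connected network mapping a clipped-frame spectrum to a clean spectrum."""
    def __init__(self, dim=FRAME_LEN // 2 + 1, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)

def rmse(ref, est):
    """Root-mean-square error between reference and reconstructed waveforms."""
    return np.sqrt(np.mean((ref - est) ** 2))

def lsd(ref_mag, est_mag, eps=1e-8):
    """Log-spectral distance between two magnitude spectra (one common definition)."""
    diff = 20 * np.log10((ref_mag + eps) / (est_mag + eps))
    return np.sqrt(np.mean(diff ** 2))
```

In such a setup, only frames flagged by `is_clipped_frame` would be passed through the network; unclipped frames would be left untouched, and the reconstructed signal would be compared against the original with `rmse` and `lsd`.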
| Translated title of the contribution | An Audio Declipping Method Based on Deep Neural Networks |
| --- | --- |
| Original language | English |
| Pages (from-to) | 1306-1309 |
| Number of pages | 4 |
| Journal | 한국통신학회논문지 (The Journal of Korean Institute of Communications and Information Sciences) |
| Volume | 47 |
| Issue number | 9 |
| DOIs | |
| State | Published - Sep 2022 |