An Audio Declipping Method Based on Deep Neural Networks

Seung Un Choi, Seung Ho Choi

Research output: Contribution to journal › Article › peer-review

Abstract

This paper addresses declipping, i.e., restoring the original audio from a clipped signal, and proposes a new method based on a deep neural network. The method first detects clipped frames based on the number of clipped samples in each frame. A deep neural network is then trained with the magnitude spectrum of a clipped frame as input and the magnitude spectrum of the corresponding original frame as output. In experiments on a speech database comparing the RMSE and log-spectral distance (LSD) between the original and reconstructed signals, the proposed method outperformed the existing method.
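
The frame-wise pipeline summarized in the abstract can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the authors' published implementation: the frame length, clipping threshold, clipped-sample count, network architecture, and the RMSE/LSD helpers are all illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

FRAME_LEN = 512      # assumed frame length in samples
CLIP_LEVEL = 0.99    # assumed normalized clipping threshold
MIN_CLIPPED = 16     # assumed minimum clipped-sample count to flag a frame

def is_clipped_frame(frame: np.ndarray) -> bool:
    """Detect a clipped frame by counting samples at or above the clipping level."""
    return int(np.sum(np.abs(frame) >= CLIP_LEVEL)) >= MIN_CLIPPED

def frame_magnitude(frame: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of a Hann-windowed frame."""
    return np.abs(np.fft.rfft(frame * np.hanning(len(frame))))

class DeclipNet(nn.Module):
    """Maps the magnitude spectrum of a clipped frame to a clean-frame magnitude."""
    def __init__(self, n_bins: int = FRAME_LEN // 2 + 1, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins), nn.ReLU(),  # magnitudes are non-negative
        )

    def forward(self, mag: torch.Tensor) -> torch.Tensor:
        return self.net(mag)

def rmse(ref: np.ndarray, est: np.ndarray) -> float:
    """Root-mean-square error between reference and estimate."""
    return float(np.sqrt(np.mean((ref - est) ** 2)))

def lsd(ref_mag: np.ndarray, est_mag: np.ndarray, eps: float = 1e-8) -> float:
    """Log-spectral distance (dB) between two magnitude spectra."""
    diff = 20.0 * np.log10((ref_mag + eps) / (est_mag + eps))
    return float(np.sqrt(np.mean(diff ** 2)))

# One training step: clipped-frame magnitude in, original-frame magnitude out.
model = DeclipNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

clean = np.random.randn(FRAME_LEN) * 0.5                    # placeholder clean frame
clipped = np.clip(clean, -CLIP_LEVEL, CLIP_LEVEL)            # its clipped version

if is_clipped_frame(clipped):
    x = torch.tensor(frame_magnitude(clipped), dtype=torch.float32).unsqueeze(0)
    y = torch.tensor(frame_magnitude(clean), dtype=torch.float32).unsqueeze(0)
    loss = loss_fn(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time, under the same assumptions, only frames flagged by the clipping detector would be passed through the network, and reconstruction quality would be scored with the RMSE and LSD helpers above against the original signal.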

Original language: English
Pages (from-to): 1306-1309
Number of pages: 4
Journal: Journal of Korean Institute of Communications and Information Sciences
Volume: 47
Issue number: 9
DOIs
State: Published - Sep 2022

Keywords

  • Audio declipping
  • Broadcasting audio
  • Clipping detection
  • Deep neural network
  • Speech communication
