Bandwidth Extension is All You Need

Jiaqi Su, Yunyun Wang, Adam Finkelstein, Zeyu Jin

Speech generation and enhancement have seen recent breakthroughs in quality thanks to deep learning. These methods typically operate at a limited sampling rate of 16-22kHz due to computational complexity and the available training datasets. This limitation imposes a quality gap between the output of such methods and the demands of high-fidelity (>=44kHz) real-world audio applications. This paper proposes a new bandwidth extension (BWE) method that expands 8-16kHz speech signals to 48kHz. The method is based on a feed-forward WaveNet architecture trained with a GAN-based deep feature loss. A mean-opinion-score (MOS) experiment shows significant improvement in quality over state-of-the-art BWE methods. An AB test reveals that our 16-to-48kHz BWE achieves fidelity that is typically indistinguishable from real high-fidelity recordings. We use our method to enhance the output of recent speech generation and denoising methods, and experiments demonstrate significant improvement in sound quality over these baselines. We propose this as a general approach to narrow the gap between generated speech and recorded speech, without the need to adapt such methods to higher sampling rates.
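To make the model description above concrete, below is a minimal PyTorch sketch of the two components the abstract names: a feed-forward (non-autoregressive) WaveNet-style generator and a GAN-based deep feature loss computed from a discriminator's intermediate activations. This is not the authors' released implementation; the class and function names, layer counts, channel widths, and the assumption that the narrowband input is first resampled to 48 kHz before being fed to the network are all illustrative.

```python
# Minimal sketch, assuming a feed-forward WaveNet generator and a deep feature
# loss over discriminator activations; hyperparameters are illustrative.
import torch
import torch.nn as nn


class GatedResidualBlock(nn.Module):
    """Dilated 1-D convolution with gated activation, residual and skip outputs."""

    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2  # keep the sequence length unchanged
        self.filter = nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation)
        self.gate = nn.Conv1d(channels, channels, kernel_size, padding=pad, dilation=dilation)
        self.res = nn.Conv1d(channels, channels, 1)
        self.skip = nn.Conv1d(channels, channels, 1)

    def forward(self, x):
        h = torch.tanh(self.filter(x)) * torch.sigmoid(self.gate(x))
        return x + self.res(h), self.skip(h)


class FeedForwardWaveNet(nn.Module):
    """Non-autoregressive WaveNet: maps a narrowband waveform that has already
    been resampled to 48 kHz to a full-band 48 kHz waveform in one forward pass."""

    def __init__(self, channels=64, kernel_size=3, dilations=(1, 2, 4, 8, 16, 32, 64, 128)):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.blocks = nn.ModuleList(
            [GatedResidualBlock(channels, kernel_size, d) for d in dilations])
        self.out = nn.Sequential(nn.ReLU(), nn.Conv1d(channels, channels, 1),
                                 nn.ReLU(), nn.Conv1d(channels, 1, 1))

    def forward(self, x):  # x: (batch, 1, samples) at 48 kHz
        h = self.inp(x)
        skips = 0
        for block in self.blocks:
            h, s = block(h)
            skips = skips + s
        return self.out(skips)


def deep_feature_loss(discriminator_features, real, fake):
    """GAN-based deep feature loss: L1 distance between the discriminator's
    intermediate activations on real and generated audio. `discriminator_features`
    is a hypothetical callable returning a list of feature maps; the paper's
    exact discriminator and loss weighting may differ."""
    loss = 0.0
    for f_real, f_fake in zip(discriminator_features(real), discriminator_features(fake)):
        loss = loss + torch.mean(torch.abs(f_real.detach() - f_fake))
    return loss


if __name__ == "__main__":
    model = FeedForwardWaveNet()
    lowband = torch.randn(1, 1, 16000)                # 1 s of 16 kHz audio
    upsampled = lowband.repeat_interleave(3, dim=-1)  # crude 3x stand-in for a real resampler
    fullband = model(upsampled)                       # (1, 1, 48000)
    print(fullband.shape)
```

Because the generator is feed-forward, it processes an entire utterance in a single pass rather than sample by sample, which is what makes 48 kHz output computationally practical compared with autoregressive WaveNet decoding.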

The VCTK Dataset: 16k-to-48k BWE
(Audio examples: 16 kHz VCTK recordings extended to 48 kHz by the proposed method and by the baseline BWE methods listed below.)
REFERENCES
  1. LP [15]: P. Bachhav, M. Todisco, and N. Evans, "Efficient super-wide bandwidth extension using linear prediction based analysis-synthesis," in ICASSP 2018.
  2. Spec [20]: S. E. Eskimez and K. Koishida, "Speech super resolution generative adversarial network," in ICASSP 2019.
  3. FFTNet [6]: B. Feng, Z. Jin, J. Su, and A. Finkelstein, "Learning bandwidth expansion using perceptually-motivated loss," in ICASSP 2019.
  4. Time [28]: X. Li, V. Chebiyyam, et al., "Speech audio super-resolution for speech recognition," in Interspeech 2019.