Bandwidth Extension is All You Need

Jiaqi Su, Yunyun Wang, Adam Finkelstein, Zeyu Jin

Speech generation and enhancement have seen recent breakthroughs in quality thanks to deep learning. These methods typically operate at a limited sampling rate of 16–22 kHz due to computational complexity and available datasets. This limitation imposes a gap between the output of such methods and that of high-fidelity (≥ 44 kHz) real-world audio applications. This paper proposes a new bandwidth extension (BWE) method that expands 8–16 kHz speech signals to 48 kHz. The method is based on a feed-forward WaveNet architecture trained with a GAN-based deep feature loss. A mean-opinion-score (MOS) experiment shows significant improvement in quality over state-of-the-art BWE methods. An AB test reveals that our 16-to-48 kHz BWE is able to achieve fidelity that is typically indistinguishable from real high-fidelity recordings. We use our method to enhance the output of recent speech generation and denoising methods, and experiments demonstrate significant improvement in sound quality over these baselines. We propose this as a general approach to narrow the gap between generated speech and recorded speech, without the need to adapt such methods to higher sampling rates.
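As a rough illustration of the feed-forward (non-causal) WaveNet idea mentioned above, the numpy sketch below stacks symmetrically padded dilated 1-D convolutions with residual connections, so the receptive field grows exponentially with depth while the output keeps the input length. The kernel size, dilation schedule, tanh nonlinearity, and random weights are illustrative assumptions only; they are not the paper's actual architecture, and the GAN-based deep feature loss used for training is not shown.

```python
import numpy as np

def dilated_conv(x, w, dilation):
    # Non-causal dilated 1-D convolution with zero padding ("same" output length).
    k = len(w)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    y = np.zeros_like(x)
    for i, wi in enumerate(w):
        # Each tap looks `i * dilation` samples ahead in the padded signal.
        y += wi * xp[i * dilation : i * dilation + len(x)]
    return y

def feedforward_wavenet(x, weights, dilations):
    # Toy residual stack: each layer adds a nonlinearly filtered view of its input.
    h = x
    for w, d in zip(weights, dilations):
        h = h + np.tanh(dilated_conv(h, w, d))
    return h

rng = np.random.default_rng(0)
dilations = [1, 2, 4, 8]          # exponentially growing dilation (assumed schedule)
weights = [0.1 * rng.standard_normal(3) for _ in dilations]  # kernel size 3 (assumed)
x = rng.standard_normal(480)      # 10 ms of audio at 48 kHz
y = feedforward_wavenet(x, weights, dilations)
print(y.shape)                    # same length as the input: (480,)
```

In a real BWE system the low-rate input would first be upsampled to 48 kHz, and the network would be trained to restore the missing high-frequency content; here the point is only how a non-causal dilated stack maps a waveform to a waveform of the same length.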

The DEMAND Dataset: Denoising + BWE (More Samples)
The CMU Arctic Dataset: Vocoding + BWE (More Samples)
The VCTK Dataset: 8k-to-48k BWE (More Samples)
The VCTK Dataset: 16k-to-48k BWE (More Samples)
The DAPS Dataset: 8k-to-48k BWE (More Samples)
The DAPS Dataset: 16k-to-48k BWE (More Samples)