Vocoder-Projected Feature Discriminator

Interspeech 2025

Figure: Vocoder-projected feature discriminator (proposed)
Figure: Cf. vocoder waveform discriminator (previous)

TL;DR

We propose a vocoder-projected feature discriminator (VPFD), which leverages vocoder features to facilitate faster and more efficient adversarial training.

Abstract

In text-to-speech (TTS) and voice conversion (VC), acoustic features, such as mel spectrograms, are typically used as synthesis or conversion targets owing to their compactness and ease of learning. However, because the ultimate goal is to generate high-quality waveforms, employing a vocoder to convert these features into waveforms and applying adversarial training in the time domain is reasonable. Nevertheless, upsampling the waveform introduces significant time and memory overheads. To address this issue, we propose a vocoder-projected feature discriminator (VPFD), which uses vocoder features for adversarial training. Experiments on diffusion-based VC distillation demonstrated that a pretrained and frozen vocoder feature extractor with a single upsampling step is necessary and sufficient to achieve a VC performance comparable to that of waveform discriminators while reducing the training time and memory consumption by 9.6 and 11.4 times, respectively.
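The mechanism described in the abstract can be sketched in PyTorch. This is a minimal, illustrative sketch, not the paper's exact architecture: the `VocoderPrefix` stand-in, channel sizes, and discriminator head below are all assumptions. It shows only the core idea — a pretrained, frozen prefix of the vocoder (here, up to a single upsampling step) projects mel spectrograms into feature space, and a small trainable head produces real/fake logits there, so adversarial training never requires the fully upsampled waveform.

```python
# Illustrative sketch of a vocoder-projected feature discriminator (VPFD).
# Module names and sizes are assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn

class VocoderPrefix(nn.Module):
    """Stand-in for the first layers of a pretrained HiFi-GAN-style vocoder:
    an input conv followed by ONE transposed-conv upsampling step."""
    def __init__(self, n_mels=80, channels=256, upsample=8):
        super().__init__()
        self.pre = nn.Conv1d(n_mels, channels, kernel_size=7, padding=3)
        self.up = nn.ConvTranspose1d(channels, channels // 2,
                                     kernel_size=2 * upsample,
                                     stride=upsample, padding=upsample // 2)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, mel):  # mel: (B, n_mels, T)
        return self.act(self.up(self.act(self.pre(mel))))  # (B, C/2, T*upsample)

class VPFD(nn.Module):
    def __init__(self, vocoder_prefix):
        super().__init__()
        self.extractor = vocoder_prefix
        # Pretrained and FROZEN, as the paper finds necessary and sufficient.
        # Only the parameters are frozen; gradients still flow through the
        # extractor to the generator's mel output during adversarial training.
        for p in self.extractor.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(
            nn.Conv1d(128, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv1d(64, 1, 3, padding=1))  # frame-level real/fake logits

    def forward(self, mel):
        feats = self.extractor(mel)
        return self.head(feats)

prefix = VocoderPrefix()   # in practice: loaded from a pretrained vocoder
disc = VPFD(prefix)
mel = torch.randn(2, 80, 50)  # dummy batch of mel spectrograms
logits = disc(mel)
print(logits.shape)  # torch.Size([2, 1, 200])
```

Because the extractor stops after one upsampling step (here 8x rather than the full ~256x to waveform), the discriminator operates on a far shorter sequence, which is where the reported training-time and memory savings come from.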

Results

Table 1: Comparison of performance with varying numbers of upsampling steps
Systems compared: FVG, FVG+VPFD0, FVG+VPFD1, FVG+VPFD2, FVG+VPFD3, FVG+VPFD4 (subscript: number of upsampling steps)
Conversion pairs: Female → Female, Male → Male, Female → Male, Male → Female

Table 2: Analysis of the importance of pretraining and freezing the vocoder feature extractor
Systems compared: four FVG+VPFD1 variants, differing in whether the vocoder feature extractor is pretrained and whether it is frozen
Conversion pairs: Female → Female, Male → Male, Female → Male, Male → Female

Table 3: Comparison with other training acceleration techniques
Systems compared: FVG, FVGearly, FVG w/o MRD, FVG w/o MPD, FVG+MelDsmall, FVG+MelDlarge, FVG+VPFD1
Conversion pairs: Female → Female, Male → Male, Female → Male, Male → Female

Table 4: Subjective evaluations
Systems compared: DiffVC-30, FVG, FVG+VPFD1
Conversion pairs: Female → Female, Male → Male, Female → Male, Male → Female

Table 5: Results on the LibriTTS dataset
Systems compared: FVG, FVG+VPFD1
Conversion pairs: Female → Female, Male → Male, Female → Male, Male → Female

Citation

@inproceedings{kaneko2025vpfd,
  title={Vocoder-Projected Feature Discriminator},
  author={Kaneko, Takuhiro and Kameoka, Hirokazu and Tanaka, Kou and Kondo, Yuto},
  booktitle={Interspeech},
  year={2025},
}