====== <big>[[wikipedia:Vocoder|Vocoder]]</big> ======
'''WaveNet:''' WaveNet serves as the vocoder in Tacotron 2 for speech synthesis. It transforms the mel-scale spectrograms predicted by the feature prediction network into time-domain waveform samples, producing high-quality audio. In this role, WaveNet conditions on the predicted mel spectrograms and generates samples through dilated convolutional layers organized into dilation cycles.<ref name=":0" />

'''Innovation:'''
* Dilated Causal Convolutions: Uses dilated causal convolutions so that the receptive field grows exponentially with the number of layers.
* Gated Activation Units: Incorporates gated activation units to regulate the flow of information within the network.
* Residual and Skip Connections: Employs residual connections within each layer and skip connections that pass every layer's output to the final output stack, keeping deep stacks trainable.
* Softmax Output Layer: Models the probability distribution over the next audio sample with a softmax output layer.
* Hierarchical Structure: Repeats dilation cycles so the network models audio structure at multiple time scales (a minimal sketch of one such residual block is shown below).<ref name=":0">Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., ... & Wu, Y. (2018). Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. arXiv preprint arXiv:1712.05884.</ref>

'''Parallel WaveGAN:''' Parallel WaveGAN is a waveform generator based on generative adversarial networks (GANs). It trains a non-autoregressive WaveNet by jointly optimizing multi-resolution spectrogram losses and an adversarial loss, producing high-quality speech waveforms. Unlike Parallel WaveNet, it requires no probability density distillation, which keeps training simple and yields a fast, efficient, small-footprint model with competitive quality, well suited to real-time applications.

'''Innovation:'''
* Non-Autoregressive WaveNet Generator: Uses a non-autoregressive WaveNet generator, making synthesis faster and more efficient than with traditional autoregressive models.
* Multi-Resolution Spectrogram and Adversarial Loss Functions: Trains the generator with multi-resolution spectrogram losses and an adversarial loss so that it captures the time-frequency distribution of realistic speech waveforms (see the spectral-loss sketch below).
* No Density Distillation Required: Simplifies training by eliminating the need for probability density distillation.
* High-Fidelity Speech Generation with a Small Model: Achieves high-fidelity synthesis with a compact model suitable for real-time applications.
* Faster-Than-Real-Time Inference: Generates waveforms faster than real time, making it practical for real-time use.
* Competitive Performance: Achieves quality competitive with other waveform generation models.<ref>Yamamoto, R., Song, E., & Kim, J.-M. (2020). Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. arXiv preprint arXiv:1910.11480.</ref>
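The sketch below illustrates one WaveNet-style residual block combining a dilated causal convolution, a gated activation unit, and residual/skip outputs, plus how such blocks stack into dilation cycles. It is a minimal illustration assuming PyTorch: the layer sizes, class names, and number of cycles are made up for the example and are not taken from the Tacotron 2 or WaveNet reference implementations, and conditioning on mel spectrograms is omitted for brevity.

<syntaxhighlight lang="python">
# Minimal sketch of a WaveNet-style residual block (illustrative, not the
# official Tacotron 2 / WaveNet code). Assumes PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Dilated causal convolution + gated activation + residual/skip outputs."""

    def __init__(self, channels: int, skip_channels: int, dilation: int, kernel_size: int = 2):
        super().__init__()
        # Left-pad so the convolution is causal: output at time t sees only inputs <= t.
        self.causal_pad = (kernel_size - 1) * dilation
        # One convolution produces both the filter and gate halves.
        self.dilated_conv = nn.Conv1d(channels, 2 * channels, kernel_size, dilation=dilation)
        self.residual_proj = nn.Conv1d(channels, channels, 1)
        self.skip_proj = nn.Conv1d(channels, skip_channels, 1)

    def forward(self, x):                           # x: (batch, channels, time)
        h = F.pad(x, (self.causal_pad, 0))           # pad on the left only (causality)
        h = self.dilated_conv(h)
        filt, gate = h.chunk(2, dim=1)
        h = torch.tanh(filt) * torch.sigmoid(gate)   # gated activation unit
        skip = self.skip_proj(h)                     # contribution to the output stack
        residual = self.residual_proj(h) + x         # residual connection
        return residual, skip

# Dilations 1, 2, 4, ... repeated in cycles grow the receptive field
# exponentially with depth (illustrative sizes: 3 cycles of 8 layers).
blocks = nn.ModuleList(
    ResidualBlock(channels=64, skip_channels=128, dilation=2 ** d)
    for _ in range(3) for d in range(8)
)

x = torch.zeros(1, 64, 16000)                        # (batch, channels, time)
skips = 0
for block in blocks:
    x, skip = block(x)
    skips = skips + skip                             # skip outputs are summed, then post-processed
</syntaxhighlight>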
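As a companion sketch, the function below illustrates the multi-resolution STFT loss idea used to train the Parallel WaveGAN generator: magnitude spectrograms of generated and real waveforms are compared at several FFT/hop/window settings through a spectral-convergence term and a log-magnitude term. It again assumes PyTorch; the particular resolutions and weighting are illustrative rather than the paper's exact settings, and the adversarial loss supplied by the discriminator is not shown.

<syntaxhighlight lang="python">
# Minimal sketch of a multi-resolution STFT loss (illustrative resolutions;
# the adversarial term from the discriminator is omitted). Assumes PyTorch.
import torch
import torch.nn.functional as F

def stft_magnitude(x, fft_size, hop_size, win_size):
    """|STFT| of a batch of waveforms x with shape (batch, samples)."""
    window = torch.hann_window(win_size, device=x.device)
    spec = torch.stft(x, fft_size, hop_length=hop_size, win_length=win_size,
                      window=window, return_complex=True)
    return spec.abs().clamp(min=1e-7)

def multi_resolution_stft_loss(fake, real):
    """Spectral convergence + log-magnitude loss, averaged over resolutions."""
    resolutions = [(512, 128, 512), (1024, 256, 1024), (2048, 512, 2048)]  # illustrative
    total = 0.0
    for fft_size, hop_size, win_size in resolutions:
        mag_fake = stft_magnitude(fake, fft_size, hop_size, win_size)
        mag_real = stft_magnitude(real, fft_size, hop_size, win_size)
        # Spectral convergence: relative Frobenius-norm error of the magnitudes.
        sc = torch.norm(mag_real - mag_fake, p="fro") / torch.norm(mag_real, p="fro")
        # Log STFT magnitude loss: L1 distance in the log domain.
        log_mag = F.l1_loss(torch.log(mag_fake), torch.log(mag_real))
        total = total + sc + log_mag
    return total / len(resolutions)

# Usage with dummy waveforms of shape (batch, samples):
fake = torch.randn(2, 16000)      # generator output
real = torch.randn(2, 16000)      # ground-truth waveform
loss = multi_resolution_stft_loss(fake, real)
</syntaxhighlight>

Because this loss constrains the generator in the time-frequency domain at several resolutions simultaneously, it lets the non-autoregressive generator learn realistic spectra without the teacher-student density distillation that Parallel WaveNet relies on.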