Advancements in Neural Network-Based TTS (2000s)
== Key Innovations ==
Deep neural network-based Text-to-Speech (TTS) has undergone a remarkable transformation, marking a shift from traditional rule-based and statistical methods towards neural network-driven solutions. Researchers have continually improved the quality and efficiency of TTS systems during this period. In the 2000s, DNN-based TTS models began to gain prominence, paving the way for more natural and expressive speech synthesis. Starting from the foundational models, this section traces the subsequent advancements in TTS technology and introduces their working mechanisms.<ref>G. Hinton et al., "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups," in IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, Nov. 2012, doi: 10.1109/MSP.2012.2205597.</ref>

==== <big>Articulatory Features-Based TTS</big> ====
Articulatory feature-based Text-to-Speech (TTS) uses articulatory features, which represent the movements and positions of the speech articulators (such as the tongue, lips, and jaw), as the basis for synthesizing speech. This approach aims to capture the detailed articulatory information present in the speech signal, allowing for more natural and expressive speech synthesis.<ref name=":3">Singampalli, V. D. (2010). ''Statistical identification of articulatory roles in speech production'' (Order No. 10131268). Available from ProQuest Dissertations & Theses A&I. (1810640121). Retrieved from <nowiki>http://server.proxy-ub.rug.nl/login?url=https://www.proquest.com/dissertations-theses/statistical-identification-articulatory-roles/docview/1810640121/se-2</nowiki></ref>

'''Innovation:'''
* Utilization of Biophysical Phonetics: By incorporating articulatory models and other biophysical phonetic information, this approach enhances the quality and naturalness of speech synthesis.
* Enhanced Speech Quality: Articulatory features-based TTS improves the quality of synthesized speech by considering articulatory features, making it more closely resemble natural human speech.
* Addressing Shortcomings of Traditional TTS: This method aims to compensate for the limitations of traditional TTS systems, particularly in terms of naturalness and quality of speech.
* Improved Control Capabilities: Articulatory features-based TTS offers enhanced control over speech synthesis, enabling users to adjust parameters such as pitch, speed, and other characteristics.
* Data-Driven Learning: This approach leans towards data-driven learning, reducing reliance on manual rules and models for speech synthesis.
These innovations have the potential to bring speech synthesis systems closer to natural human speech while providing greater control over the synthesized output.<ref name=":3" />

==== <big>DNN-Based Acoustic TTS Modeling and Vocoder</big> ====

====== <big>TTS Modeling</big> ======
'''Tacotron:''' Tacotron is an end-to-end generative Text-to-Speech (TTS) model that synthesizes speech directly from input characters, using a sequence-to-sequence (seq2seq) architecture with attention. It avoids the need for phoneme-level alignment (a minimal sketch of the attention step follows the list below).<ref name=":4">Wang, Y., Skerry-Ryan, R. J., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., et al. (2017). "Tacotron: Towards End-to-End Speech Synthesis." In Proceedings of Interspeech 2017.</ref>

'''Innovation:'''
* End-to-End Generative Model: Tacotron is an end-to-end generative model that synthesizes speech directly from characters, eliminating the need for intermediate linguistic features or acoustic models.
* Sequence-to-Sequence Model with Attention: It is based on a sequence-to-sequence model with attention, enabling accurate and natural speech generation.
* No Phoneme-Level Alignment Required: Tacotron does not require phoneme-level alignment, which simplifies scaling to large datasets with transcripts.
* Faster Frame-Level Generation: Tacotron generates speech at the frame level, making it considerably faster than sample-level autoregressive methods such as WaveNet.
* High Subjective Mean Opinion Score (MOS): Tacotron attains a mean opinion score of 3.82 out of 5, surpassing a production parametric system in naturalness for US English.
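The attention step mentioned above can be made concrete: at every decoder frame the model scores all character-encoder states, mixes them into a context vector, and predicts the next mel-scale frame. The PyTorch sketch below is a minimal, illustrative version only; the module names, dimensions, GRU cell, and additive scoring function are assumptions for this sketch, not Tacotron's published CBHG-based configuration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttentionDecoderStep(nn.Module):
    """One illustrative decoder step: attend over encoder states, emit a mel frame."""
    def __init__(self, enc_dim=256, dec_dim=256, attn_dim=128, n_mels=80):
        super().__init__()
        self.query_layer = nn.Linear(dec_dim, attn_dim)
        self.memory_layer = nn.Linear(enc_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)
        self.rnn = nn.GRUCell(n_mels + enc_dim, dec_dim)
        self.mel_out = nn.Linear(dec_dim + enc_dim, n_mels)

    def forward(self, prev_mel, dec_state, encoder_outputs):
        # prev_mel: (B, n_mels), dec_state: (B, dec_dim), encoder_outputs: (B, T_text, enc_dim)
        # Additive (content-based) alignment scores over every encoder time step.
        energies = self.score(torch.tanh(
            self.query_layer(dec_state).unsqueeze(1) + self.memory_layer(encoder_outputs)
        )).squeeze(-1)                                   # (B, T_text)
        alignment = F.softmax(energies, dim=-1)          # soft character-to-frame alignment
        context = torch.bmm(alignment.unsqueeze(1), encoder_outputs).squeeze(1)  # (B, enc_dim)
        # Autoregressive update conditioned on the previous mel frame and the context.
        dec_state = self.rnn(torch.cat([prev_mel, context], dim=-1), dec_state)
        mel_frame = self.mel_out(torch.cat([dec_state, context], dim=-1))
        return mel_frame, dec_state, alignment

# Tiny usage example with random "encoder" states standing in for encoded characters.
step = AdditiveAttentionDecoderStep()
enc = torch.randn(2, 37, 256)                  # batch of 2 utterances, 37 characters each
mel, state, align = step(torch.zeros(2, 80), torch.zeros(2, 256), enc)
print(mel.shape, align.shape)                  # torch.Size([2, 80]) torch.Size([2, 37])
</syntaxhighlight>

The soft alignment returned at each step is what lets such a model learn character-to-frame correspondences without any explicit phoneme-level alignment.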
'''Tacotron 2:''' Tacotron 2 is a neural network architecture for direct text-to-speech synthesis. It consists of two core elements: a feature prediction network and a modified [[wikipedia:WaveNet|WaveNet]] vocoder.<ref name=":0" />

'''Innovation:'''
* Compact Acoustic Intermediate Representation: Tacotron 2 uses mel spectrograms, providing a streamlined representation of speech features and reducing WaveNet's architectural complexity.
* Modified WaveNet Vocoder: Tacotron 2 adapts the WaveNet architecture to convert mel spectrograms into time-domain waveform samples, achieving audio quality close to human speech.
* Integration of Tacotron and WaveNet: By combining Tacotron-style prosody modeling with a WaveNet vocoder, Tacotron 2 delivers state-of-the-art sound quality in a unified, neural approach to speech synthesis.

'''FastSpeech:''' FastSpeech is a neural text-to-speech (TTS) system that addresses the slow inference speed, lack of robustness, and lack of controllability of traditional TTS models. It uses a feed-forward network based on the Transformer to generate mel spectrograms in parallel, allowing much faster synthesis (a minimal sketch of such a parallel block follows the list below).<ref name=":1">Ren, Y., Ruan, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., & Liu, T.-Y. (2019). FastSpeech: Fast, Robust and Controllable Text to Speech. arXiv preprint arXiv:1905.09263.</ref>

'''Innovation:'''
* Multi-head attention mechanism: Enhances long-range dependency modeling and parallelization by allowing simultaneous attention to different parts of the input sequence.
* Positional encoding: Provides positional information to input sequence elements, aiding in distinguishing elements with identical values.
* Layer normalization: Improves training stability by normalizing the inputs to each network layer.
* Stacked self-attention layers: Enables the network to learn multiple levels of representation of the input sequence, improving output quality.
* No recurrence or convolution: Unlike traditional architectures, the Transformer network omits recurrent connections and convolutions, resulting in improved efficiency and parallelizability.<ref name=":1" />
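Because the feed-forward Transformer described above has no recurrence, every mel frame can be computed at once rather than one step at a time. The PyTorch sketch below shows a stack of multi-head self-attention blocks with sinusoidal positional encoding and layer normalization mapping an already length-expanded hidden sequence to mel frames in a single parallel pass; the dimensions, layer count, and module names are illustrative assumptions, not the published FastSpeech configuration (which additionally uses duration prediction and convolutional feed-forward layers).

<syntaxhighlight lang="python">
import math
import torch
import torch.nn as nn

class SinusoidalPositionalEncoding(nn.Module):
    """Adds fixed sine/cosine position information so identical inputs stay distinguishable."""
    def __init__(self, d_model=256, max_len=2000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1).float()
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, x):                          # x: (B, T, d_model)
        return x + self.pe[: x.size(1)]

class ParallelMelDecoder(nn.Module):
    """Stacked self-attention blocks that emit all mel frames in one parallel pass."""
    def __init__(self, d_model=256, n_heads=2, n_layers=4, n_mels=80):
        super().__init__()
        self.pos = SinusoidalPositionalEncoding(d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=1024,
                                           batch_first=True)  # multi-head attention + layer norm
        self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)
        self.mel_proj = nn.Linear(d_model, n_mels)

    def forward(self, expanded_hidden):            # (B, T_frames, d_model), already length-expanded
        return self.mel_proj(self.blocks(self.pos(expanded_hidden)))

# Usage: 2 utterances, 120 frames each, produced without any autoregressive loop.
decoder = ParallelMelDecoder()
mel = decoder(torch.randn(2, 120, 256))
print(mel.shape)                                   # torch.Size([2, 120, 80])
</syntaxhighlight>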
'''Transformer:''' The Transformer is a neural network architecture introduced by Vaswani et al. in 2017. It stands out for its exclusive reliance on attention mechanisms, foregoing recurrent connections and convolutions, and it has been highly successful in natural language processing, particularly neural machine translation. The Transformer comprises an encoder and a decoder, each built from a stack of identical blocks, and it employs multi-head self-attention to model dependencies between the input and output sequences.<ref name=":2">Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, and Ming Liu. 2019. Neural speech synthesis with transformer network. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and Thirty-First Innovative Applications of Artificial Intelligence Conference and Ninth AAAI Symposium on Educational Advances in Artificial Intelligence (AAAI'19/IAAI'19/EAAI'19). AAAI Press, Article 823, 6706–6713. <nowiki>https://doi.org/10.1609/aaai.v33i01.33016706</nowiki></ref>

'''Features:'''
* Multi-Head Attention Efficiency: In Transformer-based TTS, multi-head self-attention replaces the recurrent encoder and decoder of Tacotron 2, so hidden states can be constructed concurrently rather than sequentially. This speeds up training by about 4.25 times and effectively addresses long-range dependencies.
* WaveNet Vocoder's Role: Within the Transformer TTS network, a WaveNet vocoder synthesizes high-quality audio from mel spectrograms, closely resembling human recordings on specific datasets.
* Transformer Architecture: The encoder-decoder stacks of identical attention blocks carry over directly from neural machine translation to speech synthesis.<ref name=":2" />

====== <big>[[wikipedia:Vocoder|Vocoder]]</big> ======
'''WaveNet:''' WaveNet serves as a vital component in Tacotron 2 for speech synthesis. It transforms the mel-scale spectrograms predicted by the feature prediction network into time-domain waveform samples, resulting in high-quality audio waveforms. In this architecture, WaveNet is adapted to function as a vocoder: it takes the predicted mel spectrograms and applies dilated convolutional layers organized into dilation cycles (a minimal sketch of such a stack follows the list below).<ref name=":0" />

'''Innovation:'''
* Dilated Causal Convolutions: Utilizes dilated causal convolutions to expand the receptive field exponentially with the number of layers.
* Gated Activation Units: Incorporates gated activation units to regulate information flow within the network.
* Skip Connections: Employs skip connections so the network can learn residual functions.
* Softmax Output Layer: Uses a softmax output layer to model the probability distribution over the next audio sample.
* Hierarchical Structure: Adopts a hierarchical structure to model audio at multiple scales.<ref name=":0">Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., et al. (2017). Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. arXiv preprint arXiv:1712.05884.</ref>
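Taken together, the first four bullets describe a stack that can be written down compactly. The PyTorch sketch below implements one dilation cycle of causal 1-D convolutions with gated (tanh and sigmoid) activations, residual and skip connections, and a softmax-ready output over 256 quantized sample values; the channel sizes, the cycle length of ten layers, and the omission of mel-spectrogram conditioning are simplifying assumptions rather than the published WaveNet configuration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualBlock(nn.Module):
    """One dilated causal convolution with a gated activation, residual and skip outputs."""
    def __init__(self, channels=64, skip_channels=128, dilation=1):
        super().__init__()
        self.dilation = dilation
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size=2, dilation=dilation)
        self.res = nn.Conv1d(channels, channels, kernel_size=1)
        self.skip = nn.Conv1d(channels, skip_channels, kernel_size=1)

    def forward(self, x):                           # x: (B, channels, T)
        # Left-pad so the convolution never looks at future samples (causality).
        h = self.conv(F.pad(x, (self.dilation, 0)))
        filt, gate = h.chunk(2, dim=1)
        z = torch.tanh(filt) * torch.sigmoid(gate)  # gated activation unit
        return x + self.res(z), self.skip(z)        # residual path, skip path

class TinyWaveNet(nn.Module):
    """A single dilation cycle (1, 2, 4, ..., 512) with a softmax head over 256 sample levels."""
    def __init__(self, channels=64, skip_channels=128, n_classes=256, cycle=10):
        super().__init__()
        self.input_conv = nn.Conv1d(1, channels, kernel_size=1)
        self.blocks = nn.ModuleList(
            [GatedResidualBlock(channels, skip_channels, dilation=2 ** i) for i in range(cycle)]
        )
        self.head = nn.Sequential(nn.ReLU(), nn.Conv1d(skip_channels, n_classes, kernel_size=1))

    def forward(self, waveform):                    # waveform: (B, 1, T), past samples
        x = self.input_conv(waveform)
        skips = 0
        for block in self.blocks:
            x, s = block(x)
            skips = skips + s                       # sum skip connections across layers
        return self.head(skips)                     # (B, n_classes, T): logits for the next sample

net = TinyWaveNet()
logits = net(torch.randn(2, 1, 4000))
probs = F.softmax(logits, dim=1)                    # categorical distribution per time step
print(probs.shape)                                  # torch.Size([2, 256, 4000])
</syntaxhighlight>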
'''Parallel WaveGAN:''' Parallel WaveGAN is a waveform generator based on generative adversarial networks (GANs). It trains a non-autoregressive WaveNet with multi-resolution spectrogram and adversarial loss functions to synthesize high-quality speech waveforms. Unlike earlier parallel models, it does not require complex density distillation, offering fast, efficient, small-footprint, and competitive waveform generation suitable for real-time applications (a minimal sketch of the multi-resolution spectrogram loss follows the list below).

'''Innovation:'''
* Non-Autoregressive WaveNet Generator: Uses a non-autoregressive WaveNet generator for faster and more efficient speech synthesis compared to traditional autoregressive models.
* Multi-Resolution Spectrogram and Adversarial Loss Functions: Trains the generator with multi-resolution spectrogram and adversarial loss functions to capture the time-frequency distribution of realistic speech waveforms.
* No Density Distillation Required: Simplifies training by eliminating the need for density distillation, improving overall efficiency.
* High-Fidelity Speech Generation with a Small Model: Achieves high-fidelity speech synthesis with a compact model, suitable for real-time applications.
* Faster-Than-Real-Time Inference: Provides inference speeds faster than real time, making it ideal for real-time applications.
* Competitive Performance: Achieves performance competitive with other waveform generation models, ensuring high-quality speech synthesis.<ref>Yamamoto, R., Song, E., & Kim, J.-M. (2020). Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. arXiv preprint arXiv:1910.11480.</ref>
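One way to make the multi-resolution spectrogram objective concrete is to compare generated and real waveforms under several STFT settings at once, so that errors visible at one time-frequency resolution cannot hide at another. The PyTorch sketch below combines a spectral-convergence term and a log-magnitude term; the specific FFT sizes, hop lengths, and equal weighting are assumptions, and the adversarial loss that Parallel WaveGAN adds on top is omitted.

<syntaxhighlight lang="python">
import torch

def stft_magnitude(x, n_fft, hop, win):
    """Magnitude spectrogram of a batch of waveforms, shape (B, n_fft//2 + 1, frames)."""
    window = torch.hann_window(win, device=x.device)
    spec = torch.stft(x, n_fft=n_fft, hop_length=hop, win_length=win,
                      window=window, return_complex=True)
    return spec.abs().clamp(min=1e-7)

def multi_resolution_stft_loss(generated, target,
                               resolutions=((1024, 256, 1024),
                                            (2048, 512, 2048),
                                            (512, 128, 512))):
    """Average spectral-convergence + log-magnitude L1 loss over several STFT resolutions."""
    total = 0.0
    for n_fft, hop, win in resolutions:
        g = stft_magnitude(generated, n_fft, hop, win)
        t = stft_magnitude(target, n_fft, hop, win)
        sc = torch.norm(t - g, p="fro") / torch.norm(t, p="fro")       # spectral convergence
        mag = torch.nn.functional.l1_loss(torch.log(g), torch.log(t))  # log-magnitude distance
        total = total + sc + mag
    return total / len(resolutions)

# Usage: during training, the generator output would replace the random tensor below.
fake = torch.randn(2, 16000, requires_grad=True)    # 1 second of audio at 16 kHz per utterance
real = torch.randn(2, 16000)
loss = multi_resolution_stft_loss(fake, real)
loss.backward()                                      # gradients flow back to the generator
print(float(loss))
</syntaxhighlight>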