Advancements in AI TTS (2020s)
== Key Innovations ==
The 2020s have marked a significant decade in the evolution of Text-to-Speech (TTS) technology driven by artificial intelligence (AI). This period has witnessed a host of groundbreaking innovations that have further refined and expanded the capabilities of TTS systems. Some key innovations of the 2020s include:

=== 1. Transfer Learning and Pretrained Models ===
One of the pivotal advancements of the 2020s has been the widespread adoption of transfer learning in TTS. Transfer learning allows knowledge learned by one or more base models to be transferred to other tasks.<ref>Fang, W., Chung, Y.-A., & Glass, J. (2019). ''Towards Transfer Learning for End-to-End Speech Synthesis from Deep Pre-Trained Language Models'' (arXiv:1906.07307). arXiv. <nowiki>http://arxiv.org/abs/1906.07307</nowiki></ref> For TTS, this can mean taking knowledge learned by a general speech model and applying it to personalized voice synthesis. This expedites the training of personalized voice models, as the model already possesses general speech characteristics.<ref>Huang, W.-C., Hayashi, T., Wu, Y.-C., Kameoka, H., & Toda, T. (2019). ''Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining'' (arXiv:1912.06813). arXiv. <nowiki>http://arxiv.org/abs/1912.06813</nowiki></ref> Models like GPT-3 and BERT, initially designed for natural language processing, have been adapted for TTS tasks.<ref>Dida, H. A., Chakravarthy, D. S. K., & Rabbi, F. (2023). ChatGPT and Big Data: Enhancing Text-to-Speech Conversion. ''Mesopotamian Journal of Big Data'', ''2023'', 33–37.</ref> This approach has led to more efficient training and improved performance in TTS systems, with less need for extensive domain-specific data.
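As a minimal sketch of this recipe (the tiny model, checkpoint path, and layer split below are illustrative placeholders, not any published system's API): load weights pretrained on many speakers, freeze the general text-encoding layers, and fine-tune only the decoder on a small personalized dataset.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained TTS acoustic model:
# a text encoder plus a mel-spectrogram decoder.
class TinyTTS(nn.Module):
    def __init__(self, vocab=64, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_mels)

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))
        return self.decoder(states)  # (batch, time, n_mels)

model = TinyTTS()
# In practice the weights would come from a large multi-speaker checkpoint:
# model.load_state_dict(torch.load("pretrained_tts.pt"))  # placeholder path

# Freeze the text-side layers: they carry general, speaker-independent knowledge.
for p in list(model.embed.parameters()) + list(model.encoder.parameters()):
    p.requires_grad = False

# Fine-tune only the decoder on a small personalized dataset (dummy data here).
opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)
tokens = torch.randint(0, 64, (8, 20))   # stand-in text batch
target_mels = torch.randn(8, 20, 80)     # stand-in target spectrograms

for step in range(100):
    opt.zero_grad()
    loss = nn.functional.l1_loss(model(tokens), target_mels)
    loss.backward()
    opt.step()
</syntaxhighlight>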
=== 2. Rule-based TTS Systems ===
Rule-based TTS systems have continued to contribute to the advancement of AI-driven TTS since 2020. Although rule-based TTS still faces challenges in naturalness and emotional expressiveness compared to neural TTS systems, it remains valuable in specialized fields like medicine and law, where precise pronunciation of domain-specific terms is vital. These systems also remain relevant for languages with limited linguistic resources, making them suitable for low-resource languages. Additionally, some TTS systems adopt hybrid approaches, combining rule-based and neural network-based techniques to retain fine-grained customization while benefiting from neural naturalness.<ref>McTear, M. (2021). Rule-Based Dialogue Systems: Architecture, Methods, and Tools. In M. McTear, ''Conversational AI'' (pp. 43–70). Springer International Publishing. <nowiki>https://doi.org/10.1007/978-3-031-02176-3_2</nowiki></ref> They excel in applications requiring high control over speech output, such as accessibility solutions.

=== 3. Concatenative TTS ===
In the AI era, concatenative TTS systems have seen new developments aimed at improving their performance and adaptability. These include hybrid approaches that combine concatenative TTS with neural network-driven TTS for higher-quality, more natural synthesized speech. There is also a growing trend toward larger speech databases, enabling better unit selection for smoother and more natural synthesis across varied text contexts. Reduced latency has made concatenative TTS more practical for real-time communication, voice assistants, and automated voice responses. Personalized TTS, which leverages AI, allows users to customize synthesized voices to their preferences, with potential applications in education, entertainment, and assistive technologies. Moreover, concatenative TTS systems are extending their support to multiple languages and dialects, making them applicable to diverse global markets. They also find increasing use in specialized fields such as medicine, law, and science to ensure accurate pronunciation of domain-specific terms.<ref>Panda, S. P., & Nayak, A. K. (2015). A Rule-Based Concatenative Approach to Speech Synthesis in Indian Language Text-to-Speech Systems. ''Intelligent Computing, Communication and Devices'', Volume 309.</ref>
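To make the unit-selection idea concrete, here is a toy sketch of diphone concatenation: candidate units are picked from a database by a simple cost function and joined with a short cross-fade. The "database" holds synthetic tones standing in for recorded speech, and the single pitch-based cost is a deliberate simplification of real target and join costs.

<syntaxhighlight lang="python">
import numpy as np

SR = 16000  # sample rate (Hz)

# Toy unit database: each diphone maps to candidate recordings (sine-wave
# stand-ins for real speech snippets) with a stored pitch used for costing.
def make_unit(freq, dur=0.15):
    t = np.arange(int(SR * dur)) / SR
    return {"audio": 0.5 * np.sin(2 * np.pi * freq * t), "pitch": freq}

units = {
    "h-e": [make_unit(120), make_unit(180)],
    "e-l": [make_unit(130), make_unit(170)],
    "l-o": [make_unit(125), make_unit(160)],
}

def select(diphones, target_pitch=130.0):
    """Pick, per diphone, the candidate whose pitch best matches the target
    (a stand-in for real costs over pitch, energy, and duration)."""
    return [min(units[d], key=lambda u: abs(u["pitch"] - target_pitch))
            for d in diphones]

def concatenate(selected, xfade=0.01):
    """Join the chosen units with a linear cross-fade to smooth boundaries."""
    n = int(SR * xfade)
    ramp = np.linspace(0.0, 1.0, n)
    out = selected[0]["audio"].copy()
    for u in selected[1:]:
        nxt = u["audio"]
        out[-n:] = out[-n:] * (1 - ramp) + nxt[:n] * ramp
        out = np.concatenate([out, nxt[n:]])
    return out

wave = concatenate(select(["h-e", "e-l", "l-o"]))
print(f"{len(wave) / SR:.2f} s of audio")
</syntaxhighlight>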
=== 4. Prosody Modeling ===
Prosody modeling, which focuses on the melody and rhythm of speech, has been another significant area of advancement. Research in this area has refined the prosody and expressiveness of synthesized speech. Advanced models now incorporate prosody-aware training<ref>Raitio, T., Li, J., & Seshadri, S. (2022). Hierarchical prosody modeling and control in non-autoregressive parallel neural TTS. ''ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'', 7587–7591. <nowiki>https://ieeexplore.ieee.org/abstract/document/9746253/</nowiki></ref>, enabling TTS systems to convey emotions, nuances, and variations in pitch and rhythm more effectively, making the speech sound more natural and human-like.<ref>Vainio, M. (2001). ''Artificial neural network based prosody models for Finnish text-to-speech synthesis''. <nowiki>https://helda.helsinki.fi/bitstream/handle/10138/19873/artifici.pdf?sequence=2</nowiki></ref>
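Prosody-aware training presupposes extracting prosodic features, such as the fundamental frequency (F0) contour and frame energy, from reference speech. A minimal sketch using the librosa library follows; the synthetic gliding tone merely stands in for a recorded utterance with rising intonation.

<syntaxhighlight lang="python">
import numpy as np
import librosa

sr = 22050
# Synthetic "speech": a 2-second tone gliding from 120 Hz to 180 Hz.
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
freq = np.linspace(120, 180, t.size)
y = 0.4 * np.sin(2 * np.pi * np.cumsum(freq) / sr)

# F0 tracking with probabilistic YIN; unvoiced frames come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Frame-level RMS energy, another common prosodic feature.
energy = librosa.feature.rms(y=y)[0]

voiced_f0 = f0[~np.isnan(f0)]
print(f"mean F0: {voiced_f0.mean():.1f} Hz over {voiced_f0.size} voiced frames")
print(f"mean RMS energy: {energy.mean():.4f}")
</syntaxhighlight>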
=== 5. Zero-shot Learning ===
TTS systems have advanced significantly in recent years with deep learning approaches. These advances have motivated research that aims to synthesize speech in the voice of a target speaker using just a few seconds of that speaker's audio, an approach called zero-shot multi-speaker TTS. Innovations in zero-shot learning have also allowed TTS models to generate speech in languages and styles they were not explicitly trained on. These models leverage multilingual and cross-lingual capabilities, making TTS systems more versatile and adaptable to diverse linguistic contexts.<ref>Jiang, Z., Ren, Y., Ye, Z., Liu, J., Zhang, C., Yang, Q., Ji, S., Huang, R., Wang, C., Yin, X., Ma, Z., & Zhao, Z. (2023). ''Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias'' (arXiv:2306.03509). arXiv. <nowiki>http://arxiv.org/abs/2306.03509</nowiki></ref>
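Most zero-shot multi-speaker systems share one conditioning pattern: a speaker encoder turns a few seconds of reference audio into a fixed-size embedding, which is injected into the synthesis network so the output takes on that voice. A schematic PyTorch sketch follows; all module shapes and names are illustrative, not a specific published architecture.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Maps a reference mel-spectrogram to a fixed-size speaker embedding."""
    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, ref_mels):                 # (batch, frames, n_mels)
        _, h = self.rnn(ref_mels)
        return nn.functional.normalize(h[-1], dim=-1)  # (batch, dim)

class ZeroShotTTS(nn.Module):
    """Text encoder whose states are conditioned on the speaker embedding."""
    def __init__(self, vocab=64, dim=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.spk_enc = SpeakerEncoder(n_mels, dim)
        self.decoder = nn.Linear(2 * dim, n_mels)

    def forward(self, tokens, ref_mels):
        spk = self.spk_enc(ref_mels)             # one embedding per utterance
        txt = self.embed(tokens)                 # (batch, tokens, dim)
        # Broadcast the speaker identity across every text position.
        spk = spk.unsqueeze(1).expand(-1, txt.size(1), -1)
        return self.decoder(torch.cat([txt, spk], dim=-1))

model = ZeroShotTTS()
tokens = torch.randint(0, 64, (2, 30))   # dummy text for 2 utterances
ref = torch.randn(2, 200, 80)            # ~2 s of reference mels per utterance
print(model(tokens, ref).shape)          # torch.Size([2, 30, 80])
</syntaxhighlight>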
=== 6. Voice Cloning ===
Voice cloning models are trained to capture a specific speaker's pitch, tone, and speech characteristics, making the generated speech more similar to that speaker. This can be achieved using deep learning techniques such as Generative Adversarial Networks (GANs). This approach has driven advances in personalized and customizable voices throughout the 2020s: TTS systems can now mimic specific voices or allow users to tailor the characteristics of the generated speech, fostering more engaging and adaptive human-computer interactions.<ref>Pecora, A. E. (2023). ''Data driven: AI Voice Cloning'' [PhD Thesis, Politecnico di Torino]. <nowiki>https://webthesis.biblio.polito.it/27738/</nowiki></ref>
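The adversarial recipe mentioned above reduces to two networks trained against each other: a generator produces speech features while a discriminator learns to tell them apart from real recordings of the target speaker. Below is a bare-bones PyTorch sketch of the alternating training step, with toy multilayer perceptrons standing in for the convolutional audio networks real systems use.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

dim = 128  # toy feature size standing in for audio frames

# Toy generator and discriminator (real systems use convolutional audio nets).
G = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, dim))
D = nn.Sequential(nn.Linear(dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, dim)   # stand-in for the target speaker's frames
    z = torch.randn(32, 16)       # latent/conditioning input
    fake = G(z)

    # Discriminator step: push real frames toward 1, generated frames toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()
</syntaxhighlight>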
=== 7. Evolving Neural Architectures and [https://wiki.voice-technology.nl/index.php/Development_of_End-to-End_Models End-to-End] Approaches ===
In recent years, the field of AI-driven TTS has seen remarkable progress. Neural network architectures, particularly Transformers and their variants, have revolutionized TTS research by enhancing parallelization, enabling real-time, high-quality TTS. These architectures, featuring attention mechanisms and positional embeddings, have become standard for capturing context and improving synthesis quality. In this context, end-to-end approaches have made significant strides since 2020, streamlining the TTS process by using powerful neural networks to transform text directly into speech waveforms.<ref>Tu, T., Chen, Y.-J., Yeh, C., & Lee, H. (2019). ''End-to-end Text-to-speech for Low-resource Languages by Cross-Lingual Transfer Learning'' (arXiv:1904.06508). arXiv. <nowiki>http://arxiv.org/abs/1904.06508</nowiki></ref> This development has yielded voices that are more human-like and of higher quality, with enhanced customization and personalization capabilities. However, challenges such as data availability and fine-tuning for less common languages persist, yet end-to-end TTS continues to find application in real-time scenarios and personalized voice synthesis.<ref name=":2" />
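The two ingredients singled out above, attention and positional embeddings, can be shown in a few lines with PyTorch's built-in Transformer encoder. The sinusoidal positional encoding follows the standard formulation; the vocabulary size, dimensions, and final mel projection are illustrative stand-ins rather than a complete synthesis model.

<syntaxhighlight lang="python">
import math
import torch
import torch.nn as nn

def sinusoidal_positions(length, dim):
    """Standard sinusoidal positional encoding: gives the order-blind
    attention mechanism access to token positions."""
    pos = torch.arange(length).unsqueeze(1).float()
    div = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
    pe = torch.zeros(length, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

dim, n_mels = 256, 80
embed = nn.Embedding(64, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
to_mel = nn.Linear(dim, n_mels)  # toy projection standing in for a decoder

tokens = torch.randint(0, 64, (1, 40))             # dummy phoneme IDs
x = embed(tokens) + sinusoidal_positions(40, dim)  # inject position info
mels = to_mel(encoder(x))                          # all frames in parallel
print(mels.shape)                                  # torch.Size([1, 40, 80])
</syntaxhighlight>

Unlike an autoregressive recurrent model, the encoder above processes every position in the same forward pass, which is the parallelization property that makes real-time synthesis practical.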