Advancements in AI TTS (2020s)

Introduction

The rapid advancement of Artificial Intelligence (AI) has ushered in a profound transformation in the way we engage with the digital realm. At the forefront of this AI-driven revolution is Text-to-Speech (TTS) synthesis technology. TTS empowers computers, devices, and applications to communicate intelligently by simulating natural speech, converting text into audible sound. This entry delves into the significant strides made in AI TTS during the 2020s, with a particular focus on 2020 as a defining moment in the field's history.

Historical Context

The history of speech synthesis can be traced back to the 18th century. The first machine that attempted to produce human speech was created by Christian Kratzenstein in 1769. It used hollow tubes as resonators; by adjusting the resonators' lengths and shapes, the machine could produce the five vowels A, E, I, O and U. In 1791, Wolfgang von Kempelen created a device known as the "speaking machine". With a series of bellows, reeds and mechanical components, this complex machine modeled the human vocal tract and could produce both vowels and consonants. Then, in the 20th century, Bell Labs developed the Voder (Voice Operating Demonstrator), an electrical speech synthesizer in which source signals passed through bandpass electronic filters whose output was controlled by an operator.

As these examples show, the main idea behind speech generation in the early days of speech synthesis was to copy speech (Klatt, 1987). The approaches were either mechanical or electrical: the former built machines to mimic the movements of the vocal tract, while the latter filtered electrical signals in the way the vocal tract filters source signals. Note that these early machines were, by modern standards, not text-to-speech systems, since they did not generate speech from plain text. They are nevertheless milestones in the development of modern TTS systems, though their limitations are obvious: they could not produce a full range of vowels and consonants, let alone sentences. In a surviving demonstration of the Voder, it produces only the single sentence "She saw me", with stress placed on different words.

It was not until the late 20th century that full-fledged text-to-speech systems emerged. Most TTS systems at that time were rule-based, meaning that speech was generated using predefined linguistic and phonetic rules, such as letter-to-sound rules and stress and prosody rules. One of the earliest full text-to-speech systems, MITalk, created by researchers at the Massachusetts Institute of Technology (MIT) in the 1970s, was also rule-based. Rule-based systems performed well enough that users could hear the output speech as they typed sentences on a keyboard. However, the speech output still lacked naturalness and expressiveness. Moreover, compiling linguistic rules for a language requires a great deal of manual work, and because of the flexibility of language it is nearly impossible to cover all variations and exceptions.

From the late 20th century onwards, further TTS techniques were developed, such as parametric TTS, Hidden Markov Model (HMM) based synthesis, and concatenative TTS. These addressed the problems of naturalness, adaptability and flexibility to some extent and improved the performance of TTS systems, but each has its downsides. For example, despite the naturalness of concatenative TTS, it cannot easily generate new voices because it relies heavily on recorded speech data.

Recent TTS systems, by contrast, can generate natural, human-like speech thanks to their integration with artificial intelligence. They have improved in multiple ways, including training models and techniques, functionality, and output quality. The next section looks more closely at AI techniques in the field of TTS.

Key Innovations

The 2020s have marked a significant decade in the evolution of Text-to-Speech (TTS) technology driven by artificial intelligence (AI). This period has witnessed a host of groundbreaking innovations that have further refined and expanded the capabilities of TTS systems. Some key innovations in the 2020s include:

1. Transfer Learning and Pretrained Models

One of the pivotal advancements in the 2020s has been the widespread adoption of transfer learning in TTS. Transfer learning allows knowledge learned from one or more base models to be transferred to other tasks. For TTS, this can include knowledge learned from a general speech model and then applied to personalized voice synthesis. This helps expedite the training of personalized voice models as the model already possesses some general speech characteristics. Models like GPT-3 and BERT, initially designed for natural language processing, have been adapted for TTS tasks. This approach has led to more efficient training and improved performance in TTS systems, with less need for extensive domain-specific data.
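
The sketch below illustrates the core idea in PyTorch: a toy encoder-decoder acoustic model stands in for a large pretrained TTS model, its general-purpose encoder is frozen, and only the decoder is fine-tuned on a small amount of target-speaker data. The model, data, and hyperparameters are invented for illustration and do not come from any particular system.

```python
# Toy transfer-learning sketch: freeze a "pretrained" encoder, fine-tune the
# decoder on a small amount of target-speaker data. Everything here is a
# placeholder for a real pretrained multi-speaker TTS checkpoint.
import torch
import torch.nn as nn

class ToyTTS(nn.Module):
    def __init__(self, vocab=64, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_mels)   # hidden states -> mel frames

    def forward(self, tokens):
        states, _ = self.encoder(self.embed(tokens))
        return self.decoder(states)

model = ToyTTS()
# In practice: model.load_state_dict(torch.load("general_tts.pt"))  # hypothetical checkpoint

# Freeze the general-purpose text encoder; only the decoder adapts to the new voice.
for module in (model.embed, model.encoder):
    for p in module.parameters():
        p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# Dummy "personal" data: token IDs and target mel-spectrogram frames.
tokens = torch.randint(0, 64, (8, 20))
mel_target = torch.randn(8, 20, 80)

for step in range(100):
    loss = nn.functional.l1_loss(model(tokens), mel_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```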

2. Rule-based TTS systems

Rule-based TTS systems are a category of speech synthesis methods that generate synthetic speech from predefined linguistic and phonetic rules. These systems follow a fixed set of rules and instructions to convert input text into speech. While neural-network-based TTS excels at generating highly natural and expressive speech, it may require substantial data and computational resources. Rule-based TTS therefore still finds its place where such resources are limited, offering predictable output, multilingual support, and fine-grained customization.
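
A minimal illustration of the rule-based idea is sketched below: a handful of ordered letter-to-sound rules rewrite a word into a rough phoneme sequence. Real rule-based front ends use hundreds of context-sensitive rules plus stress and prosody rules; the rules and phoneme labels here are invented for demonstration.

```python
# Toy rule-based front end: ordered letter-to-sound rules turn text into a
# rough phoneme string. The rules and phoneme symbols below are invented;
# real systems use large, context-sensitive rule sets plus prosody rules.
RULES = [                      # checked in order, so digraphs come first
    ("sh", "SH"), ("ch", "CH"), ("th", "TH"), ("ee", "IY"),
    ("a", "AE"), ("e", "EH"), ("i", "IH"), ("o", "AA"), ("u", "AH"),
    ("s", "S"), ("h", "HH"), ("m", "M"), ("w", "W"), ("c", "K"),
]

def letters_to_phonemes(word: str) -> list:
    phones, i = [], 0
    word = word.lower()
    while i < len(word):
        for grapheme, phoneme in RULES:
            if word.startswith(grapheme, i):
                phones.append(phoneme)
                i += len(grapheme)
                break
        else:                  # no rule matched this letter: skip it
            i += 1
    return phones

print(letters_to_phonemes("she"))     # ['SH', 'EH']
print(letters_to_phonemes("cheese"))  # ['CH', 'IY', 'S', 'EH']
```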

3. Concatenative TTS

Concatenative TTS is a method that generates synthetic speech by combining small units of pre-recorded human speech. In recent years, its development has continued with a focus on improving naturalness and expressiveness. Advancements have led to more sophisticated unit selection algorithms, enabling smoother transitions between units. Customization options have also been enhanced, allowing users to adapt the concatenated units for personalized voices. Furthermore, Concatenative TTS remains relevant, especially for languages with limited linguistic resources and in scenarios where naturalness is crucial, contributing to the broader evolution of AI-driven TTS.
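
The following sketch shows the unit-selection principle behind concatenative TTS: each target phone is matched against a tiny, made-up unit database, and the unit minimizing a target cost plus a join cost with the previously chosen unit is selected. Production systems search the full sequence with dynamic programming (e.g. a Viterbi search) rather than the greedy pass used here.

```python
# Toy unit selection: pick, for each target phone, the database unit that
# minimizes target cost (pitch mismatch) + join cost (pitch discontinuity with
# the previous unit). The "database" and costs are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Unit:
    phone: str
    start_pitch_hz: float
    end_pitch_hz: float

DATABASE = [
    Unit("h", 118, 121), Unit("h", 132, 130),
    Unit("@", 120, 119), Unit("@", 140, 138),
    Unit("l", 119, 117), Unit("l", 135, 133),
    Unit("oU", 116, 110), Unit("oU", 131, 125),
]

def target_cost(unit: Unit, phone: str, wanted_pitch: float) -> float:
    return float("inf") if unit.phone != phone else abs(unit.start_pitch_hz - wanted_pitch)

def join_cost(prev: Optional[Unit], unit: Unit) -> float:
    return 0.0 if prev is None else abs(prev.end_pitch_hz - unit.start_pitch_hz)

def select_units(phones, wanted_pitch=120.0):
    chosen, prev = [], None
    for phone in phones:
        best = min(DATABASE,
                   key=lambda u: target_cost(u, phone, wanted_pitch) + join_cost(prev, u))
        chosen.append(best)
        prev = best
    return chosen

for unit in select_units(["h", "@", "l", "oU"]):   # roughly "hello"
    print(unit)
```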

4. Prosody Modeling:

Prosody modeling, which focuses on the melody and rhythm of speech, has been a significant area of advancement. Research in this area has concentrated on refining the prosody and expressiveness of synthesized speech. Advanced models now incorporate prosody-aware training, enabling TTS systems to convey emotions, nuances, and variations in pitch and rhythm more effectively, making the output sound more natural and human-like.
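
As a rough sketch of prosody-aware modelling, the module below predicts a per-phoneme duration and pitch value from encoder states, in the spirit of the variance adaptors used by models such as FastSpeech 2. Layer sizes and shapes are illustrative assumptions, not taken from any published configuration.

```python
# Minimal prosody predictor: per-phoneme log-duration and pitch values derived
# from text-encoder states, loosely following the variance-adaptor idea.
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.duration_head = nn.Linear(hidden, 1)   # predicted log-duration per phoneme
        self.pitch_head = nn.Linear(hidden, 1)      # predicted F0 per phoneme

    def forward(self, encoder_states):              # (batch, phonemes, hidden)
        x = self.conv(encoder_states.transpose(1, 2)).transpose(1, 2)
        return self.duration_head(x).squeeze(-1), self.pitch_head(x).squeeze(-1)

encoder_states = torch.randn(2, 15, 256)            # dummy text-encoder output
log_durations, pitch = ProsodyPredictor()(encoder_states)
print(log_durations.shape, pitch.shape)             # torch.Size([2, 15]) twice
```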

5. Zero-shot learning:

TTS systems have advanced significantly in recent years through deep learning. These advances have motivated research that aims to synthesize speech in the voice of a target speaker using only a few seconds of that speaker's audio, an approach called zero-shot multi-speaker TTS. Innovations in zero-shot learning have also allowed TTS models to generate speech in languages and styles they were not explicitly trained on. Such models leverage multilingual and cross-lingual capabilities, making TTS systems more versatile and adaptable to diverse linguistic contexts.
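
The toy code below sketches the zero-shot multi-speaker mechanism: a speaker encoder compresses a few seconds of reference audio into a fixed embedding, and the decoder is conditioned on that embedding at every text position. Both modules are minimal stand-ins; real systems use trained speaker encoders (d-vector- or ECAPA-style) and full acoustic models.

```python
# Toy zero-shot multi-speaker sketch: a speaker encoder turns reference audio
# into an embedding that conditions the decoder at every text position.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    def __init__(self, n_mels=80, emb=256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb, batch_first=True)

    def forward(self, ref_mel):                       # (batch, frames, n_mels)
        _, h = self.rnn(ref_mel)
        return F.normalize(h[-1], dim=-1)             # (batch, emb), unit length

class ConditionedDecoder(nn.Module):
    def __init__(self, hidden=256, emb=256, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(hidden + emb, n_mels)

    def forward(self, text_states, speaker_emb):
        # Broadcast the speaker embedding across every text position.
        spk = speaker_emb.unsqueeze(1).expand(-1, text_states.size(1), -1)
        return self.proj(torch.cat([text_states, spk], dim=-1))

ref_mel = torch.randn(1, 300, 80)       # ~3 s of reference audio (placeholder)
text_states = torch.randn(1, 40, 256)   # text-encoder output for the sentence
mel = ConditionedDecoder()(text_states, SpeakerEncoder()(ref_mel))
print(mel.shape)                         # torch.Size([1, 40, 80])
```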

6. Voice Cloning:

Voice cloning models are trained to capture a specific speaker's pitch, tone, and speech characteristics, making the generated speech sound more like that speaker. This can be achieved with deep learning techniques such as Generative Adversarial Networks (GANs). With this approach, the 2020s have seen further advances in creating personalized and customizable voices: TTS systems can now mimic specific voices or allow users to tailor the characteristics of the generated speech, fostering more engaging and adaptive human-computer interaction.
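
A minimal sketch of the adversarial training idea follows: a discriminator learns to distinguish real target-speaker mel-spectrogram frames from generated ones, and the generator is updated to fool it. The networks and the random stand-in data are purely illustrative; practical GAN-based voice cloning and GAN vocoders use far richer architectures and auxiliary losses.

```python
# Toy adversarial training loop: the discriminator separates real target-speaker
# mel frames from generated ones; the generator learns to fool it. Data and
# networks are random placeholders, not a working voice-cloning recipe.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 80))
discriminator = nn.Sequential(nn.Linear(80, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real_mel = torch.randn(16, 80)    # stand-in for real target-speaker mel frames
    latent = torch.randn(16, 128)     # stand-in for text/linguistic features

    # 1) Train the discriminator: real frames -> 1, generated frames -> 0.
    fake_mel = generator(latent).detach()
    d_loss = (bce(discriminator(real_mel), torch.ones(16, 1))
              + bce(discriminator(fake_mel), torch.zeros(16, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: produce frames the discriminator scores as real.
    g_loss = bce(discriminator(generator(latent)), torch.ones(16, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```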

7. Evolving Neural Architectures and End-to-End Approaches

In recent years, the field of AI-driven TTS has seen remarkable progress. Neural network architectures, particularly Transformers and their variants, have revolutionized TTS research by enhancing parallelization and enabling real-time, high-quality synthesis. These architectures, featuring attention mechanisms and positional embeddings, have become standard for capturing context and improving synthesis quality. Concurrently, end-to-end approaches have made significant strides since 2020, streamlining the TTS pipeline by using powerful neural networks to transform text directly into speech waveforms. This development has yielded more human-like, higher-quality voices with enhanced customization and personalization capabilities. Challenges such as data availability and fine-tuning for less common languages persist, yet end-to-end TTS continues to find application in real-time scenarios and personalized voice synthesis.
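
Below is a rough, self-contained sketch of a Transformer-style acoustic model: phoneme embeddings plus sinusoidal positional encodings pass through self-attention layers in parallel and are projected to mel-spectrogram frames. The duration modelling and neural vocoder of real end-to-end pipelines are omitted, and all sizes are illustrative.

```python
# Rough Transformer-style acoustic model: phoneme embeddings + sinusoidal
# positions pass through self-attention layers in parallel and are projected
# to mel frames. Length regulation and the vocoder of real systems are omitted.
import math
import torch
import torch.nn as nn

class TinyTransformerTTS(nn.Module):
    def __init__(self, vocab=80, d_model=256, n_mels=80):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_mel = nn.Linear(d_model, n_mels)

    def positional_encoding(self, length):
        pos = torch.arange(length).unsqueeze(1)
        div = torch.exp(torch.arange(0, self.d_model, 2) * (-math.log(10000.0) / self.d_model))
        pe = torch.zeros(length, self.d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, phonemes):                       # (batch, length) token IDs
        x = self.embed(phonemes) + self.positional_encoding(phonemes.size(1))
        return self.to_mel(self.encoder(x))            # (batch, length, n_mels)

mel = TinyTransformerTTS()(torch.randint(0, 80, (2, 30)))
print(mel.shape)                                       # torch.Size([2, 30, 80])
```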

Impact

The advancements in AI Text-to-Speech (TTS) technology in the 2020s have had profound impacts across various domains:

Business

Cost-Effective Marketing

AI TTS has allowed businesses to create cost-effective marketing materials by generating high-quality voiceovers for advertisements, promotional videos, and e-commerce product descriptions[1]. This has enabled smaller businesses to compete with larger counterparts.

Elevated Customer Engagement

AI TTS is being used in customer service and support chatbots, providing a more engaging and interactive experience for customers. This technology reduces the need for human operators in routine tasks and enables 24/7 support and quick responses to customer queries[2].

Multilingual Communication

Companies have expanded their global reach by using AI TTS to provide content in multiple languages, which is especially important for businesses with international customers and markets.

Enhanced Brand Recognition

Customized brand voices can help businesses stand out in a crowded market. With AI TTS, businesses can maintain a consistent brand voice across various touchpoints.

Education

Accessibility and Inclusion

TTS technology is being used in education and e-learning platforms to provide audio versions of text content. This benefits students with diverse learning styles and those with reading difficulties[3].

Language Learning

TTS technology remains an asset in language learning, helping learners improve pronunciation, fluency, and comprehension in various languages.

Personalized Learning

Educational institutions use AI TTS to provide personalized learning experiences, adapting content to individual student needs and preferences.

Teacher Assistance

TTS tools support educators in creating and delivering content, from generating voiceovers for instructional videos to offering speech feedback on assignments.

Society

Language Preservation

Cross-lingual text-to-speech (CTTS) has facilitated communication across language barriers and has played a role in preserving and revitalizing low-resource and endangered languages[4], promoting linguistic diversity and aiding in documentation and communication. This is invaluable in a globalized world, allowing for better understanding and cooperation.

Digital Inclusion

TTS technology promotes digital inclusion by making digital content accessible to individuals with low literacy skills and those with disabilities. Improved TTS technology has greatly enhanced accessibility for individuals with visual impairments[5]. It allows text-based information to be converted into speech, making digital content more accessible to a wider audience.

Entertainment and Content Creation

The entertainment industry has benefited from TTS technology through voice cloning and dubbing[6]. It has become easier to dub movies, create voiceovers, and even bring back historical voices for documentaries and other media productions. AI TTS continues to support voiceovers in video games, audiobooks, and other audio content, contributing to the entertainment experience.

Emergency Communication

During emergencies and crisis situations, AI TTS is used to disseminate critical information rapidly, ensuring public safety and information access.

Privacy and Ethical Concerns[7]

Deepfake Threat

The potential for AI TTS to be used in deepfake audio and video content has become a growing concern. This emphasizes the need for robust authentication and content verification mechanisms.

Data Privacy

The collection and storage of voice data for TTS models raise concerns about data privacy. Regulations and guidelines have been developed to address these issues.

Bias and Cultural Sensitivity

The challenge of mitigating bias and ensuring cultural sensitivity in TTS models remains a critical consideration in their development and deployment.

Future Research

High-quality speech synthesis

The most important goal of TTS is to synthesize high-quality speech. Speech quality is determined by many aspects of how speech is perceived, including intelligibility, naturalness, expressiveness, prosody, emotion, style, robustness, and controllability. While neural approaches have significantly improved the quality of synthesized speech, there is still considerable room for further improvement[8].

Affective speech synthesis

a. Emotional vocal bursts

Within the realm of emotional speech synthesis, a particularly intriguing area of exploration is emotional vocal bursts. In the now-famous promotional video for Google Assistant, the crowd erupted in cheers as the assistant assured the hairdresser that "taking one second" to look for an appointment was fine with a mere "Mm-hmm." This example vividly demonstrates the significance of vocal bursts in conveying emotional reactions. In fact, the synthesis of such vocal bursts was the focal point of the 2022 ExVo challenge. The most successful approach in that challenge, based on StyleGAN2, yielded promising outcomes, underscoring the considerable potential of this avenue of research[9].

b. Endowing the agent with an artificial personality

This area has been pursued for several decades, but the topic has recently been revived in the context of large language models, which can be adapted to emulate a specific personality. As personality has also been shown to manifest in speech signals, introducing it to conversational agents is an evident next step. In general, as exemplified by the tasks featured in the Computational Paralinguistics Challenge, there is a plethora of speaker states and traits that can be modeled from speech: deception, sincerity, nativeness, cognitive load, likability, interest, and others are all variables that could be added to the capabilities of affective agents[9].

c. Personalization

Personalization is expected to be another major aspect of future speech synthesis systems. Both the expression and the perception of emotion show individual effects, which are currently underexploited in the speech synthesis field. Future approaches could benefit greatly from adopting a similar mindset and adapting the production of emotional speech to a style that fits both the speaker and the listener. Such interpersonal adaptation is also seen in human conversations and is a necessary step toward fostering communication[9].

Specifically, child speech synthesis is one promising research area. Because of the difficulties in collecting and understanding children's speech data, synthesizing children's speech has always been challenging. In recent years, neural-network-based TTS systems have been gaining popularity; for instance, Hasija, Kadyan, and Guleria[10] used Tacotron for the development of children's synthetic speech. However, the lack of children's speech data persists. For future development, researchers need to define better acoustic features for children's speech, and pronunciation modelling is also required[11].

d. Interaction between AI and human

Interactions with users can be classified as "successful" or not, depending on the goals of the agent. Coupled with effective speech recognition capabilities, these interactions constitute a natural reward signal, which the agent can use to improve its speech synthesis and speech recognition capacities in a lifelong reinforcement learning setup, something that still remains an elusive goal for the field of affective computing. An overture to this exciting domain can already be found in intelligent dialogue generation, where reinforcement learning is used to adjust the linguistic style of an agent or to learn backchanneling responses. This paradigm is expected to be used more widely in TTS in the near future[9].

Better representation learning

Good representations of text and speech benefit neural TTS models and can improve the quality of synthesized speech. Initial explorations of text pre-training indicate that better text representations can indeed improve speech prosody. How to learn powerful representations for text/phoneme sequences, and especially for speech sequences, through unsupervised/self-supervised learning and pre-training is challenging and worth further exploration[8].
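
As one concrete example of reusing self-supervised speech representations, the sketch below extracts frame-level features from a pretrained wav2vec 2.0 checkpoint via the Hugging Face transformers library; such features could serve as inputs, targets, or conditioning for a TTS model. The checkpoint name is one commonly available public model, and the random waveform is a placeholder for real audio.

```python
# Extract self-supervised speech features with a pretrained wav2vec 2.0 model
# (requires: pip install torch transformers). The checkpoint is a public
# example; the random waveform stands in for real 16 kHz audio.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

name = "facebook/wav2vec2-base-960h"
extractor = Wav2Vec2FeatureExtractor.from_pretrained(name)
model = Wav2Vec2Model.from_pretrained(name)

waveform = torch.randn(16000)                 # one second of audio (placeholder)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    features = model(**inputs).last_hidden_state   # (1, frames, 768)
print(features.shape)   # frame-level representations usable as TTS targets or conditioning
```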

Efficient speech synthesis[8]

Efficient speech synthesis concerns reducing the cost of speech synthesis, including the cost of collecting and labeling training data and of training and serving TTS models.

Data-efficient TTS

Many low-resource languages lack training data. How to leverage unsupervised/semi-supervised learning and cross-lingual transfer learning to help low-resource languages is an interesting direction. For example, the ZeroSpeech Challenge is a good initiative for exploring techniques that learn from speech alone, without any text or linguistic knowledge. Besides, in voice adaptation a target speaker usually has little adaptation data, which is another application scenario for data-efficient TTS.

Parameter-efficient TTS

Today's neural TTS systems usually employ large neural networks with tens of millions of parameters to synthesize high-quality speech, which blocks their deployment on mobile and low-end devices with limited memory and power. Designing compact, lightweight models with smaller memory footprints, lower power consumption and lower latency is critical for these application scenarios.
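
One widely used memory-reduction technique is post-training quantization; the sketch below applies PyTorch dynamic int8 quantization to the Linear layers of a toy decoder. The model is a placeholder, and real parameter-efficient TTS work typically combines quantization with pruning, knowledge distillation, or smaller architectures.

```python
# Post-training dynamic quantization of a toy decoder's Linear layers to int8.
# The network is a placeholder; real systems combine this with pruning,
# distillation, or simply smaller architectures.
import torch
import torch.nn as nn

decoder = nn.Sequential(
    nn.Linear(256, 1024), nn.ReLU(),
    nn.Linear(1024, 1024), nn.ReLU(),
    nn.Linear(1024, 80),                 # mel-spectrogram output frame
)

quantized = torch.quantization.quantize_dynamic(decoder, {nn.Linear}, dtype=torch.qint8)

fp32_mb = sum(p.numel() for p in decoder.parameters()) * 4 / 2**20
print(f"fp32 weights: {fp32_mb:.2f} MB")        # ~5.3 MB before quantization

out = quantized(torch.randn(1, 256))            # int8 weights used internally
print(out.shape)                                # torch.Size([1, 80])
```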

Energy-efficient TTS

Training and serving a high-quality TTS model consume a lot of energy and emit a lot of carbon. Improving energy efficiency, e.g., by reducing the FLOPs in TTS training and inference, is important so that more people can benefit from advanced TTS techniques while reducing carbon emissions.

LLM Review

-

Team Members

Xinyi, Xueying, Jingsi, Yilan, Wansu

References

  1. Dale, R. (2022). The Voice Synthesis Business: 2022 Update. Cambridge University Press, 28(3). https://doi.org/10.1017/S1351324922000146
  2. Karrupusamy, P., Balas, V. E., & Shi, Y. (Eds.). (n.d.). Sustainable Communication Networks and Application: Proceedings of ICSCN 2021.
  3. Stodden, R. A., Roberts, K. D., Takahashi, K., Park, H. J., & Stodden, N. J. (2012). Use of Text-to-speech Software to Improve Reading Skills of High School Struggling Readers. International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2012). https://www.sciencedirect.com/
  4. Cooper, E. (2019). Text-to-Speech Synthesis Using Found Data for Low-Resource Languages. COLUMBIA UNIVERSITY.
  5. Edward, S., & Xavier, J. B. (2018). Text-To-Speech Device for Visually Impaired People. 119. https://www.acadpubl.eu/hub/
  6. Pecora, A. E. (2023). Data driven: AI voice cloning [Master's thesis]. Politecnico di Torino.
  7. Azmoodeh, A., & Dehghantanha, A. (2022). Deep Fake Detection, Deterrence and Response: Challenges and Opportunities. https://doi.org/10.48550/arXiv.2211.14667
  8. Tan, X., Qin, T., Soong, F., & Liu, T. Y. (2021). A survey on neural speech synthesis. arXiv preprint arXiv:2106.15561.
  9. Triantafyllopoulos, A., Schuller, B. W., İymen, G., Sezgin, M., He, X., Yang, Z., ... & Tao, J. (2023). An overview of affective speech synthesis and conversion in the deep learning era. Proceedings of the IEEE.
  10. Hasija, T., Kadyan, V., & Guleria, K. (2021). Out domain data augmentation on Punjabi children speech recognition using Tacotron. In Proceedings of the International Conference on Mathematics and Artificial Intelligence (ICMAI 2021), Chengdu, China, 19–21 March 2021.
  11. Terblanche, C., Harty, M., Pascoe, M., & Tucker, B. V. (2022). A Situational Analysis of Current Speech-Synthesis Systems for Child Voices: A Scoping Review of Qualitative and Quantitative Evidence. Applied Sciences, 12(11), 5623. https://doi.org/10.3390/app12115623