Advancements in AI TTS (2020s)


Introduction

Since the turn of the 21st century, AI Text-to-Speech (TTS) technology has evolved remarkably, transitioning from rule-driven synthesis to end-to-end approaches founded on deep learning. These shifts have brought dramatic improvements in the quality, naturalness, and adaptability of TTS systems.

In the 2020s, AI TTS underwent groundbreaking changes, introducing innovations such as voice cloning and zero-shot learning, ultimately achieving an unparalleled level of natural and expressive speech synthesis. These advancements laid the foundation for the modern TTS technology we have today.

AI TTS has made an indelible impact on various domains, including business, education, and society, by transcending language barriers and gaining the ability to convey emotions and individuality. However, its widespread adoption has concurrently raised valid concerns regarding privacy and bias.

Historical Context

The history of speech synthesis can be traced back to the 18th century. The first machine that attempted to produce human speech was created by Christian Kratzenstein in 1779. It used hollow tubes as acoustic resonators, and by adjusting the resonators’ lengths and shapes it could produce five vowels: A, E, I, O and U. In 1791, Wolfgang von Kempelen built a “speaking machine” that simulated the human vocal tract and could produce vowels and consonants using a series of bellows, reeds and mechanical components. Later, in the 20th century, Bell Labs developed the Voder (Voice Operating Demonstrator), an electrical speech synthesizer that generated sound by passing electrical signals through bandpass filters under the control of a human operator.

The central concept during the early development of speech synthesis was to imitate human speech[1]. Approaches were either mechanical or electrical: the former aimed to mimic vocal tract movements, while the latter filtered electrical signals to emulate the vocal tract's function. It is worth noting that these early machines were not text-to-speech systems, as they could not convert plain text into speech. Nevertheless, they represented significant milestones in the evolution of modern TTS systems, despite their limitations in producing a full range of sounds or sentences. A video filmed in 1939 (https://www.youtube.com/watch?v=0rAyrmm7vv0) shows the Voder producing the sentence ‘she saw me’ with varying word stress.

Full-fledged text-to-speech systems did not emerge until the late 20th century. Most TTS systems at that time were rule-based, meaning they generated speech using predefined linguistic and phonetic rules. One of the earliest full text-to-speech systems, MITalk[2], created by researchers at the Massachusetts Institute of Technology (MIT) in the 1970s, was one such rule-based system. It allowed users to hear speech as they typed. However, the speech output still lacked naturalness and expressiveness. Additionally, compiling linguistic rules for different languages required extensive manual work, and because languages are so variable, it was impossible to cover every variation and exception.

From the late 20th century onwards, new TTS techniques such as parametric TTS, Hidden Markov Models (HMMs), and concatenative TTS were developed to address naturalness, adaptability and flexibility issues, greatly improving TTS performance. However, these techniques had inherent limitations. For example, despite its naturalness, concatenative TTS cannot generate new voices because it relies heavily on recorded speech data.

Thanks to the advancement of artificial intelligence (AI), recent TTS systems can generate natural, human-like speech by adopting innovative training and learning models, as well as enhanced traditional techniques. In the upcoming section, we will delve deeper into AI techniques.

Key Innovations

The 2020s have marked a significant decade in the evolution of Text-to-Speech (TTS) technology driven by artificial intelligence (AI). This period has witnessed a host of groundbreaking innovations that have further refined and expanded the capabilities of TTS systems. Some key innovations in the 2020s include:

Transfer Learning and Pretrained Models

One of the pivotal advancements of the 2020s has been the widespread adoption of transfer learning in TTS. Transfer learning allows knowledge learned by one or more base models to be transferred to other tasks. In TTS, knowledge learned by a general speech model can be applied to personalized voice synthesis; this expedites the training of personalized voice models, since the model already possesses general speech characteristics[3]. Models like GPT-3 and BERT, initially designed for natural language processing, have been adapted for TTS tasks. This approach has led to more efficient training and improved performance in TTS systems, with less need for extensive domain-specific data.
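
To make this concrete, here is a minimal PyTorch-style sketch of transfer learning for voice personalization. A tiny stand-in model plays the role of a large pretrained acoustic model: the encoder (the "general speech" knowledge) is frozen and only the decoder is fine-tuned on a small amount of target-speaker data. All names, dimensions and data are illustrative assumptions, not any specific system's API.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Illustrative stand-in for a pretrained acoustic model: an encoder maps
# phoneme IDs to hidden states, a decoder predicts mel-spectrogram frames.
class TinyTTS(nn.Module):
    def __init__(self, vocab=64, hidden=128, mel_bins=80):
        super().__init__()
        self.encoder = nn.Sequential(nn.Embedding(vocab, hidden),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, mel_bins)

    def forward(self, phonemes):
        return self.decoder(self.encoder(phonemes))

model = TinyTTS()  # imagine these weights came from a large multi-speaker corpus

# Transfer learning: freeze the general encoder, adapt only the decoder
# on a few minutes of the target speaker's recordings.
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)

phonemes = torch.randint(0, 64, (8, 50))   # dummy batch: 8 phoneme sequences
mel_target = torch.randn(8, 50, 80)        # dummy target mel-spectrograms

for _ in range(10):                        # brief adaptation loop
    loss = nn.functional.l1_loss(model(phonemes), mel_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</syntaxhighlight>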

Rule-based TTS Systems

Rule-based TTS systems have continued to contribute to the advancement of AI-driven TTS since 2020. They find value in specialized fields like medicine and law, where precise pronunciation of domain-specific terms is vital. These systems remain relevant for languages with limited linguistic resources, making them suitable for low-resource languages. Additionally, some TTS systems adopt hybrid approaches, combining rule-based and neural network-based techniques to leverage customization while benefiting from naturalness. They excel in applications requiring high control over speech output, such as accessibility solutions. However, it's essential to recognize that rule-based TTS still faces challenges in terms of naturalness and emotional expressiveness compared to neural TTS systems, which have made significant strides in enhancing the quality and expressiveness of synthesized speech.
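
As a concrete (toy) illustration of the rule-based idea, the sketch below combines the two ingredients mentioned above: an exception lexicon that guarantees exact pronunciations for domain-specific terms, and a list of ordered letter-to-sound rules for everything else. The phone symbols, rules and lexicon entry are invented for illustration; production systems use hundreds of context-sensitive rules.

<syntaxhighlight lang="python">
# Toy rule-based grapheme-to-phoneme front end.
LEXICON = {"naloxone": "n @ l Q k s ou n"}   # hypothetical domain override
RULES = [                                    # ordered rewrite rules
    ("ch", "tS"), ("sh", "S"), ("th", "T"),
    ("a", "{"), ("e", "E"), ("i", "I"), ("o", "Q"), ("u", "V"),
]

def g2p(word: str) -> str:
    if word in LEXICON:                      # exceptions win over rules
        return LEXICON[word]
    phones, i = [], 0
    while i < len(word):
        for graph, phone in RULES:           # first matching rule applies
            if word.startswith(graph, i):
                phones.append(phone)
                i += len(graph)
                break
        else:                                # unmatched letters pass through
            phones.append(word[i])
            i += 1
    return " ".join(phones)

print(g2p("chest"))     # -> "tS E s t"
print(g2p("naloxone"))  # -> lexicon pronunciation, bypassing the rules
</syntaxhighlight>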

Concatenative TTS

In the AI era, Concatenative TTS systems have seen new developments aimed at improving their performance and adaptability. These developments include hybrid approaches that combine Concatenative TTS with neural network-driven TTS for higher quality and more natural synthesized speech. Additionally, there is a growing trend toward larger speech databases, enabling better voice selection for smoother and more natural speech synthesis across various text contexts. Real-time applications have seen improvements in reducing latency, making Concatenative TTS more practical for real-time communication, voice assistants, and automated voice responses. Personalized TTS, which leverages AI, allows users to customize synthesized voices to their preferences, with potential applications in education, entertainment, and assistive technologies. Moreover, Concatenative TTS systems are extending their support to multiple languages and dialects, making them applicable to diverse global markets. They also find increasing use in specialized fields such as medicine, law, and science to ensure accurate pronunciation of domain-specific terms.
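
At the core of concatenative TTS is a unit-selection search over the recorded database. The toy sketch below makes simplified assumptions: each target phone has a handful of candidate units, and the system picks the sequence minimizing a join cost (here, only the pitch discontinuity at each boundary). Real systems add target costs and use Viterbi search over far larger inventories.

<syntaxhighlight lang="python">
import itertools

database = {                   # phone -> candidate units as (unit id, pitch in Hz)
    "h":  [("h_1", 110), ("h_2", 130)],
    "ai": [("ai_1", 115), ("ai_2", 140)],
}

def join_cost(u, v):           # penalize pitch jumps at the seam
    return abs(u[1] - v[1])

def select(targets):           # exhaustive search; real systems use Viterbi
    best, best_cost = None, float("inf")
    for path in itertools.product(*(database[t] for t in targets)):
        cost = sum(join_cost(a, b) for a, b in zip(path, path[1:]))
        if cost < best_cost:
            best, best_cost = path, cost
    return [u[0] for u in best], best_cost

print(select(["h", "ai"]))     # -> (['h_1', 'ai_1'], 5): smoothest join wins
</syntaxhighlight>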

Prosody Modeling

Focusing on prosody, the melody and rhythm of speech, prosody modeling has been a significant area of advancement. Research in this area has steadily improved the prosody and expressiveness of synthesized speech. Advanced models now incorporate prosody-aware training, enabling TTS systems to convey emotions, nuances, and variations in pitch and rhythm more effectively, making the speech sound more natural and human-like.
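
One widely used form of prosody-aware training is a variance adaptor in the style of FastSpeech 2. The sketch below, with illustrative dimensions, shows the idea: small predictors estimate pitch and (log) duration from encoder states, and the predicted pitch is quantized, embedded, and added back so the decoder can render it.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class VarianceAdaptor(nn.Module):
    """Predict per-phoneme pitch and duration, feed pitch back as conditioning."""
    def __init__(self, hidden=128, pitch_bins=32):
        super().__init__()
        self.pitch_predictor = nn.Linear(hidden, 1)
        self.duration_predictor = nn.Linear(hidden, 1)
        self.pitch_embed = nn.Embedding(pitch_bins, hidden)
        self.bins = torch.linspace(0.0, 400.0, pitch_bins - 1)  # Hz bucket edges

    def forward(self, h):                                 # h: (batch, phonemes, hidden)
        pitch = self.pitch_predictor(h).squeeze(-1)       # Hz per phoneme
        log_dur = self.duration_predictor(h).squeeze(-1)  # log frames per phoneme
        h = h + self.pitch_embed(torch.bucketize(pitch, self.bins))
        return h, pitch, log_dur

h = torch.randn(2, 10, 128)                   # dummy encoder states
h_out, pitch, log_dur = VarianceAdaptor()(h)  # prosody-conditioned states
</syntaxhighlight>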

Zero-shot Learning

TTS systems have advanced significantly in recent years thanks to deep learning. These advances have motivated research aiming to synthesize speech in the voice of a target speaker from just a few seconds of that speaker's audio, an approach called zero-shot multi-speaker TTS. Innovations in zero-shot learning have also allowed TTS models to generate speech in languages and styles they were not explicitly trained on. These models leverage multilingual and cross-lingual capabilities, making TTS systems more versatile and adaptable to diverse linguistic contexts.
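
A minimal sketch of the usual zero-shot conditioning mechanism, with illustrative shapes: a reference encoder pools a few seconds of the target speaker's mel-spectrogram into a single speaker embedding, which then conditions the text encoder states; no retraining on the new voice is needed.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ReferenceEncoder(nn.Module):
    """Pool a reference mel-spectrogram into one fixed speaker embedding."""
    def __init__(self, mel_bins=80, dim=128):
        super().__init__()
        self.proj = nn.Linear(mel_bins, dim)

    def forward(self, ref_mel):                 # (batch, frames, mel_bins)
        return self.proj(ref_mel).mean(dim=1)   # average over time -> (batch, dim)

ref_mel = torch.randn(1, 300, 80)      # ~3 s of reference audio as mel frames
speaker = ReferenceEncoder()(ref_mel)  # (1, 128) speaker embedding

text_states = torch.randn(1, 40, 128)  # encoder output for the input text
conditioned = text_states + speaker.unsqueeze(1)  # condition every position
</syntaxhighlight>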

Voice Cloning

Voice cloning models are trained to capture a specific speaker's pitch, tone, and speech characteristics, making the generated speech closely resemble that speaker. This can be achieved with deep learning techniques such as Generative Adversarial Networks (GANs). Since the 2020s, this approach has driven advances in personalized and customizable voices: TTS systems can now mimic specific voices or let users tailor the characteristics of the generated speech, fostering more engaging and adaptive human-computer interactions.
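
The sketch below illustrates the adversarial objective behind GAN-based approaches, using deliberately tiny stand-in modules rather than a real vocoder: the discriminator learns to separate the target speaker's real audio from generated audio, and the generator learns to fool it, pulling its output toward the target voice.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(80, 256), nn.Tanh(), nn.Linear(256, 200))       # mel -> waveform snippet
D = nn.Sequential(nn.Linear(200, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))  # real vs. fake score
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

mel = torch.randn(16, 80)     # dummy conditioning mel frames
real = torch.randn(16, 200)   # dummy real target-speaker waveform snippets

# Discriminator step: label real audio 1, generated audio 0.
fake = G(mel).detach()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: make the discriminator label fakes as real.
g_loss = bce(D(G(mel)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
</syntaxhighlight>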

Evolving Neural Architectures and End-to-End Approaches

In recent years, the field of AI-driven TTS has seen remarkable progress. Neural network architectures, particularly Transformers and their variants, have revolutionized TTS research by enabling greater parallelization and thus real-time, high-quality synthesis. These architectures, featuring attention mechanisms and positional embeddings, have become standard for capturing context and improving synthesis quality. Concurrently, end-to-end approaches have made significant strides since 2020, streamlining the TTS process by using powerful neural networks to transform text directly into speech waveforms. This development has yielded voices that are more human-like and of higher quality, with enhanced customization and personalization capabilities. Challenges such as data availability and fine-tuning for less common languages persist, yet end-to-end TTS continues to find application in real-time scenarios and personalized voice synthesis.
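
The sketch below shows the two Transformer ingredients highlighted above, built from standard PyTorch modules with illustrative sizes: positional embeddings inject order information, and multi-head self-attention computes contextual representations of the entire phoneme sequence in parallel. A full end-to-end system would add a decoder and a neural vocoder on top.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

vocab, dim, max_len = 64, 128, 200
phoneme_embed = nn.Embedding(vocab, dim)        # phoneme identity
pos_embed = nn.Embedding(max_len, dim)          # phoneme position
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

tokens = torch.randint(0, vocab, (1, 40))       # one 40-phoneme sequence
x = phoneme_embed(tokens) + pos_embed(torch.arange(40))

# Every position attends to every other position simultaneously,
# which is what makes Transformer TTS highly parallelizable.
context, weights = attn(x, x, x)                # context: (1, 40, 128)
</syntaxhighlight>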

Impacts

The advancements in AI Text-to-Speech (TTS) technology in the 2020s have had profound impacts across various domains:

Business

Cost-Effective Marketing

AI TTS has allowed businesses to create cost-effective marketing materials by generating high-quality voiceovers for advertisements, promotional videos, and e-commerce product descriptions[4]. This has enabled smaller businesses to compete with larger counterparts.

Elevated Customer Engagement

AI TTS is being used in customer service and support chatbots, providing a more engaging and interactive experience for customers. This technology reduces the need for human operators in routine tasks and enables 24/7 support and quick responses to customer queries[5].

Multilingual Communication

Companies have expanded their global reach by using AI TTS to provide content in multiple languages, which is especially important for businesses with international customers and markets.

Enhanced Brand Recognition

Customized brand voices can help businesses stand out in a crowded market. With AI TTS, businesses can maintain a consistent brand voice across various touchpoints.

Education

Accessibility and Inclusion

TTS technology is being used in education and e-learning platforms to provide audio versions of text content. This benefits students with diverse learning styles and those with reading difficulties[6].

Language Learning

TTS technology remains an asset in language learning, helping learners improve pronunciation, fluency, and comprehension in various languages.

Personalized Learning

Educational institutions use AI TTS to provide personalized learning experiences, adapting content to individual student needs and preferences.

Teacher Assistance

TTS tools support educators in creating and delivering content, from generating voiceovers for instructional videos to offering speech feedback on assignments.

Society

Language Preservation

Cross-lingual text-to-speech (CTTS) has facilitated communication across language barriers and has played a role in preserving and revitalizing low-resource and endangered languages[7], promoting linguistic diversity and aiding documentation. This is invaluable in a globalized world, allowing for better understanding and cooperation.

Digital Inclusion

TTS technology promotes digital inclusion by making digital content accessible to individuals with low literacy skills and those with disabilities. In particular, improved TTS has greatly enhanced accessibility for individuals with visual impairments[8] by converting text-based information into speech, opening digital content to a wider audience.

Entertainment and Content Creation

The entertainment industry has benefited from TTS technology through voice cloning and dubbing[9]. It has become easier to dub movies, create voiceovers, and even bring back historical voices for documentaries and other media productions. AI TTS continues to support voiceovers in video games, audiobooks, and other audio content, contributing to the entertainment experience.

Emergency Communication

During emergencies and crisis situations, AI TTS is used to disseminate critical information rapidly, ensuring public safety and information access.

Privacy and Ethical Concerns[10]

Deepfake Threat

The potential for AI TTS to be used in deepfake audio and video content has become a growing concern. This emphasizes the need for robust authentication and content verification mechanisms.

Data Privacy

The collection and storage of voice data for TTS models raise concerns about data privacy. Regulations and guidelines have been developed to address these issues.

Bias and Cultural Sensitivity

The challenge of mitigating bias and ensuring cultural sensitivity in TTS models remains a critical consideration in their development and deployment.

Future Research

High-quality speech synthesis

The most important goal of TTS is to synthesize high-quality speech. Speech quality is determined by many aspects that influence perception, including intelligibility, naturalness, expressiveness, prosody, emotion, style, robustness, and controllability. While neural approaches have significantly improved the quality of synthesized speech, there remains considerable room for further improvement[11].

Affective speech synthesis

a. Emotional vocal bursts

Within the realm of emotional speech synthesis, a particularly intriguing area of exploration is emotional vocal bursts. In the now famous promotional video for Google Assistant, the crowd erupted in cheers as the assistant assured the hairdresser that “taking one second” to look for an appointment was fine with a mere “Mm-hmm.” This example vividly demonstrates the significance of vocal bursts in conveying emotional reactions. In fact, the synthesis of such vocal bursts was the focal point of the 2022 ExVo challenge. The most successful approach in that challenge, based on StyleGAN2, yielded promising outcomes, underscoring the considerable potential of this avenue of research[12].

b. Endowing the agent with an artificial personality

This area has been pursued for several decades, but the topic has recently been revived in the context of large language models, which can be adapted to emulate a specific personality. As personality has also been shown to manifest in speech signals, introducing it to conversational agents is an evident next step. In general, as exemplified by the tasks featured in the Computational Paralinguistics Challenge, there is a plethora of speaker states and traits that can be modeled from speech: deception, sincerity, nativeness, cognitive load, likability, interest, and others are all variables that could be added to the capabilities of affective agents[12].

c. Personalization

Personalization is expected to be another major aspect of future speech synthesis systems. Both the expression and the perception of emotion show individualistic effects, which are currently underexploited in the speech synthesis field. Future approaches can benefit greatly from adopting a similar mindset and adapting the production of emotional speech to a style that fits both the speaker and the listener. Such interpersonal adaptation is also seen in human conversations and is a necessary step to foster communication[12].

Child speech synthesis, specifically, is one promising research area. Because collecting and understanding children's speech is difficult, synthesizing it has always been challenging. In recent years, neural-network-based TTS systems have gained popularity; for instance, Hasija, Kadyan, and Guleria[13] used Tacotron to develop synthetic children's speech. However, the lack of data for children's speech persists. For future developments, researchers need to define better acoustic features for children's speech, and pronunciation modelling is required[14].

d. Interaction between AI and humans

Interactions with an agent can be classified as “successful” or not, depending on the goals of the agent. Coupled with effective speech recognition capabilities, such interactions constitute a natural reward signal that the agent can use to improve its speech synthesis and speech recognition capacities in a lifelong reinforcement learning setup, which remains an elusive goal for the field of affective computing. An overture to this exciting domain can already be found in intelligent dialog generation, where reinforcement learning is already used to adjust the linguistic style of an agent or to learn backchanneling responses. This paradigm is expected to be more widely used in TTS in the near future[12].

Better representation learning

Good representations of text and speech benefit neural TTS models and can improve the quality of synthesized speech. Initial explorations of text pre-training indicate that better text representations can indeed improve speech prosody. How to learn powerful representations for text/phoneme sequences, and especially for speech sequences, through unsupervised/self-supervised learning and pre-training is challenging and worth further exploration[11].

Efficient speech synthesis[11]

This area concerns reducing the cost of speech synthesis, including the cost of collecting and labeling training data and of training and serving TTS models.

Data-efficient TTS

Many low-resource languages lack training data. How to leverage unsupervised/semi-supervised learning and cross-lingual transfer learning to help low-resource languages is an interesting direction. For example, the ZeroSpeech Challenge is a good initiative for exploring techniques that learn from speech alone, without any text or linguistic knowledge. Besides, in voice adaptation a target speaker usually has little adaptation data, which is another application scenario for data-efficient TTS.

Parameter-efficient TTS

Today’s neural TTS systems usually employ large neural networks with tens of millions of parameters to synthesize high-quality speech, which blocks their deployment on mobile and other low-end devices with limited memory and power budgets. Designing compact, lightweight models with smaller memory footprints, lower power consumption and lower latency is critical for those application scenarios.
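
As one concrete example of such parameter reduction, the sketch below compares a standard 1-D convolution with a depthwise-separable replacement, a common building block in lightweight on-device architectures; the layer sizes are illustrative.

<syntaxhighlight lang="python">
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv1d(256, 256, kernel_size=9, padding=4)
separable = nn.Sequential(
    nn.Conv1d(256, 256, kernel_size=9, padding=4, groups=256),  # depthwise
    nn.Conv1d(256, 256, kernel_size=1),                         # pointwise
)

print(n_params(standard))   # 590,080 parameters
print(n_params(separable))  # 68,352 parameters (~8.6x smaller)
</syntaxhighlight>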

Energy-efficient TTS

Training and serving a high-quality TTS model consumes a lot of energy and emits a lot of carbon. Improving energy efficiency, e.g., by reducing the FLOPs in TTS training and inference, is important so that more people can benefit from advanced TTS techniques while carbon emissions are reduced to protect our environment.

LLM Review

-

Team Members

Yilan Wei

Xueying Liu

Xinyi Ma

Jingsi Huang

Wansu Zhu

References

  1. Klatt, D. H. (1987). Review of text-to-speech conversion for English. The Journal of the Acoustical Society of America, 82(3), 737–793. https://doi.org/10.1121/1.395275
  2. Allen, J., Hunnicutt, M. S., Klatt, D. H., Armstrong, R. C., & Pisoni, D. B. (1987). From text to speech: The MITalk system. Cambridge University Press.
  3. Huang, W.-C., Hayashi, T., Wu, Y.-C., Kameoka, H., & Toda, T. (2019). Voice Transformer Network: Sequence-to-sequence voice conversion using Transformer with text-to-speech pretraining (arXiv:1912.06813). arXiv. http://arxiv.org/abs/1912.06813
  4. Dale, R. (2022). The voice synthesis business: 2022 update. Natural Language Engineering, 28(3). https://doi.org/10.1017/S1351324922000146
  5. Karrupusamy, P., Balas, V. E., & Shi, Y. (Eds.). (n.d.). Sustainable communication networks and application: Proceedings of ICSCN 2021. Springer.
  6. Stodden, R. A., Roberts, K. D., Takahashi, K., Park, H. J., & Stodden, N. J. (2012). Use of text-to-speech software to improve reading skills of high school struggling readers. International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2012). https://www.sciencedirect.com/
  7. Cooper, E. (2019). Text-to-speech synthesis using found data for low-resource languages [Doctoral dissertation]. Columbia University.
  8. Edward, S., & Xavier, J. B. (2018). Text-to-speech device for visually impaired people. 119. https://www.acadpubl.eu/hub/
  9. Pecora, A. E. (2023). Data driven: AI voice cloning [Master's thesis]. Politecnico di Torino.
  10. Azmoodeh, A., & Dehghantanha, A. (2022). Deep fake detection, deterrence and response: Challenges and opportunities. https://doi.org/10.48550/arXiv.2211.14667
  11. Tan, X., Qin, T., Soong, F., & Liu, T.-Y. (2021). A survey on neural speech synthesis. arXiv preprint arXiv:2106.15561.
  12. Triantafyllopoulos, A., Schuller, B. W., İymen, G., Sezgin, M., He, X., Yang, Z., ... & Tao, J. (2023). An overview of affective speech synthesis and conversion in the deep learning era. Proceedings of the IEEE.
  13. Hasija, T., Kadyan, V., & Guleria, K. (2021). Out domain data augmentation on Punjabi children speech recognition using Tacotron. In Proceedings of the International Conference on Mathematics and Artificial Intelligence (ICMAI 2021), Chengdu, China, 19–21 March 2021.
  14. Terblanche, C., Harty, M., Pascoe, M., & Tucker, B. V. (2022). A situational analysis of current speech-synthesis systems for child voices: A scoping review of qualitative and quantitative evidence. Applied Sciences, 12(11), 5623. https://doi.org/10.3390/app12115623