Advancements in AI TTS (2020s)
== Introduction ==
Since the turn of the 21st century, AI Text-to-Speech (TTS) technology has evolved remarkably, transitioning from rule-driven synthesis to end-to-end approaches founded on deep learning. These shifts have brought substantial improvements in the quality, naturalness, and adaptability of TTS systems.

In the 2020s, AI TTS underwent groundbreaking changes, introducing innovations such as voice cloning and zero-shot learning and achieving an unprecedented level of natural and expressive speech synthesis. These advancements laid the foundation for the modern TTS technology we have today.

AI TTS has made a lasting impact on various domains, including business, education, and society, by transcending language barriers and gaining the ability to convey emotion and individuality. However, its widespread adoption has also raised valid concerns regarding privacy and bias.
== Historical Context ==
The history of speech synthesis can be traced back to the 18th century. The first machine that attempted to produce human speech was created by [[Mechanical synthesis|Christian Kratzenstein]] in 1769. This machine used hollow tubes as resonators; by adjusting the resonators’ lengths and shapes, it could produce the five vowels A, E, I, O and U. In 1791, [[Wolfgang von Kempelen's Speaking Machine (1769)|Wolfgang von Kempelen]] created a “speaking machine” that simulated the human vocal tract and could produce vowels and consonants using a series of bellows, reeds and mechanical components. Later, in the 20th century, Bell Labs developed the [[Voder]] (Voice Operating Demonstrator), an electrical speech synthesizer that employed bandpass electronic filters and operator control for sound generation.

The central concept during the early development of speech synthesis was to imitate human speech<ref name=":2">Klatt, D. H. (1987). Review of text-to-speech conversion for English. ''The Journal of the Acoustical Society of America'', ''82''(3), 737–793. <nowiki>https://doi.org/10.1121/1.395275</nowiki></ref>. The approaches were either mechanical or electrical: the former aimed to mimic vocal tract movements, while the latter filtered electrical signals to emulate the vocal tract's function. It is worth noting that these early machines were not text-to-speech systems, as they did not convert plain text into speech. Nevertheless, they represented significant milestones in the evolution of modern TTS systems, despite their limitations in producing a full range of sounds or sentences. In [https://www.youtube.com/watch?v=0rAyrmm7vv0 a video filmed in 1939], the Voder is shown producing the sentence ‘she saw me’ with varying word stress.

Full-fledged text-to-speech systems did not emerge until the late 20th century. Most TTS systems at that time were rule-based, meaning they generated speech using predefined linguistic and phonetic rules. One of the earliest full text-to-speech systems, MITalk<ref>Allen, J., Hunnicutt, M. S., Klatt, D. H., Armstrong, R. C., & Pisoni, D. B. (1987). ''From text to speech: The MITalk system''. Cambridge University Press.</ref>, created by researchers at the Massachusetts Institute of Technology (MIT), was one such rule-based system. It allowed users to hear speech as they typed. However, the speech output still lacked naturalness and expressiveness. Additionally, compiling linguistic rules for different languages requires a great deal of manual work, and because of the flexibility of language it is impossible to cover all variations and exceptions.

From the late 20th century onwards, new TTS techniques such as Parametric TTS, [[Hidden Markov Models in Speech Synthesis|Hidden Markov Models (HMMs)]], and Concatenative TTS were developed to address naturalness, adaptability and flexibility issues, greatly improving the performance of TTS. However, these techniques have inherent limitations. For example, despite the naturalness of concatenative TTS, it cannot generate new voices because it relies heavily on recorded speech data.

Thanks to the advancement of artificial intelligence (AI), recent TTS systems can generate natural, human-like speech by adopting innovative training and learning models as well as enhanced traditional techniques. In the following sections, we delve deeper into these AI techniques.
== Key Innovations ==
The 2020s have marked a significant decade in the evolution of Text-to-Speech (TTS) technology driven by artificial intelligence (AI). This period has witnessed a host of groundbreaking innovations that have further refined and expanded the capabilities of TTS systems. Some key innovations in the 2020s include:
=== 1. Transfer Learning and Pretrained Models ===
One of the pivotal advancements in the 2020s has been the widespread adoption of transfer learning in TTS. Transfer learning allows knowledge learned by one or more base models to be transferred to other tasks.<ref>Fang, W., Chung, Y.-A., & Glass, J. (2019). ''Towards Transfer Learning for End-to-End Speech Synthesis from Deep Pre-Trained Language Models'' (arXiv:1906.07307). arXiv. <nowiki>http://arxiv.org/abs/1906.07307</nowiki></ref> For TTS, this can mean taking knowledge learned by a general speech model and applying it to personalized voice synthesis. This expedites the training of personalized voice models, as the model already possesses general speech characteristics.<ref>Huang, W.-C., Hayashi, T., Wu, Y.-C., Kameoka, H., & Toda, T. (2019). ''Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining'' (arXiv:1912.06813). arXiv. <nowiki>http://arxiv.org/abs/1912.06813</nowiki></ref> Models like GPT-3 and BERT, initially designed for natural language processing, have also been adapted for TTS tasks.<ref>Dida, H. A., Chakravarthy, D. S. K., & Rabbi, F. (2023). ChatGPT and Big Data: Enhancing Text-to-Speech Conversion. ''Mesopotamian Journal of Big Data'', ''2023'', 33–37.</ref> This approach has led to more efficient training and improved performance in TTS systems, with less need for extensive domain-specific data.
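To make the idea concrete, the following sketch fine-tunes a small pretrained acoustic model on a handful of utterances from a new speaker, freezing the general encoder and updating only a speaker embedding and the output decoder. It is a minimal, hypothetical illustration in PyTorch with toy dimensions and random stand-in data; the checkpoint name and model structure are placeholders rather than any published system.
<syntaxhighlight lang="python">
# Illustrative transfer-learning sketch for voice personalization (toy scale).
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    def __init__(self, vocab=64, hidden=128, n_mels=80, n_speakers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)            # general phoneme encoder
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.speaker = nn.Embedding(n_speakers, hidden)     # speaker identity
        self.decoder = nn.Linear(hidden, n_mels)            # predicts mel-spectrogram frames

    def forward(self, phoneme_ids, speaker_id):
        h, _ = self.gru(self.embed(phoneme_ids))
        h = h + self.speaker(speaker_id).unsqueeze(1)       # condition on the speaker
        return self.decoder(h)

model = TinyAcousticModel()
# In practice the weights would come from a model pretrained on many speakers, e.g.:
# model.load_state_dict(torch.load("pretrained_multispeaker.pt"))  # hypothetical checkpoint

# Freeze the general (pretrained) layers; adapt only the speaker embedding and decoder.
for module in (model.embed, model.gru):
    for p in module.parameters():
        p.requires_grad = False
optimizer = torch.optim.Adam(
    list(model.speaker.parameters()) + list(model.decoder.parameters()), lr=1e-4)

phonemes = torch.randint(0, 64, (2, 20))    # stand-in for a few adaptation utterances
target_mels = torch.randn(2, 20, 80)        # stand-in acoustic targets
speaker_id = torch.tensor([3, 3])           # index reserved for the new voice

for _ in range(10):                          # short adaptation loop
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(model(phonemes, speaker_id), target_mels)
    loss.backward()
    optimizer.step()
print(f"adaptation loss: {loss.item():.3f}")
</syntaxhighlight>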
=== 2. Rule-based TTS Systems ===
Rule-based TTS systems have continued to contribute to the advancement of AI-driven TTS since 2020. While rule-based TTS still faces challenges in naturalness and emotional expressiveness compared to neural TTS systems, it remains valuable in specialized fields like medicine and law, where precise pronunciation of domain-specific terms is vital. These systems also remain relevant for languages with limited linguistic resources, making them suitable for low-resource settings. Additionally, some TTS systems adopt hybrid approaches, combining rule-based and neural network-based techniques to retain fine-grained control while benefiting from neural naturalness.<ref>McTear, M. (2021). Rule-Based Dialogue Systems: Architecture, Methods, and Tools. In M. McTear, ''Conversational AI'' (pp. 43–70). Springer International Publishing. <nowiki>https://doi.org/10.1007/978-3-031-02176-3_2</nowiki></ref> They excel in applications requiring tight control over the speech output, such as accessibility solutions.
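The appeal of rule-driven pronunciation control can be illustrated with a toy grapheme-to-phoneme converter that consults a hand-curated exception lexicon (for example, medical terms) before falling back to ordered letter-to-sound rules. The lexicon entries, phone symbols and rules below are invented for illustration and do not come from any real system.
<syntaxhighlight lang="python">
# Toy rule-based grapheme-to-phoneme (G2P) sketch: lexicon lookup first,
# then ordered letter-to-sound rules. Entries and rules are illustrative only.
DOMAIN_LEXICON = {
    "dyspnea": "d ih s p n iy ah",       # hypothetical medical pronunciation entry
    "naloxone": "n ah l aa k s ow n",
}

RULES = [            # (grapheme, phones), longer patterns listed first
    ("tion", "sh ah n"), ("ph", "f"), ("ch", "ch"),
    ("a", "ae"), ("e", "eh"), ("i", "ih"), ("o", "aa"), ("u", "ah"),
    ("b", "b"), ("c", "k"), ("d", "d"), ("f", "f"), ("g", "g"), ("h", "hh"),
    ("j", "jh"), ("k", "k"), ("l", "l"), ("m", "m"), ("n", "n"), ("p", "p"),
    ("q", "k"), ("r", "r"), ("s", "s"), ("t", "t"), ("v", "v"), ("w", "w"),
    ("x", "k s"), ("y", "y"), ("z", "z"),
]

def g2p(word: str) -> str:
    word = word.lower()
    if word in DOMAIN_LEXICON:                 # precise, hand-curated pronunciation
        return DOMAIN_LEXICON[word]
    phones, i = [], 0
    while i < len(word):
        for grapheme, phoneme in RULES:        # first matching rule wins
            if word.startswith(grapheme, i):
                phones.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1                             # skip characters with no rule
    return " ".join(phones)

print(g2p("dyspnea"))      # lexicon hit
print(g2p("information"))  # rule-based fallback
</syntaxhighlight>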
=== 3. Concatenative TTS ===
In the AI era, Concatenative TTS systems have seen new developments aimed at improving their performance and adaptability. These developments include hybrid approaches that combine Concatenative TTS with neural network-driven TTS for higher-quality and more natural synthesized speech. Additionally, there is a growing trend toward larger speech databases, enabling better voice selection for smoother and more natural speech synthesis across various text contexts. Real-time applications have seen improvements in latency, making Concatenative TTS more practical for real-time communication, voice assistants, and automated voice responses. Personalized TTS, which leverages AI, allows users to customize synthesized voices to their preferences, with potential applications in education, entertainment, and assistive technologies. Moreover, Concatenative TTS systems are extending their support to multiple languages and dialects, making them applicable to diverse global markets. They also find increasing use in specialized fields such as medicine, law, and science to ensure accurate pronunciation of domain-specific terms.<ref>Panda, S. P., & Nayak, A. K. (2015). A Rule-Based Concatenative Approach to Speech Synthesis in Indian Language Text-to-Speech Systems. ''Intelligent Computing, Communication and Devices'', Volume 309.</ref>
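At the heart of concatenative synthesis is unit selection: picking recorded units from a database so that each unit matches the target and joins smoothly with its neighbour. The sketch below illustrates the idea with a made-up database of numpy "units", a simple target cost (pitch mismatch) plus join cost (waveform discontinuity), and greedy selection; real systems search far larger inventories with dynamic programming.
<syntaxhighlight lang="python">
# Toy unit-selection sketch for concatenative TTS. The "database" and costs are
# illustrative; real systems use large labelled corpora and dynamic programming.
import numpy as np

rng = np.random.default_rng(0)

# Database: for each diphone label, several candidate recordings (1-D waveforms)
# plus a stored pitch value used for the target cost.
DATABASE = {
    "h-e": [{"wave": rng.standard_normal(200), "pitch": p} for p in (110, 130, 150)],
    "e-l": [{"wave": rng.standard_normal(200), "pitch": p} for p in (115, 140)],
    "l-o": [{"wave": rng.standard_normal(200), "pitch": p} for p in (120, 160)],
}

def target_cost(unit, desired_pitch):
    return abs(unit["pitch"] - desired_pitch)

def join_cost(prev_unit, unit):
    # Penalize discontinuity between the end of one unit and the start of the next.
    if prev_unit is None:
        return 0.0
    return float(abs(prev_unit["wave"][-1] - unit["wave"][0]))

def synthesize(diphone_sequence, desired_pitch=125):
    chosen, prev = [], None
    for label in diphone_sequence:                        # greedy selection per unit
        candidates = DATABASE[label]
        best = min(candidates,
                   key=lambda u: target_cost(u, desired_pitch) + join_cost(prev, u))
        chosen.append(best)
        prev = best
    return np.concatenate([u["wave"] for u in chosen])    # simple concatenation

audio = synthesize(["h-e", "e-l", "l-o"])
print(audio.shape)   # (600,) samples of toy "speech"
</syntaxhighlight>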
=== 4. Prosody Modeling ===
Prosody modeling, which focuses on the melody and rhythm of speech, has been a significant area of advancement. Research in this area has refined the prosody and expressiveness of synthesized speech. Advanced models now incorporate prosody-aware training<ref>Raitio, T., Li, J., & Seshadri, S. (2022). Hierarchical prosody modeling and control in non-autoregressive parallel neural TTS. ''ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'', 7587–7591. <nowiki>https://ieeexplore.ieee.org/abstract/document/9746253/</nowiki></ref>, enabling TTS systems to convey emotions, nuances, and variations in pitch and rhythm more effectively, making the speech sound more natural and human-like.<ref>Vainio, M. (2001). ''Artificial neural network based prosody models for Finnish text-to-speech synthesis''. <nowiki>https://helda.helsinki.fi/bitstream/handle/10138/19873/artifici.pdf?sequence=2</nowiki></ref>
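One common way prosody-aware training is realized (for example, in FastSpeech-style variance adaptors) is to predict pitch and duration for each phoneme from the encoder states and feed the predictions back into the hidden sequence. The sketch below shows that pattern with toy PyTorch modules; the layer sizes, targets and loss are placeholders, not a specific published model.
<syntaxhighlight lang="python">
# Illustrative prosody ("variance") adaptor sketch: predict per-phoneme pitch and
# duration from encoder states and re-inject the information. Toy sizes, PyTorch.
import torch
import torch.nn as nn

class ProsodyAdaptor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.pitch_predictor = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                             nn.Linear(hidden, 1))
        self.duration_predictor = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                                nn.Linear(hidden, 1))
        self.pitch_embed = nn.Linear(1, hidden)   # re-inject predicted pitch

    def forward(self, encoder_states):
        pitch = self.pitch_predictor(encoder_states)              # (B, T, 1)
        log_duration = self.duration_predictor(encoder_states)    # (B, T, 1)
        conditioned = encoder_states + self.pitch_embed(pitch)    # prosody-aware states
        return conditioned, pitch, log_duration

adaptor = ProsodyAdaptor()
states = torch.randn(2, 20, 128)                 # hidden states for 20 phonemes
conditioned, pitch, log_dur = adaptor(states)

# During training, pitch/duration values extracted from recordings supervise the
# predictors, e.g. with an MSE loss (the targets below are random stand-ins):
pitch_target = torch.randn(2, 20, 1)
loss = nn.functional.mse_loss(pitch, pitch_target)
print(conditioned.shape, loss.item())
</syntaxhighlight>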
=== 5. Zero-shot Learning ===
TTS systems have advanced significantly in recent years with deep learning approaches. These advances have motivated research that aims to synthesize speech in the voice of a target speaker using just a few seconds of that speaker's speech. This approach is called zero-shot multi-speaker TTS. Innovations in zero-shot learning have also allowed TTS models to generate speech in languages and styles they were not explicitly trained on. These models leverage multilingual and cross-lingual capabilities, enabling TTS systems to be more versatile and adaptable to diverse linguistic contexts.<ref>Jiang, Z., Ren, Y., Ye, Z., Liu, J., Zhang, C., Yang, Q., Ji, S., Huang, R., Wang, C., Yin, X., Ma, Z., & Zhao, Z. (2023). ''Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias'' (arXiv:2306.03509). arXiv. <nowiki>http://arxiv.org/abs/2306.03509</nowiki></ref>
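In a typical zero-shot multi-speaker setup, a short reference clip is mapped by a speaker encoder to a fixed-size embedding, and the synthesizer is conditioned on that vector instead of a trained speaker ID. The sketch below shows only this conditioning pattern; the untrained toy modules stand in for a real speaker-verification encoder and acoustic model.
<syntaxhighlight lang="python">
# Zero-shot conditioning sketch: derive a speaker embedding from a few seconds of
# reference audio features and feed it to the synthesizer. Untrained toy modules.
import torch
import torch.nn as nn

class ToySpeakerEncoder(nn.Module):
    """Stand-in for a speaker-verification style encoder (e.g., a d-vector model)."""
    def __init__(self, n_mels=80, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, ref_mels):                 # (B, frames, n_mels)
        out, _ = self.rnn(ref_mels)
        emb = out.mean(dim=1)                    # average over time -> (B, emb_dim)
        return nn.functional.normalize(emb, dim=-1)

class ToySynthesizer(nn.Module):
    def __init__(self, hidden=128, emb_dim=128, n_mels=80):
        super().__init__()
        self.proj = nn.Linear(hidden + emb_dim, n_mels)

    def forward(self, text_states, speaker_emb):   # (B, T, hidden), (B, emb_dim)
        spk = speaker_emb.unsqueeze(1).expand(-1, text_states.size(1), -1)
        return self.proj(torch.cat([text_states, spk], dim=-1))

encoder, synth = ToySpeakerEncoder(), ToySynthesizer()
reference = torch.randn(1, 300, 80)      # ~3 s of mel frames from an unseen speaker
speaker_emb = encoder(reference)
text_states = torch.randn(1, 40, 128)    # encoded phoneme sequence
mels = synth(text_states, speaker_emb)
print(mels.shape)                        # (1, 40, 80)
</syntaxhighlight>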
=== 6. Voice Cloning ===
Voice cloning models are trained to capture a specific speaker's pitch, tone, and speech characteristics, making the generated speech sound like that speaker. This can be achieved using deep learning techniques such as Generative Adversarial Networks (GANs). With this approach, the 2020s have seen further advances in creating personalized and customizable voices: TTS systems can now mimic specific voices or allow users to tailor the characteristics of the generated speech, fostering more engaging and adaptive human-computer interactions.<ref>Pecora, A. E. (2023). ''Data driven: AI Voice Cloning'' [PhD Thesis, Politecnico di Torino]. <nowiki>https://webthesis.biblio.polito.it/27738/</nowiki></ref>
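Adversarial training is one ingredient often used to sharpen cloned voices: a discriminator learns to distinguish real recordings of the target speaker from generated output, and the generator is trained to fool it. Below is a heavily simplified single training step in PyTorch; the networks operate on toy mel-spectrogram frames, and every dimension and tensor is a placeholder.
<syntaxhighlight lang="python">
# Minimal GAN-style training step sketch (toy, on mel-spectrogram frames).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 80))        # generator
D = nn.Sequential(nn.Linear(80, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_frames = torch.randn(16, 80)       # frames from recordings of the target speaker
conditioning = torch.randn(16, 128)     # e.g. text/prosody features for those frames

# 1) Discriminator step: real -> 1, generated -> 0.
fake_frames = G(conditioning).detach()
d_loss = bce(D(real_frames), torch.ones(16, 1)) + bce(D(fake_frames), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Generator step: try to make the discriminator output 1 for generated frames.
fake_frames = G(conditioning)
g_loss = bce(D(fake_frames), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
</syntaxhighlight>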
=== 7. Evolving Neural Architectures and [https://wiki.voice-technology.nl/index.php/Development_of_End-to-End_Models End-to-End] Approaches ===
In recent years, the field of AI-driven TTS has seen remarkable progress. Neural network architectures, particularly Transformers and their variants, have revolutionized TTS research by improving parallelization and enabling real-time, high-quality synthesis. These architectures, featuring attention mechanisms and positional embeddings, have become standard for capturing context and improving synthesis quality. In this context, end-to-end approaches have made significant strides since 2020, streamlining the TTS pipeline by using powerful neural networks to transform text directly into speech waveforms.<ref>Tu, T., Chen, Y.-J., Yeh, C., & Lee, H. (2019). ''End-to-end Text-to-speech for Low-resource Languages by Cross-Lingual Transfer Learning'' (arXiv:1904.06508). arXiv. <nowiki>http://arxiv.org/abs/1904.06508</nowiki></ref> This development has yielded voices that are more human-like and of higher quality, with enhanced customization and personalization capabilities. Challenges such as data availability and fine-tuning for less common languages persist, yet end-to-end TTS continues to find application in real-time scenarios and personalized voice synthesis.<ref name=":2" />
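The sketch below shows the overall shape of a Transformer-based acoustic model: phoneme IDs pass through an encoder with self-attention and positional information, a decoder produces mel-spectrogram frames, and a separate neural vocoder (not shown) would turn those frames into a waveform. It is a toy forward pass built on PyTorch's generic <code>nn.Transformer</code>, not a reproduction of any particular published architecture.
<syntaxhighlight lang="python">
# Toy Transformer text-to-mel forward pass (illustrative shapes only).
import math
import torch
import torch.nn as nn

class ToyTransformerTTS(nn.Module):
    def __init__(self, vocab=64, d_model=128, n_mels=80):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Embedding(vocab, d_model)
        self.mel_in = nn.Linear(n_mels, d_model)       # previous mel frames as decoder input
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.mel_out = nn.Linear(d_model, n_mels)

    @staticmethod
    def positional(length, d_model):
        # Standard sinusoidal positional encoding.
        pos = torch.arange(length).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(length, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        return pe

    def forward(self, phoneme_ids, prev_mels):
        src = self.embed(phoneme_ids) + self.positional(phoneme_ids.size(1), self.d_model)
        tgt = self.mel_in(prev_mels) + self.positional(prev_mels.size(1), self.d_model)
        hidden = self.transformer(src, tgt)
        return self.mel_out(hidden)                    # predicted mel frames

model = ToyTransformerTTS()
phonemes = torch.randint(0, 64, (1, 30))
prev_mels = torch.zeros(1, 100, 80)                    # teacher-forced / shifted frames
mels = model(phonemes, prev_mels)
print(mels.shape)                                      # (1, 100, 80) -> vocoder -> waveform
</syntaxhighlight>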
== Impacts ==
The advancements in AI Text-to-Speech (TTS) technology in the 2020s have had profound impacts across various domains:
=== 1. Business ===
'''Cost-Effective Marketing'''<br>AI TTS has allowed businesses to create cost-effective marketing materials by generating high-quality [[wikipedia:Voice-over|voiceovers]] for advertisements, promotional videos, and e-commerce product descriptions<ref>Dale, R. (2022). The Voice Synthesis Business: 2022 Update. ''Cambridge University Press'', ''28''(3). <nowiki>https://doi.org/10.1017/S1351324922000146</nowiki></ref>. This has enabled smaller businesses to compete with larger counterparts.
'''Elevated Customer Engagement'''<br>AI TTS is being used in customer service and support chatbots, providing a more engaging and interactive experience for customers. This technology reduces the need for human operators in routine tasks and enables 24/7 support and quick responses to customer queries<ref>Karrupusamy, P., Balas, V. E., & Shi, Y. (n.d.). ''Sustainable Communication Networks and Application: Proceedings of ICSCN 2021''.</ref>.

'''Multilingual Communication'''<br>Companies have expanded their global reach by using AI TTS to provide content in multiple languages, which is especially important for businesses with international customers and markets.

'''Enhanced Brand Recognition'''<br>Customized brand voices can help businesses stand out in a crowded market. With AI TTS, businesses can maintain a consistent brand voice across various touchpoints.
=== 2. Education ===
'''Accessibility and Inclusion'''<br>TTS technology is being used in education and e-learning platforms to provide audio versions of text content. This benefits students with diverse learning styles and those with reading difficulties<ref>Stodden, R. A., Roberts, K. D., Takahashi, K., Park, H. J., & Stodden, N. J. (2012). ''Use of Text-to-speech Software to Improve Reading Skills of High School Struggling Readers''. International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2012). <nowiki>https://www.sciencedirect.com/</nowiki></ref>.

'''Language Learning'''<br>TTS technology remains an asset in language learning, helping learners improve pronunciation, fluency, and comprehension in various languages.

'''Personalized Learning'''<br>Educational institutions use AI TTS to provide personalized learning experiences, adapting content to individual student needs and preferences.

'''Teacher Assistance'''<br>TTS tools support educators in creating and delivering content, from generating voiceovers for instructional videos to offering speech feedback on assignments.
=== 3. Society ===
'''Language Preservation'''<br>Cross-lingual text-to-speech (CTTS) has facilitated communication across language barriers and has played a role in preserving and revitalizing low-resource and endangered languages<ref>Cooper, E. (2019). ''Text-to-Speech Synthesis Using Found Data for Low-Resource Languages''. Columbia University.</ref>, promoting linguistic diversity and aiding in documentation and communication. This is invaluable in a globalized world, allowing for better understanding and cooperation.

'''Digital Inclusion'''<br>TTS technology promotes digital inclusion by making digital content accessible to individuals with low literacy skills and those with disabilities. Improved TTS technology has greatly enhanced accessibility for individuals with visual impairments<ref>Edward, S., & Xavier, J. B. (2018). ''Text-To-Speech Device for Visually Impaired People''. ''119''. <nowiki>https://www.acadpubl.eu/hub/</nowiki></ref>. It allows text-based information to be converted into speech, making digital content more accessible to a wider audience.

'''Entertainment and Content Creation'''<br>The entertainment industry has benefited from TTS technology through voice cloning and dubbing<ref>Pecora, A. E. (2023). ''Data driven: AI Voice Cloning'' [Master’s Degree Thesis]. Politecnico di Torino.</ref>. It has become easier to dub movies, create voiceovers, and even bring back historical voices for documentaries and other media productions. AI TTS continues to support voiceovers in video games, audiobooks, and other audio content, contributing to the entertainment experience.

'''Emergency Communication'''<br>During emergencies and crisis situations, AI TTS is used to disseminate critical information rapidly, ensuring public safety and information access.
=== 4. Privacy and Ethical Concerns<ref>Azmoodeh, A., & Dehghantanha, A. (2022). ''Deep Fake Detection, Deterrence and Response: Challenges and Opportunities''. <nowiki>https://doi.org/10.48550/arXiv.2211.14667</nowiki></ref> ===
'''Deepfake Threat'''<br>The potential for AI TTS to be used in deepfake audio and video content has become a growing concern. This emphasizes the need for robust authentication and content verification mechanisms.

'''Data Privacy'''<br>The collection and storage of voice data for TTS models raise concerns about data privacy. Regulations and guidelines have been developed to address these issues.

'''Bias and Cultural Sensitivity'''<br>The challenge of mitigating bias and ensuring cultural sensitivity in TTS models remains a critical consideration in their development and deployment.
== Future Research ==
=== 1. High-quality Speech Synthesis ===
The most important goal of TTS is to synthesize high-quality speech. The quality of speech is determined by many aspects that influence its perception, including intelligibility, naturalness, expressiveness, prosody, emotion, style, robustness, and controllability. While neural approaches have significantly improved the quality of synthesized speech, there is still large room for further improvement<ref name=":0">Tan, X., Qin, T., Soong, F., & Liu, T. Y. (2021). A survey on neural speech synthesis. ''arXiv preprint arXiv:2106.15561''.[https://arxiv.org/abs/2106.15561]</ref>.
==== 1.1 Affective Speech Synthesis ====
===== a. Emotional Vocal Bursts =====
Within the realm of emotional speech synthesis, a particularly intriguing area of exploration is emotional vocal bursts. In the now famous promotional video for Google Assistant, the crowd erupted in cheers as the assistant assured the hairdresser that “taking one second” to look for an appointment was fine with a mere “Mm-hmm.” This example vividly demonstrates the significance of vocal bursts in conveying emotional reactions. In fact, the synthesis of such vocal bursts was the focal point of the [https://www.competitions.hume.ai/exvo2022 ExVo 2022 challenge]. The most successful approach in that challenge, utilizing StyleGAN2, already yielded promising outcomes, underscoring the considerable potential of this avenue of research<ref name=":1">Triantafyllopoulos, A., Schuller, B. W., İymen, G., Sezgin, M., He, X., Yang, Z., ... & Tao, J. (2023). An overview of affective speech synthesis and conversion in the deep learning era. ''Proceedings of the IEEE''.[https://ieeexplore.ieee.org/abstract/document/10065433?casa_token=x7b0fbCS-JYAAAAA:46lTQNSjvQ8vvX5dFAzVn2ESL3HRyOyDbHmiMDBPzTYB1EGd9ITjCm-NpLrbINMeluhsJ69A]</ref>.
===== b. Endowing the Agent with an Artificial Personality =====
This area has been pursued for several decades. However, the topic has recently been revived in the context of large language models, which can be adapted to emulate a specific personality. As personality has also been shown to manifest in speech signals, it is an evident next step to introduce it to conversational agents as well. In general, as exemplified by the tasks featured in the Computational Paralinguistics Challenge, there exists a plethora of speaker states and traits that can be modeled from speech: deception, sincerity, nativeness, cognitive load, likability, interest, and others are all variables that could be added to the capabilities of affective agents<ref name=":1" />.
===== c. Personalization =====
Personalization is expected to be another major aspect of future speech synthesis systems. Both the expression and the perception of emotion show individualistic effects, which are currently underexploited in the speech synthesis field. Future approaches can benefit from adopting a similar mindset and adapting the production of emotional speech to a style that fits both the speaker and the listener. Such an interpersonal adaptation effect is also seen in human conversations and is a necessary step to foster communication<ref name=":1" />.
Specifically, '''child speech synthesis''' is one promising research area. Due to the difficulties in collecting and understanding children’s speech data, synthesizing children’s speech has always been challenging. In recent years, neural-network-based TTS systems have been gaining popularity. For instance, Hasija, Kadyan, and Guleria<ref>Hasija, T.; Kadyan, V.; Guleria, K. Out Domain Data Augmentation on Punjabi Children Speech Recognition using Tacotron. In Proceedings of the International Conference on Mathematics and Artificial Intelligence (ICMAI 2021), Chengdu, China, 19–21 March 2021.[https://iopscience.iop.org/article/10.1088/1742-6596/1950/1/012044]</ref> used Tacotron for the development of children’s synthetic speech. However, the lack of children’s speech data still persists. For future developments, researchers need to define better acoustic features for children’s speech. Moreover, pronunciation modelling is required<ref>Terblanche, C., Harty, M., Pascoe, M., & Tucker, B. V. (2022). A Situational Analysis of Current Speech-Synthesis Systems for Child Voices: A Scoping Review of Qualitative and Quantitative Evidence. ''Applied Sciences'', ''12''(11), 5623. <nowiki>https://doi.org/10.3390/</nowiki>[https://www.mdpi.com/2076-3417/12/11/5623 app12115623]</ref>.
===== d. Interaction between AI and Humans =====
Interactions with users can be classified as “successful” or not, depending on the goals of the agent. Coupled with effective speech recognition capabilities, these interactions constitute a natural reward signal, which the agent can use to improve its speech synthesis and speech recognition capacities in a lifelong reinforcement learning setup, something that still remains an elusive goal for the field of affective computing. An overture to this exciting domain can already be found in intelligent dialog generation, where reinforcement learning is already being used to adjust the linguistic style of an agent or to learn backchanneling responses. This paradigm is expected to be more widely used in TTS in the near future<ref name=":1" />.
==== 1.2 Better Representation Learning ====
Good representations of text and speech benefit neural TTS models and can improve the quality of synthesized speech. Initial explorations of text pre-training indicate that better text representations can indeed improve speech prosody. How to learn powerful representations for text/phoneme sequences, and especially for speech sequences, through unsupervised/self-supervised learning and pre-training is challenging and worth further exploration<ref name=":0" />.
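One widely used self-supervised recipe for speech representations is masked prediction: random frames of the input are hidden and the encoder is trained to reconstruct them, so the learned representation has to capture context. The sketch below illustrates only that training signal on toy mel frames; the model, masking rate and loss are placeholders rather than any specific pretraining method.
<syntaxhighlight lang="python">
# Masked-frame reconstruction sketch (self-supervised pretraining idea, toy scale).
import torch
import torch.nn as nn

encoder = nn.GRU(80, 128, batch_first=True)    # representation model
reconstruct = nn.Linear(128, 80)               # predicts the original (masked) frames
optim = torch.optim.Adam(
    list(encoder.parameters()) + list(reconstruct.parameters()), lr=1e-3)

mels = torch.randn(4, 200, 80)                 # unlabeled speech features
mask = torch.rand(4, 200) < 0.15               # mask ~15% of frames
masked_input = mels.clone()
masked_input[mask] = 0.0                       # hide the selected frames

hidden, _ = encoder(masked_input)
pred = reconstruct(hidden)
loss = nn.functional.l1_loss(pred[mask], mels[mask])   # only score masked positions
optim.zero_grad(); loss.backward(); optim.step()
print(loss.item())
</syntaxhighlight>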
=== 2. Efficient Speech Synthesis<ref name=":0" /> ===
Efficient speech synthesis is about reducing the cost of speech synthesis, including the cost of collecting and labeling training data and of training and serving TTS models.
==== 2.1 Data-efficient TTS ====
Many '''low-resource languages''' lack training data. How to leverage unsupervised/semi-supervised learning and cross-lingual transfer learning to help low-resource languages is an interesting direction. For example, the [https://www.zerospeech.com/ ZeroSpeech Challenge] is a good initiative for exploring techniques that learn from speech alone, without any text or linguistic knowledge. Besides, in voice adaptation, a target speaker usually has little adaptation data, which is another application scenario for data-efficient TTS.
==== 2.2 Parameter-efficient TTS ====
Today’s neural TTS systems usually employ large neural networks with tens of millions of parameters to synthesize high-quality speech, which blocks deployment on mobile and other low-end devices with limited memory and power budgets. Designing compact and lightweight models with smaller memory footprints, lower power consumption and lower latency is critical for those application scenarios.
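Two simple levers for shrinking a model are reducing parameter counts and lowering numeric precision. The sketch below counts the parameters of a toy stack of linear layers and applies PyTorch's dynamic int8 quantization to them; real deployments also rely on pruning, distillation and architectural changes, and the layer sizes here are arbitrary.
<syntaxhighlight lang="python">
# Parameter-count and quantization sketch for a toy TTS-like model (PyTorch).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 1024), nn.ReLU(),
                      nn.Linear(1024, 1024), nn.ReLU(),
                      nn.Linear(1024, 80))

n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params/1e6:.2f} M "
      f"(~{n_params*4/1e6:.1f} MB at float32)")

# Dynamic quantization stores Linear weights as int8 and dequantizes on the fly,
# giving roughly a 4x memory reduction for those layers.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
with torch.no_grad():
    out = quantized(torch.randn(1, 256))
print(out.shape)
</syntaxhighlight>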
==== 2.3 Energy-efficient TTS ====
Training and serving a high-quality TTS model consume a lot of energy and emit a lot of carbon. Improving energy efficiency, e.g., reducing the [[wikipedia:FLOPS|FLOPs]] used in TTS training and inference, is important for letting more people benefit from advanced TTS techniques while reducing carbon emissions to protect our environment.
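As a rough illustration of where the compute goes, the snippet below estimates the multiply-accumulate operations of a stack of linear layers per second of synthesized speech; comparing such back-of-the-envelope numbers across architectures is one simple way to reason about energy use before measuring on hardware. The frame rate and layer sizes are assumed values, not measurements of any real model.
<syntaxhighlight lang="python">
# Back-of-the-envelope FLOPs estimate for a toy TTS acoustic model (illustrative only).
FRAMES_PER_SECOND = 86            # e.g. mel frames at ~11.6 ms hop (assumed)
LAYERS = [(256, 1024), (1024, 1024), (1024, 80)]   # (in_features, out_features), assumed

macs_per_frame = sum(i * o for i, o in LAYERS)      # multiply-accumulates per frame
flops_per_second_of_audio = 2 * macs_per_frame * FRAMES_PER_SECOND

print(f"{macs_per_frame/1e6:.2f} M MACs per frame")
print(f"{flops_per_second_of_audio/1e9:.2f} GFLOPs per second of synthesized speech")
</syntaxhighlight>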
== LLM Review ==
We first asked GPT to review our text; the prompt we used was: '''"Act as a reader who has no previous knowledge about AI-driven Text to Speech and provide feedback for this text."''' The response from GPT was as follows: <blockquote>As a reader with no previous knowledge about AI-driven Text to Speech (TTS), I found this text to be highly informative and well-structured. It provides a comprehensive introduction to the evolution of TTS technology, starting from its historical roots in the 18th century to the groundbreaking innovations of the 2020s.
# '''Clarity and Structure:''' The text is well-organized, with clear section headings, making it easy to follow the progression of TTS technology. The historical context section provides essential background information, and the subsequent sections delve into key innovations, impacts, and future research.
# '''Historical Context:''' The historical context provided in the text is enlightening and helps create a foundation for understanding the evolution of TTS technology. It's fascinating to learn about the early attempts at mimicking human speech.
# '''Key Innovations:''' The section on key innovations is particularly valuable for someone new to the topic. It breaks down the significant advancements in TTS, such as transfer learning, rule-based systems, concatenative TTS, prosody modeling, zero-shot learning, and voice cloning. This helps in understanding the diverse approaches within TTS technology.
# '''Impacts:''' The text effectively highlights the practical applications of AI TTS across various domains, including business, education, and society. It's beneficial to know how this technology is being used in real-world scenarios, such as customer engagement, language preservation, and accessibility.
# '''Privacy and Ethical Concerns:''' Addressing the concerns related to privacy and ethical considerations is crucial. It's good to see that the text acknowledges these issues, particularly the potential misuse of AI TTS in deepfake content.
# '''Future Research:''' The section on future research provides a glimpse into the ongoing developments and potential directions in the field of TTS. It's forward-looking and showcases the evolving nature of the technology.
</blockquote>While we appreciated the positive feedback, we felt that GPT's response was excessively complimentary and somewhat generic. We then changed the prompt: '''"Please provide a critique of this popular science article from the perspective of an expert in the TTS (Text-to-Speech) field, highlighting the areas that require improvement."''' The response from GPT was as follows:<blockquote>This popular science article provides a comprehensive overview of the evolution of Text-to-Speech (TTS) technology, including historical context, key innovations, impacts, and future research directions. However, from an expert perspective in the TTS field, there are some areas that could be improved:
# '''In-Depth Technical Insights''': While the article covers a wide range of TTS topics, it lacks in-depth technical insights. As a TTS expert, I would be interested in more technical details about the various TTS techniques and models mentioned, such as neural TTS architectures, concatenative TTS improvements, and prosody modeling. Adding more technical depth would make the article more valuable to experts in the field.
# '''Voice Cloning and Deepfakes''': Given the rise of voice cloning and deepfake technologies, it's crucial to address these topics in the article, discussing both the positive and potentially harmful applications of TTS in these areas.
# '''Expand on Impact:''' While you've touched upon the impacts of AI TTS in various domains, you can provide more specific examples or case studies to illustrate how businesses, educational institutions, and other sectors have benefited from this technology. Real-world examples can make the impacts more tangible to readers.
# '''Future Research Directions''': The section on future research directions is comprehensive but could be more focused. TTS experts would appreciate a more concise and prioritized list of the most pressing research areas and challenges in the field.
# '''Visual Aids''': The inclusion of visuals like diagrams or charts to illustrate key concepts and the evolution of TTS techniques would enhance the article's clarity and appeal to a broader audience.
# '''Transition Sentences:''' Use clear transition sentences to guide readers from one section to the next. This can make the text flow more smoothly and aid in comprehension.
# '''Shorten Sentences and Paragraphs:''' Some sentences and paragraphs are quite long. Breaking them up or simplifying complex sentences can improve readability.
</blockquote>This time, GPT gave more professional and targeted suggestions, but we thought that several of them may be too specialized and did not take into account the general audience that reads this wiki page; GPT apparently ignored the keyword "popular science article" in the prompt. So we ignored suggestions #1, #3, and #5, and adopted suggestions #4, #7 and #8, which help to make the article more readable and comprehensible. We also ignored suggestion #6 because it is difficult to implement (edited pages cannot directly insert images).
== Team Members ==
Yilan Wei

Xueying Liu

Xinyi Ma

Jingsi Huang

Wansu Zhu
== References ==
Latest revision as of 20:28, 18 October 2023
Introduction[edit | edit source]
Since the turn of the 21st century, AI Text-to-Speech (TTS) technology has experienced a remarkable evolution, transitioning from rule-driven synthesis to end-to-end approaches founded on deep learning. These transformative shifts have resulted in remarkable improvements in the quality, naturalness, and adaptability of TTS systems for speech synthesis.
In the 2020s, AI TTS underwent groundbreaking changes, introducing innovations such as voice cloning and zero-shot learning, ultimately achieving an unparalleled level of natural and expressive speech synthesis. These advancements laid the cornerstone for the modern TTS technology we have today.
AI TTS has made an indelible impact on various domains, including business, education, and society, by transcending language barriers and gaining the ability to convey emotions and individuality. However, its widespread adoption has concurrently raised valid concerns regarding privacy and bias.
Historical Context[edit | edit source]
The history of speech synthesis can be traced back to the 18th century. The first machine that attempted to produce human speech was created by Christian Kratzenstein in 1769. This machine used hollow tubes to resemble resonators and by adjusting resonators’ lengths and shapes, the machine produced five vowels A, E, I, O and U. In 1791, Wolfgang von Kempelen created a “speaking machine” that simulated the human vocal tract and could produce vowels and consonants using a series of bellows, reeds and mechanical components. Later in the 20th century, Bell Labs developed Voder (Voice Operating Demonstrator) , an electrical speech synthesizer that employed bandpass electronic filters and operator control for sound generation.
The central concept during the early development of speech synthesis was to copy the speech[1]. The approach involved either mechanical or electrical. The former aimed to mimic vocal tract movements, while the latter worked on filtering electrical signals to emulate the vocal tract's function. It is worth noting that these early machines were not text to speech systems, as they convert plain text into speech. Nevertheless, they represented significant milestones in the evolution of modern TTS systems, despite their limitations in producing a full range of sounds or sentences. A video filmed in 1939 about Voder demonstrated that the Voder could only produce a single sentence ‘she saw me’ with varying word stress.
Full-fledged text to speech systems emerged until the late 20th century. Most of TTS systems at that time were rule-based, meaning they generated speech using predefined linguistic and phonetic rules. One of the earliest full text to speech systems MITalk[2], created by researchers at the Massachusetts Institute of Technology (MIT) in 1960s, was one such rule-based system. It allowed users to hear speech as they typed. However, the speech output still lacked naturalness and expressiveness. Additionally, compiling linguistic rules for different languages requires lots of manual work and due to the flexibility of languages, it is impossible to cover all variations and exceptions.
From late 20th onwards, new TTS techniques such as Parametric TTS, Hidden Markov Models (HMMs), and Concatenative TTS were developed to address naturalness, adaptability and flexibility issues, greatly improving the performance of TTS. However, the limitations of these techniques are inevitable. For example, despite the naturalness of concatenative TTS, it lacks the ability to generate new voices because it relies heavily on recorded speech data.
Thanks to the advancement of artificial intelligence (AI), recent TTS systems can generate natural, human-like speech by adopting innovative training and learning models, as well as enhanced traditional techniques. In the upcoming section, we will delve deeper into AI techniques.
Key Innovations[edit | edit source]
The 2020s have marked a significant decade in the evolution of Text-to-Speech (TTS) technology driven by artificial intelligence (AI). This period has witnessed a host of groundbreaking innovations that have further refined and expanded the capabilities of TTS systems. Some key innovations in the 2020s include:
1. Transfer Learning and Pretrained Models[edit | edit source]
One of the pivotal advancements in the 2020s has been the widespread adoption of transfer learning in TTS. Transfer learning allows knowledge learned from one or more base models to be transferred to other tasks. [3]For TTS, this can include knowledge learned from a general speech model and then applied to personalized voice synthesis. This helps expedite the training of personalized voice models as the model already possesses some general speech characteristics. [4]Models like GPT-3 and BERT, initially designed for natural language processing, have been adapted for TTS tasks[5]. This approach has led to more efficient training and improved performance in TTS systems, with less need for extensive domain-specific data.
2. Rule-based TTS Systems[edit | edit source]
Rule-based TTS systems have continued to contribute to the advancement of AI-driven TTS since 2020. While the fact that rule-based TTS still faces challenges in terms of naturalness and emotional expressiveness compared to neural TTS systems, rule-based TTS finds its value in specialized fields like medicine and law, where precise pronunciation of domain-specific terms is vital. These systems remain relevant for languages with limited linguistic resources, making them suitable for low-resource languages. Additionally, some TTS systems adopt hybrid approaches, combining rule-based and neural network-based techniques to leverage customization while benefiting from naturalness. [6]They excel in applications requiring high control over speech output, such as accessibility solutions.
3. Concatenative TTS[edit | edit source]
In the AI era, Concatenative TTS systems have seen new developments aimed at improving their performance and adaptability. These developments include hybrid approaches that combine Concatenative TTS with neural network-driven TTS for higher quality and more natural synthesized speech. Additionally, there is a growing trend toward larger speech databases, enabling better voice selection for smoother and more natural speech synthesis across various text contexts. Real-time applications have seen improvements in reducing latency, making Concatenative TTS more practical for real-time communication, voice assistants, and automated voice responses. Personalized TTS, which leverages AI, allows users to customize synthesized voices to their preferences, with potential applications in education, entertainment, and assistive technologies. Moreover, Concatenative TTS systems are extending their support to multiple languages and dialects, making them applicable to diverse global markets. They also find increasing use in specialized fields such as medicine, law, and science to ensure accurate pronunciation of domain-specific terms.[7]
4. Prosody Modeling[edit | edit source]
Focusing on prosody, or the melody and rhythm of speech, prosody modeling has been a significant area of advancement. Research in modeling prosody has led to more Prosody and expressiveness enhancement by focusing on refining the prosody and expressiveness of synthesized speech. Advanced models now incorporate prosody-aware training[8], enabling TTS systems to convey emotions, nuances, and variations in pitch and rhythm more effectively, making the speech sound more natural and human-like.[9]
5. Zero-shot Learning[edit | edit source]
TTS systems have significantly advanced in recent years with deep learning approaches. These advances have motivated research that aims to synthesize speech into the voice of a target speaker using just a few seconds of speech. This approach is called zero-shot multi-speaker TTS. Innovations in zero-shot learning have allowed TTS models to generate speech in languages and styles they were not explicitly trained on. These models leverage multilingual and cross-lingual capabilities, enabling TTS systems to be more versatile and adaptable to diverse linguistic contexts.[10]
6. Voice Cloning[edit | edit source]
Voice cloning models are trained to capture the specific speaker's pitch, tone, and speech characteristics, making the generated speech more similar to the specific speaker. This can be achieved using deep learning techniques, such as Generative Adversarial Networks (GANs). With this approach, we saw more advancements in creating personalized and customizable voices since the 2020s. TTS systems can now mimic specific voices or allow users to tailor the characteristics of the generated speech, fostering more engaging and adaptive human-computer interactions.[11]
7. Evolving Neural Architectures and End-to-End Approaches[edit | edit source]
In recent years, the field of AI-driven TTS has seen remarkable progress. Neural network architectures, particularly Transformers and their variants, have revolutionized TTS research by enhancing parallelization, enabling real-time, and high-quality TTS. These architectures, featuring attention mechanisms and positional embeddings, have become standard for capturing context and improving synthesis quality. In this context, End-to-End Approaches have made significant strides since 2020, streamlining the TTS process by utilizing powerful neural networks to directly transform text into speech waveforms. [12]This development has yielded voices that are more human-like and of higher quality, with enhanced customization and personalization capabilities. However, challenges such as data availability and fine-tuning for less common languages persist, yet End-to-End TTS continues to find application in real-time scenarios and personalized voice synthesis.[1]
Impacts[edit | edit source]
The advancements in AI Text-to-Speech (TTS) technology in the 2020s have had profound impacts across various domains:
1. Business[edit | edit source]
Cost-Effective Marketing
AI TTS has allowed businesses to create cost-effective marketing materials by generating high-quality voiceovers for advertisements, promotional videos, and e-commerce product descriptions[13]. This has enabled smaller businesses to compete with larger counterparts.
Elevated Customer Engagement
AI TTS is being used in customer service and support chatbots, providing a more engaging and interactive experience for customers. This technology reduces the need for human operators in routine tasks and enables 24/7 support and quick responses to customer queries[14].
Multilingual Communication
Companies have expanded their global reach by using AI TTS to provide content in multiple languages, which is especially important for businesses with international customers and markets.
Enhanced Brand Recognition
Customized brand voices can help businesses stand out in a crowded market. With AI TTS, businesses can maintain a consistent brand voice across various touchpoints.
2. Education[edit | edit source]
Accessibility and Inclusion
TTS technology is being used in education and e-learning platforms to provide audio versions of text content. This benefits students with diverse learning styles and those with reading difficulties[15].
Language Learning
TTS technology remains an asset in language learning, helping learners improve pronunciation, fluency, and comprehension in various languages.
Personalized Learning
Educational institutions use AI TTS to provide personalized learning experiences, adapting content to individual student needs and preferences.
Teacher Assistance
TTS tools support educators in creating and delivering content, from generating voiceovers for instructional videos to offering speech feedback on assignments.
3. Society[edit | edit source]
Language Preservation
Cross-lingual text-to-speech (CTTS) has facilitated communication across language barriers and has played a role in preserving and revitalizing low-resource and endangered languages[16], promoting linguistic diversity and aiding in documentation and communication. This is invaluable in a globalized world, allowing for better understanding and cooperation.
Digital Inclusion
TTS technology promotes digital inclusion by making digital content accessible to individuals with low literacy skills and those with disabilities. Improved TTS technology has greatly enhanced accessibility for individuals with visual impairments[17]. It allows text-based information to be converted into speech, making digital content more accessible to a wider audience.
Entertainment and Content Creation
The entertainment industry has benefited from TTS technology through voice cloning and dubbing[18]. It has become easier to dub movies, create voiceovers, and even bring back historical voices for documentaries and other media productions. AI TTS continues to support voiceovers in video games, audiobooks, and other audio content, contributing to the entertainment experience.
Emergency Communication
During emergencies and crisis situations, AI TTS is used to disseminate critical information rapidly, ensuring public safety and information access.
4. Privacy and Ethical Concerns[19][edit | edit source]
Deepfake Threat
The potential for AI TTS to be used in deepfake audio and video content has become a growing concern. This emphasizes the need for robust authentication and content verification mechanisms.
Data Privacy
The collection and storage of voice data for TTS models raise concerns about data privacy. Regulations and guidelines have been developed to address these issues.
Bias and Cultural Sensitivity
The challenge of mitigating bias and ensuring cultural sensitivity in TTS models remains a critical consideration in their development and deployment.
Future Research[edit | edit source]
1. High-quality Speech Synthesis[edit | edit source]
The most important goal of TTS is to synthesize high-quality speech. Speech quality is determined by many aspects that shape how it is perceived, including intelligibility, naturalness, expressiveness, prosody, emotion, style, robustness, and controllability. While neural approaches have significantly improved the quality of synthesized speech, there is still considerable room for further improvement[20].
1.1 Affective Speech Synthesis[edit | edit source]
a. Emotional Vocal Bursts[edit | edit source]
Within the realm of emotional speech synthesis, a particularly intriguing area of exploration is emotional vocal bursts. In the now famous promotional video for the Google Assistant, the crowd erupted in cheers when the assistant acknowledged the hairdresser's request to "take one second" to look for an appointment with a mere "Mm-hmm." This example vividly demonstrates how much emotional reaction a short vocal burst can convey. The synthesis of such vocal bursts was the focal point of the 2022 ExVo challenge, where the most successful approach, based on StyleGAN2, yielded promising outcomes, underscoring the considerable potential of this line of research[21].
b. Endowing the Agent with an Artificial Personality[edit | edit source]
This area has been pursued for several decades, but has recently been revived in the context of large language models, which can be adapted to emulate a specific personality. Since personality has also been shown to manifest in speech signals, introducing it into conversational agents is an evident next step. More generally, as exemplified by the tasks featured in the Computational Paralinguistics Challenge, there is a plethora of speaker states and traits that can be modeled from speech: deception, sincerity, nativeness, cognitive load, likability, interest, and others are all variables that could be added to the capabilities of affective agents[21].
c. Personalization[edit | edit source]
Personalization is expected to be another major aspect of future speech synthesis systems. Both the expression and the perception of emotion show individual effects, which are currently underexploited in the speech synthesis field. Future approaches could benefit greatly from adopting a similar mindset and adapting the production of emotional speech to a style that fits both the speaker and the listener. Such interpersonal adaptation is also seen in human conversations and is a necessary step to foster communication[21].
Specifically, child speech synthesis is one promising research area. Because children's speech is difficult both to collect and to recognize, synthesizing it has always been challenging. In recent years, neural-network-based TTS systems have gained popularity; for instance, Hasija, Kadyan, and Guleria[22] used Tacotron to develop synthetic children's speech. However, the scarcity of children's speech data persists. Future work will need better acoustic features for children's speech as well as dedicated pronunciation modelling[23].
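One plausible way to realize such interpersonal adaptation, sketched below purely as an illustration, is to condition a toy encoder on learned embeddings for both the speaker and the listener. The module name (PersonalizedEncoder) and all sizes are hypothetical and not taken from any cited system.

```python
import torch
import torch.nn as nn

class PersonalizedEncoder(nn.Module):
    """Toy text encoder conditioned on speaker and listener identity.

    Illustrative sketch only: real systems would use far richer conditioning
    (emotion labels, style tokens, reference audio), but the principle of
    adding speaker- and listener-dependent vectors is the same.
    """

    def __init__(self, vocab_size=100, dim=128, n_speakers=10, n_listeners=10):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, dim)
        self.speaker_emb = nn.Embedding(n_speakers, dim)    # who is talking
        self.listener_emb = nn.Embedding(n_listeners, dim)  # who is listening
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, phoneme_ids, speaker_id, listener_id):
        x = self.text_emb(phoneme_ids)                       # [B, T, dim]
        # Broadcast the two identity vectors over the whole utterance.
        cond = self.speaker_emb(speaker_id) + self.listener_emb(listener_id)
        x = x + cond.unsqueeze(1)
        out, _ = self.rnn(x)
        return out                                           # fed to a decoder/vocoder

enc = PersonalizedEncoder()
phonemes = torch.randint(0, 100, (1, 20))
hidden = enc(phonemes, torch.tensor([3]), torch.tensor([7]))
print(hidden.shape)  # torch.Size([1, 20, 128])
```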
d. Interaction between AI and Human[edit | edit source]
Interactions between an agent and a human can be classified as "successful" or not, depending on the agent's goals. Coupled with effective speech recognition capabilities, these interactions constitute a natural reward signal, which the agent can use to improve its speech synthesis and speech recognition capacities in a lifelong reinforcement learning setup, something that still remains an elusive goal for the field of affective computing. An overture to this exciting domain can already be found in intelligent dialogue generation, where reinforcement learning is used to adjust the linguistic style of an agent or to learn backchanneling responses. This paradigm is expected to be more widely used in TTS in the near future[21].
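As a minimal, purely illustrative example of how a binary "successful interaction" signal could serve as a reward, the REINFORCE-style sketch below shifts a categorical policy over a few speaking styles toward whichever style a simulated listener rewards. It is a toy setup under our own assumptions, not a description of any deployed agent.

```python
import torch
import torch.nn as nn

# Toy policy over a few speaking styles (e.g. neutral, empathetic, upbeat).
styles = ["neutral", "empathetic", "upbeat"]
logits = nn.Parameter(torch.zeros(len(styles)))
optimizer = torch.optim.Adam([logits], lr=0.1)

def interact(style_index: int) -> float:
    """Stand-in for a real dialogue: pretend listeners prefer 'empathetic'."""
    return 1.0 if styles[style_index] == "empathetic" else 0.0

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = interact(action.item())          # 1.0 = successful interaction
    loss = -dist.log_prob(action) * reward    # REINFORCE update
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(torch.softmax(logits, dim=0))  # probability mass shifts toward 'empathetic'
```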
1.2 Better Representation Learning[edit | edit source]
Good representations of text and speech are beneficial for neural TTS models and can improve the quality of synthesized speech. Initial explorations of text pre-training indicate that better text representations can indeed improve speech prosody. How to learn powerful representations for text/phoneme sequences, and especially for speech sequences, through unsupervised/self-supervised learning and pre-training is challenging and worth further exploration[20].
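As a minimal sketch of this idea, the snippet below extracts contextual text embeddings that a TTS front-end could fuse with its usual phoneme features, for example to inform prosody prediction. It assumes the Hugging Face transformers library and a generic BERT encoder; neither is specific to any system cited here, and how the embeddings are fused is left open.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Pre-trained text encoder; its contextual embeddings can complement
# phoneme inputs, e.g. to improve prosody prediction.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Reading the weather forecast should not sound like reading a will."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # [1, num_tokens, 768]

# 'hidden' could now be upsampled to phoneme or frame resolution and
# concatenated with the usual TTS encoder features.
print(hidden.shape)
```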
2. Efficient Speech Synthesis[20][edit | edit source]
Efficient speech synthesis is about reducing the cost of speech synthesis, including the cost of collecting and labeling training data and of training and serving TTS models.
2.1 Data-efficient TTS[edit | edit source]
Many low-resource languages lack training data. How to leverage unsupervised/semi-supervised learning and cross-lingual transfer learning to help low-resource languages is an interesting direction. For example, the ZeroSpeech Challenge is a good initiative for exploring techniques that learn from speech alone, without any text or linguistic knowledge. Likewise, in voice adaptation a target speaker usually provides only a small amount of adaptation data, which is another application scenario for data-efficient TTS.
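One common recipe for cross-lingual transfer, sketched below under our own assumptions (a toy AcousticModel with invented sizes and no real phoneme mapping between languages), is to pre-train on a high-resource language and then fine-tune only part of the network on the small target-language corpus.

```python
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """Toy acoustic model: phoneme encoder + mel-spectrogram decoder."""

    def __init__(self, n_phonemes=80, dim=256, n_mels=80):
        super().__init__()
        self.encoder = nn.Sequential(nn.Embedding(n_phonemes, dim),
                                     nn.Linear(dim, dim), nn.ReLU())
        self.decoder = nn.Linear(dim, n_mels)

    def forward(self, phoneme_ids):
        return self.decoder(self.encoder(phoneme_ids))

model = AcousticModel()
# 1) Pretend the model was pre-trained on a high-resource language:
# model.load_state_dict(torch.load("pretrained_high_resource.pt"))  # hypothetical checkpoint

# 2) Freeze the decoder and adapt only the encoder with the small
#    amount of target-language data (the reverse split is also common).
for p in model.decoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

phonemes = torch.randint(0, 80, (4, 50))       # tiny low-resource batch
target_mels = torch.randn(4, 50, 80)           # dummy targets for illustration
loss = nn.functional.mse_loss(model(phonemes), target_mels)
loss.backward()
optimizer.step()
print(loss.item())
```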
2.2 Parameter-efficient TTS[edit | edit source]
Today's neural TTS systems usually employ large neural networks with tens of millions of parameters to synthesize high-quality speech, which blocks deployment on mobile and other low-end devices with limited memory and power budgets. Designing compact and lightweight models with a smaller memory footprint, lower power consumption and lower latency is critical for those application scenarios.
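To make the parameter budget concrete, the illustrative snippet below compares a wider stack of feed-forward blocks with a slimmer one; both configurations are invented for illustration and do not correspond to any published TTS model.

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

def make_decoder(dim: int, layers: int) -> nn.Module:
    """Toy stack of feed-forward blocks standing in for a TTS decoder."""
    return nn.Sequential(*[nn.Sequential(nn.Linear(dim, 4 * dim),
                                         nn.ReLU(),
                                         nn.Linear(4 * dim, dim))
                           for _ in range(layers)])

large = make_decoder(dim=512, layers=12)   # "server-size" configuration (~25 M params)
small = make_decoder(dim=192, layers=4)    # "on-device" configuration (~1.2 M params)

print(f"large: {count_parameters(large) / 1e6:.1f} M parameters")
print(f"small: {count_parameters(small) / 1e6:.1f} M parameters")
```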
2.3 Energy-efficient TTS[edit | edit source]
Training and serving a high-quality TTS model consumes a great deal of energy and emits a corresponding amount of carbon. Improving energy efficiency, e.g., reducing the FLOPs in TTS training and inference, is important so that more people can benefit from advanced TTS techniques while carbon emissions are kept low to protect our environment.
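As a back-of-the-envelope illustration of where the compute goes at inference time, the snippet below estimates the multiply-accumulate cost of a stack of feed-forward blocks, of the kind used in neural TTS decoders, for one second of audio. The frame rate and layer sizes are assumptions, not measurements of any real system.

```python
# Rough FLOP estimate for synthesizing one second of speech with a toy
# feed-forward decoder (assumed values, for illustration only).
frames_per_second = 80          # e.g. ~12.5 ms mel frames
dim, layers = 512, 12

# Each block does two matrix multiplies per frame:
#   [dim -> 4*dim] and [4*dim -> dim], ~2 * in * out FLOPs each.
flops_per_frame = layers * 2 * (2 * dim * 4 * dim)
flops_per_second = flops_per_frame * frames_per_second

print(f"~{flops_per_second / 1e9:.1f} GFLOPs per second of synthesized audio")
# Halving 'dim' cuts this by roughly 4x, which is one reason smaller
# models also tend to be more energy-efficient.
```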
LLM Review[edit | edit source]
We first asked GPT to review our text, using the prompt: "Act as a reader who has no previous knowledge about AI-driven Text to Speech and provide feedback for this text." GPT's response was as follows:
As a reader with no previous knowledge about AI-driven Text to Speech (TTS), I found this text to be highly informative and well-structured. It provides a comprehensive introduction to the evolution of TTS technology, starting from its historical roots in the 18th century to the groundbreaking innovations of the 2020s.
- Clarity and Structure: The text is well-organized, with clear section headings, making it easy to follow the progression of TTS technology. The historical context section provides essential background information, and the subsequent sections delve into key innovations, impacts, and future research.
- Historical Context: The historical context provided in the text is enlightening and helps create a foundation for understanding the evolution of TTS technology. It's fascinating to learn about the early attempts at mimicking human speech.
- Key Innovations: The section on key innovations is particularly valuable for someone new to the topic. It breaks down the significant advancements in TTS, such as transfer learning, rule-based systems, concatenative TTS, prosody modeling, zero-shot learning, and voice cloning. This helps in understanding the diverse approaches within TTS technology.
- Impacts: The text effectively highlights the practical applications of AI TTS across various domains, including business, education, and society. It's beneficial to know how this technology is being used in real-world scenarios, such as customer engagement, language preservation, and accessibility.
- Privacy and Ethical Concerns: Addressing the concerns related to privacy and ethical considerations is crucial. It's good to see that the text acknowledges these issues, particularly the potential misuse of AI TTS in deepfake content.
- Future Research: The section on future research provides a glimpse into the ongoing developments and potential directions in the field of TTS. It's forward-looking and showcases the evolving nature of the technology.
While we appreciated the positive feedback, we found GPT's response excessively complimentary and somewhat generic. We therefore changed the prompt to: "Please provide a critique of this popular science article from the perspective of an expert in the TTS (Text-to-Speech) field, highlighting the areas that require improvement." GPT's response was as follows:
This popular science article provides a comprehensive overview of the evolution of Text-to-Speech (TTS) technology, including historical context, key innovations, impacts, and future research directions. However, from an expert perspective in the TTS field, there are some areas that could be improved:
- In-Depth Technical Insights: While the article covers a wide range of TTS topics, it lacks in-depth technical insights. As a TTS expert, I would be interested in more technical details about the various TTS techniques and models mentioned, such as neural TTS architectures, concatenative TTS improvements, and prosody modeling. Adding more technical depth would make the article more valuable to experts in the field.
- Voice Cloning and Deepfakes: Given the rise of voice cloning and deepfake technologies, it's crucial to address these topics in the article, discussing both the positive and potentially harmful applications of TTS in these areas.
- Expand on Impact: While you've touched upon the impacts of AI TTS in various domains, you can provide more specific examples or case studies to illustrate how businesses, educational institutions, and other sectors have benefited from this technology. Real-world examples can make the impacts more tangible to readers.
- Future Research Directions: The section on future research directions is comprehensive but could be more focused. TTS experts would appreciate a more concise and prioritized list of the most pressing research areas and challenges in the field.
- Visual Aids: The inclusion of visuals like diagrams or charts to illustrate key concepts and the evolution of TTS techniques would enhance the article's clarity and appeal to a broader audience.
- Transition Sentences: Use clear transition sentences to guide readers from one section to the next. This can make the text flow more smoothly and aid in comprehension.
- Shorten Sentences and Paragraphs: Some sentences and paragraphs are quite long. Breaking them up or simplifying complex sentences can improve readability.
This time, GPT gave more professional and targeted suggestions, but several of them seemed too specialized for the general audience of this wiki page, and GPT apparently ignored the keywords "popular science article" in the prompt. We therefore set aside the expert-oriented suggestions, such as adding more technical depth and detailed case studies, and adopted those that make the article more readable and comprehensible, such as clearer transition sentences and shorter sentences and paragraphs. We also did not act on the suggestion to add visual aids, because it is difficult to implement (editing these pages cannot directly insert images).
Team Members[edit | edit source]
Yilan Wei
Xueying Liu
Xinyi Ma
Jingsi Huang
Wansu Zhu
References[edit | edit source]
- ↑ 1.0 1.1 Klatt, D. H. (1987). Review of text-to-speech conversion for English. The Journal of the Acoustical Society of America, 82(3), 737–793. https://doi.org/10.1121/1.395275
- ↑ Allen, J., Hunnicutt, M. S., Klatt, D. H., Armstrong, R. C., & Pisoni, D. B. (1987). From text to speech: The MITalk system. Cambridge University Press.
- ↑ Fang, W., Chung, Y.-A., & Glass, J. (2019). Towards Transfer Learning for End-to-End Speech Synthesis from Deep Pre-Trained Language Models (arXiv:1906.07307). arXiv. http://arxiv.org/abs/1906.07307
- ↑ Huang, W.-C., Hayashi, T., Wu, Y.-C., Kameoka, H., & Toda, T. (2019). Voice Transformer Network: Sequence-to-Sequence Voice Conversion Using Transformer with Text-to-Speech Pretraining (arXiv:1912.06813). arXiv. http://arxiv.org/abs/1912.06813
- ↑ Dida, H. A., Chakravarthy, D. S. K., & Rabbi, F. (2023). ChatGPT and Big Data: Enhancing Text-to-Speech Conversion. Mesopotamian Journal of Big Data, 2023, 33–37.
- ↑ McTear, M. (2021). Rule-Based Dialogue Systems: Architecture, Methods, and Tools. In M. McTear, Conversational AI (pp. 43–70). Springer International Publishing. https://doi.org/10.1007/978-3-031-02176-3_2
- ↑ Panda, S. P., & Nayak, A. K. (2015). A Rule-Based Concatenative Approach to Speech Synthesis in Indian Language Text-to-Speech Systems. Intelligent Computing, Communication and Devices, Volume 309.
- ↑ Raitio, T., Li, J., & Seshadri, S. (2022). Hierarchical prosody modeling and control in non-autoregressive parallel neural TTS. ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 7587–7591. https://ieeexplore.ieee.org/abstract/document/9746253/
- ↑ Vainio, M. (2001). Artificial neural network based prosody models for Finnish text-to-speech synthesis. https://helda.helsinki.fi/bitstream/handle/10138/19873/artifici.pdf?sequence=2
- ↑ Jiang, Z., Ren, Y., Ye, Z., Liu, J., Zhang, C., Yang, Q., Ji, S., Huang, R., Wang, C., Yin, X., Ma, Z., & Zhao, Z. (2023). Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias (arXiv:2306.03509). arXiv. http://arxiv.org/abs/2306.03509
- ↑ Pecora, A. E. (2023). Data driven: AI Voice Cloning [PhD Thesis, Politecnico di Torino]. https://webthesis.biblio.polito.it/27738/
- ↑ Tu, T., Chen, Y.-J., Yeh, C., & Lee, H. (2019). End-to-end Text-to-speech for Low-resource Languages by Cross-Lingual Transfer Learning (arXiv:1904.06508). arXiv. http://arxiv.org/abs/1904.06508
- ↑ Dale, R. (2022). The Voice Synthesis Business: 2022 Update. Cambridge University Press, 28(3). https://doi.org/10.1017/S1351324922000146
- ↑ Karrupusamy, P., Balas, V. E., & Shi, Y. (n.d.). Sustainable Communication Networks and Application: Proceedings of ICSCN 2021.
- ↑ Stodden, R. A., Roberts, K. D., Takahashi, K., Park, H. J., & Stodden, N. J. (2012). Use of Text-to-speech Software to Improve Reading Skills of High School Struggling Readers. International Conference on Software Development for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2012). https://www.sciencedirect.com/
- ↑ Cooper, E. (2019). Text-to-Speech Synthesis Using Found Data for Low-Resource Languages. COLUMBIA UNIVERSITY.
- ↑ Edward, S., & Xavier, J. B. (2018). Text-To-Speech Device for Visually Impaired People. 119. https://www.acadpubl.eu/hub/
- ↑ Pecora, A. E. (2023). Data driven: AI Voice Cloning [Master's Degree Thesis]. Politecnico di Torino.
- ↑ Azmoodeh, A., & Dehghantanha, A. (2022). Deep Fake Detection, Deterrence and Response: Challenges and Opportunities. https://doi.org/10.48550/arXiv.2211.14667
- ↑ 20.0 20.1 20.2 Tan, X., Qin, T., Soong, F., & Liu, T. Y. (2021). A survey on neural speech synthesis. arXiv preprint arXiv:2106.15561.
- ↑ 21.0 21.1 21.2 21.3 Triantafyllopoulos, A., Schuller, B. W., İymen, G., Sezgin, M., He, X., Yang, Z., ... & Tao, J. (2023). An overview of affective speech synthesis and conversion in the deep learning era. Proceedings of the IEEE.
- ↑ Hasija, T., Kadyan, V., & Guleria, K. (2021). Out Domain Data Augmentation on Punjabi Children Speech Recognition using Tacotron. Proceedings of the International Conference on Mathematics and Artificial Intelligence (ICMAI 2021), Chengdu, China, 19–21 March 2021.
- ↑ Terblanche, C., Harty, M., Pascoe, M., & Tucker, B. V. (2022). A Situational Analysis of Current Speech-Synthesis Systems for Child Voices: A Scoping Review of Qualitative and Quantitative Evidence. Applied Sciences, 12(11), 5623. https://doi.org/10.3390/app12115623