== Advancements in AI TTS (2020s) ==
=== 1. High-quality Speech Synthesis ===
The most important goal of TTS is to synthesize high-quality speech. Speech quality is determined by many aspects that influence perception, including intelligibility, naturalness, expressiveness, prosody, emotion, style, robustness, and controllability. While neural approaches have significantly improved the quality of synthesized speech, there is still considerable room for further improvement<ref name=":0">Tan, X., Qin, T., Soong, F., & Liu, T. Y. (2021). A survey on neural speech synthesis. ''arXiv preprint arXiv:2106.15561''. [https://arxiv.org/abs/2106.15561]</ref>.

==== 1.1 Affective Speech Synthesis ====

===== a. Emotional Vocal Bursts =====
Within the realm of emotional speech synthesis, a particularly intriguing area of exploration is emotional vocal bursts. In the now-famous Google Duplex demonstration of the Google Assistant, the crowd erupted in cheers as the assistant assured the hairdresser that “taking one second” to look for an appointment was fine with a mere “Mm-hmm.” This example vividly demonstrates the significance of vocal bursts in conveying emotional reactions. The synthesis of such vocal bursts was the focal point of the 2022 [https://www.competitions.hume.ai/exvo2022 ExVo challenge]. The most successful approach in that challenge, built on StyleGAN2, yielded promising results, underscoring the considerable potential of this avenue of research<ref name=":1">Triantafyllopoulos, A., Schuller, B. W., İymen, G., Sezgin, M., He, X., Yang, Z., ... & Tao, J. (2023). An overview of affective speech synthesis and conversion in the deep learning era. ''Proceedings of the IEEE''. [https://ieeexplore.ieee.org/abstract/document/10065433]</ref>.
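To make the conditioning idea concrete, the sketch below shows, in plain PyTorch, how a latent vector and a discrete emotion label can jointly condition a spectrogram generator. It is a deliberately simplified, hypothetical stand-in for the actual ExVo-winning StyleGAN2 system: the dimensions, the emotion indices, and the fully connected "synthesis network" are all invented for illustration, and a real system would use StyleGAN2's convolutional architecture plus a neural vocoder to obtain waveforms.

<syntaxhighlight lang="python">
# Illustrative sketch only (not the ExVo-winning system): an
# emotion-conditioned generator in the spirit of StyleGAN2, mapping a
# latent vector plus an emotion label to a short mel-spectrogram.
import torch
import torch.nn as nn

N_EMOTIONS = 10             # hypothetical number of emotion classes
LATENT_DIM = 128
N_MELS, N_FRAMES = 80, 64   # size of the generated mel-spectrogram

class BurstGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emotion_emb = nn.Embedding(N_EMOTIONS, LATENT_DIM)
        # "Mapping network": fuses latent and emotion into a style vector.
        self.mapping = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # Toy "synthesis network": style vector -> mel-spectrogram.
        self.synthesis = nn.Sequential(
            nn.Linear(256, 1024), nn.ReLU(),
            nn.Linear(1024, N_MELS * N_FRAMES),
        )

    def forward(self, z, emotion_id):
        style = self.mapping(
            torch.cat([z, self.emotion_emb(emotion_id)], dim=-1))
        return self.synthesis(style).view(-1, N_MELS, N_FRAMES)

# Usage: sample one burst spectrogram for a hypothetical emotion index;
# a neural vocoder (e.g. HiFi-GAN) would then turn it into a waveform.
gen = BurstGenerator()
z = torch.randn(1, LATENT_DIM)
mel = gen(z, torch.tensor([3]))
print(mel.shape)   # torch.Size([1, 80, 64])
</syntaxhighlight>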
===== b. Endowing the Agent with an Artificial Personality =====
This goal has been pursued for several decades, and the topic has recently been revived in the context of large language models, which can be adapted to emulate a specific personality. Since personality has also been shown to manifest in speech signals, introducing it into conversational agents is an evident next step. More generally, as exemplified by the tasks featured in the Computational Paralinguistics Challenge, there is a plethora of speaker states and traits that can be modeled from speech: deception, sincerity, nativeness, cognitive load, likability, interest, and others are all variables that could be added to the capabilities of affective agents<ref name=":1" />.

===== c. Personalization =====
Personalization is expected to be another major aspect of future speech synthesis systems. Both the expression and the perception of emotion show individualistic effects, which are currently underexploited in the speech synthesis field. Future approaches can benefit greatly from adopting a similar mindset and adapting the production of emotional speech to a style that fits both the speaker and the listener. Such interpersonal adaptation is also seen in human conversations and is a necessary step to foster communication<ref name=":1" />. Specifically, '''child speech synthesis''' is one promising research area. Because collecting and understanding children's speech is difficult, synthesizing it has always been challenging. In recent years, neural-network-based TTS systems have been gaining popularity; for instance, Hasija, Kadyan, and Guleria<ref>Hasija, T., Kadyan, V., & Guleria, K. (2021). Out Domain Data Augmentation on Punjabi Children Speech Recognition using Tacotron. In ''Proceedings of the International Conference on Mathematics and Artificial Intelligence (ICMAI 2021)'', Chengdu, China, 19–21 March 2021. [https://iopscience.iop.org/article/10.1088/1742-6596/1950/1/012044]</ref> used Tacotron to develop synthetic children's speech. The scarcity of children's speech data nevertheless persists; future work will need better acoustic features for children's speech as well as explicit pronunciation modelling<ref>Terblanche, C., Harty, M., Pascoe, M., & Tucker, B. V. (2022). A Situational Analysis of Current Speech-Synthesis Systems for Child Voices: A Scoping Review of Qualitative and Quantitative Evidence. ''Applied Sciences'', ''12''(11), 5623. [https://www.mdpi.com/2076-3417/12/11/5623 https://doi.org/10.3390/app12115623]</ref>.
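Given this data scarcity, a natural recipe is to adapt a model pre-trained on plentiful adult speech to a small child-speech corpus rather than training from scratch. The following is a minimal fine-tuning sketch of that idea, not the method of Hasija et al.: the acoustic model is a toy stand-in for a pre-trained Tacotron-style network, and random tensors stand in for a real child corpus.

<syntaxhighlight lang="python">
# Minimal fine-tuning sketch (illustrative only): adapt a TTS acoustic
# model "pre-trained" on adult speech by freezing its text encoder and
# updating only the decoder on a small child-speech corpus.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical acoustic model: phoneme IDs -> mel frames.
class ToyAcousticModel(nn.Module):
    def __init__(self, n_phonemes=64, n_mels=80):
        super().__init__()
        self.encoder = nn.Embedding(n_phonemes, 256)
        self.decoder = nn.Linear(256, n_mels)

    def forward(self, phonemes):                      # (batch, time)
        return self.decoder(self.encoder(phonemes))   # (batch, time, n_mels)

model = ToyAcousticModel()  # imagine this was pre-trained on adult speech

# Freeze the encoder: the child corpus is too small to retrain everything.
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)

# Stand-in for a scarce child corpus: 32 utterances of 50 frames each.
phonemes = torch.randint(0, 64, (32, 50))
mels = torch.randn(32, 50, 80)
loader = DataLoader(TensorDataset(phonemes, mels), batch_size=8)

for epoch in range(3):
    for phon_batch, mel_batch in loader:
        loss = nn.functional.l1_loss(model(phon_batch), mel_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
</syntaxhighlight>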
===== d. Interaction between AI and Humans =====
Once an agent is deployed, its interactions with users can be classified as “successful” or not, depending on the goals of the agent. Coupled with effective speech recognition capabilities, these interactions constitute a natural reward signal, which the agent can use to improve its speech synthesis and speech recognition capacities in a lifelong reinforcement learning setup; this remains an elusive goal for the field of affective computing. An overture to this exciting domain can already be found in intelligent dialog generation, where reinforcement learning is already used to adjust the linguistic style of an agent or to learn backchanneling responses. This paradigm is expected to be more widely used in TTS in the near future<ref name=":1" />.
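As a concrete illustration of interaction outcomes acting as a reward signal, the sketch below runs a minimal REINFORCE loop: the agent picks one of a few discrete speaking styles per interaction, observes a binary success reward, and updates its style policy. The styles, the simulated reward function, and all hyperparameters are hypothetical stand-ins for real user interactions.

<syntaxhighlight lang="python">
# Minimal REINFORCE sketch (illustrative, not a production recipe): learn
# which speaking style leads to successful interactions from binary rewards.
import torch
import torch.nn as nn

N_STYLES = 4                      # hypothetical discrete speaking styles
policy_logits = nn.Parameter(torch.zeros(N_STYLES))
optimizer = torch.optim.Adam([policy_logits], lr=0.05)

def run_interaction(style: int) -> float:
    """Stand-in for a real dialog: returns 1.0 on success, 0.0 otherwise.
    Here style 2 is (secretly) the one users respond best to."""
    return float(torch.rand(()) < (0.9 if style == 2 else 0.3))

baseline = 0.0   # running average reward, reduces gradient variance
for step in range(500):
    dist = torch.distributions.Categorical(logits=policy_logits)
    style = dist.sample()
    reward = run_interaction(style.item())
    baseline = 0.95 * baseline + 0.05 * reward
    loss = -(reward - baseline) * dist.log_prob(style)  # policy gradient
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(policy_logits.softmax(-1))  # mass should concentrate on style 2
</syntaxhighlight>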
==== 1.2 Better Representation Learning ====
Good representations of text and speech benefit neural TTS models and can improve the quality of synthesized speech. Initial explorations on text pre-training indicate that better text representations can indeed improve speech prosody. How to learn powerful representations for text/phoneme sequences, and especially for speech sequences, through unsupervised/self-supervised learning and pre-training is challenging and worth further exploration<ref name=":0" />.
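One common pattern for exploiting text pre-training, sketched below as a general idea rather than any specific system from the survey, is to feed contextual embeddings from a pre-trained language model such as BERT into a TTS front-end, for example a prosody predictor. The prosody head here is a hypothetical placeholder.

<syntaxhighlight lang="python">
# Sketch: use contextual embeddings from a pre-trained BERT model as an
# auxiliary input to a (hypothetical) per-token prosody predictor.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical head: per-token duration/pitch/energy from BERT features.
prosody_head = nn.Linear(bert.config.hidden_size, 3)

text = "Taking one second to check is fine."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state   # (1, tokens, 768)

prosody = prosody_head(hidden)                  # (1, tokens, 3)
print(prosody.shape)
</syntaxhighlight>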