== Impact ==

=== Festival Speech Synthesis' Impacts on Speech Technology ===

Before the advent of the Festival speech synthesis system, utterances were often stored as strings in which phrase symbols, accents and other elements were mixed together, so the string had to be re-parsed every time a module was called, which was very clumsy. Later systems gave up string-based processing in favour of multi-level data structures (MLDS), such as Delta<ref>Susan R. Hertz. The delta programming language: an integrated approach to non-linear phonology, phonetics and speech synthesis. In John Kingston and Mary E. Beckman, editors, ''Papers in Laboratory Phonology 1''. Cambridge University Press, 1990.</ref>. Their advantage is that the utterance is represented as a set of streams, such as a word stream and a syllable stream, which are relatively fixed. However, MLDS also have disadvantages: when the number of streams grows, every newly created stream has to be linked to all the existing ones, which wastes time.

Festival's main impact on speech synthesis architecture is threefold. First, it allows linguistic structures such as trees and lists to exist in one or more relations over the same items at the same time. Second, items may carry any amount and any type of information, and nothing needs to be recompiled to store new kinds of information, which saves a great deal of trouble and time. Third, it is flexible about programming languages: both C++ and Scheme are supported, and Java support may be added in the future, so programmers have greater freedom in choosing a language.<ref>Taylor, P., Black, A. W., & Caley, R. (1998). The architecture of the Festival speech synthesis system. ''The Third ESCA/COCOSDA Workshop on Speech Synthesis'', 147–152.</ref>
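As a concrete illustration, the short interactive session below uses Festival's Scheme interface (the commands follow the Festival manual; the exact features available depend on the loaded voice) to build an utterance, synthesize it, and walk one of its relations:

<syntaxhighlight lang="scheme">
;; At the festival> prompt: build an utterance and run the full
;; text-to-speech pipeline on it.
(set! utt1 (Utterance Text "Hello world"))
(utt.synth utt1)   ; text analysis, prosody prediction, waveform generation
(utt.play utt1)    ; play the resulting waveform

;; The same utterance simultaneously holds several relations
;; (Word, Syllable, Segment, SylStructure, ...), and items in a
;; relation carry named features that are added at run time.
(mapcar
 (lambda (word)
   (list (item.name word)          ; the word itself, e.g. "hello"
         (item.feat word "pos")))  ; its part-of-speech feature
 (utt.relation.items utt1 'Word))
</syntaxhighlight>

Because the different relations are views over the same underlying items, one module can read a word's features while another annotates its syllables, without the stream-linking overhead that MLDS incur.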
Festival also provides a unit selection method, which addresses the unnatural joins that signal processing introduces in diphone synthesis. It is a way of making the voices created with Festival sound as natural and pleasant as possible, although rare units still need special handling<ref>Möbius, B. (2003). Rare events and closed domains: Two delicate concepts in speech synthesis. ''International Journal of Speech Technology'', 6, 57–71. <nowiki>https://doi.org/10.1023/A:1021052023237</nowiki></ref>. It requires less signal processing than standard diphone synthesis, and ideally none at all. To synthesize a new utterance, a target specification covering words, syllables, durations and so on is usually produced first by a statistical model. Sequences of candidate units are then retrieved from different locations in a large speech database and matched against this specification, which helps generate more natural, smoother, high-quality audio.<ref>Clark, R. A. J., Richmond, K., & King, S. (2007). Multisyn: Open-domain unit selection for the Festival speech synthesis system. ''Speech Communication'', 49(4), 317–330. <nowiki>https://doi.org/10.1016/j.specom.2007.01.014</nowiki></ref>

The benefits are threefold. First, unit selection lets the system pick the speech unit that best matches the text to be synthesized, which makes the output more natural because it captures the variation and continuity of human speech rather than simply splicing phonemes together. Second, because unit selection draws on a much larger selection space, it usually produces higher-quality synthetic speech, closer to real human pronunciation. Third, it enhances contextual coherence: the match with surrounding units is taken into account, which helps eliminate choppy or stilted output. A more detailed account is given by Hunt and Black (1996)<ref>Hunt, A. J., & Black, A. W. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. ''1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings'', 1, 373–376. <nowiki>https://doi.org/10.1109/ICASSP.1996.541110</nowiki></ref>, and a schematic sketch of the underlying cost computation appears at the end of this section.

=== Festival Speech Synthesis' Impacts on Real Life ===

In terms of [[Multimodal Speech Recognition|multi-modal speech]], Festival's synthesis technology can complement XR devices, allowing users to hold voice conversations with virtual objects in virtual or augmented reality environments. For example, in VR games users can interact with virtual characters by voice to deepen immersion, and in VR driving or flight training spoken prompts make the practice more efficient and the scene more realistic. Speech synthesis of this kind can also be used in car navigation systems to adjust the speed and clarity of voice instructions automatically, based on the vehicle's speed, location and traffic conditions: on the highway instructions can be delivered faster, while in urban congestion the voice can be slowed down to give clearer guidance and reduce the driver's stress. [[Commerical TTS - Google, Amazon, Apple and Microsoft (2010s)|Voice assistants]] such as Siri, Google Assistant and Alexa rely on the kind of speech synthesis technology that Festival helped establish to provide users with a voice interface. Users can hold natural spoken conversations with these assistants, asking questions, issuing instructions and receiving spoken responses, which makes the experience more intuitive. They can request real-time information such as the weather or the news and get timely answers, or set reminders, calendar events and task lists, with the assistant generating voice reminders that help them organize their daily lives.
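As promised above, here is a schematic sketch of the cost computation behind unit selection, written in plain Scheme. It is purely illustrative: the features, weights and the greedy left-to-right search are invented for readability, and this is not Festival's Multisyn implementation, which performs a full Viterbi search over target and join costs.

<syntaxhighlight lang="scheme">
;; A candidate or target unit: (phone duration-ms pitch-hz).
(define (unit-dur u) (cadr u))
(define (unit-f0 u) (caddr u))

;; Target cost: how far a candidate is from the specification the
;; statistical model predicted (here: desired duration and F0).
(define (target-cost target candidate)
  (+ (abs (- (unit-dur target) (unit-dur candidate)))
     (* 0.5 (abs (- (unit-f0 target) (unit-f0 candidate))))))

;; Join cost: pitch mismatch at the concatenation point with the
;; previously chosen unit (zero cost for the first unit).
(define (join-cost prev candidate)
  (if prev
      (abs (- (unit-f0 prev) (unit-f0 candidate)))
      0))

;; Greedy selection: for each target, pick the database candidate
;; minimising target cost plus join cost with the unit chosen so far.
(define (select-units targets candidates-per-target)
  (let loop ((ts targets) (cs candidates-per-target) (prev #f) (acc '()))
    (if (null? ts)
        (reverse acc)
        (let* ((scored (map (lambda (c)
                              (cons (+ (target-cost (car ts) c)
                                       (join-cost prev c))
                                    c))
                            (car cs)))
               (best (cdr (assoc (apply min (map car scored)) scored))))
          (loop (cdr ts) (cdr cs) best (cons best acc))))))

;; Tiny worked example: two targets, two database candidates each.
(define targets    '((ay 120 110) (n 80 105)))
(define candidates '(((ay 130 112) (ay 90 180))
                     ((n 75 108)  (n 60 140))))
(display (select-units targets candidates)) (newline)
;; => ((ay 130 112) (n 75 108)), the path with the smoothest joins.
</syntaxhighlight>

In the real system the costs combine many weighted features (phonetic context, stress, prosodic distance, spectral mismatch at the join), and the search considers all candidate paths rather than committing greedily.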