Festival Speech Synthesis System (1997)

== Introduction ==
Festival offers a general framework for building speech synthesis systems, as well as examples of various modules. It was originally developed by Alan W. Black, Paul Taylor and Richard Caley at the Centre for Speech Technology Research (CSTR) at the University of Edinburgh.<ref name=":0">Taylor, P., Black, A. W., & Caley, R. (1998). The architecture of the Festival speech synthesis system. In ''The third ESCA/COCOSDA workshop (ETRW) on speech synthesis''.</ref> As a whole it offers full text-to-speech through a number of APIs: from the shell level, through a Scheme command interpreter, as a C++ library, from Java, and via an Emacs interface. Festival is multilingual (currently British and American English, and Spanish), though English is the most advanced; other groups release new languages for the system. Full tools and documentation for building new voices are available through Carnegie Mellon's [http://festvox.org FestVox project].
 
The system is written in C++ and uses the Edinburgh Speech Tools Library for its low-level architecture, with a Scheme (SIOD) based command interpreter for control. Documentation is provided in the FSF texinfo format, which can generate a printed manual, info files and HTML.
 
Festival is free software. Festival and the speech tools are distributed under an X11-type licence allowing unrestricted commercial and non-commercial use alike.<ref>The Centre for Speech Technology Research (2019), Festival., ''www.cstr.ed.ac.uk''.</ref>
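
As a minimal sketch of typical usage (based on the C++ API described in the Festival manual; library names, include paths and link flags vary by installation), a program can initialize the system, select a bundled voice through the Scheme command interpreter, and speak a string:

<syntaxhighlight lang="cpp">
// Minimal sketch of the Festival C++ API as documented in the Festival manual.
// Build roughly as: g++ hello.cpp -I<festival>/src/include -I<speech_tools>/include \
//                   -lFestival -lestools -lestbase -leststring   (paths vary by install)
#include <festival.h>

int main() {
    festival_initialize(true, 210000);              // load init files; Scheme heap size
    festival_eval_command("(voice_kal_diphone)");   // select a bundled American English voice
    festival_say_text("Hello from the Festival speech synthesis system.");
    festival_wait_for_spooler();                    // wait for audio playback to finish
    return 0;
}
</syntaxhighlight>

The same commands can be typed directly at Festival's interactive Scheme prompt, and <code>festival --tts file.txt</code> reads a text file aloud from the shell.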


== Historical Context ==
Prior to the 1980s, early synthesis systems commonly employed a string re-writing mechanism as their primary data structure.<ref name=":0" /> This mechanism stored linguistic representations as strings and involved rewriting these strings with extra symbols during processing. The main drawback of this approach was the merging of diverse elements, such as words, phrase symbols, stress symbols, and phonemes, into a single string. This approach proved clumsy because it necessitated parsing the string each time a module was called.
 
Between the 1980s and the mid-1990s, with advancements in programming, Multi-Level Data Structures (MLDS) were introduced into speech synthesis, notably in systems like Delta. MLDS organized linguistic information into separate streams, typically linear lists or arrays of linguistic elements. While it provided some structural organization, it faced limitations in representing non-linear structures, making the handling of tree-like structures challenging. Additionally, as new streams were added, the task of establishing connections to ensure full connectivity became increasingly complex.
 
In 1996, the development of the Festival Speech Synthesis System commenced, and it quickly emerged as a versatile tool for creating new voices. The Festival system introduced a departure from linear lists by allowing for graph structures, offering greater efficiency in representation. Furthermore, items could be included in different modules, enhancing the system's flexibility. Different modules worked in harmony to produce synthetic speech.<ref>N.Kayte, S., Mundada, M., & Kayte, C. (2015). Speech Synthesis System for Marathi Accent using FESTVOX. ''International Journal of Computer Applications'', ''130''(6), 38–42. <nowiki>https://doi.org/10.5120/ijca2015907024</nowiki></ref> This enabled the handling of complex relations and the efficient calculation of information on-the-fly.
 
Since the 21st century began, Festival has continued its rapid development. Its open-source nature has made it a magnet for researchers and developers worldwide. They have made significant contributions to the system by introducing various speech synthesis engines and language models, thus enabling support for multiple languages and a wide array of application domains.<ref>Rajan, B. K., Rijoy, V., Gopinath, D. P., & George, N. (2015). Duration modeling for text to speech synthesis system using festival speech engine developed for Malayalam language. ''2015 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2015]'', 1–5. <nowiki>https://doi.org/10.1109/ICCPCT.2015.7159332</nowiki></ref><ref>Clark, R. A. J., Richmond, K., & King, S. (2007). Multisyn: Open-domain unit selection for the Festival speech synthesis system. ''Speech Communication'', ''49''(4), 317–330. <nowiki>https://doi.org/10.1016/j.specom.2007.01.014</nowiki></ref>
 
== Key Innovations ==
The historical context above gives a basic understanding of early speech synthesis systems and their limitations. Over the years there was a transition from string-based processing systems to systems based on MLDS, and this makes clear how [https://www.cstr.ed.ac.uk/projects/festival/ Festival] stood out from other contemporary speech synthesis systems. While the Delta system, which was based on MLDS, had gained a strong reputation in this domain, its shortcomings were also noteworthy. The [[wikipedia:Linear_complex_structure|linear structure]] used by such multi-layered systems constrains data management and restricts the flexibility of data representation and the ability to model complex linguistic relationships and structures.<ref>Hertz, S. R. (1990). The Delta programming language: an integrated approach to nonlinear phonology, phonetics, and speech synthesis. ''Papers in laboratory phonology'', ''1'', 215-257.</ref>


=== Flexibility of Multiple Structures ===
In contrast, Festival diverges significantly from MLDS-based systems in several respects, and these differences emerged as its key innovations. First of all, Festival frees linguistic items from the constraints of linear lists, allowing the use of various graph structures such as trees, lists and other linguistic structures. These intersecting relations also reduce redundancy.


Furthermore, items within Festival can exist in multiple structures at the same time, leading to more efficient and adaptable representations. Because one item can be incorporated into several relations, information does not have to be duplicated across structures. This strategy allows Festival to overcome the constraints imposed by the linear framework of traditional MLDS-based systems and marked a fundamental shift in how linguistic data is processed.<ref>Black, A. W. (1999). "The Festival Speech Synthesis System," system documentation, Edition 1.4, for Festival version 1.4.0. http://www.cstr.ed.ac.uk/projects/festival/manual/festival_toc.html</ref>
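
The toy sketch below (an illustration of the idea, not Festival's actual data structures or API) shows the same item objects shared between a flat word relation and a tree-shaped phrase relation, so anything stored on an item is visible from either structure:

<syntaxhighlight lang="cpp">
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Item {
    std::string name;
    std::vector<std::shared_ptr<Item>> children;   // used by tree-shaped relations
};

int main() {
    auto det  = std::make_shared<Item>(Item{"the", {}});
    auto noun = std::make_shared<Item>(Item{"synthesis", {}});

    // Relation 1: a flat, linear list of words.
    std::vector<std::shared_ptr<Item>> word_relation = {det, noun};

    // Relation 2: a small syntax-like tree that reuses the *same* item objects,
    // so no information has to be copied between the two structures.
    auto np = std::make_shared<Item>(Item{"NP", {det, noun}});

    std::cout << "Word relation: ";
    for (const auto& w : word_relation) std::cout << w->name << ' ';
    std::cout << "\nTree relation: " << np->name << " ->";
    for (const auto& c : np->children) std::cout << ' ' << c->name;
    std::cout << '\n';
}
</syntaxhighlight>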


=== New Method for Concatenative Synthesis ===
In terms of concatenative synthesis, beyond the conventional approach of a single-instance diphone-based method<ref>Beutnagel, M. C., & Conkie, A. (1999, September). Interaction of units in a unit selection database. In ''EUROSPEECH'' (Vol. 99, pp. 1063-1066).</ref> with an inventory comprising one recording of each diphone type, Festival introduced a novel "clunits" technique. This method uses an inventory of units recorded within natural sentences and employs a limited form of unit selection.<ref>Clark, R. A. J., Richmond, K., & King, S. (2007). Multisyn: Open-domain unit selection for the Festival speech synthesis system. ''Speech Communication'', ''49''(4), 317–330. <nowiki>https://doi.org/10.1016/j.specom.2007.01.014</nowiki></ref>


More specifically, the fundamental strategy is to cluster units of a given type, such as a particular phone, according to their pronunciation and intonation context. This clustering relies on questions about linguistic features, such as whether the unit appears at the end of a phrase or in a stressed syllable. Festival then builds a decision tree for each phone in the database, with each leaf holding the list of database units selected by the questions leading to that leaf. During synthesis, Festival uses the appropriate decision tree to find the best cluster of candidate units for each target specification. A search is then conducted to determine the most suitable path through the candidate units, taking into account each unit's distance from its cluster centre and the cost of joining adjacent units.<ref>Black, A. W., & Taylor, P. A. (1997). ''Automatically clustering similar units for unit selection in speech synthesis.'' <nowiki>https://era.ed.ac.uk/handle/1842/1236</nowiki></ref>


This new technique can avoid the issue of estimating weights in a feature-based target distance measure while maintaining sensitivity to prosodic and phonetic distinctions. Additionally, it effectively manages variability in unit sparseness, with the tree-building algorithm initiating a cluster split only when significant variations are identifiable.
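
The sketch below illustrates this cluster-based ("clunits") selection in simplified form. The decision "tree", the features and the costs are invented stand-ins, but the flow — route each target to a cluster of candidate units, then run a dynamic-programming search that trades off each unit's distance from its cluster centre against the cost of joining adjacent units — follows the description above:

<syntaxhighlight lang="cpp">
#include <cmath>
#include <iostream>
#include <limits>
#include <string>
#include <vector>

struct Unit {                  // one recorded database unit
    int id;
    double dist_to_centre;     // acoustic distance from its cluster centre (target cost)
    double start_f0, end_f0;   // toy join features: pitch at the unit edges
};

struct Target {                // one target specification from the text front end
    std::string phone;
    bool phrase_final;
};

// Stand-in for Festival's per-phone decision tree: real trees are learned from data
// and ask context questions (stress, phrase position, ...) to reach a leaf cluster.
std::vector<Unit> candidate_cluster(const Target& t, const std::vector<Unit>& db) {
    std::vector<Unit> cluster;
    for (const Unit& u : db) {
        bool leaf = t.phrase_final ? (u.id % 2 == 0) : (u.id % 2 == 1);  // fake question
        if (leaf) cluster.push_back(u);
    }
    return cluster;
}

// Cost of concatenating two units: here just the pitch mismatch at the join.
double join_cost(const Unit& a, const Unit& b) {
    return std::fabs(a.end_f0 - b.start_f0);
}

// Dynamic-programming (Viterbi-style) search for the cheapest path through
// the candidate clusters of consecutive targets.
std::vector<int> select_units(const std::vector<std::vector<Unit>>& clusters) {
    const size_t n = clusters.size();
    std::vector<std::vector<double>> cost(n);
    std::vector<std::vector<int>> back(n);
    for (size_t i = 0; i < n; ++i) {
        cost[i].assign(clusters[i].size(), std::numeric_limits<double>::infinity());
        back[i].assign(clusters[i].size(), 0);
    }
    for (size_t j = 0; j < clusters[0].size(); ++j)
        cost[0][j] = clusters[0][j].dist_to_centre;
    for (size_t i = 1; i < n; ++i)
        for (size_t j = 0; j < clusters[i].size(); ++j)
            for (size_t k = 0; k < clusters[i - 1].size(); ++k) {
                double c = cost[i - 1][k] + clusters[i][j].dist_to_centre
                         + join_cost(clusters[i - 1][k], clusters[i][j]);
                if (c < cost[i][j]) { cost[i][j] = c; back[i][j] = static_cast<int>(k); }
            }
    size_t best = 0;                                   // trace back the cheapest path
    for (size_t j = 1; j < cost[n - 1].size(); ++j)
        if (cost[n - 1][j] < cost[n - 1][best]) best = j;
    std::vector<int> path(n);
    int j = static_cast<int>(best);
    for (int i = static_cast<int>(n) - 1; i >= 0; --i) {
        path[i] = clusters[i][j].id;
        j = back[i][j];
    }
    return path;
}

int main() {
    std::vector<Unit> db = {{0, 0.2, 118, 120}, {1, 0.5, 112, 110},
                            {2, 0.1, 121, 100}, {3, 0.4, 109, 131}};
    std::vector<Target> targets = {{"ae", false}, {"t", true}};
    std::vector<std::vector<Unit>> clusters;
    for (const Target& t : targets) clusters.push_back(candidate_cluster(t, db));
    for (int id : select_units(clusters)) std::cout << "selected unit " << id << '\n';
}
</syntaxhighlight>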


=== Multilingual Compatibility ===
Another innovation of this system lies in its multilingual compatibility. Festival supports various languages and can be adapted to different linguistic contexts. It comes with support for English (British and American accents), [[wikipedia:Welsh_language|Welsh]] and [[wikipedia:Spanish_language|Spanish]]. Voice packages are available for various other languages, including [[wikipedia:Spanish_language|Castilian Spanish]], [[wikipedia:Czech_language|Czech]], [[wikipedia:Finnish_language|Finnish]], [[wikipedia:Hindi_language|Hindi]], [[wikipedia:Italian_language|Italian]], [[wikipedia:Marathi_language|Marathi]], [[wikipedia:Polish_language|Polish]], [[wikipedia:Russian_language|Russian]], and [[wikipedia:Telugu_language|Telugu]].<ref>Black, A., Taylor, P., Caley, R., Clark, R., Richmond, K., King, S., ... & Zen, H. (2001). The Festival speech synthesis system, version 1.4.2. Unpublished document available via http://www.cstr.ed.ac.uk/projects/festival.html, ''6'', 365-377.</ref>
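
As a rough illustration (voice names depend on which packages are installed; <code>el_diphone</code> and <code>rab_diphone</code> are assumed here as the Castilian Spanish and British English diphone voices shipped with older Festival releases), switching language amounts to selecting a different voice before synthesizing:

<syntaxhighlight lang="cpp">
#include <festival.h>

int main() {
    festival_initialize(true, 210000);
    festival_eval_command("(voice_el_diphone)");   // Castilian Spanish diphone voice, if installed
    festival_say_text("hola mundo");
    festival_eval_command("(voice_rab_diphone)");  // British English diphone voice
    festival_say_text("hello world");
    festival_wait_for_spooler();                   // let both utterances finish playing
    return 0;
}
</syntaxhighlight>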


== Impact ==


=== Festival Speech Synthesis' Impacts on Speech Technology ===
Before Festival, linguistic representations were often stored as strings in which words, phrase symbols, accents and other elements were mixed together, which meant the string had to be parsed every time a module was called; this was very clumsy. Later systems gave up string-based processing in favour of MLDS, such as Delta<ref>Hertz, S. R. (1990). The Delta programming language: an integrated approach to non-linear phonology, phonetics and speech synthesis. In John Kingston and Mary E. Beckman (Eds.), ''Papers in Laboratory Phonology 1''. Cambridge University Press.</ref>. Their advantage is that they maintain separate, relatively fixed streams, such as a word stream and a syllable stream. However, MLDS also has disadvantages: when the number of streams is large, every newly created stream has to be linked to the existing ones, which wastes time. The main impact of Festival on speech synthesis is threefold. First, it allows linguistic structures such as trees and lists, and lets items belong to one or more of these relations at the same time. Second, any amount and type of information can be attached to items, with no need to recompile in order to store new information, which saves considerable effort and time. Third, it is more flexible in terms of programming languages: both C++ and Scheme are supported, and a Java interface may also be opened up, so programmers have greater freedom in choosing their language.<ref name=":0" />
 
The Festival system also includes a unit selection algorithm, which addresses the problem of unnatural-sounding joins in concatenative synthesis. It is an effective way to make the voices created with Festival sound as natural and pleasant as possible, although rare units still need special handling in many cases<ref>Möbius, B. Rare Events and Closed Domains: Two Delicate Concepts in Speech Synthesis. ''International Journal of Speech Technology'' 6, 57–71 (2003). <nowiki>https://doi.org/10.1023/A:1021052023237</nowiki></ref>. It requires less signal processing than standard diphone synthesis, or ideally none at all. To synthesize a new utterance, a target specification covering words, syllables, durations and so on is usually produced first, often with statistical models. Sequences of units are then searched for at different locations in the database to match the targets, helping to generate more natural, smooth, high-quality audio.<ref>Clark, R. A. J., Richmond, K., & King, S. (2007). Multisyn: Open-domain unit selection for the Festival speech synthesis system. ''Speech Communication'', ''49''(4), 317–330. <nowiki>https://doi.org/10.1016/j.specom.2007.01.014</nowiki></ref> First, the unit selection algorithm lets the system choose the most appropriate speech unit for the text being synthesized. This makes synthesized speech more natural because it captures the variation and continuity of human speech rather than simply splicing phonemes together. Second, because unit selection has a larger selection space, it can usually produce higher-quality synthetic speech that is closer to real human pronunciation. Also important is contextual coherence: unit selection takes the match with surrounding units into account, which helps eliminate choppy or stilted output. A more detailed account can be found in Hunt and Black's original paper.<ref>Hunt, A. J., & Black, A. W. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. ''1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings'', ''1'', 373–376. <nowiki>https://doi.org/10.1109/ICASSP.1996.541110</nowiki></ref>
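
In the Hunt and Black formulation cited above, the selected unit sequence <math>u_1 \dots u_n</math> for target specifications <math>t_1 \dots t_n</math> is the one that minimises the sum of target costs (how well each candidate unit matches its target) and concatenation costs between adjacent units:

:<math>C(t_1^n, u_1^n) = \sum_{i=1}^{n} C^{t}(t_i, u_i) + \sum_{i=2}^{n} C^{c}(u_{i-1}, u_i)</math>

The clunits approach described under Key Innovations effectively replaces the feature-weighted target cost <math>C^{t}</math> with a candidate unit's distance from the centre of its decision-tree cluster, which avoids estimating the feature weights directly.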
 
=== Festival Speech Synthesis' Impacts on Real Life ===
In terms of [[Multimodal Speech Recognition|multi-modal speech]], Festival's synthesis technology can complement XR devices, allowing users to hold voice conversations with virtual objects in virtual or augmented reality environments. For example, in VR games users can talk with virtual characters to increase immersion, and spoken prompts can make VR driving or flight training more efficient and realistic.
 
Speech synthesis of this kind can also be used in car navigation systems that automatically adjust the speed and clarity of voice instructions based on the vehicle's speed, location and traffic conditions. For example, on highways voice instructions can be delivered faster, while in urban congestion the voice can be slowed down to give clearer guidance and reduce the driver's stress.
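
A hypothetical sketch of this idea is shown below; the navigation logic is invented for illustration, while <code>Duration_Stretch</code> is a standard Festival parameter that scales predicted segment durations:

<syntaxhighlight lang="cpp">
#include <festival.h>

// Speak a navigation prompt, slowing the voice down in slow city traffic.
void speak_instruction(const char* text, double vehicle_speed_kmh) {
    const char* rate = (vehicle_speed_kmh < 50.0)
        ? "(Parameter.set 'Duration_Stretch 1.3)"   // 30% longer durations: slower, clearer
        : "(Parameter.set 'Duration_Stretch 1.0)";  // normal rate on the highway
    festival_eval_command(rate);
    festival_say_text(text);
}

int main() {
    festival_initialize(true, 210000);
    speak_instruction("Turn left in two hundred metres", 30.0);   // congested city street
    speak_instruction("Keep right at the fork", 110.0);           // highway
    festival_wait_for_spooler();
    return 0;
}
</syntaxhighlight>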
 
[[Commerical TTS - Google, Amazon, Apple and Microsoft (2010s)|Voice assistants]] such as Siri, Google Assistant and Alexa build on the kind of speech synthesis technology that systems like Festival helped establish to provide users with a voice interface. Users can hold natural spoken conversations with these assistants, ask questions, give instructions and receive spoken responses, which makes the experience more intuitive. The assistant can answer questions in real time, for example about the weather or the news, and users can set reminders, calendar events and task lists, with the assistant generating spoken reminders that help them organize their daily lives.


== Future research ==
In this section we propose a direction for future research. The Festival speech synthesis system is a powerful and versatile open-source text-to-speech (TTS) system. Although it supports multiple languages, the selection of pre-built voices for certain languages and dialects can be limited, and creating new voices can be a time-consuming process.<ref>Karolina K., Pawel K. and Aleksandra W. (2018), Speech synthesis systems: disadvantages and limitations. ''International Journal of Engineering & Technology'', Vol. 7, No. 2.28, 234-239, <nowiki>https://doi.org/10.14419/ijet.v7i2.28.12933</nowiki></ref><ref>Acero, A. (2002), An overview of text-to-speech synthesis, ''IEEE Workshop on Speech Coding Proceedings'', 17-20, doi: 10.1109/SCFT.2000.878372</ref>


Since the 2000s, [[Advancements in Neural Network-Based TTS (2000s)|advancements in neural network-based TTS]] have contributed greatly to reducing the time it takes to synthesize speech, increasing accuracy, and making speech data collection more effective. [[Hidden Markov Models in Speech Synthesis|HMM-based speech synthesis]] offers the capability to produce speech in various speaking styles while demanding less storage space.<ref>Tan, X., Qin, T., Soong, F., & Liu, T.-Y. (2021). A Survey on Neural Speech Synthesis. doi: 10.48550/arXiv.2106.15561.</ref><ref>G. Hinton et al. (2012), "Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups," ''IEEE Signal Processing Magazine'', vol. 29, no. 6, pp. 82-97, doi: 10.1109/MSP.2012.2205597.</ref>

Since the 2010s, the market has become more active with the development of [[Commerical TTS - Google, Amazon and Apple (2010s)|commercial TTS services]] from companies such as Google, Amazon, and Apple. Speech synthesis services led by large companies have helped collect speech for under-represented languages and dialects and have shortened the time needed to build new voices compared with the original Festival workflow.

In summary, the Festival speech synthesis system highlights how much voice synthesis technology depends on collecting large data sets for specific languages. Although pre-built speech data for particular languages and dialects is being collected by large companies, limitations remain. However, as data on minority languages gradually accumulates with the development of [[Advancements in AI TTS (2020s)|AI TTS]], the field of speech synthesis is expected to develop even more rapidly.<ref>Dan. B., Peter. C. (2021), Challenges for Edge-AI Implementation of Text-To-Speech Synthesis, ''IEEE International Conference on Consumer Electronics (ICCE)'', doi: 10.1109/ICCE50685.2021.9427679</ref>
== LLM Reviews ==
We used ChatGPT, developed by OpenAI, to improve content quality and address shortcomings. We asked GPT to act as a professor, a speech synthesis researcher, a newcomer who wants to enter this field, and a reader with no background in speech synthesis. Overall we received positive feedback: according to GPT, the content of our wiki page is informative and well-structured, providing a reasonably comprehensive and insightful overview of the Festival Speech Synthesis System, including its historical context, key innovations, impacts on speech technology, real-life applications, and future research directions. However, we were mindful that GPT tends to give relatively favourable comments, so we were careful to take from it only what was genuinely useful.
The prompts we raised and the corresponding improvement approaches provided by GPT are as follows:
# '''Act as a voice technology professor and review this website, grade from 1-100: 92'''
#* We asked ChatGPT to act as a professor, to estimate what grade we would receive, and to give feedback from the perspective of someone with a professional background. We received generally good feedback and several points for improvement. One suggestion was ''"You present a direction for future research, acknowledging the need for more voices in various languages and dialects. However, the section could benefit from more details on the specific areas of potential research".'' We respect this suggestion, but felt that such detail would be somewhat off topic for this page, so we did not change the content.
# '''If you were a speech synthesis researcher, please evaluate the wiki page in terms of speech synthesis, and give us the feedback, which part can we improve?'''
#* The reason for the GPT review from the standpoint of a speech synthesis specialist is to ensure precise terminology, thorough historical context, and comprehensive coverage of technological advancements in the field.
#* Language Clarity: To ensure accessibility for a broader audience, consider simplifying complex technical terms and concepts, and provide explanations where necessary. We agree with this point, so we reviewed the whole page and tried to use clear and concise language to improve overall comprehensibility.
#* Inclusion of Recent Developments: Continuously update the page to include recent advancements in speech synthesis technology, such as the integration of neural network-based TTS and the latest developments in the commercial TTS services sector. Providing a section dedicated to recent developments can help readers grasp the current state and trends of the field more effectively.
#'''If you have some voice background and want to jump into speech synthesis, does it help you? Any suggestions?'''
#* We asked GPT to act as a student like us, with some rough prior knowledge of speech synthesis, to find out whether this wiki page is helpful and to provide suggestions.
#* Technical Details: Adding more technical details about the inner workings of Festival, including the algorithms and models it employs, would be beneficial and would help students grasp the core technology behind the system. In response to this feedback, we added more technical detail to the Key Innovations section.
#* Demonstrations: Including practical demonstrations or examples of how Festival is used for text-to-speech conversion would be valuable. This could involve audio samples or step-by-step walkthroughs of the process.
#* Challenges and Solutions: Discussing specific challenges in speech synthesis and how Festival addresses them, for example handling awkward joins with the unit selection algorithm, would provide deeper insight.
#'''If you didn’t know about speech synthesis before, will you have a comprehensive understanding after reading this wiki page? Which part do you think needs to be explained more clearly and understandably?'''
#*The reason for the GPT review from the viewpoint of a non-professional is to make sure this wiki page can assist individuals interested in voice technology, especially those with limited prior knowledge, in acquiring knowledge more effectively.
#*Enhancing Clarity with Examples: Incorporating additional real-world illustrations and simplifying technical descriptions can enhance the content's accessibility to a wider audience, particularly those who lack familiarity with speech synthesis and related technologies. In response to this feedback, we have heeded the suggestion and enriched the "Impacts" section with more detailed explanations for improved comprehension.
== Team Members ==
Introduction - Page
 
Historical context - Jingxuan
 
Impact - Weihao
 
Innovation - Ting
 
Future research - Soogyeong
 
LLM reviews- together
 
== References ==
<references />
