Festival Speech Synthesis System (1997)


Introduction

Festival offers a general framework for building speech synthesis systems, as well as examples of various modules. It was originally developed by Alan W. Black, Paul Taylor and Richard Caley at the Centre for Speech Technology Research (CSTR) at the University of Edinburgh.[1] As a whole it offers full text-to-speech through a number of APIs: from the shell level, through a Scheme command interpreter, as a C++ library, from Java, and through an Emacs interface. Festival is multi-lingual (currently English, both British and American, and Spanish), though English is the most advanced; other groups release new languages for the system. Full tools and documentation for building new voices are available through Carnegie Mellon's FestVox project (http://festvox.org).
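
For example, the C++ library API can be used to embed synthesis directly in an application. The following is a minimal sketch along the lines of the example in the Festival manual; it assumes Festival and the Edinburgh Speech Tools are installed and that the kal_diphone voice is available:

  #include "festival.h"

  int main(int argc, char **argv)
  {
      int heap_size = 210000;   // default Scheme heap size
      int load_init_files = 1;  // load the standard init files

      festival_initialize(load_init_files, heap_size);

      // Select a voice, then synthesize some text and a file.
      festival_eval_command("(voice_kal_diphone)");
      festival_say_text("Hello world, this is Festival speaking.");
      festival_say_file("/etc/motd");

      festival_wait_for_spooler();  // block until audio playback finishes
      return 0;
  }

The program is compiled and linked against the Festival and Edinburgh Speech Tools libraries, as described in the manual; the same Scheme commands can also be typed interactively at the festival prompt.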

The system is written in C++ and uses the Edinburgh Speech Tools Library for its low-level architecture, with a Scheme (SIOD) based command interpreter for control. Documentation is given in the FSF texinfo format, from which a printed manual, info files and HTML can be generated.

Festival is free software. Festival and the speech tools are distributed under an X11-type licence allowing unrestricted commercial and non-commercial use alike.

Historical Context

Prior to the 1980s, early synthesis systems commonly employed a string re-writing mechanism as their primary data structure.[1] This mechanism stored linguistic representations as strings and involved rewriting these strings with extra symbols during processing. The main drawback of this approach was the merging of diverse elements, such as words, phrase symbols, stress symbols, and phonemes, into a single string. This approach proved clumsy because it necessitated parsing the string each time a module was called.

Between the 1980s and the mid-1990s, with advancements in programming, multi-level data structures (MLDS) were introduced into speech synthesis, notably in systems like Delta. MLDS organized linguistic information into separate streams, typically linear lists or arrays of linguistic elements. While it provided some structural organization, it faced limitations in representing non-linear structures, making the handling of tree-like structures challenging. Additionally, as new streams were added, the task of establishing connections to ensure full connectivity became increasingly complex.

In 1996, development of the Festival Speech Synthesis System commenced, and it quickly emerged as a versatile tool for creating new voices. Festival departed from linear lists by allowing graph structures, offering a more efficient representation. Furthermore, a single item could participate in several structures at once, enhancing the system's flexibility, and separate modules worked together on the same utterance structure to produce synthetic speech.[2] This enabled the handling of complex relations and the efficient calculation of information on the fly.

Since the 21st century began, Festival has continued its rapid development. Its open-source nature has made it a magnet for researchers and developers worldwide. They have made significant contributions to the system by introducing various speech synthesis engines and language models, thus enabling support for multiple languages and a wide array of application domains.

Key Innovations

The historical context above gives a basic understanding of early speech synthesis systems and their limitations. Over the years, there was a transition from string-based processing systems to systems based on multi-level data structures. Against this background, it becomes clear how Festival stood out from other contemporary speech synthesis systems.

Flexibility of Multiple Structures

While the Delta system, which was based on multi-level data structures (MLDS), had gained a strong reputation in this domain, its shortcomings were also noteworthy. The linear structure employed by such multi-layered systems constrains data management, and it restricts both the flexibility of data representation and the ability to adapt to complex linguistic relationships and structures.[3]

In contrast, Festival diverges significantly from MLDS-based systems in several respects, which also emerged as its key innovations. First of all, Festival liberates linguistic items from the constraints of linear lists, making it possible to use various graph structures such as trees and lists alongside other linguistic structures. Allowing these relations to intersect also reduces redundancy.

Furthermore, items within Festival can exist within multiple structures at the same time, leading to more efficient and adaptable representations. This strategy frees Festival from the constraints imposed by the linear framework of traditional MLDS-based systems, fundamentally transforming the field of linguistic data processing.[4]
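
To illustrate the idea, here is a deliberately simplified, hypothetical sketch in C++ (not Festival's actual utterance API, whose relations are far richer): a single item is reachable both through a flat syllable list and through a word-to-syllable structure, so it is stored once, and an update made through one view is visible through the other.

  #include <iostream>
  #include <memory>
  #include <string>
  #include <utility>
  #include <vector>

  // Miniature of the "one item, many relations" idea.
  struct Item {
      std::string name;
      explicit Item(std::string n) : name(std::move(n)) {}
  };
  using ItemPtr = std::shared_ptr<Item>;

  int main() {
      // One set of underlying items...
      auto syl1 = std::make_shared<Item>("h-ax");
      auto syl2 = std::make_shared<Item>("l-ow");

      // ...viewed through two relations without duplication:
      std::vector<ItemPtr> syllable_relation = {syl1, syl2};  // flat list
      std::vector<std::pair<std::string, std::vector<ItemPtr>>> syl_structure = {
          {"hello", {syl1, syl2}}                             // word -> syllables
      };

      // A change made through one relation is visible through the other,
      // because both relations share the same underlying item.
      syl1->name = "hh-ax";
      std::cout << syllable_relation[0]->name << " == "
                << syl_structure[0].second[0]->name << "\n";
      return 0;
  }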

Speed

In addition to its multi-structural flexibility, Festival also functions as a run-time system, which requires a significant portion of its code to be written in a compiled low-level language such as C or C++. This use of low-level languages lets Festival achieve the speed that is critical to its dual role as a research platform and a run-time system.[1] Festival's distinctive feature lies in combining a run-time system with an interpreter, allowing immediate adjustments and efficient comparisons of different algorithms during experiments. Unlike fully compiled systems, which limit real-time adaptation, Festival's interpreter lets researchers explore a wide range of algorithmic options dynamically, enhancing the system's adaptability and effectiveness in linguistic data processing.
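
As a concrete illustration, voices and parameters can be changed through the interpreter while a program is running, with no recompilation. The sketch below uses commands that appear in the standard Festival documentation (voice selection and the Duration_Stretch parameter); which voices are actually available depends on the installation.

  #include "festival.h"

  int main()
  {
      festival_initialize(1 /* load init files */, 210000 /* heap size */);

      // Compare two configurations of the same pipeline at run time.
      festival_eval_command("(voice_kal_diphone)");
      festival_eval_command("(Parameter.set 'Duration_Stretch 1.0)");
      festival_say_text("Testing the default duration setting.");

      // Slow speech down by fifty percent without recompiling anything.
      festival_eval_command("(Parameter.set 'Duration_Stretch 1.5)");
      festival_say_text("Testing a stretched duration setting.");

      festival_wait_for_spooler();
      return 0;
  }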

Multilingual Compatibility

Another innovative aspect of the system lies in its multilingual compatibility. Festival supports various languages and can be adapted to different linguistic contexts. It comes with support for English (British and American accents), Welsh and Spanish, and voice packages are available for various other languages, including Castilian Spanish, Czech, Finnish, Hindi, Italian, Marathi, Polish, Russian, and Telugu.[5]

Impact

Festival's Impact on Speech Technology

Before Festival, linguistic information was often stored as strings, with words, phrase symbols, accents and other elements mixed together, so the string had to be parsed every time a module was called, which was very clumsy. Later systems gave up string-based processing in favour of multi-level data structures (MLDS), as in Delta.[6] Their advantage is that information such as a word stream and a syllable stream is kept in separate, relatively fixed streams. However, MLDS also has disadvantages: when the number of streams grows, every new stream must be linked to the existing ones, which wastes time. Against this background, Festival's main impact on speech synthesis is threefold. First, it allows linguistic structures such as trees and lists, and lets items exist in one or more structures at the same time. Second, it allows any amount and any type of information to be attached to items, with no need to recompile the system for new kinds of stored information, which saves considerable effort and time. Third, it is flexible about programming language: both C++ and Scheme are supported, and there is also a Java interface, so programmers have greater freedom in deciding which language to use.[7]


The Festival system includes a unit selection method, which addresses the unnatural joins that arise in signal-processing-based diphone synthesis. It requires less signal processing than standard diphone synthesis, or ideally none at all. To synthesize a new utterance, a target specification, covering words, syllables, durations and so on, is usually first produced using statistical models. Sequences of candidate units are then searched for in a database of recorded speech, drawn from different locations in the database, to match that specification, helping to generate more natural, smooth, high-quality audio.[8] First, the unit selection algorithm allows the system to select the most appropriate speech unit to match the text to be synthesized. This makes synthesized speech more natural because it captures the variation and continuity of human speech rather than simply splicing phonemes together. Second, because unit selection has a larger selection space, it is usually able to produce higher-quality synthetic speech, meaning that systems using unit selection can generate sounds closer to real human pronunciation. Also important is that it enhances contextual coherence: unit selection takes the match between neighbouring units into account, which helps eliminate choppy or stilted output.
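
The heart of unit selection can be summarised as minimising the sum of a target cost (how well a candidate unit matches the predicted specification) and a join cost (how smoothly adjacent candidates concatenate), typically with a dynamic-programming (Viterbi) search over the candidate lattice. The sketch below is a deliberately simplified illustration of that idea using made-up acoustic features; it is not Multisyn's actual implementation.

  #include <cmath>
  #include <iostream>
  #include <limits>
  #include <vector>

  // A candidate or target unit, reduced to two toy features.
  struct Unit { double pitch; double duration; };

  // Target cost: distance between a candidate and the predicted specification.
  double target_cost(const Unit& t, const Unit& c) {
      return std::abs(t.pitch - c.pitch) + std::abs(t.duration - c.duration);
  }

  // Join cost: how audible the concatenation of two adjacent candidates would be.
  double join_cost(const Unit& a, const Unit& b) {
      return std::abs(a.pitch - b.pitch);
  }

  // Dynamic-programming search for the cheapest candidate sequence.
  std::vector<int> select_units(const std::vector<Unit>& targets,
                                const std::vector<std::vector<Unit>>& candidates) {
      const double INF = std::numeric_limits<double>::infinity();
      const size_t n = targets.size();
      std::vector<std::vector<double>> cost(n);
      std::vector<std::vector<int>> back(n);

      for (size_t t = 0; t < n; ++t) {
          cost[t].assign(candidates[t].size(), INF);
          back[t].assign(candidates[t].size(), -1);
          for (size_t j = 0; j < candidates[t].size(); ++j) {
              double tc = target_cost(targets[t], candidates[t][j]);
              if (t == 0) { cost[t][j] = tc; continue; }
              for (size_t i = 0; i < candidates[t - 1].size(); ++i) {
                  double c = cost[t - 1][i] + tc +
                             join_cost(candidates[t - 1][i], candidates[t][j]);
                  if (c < cost[t][j]) { cost[t][j] = c; back[t][j] = (int)i; }
              }
          }
      }

      // Trace back the cheapest path from the best final candidate.
      int best = 0;
      for (size_t j = 1; j < cost[n - 1].size(); ++j)
          if (cost[n - 1][j] < cost[n - 1][best]) best = (int)j;
      std::vector<int> path(n);
      for (int t = (int)n - 1; t >= 0; --t) { path[t] = best; best = back[t][best]; }
      return path;
  }

  int main() {
      std::vector<Unit> targets = {{100, 80}, {120, 90}};
      std::vector<std::vector<Unit>> db = {{{95, 85}, {130, 60}},
                                           {{118, 92}, {90, 90}}};
      for (int j : select_units(targets, db)) std::cout << j << " ";  // prints "0 0"
      std::cout << "\n";
      return 0;
  }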

Festival's Impact on Real Life

In terms of multi-modal speech, Festival-style speech synthesis can complement VR and AR devices, allowing users to have voice conversations with virtual objects in virtual or augmented reality environments. For example, in VR games users can interact with virtual characters by voice to increase the immersion of the game, and synthetic speech makes VR driving or flight practice more efficient and the scenes more realistic.

Speech synthesis can also be combined with situational awareness in car navigation systems, automatically adjusting the speed and clarity of voice instructions based on the vehicle's speed, location and traffic conditions. For example, on highways voice instructions can be faster, while in urban congestion the voice can be slowed down to give better navigation guidance without adding to the driver's stress.

Voice assistants such as Siri, Google Assistant and Alexa rely on speech synthesis technology of the kind Festival helped establish to provide users with a voice interface. Users can hold natural verbal conversations with these assistants, ask questions, give instructions and receive spoken responses, giving them a more intuitive experience. A voice assistant lets users ask questions and get real-time answers through voice; for example, users can ask for information about the weather or the news and get timely responses. Users can also set reminders, calendar events and task lists, and the assistant generates spoken reminders, allowing users to better organize their daily lives.

Future research

In this section, we propose a direction for future research. The Festival Speech Synthesis System is a powerful and versatile open-source text-to-speech (TTS) system. Although it supports multiple languages, the selection of pre-built voices for certain languages and dialects may be limited, and creating new voices can be a time-consuming process.[9]

Since the 2000s, advances in neural-network-based TTS have contributed greatly to reducing the time it takes to synthesize speech, increasing accuracy, and making speech data collection more effective. HMM-based speech synthesis has likewise shown promise in generating speech with diverse speaking styles while demanding less storage.

Since the 2010s, the market has become more active with the development of commercial TTS services from companies such as Google, Amazon, and Apple. These large-company speech synthesis services have helped collect speech for under-represented languages and dialects and have shortened the time it takes to build new voices, continuing the direction set by the Festival Speech Synthesis System.

In summary, the future of the Festival Speech Synthesis System, as of voice synthesis technology in general, rests on collecting large data sets for specific languages and dialects so that new voices can be built more quickly.

LLM Reviews

We used ChatGPT, developed by OpenAI, to improve the quality of the content and to make up for its shortcomings. Below is how we used ChatGPT.

  1. Act as a voice technology professor and review this website, grading it from 1-100.
  2. If you were a speech synthesis researcher, please evaluate the wiki page in terms of speech synthesis and give us feedback: which parts can we improve?
  3. If you have some voice background and want to jump into speech synthesis, does it help you? Any suggestions that would help you understand better? (We had GPT act as students like us, who already have a rough understanding of speech synthesis, to find out whether this wiki page is useful.)
  4. If you didn't know about speech synthesis before, would you have a comprehensive understanding after reading it? Which part do you think needs to be explained more clearly and understandably?

Team Members

Introduction - Page

Historical context - Jingxuan

Impact - Weihao

Innovation - Ting

Future research - Soogyeong

LLM reviews- together

References

  1. Taylor, P., Black, A. W., & Caley, R. (1998). The architecture of the Festival speech synthesis system. In The Third ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis.
  2. N.Kayte, S., Mundada, M., & Kayte, C. (2015). Speech Synthesis System for Marathi Accent using FESTVOX. International Journal of Computer Applications, 130(6), 38–42. https://doi.org/10.5120/ijca2015907024
  3. Hertz, S. R. (1990). The Delta programming language: an integrated approach to nonlinear phonology, phonetics, and speech synthesis. Papers in laboratory phonology, 1, 215-257.
  4. Black, A. W. (1999). The Festival Speech Synthesis System: system documentation, Edition 1.4, for Festival version 1.4.0. http://www.cstr.ed.ac.uk/projects/festival/manual/festival_toc.html
  5. Black, A., Taylor, P., Caley, R., Clark, R., Richmond, K., King, S., ... & Zen, H. (2001). The Festival speech synthesis system, version 1.4.2. Unpublished document available via http://www.cstr.ed.ac.uk/projects/festival.html
  6. Hertz, S. R. (1990). The Delta programming language: an integrated approach to non-linear phonology, phonetics and speech synthesis. In J. Kingston & M. E. Beckman (Eds.), Papers in Laboratory Phonology 1. Cambridge University Press.
  7. Taylor, P., Black, A. W., & Caley, R. (1998). The architecture of the Festival speech synthesis system. In The Third ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis, 147-152.
  8. Clark, R. A. J., Richmond, K., & King, S. (2007). Multisyn: Open-domain unit selection for the Festival speech synthesis system. Speech Communication, 49(4), 317–330. https://doi.org/10.1016/j.specom.2007.01.014
  9. Speech synthesis systems: disadvantages and limitations