== Historical Context ==

=== Early Speech Synthesis Development ===
Speech synthesis, also known as text-to-speech (TTS), is the artificial generation of human speech by computer: the automated conversion of written text into an acoustic speech signal. The earliest systems, often referred to as "têtes parlantes" (talking heads), emerged in the eighteenth century <ref>Kuligowska, K., Kisielewicz, P., & Włodarz, A. (2018). Speech synthesis systems: disadvantages and limitations. ''Int J Res Eng Technol (UAE)'', ''7'', 234–239.</ref>. The mechanical nature of these early systems limited their ability to reproduce natural-sounding human speech. Before HMMs entered speech synthesis, several techniques were used to generate synthetic speech: formant synthesis, articulatory synthesis, concatenative synthesis, and unit selection synthesis <ref>Tabet, Y., & Boughazi, M. (2011, May). Speech synthesis techniques. A survey. In ''International Workshop on Systems, Signal Processing and their Applications, WOSSPA'' (pp. 67–70). IEEE.</ref>.

==== Formant Synthesis ====
Formant synthesis generates speech by modeling the resonances of the vocal tract, known as formants, in some cases using a combination of parallel and cascade resonators. A notable example is the '''Klatt synthesizer''', which used 39 parameters updated every 5 milliseconds. While formant synthesis can produce intelligible speech, it is often judged less natural than speech produced by other methods.
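The cascade arrangement can be made concrete with a small sketch. The following Python fragment (NumPy assumed) passes an impulse-train source through three second-order resonators; the formant frequencies and bandwidths are illustrative values for an /a/-like vowel, not Klatt's actual parameter set:

<syntaxhighlight lang="python">
import numpy as np

def resonator(x, freq, bw, fs):
    """Second-order IIR resonator: models one formant as a damped resonance."""
    r = np.exp(-np.pi * bw / fs)       # pole radius set by the bandwidth
    theta = 2 * np.pi * freq / fs      # pole angle set by the center frequency
    a1, a2 = 2 * r * np.cos(theta), -r * r
    gain = 1 - a1 - a2                 # rough normalization (unity gain at DC)
    y, y1, y2 = np.zeros_like(x), 0.0, 0.0
    for n in range(len(x)):
        y[n] = gain * x[n] + a1 * y1 + a2 * y2
        y2, y1 = y1, y[n]
    return y

fs = 16000
t = np.arange(int(0.5 * fs))
# An impulse train at roughly 120 Hz stands in for the glottal source.
source = (t % (fs // 120) == 0).astype(float)
speech = source
for freq, bw in [(700, 130), (1220, 70), (2600, 160)]:  # assumed F1-F3 values
    speech = resonator(speech, freq, bw, fs)
</syntaxhighlight>

A full formant synthesizer updates such parameters frame by frame (every 5 ms in the Klatt case) and adds a parallel resonator branch for fricatives and other non-vocalic sounds.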
==== Articulatory Synthesis ====
Articulatory synthesis seeks to generate speech by directly modeling the movements of the human articulators. The method offers the potential for high-quality speech but is challenging to implement. Articulatory control parameters include factors such as lip aperture, tongue position, and height. However, acquiring accurate articulatory data, traditionally through X-ray photography, and balancing precision against simplicity remain difficult, and the results of articulatory synthesis do not always match the quality of other synthesis methods.

==== Concatenative Synthesis ====
Concatenative synthesis sidesteps the difficulty of generating speech parameters from an input text specification. It takes a data-driven approach, joining natural, prerecorded speech units such as words, syllables, or diphones. Diphones, the most widely used unit, run from the middle of one phoneme to the middle of the next, capturing the coarticulation between them. Building a diphone inventory involves recording all phonemes in their possible contexts, then labeling and segmenting the diphones <ref>Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech Synthesis Based on Hidden Markov Models. ''Proceedings of the IEEE'', ''101''(5), 1234–1252. <nowiki>https://doi.org/10.1109/JPROC.2013.2251852</nowiki></ref>. The pitch and duration of each diphone must then be adjusted to match the prosodic part of the specification. The approach balances memory requirements, complexity, and naturalness.

==== Unit Selection Synthesis ====
During the 1990s, unit selection synthesis, also known as corpus-based concatenative synthesis, emerged, driven by the growing power of computer hardware and the increasing availability of speech and linguistic resources <ref>Hunt, A. J., & Black, A. W. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. ''1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings'', ''1'', 373–376. <nowiki>https://doi.org/10.1109/ICASSP.1996.541110</nowiki></ref>. Unit selection addresses the problems that prosodic modification causes in concatenative synthesis: the inventory stores multiple instances of each unit with varying prosody, allowing a closer match to the target prosody. A search algorithm then selects the sequence of units that best matches the target specification by minimizing a target cost and a join cost. However, unit selection synthesis is limited in expressiveness, customization, and prosody control by its reliance on recorded speech units and extensive databases <ref>Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech Synthesis Based on Hidden Markov Models. ''Proceedings of the IEEE'', ''101''(5), 1234–1252. <nowiki>https://doi.org/10.1109/JPROC.2013.2251852</nowiki></ref>.
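The selection step is essentially a shortest-path search through a lattice of candidate units. The sketch below (plain Python) is a minimal illustration under simplified assumptions: each unit is reduced to a (pitch in Hz, duration in ms) pair, and the cost functions are toy distances rather than the weighted spectral and prosodic measures a real system would use:

<syntaxhighlight lang="python">
def select_units(targets, candidates, w_join=1.0):
    """Pick one candidate per target, minimizing target cost plus join cost."""
    def target_cost(t, u):
        # How well a candidate matches the wanted pitch and duration.
        return abs(t[0] - u[0]) / 100 + abs(t[1] - u[1]) / 50

    def join_cost(u, v):
        # How smoothly two candidates concatenate (here: pitch mismatch only).
        return abs(u[0] - v[0]) / 100

    # Dynamic programming over the candidate lattice (a Viterbi-style search).
    best = [(target_cost(targets[0], u), [u]) for u in candidates[0]]
    for t, cands in zip(targets[1:], candidates[1:]):
        best = [min((cost + target_cost(t, u) + w_join * join_cost(path[-1], u),
                     path + [u])
                    for cost, path in best)
                for u in cands]
    return min(best)

# Two target positions, each with two recorded instances in the inventory.
targets = [(120, 80), (140, 60)]
candidates = [[(118, 90), (150, 70)], [(139, 55), (100, 100)]]
total_cost, units = select_units(targets, candidates)
</syntaxhighlight>

The selected units are then concatenated with little or no signal modification, which is precisely why the inventory must contain many prosodic variants of each unit.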
=== Adoption of Hidden Markov Models (HMMs) ===
[[Hidden Markov Models]] (HMMs), originally developed for speech recognition, have since gained attention for their potential in speech synthesis. The fundamental theory behind HMMs is rooted in the groundbreaking work of Baum and his colleagues, while Stratonovich (1960) is credited with earlier work in the field, proposing an optimal nonlinear filtering model grounded in the theory of conditional Markov processes. A significant advance in applying HMMs to speech came from Rabiner in 1989, whose statistical method for representing speech led to a successful HMM system capable of handling discrete or continuous-density parameter distributions. These contributions laid the foundation for the exploration of HMMs in speech synthesis, a field for which they were not initially envisioned but in which they have become a pivotal technology <ref>Awad, M., & Khanna, R. (2015). Hidden Markov Model. In M. Awad & R. Khanna, ''Efficient Learning Machines'' (pp. 81–104). Apress. <nowiki>https://doi.org/10.1007/978-1-4302-5990-9_5</nowiki></ref>.

=== HMM-Based Speech Synthesis Systems (HTS) ===
In traditional synthesis systems that rely on the selection and concatenation of acoustic units, the volume of speech data needed to cover a range of voice characteristics is a significant obstacle: collecting and storing such a large dataset is complex and resource-intensive. To address this issue and build synthesis systems capable of generating a wide range of voice characteristics, the HMM-based speech synthesis system (HTS) was introduced <ref>Tokuda, K., Zen, H., & Black, A. W. (2002, September). An HMM-based speech synthesis system applied to English. In ''IEEE Speech Synthesis Workshop'' (pp. 227–230). Santa Monica: IEEE.</ref>.

HMM-based speech synthesis, often referred to simply as HTS, represents a significant advance in text-to-speech technology. Emerging in the late 1990s, this data-driven approach offers precise control over speech variation: the acoustic parameters are modeled with a time-series stochastic generative model, providing a powerful alternative to unit selection and concatenation. One of its key advantages is the ability to alter the voice without an extensive database while maintaining quality that rivals the traditional approaches <ref>Kayte, S., Mundada, M., & Gujrathi, J. (2015). Hidden Markov Model based Speech Synthesis: A Review. ''International Journal of Computer Applications'', ''130''(3), 35–39. <nowiki>https://doi.org/10.5120/ijca2015906965</nowiki></ref>. This flexibility in voice modification, combined with its capacity to generate natural and intelligible speech, has contributed to the growing popularity and success of HMM-based speech synthesis.

The adoption of HMM-based synthesis has also been eased by well-established machine learning algorithms, many of which originated in automatic speech recognition (ASR): Baum-Welch training, Viterbi alignment, and clustering methods have all proven their efficiency and effectiveness. In addition, open-source toolkits covering text analysis, signal processing, and HMMs have made the technology accessible to both academic and commercial organizations. The surge of interest is underscored by the fact that approximately 76% of the papers presented at INTERSPEECH 2012, a prominent international conference on speech information processing, used HMM-based approaches, confirming the status of HMM-based synthesis as a pivotal technology in the field <ref>Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech Synthesis Based on Hidden Markov Models. ''Proceedings of the IEEE'', ''101''(5), 1234–1252. <nowiki>https://doi.org/10.1109/JPROC.2013.2251852</nowiki></ref>.
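To make the idea of a time-series stochastic generative model concrete, the following minimal sketch (Python with NumPy) generates a one-dimensional parameter trajectory from a toy three-state left-to-right HMM. All numbers are invented for illustration; a real HTS voice uses context-dependent states over mel-cepstral, fundamental-frequency, and duration streams, and produces smooth trajectories with a maximum-likelihood parameter generation algorithm that accounts for dynamic features:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(seed=0)

# A toy left-to-right HMM for one phone: each state carries a Gaussian over a
# single hypothetical spectral coefficient and a self-transition probability.
states = [
    {"mean": 0.2, "var": 0.01, "stay": 0.7},  # onset
    {"mean": 0.8, "var": 0.02, "stay": 0.8},  # steady portion
    {"mean": 0.5, "var": 0.01, "stay": 0.6},  # offset
]

trajectory = []
for s in states:
    # In a plain HMM the state duration is geometric in the self-transition.
    duration = rng.geometric(1 - s["stay"])
    # Emit one acoustic frame (e.g. every 5 ms) per time step in the state.
    trajectory.extend(rng.normal(s["mean"], np.sqrt(s["var"]), duration))
</syntaxhighlight>

Training such models relies on the Baum-Welch algorithm and alignment on Viterbi, which is one reason the ASR toolchain transferred so readily to synthesis.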