Hidden Markov Models in Speech Synthesis
== Introduction ==
Hidden Markov Model (HMM)-based speech synthesis is a remarkably effective technique for synthesizing speech. The most attractive aspect of the HTS system is that speaker identities, speaking styles, and emotions can easily be modified by transforming HMM parameters using techniques such as adaptation, interpolation, eigenvoice, or multiple regression.
In text-to-speech (TTS) synthesis, the main goal is to transform input text into intelligible and natural-sounding speech. A TTS system involves two phases: the front end and the back end. The front end analyses the text and creates possible pronunciations for each word in context via grapheme-to-phoneme conversion. The back end generates the speech waveform along with the prosody of the sentence to be spoken. The evaluation of a TTS system is based on three critical attributes: accuracy, intelligibility, and naturalness.
The HTS system provides the frequency spectrum (vocal tract), fundamental frequency (vocal source), and duration (prosody) of speech, which are statistically generated using HMMs based on the maximum likelihood criterion. HTS is an open-source tool which provides a research and development platform for statistical parametric speech synthesis. The HMM-based speech synthesis system (HTS) has been developed by the HTS working group as an extension of the HMM toolkit (HTK)<ref>Kayte, S., Mundada, M., & Gujrathi, J. (2015). Hidden Markov model based speech synthesis: A review. ''International Journal of Computer Applications'', ''130''(3), 35-39.</ref>.
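To make the generation step concrete, the following is a minimal sketch of maximum-likelihood parameter generation from a trained HMM under strong simplifying assumptions: Gaussian state-output distributions, explicit per-state durations, and no dynamic (delta) features. All function and variable names are illustrative and are not part of the actual HTS toolkit.

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch: with diagonal Gaussian outputs and fixed state
# durations, the maximum-likelihood observation sequence is each
# state's mean vector repeated for that state's duration.
def generate_parameters(state_means, state_durations):
    """state_means: (S, D) array, one mean vector per HMM state;
    state_durations: (S,) array, frames spent in each state."""
    frames = []
    for mean, duration in zip(state_means, state_durations):
        frames.extend([mean] * int(duration))
    return np.vstack(frames)  # (T, D) acoustic parameter trajectory

# Toy example: 3 states emitting 2-D features (e.g., log-F0, energy)
means = np.array([[5.0, 0.1], [5.2, 0.3], [4.9, 0.2]])
durations = np.array([3, 5, 4])
print(generate_parameters(means, durations).shape)  # (12, 2)
</syntaxhighlight>

Under these assumptions the generated trajectory is piecewise constant, which is precisely why practical systems impose delta and delta-delta constraints during generation so that the spectrum and F0 vary smoothly over time.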
== Historical Context ==
=== Early Speech Synthesis Development ===
Speech synthesis, also known as TTS (text-to-speech), refers to the artificial generation of human speech through computer technology. It is the automated process of converting written text into an acoustic speech signal. The historical perspective on speech synthesis research reveals that the earliest systems, often referred to as "têtes parlantes" or talking heads, emerged in the eighteenth century <ref>Kuligowska, K., Kisielewicz, P., & Włodarz, A. (2018). Speech synthesis systems: disadvantages and limitations. ''Int J Res Eng Technol (UAE)'', ''7'', 234-239.</ref>. The mechanical nature of early systems limited their ability to reproduce speech that closely resembled natural human speech.
Before HMMs became involved in speech synthesis, several techniques were employed to generate synthetic speech. These techniques included formant synthesis, articulatory synthesis, concatenative synthesis, and unit selection synthesis <ref>Tabet, Y., & Boughazi, M. (2011, May). Speech synthesis techniques. A survey. In ''International Workshop on Systems, Signal Processing and their Applications, WOSSPA'' (pp. 67-70). IEEE.</ref>.
==== Formant Synthesis ====
Formant synthesis involves the use of resonance structures called formants to generate speech. In some cases, a combination of parallel and cascade resonators is employed. A notable example is the '''Klatt synthesizer''', which utilized 39 parameters updated every 5 milliseconds. While formant synthesis can produce intelligible speech, it is often considered less natural than other methods.
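To illustrate the resonator idea, here is a minimal sketch of a single two-pole digital formant resonator; a Klatt-style synthesizer chains several such resonators in cascade (or sums them in parallel) and updates their parameters every few milliseconds. The sampling rate and parameter values below are arbitrary illustrations, not Klatt's actual settings.

<syntaxhighlight lang="python">
import numpy as np

# One formant = one two-pole IIR filter whose pole frequency and
# bandwidth set the resonance; filtering an excitation source
# through it produces a vowel-like spectral peak.
def formant_filter(signal, freq_hz, bw_hz, fs=16000):
    r = np.exp(-np.pi * bw_hz / fs)         # pole radius from bandwidth
    theta = 2 * np.pi * freq_hz / fs        # pole angle from frequency
    a1, a2 = -2 * r * np.cos(theta), r * r  # feedback coefficients
    y = np.zeros_like(signal)
    for n in range(len(signal)):
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y[n] = signal[n] - a1 * y1 - a2 * y2
    return y

excitation = np.random.default_rng(4).normal(size=1600)  # noise source
vowel_like = formant_filter(excitation, freq_hz=700, bw_hz=130)
print(vowel_like[:3])
</syntaxhighlight>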
==== Articulatory Synthesis ====
Articulatory synthesis seeks to generate speech by directly modeling the movements of human articulators. This method offers the potential for high-quality speech but is challenging to implement. Articulatory control parameters include factors such as lip aperture, tongue position, and tongue height. However, acquiring accurate articulatory data, often through X-ray photography, and finding a balance between precision and simplicity are challenges. The results of articulatory synthesis may not always match the quality of other synthesis methods.
==== Concatenative Synthesis ====
Concatenative synthesis addresses the difficulty in generating speech parameters from input text specifications. It employs a data-driven approach by connecting natural, prerecorded speech units, such as words, syllables, or diphones. Diphones are widely used and start in the middle of one phoneme and extend to the middle of the following one, capturing coarticulation. Building a diphone inventory involves recording all phonemes within possible contexts and labeling and segmenting diphones <ref>Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech Synthesis Based on Hidden Markov Models. ''Proceedings of the IEEE'', ''101''(5), 1234–1252. <nowiki>https://doi.org/10.1109/JPROC.2013.2251852</nowiki></ref>. The pitch and duration of each diphone must be adjusted to match the prosody part of the specification. This approach balances memory requirements, complexity, and naturalness.
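As a toy illustration of the concatenation step itself, the sketch below joins two prerecorded units with a linear crossfade; real systems additionally warp each unit's pitch and duration (e.g., with PSOLA-style techniques) to match the target prosody. All signals and constants here are invented.

<syntaxhighlight lang="python">
import numpy as np

# Join two waveform units by overlapping their boundary samples and
# blending with complementary linear ramps (a simple crossfade).
def concatenate_units(a, b, overlap=64):
    fade = np.linspace(0.0, 1.0, overlap)
    joined = a[-overlap:] * (1.0 - fade) + b[:overlap] * fade
    return np.concatenate([a[:-overlap], joined, b[overlap:]])

t = np.arange(800) / 16000.0
u1 = np.sin(2 * np.pi * 220 * t)  # stand-ins for recorded diphones
u2 = np.sin(2 * np.pi * 180 * t)
print(concatenate_units(u1, u2).shape)  # (1536,)
</syntaxhighlight>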
==== Unit Selection Synthesis ====
During the 1990s, unit selection synthesis, also known as corpus-based concatenative synthesis, emerged, driven by the growing power of computer technology and the increasing availability of speech and linguistic resources <ref>Hunt, A. J., & Black, A. W. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. ''1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings'', ''1'', 373–376. <nowiki>https://doi.org/10.1109/ICASSP.1996.541110</nowiki></ref>.
Unit selection synthesis addresses issues associated with prosodic modifications in concatenative synthesis. It stores multiple instances of each unit with varying prosodies in the unit inventory, allowing for better matching to the target prosody. An algorithm selects the units that best match the target specification based on minimizing target cost and join cost functions.
However, unit selection synthesis has limitations in terms of expressiveness, customization, and prosody control due to its reliance on recorded speech units and extensive databases <ref>Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech Synthesis Based on Hidden Markov Models. ''Proceedings of the IEEE'', ''101''(5), 1234–1252. <nowiki>https://doi.org/10.1109/JPROC.2013.2251852</nowiki></ref>.
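For concreteness, the search at the heart of unit selection can be sketched as a standard dynamic program over candidate units, minimizing the summed target and join costs described above. The cost arrays below are random placeholders; real systems derive them from spectral and prosodic distances.

<syntaxhighlight lang="python">
import numpy as np

# Viterbi-style search: pick one candidate unit per target position
# minimizing total target cost + join cost along the sequence.
def select_units(target_costs, join_costs):
    """target_costs: list of (n_i,) arrays, one per position;
    join_costs: list of (n_i, n_{i+1}) arrays between positions."""
    best = target_costs[0].copy()
    backpointers = []
    for t in range(1, len(target_costs)):
        total = best[:, None] + join_costs[t - 1] + target_costs[t][None, :]
        backpointers.append(np.argmin(total, axis=0))
        best = np.min(total, axis=0)
    path = [int(np.argmin(best))]      # cheapest final unit
    for bp in reversed(backpointers):  # trace back to the start
        path.append(int(bp[path[-1]]))
    return list(reversed(path))

rng = np.random.default_rng(0)
tc = [rng.random(4), rng.random(3), rng.random(5)]
jc = [rng.random((4, 3)), rng.random((3, 5))]
print(select_units(tc, jc))  # indices of the chosen candidates
</syntaxhighlight>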
=== Adoption of Hidden Markov Models (HMMs) ===
[[Hidden Markov Models]] (HMMs), originally developed for speech recognition, have recently gained attention for their potential in speech synthesis applications. The fundamental theory behind HMMs owes its roots to groundbreaking work by Baum and his colleagues. Stratonovich (1960) is credited with earlier work in this field, proposing an optimal nonlinear filtering model grounded in the theory of conditional Markov processes.
A significant advancement in applying HMMs to speech was achieved by Rabiner in 1989. His work led to the formulation of a statistical method for representing speech, resulting in a successful implementation of an HMM system capable of handling discrete or continuous density parameter distributions. These collective contributions have laid the foundation for the exploration of HMMs in speech synthesis, highlighting their versatility in modeling and generating speech, a field in which they were not initially envisioned but have now become a pivotal technology <ref>Awad, M., & Khanna, R. (2015). Hidden Markov Model. In M. Awad & R. Khanna, ''Efficient Learning Machines'' (pp. 81–104). Apress. <nowiki>https://doi.org/10.1007/978-1-4302-5990-9_5</nowiki></ref>.
=== HMM-Based Speech Synthesis System (HTS) ===
In traditional speech synthesis systems that rely on the selection and concatenation of acoustical units, the need for a substantial volume of speech data to encompass various voice characteristics can be a significant challenge. Collecting and storing such a large dataset can be complex and resource-intensive. To address this issue and construct speech synthesis systems capable of generating a wide range of voice characteristics, the HMM-based speech synthesis system (HTS) was introduced <ref>Tokuda, K., Zen, H., & Black, A. W. (2002, September). An HMM-based speech synthesis system applied to English. In ''IEEE speech synthesis workshop'' (pp. 227-230). Santa Monica: IEEE.</ref>.
HMM-based speech synthesis, often referred to as "HTS", represents a significant advancement in the realm of Text-to-Speech technology. Emerging in the late 1990s, this data-driven approach provides a novel means of achieving precise control over speech variations. By modeling various acoustic parameters using a time-series stochastic generative model, HMM-based speech synthesis offers a powerful alternative to traditional unit selection and concatenation methods. One of its key advantages is the ability to perform voice alterations without the need for extensive databases, while maintaining a level of quality that rivals the traditional approaches <ref>Kayte, S., Mundada, M., & Gujrathi, J. (2015). Hidden Markov Model based Speech Synthesis: A Review. ''International Journal of Computer Applications'', ''130''(3), 35–39. <nowiki>https://doi.org/10.5120/ijca2015906965</nowiki></ref>.
This flexibility in voice modification, combined with its capacity to generate natural and intelligible speech, has contributed to the growing popularity and success of HMM-based speech synthesis in recent years.
The adoption of HMM-based speech synthesis has been facilitated by well-established machine learning algorithms, many of which originated in the field of automatic speech recognition (ASR). These algorithms, such as Baum-Welch, Viterbi, and clustering methods, have proven their efficiency and effectiveness. Additionally, the availability of open-source toolkits covering essential areas like text analysis, signal processing, and HMMs has contributed to the widespread use of this technology in both academic and commercial organizations.
This surge in interest is underscored by the fact that approximately 76% of the papers presented at INTERSPEECH 2012, a prominent international conference on speech information processing, utilized HMM-based approaches. This widespread adoption strongly reinforces the need for and potential of this innovative approach, confirming its status as a pivotal technology in the field of speech synthesis <ref>Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech Synthesis Based on Hidden Markov Models. ''Proceedings of the IEEE'', ''101''(5), 1234–1252. <nowiki>https://doi.org/10.1109/JPROC.2013.2251852</nowiki></ref>.
== Key Innovations ==
=== High-Precision Model Design ===
Researchers continuously improve the structure and parameter estimation methods of Hidden Markov Models (HMMs) to enhance their precision and accuracy in speech synthesis, including modeling improvements to states, transition probabilities, and emission probabilities. Key innovations in high-precision model design focus on enhancing the model's structure, its parameter estimation methods, and its integration with neural networks. Together, these innovations improve the accuracy, temporal modeling capabilities, and expressive power of HMMs, laying the foundation for high-quality, natural speech synthesis.
==== Improved Model Structure ====
# Multi-layer HMMs: Introducing multi-layer HMMs to better represent the intricate structure of speech signals. [Young et al., 1994]<ref>Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., ... & Woodland, P. (1994). [https://ieeexplore.ieee.org/document/294559/ The HTK Book]. Cambridge University Engineering Department.</ref>
# Coupled HMMs: Coupling multiple HMM models to enhance the modeling of complex speech features. [Lee and Hon, 1989]<ref>Lee, K.-F., & Hon, H.-W. (1989). [https://ieeexplore.ieee.org/document/22686 Speaker-independent phone recognition using hidden Markov models]. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11), 1641-1648.</ref>
==== Parameter Estimation Methods ====
# Maximum Likelihood Estimation (MLE): Employing MLE to estimate parameters like state transition and emission probabilities, enhancing model fitting accuracy. [Rabiner and Juang, 1986]<ref>Rabiner, L. R., & Juang, B. H. (1986). [https://ieeexplore.ieee.org/document/18626 An introduction to hidden Markov models]. IEEE ASSP Magazine, 3(1), 4-16.</ref>
# Baum-Welch Algorithm: Utilizing the Baum-Welch algorithm for iterative parameter estimation, maximizing the likelihood function of observed data. [Baum et al., 1970]<ref>Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970). [https://doi.org/10.1002/j.1538-7305.1970.tb01790.x A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains]. The Annals of Mathematical Statistics, 41(1), 164-171.</ref>
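As a minimal sketch of the machinery Baum-Welch builds on, the forward pass below computes the likelihood of an observation sequence under a discrete-output HMM; a full Baum-Welch iteration pairs this with a backward pass and re-estimates the transition and emission matrices from the resulting state posteriors. All probabilities are toy values.

<syntaxhighlight lang="python">
import numpy as np

# Forward algorithm: alpha_t(s) = p(o_1..o_t, state_t = s),
# updated recursively; summing the final alphas gives p(O | model).
def forward_likelihood(pi, A, B, obs):
    """pi: (S,) initial probs; A: (S, S) transition probs;
    B: (S, V) emission probs; obs: list of observed symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward_likelihood(pi, A, B, [0, 1, 1]))
</syntaxhighlight>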
==== Integration with Neural Networks ====
# Combination of Deep Neural Network and HMM (DNN-HMM): Fusing deep neural networks with HMMs to model the relationship between speech features and HMM states, enhancing the model's expressive ability (see the sketch after this list). [Hinton et al., 2012]<ref name=":0">Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A. R., Jaitly, N., ... & Kingsbury, B. (2012). [https://www.cs.toronto.edu/~hinton/absps/NatureDeepReview.pdf Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups]. IEEE Signal Processing Magazine, 29(6), 82-97.</ref>
# Combination of Recurrent Neural Network and HMM (RNN-HMM): Integrating recurrent neural networks with HMM to model temporal information, further improving the time series modeling ability. [Graves et al., 2013]<ref>Graves, A., Mohamed, A. R., Hinton, G., (2013). Speech recognition with deep recurrent neural networks. In Acoustics, speech and signal processing (ICASSP), 2013 IEEE international conference (pp. 6645-6649).</ref>
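A minimal sketch of the standard hybrid trick follows: a network trained to output state posteriors p(state | frame) is converted to scaled emission likelihoods by dividing by the state priors, and those scores then replace GMM likelihoods inside the usual HMM machinery. The random posteriors below merely stand in for a real network's softmax output.

<syntaxhighlight lang="python">
import numpy as np

# By Bayes' rule, p(frame | state) is proportional to
# p(state | frame) / p(state), so posterior / prior is a valid
# (scaled) emission score for Viterbi decoding or alignment.
def posteriors_to_scaled_likelihoods(posteriors, state_priors):
    """posteriors: (T, S) softmax outputs, one row per frame;
    state_priors: (S,) relative state frequencies from training."""
    return posteriors / state_priors[None, :]

rng = np.random.default_rng(1)
dnn_posteriors = rng.dirichlet(np.ones(3), size=4)  # fake softmax rows
priors = np.array([0.5, 0.3, 0.2])
print(posteriors_to_scaled_likelihoods(dnn_posteriors, priors))
</syntaxhighlight>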
=== Mixture Models and State Aggregation ===
Introducing more complex mixture models or state aggregation methods enhances the model's ability to represent diverse speech features, making synthesized speech sound more natural. Key innovations here focus on model structure design and parameter optimization: mixture models and state aggregation improve HMMs' ability to model complex speech features, resulting in more natural synthesized speech.
==== Hybrid Models ====
# Gaussian Mixture Models (GMMs): Frequently used to model HMM state output probability, enhancing the modeling ability for speech features (see the sketch after this list). [Reynolds et al., 2000]<ref>Reynolds, D. A., Rose, R. C., & Quatieri, T. F. (2000). [https://ieeexplore.ieee.org/document/862134 Robust text-independent speaker identification using Gaussian mixture speaker models]. IEEE Transactions on Speech and Audio Processing, 8(2), 128-142.</ref>
# Various Output Probability Distributions: Introduction of different distributions to more accurately describe the distribution of speech features, thereby improving the quality and naturalness of synthetic speech.
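For concreteness, below is a minimal sketch of a diagonal-covariance GMM emission density, the classic choice for HMM state-output probabilities; the weights, means, and variances are toy values rather than trained parameters.

<syntaxhighlight lang="python">
import numpy as np

# Weighted sum of K diagonal Gaussians evaluated at one feature frame.
def gmm_density(x, weights, means, variances):
    """x: (D,) frame; weights: (K,); means, variances: (K, D)."""
    diff = x[None, :] - means
    exponents = -0.5 * np.sum(diff**2 / variances, axis=1)
    norms = np.prod(np.sqrt(2.0 * np.pi * variances), axis=1)
    return float(np.sum(weights * np.exp(exponents) / norms))

x = np.array([0.2, -0.1])
w = np.array([0.7, 0.3])
mu = np.array([[0.0, 0.0], [1.0, -1.0]])
var = np.array([[1.0, 1.0], [0.5, 0.5]])
print(gmm_density(x, w, mu, var))
</syntaxhighlight>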
==== State Aggregation ====
# Tied-State Models: Aggregating states with similar characteristics and sharing state parameters to reduce model complexity and improve efficiency (see the sketch after this list). [Bahl et al., 1986]<ref>Bahl, L. R., Jelinek, F., & Mercer, R. L. (1986). [https://ieeexplore.ieee.org/document/22618 A maximum likelihood approach to continuous speech recognition]. IEEE Transactions on Pattern Analysis and Machine Intelligence, (5), 179-190.</ref>
# State Merging of Sub-word Units: Merging states of different sub-word units to reduce model complexity and enhance modeling ability for diverse speech features. [Juang and Rabiner, 1991]<ref>Juang, B. H., & Rabiner, L. R. (1991). [https://ieeexplore.ieee.org/document/214466/ Hidden Markov models for speech recognition]. Technometrics, 33(3), 251-272.</ref>
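A minimal sketch of the tying idea: many context-dependent logical states share a much smaller set of physical parameter sets via a tying map. The contexts and numbers below are invented for illustration; real systems typically build such maps with decision-tree clustering.

<syntaxhighlight lang="python">
# Logical (context-dependent) state -> shared physical state id.
tying_map = {
    ("a", "left=t", "right=k"): 0,
    ("a", "left=d", "right=k"): 0,  # acoustically similar: share params
    ("a", "left=t", "right=i"): 1,
}
shared_means = {0: [5.1, 0.2], 1: [4.8, 0.4]}  # one set per tied state

def emission_mean(logical_state):
    return shared_means[tying_map[logical_state]]

print(emission_mean(("a", "left=d", "right=k")))  # [5.1, 0.2]
</syntaxhighlight>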
=== Temporal Model Optimization ===
Optimizing the model's temporal properties allows it to better capture the temporal variations in speech signals, especially the coherence and fluency between different phonemes. Key innovations in temporal model optimization focus on improving the model's ability to represent temporal structure, handle long sequences, and predict time series accurately, laying the foundation for more natural and fluent speech synthesis.
==== Long-term Sequential Modeling ====
# Methods for Long-time Sequence Modeling: Addressing long-time sequence problems by introducing methods to capture long-time sequence features more accurately, enhancing the naturalness of synthetic speech. [Tjandra et al., 2017]<ref>Tjandra, A., Cohn, T., Schuster, M., & Schröder, S. (2017). [https://arxiv.org/abs/1703.10135 Listening while speaking: Speech recognition during speech production is modulated by visually perceived speech rate.] PloS one, 12(3), e0173612.</ref>
# Long-term Sequence Memory Mechanisms: Implementing long-term sequence memory mechanisms like gated recurrent units (GRU) or long short-term memory networks (LSTM) to effectively capture long-term sequence information and improve timing modeling ability. [Chung et al., 2015]<ref>Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2015). [https://arxiv.org/abs/1412.3555 Gated feedback recurrent neural networks]. In Proceedings of the 32nd International Conference on International Conference on Machine Learning (pp. 2067-2075).</ref>
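To make the gating idea concrete, here is a minimal single-step GRU cell sketch, using one common formulation of the update equations; the weight matrices are random placeholders rather than trained values, and biases are omitted for brevity.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One GRU time step: the update gate z decides how much of the old
# state to keep; the reset gate r controls how much past state
# feeds the candidate activation.
def gru_step(x, h, Wz, Wr, Wh):
    """x: (D,) input; h: (H,) previous hidden state;
    each W*: (H, D + H) weights over [input, state]."""
    xh = np.concatenate([x, h])
    z = sigmoid(Wz @ xh)                               # update gate
    r = sigmoid(Wr @ xh)                               # reset gate
    h_candidate = np.tanh(Wh @ np.concatenate([x, r * h]))
    return (1.0 - z) * h + z * h_candidate             # blended state

rng = np.random.default_rng(2)
D, H = 3, 4
x, h = rng.normal(size=D), np.zeros(H)
Wz, Wr, Wh = (rng.normal(size=(H, D + H)) for _ in range(3))
print(gru_step(x, h, Wz, Wr, Wh))
</syntaxhighlight>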
==== Improved Accuracy of Time Series Prediction ====
# Enhanced Training Strategies: Utilizing enhanced training strategies, such as reinforcement learning, to optimize the prediction accuracy of the time series, resulting in more natural rhythm and fluency in synthetic speech. [Ping et al., 2018]<ref>Ping, W., Peng, B., & Yang, X. (2018). [https://arxiv.org/abs/1802.08759 Deep voice 3: 2000-speaker neural text-to-speech]. In Advances in Neural Information Processing Systems (pp. 4967-4977).</ref>
# Model Fusion: Fusing HMM with other models, such as neural networks, to achieve higher timing modeling accuracy and prediction ability, resulting in smoother and more natural synthetic speech. [Zen et al., 2009]<ref>Zen, H., Tokuda, K., Masuko, T., Kobayashi, T., & Kitamura, T. (2009). [https://ieeexplore.ieee.org/document/4783645 HMM-based speech synthesis integrated with speaker adaptation and its evaluation]. IEEE Transactions on Audio, Speech, and Language Processing, 17(2), 363-376.</ref>
=== HMM Improvement Based on [[Advancements in Neural Network-Based TTS (2000s)|Neural Networks]] ===
Integrating neural networks with HMMs, as in the Deep Neural Network-HMM (DNN-HMM) and Recurrent Neural Network-HMM (RNN-HMM), enhances the model's expressive power and efficiency. Significant innovations in this direction combine neural networks with HMMs to obtain models with stronger expressive capabilities, more accurate modeling, and more natural speech synthesis.
==== Integration of Deep Neural Network and HMM (DNN-HMM) ====
# DNN for Acoustic Modeling: Employing deep neural networks to model HMM state emission probability, replacing the traditional Gaussian mixture model (GMM), significantly enhancing the accuracy and naturalness of the speech synthesis system. [Hinton et al., 2012]<ref name=":0" />
# DNN as a Front-end Feature Extractor: Utilizing DNN as a feature extractor and its output as the input feature of HMM to enhance feature representation. [Zeiler et al., 2013]<ref>Zeiler, M. D., Krishnan, D., Taylor, G. W., Fergus, R. (2013). [https://arxiv.org/abs/1310.1531 Deconvolutional networks. In Computer Vision and Pattern Recognition] (CVPR), 2010 IEEE Conference on (pp. 2528-2535).</ref>
==== Integration of Recurrent Neural Network and HMM (RNN-HMM) ====
# RNN for Timing Modeling: Integrating recurrent neural networks (RNN) to model the timing of HMM, better capturing long-term information and improving fluency and naturalness of synthetic speech. [Graves et al., 2013]<ref>Graves, A., Mohamed, A. R., Hinton, G. (2013). [https://www.cs.toronto.edu/~hinton/absps/RNN13.pdf Speech recognition with deep recurrent neural networks]. In Acoustics, speech and signal processing (ICASSP), 2013 IEEE international conference (pp. 6645-6649).</ref>
# Long-term Sequence Modeling: Introducing long short-term memory (LSTM) networks, a variant of the RNN, to address the problem of gradient vanishing in long-time-series modeling and improve the time series modeling ability. [Hochreiter and Schmidhuber, 1997]<ref>Hochreiter, S., & Schmidhuber, J. (1997). [https://www.mitpressjournals.org/doi/10.1162/neco.1997.9.8.1735 Long short-term memory]. Neural computation, 9(8), 1735-1780.</ref>
=== End-to-End Learning ===
Adopting an end-to-end learning approach maps input text directly to acoustic features, avoiding manually designed feature extraction steps, simplifying the system, and improving performance. Key innovations focus on end-to-end model design and training that bypass the cumbersome intermediate steps of traditional speech synthesis systems, greatly simplifying the pipeline while improving the quality and naturalness of synthesized speech.
==== [[Development of End-to-End Models|End-to-End Model]] Design ====
# Text-to-Speech End-to-End Model: Designing an integrated neural network model that takes text as input and outputs corresponding acoustic features, achieving direct mapping from text to acoustic features. [Wang et al., 2017]<ref name=":1">Wang, Y., Skerry-Ryan, R. J., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., ... & Auli, M. (2017). [https://arxiv.org/abs/1703.10135 Tacotron: Towards end-to-end speech synthesis]. arXiv preprint arXiv:1703.10135.</ref>
# Self-Attention Mechanism: Introducing a self-attention mechanism to capture long-distance dependencies in input text, enhancing the model's ability to handle longer texts. [Shen et al., 2018]<ref>Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., ... & Wu, Y. (2018). [https://arxiv.org/abs/1804.10216 Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions]. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4779-4783).</ref>
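A minimal sketch of scaled dot-product self-attention, the mechanism cited above for capturing long-distance dependencies, follows; a real TTS encoder adds learned query/key/value projections and multiple heads. The inputs are random toy embeddings.

<syntaxhighlight lang="python">
import numpy as np

# Each position attends to all others: similarity scores are
# softmax-normalized row-wise and used to mix the value vectors.
def self_attention(X):
    """X: (T, D) sequence of embeddings; returns (T, D) mixtures."""
    scores = X @ X.T / np.sqrt(X.shape[1])         # (T, T) similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X

X = np.random.default_rng(3).normal(size=(5, 8))
print(self_attention(X).shape)  # (5, 8)
</syntaxhighlight>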
==== Joint Training ====
# Joint Training of Text and Acoustic Features: Training an end-to-end model to jointly predict text and acoustic features for improved performance and more natural speech synthesis results. [Wang et al., 2017]<ref name=":1" />
# Reinforcement Learning Optimization: Leveraging reinforcement learning to optimize the model, enabling it to generate acoustic feature sequences that align better with the fluency of natural speech. [Ping et al., 2017]<ref>Ping, W., Peng, B., & Yang, X. (2017). [https://arxiv.org/abs/1802.08759 Deep Voice 3: 2000-speaker neural text-to-speech]. In Advances in Neural Information Processing Systems (pp. 4967-4977).</ref>
=== Emotion and Intonation Synthesis ===
Integrating emotion and intonation information into the HMM model yields richer and more vivid speech synthesis, allowing synthesized speech to convey a broader range of emotional nuances and expressions. Key innovations focus on model structure, feature extraction, and the use of emotion-annotated data, all aimed at making synthesized speech more emotionally expressive and closer to natural speech.
==== Emotional Modeling ====
# Emotion-Driven HMM Model: Designing an emotion-driven HMM model that takes emotion tags as input features, enabling the model to adjust synthetic speech characteristics based on emotional information for emotional synthesis. [Mills et al., 2012]<ref>Mills, G., & Schuller, B. W. (2012). [https://ieeexplore.ieee.org/document/6267902/ HMM-based synthesis of emotional speech with multi-formant filter representation]. In Proceedings of the 4th International Workshop on Emotion Corpora and Recognition (pp. 9-14).</ref>
# Modeling Emotion in a Multidimensional Space: Treating emotion as a multi-dimensional space and synthesizing different emotions by modeling speech feature distribution in this space. [Schuller et al., 2003]<ref>Schuller, B., Batliner, A., Seppi, D., Steidl, S., Vogt, T., Wagner, J., ... & Devillers, L. (2003). [https://ieeexplore.ieee.org/document/1234102/ The relevance of feature type for the automatic classification of emotional user states: Low level descriptors and functionals]. In European Conference on Speech Communication and Technology.</ref>
==== Intonation Modeling ====
# Intonation Modeling Based on Prosodic Patterns: Introducing prosodic pattern modeling to simulate intonation by adjusting pitch and volume. [Hunt and Black, 1996]<ref>Hunt, A., & Black, A. (1996). [https://ieeexplore.ieee.org/document/547974/ Unit selection in a concatenative speech synthesis system using a large speech database]. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 373-376).</ref>
# F0 Profile Modeling: Controlling intonation in speech synthesis by modeling the F0 (fundamental frequency) profile, representing pitch changes in sound. [Tóth and Bánhalmi, 2011]<ref>Tóth, L., & Bánhalmi, A. (2011). [https://ieeexplore.ieee.org/document/6118115/ HMM-based synthesis of F0 contours for TTS]. In Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH) (pp. 749-752).</ref>
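As a toy illustration of F0 profile control, the sketch below builds a fundamental-frequency contour by interpolating between per-syllable pitch targets in the log domain; statistical systems instead generate such contours from per-state log-F0 distributions. All target values are invented.

<syntaxhighlight lang="python">
import numpy as np

# Interpolate log-F0 between syllable targets, then convert to Hz;
# working in the log domain keeps pitch steps perceptually even.
log_f0_targets = np.log([120.0, 180.0, 150.0, 100.0])  # toy targets (Hz)
n = len(log_f0_targets)
t = np.linspace(0, n - 1, n * 20)  # 20 frames per syllable
contour_hz = np.exp(np.interp(t, np.arange(n), log_f0_targets))
print(contour_hz[:5])
</syntaxhighlight>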
== Impact ==
Hidden Markov Models (HMMs) are versatile probabilistic modeling tools whose emergence has had a wide and far-reaching impact on a variety of fields. Originally developed as a mathematical model for describing the probabilistic relationship between a hidden state sequence and a sequence of observations, HMMs have been used in a wide variety of domains, advancing the development of many technologies and applications.
Their impact is most visible in natural language processing, speech processing, and bioinformatics. In speech processing, HMMs play a key role: by modeling acoustic signals, they enable computers to understand speech and to perform audio-visual synthesis and recognition. This has not only facilitated the development of speech recognition technology but has also enabled innovations in automated voice assistants, voice control, and accessible communication.
Moreover, in speech synthesis HMMs produce more natural synthesized speech, bringing the synthesized voice closer to the natural voice.
In addition, HMMs are widely used in image processing (e.g., human motion synthesis, face animation synthesis), audio processing (e.g., prosodic event recognition, very low-bit-rate speech coding), weather prediction, and handwriting recognition (e.g., online handwriting recognition). They provide powerful tools for pattern recognition, object detection, audio event recognition, and face recognition.<ref>Zen H, Nose T, Yamagishi J, et al. The HMM-based speech synthesis system (HTS) version 2.0[J]. SSW, 2007, 6: 294-299.</ref>
In conclusion, the multidisciplinary applications of HMMs show the broad applicability of this probabilistic modeling approach for dealing with time-series data, hidden structures, and incomplete information. They have had a far-reaching impact in several fields, promoting technological innovation and scientific research, and providing powerful tools for the solution of many practical problems. HMMs can therefore be considered an important pillar of modern computing and data science.
== Future Research ==
HMM-based speech synthesis has shown promise in generating speech with diverse speaking styles and demands less storage (Drugman et al., 2009)<ref>Drugman, T., Wilfart, G., & Dutoit, T. (2009). A deterministic plus stochastic model of the residual signal for improved parametric speech synthesis.</ref>. This has led to the emergence of several applications, notably: a) personalized speech-to-speech translation systems<ref>Y. Qian, H. Liang, and F. K. Soong, “A cross-language state sharing and mapping approach to bilingual (Mandarin – English) TTS,” IEEE Trans. Audio Speech Lang. Process., vol. 17, no. 6, pp. 1231–1239, Aug. 2009.</ref><ref>Y.-J. Wu, S. King, and K. Tokuda, “Cross-language speaker adaptation for HMM-based speech synthesis,” in Proc. ISCSLP, 2008, pp. 9–12.</ref><ref>K. Oura, J. Yamagishi, M. Wester, S. King, and K. Tokuda, “Analysis of unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis using KLD-based transform mapping,” Speech Commun., vol. 54, no. 6, pp. 703–714, 2012.</ref> and b) personalized speech synthesizers for individuals with vocal disabilities<ref>J. Yamagishi, C. Veaux, S. King, and S. Renals, “Speech synthesis technologies for individuals with vocal disabilities: voice banking and reconstruction,” in Acoustical Science & Technology, 2012, vol. 33, pp. 1–5.</ref>. HTS is also widely applied in [[Commerical TTS - Google, Amazon, Apple and Microsoft (2010s)]].
Looking ahead, there are many directions for further research. For the quality of the synthesized speech, Kawahara et al. proposed pitch-adaptive spectral analysis combined with a surface reconstruction method and an excitation method using instantaneous frequency calculation<ref>H. Kawahara, I. Masuda-Katsuse, and A. Cheveigne, “Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based f0 extraction: possible role of a repetitive structure in sounds,” Speech Commun., vol. 27, pp. 187–207, 1999.</ref>. Addressing F0 modeling, a statistical model of speech fundamental frequency contours has been proposed <ref>H. Kameoka, J. Le Roux, and Y. Ohishi, “A statistical model of speech F0 contours,” in Proc. SAPA, 2010, pp. 43–48.</ref> based on the formulation of the discrete-time stochastic process version of the Fujisaki model <ref>H. Fujisaki and K. Hirose, “Analysis of voice fundamental frequency contours for declarative sentences of Japanese,” J. Acoust. Soc. Jpn. (E), vol. 5, no. 4, pp. 233–242, 1984.</ref>. On a different trajectory, speech synthesis using "big data" <ref>J. Ni and H. Kawai, “On the effects of transcript errors across dataset sizes on HMM-based voices,” in Proc. the Autumn Meeting of ASJ, 2011, pp. 339–342.</ref>, which includes ebooks, Internet radio, podcasts, etc., has shown potential; training HMMs on automatic speech transcription that contains errors is one approach in this domain. Additionally, a framework for the analysis of emotion in texts for speech synthesis has been introduced <ref>J. R. Bellegarda, “A data-driven affective analysis framework toward naturally expressive speech synthesis,” IEEE Trans. Audio Speech Lang. Process., vol. 19, no. 5, pp. 1113–1122, 2011.</ref>, allowing the automatic translation of plain texts into appropriate speaking styles. Speech synthesis via physical simulation, particularly an MRI-based articulatory speech synthesis system, is another avenue being explored <ref>T. Kitamura, H. Takemoto, P. Mokhtari, and T. Hirai, “MRI-based time-domain speech synthesis system,” in Proc. ASA/ASJ joint meeting, 2006.</ref>.
== LLM Review ==
We asked ChatGPT to act as a professor of speech science, review this text, and give us feedback on two points:
# Evaluate the assignment's language fluency and expression. Offer suggestions for refining the language to enhance engagement and professionalism.
# Assess the assignment's organization, structure, clarity of concept explanation, and the extent of content coverage. Provide recommendations for improving these aspects.
Below are the evaluations and suggestions:
# Language Fluency and Expression:
#* The language used is academically appropriate and maintains a professional tone throughout. However, it tends to be quite technical, potentially limiting the engagement of readers unfamiliar with the subject matter.
#* To enhance engagement, consider simplifying complex sentences and technical jargon where possible. For instance, explanations of terms like "formant synthesis" and "articulatory synthesis" could include simpler, more relatable examples or analogies.
#* The manuscript could benefit from a more active voice to make the content more dynamic and less textbook-like. For example, instead of saying, "HMM-based speech synthesis was introduced," you might say, "Researchers introduced HMM-based speech synthesis."
#* Some sections are quite dense and could be made more reader-friendly with the use of bullet points or numbered lists, especially when listing types of synthesis, key innovations, or historical context.
# Organization, Structure, Clarity, and Content Coverage:
#* The organization of the content is logical, starting with an introduction to HMM-based speech synthesis, moving through historical context, key innovations, and future research. This structure helps the reader understand the evolution of the technology.
#* However, the manuscript could benefit from clearer demarcations between sections and subsections. Consider using more pronounced headers or even visual elements to distinguish various parts of the content.
#* The explanations of concepts are thorough but sometimes overly complex, which might be challenging for those not versed in this field. It would be beneficial to break down the most complex ideas into more digestible parts, possibly with additional subheadings for clarity.
#* The content coverage is extensive, but this breadth sometimes leads to dense paragraphs that could be off-putting for some readers. To mitigate this, you might consider adding real-world applications or case studies to illustrate the practical uses of these technological advancements. This approach can help in bridging the gap between theoretical information and practical application, making the content more relatable and less abstract.
#* The conclusion section could be expanded to succinctly reiterate the key advancements in HMM-based speech synthesis and their implications for the future, thereby providing a clear takeaway for readers.
In summary, while the content is rich and informative, its accessibility could be improved through simplification of language, breaking down complex ideas, enhancing the active voice, and incorporating more practical examples or case studies. These changes would make the manuscript not only informative but also engaging and comprehensible to a broader audience.
== Members ==
Youyang Cai --- [Introduction]
Weixi Lai --- [Historical Context]
Jingwen Shi --- [Key Innovations]
Siqi Zheng --- [Impact]
Yining Lei --- [Future Research]
Together --- [LLM Review]
== References ==
<references />
Latest revision as of 17:30, 17 October 2023
==== Concatenative Synthesis ====
Concatenative synthesis addresses the difficulty of generating speech parameters directly from an input text specification. It takes a data-driven approach, connecting natural, prerecorded speech units such as words, syllables, or diphones. Diphones are the most widely used unit: each starts in the middle of one phoneme and extends to the middle of the following one, thereby capturing coarticulation. Building a diphone inventory involves recording all phonemes in their possible contexts, then labeling and segmenting the diphones [4]. The pitch and duration of each diphone must be adjusted to match the prosodic part of the target specification. This approach balances memory requirements, complexity, and naturalness.
==== Unit Selection Synthesis ====
During the 1990s, unit selection synthesis, also known as corpus-based concatenative synthesis, emerged, driven by the growing power of computer technology and the increasing availability of speech and linguistic resources [5].
Unit selection synthesis addresses the problems that prosodic modification causes in concatenative synthesis. It stores multiple instances of each unit, with varying prosody, in the unit inventory, allowing a closer match to the target prosody. A search algorithm then selects the sequence of units that best matches the target specification by minimizing a combination of target-cost and join-cost functions, as sketched below.
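This cost minimization is typically solved with Viterbi-style dynamic programming over a lattice of candidate units. The following is a minimal sketch of that search only: the <code>target_cost</code> and <code>join_cost</code> functions and the pitch-only toy units are invented placeholders, not any real system's cost definitions.

<syntaxhighlight lang="python">
import numpy as np

def select_units(candidates, target_cost, join_cost):
    """Choose one candidate unit per position by minimizing the total
    target cost plus join cost with Viterbi-style dynamic programming."""
    best = [target_cost(u, 0) for u in candidates[0]]
    back = []
    for t in range(1, len(candidates)):
        prev_best, prev_units = best, candidates[t - 1]
        best, ptr = [], []
        for u in candidates[t]:
            scores = [pb + join_cost(p, u) for pb, p in zip(prev_best, prev_units)]
            k = int(np.argmin(scores))
            ptr.append(k)
            best.append(scores[k] + target_cost(u, t))
        back.append(ptr)
    path = [int(np.argmin(best))]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]  # index of the chosen candidate at each position

# Toy usage: units carry only a pitch value; costs are simple distances.
cands = [[(100,), (140,)], [(110,), (150,)], [(120,), (160,)]]
tgt_pitch = [105, 115, 125]
t_cost = lambda u, t: abs(u[0] - tgt_pitch[t])   # match the target
j_cost = lambda a, b: 0.1 * abs(a[0] - b[0])     # prefer smooth joins
print(select_units(cands, t_cost, j_cost))       # -> [0, 0, 0]
</syntaxhighlight>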
Despite its quality, unit selection synthesis has limited expressiveness, customization, and prosody control, because it relies on recorded speech units and extensive databases [6].
=== Adoption of Hidden Markov Models (HMMs) ===
Hidden Markov Models (HMMs), originally developed for speech recognition, later gained attention for their potential in speech synthesis. The fundamental theory of HMMs is rooted in the groundbreaking work of Baum and his colleagues, with earlier related work by Stratonovich (1960), who proposed an optimal nonlinear filtering model grounded in the theory of conditional Markov processes.

A significant advance in applying HMMs to speech came with Rabiner's 1989 tutorial, which formulated a statistical method for representing speech and led to successful HMM systems capable of handling both discrete and continuous density parameter distributions. These collective contributions laid the foundation for exploring HMMs in speech synthesis, a field for which they were not initially envisioned but in which they have become a pivotal technology [7].
=== HMM-based speech synthesis systems (HTS) ===
In traditional speech synthesis systems that rely on the selection and concatenation of acoustical units, the need for a substantial volume of speech data to encompass various voice characteristics can be a significant challenge. Collecting and storing such a large dataset can be complex and resource-intensive. To address this issue and construct speech synthesis systems capable of generating a wide range of voice characteristics, the HMM-based speech synthesis system (HTS) was introduced [8].
HMM-based speech synthesis, often referred to as "HTS", represents a significant advancement in the realm of Text-to-Speech technology. Emerging in the late 1990s, this data-driven approach provides a novel means of achieving precise control over speech variations. By modeling various acoustic parameters using a time-series stochastic generative model, HMM-based speech synthesis offers a powerful alternative to traditional unit selection and concatenation methods. One of its key advantages is the ability to perform voice alterations without the need for extensive databases, while maintaining a level of quality that rivals the traditional approaches [9].
This flexibility in voice modification, combined with its capacity to generate natural and intelligible speech, has contributed to the growing popularity and success of HMM-based speech synthesis in recent years.
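One way to see why this works without a large unit database is the parameter-generation step: given the per-frame Gaussian means and variances that the models assign to static and delta (velocity) features, the maximum-likelihood trajectory has a closed-form solution. Below is a minimal NumPy sketch of that idea; the simple delta window, the <code>mlpg</code> name, and the toy means and variances are illustrative assumptions, not the HTS implementation.

<syntaxhighlight lang="python">
import numpy as np

def mlpg(mean, var):
    """Maximum-likelihood parameter generation for one feature stream.

    mean, var: (T, 2) per-frame Gaussian means/variances for the static
    feature and its delta. Solves (W' P W) c = W' P mu, where W stacks
    the static and delta windows and P = diag(1 / var).
    """
    T = mean.shape[0]
    W = np.zeros((2 * T, T))
    for t in range(T):
        W[2 * t, t] = 1.0                    # static row picks c_t
        W[2 * t + 1, max(t - 1, 0)] += -0.5  # delta: 0.5*(c_{t+1} - c_{t-1})
        W[2 * t + 1, min(t + 1, T - 1)] += 0.5
    mu = mean.reshape(-1)
    prec = 1.0 / var.reshape(-1)
    A = W.T @ (prec[:, None] * W)            # W' P W
    b = W.T @ (prec * mu)                    # W' P mu
    return np.linalg.solve(A, b)             # smooth static trajectory

# Toy example: delta targets are zero with small variance, so the step
# change in the static means becomes a smooth transition.
mean = np.zeros((10, 2)); mean[:5, 0], mean[5:, 0] = 5.0, 6.0
var = np.ones((10, 2)); var[:, 1] = 0.1
print(np.round(mlpg(mean, var), 2))
</syntaxhighlight>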
The adoption of HMM-based speech synthesis has been facilitated by well-established machine learning algorithms, many of which originated in the field of automatic speech recognition (ASR). These algorithms, such as Baum-Welch, Viterbi, and clustering methods, have proven their efficiency and effectiveness. Additionally, the availability of open-source toolkits covering essential areas like text analysis, signal processing, and HMMs has contributed to the widespread use of this technology in both academic and commercial organizations.
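As a concrete example of this shared machinery, the sketch below implements log-domain Viterbi decoding; the toy model parameters are invented for demonstration, and Baum-Welch training runs forward-backward passes over the same trellis.

<syntaxhighlight lang="python">
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely state path. log_pi: (S,) initial log-probs,
    log_A: (S, S) transition log-probs, log_B: (T, S) frame log-likelihoods."""
    T, S = log_B.shape
    delta = log_pi + log_B[0]            # best score ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_A  # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 2-state example with dummy frame log-likelihoods.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.2, 0.8]])
log_B = np.log(np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]))
print(viterbi(log_pi, log_A, log_B))   # -> [0, 0, 1]
</syntaxhighlight>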
This surge in interest is underscored by the fact that approximately 76% of the papers presented at INTERSPEECH 2012, a prominent international conference on speech information processing, utilized HMM-based approaches. This widespread adoption strongly reinforces the need for and potential of this innovative approach, confirming its status as a pivotal technology in the field of speech synthesis [10].
== Key Innovations ==
The key innovations of Hidden Markov Models (HMMs) in speech synthesis fall into several critical areas:
=== High-Precision Model Design ===
Researchers have continuously improved the structure and parameter-estimation methods of HMMs to enhance their precision in speech synthesis, refining the modeling of states, transition probabilities, and emission probabilities. Innovations in high-precision model design centre on three fronts: improved model structure, better parameter estimation, and integration with neural networks. Together, these advances improve the accuracy, temporal modeling capability, and expressive power of HMMs, laying the foundation for high-quality, natural synthesis.
==== Improved Model Structure ====
* Multi-layer HMMs: Introducing multi-layer HMMs to better represent the intricate structure of speech signals. [Young et al., 1994][11]
* Coupled HMMs: Coupling multiple HMM models to enhance the modeling of complex speech features. [Lee and Hon, 1989][12]
==== Parameter Estimation Methods ====
* Maximum Likelihood Estimation (MLE): Employing MLE to estimate parameters such as state-transition and emission probabilities, improving model fit. [Rabiner and Juang, 1986][13]
* Baum-Welch Algorithm: Using the Baum-Welch (forward-backward) algorithm for iterative parameter estimation, maximizing the likelihood of the observed data; see the forward-recursion sketch after this list. [Baum et al., 1970][14]
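For concreteness, the likelihood that these estimators maximize can be computed with the scaled forward recursion. The following minimal sketch uses invented toy parameters; full Baum-Welch training would add a backward pass and re-estimate the parameters from the resulting posteriors.

<syntaxhighlight lang="python">
import numpy as np

def forward_loglik(pi, A, B):
    """Scaled forward algorithm. pi: (S,) initial probs, A: (S, S)
    transitions, B: (T, S) per-frame observation likelihoods p(o_t | s)."""
    T = B.shape[0]
    loglik = 0.0
    alpha = pi * B[0]
    for t in range(T):
        c = alpha.sum()              # scaling constant c_t
        loglik += np.log(c)          # log p(O) = sum_t log c_t
        alpha = alpha / c            # rescale to avoid underflow
        if t + 1 < T:
            alpha = (alpha @ A) * B[t + 1]   # predict, weight by evidence
    return loglik

# Toy 2-state model and dummy frame likelihoods.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
print(forward_loglik(pi, A, B))
</syntaxhighlight>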
==== Integration with Neural Networks ====
* Combination of Deep Neural Network and HMM (DNN-HMM): Fusing deep neural networks with HMMs to model the relationship between speech features and HMM states, enhancing the model's expressive power. [Hinton et al., 2012][15]
* Combination of Recurrent Neural Network and HMM (RNN-HMM): Integrating recurrent neural networks with HMMs to model temporal information, further improving time-series modeling ability. [Graves et al., 2013][16]
=== Hybrid Models and State Aggregation ===
More complex mixture models and state-aggregation methods enhance an HMM's ability to model diverse speech features, making synthesized speech sound more natural. Innovations here focus on model structure design and parameter optimization.
==== Hybrid Models ====
* Gaussian Mixture Models (GMMs): Frequently used to model HMM state output probabilities, enhancing the modeling of speech features (see the sketch after this list). [Reynolds et al., 2000][17]
* Various Output Probability Distributions: Introducing different distributions to describe the distribution of speech features more accurately, improving the quality and naturalness of synthetic speech.
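As an illustration, a diagonal-covariance GMM emission density of the kind used for state output probabilities can be evaluated as follows; the weights, means, and variances are invented toy values.

<syntaxhighlight lang="python">
import numpy as np

def gmm_logpdf(x, weights, means, variances):
    """Log-density of x under a diagonal-covariance Gaussian mixture.
    weights: (K,), means/variances: (K, D), x: (D,)."""
    log_norm = -0.5 * (np.log(2 * np.pi * variances) +
                       (x - means) ** 2 / variances).sum(axis=1)
    # log sum_k w_k N(x; mu_k, sigma_k) via log-sum-exp for stability
    a = np.log(weights) + log_norm
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

# Two-component toy mixture over a 3-dimensional feature vector.
w = np.array([0.4, 0.6])
mu = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
var = np.ones((2, 3))
print(gmm_logpdf(np.array([0.5, 0.2, 0.1]), w, mu, var))
</syntaxhighlight>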
==== State Aggregation ====
* Tied-State Models: Aggregating states with similar characteristics so that they share parameters, reducing model complexity and improving efficiency (see the sketch after this list). [Bahl et al., 1986][18]
* State Merging of Sub-word Units: Merging states across different sub-word units to reduce model complexity and better model diverse speech features. [Juang and Rabiner, 1991][19]
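A minimal sketch of the tying idea follows: several context-dependent states point at one shared output distribution, so statistics from rare contexts pool into a single robust estimate. The state names and the tying map here are invented toy values; real systems derive the map from decision-tree clustering.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical tying map: context-dependent state -> shared pdf index.
tying = {"a-b+c.s2": 0, "x-b+c.s2": 0,  # similar contexts share pdf 0
         "a-b+o.s2": 1}

# Accumulate per-class statistics, then re-estimate the shared means,
# so data from several states pools into one estimate.
counts, sums = np.zeros(2), np.zeros((2, 3))
frames = [("a-b+c.s2", np.array([1.0, 0.0, 0.0])),
          ("x-b+c.s2", np.array([3.0, 0.0, 0.0])),
          ("a-b+o.s2", np.array([0.0, 5.0, 0.0]))]
for state, frame in frames:
    k = tying[state]
    counts[k] += 1
    sums[k] += frame
shared_means = sums / counts[:, None]
print(shared_means)  # pdf 0 pooled over two states -> [2., 0., 0.]
</syntaxhighlight>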
=== Time Series Model Optimization ===
Optimizing the model's temporal properties helps capture the temporal variation of speech signals, especially coherence and fluency across phonemes. Innovations in temporal model optimization focus on modeling long sequences and improving the accuracy of temporal prediction, laying the groundwork for more natural, fluent synthesis.
==== Long-term Sequential Modeling ====
* Methods for Long-Time-Sequence Modeling: Introducing methods that capture long-range sequence features more accurately, enhancing the naturalness of synthetic speech. [Tjandra et al., 2017][20]
* Long-Term Memory Mechanisms: Employing gated recurrent units (GRU) or long short-term memory (LSTM) networks to capture long-term sequence information effectively and improve temporal modeling (a single-step sketch follows this list). [Chung et al., 2015][21]
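For reference, here is one LSTM step in NumPy, showing the gated memory update that lets such networks retain long-term information; the weights are random toy values, not a trained model.

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. The fused projection z stacks the four blocks
    [input gate | forget gate | output gate | candidate content]."""
    z = W @ x + U @ h + b        # (4H,)
    H = h.shape[0]
    i = sigmoid(z[0*H:1*H])      # input gate
    f = sigmoid(z[1*H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:])         # candidate cell content
    c = f * c + i * g            # gated memory update
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
D, H = 4, 3
W = rng.normal(size=(4*H, D))
U = rng.normal(size=(4*H, H))
b = np.zeros(4*H)
h = c = np.zeros(H)
for x in rng.normal(size=(5, D)):   # run over a toy input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
</syntaxhighlight>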
==== Improved Accuracy of Time Series Prediction ====
* Enhanced Training Strategies: Utilizing enhanced training strategies, such as reinforcement learning, to optimize the prediction accuracy of the time series, resulting in more natural rhythm and fluency in synthetic speech. [Ping et al., 2018][22]
* Model Fusion: Fusing HMMs with other models, such as neural networks, to achieve higher temporal modeling accuracy and prediction ability, resulting in smoother and more natural synthetic speech. [Zen et al., 2009][23]
=== HMM Improvement Based on Neural Networks ===
Integrating neural networks with HMMs, as in Deep Neural Network-HMM (DNN-HMM) and Recurrent Neural Network-HMM (RNN-HMM) hybrids, enhances the model's expressive power and efficiency, yielding more accurate modeling and more natural synthesis.
==== Integration of Deep Neural Network and HMM (DNN-HMM) ====
* DNN for Acoustic Modeling: Employing deep neural networks to model HMM state emission probabilities, replacing the traditional Gaussian mixture model (GMM) and significantly enhancing the accuracy and naturalness of the synthesis system (see the posterior-to-likelihood sketch after this list). [Hinton et al., 2012][15]
* DNN as a Front-End Feature Extractor: Using a DNN as a feature extractor whose outputs serve as the HMM's input features, enhancing feature representation. [Zeiler et al., 2013][24]
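In this hybrid scheme the network emits state posteriors, which are converted into the scaled likelihoods an HMM expects by dividing by the state priors (Bayes' rule, dropping the observation-dependent term as a constant). A minimal sketch with invented posteriors and priors:

<syntaxhighlight lang="python">
import numpy as np

def scaled_loglik(log_posteriors, log_priors):
    """Turn DNN state posteriors into HMM observation scores:
    log p(x|s) = log p(s|x) - log p(s) + const (Bayes' rule)."""
    return log_posteriors - log_priors

# Toy values: 2 frames x 3 states of softmax posteriors, plus state
# priors counted from a training alignment (all numbers invented).
log_post = np.log(np.array([[0.7, 0.2, 0.1],
                            [0.1, 0.3, 0.6]]))
log_prior = np.log(np.array([0.5, 0.3, 0.2]))
log_B = scaled_loglik(log_post, log_prior)
print(log_B)  # plugs into forward/Viterbi in place of GMM log-likelihoods
</syntaxhighlight>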
==== Integration of Recurrent Neural Network and HMM (RNN-HMM) ====
* RNN for Temporal Modeling: Integrating recurrent neural networks (RNNs) to model HMM timing, better capturing long-term information and improving the fluency and naturalness of synthetic speech. [Graves et al., 2013][25]
* Long-Term Sequence Modeling: Introducing long short-term memory (LSTM) networks, a variant of the RNN, to address vanishing gradients in long time-series modeling and improve temporal modeling ability. [Hochreiter and Schmidhuber, 1997][26]
=== End-to-End Learning ===
End-to-end learning maps input text directly to acoustic features, avoiding hand-designed feature-extraction steps, simplifying the system, and improving performance. This approach bypasses the cumbersome intermediate stages of traditional synthesis pipelines and has become a significant research direction, improving both the quality and the naturalness of synthesized speech.
==== End-to-End Model Design ====
* Text-to-Speech End-to-End Model: Designing an integrated neural network that takes text as input and outputs the corresponding acoustic features, achieving a direct mapping from text to acoustics. [Wang et al., 2017][27]
* Self-Attention Mechanism: Introducing self-attention to capture long-distance dependencies in the input text, improving the handling of longer texts (a minimal sketch follows this list). [Shen et al., 2018][28]
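Here is a minimal sketch of generic scaled dot-product self-attention over a toy embedding sequence; the cited systems embed this mechanism inside much larger architectures, and all shapes and weights below are invented.

<syntaxhighlight lang="python">
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (T, d) input embeddings; Wq/Wk/Wv: (d, d_k) projections.
    Every output position mixes information from all T positions."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # (T, T) affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V                              # (T, d_k)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 toy "phoneme" embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (5, 8)
</syntaxhighlight>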
==== Joint Training ====
* Joint Training of Text and Acoustic Features: Training an end-to-end model to jointly predict text and acoustic features for improved performance and more natural speech synthesis results. [Wang et al., 2017][27]
* Reinforcement Learning Optimization: Leveraging reinforcement learning to optimize the model, enabling it to generate acoustic feature sequences that align better with the fluency of natural speech. [Ping et al., 2017][29]
=== Emotion and Intonation Synthesis ===
Integrating emotion and intonation information into the HMM allows richer, more vivid synthesis, so that synthesized speech can convey a broader range of emotional nuances and expressions. Innovations here concern model structure, feature extraction, and the use of emotion-annotated data, bringing synthetic speech closer to natural expressive speech.
==== Emotional Modeling ====
* Emotion-Driven HMM Model: Designing an emotion-driven HMM model that takes emotion tags as input features, enabling the model to adjust synthetic speech characteristics based on emotional information. [Mills et al., 2012][30]
* Modeling Emotion in a Multidimensional Space: Treating emotion as a multi-dimensional space and synthesizing different emotions by modeling the speech feature distribution in this space. [Schuller et al., 2003][31]
==== Intonation Modeling ====
* Intonation Modeling Based on Prosodic Patterns: Introducing prosodic pattern modeling to simulate intonation by adjusting pitch and volume. [Hunt and Black, 1996][32]
* F0 Profile Modeling: Controlling intonation in speech synthesis by modeling the F0 (fundamental frequency) profile, representing pitch changes in sound. [Tóth and Bánhalmi, 2011][33]
== Impact ==
Hidden Markov Models (HMMs) are versatile probabilistic modeling tools whose emergence has had a wide and far-reaching impact on many fields. Originally developed as a mathematical model of the probabilistic relationship between a hidden state sequence and a sequence of observations, HMMs have advanced technologies and applications across natural language processing, speech processing, bioinformatics, and beyond.

HMMs play a key role in speech processing in particular. By modeling acoustic signals, they enable computers to understand speech and support audio-visual synthesis and recognition. This has not only driven the development of speech recognition technology but also enabled innovations in automated voice assistants, voice control, and accessible communication.
In speech synthesis, moreover, HMMs produce more natural output, bringing synthesized voices closer to natural speech.
In addition, HMMs are widely used in image processing (e.g., human motion synthesis and face animation synthesis), audio processing (e.g., prosodic event recognition and very low-bit-rate speech coding), weather prediction, and online handwriting recognition. They provide powerful tools for pattern recognition, object detection, audio event recognition, and face recognition.[34]
In conclusion, the multidisciplinary applications of HMMs show the broad applicability of this probabilistic modeling approach for dealing with time-series data, implicit structures, and incomplete information. They have had a far-reaching impact in several fields, promoting technological innovation and scientific research, and providing powerful tools for the solution of many practical problems. Therefore, HMMs can be considered an important pillar of modern computing and data science.
== Future Research ==
HMM-based speech synthesis has shown promise in generating speech with diverse speaking styles while demanding less storage (Drugman et al., 2009)[35]. This has led to several notable applications: (a) personalized speech-to-speech translation systems[36][37][38] and (b) personalized speech synthesizers for individuals with vocal disabilities[39]. HTS has also been widely applied in commercial TTS systems from Google, Amazon, Apple, and Microsoft since the 2010s.
However, despite these advantages, HTS has limitations that call for improvement. First, the intelligibility of synthetic speech in noise remains a challenge: researchers have found that unmodified synthetic speech suffers a more pronounced drop in intelligibility than unmodified natural speech in noisy environments [40]. Second, HMM-based synthesis faces constraints in acoustic modeling accuracy. The statistical averaging inherent in parametric methods produces overly smooth speech trajectories, resulting in muffled speech, and pitch information is sometimes extracted incorrectly [41].
Looking ahead, there are many directions for further research. On the quality of synthesized speech, Kawahara et al. proposed pitch-adaptive spectral analysis combined with a surface reconstruction method and an excitation method using instantaneous frequency calculation [42]. Addressing F0 modeling, a statistical model of speech fundamental frequency contours has been proposed [43], based on a discrete-time stochastic-process version of the Fujisaki model [44]. On a different trajectory, speech synthesis from "big data" [45], including e-books, Internet radio, and podcasts, has shown potential; training HMMs on automatic speech transcriptions that contain errors is one approach in this domain. Additionally, a framework for analyzing emotion in text has been introduced for speech synthesis [46], allowing plain text to be rendered automatically in an appropriate speaking style. Speech synthesis via physical simulation, particularly an MRI-based articulatory speech synthesis system, is another avenue being explored [47].
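For reference, a minimal sketch of the classical deterministic Fujisaki command-response model that the stochastic formulation builds on: log F0 is a base level plus second-order filter responses to phrase and accent commands. The command times, amplitudes, and the alpha/beta/gamma constants below are illustrative toy values, not fitted parameters.

<syntaxhighlight lang="python">
import numpy as np

def fujisaki_f0(t, Fb, phrases, accents, alpha=2.0, beta=20.0, gamma=0.9):
    """ln F0(t) = ln Fb + phrase responses + accent responses.
    phrases: list of (onset T0, amplitude Ap);
    accents: list of (onset T1, offset T2, amplitude Aa)."""
    lnF0 = np.full_like(t, np.log(Fb))
    for T0, Ap in phrases:
        u = np.maximum(t - T0, 0.0)
        lnF0 += Ap * alpha**2 * u * np.exp(-alpha * u)      # Gp(t - T0)
    for T1, T2, Aa in accents:
        for onset, sign in ((T1, 1.0), (T2, -1.0)):
            u = np.maximum(t - onset, 0.0)
            g = np.minimum(1.0 - (1.0 + beta * u) * np.exp(-beta * u), gamma)
            lnF0 += sign * Aa * g                           # Ga difference
    return np.exp(lnF0)

# Toy contour: one phrase command and one accent from 0.3 s to 0.8 s.
t = np.linspace(0.0, 2.0, 200)
f0 = fujisaki_f0(t, Fb=120.0, phrases=[(0.0, 0.5)], accents=[(0.3, 0.8, 0.4)])
print(round(float(f0.min()), 1), round(float(f0.max()), 1))
</syntaxhighlight>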
== LLM Review ==
We asked ChatGPT to act as a professor of speech science, review this text, and give us feedback:
* Evaluate the assignment's language fluency and expression. Offer suggestions for refining the language to enhance engagement and professionalism.
* Assess the assignment's organization, structure, clarity of concept explanation, and the extent of content coverage. Provide recommendations for improving these aspects.
Below are the evaluations and suggestions:
# Language Fluency and Expression:
#* The language used is academically appropriate and maintains a professional tone throughout. However, it tends to be quite technical, potentially limiting the engagement of readers unfamiliar with the subject matter.
#* To enhance engagement, consider simplifying complex sentences and technical jargon where possible. For instance, explanations of terms like "formant synthesis" and "articulatory synthesis" could include simpler, more relatable examples or analogies.
#* The manuscript could benefit from a more active voice to make the content more dynamic and less textbook-like. For example, instead of saying, "HMM-based speech synthesis was introduced," you might say, "Researchers introduced HMM-based speech synthesis."
#* Some sections are quite dense and could be made more reader-friendly with the use of bullet points or numbered lists, especially when listing types of synthesis, key innovations, or historical context.
# Organization, Structure, Clarity, and Content Coverage:
#* The organization of the content is logical, starting with an introduction to HMM-based speech synthesis, moving through historical context, key innovations, and future research. This structure helps the reader understand the evolution of the technology.
#* However, the manuscript could benefit from clearer demarcations between sections and subsections. Consider using more pronounced headers or even visual elements to distinguish various parts of the content.
#* The explanations of concepts are thorough but sometimes overly complex, which might be challenging for those not versed in this field. It would be beneficial to break down the most complex ideas into more digestible parts, possibly with additional subheadings for clarity.
#* The content coverage is extensive, but this breadth sometimes leads to dense paragraphs that could be off-putting for some readers. To mitigate this, you might consider adding real-world applications or case studies to illustrate the practical uses of these technological advancements. This approach can help in bridging the gap between theoretical information and practical application, making the content more relatable and less abstract.
#* The conclusion section could be expanded to succinctly reiterate the key advancements in HMM-based speech synthesis and their implications for the future, thereby providing a clear takeaway for readers.
In summary, while the content is rich and informative, its accessibility could be improved through simplification of language, breaking down complex ideas, enhancing the active voice, and incorporating more practical examples or case studies. These changes would make the manuscript not only informative but also engaging and comprehensible to a broader audience.
== Members ==
Youyang Cai --- [Introduction]
Weixi Lai --- [Historical Context]
Jingwen Shi --- [Key Innovations]
Siqi Zheng --- [Impact]
Yining Lei --- [Future Research]
Together --- [LLM Review]
== References ==
1. Kayte, S., Mundada, M., & Gujrathi, J. (2015). Hidden Markov model based speech synthesis: A review. International Journal of Computer Applications, 130(3), 35-39.
2. Kuligowska, K., Kisielewicz, P., & Włodarz, A. (2018). Speech synthesis systems: Disadvantages and limitations. Int J Res Eng Technol (UAE), 7, 234-239.
3. Tabet, Y., & Boughazi, M. (2011, May). Speech synthesis techniques: A survey. In International Workshop on Systems, Signal Processing and their Applications, WOSSPA (pp. 67-70). IEEE.
4. Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech synthesis based on hidden Markov models. Proceedings of the IEEE, 101(5), 1234-1252. https://doi.org/10.1109/JPROC.2013.2251852
5. Hunt, A. J., & Black, A. W. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. In 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing (Vol. 1, pp. 373-376). https://doi.org/10.1109/ICASSP.1996.541110
6. Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech synthesis based on hidden Markov models. Proceedings of the IEEE, 101(5), 1234-1252. https://doi.org/10.1109/JPROC.2013.2251852
7. Awad, M., & Khanna, R. (2015). Hidden Markov model. In Efficient Learning Machines (pp. 81-104). Apress. https://doi.org/10.1007/978-1-4302-5990-9_5
8. Tokuda, K., Zen, H., & Black, A. W. (2002, September). An HMM-based speech synthesis system applied to English. In IEEE Speech Synthesis Workshop (pp. 227-230). IEEE.
9. Kayte, S., Mundada, M., & Gujrathi, J. (2015). Hidden Markov model based speech synthesis: A review. International Journal of Computer Applications, 130(3), 35-39. https://doi.org/10.5120/ijca2015906965
10. Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech synthesis based on hidden Markov models. Proceedings of the IEEE, 101(5), 1234-1252. https://doi.org/10.1109/JPROC.2013.2251852
11. Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., ... & Woodland, P. (1994). The HTK Book. Cambridge University Engineering Department.
12. Lee, K.-F., & Hon, H.-W. (1989). Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11), 1641-1648.
13. Rabiner, L. R., & Juang, B. H. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1), 4-16.
14. Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1), 164-171.
15. Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A. R., Jaitly, N., ... & Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6), 82-97.
16. Graves, A., Mohamed, A. R., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6645-6649).
17. Reynolds, D. A., Rose, R. C., & Quatieri, T. F. (2000). Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Transactions on Speech and Audio Processing, 8(2), 128-142.
18. Bahl, L. R., Jelinek, F., & Mercer, R. L. (1986). A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, (5), 179-190.
19. Juang, B. H., & Rabiner, L. R. (1991). Hidden Markov models for speech recognition. Technometrics, 33(3), 251-272.
20. Tjandra, A., Cohn, T., Schuster, M., & Schröder, S. (2017). Listening while speaking: Speech recognition during speech production is modulated by visually perceived speech rate. PLoS ONE, 12(3), e0173612.
21. Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2015). Gated feedback recurrent neural networks. In Proceedings of the 32nd International Conference on Machine Learning (pp. 2067-2075).
22. Ping, W., Peng, B., & Yang, X. (2018). Deep Voice 3: 2000-speaker neural text-to-speech. In Advances in Neural Information Processing Systems (pp. 4967-4977).
23. Zen, H., Tokuda, K., Masuko, T., Kobayashi, T., & Kitamura, T. (2009). HMM-based speech synthesis integrated with speaker adaptation and its evaluation. IEEE Transactions on Audio, Speech, and Language Processing, 17(2), 363-376.
24. Zeiler, M. D., Krishnan, D., Taylor, G. W., & Fergus, R. (2010). Deconvolutional networks. In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 2528-2535).
25. Graves, A., Mohamed, A. R., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6645-6649).
26. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
27. Wang, Y., Skerry-Ryan, R. J., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., ... & Auli, M. (2017). Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135.
28. Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., ... & Wu, Y. (2018). Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4779-4783).
29. Ping, W., Peng, B., & Yang, X. (2017). Deep Voice 3: 2000-speaker neural text-to-speech. In Advances in Neural Information Processing Systems (pp. 4967-4977).
30. Mills, G., & Schuller, B. W. (2012). HMM-based synthesis of emotional speech with multi-formant filter representation. In Proceedings of the 4th International Workshop on Emotion Corpora and Recognition (pp. 9-14).
31. Schuller, B., Batliner, A., Seppi, D., Steidl, S., Vogt, T., Wagner, J., ... & Devillers, L. (2003). The relevance of feature type for the automatic classification of emotional user states: Low level descriptors and functionals. In European Conference on Speech Communication and Technology.
32. Hunt, A., & Black, A. (1996). Unit selection in a concatenative speech synthesis system using a large speech database. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 373-376).
33. Tóth, L., & Bánhalmi, A. (2011). HMM-based synthesis of F0 contours for TTS. In Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH) (pp. 749-752).
34. Zen, H., Nose, T., Yamagishi, J., et al. (2007). The HMM-based speech synthesis system (HTS) version 2.0. SSW, 6, 294-299.
35. Drugman, T., Wilfart, G., & Dutoit, T. (2009). A deterministic plus stochastic model of the residual signal for improved parametric speech synthesis.
36. Qian, Y., Liang, H., & Soong, F. K. (2009). A cross-language state sharing and mapping approach to bilingual (Mandarin-English) TTS. IEEE Transactions on Audio, Speech, and Language Processing, 17(6), 1231-1239.
37. Wu, Y.-J., King, S., & Tokuda, K. (2008). Cross-language speaker adaptation for HMM-based speech synthesis. In Proceedings of ISCSLP (pp. 9-12).
38. Oura, K., Yamagishi, J., Wester, M., King, S., & Tokuda, K. (2012). Analysis of unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis using KLD-based transform mapping. Speech Communication, 54(6), 703-714.
39. Yamagishi, J., Veaux, C., King, S., & Renals, S. (2012). Speech synthesis technologies for individuals with vocal disabilities: Voice banking and reconstruction. Acoustical Science and Technology, 33, 1-5.
40. King, S., & Karaiskos, V. (2010, September). The Blizzard Challenge 2010. In Proceedings of the Blizzard Challenge Workshop, Kyoto, Japan.
41. Zen, H., Tokuda, K., & Black, A. W. (2009). Statistical parametric speech synthesis. Speech Communication, 51(11), 1039-1064.
42. Kawahara, H., Masuda-Katsuse, I., & de Cheveigné, A. (1999). Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds. Speech Communication, 27, 187-207.
43. Kameoka, H., Le Roux, J., & Ohishi, Y. (2010). A statistical model of speech F0 contours. In Proceedings of SAPA (pp. 43-48).
44. Fujisaki, H., & Hirose, K. (1984). Analysis of voice fundamental frequency contours for declarative sentences of Japanese. Journal of the Acoustical Society of Japan (E), 5(4), 233-242.
45. Ni, J., & Kawai, H. (2011). On the effects of transcript errors across dataset sizes on HMM-based voices. In Proceedings of the Autumn Meeting of the ASJ (pp. 339-342).
46. Bellegarda, J. R. (2011). A data-driven affective analysis framework toward naturally expressive speech synthesis. IEEE Transactions on Audio, Speech, and Language Processing, 19(5), 1113-1122.
47. Kitamura, T., Takemoto, H., Mokhtari, P., & Hirai, T. (2006). MRI-based time-domain speech synthesis system. In Proceedings of the ASA/ASJ Joint Meeting.