Hidden Markov Models in Speech Synthesis


Introduction


Historical Context


Key Innovations

The key innovations of Hidden Markov Models (HMMs) in the field of speech synthesis center on several critical aspects:

High-Precision Model Design

Researchers have continually refined the structure and parameter estimation methods of HMMs to enhance their precision and accuracy in speech synthesis. This includes improvements in the modeling of states, transition probabilities, and emission probabilities. Work in this area falls into three strands: improved model structures, better parameter estimation methods, and integration with neural networks. Together, these innovations improve the accuracy, temporal modeling capability, and expressive power of HMMs, making them more adaptable in speech synthesis and laying the foundation for high-quality, natural synthesized speech.

  • Improved Model Structure
  1. Multi-layer HMMs: introducing multi-layer HMMs to better represent the intricate structure of speech signals. [Young et al., 1994][1]
  2. Coupled HMMs: coupling multiple HMMs to enhance the modeling of complex speech features. [Lee and Hon, 1989][2]
  • Parameter Estimation Methods
  1. Maximum Likelihood Estimation (MLE): estimating parameters such as state transition and emission probabilities by maximizing the likelihood of the training data, improving model fit. [Rabiner and Juang, 1986][3]
  2. Baum-Welch Algorithm: iteratively re-estimating parameters so that the likelihood of the observed data never decreases (see the first sketch after this list). [Baum et al., 1970][4]
  • Integration with Neural Networks
  1. Deep Neural Network-HMM hybrids (DNN-HMM): fusing deep neural networks with HMMs to model the relationship between speech features and HMM states, enhancing the model's expressive power (see the second sketch after this list). [Hinton et al., 2012][5]
  2. Recurrent Neural Network-HMM hybrids (RNN-HMM): integrating recurrent neural networks with HMMs to capture temporal information, further improving time-series modeling. [Graves et al., 2013][6]
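
To make the estimation machinery above concrete, the following is a minimal sketch, for illustration only, of the forward algorithm and one Baum-Welch re-estimation step for a discrete-output HMM. All parameters are hypothetical toy values; real implementations work in log space (or with per-frame scaling) to avoid numerical underflow, and HTS systems use continuous Gaussian-mixture emission densities rather than a discrete symbol table.

  import numpy as np

  def forward(obs, pi, A, B):
      # alpha[t, i] = P(o_1..o_t, state_t = i); the sequence likelihood is alpha[-1].sum()
      T, N = len(obs), len(pi)
      alpha = np.zeros((T, N))
      alpha[0] = pi * B[:, obs[0]]
      for t in range(1, T):
          alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
      return alpha

  def backward(obs, A, B):
      # beta[t, i] = P(o_{t+1}..o_T | state_t = i)
      T, N = len(obs), A.shape[0]
      beta = np.ones((T, N))
      for t in range(T - 2, -1, -1):
          beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
      return beta

  def baum_welch_step(obs, pi, A, B):
      # One EM re-estimation (Baum et al., 1970): the data likelihood never decreases.
      alpha, beta = forward(obs, pi, A, B), backward(obs, A, B)
      gamma = alpha * beta
      gamma /= gamma.sum(axis=1, keepdims=True)          # P(state_t = i | O)
      xi = (alpha[:-1, :, None] * A[None, :, :]          # P(state_t = i, state_{t+1} = j | O)
            * (B[:, obs[1:]].T * beta[1:])[:, None, :])
      xi /= xi.sum(axis=(1, 2), keepdims=True)
      new_pi = gamma[0]
      new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
      counts = np.vstack([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])]).T
      new_B = counts / gamma.sum(axis=0)[:, None]
      return new_pi, new_A, new_B

  # Toy model: 2 hidden states, 3 output symbols (all values hypothetical).
  pi = np.array([0.6, 0.4])                          # initial state probabilities
  A = np.array([[0.7, 0.3], [0.4, 0.6]])             # transition probabilities
  B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])   # emission probabilities
  obs = np.array([0, 1, 2, 2, 1, 0])                 # observed symbol sequence
  print("log-likelihood before:", np.log(forward(obs, pi, A, B)[-1].sum()))
  pi, A, B = baum_welch_step(obs, pi, A, B)
  print("log-likelihood after: ", np.log(forward(obs, pi, A, B)[-1].sum()))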
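
A second minimal sketch illustrates the hybrid DNN-HMM idea: the network's softmax output for a frame is a vector of state posteriors, and dividing by the state priors yields scaled likelihoods that can replace the HMM's emission probabilities during decoding and synthesis. The numbers here are hypothetical.

  import numpy as np

  # One frame's softmax output from a (hypothetical) trained DNN: p(state | frame).
  dnn_posteriors = np.array([0.7, 0.2, 0.1])
  # Relative frequencies of each HMM state in the training alignments.
  state_priors = np.array([0.5, 0.3, 0.2])
  # Bayes' rule up to a constant: p(frame | state) is proportional to p(state | frame) / p(state).
  scaled_likelihoods = dnn_posteriors / state_priors
  print(scaled_likelihoods)   # usable in place of HMM emission probabilities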

Hybrid Model and State Aggregation

Introducing more complex mixture models and state aggregation (tying) methods enhances the model's ability to capture diverse speech features, making synthesized speech sound more natural. Key innovations here center on model structure design and parameter optimization: mixture emission densities capture multi-modal feature distributions within a state, while state tying lets acoustically similar context-dependent states share parameters, so they can be estimated robustly from limited training data. The sketch below illustrates both ideas.
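
The sketch (with hypothetical parameters throughout) shows a diagonal-covariance Gaussian mixture emission density for one HMM state, plus a toy tying table in which two context-dependent states share the same density.

  import numpy as np

  def gmm_logpdf(x, weights, means, variances):
      # Log-density of a diagonal-covariance Gaussian mixture at feature vector x:
      # log sum_m w_m N(x; mu_m, diag(var_m)), computed stably with logaddexp.
      log_comp = (np.log(weights)
                  - 0.5 * np.sum(np.log(2 * np.pi * variances)
                                 + (x - means) ** 2 / variances, axis=1))
      return np.logaddexp.reduce(log_comp)

  # Two mixture components over 3-dimensional feature vectors (toy values).
  weights = np.array([0.6, 0.4])
  means = np.array([[0.0, 1.0, -1.0], [2.0, 0.0, 1.0]])
  variances = np.array([[1.0, 0.5, 2.0], [1.5, 1.0, 0.5]])

  # State tying: two distinct context-dependent states share density index 0,
  # so their emission parameters are estimated from their pooled data.
  tied_states = {"a-b+c.state2": 0, "a-b+d.state2": 0}
  x = np.array([0.5, 0.8, -0.2])
  print(gmm_logpdf(x, weights, means, variances))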

Impact


Future research

HMM-based speech synthesis (HTS) has shown promise in generating speech with diverse speaking styles and demands relatively little storage (Drugman et al., 2009)[7]. This has led to several notable applications: a) personalized speech-to-speech translation systems[8][9][10] and b) personalized speech synthesizers for individuals with vocal disabilities[11]. HTS was also widely applied in commercial TTS systems from Google, Amazon, Apple, and Microsoft during the 2010s.

However, despite these advantages, HTS has limitations that require further work. First, enhancing the intelligibility of synthetic speech in noise remains a challenge: researchers have found that unmodified synthetic speech suffers a more pronounced loss of intelligibility in noisy environments than unmodified natural speech does [12]. Second, HMM-based synthesis is constrained by acoustic modeling accuracy: the statistical averaging used in parametric methods generates overly smooth speech trajectories, resulting in muffled speech, and pitch information is sometimes extracted incorrectly [13].

Looking ahead, there are several directions for further research. To improve the quality of synthesized speech, Kawahara et al. proposed pitch-adaptive spectral analysis combined with a surface reconstruction method and an excitation method using instantaneous frequency calculation [14]. Addressing F0 modeling, a statistical model of speech fundamental frequency contours has been proposed [15], based on a discrete-time stochastic-process formulation of the Fujisaki model [16] (a sketch of the underlying Fujisaki model follows below). On a different trajectory, speech synthesis using "big data" [17], including e-books, Internet radio, and podcasts, has shown potential; one approach in this domain is training HMMs on automatic speech transcriptions that contain errors. Additionally, a framework for analyzing emotion in text has been introduced for speech synthesis [18], allowing plain text to be rendered automatically in an appropriate speaking style. Speech synthesis via physical simulation, in particular an MRI-based articulatory speech synthesis system, is another avenue being explored [19].
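
Since the F0 modeling direction above builds on the Fujisaki model, a minimal sketch of that model's command-response formulation may help: log F0 is a baseline value plus the responses of critically damped second-order filters to phrase impulses and accent steps. All timings, amplitudes, and filter constants below are hypothetical illustration values.

  import numpy as np

  def phrase(t, alpha=3.0):
      # Phrase-component impulse response: Gp(t) = alpha^2 * t * exp(-alpha * t), t >= 0.
      tt = np.maximum(t, 0.0)
      return np.where(t >= 0, alpha ** 2 * tt * np.exp(-alpha * tt), 0.0)

  def accent(t, beta=20.0, gamma=0.9):
      # Accent-component step response, ceiling-limited at gamma.
      tt = np.maximum(t, 0.0)
      g = 1.0 - (1.0 + beta * tt) * np.exp(-beta * tt)
      return np.where(t >= 0, np.minimum(g, gamma), 0.0)

  t = np.linspace(0.0, 3.0, 300)                    # time axis in seconds
  log_f0 = (np.log(120.0)                           # baseline frequency Fb = 120 Hz
            + 0.5 * phrase(t - 0.1)                 # phrase command at t = 0.1 s
            + 0.4 * (accent(t - 0.5) - accent(t - 0.9)))  # accent from 0.5 s to 0.9 s
  f0 = np.exp(log_f0)                               # resulting F0 contour in Hz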

Members

Jingwen Shi

Yining Lei

Weixi Lai

Youyang Cai

Siqi Zheng

References


  1. Young, S., Evermann, G., Gales, M., Hain, T., Kershaw, D., Liu, X., ... & Woodland, P. (1994). The HTK Book. Cambridge University Engineering Department.
  2. Lee, K.-F., & Hon, H.-W. (1989). Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11), 1641-1648.
  3. Rabiner, L. R., & Juang, B. H. (1986). An introduction to hidden Markov models. IEEE ASSP Magazine, 3(1), 4-16.
  4. Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1), 164-171.
  5. Hinton, G. E., Deng, L., Yu, D., Dahl, G. E., Mohamed, A. R., Jaitly, N., ... & Kingsbury, B. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6), 82-97.
  6. Graves, A., Mohamed, A. R., & Hinton, G. (2013). Speech recognition with deep recurrent neural networks. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6645-6649).
  7. Drugman, T., Wilfart, G., & Dutoit, T. (2009). A deterministic plus stochastic model of the residual signal for improved parametric speech synthesis.
  8. Qian, Y., Liang, H., & Soong, F. K. (2009). A cross-language state sharing and mapping approach to bilingual (Mandarin-English) TTS. IEEE Transactions on Audio, Speech, and Language Processing, 17(6), 1231-1239.
  9. Wu, Y.-J., King, S., & Tokuda, K. (2008). Cross-language speaker adaptation for HMM-based speech synthesis. In Proceedings of ISCSLP (pp. 9-12).
  10. Oura, K., Yamagishi, J., Wester, M., King, S., & Tokuda, K. (2012). Analysis of unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis using KLD-based transform mapping. Speech Communication, 54(6), 703-714.
  11. Yamagishi, J., Veaux, C., King, S., & Renals, S. (2012). Speech synthesis technologies for individuals with vocal disabilities: Voice banking and reconstruction. Acoustical Science and Technology, 33, 1-5.
  12. King, S., & Karaiskos, V. (2010). The Blizzard Challenge 2010. In Proceedings of the Blizzard Challenge Workshop, Kyoto, Japan.
  13. Zen, H., Tokuda, K., & Black, A. W. (2009). Statistical parametric speech synthesis. Speech Communication, 51(11), 1039-1064.
  14. Kawahara, H., Masuda-Katsuse, I., & de Cheveigné, A. (1999). Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds. Speech Communication, 27, 187-207.
  15. Kameoka, H., Le Roux, J., & Ohishi, Y. (2010). A statistical model of speech F0 contours. In Proceedings of SAPA (pp. 43-48).
  16. Fujisaki, H., & Hirose, K. (1984). Analysis of voice fundamental frequency contours for declarative sentences of Japanese. Journal of the Acoustical Society of Japan (E), 5(4), 233-242.
  17. Ni, J., & Kawai, H. (2011). On the effects of transcript errors across dataset sizes on HMM-based voices. In Proceedings of the Autumn Meeting of the Acoustical Society of Japan (pp. 339-342).
  18. Bellegarda, J. R. (2011). A data-driven affective analysis framework toward naturally expressive speech synthesis. IEEE Transactions on Audio, Speech, and Language Processing, 19(5), 1113-1122.
  19. Kitamura, T., Takemoto, H., Mokhtari, P., & Hirai, T. (2006). MRI-based time-domain speech synthesis system. In Proceedings of the ASA/ASJ Joint Meeting.