Hidden Markov Models in Speech Synthesis
Introduction
Historical Context
Key Innovations
Impact
Future Research
HMM-based speech synthesis has shown promise in generating speech with diverse speaking styles, and it requires considerably less storage than concatenative approaches [1]. This has led to the emergence of several notable applications: (a) personalized speech-to-speech translation systems [2][3][4] and (b) personalized speech synthesizers for individuals with vocal disabilities [5].
Despite these advantages, however, HMM-based synthesis (HTS) has some notable limitations. First, the intelligibility of synthetic speech in noise remains a challenge: researchers have observed that unmodified synthetic speech suffers a more pronounced drop in intelligibility in noisy environments than unmodified natural speech does [6]. Second, HMM-based synthesis is constrained by the accuracy of its acoustic modeling. The statistical averaging inherent in parametric methods produces overly smooth speech parameter trajectories, which makes the synthesized speech sound muffled; in addition, pitch information is sometimes extracted incorrectly [7].
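The over-smoothing effect can be illustrated concretely. The following minimal Python sketch is a toy illustration, not code from any cited system; the Gaussian-bump trajectories and all parameter values are hypothetical. Several plausible realizations of a speech parameter trajectory, each peaking at a slightly different time, are averaged, and the mean trajectory retains only about half of the original peak height, mirroring how statistical averaging flattens spectral and F0 dynamics.

```python
import numpy as np

# Toy illustration of over-smoothing: average several plausible
# parameter trajectories whose peaks occur at slightly different times.
t = np.linspace(0.0, 1.0, 200)               # normalized time axis
peak_times = [0.40, 0.45, 0.50, 0.55, 0.60]  # hypothetical peak positions

# Each "trajectory" is a Gaussian bump of height 1 centered at one peak time.
trajectories = np.array([np.exp(-((t - p) ** 2) / (2 * 0.05 ** 2))
                         for p in peak_times])

mean_trajectory = trajectories.mean(axis=0)

print(f"peak of an individual trajectory: {trajectories[0].max():.2f}")
print(f"peak of the averaged trajectory:  {mean_trajectory.max():.2f}")
# The averaged peak is roughly half the original height: averaging has
# smoothed away the dynamics, which perceptually corresponds to
# "muffled" synthesized speech.
```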
Looking ahead, there are several promising research directions. To improve the quality of synthesized speech, Kawahara et al. proposed pitch-adaptive spectral analysis combined with a surface reconstruction method and an excitation method based on instantaneous-frequency calculation [8]. To address F0 modeling, a statistical model of speech fundamental frequency contours has been proposed [9], based on a discrete-time stochastic-process formulation of the Fujisaki model [10] (the underlying model is sketched after this paragraph). In another direction, speech synthesis from "big data" [11], including e-books, Internet radio, and podcasts, has shown potential; one approach in this domain is to train HMMs on automatic speech transcriptions that contain errors. Additionally, a framework for analyzing emotion in text has been introduced for speech synthesis [12], allowing plain text to be automatically rendered with an appropriate speaking style. Finally, speech synthesis via physical simulation, in particular an MRI-based articulatory speech synthesis system, is another avenue being explored [13].
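For context, the deterministic Fujisaki model underlying the stochastic formulation of [9][10] represents a log-F0 contour as a baseline frequency plus the responses of two critically damped second-order linear filters: a phrase-control filter driven by impulse commands and an accent-control filter driven by stepwise commands. The sketch below is a minimal Python rendering of this classic model; the command timings, amplitudes, and the constants alpha, beta, and gamma are illustrative values, and the function names are ours rather than taken from [9] or [10].

```python
import numpy as np

# Classic (deterministic) Fujisaki model of a log-F0 contour:
#   ln F0(t) = ln Fb + sum_i Ap_i * Gp(t - T0_i)
#                    + sum_j Aa_j * [Ga(t - T1_j) - Ga(t - T2_j)]
# Gp is the phrase-control (impulse-response) filter and Ga the
# accent-control (step-response) filter, clipped at a ceiling GAMMA.

ALPHA, BETA, GAMMA = 3.0, 20.0, 0.9  # illustrative time constants / ceiling

def Gp(t):
    """Phrase control: filter response to an impulse command."""
    return np.where(t >= 0, ALPHA ** 2 * t * np.exp(-ALPHA * t), 0.0)

def Ga(t):
    """Accent control: filter response to a step command, clipped at GAMMA."""
    r = np.where(t >= 0, 1.0 - (1.0 + BETA * t) * np.exp(-BETA * t), 0.0)
    return np.minimum(r, GAMMA)

def fujisaki_f0(t, fb, phrase_cmds, accent_cmds):
    """phrase_cmds: (T0, Ap) pairs; accent_cmds: (T1, T2, Aa) triples."""
    log_f0 = np.full_like(t, np.log(fb))
    for T0, Ap in phrase_cmds:
        log_f0 += Ap * Gp(t - T0)
    for T1, T2, Aa in accent_cmds:
        log_f0 += Aa * (Ga(t - T1) - Ga(t - T2))
    return np.exp(log_f0)

# Hypothetical utterance: one phrase command and two accent commands.
t = np.linspace(0.0, 2.0, 400)
f0 = fujisaki_f0(t, fb=120.0,
                 phrase_cmds=[(0.0, 0.5)],
                 accent_cmds=[(0.3, 0.6, 0.4), (1.0, 1.4, 0.3)])
print(f"F0 range: {f0.min():.1f}-{f0.max():.1f} Hz")
```

The statistical model in [9] recasts such command sequences in a discrete-time stochastic-process framework, so that they can be estimated from observed F0 contours rather than specified by hand.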
Members
Jingwen Shi
Yining Lei
Weixi Lai
Youyang Cai
Siqi Zheng
References
- ↑ T. Drugman, G. Wilfart, and T. Dutoit, “A deterministic plus stochastic model of the residual signal for improved parametric speech synthesis,” in Proc. Interspeech, 2009.
- ↑ Y. Qian, H. Liang, and F. K. Soong, “A cross-language state sharing and mapping approach to bilingual (Mandarin – English) TTS,” IEEE Trans. Audio Speech Lang. Process., vol. 17, no. 6, pp. 1231–1239, Aug. 2009.
- ↑ Y.-J. Wu, S. King, and K. Tokuda, “Cross-language speaker adaptation for HMM-based speech synthesis,” in Proc. ISCSLP, 2008, pp. 9–12.
- ↑ K. Oura, J. Yamagishi, M. Wester, S. King, and K. Tokuda, “Analysis of unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis using KLD-based transform mapping,” Speech Commun., vol. 54, no. 6, pp. 703–714, 2012.
- ↑ J. Yamagishi, C. Veaux, S. King, and S. Renals, “Speech synthesis technologies for individuals with vocal disabilities: voice banking and reconstruction,” Acoust. Sci. Technol., vol. 33, pp. 1–5, 2012.
- ↑ S. King and V. Karaiskos, “The Blizzard Challenge 2010,” in Proc. Blizzard Challenge Workshop, Kyoto, Japan, Sep. 2010.
- ↑ H. Zen, K. Tokuda, and A. W. Black, “Statistical parametric speech synthesis,” Speech Commun., vol. 51, no. 11, pp. 1039–1064, 2009.
- ↑ H. Kawahara, I. Masuda-Katsuse, and A. de Cheveigné, “Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds,” Speech Commun., vol. 27, pp. 187–207, 1999.
- ↑ H. Kameoka, J. Le Roux, and Y. Ohishi, “A statistical model of speech F0 contours,” in Proc. SAPA, 2010, pp. 43–48.
- ↑ H. Fujisaki and K. Hirose, “Analysis of voice fundamental frequency contours for declarative sentences of Japanese,” J. Acoust. Soc. Jpn. (E), vol. 5, no. 4, pp. 233–242, 1984.
- ↑ J. Ni and H. Kawai, “On the effects of transcript errors across dataset sizes on HMM-based voices,” in Proc. Autumn Meeting of ASJ, 2011, pp. 339–342.
- ↑ J. R. Bellegarda, “A data-driven affective analysis framework toward naturally expressive speech synthesis,” IEEE Trans. Audio Speech Lang. Process., vol. 19, no. 5, pp. 1113–1122, 2011.
- ↑ T. Kitamura, H. Takemoto, P. Mokhtari, and T. Hirai, “MRI-based time-domain speech synthesis system,” in Proc. ASA/ASJ joint meeting, 2006.