Hidden Markov Models in Speech Synthesis

== Introduction ==
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.


== Historical Context ==
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.


== Key Innovations ==
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.


== Impact ==
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.


== Future research ==
HMM-based speech synthesis has shown promise in generating speech with diverse speaking styles while requiring comparatively little storage (Drugman et al., 2009).<ref>T. Drugman, G. Wilfart, and T. Dutoit, “A deterministic plus stochastic model of the residual signal for improved parametric speech synthesis,” 2009.</ref> This has led to the emergence of several notable applications: a) personalized speech-to-speech translation systems<ref>Y. Qian, H. Liang, and F. K. Soong, “A cross-language state sharing and mapping approach to bilingual (Mandarin–English) TTS,” IEEE Trans. Audio Speech Lang. Process., vol. 17, no. 6, pp. 1231–1239, Aug. 2009.</ref><ref>Y.-J. Wu, S. King, and K. Tokuda, “Cross-language speaker adaptation for HMM-based speech synthesis,” in Proc. ISCSLP, 2008, pp. 9–12.</ref><ref>K. Oura, J. Yamagishi, M. Wester, S. King, and K. Tokuda, “Analysis of unsupervised cross-lingual speaker adaptation for HMM-based speech synthesis using KLD-based transform mapping,” Speech Commun., vol. 54, no. 6, pp. 703–714, 2012.</ref> and b) personalized speech synthesizers for individuals with vocal disabilities.<ref>J. Yamagishi, C. Veaux, S. King, and S. Renals, “Speech synthesis technologies for individuals with vocal disabilities: voice banking and reconstruction,” Acoustical Science & Technology, vol. 33, pp. 1–5, 2012.</ref>

Despite these advantages, HMM-based synthesis (HTS) has a few limitations. First, enhancing the intelligibility of synthetic speech in noise remains a challenge: researchers have found that unmodified synthetic speech loses intelligibility in noisy environments more sharply than unmodified natural speech does.<ref>S. King and V. Karaiskos, “The Blizzard Challenge 2010,” in Proc. Blizzard Challenge Workshop, Kyoto, Japan, Sep. 2010.</ref> Second, HMM-based synthesis is constrained by the accuracy of its acoustic modeling: the statistical averaging inherent in parametric methods produces overly smooth speech parameter trajectories, which makes the output sound muffled, and pitch information is sometimes extracted incorrectly.<ref>H. Zen, K. Tokuda, and A. W. Black, “Statistical parametric speech synthesis,” Speech Commun., vol. 51, no. 11, pp. 1039–1064, 2009.</ref>
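This over-smoothing effect can be illustrated in miniature. The toy sketch below is not the actual HTS parameter-generation algorithm (which additionally uses dynamic features and maximum-likelihood trajectory smoothing); it simply generates a one-dimensional acoustic trajectory directly from per-state Gaussian means and compares its variance with samples drawn from the full state distributions. All state parameters here are invented for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D acoustic feature (e.g., one cepstral coefficient):
# three HMM states, each with a mean, a variance, and a duration in frames.
means = np.array([1.0, 3.0, 1.5])
variances = np.array([0.4, 0.6, 0.3])
durations = np.array([30, 40, 30])

# "Natural" speech: frames sampled from each state's full distribution.
natural = np.concatenate([
    rng.normal(m, np.sqrt(v), d)
    for m, v, d in zip(means, variances, durations)
])

# Mean-based generation: every frame is the state mean, so all
# within-state variation is averaged away.
generated = np.concatenate([np.full(d, m) for m, d in zip(means, durations)])

print(f"natural variance:   {natural.var():.3f}")
print(f"generated variance: {generated.var():.3f}")  # noticeably smaller
</syntaxhighlight>

The reduced variance of the generated trajectory is the numerical counterpart of the perceptual muffling described above.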
 
Looking ahead, there are several promising research directions. To improve the quality of synthesized speech, Kawahara et al. proposed pitch-adaptive spectral analysis combined with a surface reconstruction method and an excitation method based on instantaneous frequency calculation.<ref>H. Kawahara, I. Masuda-Katsuse, and A. de Cheveigné, “Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds,” Speech Commun., vol. 27, pp. 187–207, 1999.</ref> Addressing F0 modeling, a statistical model of speech fundamental frequency contours has been proposed<ref>H. Kameoka, J. Le Roux, and Y. Ohishi, “A statistical model of speech F0 contours,” in Proc. SAPA, 2010, pp. 43–48.</ref> based on a discrete-time stochastic-process formulation of the Fujisaki model.<ref>H. Fujisaki and K. Hirose, “Analysis of voice fundamental frequency contours for declarative sentences of Japanese,” J. Acoust. Soc. Jpn. (E), vol. 5, no. 4, pp. 233–242, 1984.</ref> In a different direction, speech synthesis from “big data”<ref>J. Ni and H. Kawai, “On the effects of transcript errors across dataset sizes on HMM-based voices,” in Proc. Autumn Meeting of ASJ, 2011, pp. 339–342.</ref> such as ebooks, Internet radio, and podcasts has shown potential; training HMMs on automatic speech transcriptions that contain errors is one approach in this domain. Additionally, a framework for analyzing emotion in text has been introduced for speech synthesis<ref>J. R. Bellegarda, “A data-driven affective analysis framework toward naturally expressive speech synthesis,” IEEE Trans. Audio Speech Lang. Process., vol. 19, no. 5, pp. 1113–1122, 2011.</ref>, allowing plain text to be rendered automatically in an appropriate speaking style. Speech synthesis via physical simulation, in particular an MRI-based articulatory speech synthesis system, is another avenue being explored.<ref>T. Kitamura, H. Takemoto, P. Mokhtari, and T. Hirai, “MRI-based time-domain speech synthesis system,” in Proc. ASA/ASJ Joint Meeting, 2006.</ref>
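For concreteness, the sketch below implements the classic continuous-time Fujisaki model referred to above (not the discrete-time stochastic-process version of Kameoka et al.): log F0 is modeled as a baseline value plus phrase components (impulse responses of a critically damped second-order filter) and accent components (differences of the filter's step responses). The command times and amplitudes are invented for illustration; alpha, beta, and gamma are set to commonly cited default values.

<syntaxhighlight lang="python">
import numpy as np

def phrase_component(t, alpha=3.0):
    """Phrase-control impulse response: Gp(t) = alpha^2 * t * exp(-alpha*t), t >= 0."""
    return np.where(t >= 0, alpha**2 * t * np.exp(-alpha * t), 0.0)

def accent_component(t, beta=20.0, gamma=0.9):
    """Accent-control step response, ceiling-limited at gamma."""
    g = np.where(t >= 0, 1.0 - (1.0 + beta * t) * np.exp(-beta * t), 0.0)
    return np.minimum(g, gamma)

t = np.linspace(0.0, 2.5, 500)   # time axis in seconds
ln_fb = np.log(120.0)            # assumed baseline F0 of 120 Hz

# One phrase command at t = 0 s and one accent command spanning 0.5-0.9 s,
# with illustrative amplitudes of 0.5 and 0.4.
ln_f0 = (ln_fb
         + 0.5 * phrase_component(t - 0.0)
         + 0.4 * (accent_component(t - 0.5) - accent_component(t - 0.9)))

f0 = np.exp(ln_f0)               # back to Hz
print(f"F0 range: {f0.min():.1f} - {f0.max():.1f} Hz")
</syntaxhighlight>

In the statistical formulations cited above, the timing and amplitude of such phrase and accent commands are treated as latent variables to be inferred, rather than fixed by hand as in this sketch.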


== References ==
<references />


== Members ==
Jingwen Shi
Yining Lei
Weixi Lai
Youyang Cai
Siqi Zheng
