=== Emotion and Intonation Synthesis ===

Integrating emotion and intonation information into the HMM framework makes synthesized speech richer and more vivid, allowing it to convey a broader range of emotional nuances and expressions. Key innovations in this area concern model structure, feature extraction, and the use of emotion-annotated data, and together they have brought HMM-based synthesis noticeably closer to natural, expressive speech.

==== Emotional Modeling ====

# Emotion-Driven HMM Model: An emotion-driven HMM takes emotion tags as input features, allowing the model to adjust the characteristics of the synthetic speech according to the emotional information (a minimal code sketch follows the lists below). [Mills et al., 2012]<ref>Mills, G., & Schuller, B. W. (2012). [https://ieeexplore.ieee.org/document/6267902/ HMM-based synthesis of emotional speech with multi-formant filter representation]. In Proceedings of the 4th International Workshop on Emotion Corpora and Recognition (pp. 9-14).</ref>
# Modeling Emotion in a Multidimensional Space: Emotion is treated as a point in a multi-dimensional space, and different emotions are synthesized by modeling the distribution of speech features over that space. [Schuller et al., 2003]<ref>Schuller, B., Batliner, A., Seppi, D., Steidl, S., Vogt, T., Wagner, J., ... & Devillers, L. (2003). [https://ieeexplore.ieee.org/document/1234102/ The relevance of feature type for the automatic classification of emotional user states: Low level descriptors and functionals]. In European Conference on Speech Communication and Technology.</ref>

==== Intonation Modeling ====

# Intonation Modeling Based on Prosodic Patterns: Prosodic pattern modeling simulates intonation by adjusting pitch and loudness. [Hunt and Black, 1996]<ref>Hunt, A., & Black, A. (1996). [https://ieeexplore.ieee.org/document/547974/ Unit selection in a concatenative speech synthesis system using a large speech database]. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 373-376).</ref>
# F0 Contour Modeling: Intonation is controlled by modeling the contour of the fundamental frequency (F0), which represents pitch movement over time (see the second sketch below). [Tóth and Bánhalmi, 2011]<ref>Tóth, L., & Bánhalmi, A. (2011). [https://ieeexplore.ieee.org/document/6118115/ HMM-based synthesis of F0 contours for TTS]. In Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH) (pp. 749-752).</ref>
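To make the emotion-driven idea concrete, the following is a minimal Python sketch of emotion-conditioned generation: the emotion tag selects a set of per-state Gaussian output distributions, from which a log-F0 trajectory is sampled. The state counts, distribution parameters, and function names here are invented for illustration and are not taken from Mills et al. or any particular system.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical emotion-dependent output distributions: for each emotion tag,
# each HMM state emits log-F0 from a Gaussian (mean, stddev). The numbers
# are purely illustrative.
EMOTION_STATE_PARAMS = {
    "neutral": [(5.0, 0.05), (5.1, 0.05), (5.0, 0.05)],
    "happy":   [(5.2, 0.08), (5.4, 0.10), (5.3, 0.08)],  # raised, livelier pitch
    "sad":     [(4.8, 0.03), (4.7, 0.03), (4.6, 0.03)],  # lowered, flatter pitch
}

def synthesize_logf0(emotion, frames_per_state=20, seed=0):
    """Sample a log-F0 trajectory from emotion-specific state Gaussians.

    This mimics the core idea of an emotion-driven HMM: the emotion tag
    selects which set of state output distributions is used for generation.
    """
    rng = np.random.default_rng(seed)
    states = EMOTION_STATE_PARAMS[emotion]
    return np.concatenate(
        [rng.normal(mean, std, frames_per_state) for mean, std in states]
    )

if __name__ == "__main__":
    for tag in ("neutral", "happy", "sad"):
        traj = synthesize_logf0(tag)
        print(f"{tag:8s} mean log-F0 = {traj.mean():.2f}, std = {traj.std():.3f}")
</syntaxhighlight>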
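The F0-contour idea can be sketched in the same spirit: per-state mean F0 values and state durations are expanded into a frame-level contour and then smoothed. Real HMM-based synthesizers derive the contour via maximum-likelihood parameter generation with delta features; the moving-average smoothing and all numeric values below are simplifying assumptions for illustration only.

<syntaxhighlight lang="python">
import numpy as np

def generate_f0_contour(state_means_hz, state_durations, smooth_window=9):
    """Expand per-state mean F0 values into a smoothed frame-level contour.

    A moving average stands in for the parameter-generation smoothing used
    in real HMM-based TTS; it merely softens the steps between states.
    """
    # Piecewise-constant contour: repeat each state's mean for its duration.
    contour = np.repeat(np.asarray(state_means_hz, dtype=float), state_durations)
    # Edge-pad so the smoothed contour keeps the original length.
    padded = np.pad(contour, smooth_window // 2, mode="edge")
    kernel = np.ones(smooth_window) / smooth_window
    return np.convolve(padded, kernel, mode="valid")

if __name__ == "__main__":
    # Hypothetical 5-state model for one syllable: a rise-fall pitch pattern.
    means = [110, 130, 150, 135, 115]   # Hz
    durations = [8, 10, 12, 10, 8]      # frames
    f0 = generate_f0_contour(means, durations)
    print(f"{len(f0)} frames, F0 range {f0.min():.1f}-{f0.max():.1f} Hz")
</syntaxhighlight>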