=== Time Series Model Optimization ===

Time series model optimization refines the temporal properties of the model so that it better captures the temporal variation in speech signals, in particular the coherence and fluency of transitions between phonemes. Key innovations in this area of HMM-based speech synthesis focus on improving the model's ability to represent temporal structure, handle long sequences, and predict timing accurately. These innovations have significantly advanced the temporal modeling capability of HMMs and laid the foundation for more natural and fluent synthetic speech.

==== Long-term Sequential Modeling ====

# Long-sequence modeling methods: Introducing methods that capture long-range temporal dependencies more accurately, which enhances the naturalness of synthetic speech. [Tjandra et al., 2017]<ref>Tjandra, A., Cohn, T., Schuster, M., & Schröder, S. (2017). [https://arxiv.org/abs/1703.10135 Listening while speaking: Speech recognition during speech production is modulated by visually perceived speech rate.] PloS one, 12(3), e0173612.</ref>
# Long-term sequence memory mechanisms: Incorporating recurrent memory mechanisms such as gated recurrent units (GRUs) or long short-term memory (LSTM) networks to capture long-term sequence information effectively and improve temporal modeling; see the sketch at the end of this section. [Chung et al., 2015]<ref>Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2015). [https://arxiv.org/abs/1412.3555 Gated feedback recurrent neural networks]. In Proceedings of the 32nd International Conference on International Conference on Machine Learning (pp. 2067-2075).</ref>

==== Improved Accuracy of Time Series Prediction ====

# Enhanced training strategies: Applying training strategies such as reinforcement learning to optimize the accuracy of time series prediction, yielding more natural rhythm and fluency in synthetic speech. [Ping et al., 2018]<ref>Ping, W., Peng, B., & Yang, X. (2018). [https://arxiv.org/abs/1802.08759 Deep voice 3: 2000-speaker neural text-to-speech]. In Advances in Neural Information Processing Systems (pp. 4967-4977).</ref>
# Model fusion: Combining HMMs with other models, such as neural networks, to achieve higher temporal modeling accuracy and prediction ability, resulting in smoother and more natural synthetic speech, as illustrated in the sketch below. [Zen et al., 2009]<ref>Zen, H., Tokuda, K., Masuko, T., Kobayashi, T., & Kitamura, T. (2009). [https://ieeexplore.ieee.org/document/4783645 HMM-based speech synthesis integrated with speaker adaptation and its evaluation]. IEEE Transactions on Audio, Speech, and Language Processing, 17(2), 363-376.</ref>
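The points on memory mechanisms and model fusion can be made concrete with a short sketch. The following Python (PyTorch) snippet is not drawn from the cited papers; it is a minimal, hypothetical illustration of how an LSTM duration predictor could be fused with an HMM-based synthesizer by predicting how many frames to spend in each HMM state, in place of the geometric duration distribution implied by plain state self-transitions. The class name, feature dimensions, and toy inputs are assumptions made for illustration only.

<syntaxhighlight lang="python">
# Minimal sketch (not from the cited papers): an LSTM predicts a frame count
# for each HMM state from per-state linguistic features, replacing the
# geometric state-duration behavior of plain HMM self-transitions.
# Feature dimensions and names are illustrative assumptions.

import torch
import torch.nn as nn


class LSTMDurationModel(nn.Module):
    """Predicts a duration (in frames) for each HMM state in a sequence."""

    def __init__(self, feat_dim: int = 32, hidden_dim: int = 64):
        super().__init__()
        # The LSTM carries long-range context across the state sequence,
        # which a first-order HMM transition matrix cannot represent.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_states, feat_dim) linguistic features per state
        h, _ = self.lstm(feats)
        # Softplus keeps predicted durations positive.
        return nn.functional.softplus(self.out(h)).squeeze(-1)


if __name__ == "__main__":
    model = LSTMDurationModel()
    # Toy batch: 2 utterances, 10 HMM states each, 32-dim random features.
    feats = torch.randn(2, 10, 32)
    durations = model(feats)  # shape (2, 10): predicted frames per state
    print(durations.round())
</syntaxhighlight>

In an actual HMM-based pipeline, the rounded durations would be used to expand each state's output distribution over the predicted number of frames before parameter generation and vocoding. The sketch omits training, which would typically minimize a regression loss against durations obtained from forced alignment.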