=== End-to-End Learning ===
End-to-end learning maps input text directly to acoustic features, avoiding manually designed feature-extraction steps, simplifying the system, and improving performance. Where HMM-based synthesis chains together separately built components (text analysis, duration modeling, parameter generation, vocoding), the key innovations in end-to-end learning center on designing and training a single model that maps text straight to synthesized speech, bypassing the cumbersome intermediate steps of traditional pipelines. These innovations have made end-to-end learning a significant research direction in speech synthesis, greatly simplifying the system flow while improving the quality and naturalness of synthesized speech.

==== [[Development of End-to-End Models|End-to-End Model]] Design ====
# Text-to-Speech End-to-End Model: an integrated neural network takes text as input and outputs the corresponding acoustic features, achieving a direct mapping from text to acoustics (see the model sketch at the end of this section). [Wang et al., 2017]<ref name=":1">Wang, Y., Skerry-Ryan, R. J., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., ... & Saurous, R. A. (2017). [https://arxiv.org/abs/1703.10135 Tacotron: Towards end-to-end speech synthesis]. arXiv preprint arXiv:1703.10135.</ref>
# Attention Mechanism: an attention-based encoder-decoder aligns each generated acoustic frame with the relevant input positions, capturing long-distance dependencies and improving the model's handling of longer texts. [Shen et al., 2018]<ref>Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., ... & Wu, Y. (2018). [https://arxiv.org/abs/1712.05884 Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions]. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 4779-4783).</ref>

==== Joint Training ====
# Joint Training of All Components: the encoder, attention, and decoder are trained together on paired text and audio, so linguistic and acoustic representations are optimized jointly rather than in separate stages, yielding more natural synthesis results (see the training-loop sketch below). [Wang et al., 2017]<ref name=":1" />
# Convolutional Sequence-to-Sequence Modeling: replacing recurrence with convolution and attention speeds up training and lets a single end-to-end model scale to thousands of speakers while still producing fluent, natural-sounding acoustic feature sequences. [Ping et al., 2017]<ref>Ping, W., Peng, K., Gibiansky, A., ... & Miller, J. (2017). [https://arxiv.org/abs/1710.07654 Deep Voice 3: 2000-speaker neural text-to-speech]. arXiv preprint arXiv:1710.07654.</ref>
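
To make the direct text-to-acoustic-feature mapping and the attention mechanism above concrete, the following is a minimal, illustrative sketch of a Tacotron-style sequence-to-sequence model, written in PyTorch. It is not the architecture of Wang et al. or Shen et al.: the class name <code>TinyTTS</code>, the module sizes, and all hyperparameters are hypothetical, and real systems add many components (pre-nets, post-nets, stop-token prediction, a neural vocoder) that are omitted here.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTTS(nn.Module):
    """Toy Tacotron-style model: character IDs in, mel-spectrogram frames out."""

    def __init__(self, vocab_size=64, emb_dim=128, enc_dim=128, dec_dim=256, n_mels=80):
        super().__init__()
        # Encoder: character embeddings -> bidirectional GRU states.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, enc_dim // 2, batch_first=True,
                              bidirectional=True)
        # Additive (Bahdanau-style) attention over the encoder states.
        self.attn_query = nn.Linear(dec_dim, enc_dim, bias=False)
        self.attn_key = nn.Linear(enc_dim, enc_dim, bias=False)
        self.attn_score = nn.Linear(enc_dim, 1, bias=False)
        # Autoregressive decoder: previous mel frame + context -> next frame.
        self.decoder = nn.GRUCell(n_mels + enc_dim, dec_dim)
        self.frame_proj = nn.Linear(dec_dim, n_mels)
        self.n_mels = n_mels
        self.dec_dim = dec_dim

    def forward(self, text_ids, n_frames):
        # text_ids: (B, T_text) integer character IDs.
        enc_out, _ = self.encoder(self.embedding(text_ids))   # (B, T_text, enc_dim)
        keys = self.attn_key(enc_out)
        B = text_ids.size(0)
        h = enc_out.new_zeros(B, self.dec_dim)                # decoder state
        frame = enc_out.new_zeros(B, self.n_mels)             # all-zero "go" frame
        outputs = []
        for _ in range(n_frames):
            # Attention: score every text position against the decoder state.
            query = self.attn_query(h).unsqueeze(1)            # (B, 1, enc_dim)
            scores = self.attn_score(torch.tanh(keys + query)) # (B, T_text, 1)
            weights = F.softmax(scores, dim=1)
            context = (weights * enc_out).sum(dim=1)           # (B, enc_dim)
            # Decode one mel frame conditioned on the context and last frame.
            h = self.decoder(torch.cat([frame, context], dim=-1), h)
            frame = self.frame_proj(h)
            outputs.append(frame)
        return torch.stack(outputs, dim=1)                     # (B, n_frames, n_mels)

if __name__ == "__main__":
    # Hypothetical usage: map 2 character sequences to 100 mel frames each.
    model = TinyTTS()
    mel = model(torch.randint(0, 64, (2, 30)), n_frames=100)
    print(mel.shape)  # torch.Size([2, 100, 80])
</syntaxhighlight>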
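
Joint training can then be expressed as a single optimization loop over paired text and audio. The sketch below assumes the <code>TinyTTS</code> model above and uses a stand-in random batch in place of a real dataset; the single L1 reconstruction loss drives gradients through the decoder, attention, and encoder at once, which is the sense in which all components are trained jointly. Production systems typically also use teacher forcing (feeding the ground-truth previous frame during training) and auxiliary losses, which are omitted here.

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

# Stand-in for a real dataset: one random batch of (text_ids, mel_target)
# with shapes (B, T_text) and (B, T_frames, n_mels).
dataloader = [(torch.randint(0, 64, (4, 30)), torch.randn(4, 120, 80))]

model = TinyTTS()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for text_ids, mel_target in dataloader:
    optimizer.zero_grad()
    # Run the whole text -> mel pipeline in one forward pass.
    mel_pred = model(text_ids, n_frames=mel_target.size(1))
    # One reconstruction loss updates encoder, attention, and decoder
    # together: there are no separately optimized pipeline stages.
    loss = F.l1_loss(mel_pred, mel_target)
    loss.backward()
    optimizer.step()
</syntaxhighlight>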