==== Subtheme 1: State-of-the-art Models ====

===== Tan, X., Chen, J., Liu, H., Cong, J., Zhang, C., Liu, Y., Wang, X., Leng, Y., Yi, Y., He, L., Soong, F., Qin, T., Zhao, S., & Liu, T.-Y. (2022). NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality. ''arXiv preprint arXiv:2205.04421''. =====
* Summary: NaturalSpeech proposes a text-to-speech (TTS) system that achieves human-level quality. It leverages a variational autoencoder (VAE) to bridge the gap between text and speech waveforms (a toy sketch of this idea follows the entry).
* RQ (Research Question): Can a TTS system achieve speech quality indistinguishable from that of humans?
* Hypothesis: By incorporating a VAE and specific techniques to improve the model's understanding of text and speech features, NaturalSpeech can generate speech indistinguishable from human recordings.
* Conclusion: The paper argues that NaturalSpeech achieves human-level speech quality based on statistical measures (MOS and CMOS) in human evaluations.
* Critical Observations: The evaluation relies on subjective human ratings, which might be influenced by factors beyond speech quality. The research focuses on a single benchmark dataset, limiting generalizability. The paper does not explore how NaturalSpeech performs on diverse speaking styles or accents.
* Relevance: This is related to my study because it provides a definition of human-level quality, and this particular model has achieved the highest Mean Opinion Score (MOS) recorded thus far. Hence, I am considering using this model as a basis for my study.
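The VAE bridge can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch skeleton: a text-side network predicts a prior over latents, a speech-side posterior encoder sees real frames, and a toy decoder reconstructs speech from a latent sample. All module names, pooling choices, and dimensions are illustrative assumptions, far simpler than the actual NaturalSpeech modules (which include a phoneme encoder, a differentiable durator, and a bidirectional prior/posterior module).

<syntaxhighlight lang="python">
# Minimal, illustrative VAE-style TTS skeleton (not the NaturalSpeech code).
# A text encoder predicts a prior over latents; a posterior encoder sees
# real speech frames; the decoder reconstructs speech from the latent.
import torch
import torch.nn as nn

class TextPrior(nn.Module):
    def __init__(self, vocab=100, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, tokens):                  # tokens: (B, T_text)
        h = self.emb(tokens).mean(dim=1)        # crude pooling over text
        return self.mu(h), self.logvar(h)

class SpeechPosterior(nn.Module):
    def __init__(self, n_mels=80, dim=64):
        super().__init__()
        self.net = nn.Linear(n_mels, dim)
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, mels):                    # mels: (B, T_frames, n_mels)
        h = torch.tanh(self.net(mels)).mean(dim=1)
        return self.mu(h), self.logvar(h)

def kl(mu_q, lv_q, mu_p, lv_p):
    # KL(q || p) between two diagonal Gaussians, summed over latent dims.
    return 0.5 * (lv_p - lv_q + (lv_q.exp() + (mu_q - mu_p) ** 2) / lv_p.exp() - 1).sum(-1)

# Training step (sketch): reconstruct speech from a posterior sample while
# pulling the text-predicted prior toward the speech posterior.
prior, post = TextPrior(), SpeechPosterior()
decoder = nn.Linear(64, 80)                     # stands in for a waveform decoder
tokens = torch.randint(0, 100, (2, 12))
mels = torch.randn(2, 50, 80)

mu_p, lv_p = prior(tokens)
mu_q, lv_q = post(mels)
z = mu_q + torch.randn_like(mu_q) * (0.5 * lv_q).exp()   # reparameterization trick
recon = decoder(z).unsqueeze(1).expand_as(mels)          # toy reconstruction
loss = ((recon - mels) ** 2).mean() + kl(mu_q, lv_q, mu_p, lv_p).mean()
loss.backward()
</syntaxhighlight>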
===== Kong, J., Kim, J., & Bae, J. (2020). HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis. ''Advances in Neural Information Processing Systems'', ''33'', 17022-17033. =====
* Summary: This article introduces HiFi-GAN, a model that efficiently synthesizes high-quality speech audio. HiFi-GAN consists of a generator and two discriminators: a multi-scale discriminator and a multi-period discriminator (a toy sketch of the multi-period idea follows the entry). Training stability and model performance are improved by training the generator and discriminators adversarially and by adding two further loss functions.
* RQ: Can HiFi-GAN efficiently synthesize speech audio at human-level quality, while also generalizing to unseen speakers and adapting to various generator configurations?
* Hypothesis: By leveraging the characteristic periodic patterns of speech audio and designing discriminators that capture these patterns effectively, it is possible to develop a speech synthesis model, HiFi-GAN, that outperforms existing models in both synthesis quality and speed.
* Conclusion: HiFi-GAN advances speech synthesis by efficiently generating high-quality audio, surpassing existing models in both synthesis quality and speed. The model is robust across various scenarios, including unseen speakers and noisy inputs, and its low latency and memory requirements make on-device natural speech synthesis feasible. Additionally, the flexibility of generator configurations allows adaptation without an extensive hyper-parameter search.
* Critical observations: Because HiFi-GAN is widely applicable in speech synthesis, it may have ethical or social impacts, including concerns about voice cloning, privacy, and misinformation.
* Relevance: This paper is closely related to the topic of non-language-specific text-to-speech, as it demonstrates a breakthrough of HiFi-GAN models in synthesizing high-quality speech with generalization capabilities and the ability to handle inputs in different languages and speaking styles.
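To illustrate the multi-period discriminator, the sketch below reshapes a 1-D waveform into a 2-D grid of width ''p'', so that samples ''p'' steps apart line up in the same column, and then applies 2-D convolutions. The layer sizes here are toy assumptions, though the prime periods {2, 3, 5, 7, 11} do come from the paper.

<syntaxhighlight lang="python">
# Illustrative multi-period discriminator in the spirit of HiFi-GAN
# (not the authors' implementation; layer sizes are toy values).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeriodDiscriminator(nn.Module):
    """Views a 1-D waveform as a 2-D grid of width `period`, so samples
    that are `period` steps apart fall in the same column."""
    def __init__(self, period):
        super().__init__()
        self.period = period
        self.convs = nn.Sequential(
            nn.Conv2d(1, 8, (5, 1), stride=(3, 1), padding=(2, 0)),
            nn.LeakyReLU(0.1),
            nn.Conv2d(8, 16, (5, 1), stride=(3, 1), padding=(2, 0)),
            nn.LeakyReLU(0.1),
            nn.Conv2d(16, 1, (3, 1), padding=(1, 0)),  # per-patch real/fake score
        )

    def forward(self, wav):                    # wav: (B, 1, T)
        b, c, t = wav.shape
        pad = (-t) % self.period               # right-pad so T divides evenly
        wav = F.pad(wav, (0, pad), mode="reflect")
        wav = wav.view(b, c, -1, self.period)  # (B, 1, T/period, period)
        return self.convs(wav)

# HiFi-GAN uses the prime periods {2, 3, 5, 7, 11}; each sub-discriminator
# is sensitive to a different periodic structure in the waveform.
mpd = nn.ModuleList(PeriodDiscriminator(p) for p in (2, 3, 5, 7, 11))
wav = torch.randn(4, 1, 8000)
scores = [d(wav) for d in mpd]
print([s.shape for s in scores])
</syntaxhighlight>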
===== Huang, R., Huang, J., Yang, D., Ren, Y., Liu, L., Li, M., ... & Zhao, Z. (2023, July). Make-An-Audio: Text-to-audio generation with prompt-enhanced diffusion models. In ''International Conference on Machine Learning'' (pp. 13916-13932). PMLR. =====
* Summary: The article introduces Make-An-Audio, a system utilizing a prompt-enhanced diffusion model for text-to-audio generation, aiming to improve the naturalness and expressiveness of synthesized audio (a toy sketch of the latent-diffusion objective follows the entry).
* RQ: How does the model improve the naturalness of synthesized audio?
* Hypothesis: By introducing pseudo prompt enhancement and a spectrogram autoencoder, the model can effectively exploit unsupervised language-free data and higher-level semantic understanding to enhance the naturalness and expressiveness of synthesis.
* Conclusion: Make-An-Audio successfully enhances the naturalness and expressiveness of synthesized audio, achieving state-of-the-art performance in evaluations.
* Critical observations: The performance of Make-An-Audio still depends partly on extensive data and complex model training. In addition, there is still room for improvement in expressing the emotions and rhythms of human conversation.
* Relevance: The Make-An-Audio system presented in the paper offers an effective approach to the limitations in naturalness and expressiveness currently faced by TTS systems.
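As a rough illustration of the latent-diffusion objective behind such systems, the sketch below compresses a spectrogram into a latent, adds noise at a random timestep, and trains a conditional network to predict that noise given a text embedding. Every module here is a stand-in assumption (the paper uses a spectrogram autoencoder and a U-Net denoiser with pretrained text encoders), not the authors' code.

<syntaxhighlight lang="python">
# Toy sketch of conditional latent diffusion as used (conceptually) in
# text-to-audio systems; every module is a stand-in, not the paper's code.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)  # cumulative product alpha-bar_t

encoder = nn.Linear(80, 32)        # stands in for the spectrogram autoencoder
text_enc = nn.Embedding(100, 32)   # stands in for a frozen pretrained text encoder
denoiser = nn.Sequential(nn.Linear(32 + 32 + 1, 128), nn.SiLU(), nn.Linear(128, 32))

mel = torch.randn(8, 80)           # batch of (pooled) spectrogram frames
tokens = torch.randint(0, 100, (8,))

z0 = encoder(mel)                                     # clean latent
t = torch.randint(0, T, (8,))
a = alphas_cum[t].unsqueeze(-1)
noise = torch.randn_like(z0)
zt = a.sqrt() * z0 + (1 - a).sqrt() * noise           # forward diffusion q(z_t | z_0)

cond = text_enc(tokens)                               # text conditioning
inp = torch.cat([zt, cond, t.float().unsqueeze(-1) / T], dim=-1)
loss = ((denoiser(inp) - noise) ** 2).mean()          # epsilon-prediction objective
loss.backward()
</syntaxhighlight>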