Advancements in Neural Network-Based TTS (2000s)

Introduction

Historical Context

The history of neural network-based text-to-speech (TTS) can be traced back to the early days of artificial intelligence research. In the 1980s, researchers began to explore the use of neural networks to model the human speech production process. In the 1990s, Hidden Markov Models (HMMs) were introduced to TTS, which brought significant improvements in speech synthesis. HMM-based systems allowed for better control of speech characteristics and were widely adopted for several years.

In the early 2000s, researchers started exploring the use of deep neural networks (DNNs) for speech synthesis. However, it wasn’t until the introduction of generative adversarial networks (GANs) and autoregressive models that the quality of synthesized speech improved significantly. In recent years, the development of deep learning and artificial intelligence has led to a surge in research on neural network-based TTS.[1]

One of the key breakthroughs in neural network-based TTS came in 2016 with the introduction of the WaveNet model by DeepMind. WaveNet was the first neural network-based model to generate high-quality speech waveforms directly at the sample level, conditioned on linguistic features rather than relying on a hand-designed vocoder.[2] This led to a significant improvement in the naturalness and expressiveness of synthesized speech.

Key Innovations

Deep neural network-based text-to-speech (TTS) has undergone a remarkable transformation, marking a shift from traditional rule-based and statistical methods towards neural network-driven solutions. Researchers have continually improved the quality and efficiency of TTS systems during this period. In the 2000s, DNN-based TTS models began to gain prominence, paving the way for more natural and expressive speech synthesis. Starting with the foundational models, this section introduces the key mechanisms behind these advances.[3]

DNN-Based Acoustic Modeling and Vocoder

Modeling

Tacotron: Tacotron is an end-to-end generative Text-to-Speech (TTS) model that directly synthesizes speech from input characters, utilizing a sequence-to-sequence (seq2seq) architecture with attention. It avoids the need for phoneme-level alignment.[4]

Features:

  • Minimal feature engineering: Tacotron eliminates the need for laborious feature engineering, avoiding complex heuristics and design choices present in traditional TTS systems.
  • Rich conditioning: Tacotron supports conditioning on attributes such as speaker, language, or sentiment, offering flexibility because such conditioning can be applied at the very beginning of the model.
  • Robustness: As an end-to-end model, Tacotron exhibits resilience to error accumulation and adapts to varied pronunciations and speaking styles.
  • Scalability: Tacotron can efficiently scale to vast amounts of real-world data without requiring phoneme-level alignment.
  • Performance: Tacotron achieves high speech synthesis quality, surpassing a production parametric system in mean opinion score (MOS); because it generates speech at the frame level, it is also substantially faster than sample-level autoregressive methods.
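
To make the seq2seq-with-attention idea concrete, the following is a deliberately tiny Python (PyTorch) sketch of a character encoder, a dot-product attention step, and an autoregressive mel-frame decoder. It is not the published Tacotron architecture, which adds CBHG encoder modules, pre-nets, and a reduction factor; all module names and sizes here are illustrative assumptions.

# Minimal sketch of the seq2seq-with-attention idea behind Tacotron (illustrative only;
# the real model uses CBHG encoders, pre-nets, reduction factors, etc.).
import torch
import torch.nn as nn

class TinyTacotron(nn.Module):
    def __init__(self, n_chars=64, emb=128, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)          # character embeddings
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, hidden)            # projection for dot-product attention
        self.decoder = nn.GRUCell(n_mels + hidden, hidden)
        self.mel_out = nn.Linear(hidden, n_mels)         # one mel frame per decoder step

    def forward(self, char_ids, n_frames):
        enc_out, _ = self.encoder(self.embed(char_ids))  # (B, T_text, H)
        B = char_ids.size(0)
        h = enc_out.new_zeros(B, enc_out.size(2))        # decoder state
        frame = enc_out.new_zeros(B, self.mel_out.out_features)
        mels = []
        for _ in range(n_frames):                        # autoregressive frame loop
            # Attention weights over encoder states, conditioned on the decoder state.
            scores = torch.bmm(enc_out, self.attn(h).unsqueeze(2)).squeeze(2)
            context = torch.bmm(scores.softmax(dim=1).unsqueeze(1), enc_out).squeeze(1)
            h = self.decoder(torch.cat([frame, context], dim=1), h)
            frame = self.mel_out(h)                      # predict the next mel frame
            mels.append(frame)
        return torch.stack(mels, dim=1)                  # (B, n_frames, n_mels)

# Usage: predict 20 mel frames for a batch of two 11-character inputs.
mels = TinyTacotron()(torch.randint(0, 64, (2, 11)), n_frames=20)
print(mels.shape)  # torch.Size([2, 20, 80])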

Tacotron 2: Tacotron 2 is a neural network architecture for direct text-to-speech synthesis. It consists of two core elements: a feature prediction network and a modified WaveNet vocoder.

Features:

  • Feature Prediction Network: Takes text as input, converting it into mel-scale spectrograms representing speech in the frequency domain. Utilizes convolutional layers, LSTMs, and attention mechanisms to generate mel spectrograms.
  • WaveNet Vocoder: Receives the mel spectrograms from the feature prediction network and synthesizes waveform samples. It employs dilated convolutional layers and a mixture of logistic distributions to produce speech closely resembling human speech.
  • Advantages: Tacotron 2 combines the feature prediction network and the WaveNet vocoder to directly convert text into speech. This streamlines the speech synthesis process, eliminating the need for complex feature engineering and delivering high-quality, human-like speech.
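
To make the division of labour concrete, the sketch below wires two stubbed-out components together in the order Tacotron 2 uses them: a feature prediction network mapping character ids to mel frames, followed by a vocoder mapping mel frames to waveform samples. The class names, shapes, and the one-frame-per-character simplification are assumptions made purely for illustration.

# Sketch of the Tacotron 2 two-stage pipeline: text -> mel spectrogram -> waveform.
# Component internals are stubbed out; only the interfaces between the stages matter here.
import torch
import torch.nn as nn

class FeaturePredictionNet(nn.Module):
    """Stand-in for the seq2seq network that maps character ids to mel frames."""
    def __init__(self, n_chars=64, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(n_chars, n_mels)

    def forward(self, char_ids):
        # A real model predicts a variable number of frames via attention and a stop token;
        # here one frame is emitted per character just to keep the tensor shapes visible.
        return self.embed(char_ids)                      # (B, T_frames, n_mels)

class Vocoder(nn.Module):
    """Stand-in for the modified WaveNet that turns mel frames into waveform samples."""
    def __init__(self, n_mels=80, samples_per_frame=256):
        super().__init__()
        self.upsample = nn.Linear(n_mels, samples_per_frame)

    def forward(self, mels):
        B, T, _ = mels.shape
        return self.upsample(mels).reshape(B, T * self.upsample.out_features)  # (B, n_samples)

class Tacotron2Pipeline(nn.Module):
    def __init__(self):
        super().__init__()
        self.feature_net = FeaturePredictionNet()
        self.vocoder = Vocoder()

    def forward(self, char_ids):
        mels = self.feature_net(char_ids)                # stage 1: text -> mel spectrogram
        return self.vocoder(mels)                        # stage 2: mel spectrogram -> audio

audio = Tacotron2Pipeline()(torch.randint(0, 64, (1, 12)))
print(audio.shape)  # torch.Size([1, 3072])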

FastSpeech: FastSpeech is a neural text-to-speech (TTS) system that addresses the challenges of slow inference speed, lack of robustness, and lack of controllability in traditional TTS models. It uses a feed-forward network based on Transformer to generate mel-spectrograms in parallel, allowing for faster synthesis.[5]

Features:

  • Remarkable Speed: FastSpeech accelerates synthesis with a 270x speedup in mel-spectrogram generation and a 38x speedup in end-to-end speech synthesis when compared to autoregressive Transformer TTS models.
  • Enhanced Robustness: FastSpeech mitigates error propagation and alignment issues often found in autoregressive models. A phoneme duration predictor ensures precise alignment, resulting in robust synthesized speech with far fewer skipped or repeated words.
  • Precise Control: FastSpeech offers fine-grained control over voice speed and prosody. The length regulator empowers users to adjust phoneme durations and prosody, allowing for nuanced control.
  • Comparable Quality: FastSpeech delivers speech quality nearly indistinguishable from autoregressive Transformer TTS models, while providing superior speed, robustness, and control.
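
The length regulator mentioned above is simple enough to sketch directly: given one hidden vector per phoneme and a predicted integer duration for each, it repeats every vector that many times so that the expanded sequence matches the length of the target mel spectrogram. The function below is an illustrative re-implementation rather than the released code; scaling the durations by a factor is what provides the voice-speed control.

# Illustrative length regulator in the spirit of FastSpeech: expand phoneme-level
# hidden states to frame level by repeating each one according to its predicted duration.
import torch

def length_regulator(phoneme_hidden, durations, speed=1.0):
    """phoneme_hidden: (T_phonemes, H); durations: (T_phonemes,) predicted frame counts."""
    # Scaling all durations by 1/speed slows the speech down (speed < 1.0) or speeds it up.
    scaled = torch.clamp((durations.float() / speed).round().long(), min=1)
    # repeat_interleave copies each phoneme vector `scaled[i]` times along the time axis.
    return torch.repeat_interleave(phoneme_hidden, scaled, dim=0)

hidden = torch.randn(4, 8)                 # 4 phonemes, hidden size 8
durations = torch.tensor([3, 5, 2, 4])     # frame counts from the duration predictor
print(length_regulator(hidden, durations).shape)            # torch.Size([14, 8]) = 3+5+2+4 frames
print(length_regulator(hidden, durations, speed=0.5).shape) # torch.Size([28, 8]) = doubled durations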

Transformer: The Transformer is a novel neural network architecture introduced by Vaswani et al. in 2017. It stands out for its exclusive reliance on attention mechanisms, foregoing recurrent connections and convolutions. In natural language processing, particularly neural machine translation, it has achieved remarkable success. The Transformer comprises an encoder and a decoder, both structured as stacks of identical blocks. It employs multi-head self-attention to model dependencies between input and output sequences.[6]

Features:

  • Multi-Head Attention Efficiency: Replacing Tacotron 2's recurrent layers with multi-head attention allows the encoder and decoder hidden states to be constructed in parallel, speeding up training by about 4.25 times and handling long-range dependencies more effectively.
  • WaveNet Vocoder's Role: Within the Transformer TTS network, a WaveNet vocoder synthesizes high-quality audio from the predicted mel spectrograms, closely approaching human recordings on the evaluated datasets.
  • Encoder-Decoder Structure: The encoder and decoder are each built from stacks of identical blocks that combine multi-head self-attention with feed-forward layers, so dependencies between input and output positions are modeled entirely through attention.
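
The multi-head self-attention referred to above is built from scaled dot-product attention. The minimal sketch below shows that core operation for a single head; the learned projections, head splitting, masking, and positional encodings of the full Transformer are omitted.

# Minimal scaled dot-product attention, the core operation inside the Transformer's
# multi-head self-attention (projections, head splitting, masks, and positional
# encodings are omitted for brevity).
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (T, d) matrices of queries, keys, and values."""
    scores = Q @ K.T / math.sqrt(Q.size(-1))   # similarity of every query to every key
    weights = torch.softmax(scores, dim=-1)    # each row is an attention distribution
    return weights @ V                         # weighted sum of values per query

# Self-attention: queries, keys, and values all come from the same sequence,
# so every position can attend to every other position in a single step.
x = torch.randn(6, 16)                         # 6 tokens, model dimension 16
print(scaled_dot_product_attention(x, x, x).shape)  # torch.Size([6, 16])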

Vocoder

WaveNet: WaveNet serves as a vital component in Tacotron 2 for speech synthesis. It transforms mel-scale spectrograms, predicted by the feature prediction network, into time-domain waveform samples, resulting in high-quality audio waveforms. In this architecture, WaveNet is adapted to function as a vocoder. It takes the predicted mel spectrograms and uses dilated convolutional layers organized into dilation cycles.

These layers' dilation rates determine their receptive field sizes, allowing WaveNet to capture long-term dependencies across frames.

In Tacotron 2, WaveNet is conditioned on the mel spectrograms generated by the feature prediction network. This conditioning simplifies the speech synthesis process and produces high-quality audio output.[7]
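
The role of the dilation cycles is easiest to see in code. The sketch below stacks causal 1-D convolutions whose dilation doubles at every layer and reports the resulting receptive field; the gated activations, residual and skip connections, and mel-spectrogram conditioning of the real WaveNet are omitted, so this is only a shape-level illustration with arbitrarily chosen sizes.

# Sketch of a stack of dilated causal convolutions, the mechanism WaveNet uses to
# cover a long receptive field with few layers (gating, residual/skip connections,
# and conditioning on mel spectrograms are omitted).
import torch
import torch.nn as nn

class DilatedCausalStack(nn.Module):
    def __init__(self, channels=32, dilations=(1, 2, 4, 8, 16, 32)):
        super().__init__()
        self.dilations = dilations
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=d) for d in dilations
        )

    def forward(self, x):                       # x: (B, channels, T)
        for conv, d in zip(self.convs, self.dilations):
            # Left-pad by the dilation so the convolution stays causal: the output at
            # time t only depends on inputs at times <= t.
            x = torch.relu(conv(nn.functional.pad(x, (d, 0))))
        return x

    def receptive_field(self):
        # With kernel size 2, each layer adds `dilation` samples of context.
        return 1 + sum(self.dilations)

net = DilatedCausalStack()
print(net.receptive_field())                    # 64 samples for one dilation cycle
print(net(torch.randn(1, 32, 200)).shape)       # torch.Size([1, 32, 200])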

Parallel WaveGAN: Parallel WaveGAN is a waveform generator based on generative adversarial networks (GANs). It trains a non-autoregressive WaveNet by jointly optimizing multi-resolution spectrogram and adversarial loss functions for high-quality speech waveform synthesis. Unlike earlier parallel models, it does not require complex density distillation, offering fast, efficient, small-footprint waveform generation with competitive quality, suitable for real-time applications.[8]
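
One distinctive ingredient is the multi-resolution spectrogram loss that Parallel WaveGAN combines with its adversarial loss. The hedged sketch below shows one plausible form of such a loss, comparing generated and reference waveforms at several STFT resolutions; the specific resolutions and loss terms are illustrative assumptions rather than the paper's exact configuration.

# Hedged sketch of a multi-resolution STFT loss in the spirit of Parallel WaveGAN:
# compare generated and reference waveforms at several spectrogram resolutions and
# average the mismatches (the full model adds an adversarial loss on top of this).
import torch

def stft_magnitude(wav, n_fft, hop):
    window = torch.hann_window(n_fft)
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop, window=window, return_complex=True)
    return spec.abs().clamp(min=1e-7)            # magnitude spectrogram, (B, freq, frames)

def multi_resolution_stft_loss(generated, reference,
                               resolutions=((512, 128), (1024, 256), (2048, 512))):
    loss = 0.0
    for n_fft, hop in resolutions:               # each (n_fft, hop) pair is one resolution
        g = stft_magnitude(generated, n_fft, hop)
        r = stft_magnitude(reference, n_fft, hop)
        sc = torch.norm(r - g) / torch.norm(r)   # spectral convergence term
        mag = torch.mean(torch.abs(torch.log(r) - torch.log(g)))  # log-magnitude term
        loss = loss + sc + mag
    return loss / len(resolutions)

fake = torch.randn(2, 8000)                      # generator output (batch of waveforms)
real = torch.randn(2, 8000)                      # ground-truth waveforms
print(multi_resolution_stft_loss(fake, real))    # scalar tensor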

Articulatory Features-Based TTS

Articulatory feature-based Text-to-Speech (TTS) is a concept that involves using articulatory features, which represent the movements and positions of the speech articulators (such as the tongue, lips, and jaw), as the basis for synthesizing speech. This approach aims to capture the detailed articulatory information present in the speech signal, allowing for more natural and expressive speech synthesis.

Method:

  • Data Collection: High-quality speech data is collected, often using techniques such as electromagnetic articulography (EMA) or magnetic resonance imaging (MRI), to capture the articulatory movements during speech production.
  • Articulatory Feature Extraction: The collected data is processed to extract articulatory features, which represent the relevant articulatory movements and positions. This can be done using algorithms such as the ACIDA algorithm, which combines statistical analysis and geometric modeling to extract articulatory features from the data.
  • Articulatory Feature Mapping: The extracted articulatory features are then mapped to the corresponding linguistic units, such as phonemes or graphemes, in the text. This mapping can be done using statistical models or machine learning techniques.
  • Synthesis: The mapped articulatory features are used to drive a speech synthesis system, which generates the corresponding speech waveform. This can be achieved using techniques such as concatenative synthesis, where pre-recorded speech units are concatenated based on the articulatory features, or parametric synthesis, where the articulatory features are used to control a speech synthesis model (a minimal sketch of this step follows this list).
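
As an illustration of the parametric synthesis step just described, the sketch below trains a small regression network that maps articulatory feature vectors (for example, EMA sensor coordinates) to acoustic features that a vocoder could consume. The feature dimensions, network size, and synthetic data are placeholders and do not correspond to any published articulatory TTS system.

# Hedged sketch of parametric, articulatory-feature-driven synthesis: a small network
# regresses acoustic features (here, mel-spectrogram-like frames) from articulatory
# features (e.g. EMA sensor coordinates). Dimensions and data are synthetic placeholders.
import torch
import torch.nn as nn

n_articulatory = 14    # e.g. x/y coordinates of 7 EMA sensors (assumed layout)
n_acoustic = 80        # e.g. mel-spectrogram bins consumed by a vocoder

mapper = nn.Sequential(
    nn.Linear(n_articulatory, 128), nn.ReLU(),
    nn.Linear(128, n_acoustic),
)
optimizer = torch.optim.Adam(mapper.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training data: frame-aligned articulatory and acoustic feature pairs.
articulatory = torch.randn(1000, n_articulatory)
acoustic = torch.randn(1000, n_acoustic)

for step in range(200):                          # tiny training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(mapper(articulatory), acoustic)
    loss.backward()
    optimizer.step()

# At synthesis time, predicted acoustic frames would be handed to a vocoder.
print(mapper(articulatory[:5]).shape)            # torch.Size([5, 80])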


Algorithm:

  • The ACIDA algorithm works by extracting articulatory features from speech data using statistical and geometric techniques. It involves several steps, including data collection and preprocessing, articulatory feature extraction, and articulatory feature analysis.
  • In the data collection and preprocessing step, the MOCHA-TIMIT database is used, and the data is preprocessed to remove noise and align the audio and articulatory signals.
  • The articulatory feature extraction step involves the ACIDA algorithm itself. This algorithm combines statistical analysis and geometric modeling to extract articulatory features from the data. It identifies critical coordinates and their phonetic analysis, providing insights into the articulatory movements involved in speech production.
  • The extracted articulatory features are then analyzed and compared with expected critical coordinates, demonstrating good agreement. This analysis helps validate the effectiveness of the ACIDA algorithm in capturing relevant articulatory information.[9]
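
The published ACIDA algorithm is more elaborate than can be reproduced here, but the flavour of its statistical step can be sketched: for each phone, the distribution of each articulator coordinate is compared with its global distribution, and coordinates that deviate strongly are flagged as critical for that phone. The Gaussian divergence measure and the threshold below are illustrative choices, not those of the original thesis.

# Hedged sketch of statistically identifying "critical" articulator coordinates per phone:
# coordinates whose phone-conditioned distribution deviates strongly from the global
# distribution are flagged. The Gaussian KL measure and threshold are illustrative only.
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2):
    """KL divergence between 1-D Gaussians N(mu1, var1) and N(mu2, var2), per coordinate."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def critical_coordinates(frames, phone_labels, phone, threshold=0.5):
    """frames: (N, D) articulator coordinates; phone_labels: (N,) phone label per frame."""
    global_mu, global_var = frames.mean(axis=0), frames.var(axis=0) + 1e-6
    sel = frames[phone_labels == phone]
    phone_mu, phone_var = sel.mean(axis=0), sel.var(axis=0) + 1e-6
    kl = gaussian_kl(phone_mu, phone_var, global_mu, global_var)
    return np.nonzero(kl > threshold)[0]         # indices of coordinates deemed critical

# Synthetic data: coordinate 2 is strongly displaced whenever phone "b" occurs,
# so it should be identified as critical for that phone.
rng = np.random.default_rng(0)
frames = rng.normal(size=(2000, 6))
labels = rng.choice(["a", "b", "c"], size=2000)
frames[labels == "b", 2] += 3.0
print(critical_coordinates(frames, labels, "b"))  # [2]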

Impact

Neural Network-Based Vocoders

The advent of neural network-based Text-to-Speech (TTS) systems has significantly impacted the development and capabilities of vocoders in speech synthesis.

  • Early neural vocoders, such as WaveNet, Char2Wav, and WaveRNN, directly take linguistic features as input to generate waveforms.
  • Subsequent works, such as those by Prenger et al., Kim et al., Kumar et al., and Yamamoto et al., take mel-spectrograms as input to generate waveforms.
  • Because raw speech waveforms are very long, powerful generative models such as normalizing flows, GANs, VAEs, and DDPMs (denoising diffusion probabilistic models) are used for waveform generation.[10]

Enhanced Quality of Speech Synthesis

Neural network-based vocoders, such as WaveNet, have significantly improved the quality of synthesized speech, providing more natural and expressive voice outputs. This has been pivotal in reducing the robotic tone that often occurred in earlier TTS systems. Using neural network techniques such as Tacotron 2 and WaveNet, transcript-free, noisy speech datasets can be processed more precisely, making it possible to build models capable of generating audio in the voices of speakers who are not present in the original data.[1]

Prosody Modeling

The emergence of neural network models in speech synthesis has dramatically influenced prosody modeling, which involves predicting and generating prosodic features such as pitch, duration, and energy. These features are crucial for producing speech with natural rhythm and expressiveness. Neural models enable the synthesis of expressive and emotional speech by learning and generating varied prosody, which is essential for conveying different emotions and speaking styles. Neural network-based end-to-end text-to-speech (TTS) also facilitates controllable synthesis systems in which prosody can be manipulated to generate speech with the desired pitch, stress, and rhythm.[11]
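
In practice, prosody modeling often takes the form of small predictor heads that map encoder hidden states to per-phoneme pitch, energy, and duration values, which can then be overridden to control the output. The sketch below shows one such head; its layout follows the general variance-predictor pattern rather than any specific published model, and all sizes are illustrative.

# Illustrative prosody predictor head: maps phoneme-level hidden states to scalar
# pitch, energy, and duration values that downstream synthesis can consume or that
# a user can override for controllable prosody.
import torch
import torch.nn as nn

class ProsodyPredictor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.pitch = nn.Linear(hidden, 1)
        self.energy = nn.Linear(hidden, 1)
        self.duration = nn.Linear(hidden, 1)

    def forward(self, h):                        # h: (B, T_phonemes, hidden)
        z = self.body(h)
        return {
            "pitch": self.pitch(z).squeeze(-1),        # e.g. log-F0 per phoneme
            "energy": self.energy(z).squeeze(-1),      # e.g. energy per phoneme
            "duration": self.duration(z).squeeze(-1),  # e.g. log frame count per phoneme
        }

pred = ProsodyPredictor()(torch.randn(2, 7, 128))
print({k: v.shape for k, v in pred.items()})     # each value: torch.Size([2, 7])
# Controllability: scaling the predicted pitch or duration before synthesis changes
# intonation or speaking rate without retraining the model.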

End-to-End Systems

Neural network-based end-to-end (E2E) systems can convert text directly to speech without requiring intermediate representations. They learn complex mappings from input to output and often simplify the traditional multi-stage processing pipeline. Tacotron, an end-to-end generative text-to-speech model, synthesizes speech directly from characters, which significantly reduces the need for acoustic and other domain expertise.[12] This has also enabled speech that can be adjusted more dynamically to various contexts and emotional tones, enhancing applications such as virtual assistants and conversational agents.

Generative Modeling

Neural networks have enabled the development of generative models that can produce high-quality, natural-sounding speech, improving upon traditional concatenative and parametric methods. Such models can generate speech with varied emotional content and can be trained to mimic different speakers, accents, and styles, providing versatility in speech synthesis applications. Generative models are also advantageous when training data is limited, enabling the creation of voices for speakers with few available recordings.[13]

Future research

References

  1. Tan, X., Qin, T., Soong, F., & Liu, T.-Y. (2021). A Survey on Neural Speech Synthesis. arXiv preprint arXiv:2106.15561.
  2. van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., & Kavukcuoglu, K. (2016). WaveNet: A Generative Model for Raw Audio. arXiv preprint arXiv:1609.03499.
  3. Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., ... & Kingsbury, B. (2012). Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Processing Magazine, 29(6), 82-97. doi:10.1109/MSP.2012.2205597.
  4. Wang, Y., Skerry-Ryan, R. J., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., ... & Saurous, R. A. (2017). Tacotron: Towards End-to-End Speech Synthesis. arXiv preprint arXiv:1703.10135.
  5. Ren, Y., Ruan, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., & Liu, T.-Y. (2019). FastSpeech: Fast, Robust and Controllable Text to Speech. arXiv preprint arXiv:1905.09263.
  6. Li, N., Liu, S., Liu, Y., Zhao, S., & Liu, M. (2019). Neural Speech Synthesis with Transformer Network. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 6706-6713. https://doi.org/10.1609/aaai.v33i01.33016706
  7. Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., ... & Wu, Y. (2017). Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions. arXiv preprint arXiv:1712.05884.
  8. Yamamoto, R., Song, E., & Kim, J.-M. (2020). Parallel WaveGAN: A Fast Waveform Generation Model Based on Generative Adversarial Networks with Multi-Resolution Spectrogram. arXiv preprint arXiv:1910.11480.
  9. Singampalli, V. D. (2010). Statistical Identification of Articulatory Roles in Speech Production (Doctoral dissertation, Order No. 10131268). ProQuest Dissertations & Theses A&I (1810640121). Retrieved from http://server.proxy-ub.rug.nl/login?url=https://www.proquest.com/dissertations-theses/statistical-identification-articulatory-roles/docview/1810640121/se-2
  10. Tan, X., Qin, T., Soong, F., & Liu, T.-Y. (2021). A Survey on Neural Speech Synthesis. arXiv preprint arXiv:2106.15561. https://arxiv.org/pdf/2106.15561.pdf
  11. Ren, Y., Ruan, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., & Liu, T.-Y. (2019). FastSpeech: Fast, Robust and Controllable Text to Speech. arXiv preprint arXiv:1905.09263. https://arxiv.org/abs/1905.09263
  12. Wang, Y., Skerry-Ryan, R. J., et al. (2017). Tacotron: Towards End-to-End Speech Synthesis. arXiv preprint arXiv:1703.10135. https://arxiv.org/pdf/1703.10135.pdf
  13. Kaneko, T., Kameoka, H., Hojo, N., Ijima, Y., Hiramatsu, K., & Kashino, K. Generative Adversarial Network-Based Postfilter for Statistical Parametric Speech Synthesis. NTT Corporation, Japan.

Team Members

Qing Li

Lifan Qu

Yi Lei