Advancements in Neural Network-Based TTS (2000s)

Introduction

Neural network-based text-to-speech (TTS) refers to speech synthesis systems in which neural networks learn the mapping from text to audio directly from data, rather than relying on the rule-based, concatenative, or HMM-based techniques that preceded them. This page outlines the historical context of these systems and their impact on speech synthesis through neural vocoders, end-to-end architectures, and generative modeling.

Historical Context

The history of neural network-based text-to-speech (TTS) can be traced back to the early days of artificial intelligence research. In the 1980s, researchers began to explore the use of neural networks to model the human speech production process. In the 1990s, Hidden Markov Models (HMMs) were introduced to TTS and brought significant improvements in speech synthesis. HMM-based systems allowed for better control of speech characteristics and were widely adopted for several years.

In the early 2000s, researchers started exploring the use of deep neural networks (DNNs) for speech synthesis. However, it wasn't until the introduction of generative adversarial networks (GANs) and autoregressive models that the quality of synthesized speech improved significantly. In recent years, the development of deep learning and artificial intelligence has led to a surge in research on neural network-based TTS.[1]

One of the key breakthroughs in neural network-based TTS came in 2016, when DeepMind introduced the WaveNet model. WaveNet generates raw audio waveforms sample by sample with a deep neural network, conditioned on linguistic features, and produced markedly more natural speech than the concatenative and parametric systems of the time.[2] This led to a significant improvement in the naturalness and expressiveness of synthesized speech.

Working Mechanism

The Voder is a manually operated speech synthesizer that recreates the physiological characteristics of the human voice. It works by breaking human speech into its acoustic components using a set of ten contiguous band-pass filters that cover the entire speech frequency range and are connected in parallel. The pass bands of the filters were chosen after a careful analysis of how the human ear interprets speech sounds. The initial sounds, produced by either an oscillator or a gas discharge tube, were passed through these filters, and their outputs were mixed, modulated, and amplified before being sent to a loudspeaker to produce electronic human speech. Potentiometers (devices that control how much electricity flows through a circuit), operated by finger keys, controlled the levels of the band-pass filter outputs.

Two basic sounds are used to create speech sounds: the buzz tone and the hissing noise. The buzz tone creates voiced vowels and nasal sounds, while the hissing noise creates voiceless fricative sounds. Pitch is controlled by a foot pedal, while the finger keys shape the buzz and hiss into vowels, consonants, and inflections. The Voder's filters divide speech sounds into their acoustic components, which are then recreated from the buzz and hiss sources.
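As an illustration of this source-plus-filter design, the following Python sketch implements a Voder-style synthesizer: a buzz (pulse-train) or hiss (noise) source is passed through a bank of parallel band-pass filters whose per-band gains play the role of the finger keys. The band edges and gain values here are illustrative assumptions, not the Voder's actual specifications.

    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 16000  # sample rate in Hz

    def buzz(f0, duration):
        """Voiced source: a pulse train at fundamental frequency f0 (the 'buzz')."""
        t = np.arange(int(SR * duration))
        return np.where(t % int(SR / f0) == 0, 1.0, 0.0)

    def hiss(duration):
        """Unvoiced source: white noise (the 'hiss')."""
        return 0.1 * np.random.randn(int(SR * duration))

    def voder_bank(source, gains, bands):
        """Mix the outputs of parallel band-pass filters, one gain per band,
        as the Voder operator's finger keys did."""
        out = np.zeros_like(source)
        for gain, (lo, hi) in zip(gains, bands):
            b, a = butter(2, [lo / (SR / 2), hi / (SR / 2)], btype="band")
            out += gain * lfilter(b, a, source)
        return out

    # Ten illustrative bands spanning roughly 100 Hz to 3.2 kHz.
    bands = [(100 * 2 ** (i / 2), 100 * 2 ** ((i + 1) / 2)) for i in range(10)]
    # An 'ah'-like vowel: voiced buzz with most energy in the lower bands.
    gains = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1, 0.05, 0.02, 0.01, 0.01]
    vowel = voder_bank(buzz(f0=110, duration=0.5), gains, bands)

Switching the source from buzz to hiss produces the voiced/unvoiced distinction described above.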

Key Innovations

The Voder was among the first devices to allow manual control of speech synthesis. It was a pioneer in electronic sound generation, breaking down human speech into its fundamental acoustic components and reproducing these patterns electronically, a significant advancement in the early stages of electronic speech synthesis. Moreover, the Voder was the first successful attempt at recreating an important physiological characteristic of the human voice: the ability to create voiced and unvoiced sounds.

To improve the operator's performance, the Voder had a recording and playback feature that allowed operators to objectively analyze their areas of improvement. This feature is similar to modern-day contact centers that use call recording and analysis to improve agent performance.

Impact

Neural Network-Based Vocoders

The advent of neural network-based Text-to-Speech (TTS) systems has significantly impacted the development and capabilities of vocoders in speech synthesis.

  • Early neural vocoders, such as WaveNet, Char2Wav, and WaveRNN, take linguistic features directly as input and generate waveforms from them.
  • Subsequent works, such as those by Prenger et al., Kim et al., Kumar et al., and Yamamoto et al., take mel-spectrograms as input to generate waveforms.
  • Because speech waveforms are very long sequences, generative models such as normalizing flows, GANs, VAEs, and DDPMs (denoising diffusion probabilistic models) are used to make waveform generation tractable; the sketch below illustrates why.
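To make the sequence-length problem concrete, here is a minimal Python sketch of sample-by-sample autoregressive waveform generation in the style of WaveNet or WaveRNN. The model object and its next_sample_distribution method are hypothetical stand-ins for a trained network, not a real library API; the point is that one second of audio costs tens of thousands of sequential network calls.

    import numpy as np

    def generate_autoregressive(model, mel, sample_rate=22050, seconds=1.0):
        """Generate a waveform one quantized sample at a time, conditioned on a
        mel-spectrogram. `model.next_sample_distribution` is a hypothetical
        method returning a probability distribution over quantized amplitude
        levels (e.g. 256 mu-law bins), given all samples generated so far."""
        n_samples = int(sample_rate * seconds)  # 22,050 sequential steps per second
        waveform = np.zeros(n_samples, dtype=np.int64)
        for t in range(1, n_samples):
            probs = model.next_sample_distribution(waveform[:t], mel)  # one network call
            waveform[t] = np.random.choice(len(probs), p=probs)        # sample next level
        return waveform  # quantized levels; a real system decodes these to float audio

Flow-, GAN-, and diffusion-based vocoders replace this sequential loop with architectures that emit all samples (or large blocks of them) in parallel, trading the strictly autoregressive factorization for much faster synthesis.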

Enhanced quality of speech synthesis

Neural network-based vocoders such as WaveNet have significantly improved the quality of synthesized speech, providing more natural and expressive voice output. This has been pivotal in reducing the robotic tone that often occurred in earlier TTS systems. Using neural network techniques such as Tacotron 2 and WaveNet, noisy speech datasets without transcripts can be processed more precisely, and models can be built that generate audio in the voices of speakers not present in the original training data.[1]
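As a rough illustration of how such a two-stage pipeline fits together, here is a minimal Python sketch that wires a Tacotron 2-style acoustic model to a neural vocoder. Both model objects and their methods (text_to_mel, mel_to_waveform) are hypothetical placeholders for trained networks, not a real library API.

    import numpy as np

    class TwoStageTTS:
        """Sketch of a Tacotron 2 + neural vocoder pipeline. The two models
        passed in are hypothetical stand-ins for trained networks."""

        def __init__(self, acoustic_model, vocoder):
            self.acoustic_model = acoustic_model  # text -> mel-spectrogram
            self.vocoder = vocoder                # mel-spectrogram -> waveform

        def synthesize(self, text: str) -> np.ndarray:
            # Stage 1: predict a mel-spectrogram from the input characters.
            mel = self.acoustic_model.text_to_mel(text)  # shape: (n_mels, n_frames)
            # Stage 2: the neural vocoder renders the spectrogram as raw audio.
            return self.vocoder.mel_to_waveform(mel)     # shape: (n_samples,)

The mel-spectrogram serves as the shared intermediate representation: the acoustic model handles text-to-spectrogram alignment and prosody, while the vocoder specializes in turning spectrogram frames into raw samples.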

End-to-End Systems

End-to-end (E2E) neural TTS systems can convert text directly to speech without requiring hand-crafted intermediate representations. This enables systems to learn complex mappings from input to output and often simplifies the traditional multi-stage processing pipeline. Tacotron, an end-to-end generative text-to-speech model, can synthesize speech directly from characters, which significantly reduces the need for acoustic and other domain expertise.[3]
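To show what "directly from characters" means in practice, here is a small Python sketch of the character front-end such a model needs: raw text is mapped to integer IDs over a fixed character vocabulary, and those IDs are the model's only linguistic input. The vocabulary and the encode function are illustrative assumptions, not Tacotron's actual preprocessing code.

    # Illustrative character vocabulary: Tacotron-style models consume raw
    # characters, so this lookup table is the entire linguistic front-end.
    VOCAB = "_~ abcdefghijklmnopqrstuvwxyz.,!?'"  # '_' = pad, '~' = end-of-sequence
    CHAR_TO_ID = {ch: i for i, ch in enumerate(VOCAB)}

    def encode(text: str) -> list[int]:
        """Map text to integer IDs; unknown characters are silently dropped."""
        ids = [CHAR_TO_ID[ch] for ch in text.lower() if ch in CHAR_TO_ID]
        return ids + [CHAR_TO_ID["~"]]  # append end-of-sequence marker

    print(encode("Hello, world!"))  # [10, 7, 14, 14, 17, 30, 2, 25, ...]

No phoneme dictionary or hand-built text-analysis pipeline is required; the network learns pronunciation and phrasing from paired (text, audio) examples.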

Generative Modeling[4]

Future research

References

  1. Xu Tan, Tao Qin, Frank Soong, Tie-Yan Liu. "A Survey on Neural Speech Synthesis." arXiv:2106.15561 (2021).
  2. Aäron van den Oord, Sander Dieleman, Heiga Zen, et al. "WaveNet: A Generative Model for Raw Audio." arXiv:1609.03499 (2016).
  3. Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, et al. "Tacotron: Towards End-to-End Speech Synthesis." arXiv:1703.10135 (2017). https://arxiv.org/pdf/1703.10135.pdf
  4. Takuhiro Kaneko, Hirokazu Kameoka, et al. "Generative Adversarial Network-based Postfilter for Statistical Parametric Speech Synthesis." ICASSP 2017. http://www.kecl.ntt.co.jp/people/kameoka.hirokazu/publications/Kaneko2017ICASSP03_published.pdf

Team Members

Qing Li

Lifan Qu

Yi Lei