Vocoder Development


Introduction

The term "vocoder", a blend of "voice" and "coder", first emerged in the 1930s and was initially conceived for telecommunication purposes. In the 1950s, Lincoln Laboratory conducted research on detecting pitch in speech, which subsequently influenced the development of voice coders, commonly known as vocoders. These devices are designed to reduce the bandwidth required for transmitting speech. This reduction offers two advantages: it lowers the cost of transmitting and receiving speech, and it improves the potential for maintaining privacy.[1]

Initially, "vocoder" described a device designed to compress speech for efficient transmission over telephone lines. The idea was to split speech into frequency bands using a bank of filters, transmit only a compact description of each band, and reconstruct the signal at the receiving end. The goal was to save bandwidth, but in practice early vocoders struggled to preserve the natural quality of speech. Moreover, the original vocoder transmitted only the loudness (amplitude envelope) of each band, discarding the phase information that contributes to sound quality.[2]
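
To make the idea concrete, the following is a minimal sketch of a channel-style vocoder in Python, using numpy and scipy: speech is split into frequency bands, only each band's slowly varying amplitude envelope is kept, and the signal is resynthesized by imposing those envelopes on a broadband carrier. The band count, filter orders, and envelope cutoff are illustrative assumptions, not a reconstruction of any historical device.

```python
# Minimal channel-vocoder sketch (illustrative parameter choices throughout).
import numpy as np
from scipy.signal import butter, sosfilt

def band_envelopes(x, fs, edges):
    """Filter x into bands and return each band's amplitude envelope."""
    envs, sos_list = [], []
    env_sos = butter(2, 50, btype="low", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, x)
        # Rectify and smooth: this slowly varying envelope is the only
        # per-band information an early channel vocoder transmitted.
        envs.append(sosfilt(env_sos, np.abs(band)))
        sos_list.append(sos)
    return envs, sos_list

def resynthesize(envs, sos_list, carrier):
    """Modulate a carrier with the stored envelopes, band by band."""
    y = np.zeros_like(carrier)
    for env, sos in zip(envs, sos_list):
        y += env * sosfilt(sos, carrier)
    return y

fs = 16000
t = np.arange(fs) / fs
speech = np.random.randn(fs)                    # stand-in for real speech
carrier = np.sign(np.sin(2 * np.pi * 110 * t))  # buzzy pulse-like carrier
edges = np.geomspace(100, 4000, 9)              # 8 logarithmically spaced bands
envs, sos_list = band_envelopes(speech, fs, edges)
out = resynthesize(envs, sos_list, carrier)
```

Because only the low-rate envelopes need to be transmitted, the scheme saves bandwidth; the intelligibility-versus-quality trade-off described above follows directly from discarding everything else about each band.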

As technology advanced, the phase vocoder emerged, preserving both the magnitude (loudness) and the phase of each frequency band, and with it the sound quality of speech. The phase vocoder is a significant advancement in vocoder technology and plays a pivotal role in modern speech synthesis and audio signal processing. It was developed to address critical limitations of early vocoders and changed the way audio signals are processed and manipulated.[2]
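
A classic application of the phase vocoder is time-scale modification: analyze the signal with a short-time Fourier transform, track each bin's magnitude and phase, and re-space the frames in time while advancing the phases consistently. The sketch below is a minimal numpy implementation under assumed parameters (Hann window, 1024-point FFT, 75% overlap); production implementations add refinements such as phase locking.

```python
# Minimal phase-vocoder time-stretch sketch: rate > 1 shortens the signal,
# rate < 1 lengthens it while keeping pitch roughly unchanged.
import numpy as np

def phase_vocoder_stretch(x, rate, n_fft=1024, hop=256):
    window = np.hanning(n_fft)
    # Analysis STFT frames at the original hop size.
    n_frames = 1 + (len(x) - n_fft) // hop
    stft = np.stack([
        np.fft.rfft(window * x[i * hop : i * hop + n_fft])
        for i in range(n_frames)
    ])
    # Expected per-hop phase advance for each bin's centre frequency.
    omega = 2 * np.pi * np.arange(n_fft // 2 + 1) * hop / n_fft
    # Read analysis frames at positions 0, rate, 2*rate, ...
    times = np.arange(0, n_frames - 1, rate)
    phase = np.angle(stft[0])
    out = np.zeros(len(times) * hop + n_fft)
    for k, t in enumerate(times):
        i = int(t)
        mag = np.abs(stft[i])
        # Instantaneous frequency: deviation of the measured phase step
        # from the expected step, wrapped back into [-pi, pi].
        dphi = np.angle(stft[i + 1]) - np.angle(stft[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        frame = np.fft.irfft(mag * np.exp(1j * phase), n=n_fft)
        out[k * hop : k * hop + n_fft] += window * frame
        phase = phase + omega + dphi  # propagate phase to the next frame
    return out

# Example: stretch an assumed 440 Hz test tone to twice its duration.
fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
y = phase_vocoder_stretch(x, rate=0.5)
```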

Historical Context

Speech synthesis, commonly known as Text-to-Speech (TTS), has become increasingly important in everyday life. The development of TTS technology spans centuries. It began with the Mechanical and Electro-Mechanical Era, which relied on mechanical and electro-mechanical components to simulate speech sounds; progressed through the Electrical and Electronic Era, in which electronic technology was used to build early formant synthesizers and speech coding systems; and eventually entered the Digital and Computational Era, marked by the transition to digital signal processing and computational methods for speech synthesis.

Mechanical and Electro-Mechanical Era

This era commenced with early speech synthesis attempts in the late 18th century. In 1769, Wolfgang von Kempelen engineered a mechanical speaking device that emulated speech sounds through the mechanical simulation of the vocal cords, vocal tract, and lungs. Around the same time, Christian Kratzenstein carried out early mechanical synthesis, employing five organ-pipe-like resonators to reproduce five distinct vowels. Later, Wheatstone introduced a "Speaking Machine" in 1824, and Joseph Faber constructed "Euphonia" in 1846, both using a series of mechanical components to produce speech-like sounds; these machines marked the close of the mechanical and electro-mechanical era. Although limited in the vocabulary and sentences they could generate, these early endeavors made noteworthy contributions to the exploration of speech synthesis, igniting extensive research into the physiology of speech production and experimental phonetics. The growing comprehension of acoustic resonators, spectral components, and formants marked a pivotal shift toward the scientific investigation of human sound production, setting the stage for the subsequent era of speech synthesis (Schroeder, 1993; Story, 2019).

Electrical and Electronic Era

In the Electrical and Electronic Era, Text-to-Speech (TTS) technology underwent a transformative evolution, as the convergence of electricity and emerging electronic components gave rise to more efficient speech synthesis systems. The era dates back to 1922, when John Q. Stewart designed a system that used early electronic technology to generate speech sounds, essentially the first electrical formant synthesizer. Electronic speech coding was inaugurated by Homer Dudley in response to the limited bandwidth of telegraph cables and the substantial bandwidth required to transmit the spectral content of speech. In the 1930s, Dudley created the first analysis-synthesis system, the Vocoder, introducing the concept of analyzing speech into spectral components and resynthesizing it from that compact representation, which addressed the bandwidth problem. This laid the foundation for subsequent TTS developments and paved the way for further research in speech synthesis. However, the Vocoder had limitations in naturalness and intelligibility, primarily due to its complexity and dependence on manual tuning, which made high-quality and versatile speech synthesis difficult to achieve (Story, 2019).

Key Innovations


Impact


Future Research

Despite significant technological advances and the development of new speech synthesis systems, from DECtalk in the 1980s to today's cutting-edge AI models, vocoders continue to play a key role in many applications. Today, vocoders are integral components of state-of-the-art speech synthesis systems, including WORLD[3], designed specifically for real-time applications, and BigVGAN[4], which harnesses generative adversarial networks (GANs)[5]. Most contemporary vocoders rely on neural networks, and there is still room for improvement in their quality and efficiency. Additionally, the integration of generative AI holds promise for further enhancing the quality of vocoder-synthesized voices.
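
As one concrete illustration, the sketch below performs analysis and resynthesis with the WORLD vocoder[3], assuming the pyworld Python bindings and the soundfile library are installed and that a mono recording input.wav is available (both the filename and the setup are assumptions for this example). WORLD decomposes speech into a fundamental-frequency contour, a spectral envelope, and an aperiodicity measure, which can be modified independently before resynthesis.

```python
# Analysis/resynthesis with the WORLD vocoder via the pyworld bindings.
import numpy as np
import pyworld as pw
import soundfile as sf

x, fs = sf.read("input.wav")                   # hypothetical mono speech file
x = np.ascontiguousarray(x, dtype=np.float64)  # WORLD expects float64 input

# Decompose speech into WORLD's three parameter streams:
# f0 (pitch contour), sp (spectral envelope), ap (aperiodicity).
f0, sp, ap = pw.wav2world(x, fs)

# Parameters can be edited before resynthesis; e.g. raise pitch one octave
# while leaving the spectral envelope (and thus the timbre) untouched.
y = pw.synthesize(f0 * 2.0, sp, ap, fs)
sf.write("output.wav", y, fs)
```

This separation of pitch from timbre is what makes parametric vocoders like WORLD attractive for controllable, real-time synthesis.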

Furthermore, vocoders are widely used in music production, although truly authentic synthesized singing voices have yet to be achieved. Exploring and refining the use of vocoders in this context offers a pathway to broader advances in speech synthesis technology.

Additionally, vocoders are currently deployed to address other challenges in voice technology: vocoder-synthesized voices serve as tools for training noise-robust models[6] and for detecting fake audio[7].

LLM Review


References

  1. Gold, Bernard. "A History of Vocoder Research at Lincoln Laboratory." The Lincoln Laboratory Journal, vol. 3, no. 2 (1990).
  2. Gordon, John William, and John Strawn. An Introduction to the Phase Vocoder. Report No. 55. CCRMA, Department of Music, Stanford University, 1987.
  3. Morise, Masanori, Fumiya Yokomori, and Kenji Ozawa. "WORLD: A Vocoder-Based High-Quality Speech Synthesis System for Real-Time Applications." IEICE Transactions on Information and Systems E99.D, no. 7 (2016): 1877–84. https://doi.org/10.1587/transinf.2015EDP7457.
  4. Lee, Sang-gil, Wei Ping, Boris Ginsburg, Bryan Catanzaro, and Sungroh Yoon. "BigVGAN: A Universal Neural Vocoder with Large-Scale Training." arXiv, 16 February 2023. http://arxiv.org/abs/2206.04658.
  5. Rocca, Joseph. "Understanding Generative Adversarial Networks (GANs)." Medium, 2019.
  6. Zheng, Nengheng, Yupeng Shi, Yuyong Kang, and Qinglin Meng. "A Noise-Robust Signal Processing Strategy for Cochlear Implants Using Neural Networks." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 8343–47, 2021. https://doi.org/10.1109/ICASSP39728.2021.9413452.
  7. Yan, Xinrui, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Haoxin Ma, Tao Wang, Shiming Wang, and Ruibo Fu. "An Initial Investigation for Detecting Vocoder Fingerprints of Fake Audio." In Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia, 61–68. DDAM '22. New York, NY, USA: Association for Computing Machinery, 2022. https://doi.org/10.1145/3552466.3556525.

Team Members

Alice Vanni, Amber, Chenyi, Erin Shi, Wenjun