Advancements in AI TTS (2020s)


== Historical Context ==
Before the Voder, several significant milestones marked progress in recreating human speech artificially. The history of speech synthesis can be traced back to the 18th century. The first machine that attempted to produce human speech was built by [[Mechanical synthesis#Christian Kratzenstein, 1779: generating vowels with acoustic resonators|Christian Gottlieb Kratzenstein]] in 1779. Using variously shaped hollow tubes as acoustic resonators, and by adjusting the resonators' lengths and shapes, it could produce the five vowel sounds /a/, /e/, /i/, /o/ and /u/. In 1791, Wolfgang von Kempelen of Vienna advanced this further with his [[Mechanical synthesis#Wolfgang von Kempelen, 1791: Acoustic-Mechanical Speech Machine|Speaking Machine]]: using a series of bellows, reeds and other mechanical components, this complex device modelled the human vocal tract and could produce not only vowels but also a number of consonants. In the early 19th century, [[Mechanical synthesis#Charles Wheatstone, 1800s: upgraded version of von Kempelen's machine|Charles Wheatstone]] built upon von Kempelen's work; by incorporating a vibrating reed, his machine could produce a wider range of more accurate and recognizable speech sounds. Then, in the 20th century, Bell Labs developed the Voder (Voice Operating Demonstrator), an electrical speech synthesizer in which source signals passed through band-pass electronic filters whose outputs were controlled by a human operator.

As these machines show, the guiding idea during the early development of speech synthesis was to copy speech (Klatt, 1987), by either mechanical or electrical means: the former builds machines that mimic the movements of the vocal tract, while the latter filters electrical source signals much as the vocal tract filters the glottal source. Strictly speaking, these early machines were not text-to-speech systems, since they did not generate speech from plain text. They were nonetheless milestones in the development of modern TTS systems, and their limitations were equally obvious: they could not produce the full range of vowels and consonants, let alone arbitrary sentences. In a demonstration film of the Voder, it produces only the single sentence 'She saw me', with stress placed on different words.

Full-fledged text-to-speech systems did not emerge until the late 20th century. Most TTS systems of that era were rule-based: speech was generated from predefined linguistic and phonetic rules, such as letter-to-sound rules and rules mapping stress to prosody. One of the earliest complete text-to-speech systems, MITalk, created by researchers at the Massachusetts Institute of Technology (MIT) in the 1970s, was also rule-based. Rule-based systems performed well enough that users could hear speech output as they typed sentences on a keyboard. However, the output still lacked naturalness and expressiveness. Moreover, compiling linguistic rules requires a great deal of manual work, and because languages are so flexible, it is practically impossible to cover every variation and exception, as the toy example below illustrates.
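
To make letter-to-sound rules concrete, here is a minimal, hypothetical sketch of a greedy rule interpreter in the spirit of rule-based TTS. The rule set and phoneme symbols are invented for illustration and are far smaller than the hundreds of context-sensitive rules a real system such as MITalk used; the wrong result for "make" shows why hand-written rules struggle with exceptions.

<syntaxhighlight lang="python">
# A toy rule-based letter-to-sound converter. The rules and phoneme symbols
# are invented for illustration; real systems such as MITalk used hundreds
# of context-sensitive rules per language.

# Rules are tried in order, so longer grapheme sequences are listed first.
# Each rule: (grapheme sequence, required right context, phonemes);
# "#" marks the end of the word, "" means "any context".
RULES = [
    ("igh", "",  ["AY"]),   # "high" -> HH AY
    ("sh",  "",  ["SH"]),   # "ship" -> SH IH P
    ("e",   "#", []),       # word-final "e" is silent, as in "make"
    ("a",   "",  ["AE"]),
    ("e",   "",  ["EH"]),
    ("i",   "",  ["IH"]),
    ("h",   "",  ["HH"]),
    ("k",   "",  ["K"]),
    ("m",   "",  ["M"]),
    ("p",   "",  ["P"]),
    ("s",   "",  ["S"]),
]

def letter_to_sound(word):
    """Convert a word to phonemes by applying the first matching rule."""
    word = word.lower() + "#"  # append an explicit word-boundary marker
    phonemes, i = [], 0
    while word[i] != "#":
        for graphemes, right, phones in RULES:
            if word.startswith(graphemes, i):
                if right == "" or word[i + len(graphemes)] == right:
                    phonemes.extend(phones)
                    i += len(graphemes)
                    break
        else:
            i += 1  # no rule matched: skip the letter
    return phonemes

print(letter_to_sound("high"))  # ['HH', 'AY']
print(letter_to_sound("ship"))  # ['SH', 'IH', 'P']
print(letter_to_sound("make"))  # ['M', 'AE', 'K'] -- wrong: the "a_e"
                                # pattern needs yet another rule, and so on
                                # for every exception the language throws up.
</syntaxhighlight>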
 
From the late 20th century onwards, further TTS techniques were developed, such as concatenative TTS and statistical parametric TTS based on hidden Markov models (HMMs). These addressed the naturalness, adaptability and flexibility problems to some extent and improved the performance of TTS systems, but each has its downsides. For example, despite its naturalness, concatenative TTS cannot easily generate new voices, because it relies heavily on recorded speech data; a sketch of its core concatenation step follows below.
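
To illustrate why concatenative TTS depends so heavily on recorded data, here is a minimal sketch of its concatenation step, assuming a hypothetical inventory of pre-recorded diphone waveforms (random noise bursts stand in for real recordings). Real systems additionally perform unit selection, searching a large database for the units that minimize target and join costs.

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of the concatenation step in concatenative TTS. The unit
# inventory below is hypothetical: noise bursts stand in for real diphone
# recordings sampled at a common rate.

SAMPLE_RATE = 16000

def concatenate_units(units, crossfade_ms=10.0):
    """Join recorded units with short linear cross-fades to hide the seams."""
    n_fade = int(SAMPLE_RATE * crossfade_ms / 1000)
    out = units[0].astype(np.float64)
    for unit in units[1:]:
        unit = unit.astype(np.float64)
        fade = np.linspace(0.0, 1.0, n_fade)
        # Overlap-add: fade out the tail of the output, fade in the new unit.
        out[-n_fade:] = out[-n_fade:] * (1.0 - fade) + unit[:n_fade] * fade
        out = np.concatenate([out, unit[n_fade:]])
    return out

# Hypothetical inventory: diphone name -> recorded waveform.
rng = np.random.default_rng(0)
inventory = {name: rng.standard_normal(4000) for name in ("h-e", "e-l", "l-ow")}
speech = concatenate_units([inventory[d] for d in ("h-e", "e-l", "l-ow")])
</syntaxhighlight>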
 
Recent TTS systems, by contrast, can generate natural, human-like speech thanks to their integration with artificial intelligence. They have improved in multiple ways, including their training models and techniques, their functionality and their output quality. The next section looks more closely at AI techniques in the field of TTS.


== Key Innovations ==

The Voder was among the first devices to allow manual control of speech synthesis. A pioneer in electronic sound generation, it broke human speech down into its fundamental acoustic components and reproduced those patterns electronically, a significant advance in the early stages of electronic speech synthesis. Moreover, the Voder was the first successful attempt at recreating an important physiological characteristic of the human voice: the ability to produce both voiced and unvoiced sounds.

To improve the operator's performance, the Voder had a recording and playback feature that allowed operators to review their output and objectively identify areas for improvement. This feature is similar to modern-day contact centers' use of call recording and analysis to improve agent performance.

== Impact ==

The advancements in AI Text-to-Speech (TTS) technology in the 2020s have had profound impacts across various domains:

1. Business:

Cost-Effective Marketing: AI TTS has allowed businesses to create cost-effective marketing materials by generating high-quality voiceovers for advertisements, promotional videos, and e-commerce product descriptions. This has enabled smaller businesses to compete with larger counterparts.

Elevated Customer Engagement: AI TTS is being used in customer service and support chatbots, providing a more engaging and interactive experience for customers. This technology reduces the need for human operators in routine tasks and enables 24/7 support and quick responses to customer queries.

Multilingual Communication: Companies have expanded their global reach by using AI TTS to provide content in multiple languages, which is especially important for businesses with international customers and markets.

Enhanced Brand Recognition: Customized brand voices can help businesses stand out in a crowded market. With AI TTS, businesses can maintain a consistent brand voice across various touchpoints.

2. Education:

Accessibility and Inclusion: TTS technology is being used in education and e-learning platforms to provide audio versions of text content. This benefits students with diverse learning styles and those with reading difficulties.

Language Learning: TTS technology remains an asset in language learning, helping learners improve pronunciation, fluency, and comprehension in various languages.

Personalized Learning: Educational institutions use AI TTS to provide personalized learning experiences, adapting content to individual student needs and preferences.

Teacher Assistance: TTS tools support educators in creating and delivering content, from generating voiceovers for instructional videos to offering speech feedback on assignments.

3. Society:

Language Preservation: Cross-lingual text-to-speech (CTTS) has facilitated communication across language barriers and has played a role in preserving and revitalizing low-resource and endangered languages, promoting linguistic diversity and aiding in documentation and communication. This is invaluable in a globalized world, allowing for better understanding and cooperation.

Digital Inclusion: TTS technology promotes digital inclusion by making digital content accessible to individuals with low literacy skills and those with disabilities. Improved TTS technology has greatly enhanced accessibility for individuals with visual impairments. It allows text-based information to be converted into speech, making digital content more accessible to a wider audience.

Entertainment and Content Creation: The entertainment industry has benefited from TTS technology through voice cloning and dubbing. It has become easier to dub movies, create voiceovers, and even bring back historical voices for documentaries and other media productions. AI TTS continues to support voiceovers in video games, audiobooks, and other audio content, contributing to the entertainment experience.

Emergency Communication: During emergencies and crisis situations, AI TTS is used to disseminate critical information rapidly, ensuring public safety and information access.

4. Privacy and Ethical Concerns:

Deepfake Threat: The potential for AI TTS to be used in deepfake audio and video content has become a growing concern. This emphasizes the need for robust authentication and content verification mechanisms.

Data Privacy: The collection and storage of voice data for TTS models raise concerns about data privacy. Regulations and guidelines have been developed to address these issues.

Bias and Cultural Sensitivity: The challenge of mitigating bias and ensuring cultural sensitivity in TTS models remains a critical consideration in their development and deployment.

== Future Research ==

-

== Reference ==

The Voder is a manually operated speech synthesizer that recreates the physiological characteristics of the human voice. It works by breaking human speech up into its acoustic components using a set of ten contiguous band-pass filters that cover the entire speech frequency range and are connected in parallel. The pass bands of the filters were chosen after a careful analysis of how the human ear interprets speech sounds. The initial sounds, produced by either an oscillator or a gas-discharge tube, were passed through these filters; their outputs were mixed and modulated in an amplifier and passed on to a loudspeaker to produce electronic human speech. Potentiometers (devices that control how much current flows through a circuit), operated by finger keys, controlled the levels of the band-pass filter outputs.

Two basic sounds are used to create speech sounds: the buzz tone and the hissing noise. The buzz tone creates voiced vowels and nasal sounds, while the hissing noise creates voiceless fricative sounds. Pitch is controlled by a foot pedal, while the finger keys shape the tones and hissing sounds into vowels, consonants, and inflections. The Voder's filters divide speech sounds into their acoustic components, which are then recreated from the buzz and hiss sources.
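
As a rough modern analogue of this source-filter design, the sketch below generates a buzz (pulse train) or hiss (white noise) source and passes it through a small bank of parallel band-pass filters with adjustable gains. The band edges, gains, and formant-like frequencies are arbitrary illustrative choices, not the Voder's actual ten pass bands.

<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import butter, lfilter

# A rough modern analogue of the Voder's design: a "buzz" (pulse train) or
# "hiss" (white noise) source is passed through parallel band-pass filters
# whose weighted outputs are mixed together. All band edges and gains are
# illustrative values, not the Voder's actual pass bands.

SAMPLE_RATE = 16000

def buzz(f0, duration):
    """Pulse-train source for voiced sounds (the 'buzz')."""
    n = int(SAMPLE_RATE * duration)
    phase = np.cumsum(np.full(n, f0 / SAMPLE_RATE))
    return np.diff(np.floor(phase), prepend=0.0)  # 1.0 at each pulse instant

def hiss(duration):
    """White-noise source for voiceless sounds (the 'hiss')."""
    return np.random.default_rng(0).standard_normal(int(SAMPLE_RATE * duration))

def filter_bank(source, bands, gains):
    """Mix the outputs of parallel band-pass filters, one gain per band."""
    out = np.zeros_like(source)
    for (lo, hi), gain in zip(bands, gains):
        b, a = butter(2, [lo, hi], btype="bandpass", fs=SAMPLE_RATE)
        out += gain * lfilter(b, a, source)
    return out / np.max(np.abs(out))

# A vowel-like sound: energy concentrated near invented "formant" bands.
vowel = filter_bank(buzz(110.0, 0.5),
                    [(600, 900), (1000, 1400), (2200, 2800)],
                    gains=[1.0, 0.7, 0.2])
# A fricative-like sound: high-frequency noise through a single wide band.
fricative = filter_bank(hiss(0.3), [(3000, 6000)], gains=[1.0])
</syntaxhighlight>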

== LLM Review ==

-

== Team Members ==

Xinyi, Xueying, Jingsi, Yilan, Wansu