Vocoder Development
Introduction
A vocoder is a crucial component in the realm of speech synthesis, employed in various applications ranging from telecommunications and entertainment to assistive technologies and virtual assistants. Its ability to analyze and synthesize the complex spectrum of human speech, including the pitch, formants, and prosody (melody, rhythm, and intonation in speech), has changed how we use synthesized voices.
The term "vocoder", a blend of "voice" and "coder", first emerged in the 1930s and was initially conceived for telecommunication purposes. In the 1950s, Lincoln Laboratory conducted research on detecting pitch in speech, which subsequently influenced the development of voice coders, commonly known as vocoders. These devices are designed to decrease the bandwidth required for transmitting speech. This reduction offers two advantages: it lowers the cost of transmitting and receiving speech, and it enhances the potential for maintaining privacy.[1]
Initially, a vocoder described a device designed to compress speech for efficient transmission over telephone lines. The idea was to split speech into frequency bands with filters and then reconstruct it at the other end. The goal was to save bandwidth, but in practice early vocoders struggled to preserve the natural quality of speech. In addition, the original vocoder transmitted only the loudness of each band, not its finer sound quality[2].
As technology advanced, the phase vocoder emerged, preserving both the loudness and the sound quality of speech. The phase vocoder is a significant advance in vocoder technology and plays a pivotal role in modern speech synthesis and audio signal processing; it was developed to address critical limitations of early vocoders and changed the way audio signals are processed and manipulated.[2]
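To make the idea concrete, the following minimal Python sketch implements the core phase-vocoder trick: analyze a signal with a short-time Fourier transform, keep the magnitudes, and propagate the phases so the signal can be time-stretched without changing its pitch. The frame size, hop length, and test tone are illustrative assumptions rather than values from any particular system.

```python
# Minimal phase-vocoder time-stretch sketch (illustrative assumptions throughout).
import numpy as np
from scipy.signal import stft, istft

def phase_vocoder_stretch(x, rate, fs=16000, n_fft=1024, hop=256):
    """Time-stretch x by `rate` (>1 slows it down) while preserving pitch."""
    _, _, X = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    mag, phase = np.abs(X), np.angle(X)

    # Fractional analysis-frame positions at which the spectrogram is resampled.
    steps = np.arange(0, X.shape[1] - 1, 1.0 / rate)
    bin_freqs = 2 * np.pi * hop * np.arange(X.shape[0]) / n_fft  # expected phase advance per hop

    out = np.zeros((X.shape[0], len(steps)), dtype=complex)
    acc_phase = phase[:, 0].copy()
    for i, s in enumerate(steps):
        j = int(s)
        frac = s - j
        # Interpolate magnitude between neighbouring analysis frames.
        m = (1 - frac) * mag[:, j] + frac * mag[:, j + 1]
        out[:, i] = m * np.exp(1j * acc_phase)
        # Advance the accumulated phase by the estimated instantaneous frequency.
        dphi = phase[:, j + 1] - phase[:, j] - bin_freqs
        dphi = np.angle(np.exp(1j * dphi))  # wrap to (-pi, pi]
        acc_phase += bin_freqs + dphi

    _, y = istft(out, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return y

# Example: a 440 Hz tone stretched to roughly twice its length keeps its pitch.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
stretched = phase_vocoder_stretch(tone, rate=2.0, fs=fs)
```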
Historical Context
Speech synthesis, commonly known as Text-to-Speech (TTS), has gained increasing importance in people's lives. The development of TTS technology spans centuries, beginning with the Mechanical and Electro-Mechanical Era, which relied on mechanical and electro-mechanical components to simulate speech sounds. It progressed through the Electrical and Electronic Era, where electronic technology was used to create early formant synthesizers and speech coding systems, and eventually entered the Digital and Computational Era, marked by the transition to digital signal processing and computational methods for speech synthesis.
Mechanical and Electro-Mechanical Era
This era commenced with early speech synthesis attempts in the late 18th century. In 1769, Wolfgang von Kempelen engineered a mechanical speaking device that ingeniously emulated speech sounds through the mechanical simulation of vocal cords, vocal tracts, and lungs. Around the same time, Christian Kratzenstein embarked on early mechanical synthesis by employing five organ pipe-like resonators to reproduce five distinct vowels. Later, in 1824, Wheatstone introduced a "Speaking Machine," and in 1846 Joseph Faber constructed "Euphonia," both utilizing a series of mechanical components to produce speech-like sounds. These devices mark the close of the mechanical and electro-mechanical era of speech synthesis.
Despite their constraints in generating an extensive vocabulary and sentences, these early endeavours made noteworthy contributions to the exploration of speech synthesis, igniting extensive research into the physiology of speech production and experimental phonetics. The comprehension of acoustic resonators, spectral components, and formants marked a pivotal shift toward the scientific investigation of human sound production, setting the stage for the subsequent era of speech synthesis [3][4].
Electrical and Electronic Era
In the electrical and electronic era, Text-to-Speech (TTS) technology underwent a transformative evolution, as electricity and emerging electronic components gave rise to more efficient speech synthesis systems. The era dates back to 1922, when John Q. Stewart designed a system that used early electronic technology to generate speech sounds, essentially the first electrical formant synthesizer. Electronic speech coding was inaugurated by Homer Dudley in response to the limited bandwidth of telegraph cables and the substantial bandwidth required to transmit spectral content. Dudley developed the Vocoder in the late 1930s and demonstrated the related Voder synthesizer in 1939, introducing the concept of analyzing speech into spectral components and resynthesizing it[4].
The Vocoder laid the foundation for subsequent TTS developments and paved the way for further research in the field of speech synthesis. However, the Vocoder had limitations in terms of naturalness and intelligibility in speech synthesis, primarily due to its complexity and dependence on manual tuning, making it challenging to achieve high-quality and versatile speech synthesis [4].
Key Innovations
Innovations in vocoders have largely come from advances in signal processing and, more recently, machine learning and deep learning.
According to Dudley, vocoders offered two advantages: more secure communications and a greater number of telephone channels in the same frequency space. Digital transmission made eavesdropping on conversations more difficult, but digitizing speech required wider transmission bandwidths. The vocoder was therefore used to reduce the speech bitrate to a rate that an ordinary telephone channel could carry.[1]
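As a rough illustration of where the bandwidth savings come from, the Python sketch below performs channel-vocoder-style analysis: the speech is split into frequency bands and only slowly varying band envelopes are kept, which (together with pitch and voicing information) is all that would need to be transmitted. The band edges, filter order, and frame rate are illustrative assumptions, not parameters from Dudley's design.

```python
# Channel-vocoder-style analysis: band envelopes instead of the raw waveform.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def channel_vocoder_analysis(x, fs=8000, n_bands=16, frame_rate=50):
    """Return an (n_bands, n_frames) matrix of band envelopes sampled at frame_rate Hz."""
    edges = np.linspace(100, fs / 2 - 100, n_bands + 1)  # linearly spaced band edges (toy choice)
    hop = fs // frame_rate
    n_frames = len(x) // hop
    env = np.zeros((n_bands, n_frames))
    for b in range(n_bands):
        sos = butter(4, [edges[b], edges[b + 1]], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, x)
        amp = np.abs(hilbert(band))  # instantaneous amplitude of this band
        for f in range(n_frames):
            env[b, f] = amp[f * hop:(f + 1) * hop].mean()  # one envelope value per frame
    return env

# Example: one second of noise-like input yields 16 bands x 50 frames = 800
# parameters per second, far fewer than the 8000 samples per second of the signal.
x = np.random.randn(8000)
print(channel_vocoder_analysis(x).shape)  # (16, 50)
```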
Pitch detection was a major difficulty in vocoding in the late 1950s. Reliable pitch estimates are needed to synthesize speech of acceptable quality, since they allow a vocoder to recreate variations in tone and intonation.[5] Speech is produced when air from the lungs sets the vocal folds vibrating and the resulting sound is shaped by the vocal tract and mouth; the vocal-fold vibrations are analyzed to find the pitch. To estimate the fundamental frequency, Lincoln Laboratory designed an algorithm based on parallel processing that produced accurate results.[1]
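A simple autocorrelation-based estimator, sketched below in Python, illustrates the pitch-detection problem. It is not the Lincoln Laboratory parallel-processing algorithm, just a minimal example of recovering the fundamental frequency from the periodicity of a voiced frame; the search range and voicing threshold are assumptions.

```python
# Minimal autocorrelation pitch estimator for one speech frame (illustrative only).
import numpy as np

def estimate_f0(frame, fs, f0_min=60.0, f0_max=400.0):
    """Estimate the fundamental frequency (Hz) of one frame, or 0.0 if unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # non-negative lags
    lag_min = int(fs / f0_max)
    lag_max = int(fs / f0_min)
    if ac[0] <= 0 or lag_max >= len(ac):
        return 0.0
    peak_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    if ac[peak_lag] / ac[0] < 0.3:  # weak periodicity -> treat as unvoiced
        return 0.0
    return fs / peak_lag

# Example: a synthetic 200 Hz voiced frame (fundamental plus one harmonic)
# should come back as roughly 200 Hz.
fs = 8000
t = np.arange(int(0.04 * fs)) / fs
frame = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
print(round(estimate_f0(frame, fs), 1))
```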
In a channel vocoder, formant tracking helps recreate speech: the first four formants are tracked and used to estimate the speech sounds. This lets a vocoder use less data and bandwidth while still sounding acceptable.[6]
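For illustration, the sketch below estimates formant frequencies from a single voiced frame using linear prediction (LPC), one common way to obtain the kind of formant parameters such a vocoder would transmit. It is not the method of the cited work; the LPC order and the pole-selection thresholds are assumptions.

```python
# LPC-based formant estimation for one voiced frame (illustrative assumptions).
import numpy as np
import librosa

def estimate_formants(frame, fs, order=10, n_formants=4):
    """Estimate the lowest formant frequencies (Hz) of one voiced frame via LPC."""
    a = librosa.lpc(frame.astype(float), order=order)       # prediction-polynomial coefficients
    roots = [r for r in np.roots(a)
             if np.imag(r) > 0 and np.abs(r) > 0.9]          # keep strong resonances only (crude filter)
    freqs = sorted(np.angle(r) * fs / (2 * np.pi) for r in roots)
    return [f for f in freqs if f > 90][:n_formants]         # drop near-DC poles, keep lowest formants

# Example with a crude two-resonance "vowel" frame; the estimates should land
# near the 700 Hz and 1200 Hz resonances that were put in.
fs = 8000
t = np.arange(240) / fs
frame = np.exp(-60 * t) * (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t))
print(estimate_formants(frame, fs))
```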
Another innovation is the speaker-dependent WaveNet vocoder, which is conditioned on acoustic features extracted by an existing vocoder. This approach does not need to explicitly model certain aspects of speech and offers better sound quality; it recovers information lost by conventional vocoders and captures source details more accurately.[7]
WORLD, a vocoder-based speech synthesis system, was developed to improve the sound quality of real-time speech applications. Real-time processing had been difficult because of high computational costs; WORLD achieves good sound quality while remaining fast enough for real-time use.[8]
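A hedged usage sketch, assuming the `pyworld` Python bindings to WORLD and the `soundfile` library are installed (the audio path is a placeholder), shows the analysis/synthesis round trip that makes WORLD convenient for real-time and parametric applications:

```python
# Analysis/synthesis with the WORLD vocoder via the pyworld bindings (assumed installed).
import numpy as np
import pyworld as pw
import soundfile as sf

x, fs = sf.read("speech.wav")                 # placeholder path; WORLD expects float64 mono audio
x = np.ascontiguousarray(x, dtype=np.float64)

f0, sp, ap = pw.wav2world(x, fs)              # fundamental frequency, spectral envelope, aperiodicity
y = pw.synthesize(f0, sp, ap, fs)             # resynthesize; y should sound close to the input

# Because the parameters are explicit, manipulations are straightforward,
# e.g. raising the pitch by one octave before resynthesis:
y_high = pw.synthesize(f0 * 2.0, sp, ap, fs)
```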
Impact
- Vocoders significantly decreased the bandwidth needs of voice signals, allowing for more cost-effective voice transmission and storage.[9]
- Vocoders are used in speech encryption (secure voice) systems and in processing that improves the intelligibility of poor-quality speech.
- The use of vocoder technologies has enabled voice transmission in noisy environments and over long distances, as well as facilitating communication for individuals with language barriers. The principles of signal processing employed in these techniques were subsequently adopted in other communication technologies, including cellular telephony.[10]
- Vocoders have proven invaluable in cochlear implant research, providing a deeper understanding of how cochlear implants work and supporting the design of innovative signal-processing solutions.[11]
- Within the domain of speech synthesis, vocoders have enabled the creation of high-quality, human-like voices and fostered the development of speech synthesis technology.
- The vocoder's ability to encode and process sound paved the way for a new era of electronic music creation. It facilitated the manipulation of a range of vocal effects and propelled the evolution of popular music to new heights, shaping modern music profoundly.
Future Research
Despite the significant advancements in technology and the development of new speech synthesis systems, ranging from DECtalk in the 1980s to the cutting-edge AI models of today, vocoders continue to play a key role in various applications. Today, vocoders are integral components of state-of-the-art speech synthesis systems, including WORLD[8], designed specifically for real-time applications, and BigVGAN[12], which leverages generative adversarial networks (GANs)[13]. Most contemporary vocoders rely on neural networks, and further improvements can be expected in this direction. Additionally, the integration of generative AI holds the promise of further enhancing the quality of vocoder-synthesized voices.
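Most neural vocoders of this kind, BigVGAN included, generate a waveform from a mel-spectrogram conditioning signal. The sketch below shows only that analysis side, computing a log-mel spectrogram with librosa; the exact parameter values, and the vocoder model call itself, differ between systems and are assumptions here.

```python
# Computing the kind of log-mel spectrogram a neural vocoder typically consumes.
import librosa
import numpy as np

y, sr = librosa.load("speech.wav", sr=22050)   # placeholder path
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80
)
log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))  # log compression, common for vocoder inputs

# log_mel has shape (80, n_frames); a trained neural vocoder would take this
# (as a tensor) and generate a waveform at the original sampling rate.
print(log_mel.shape)
```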
Furthermore, vocoders are widely used in music production, although truly authentic singing voices have yet to be achieved with them. Exploring and refining vocoders in this context offers a pathway to broader advances in speech synthesis, since the effort of synthesizing a singing voice may lead to more fluent and natural speaking voices.
Additionally, vocoders are currently deployed to address other challenges in voice technology. Vocoder-synthesized voices serve as tools for training noise-robust recognition models[14] and for detecting fake audio[15]. The former is useful not only for the speech recognition industry but also for medical applications of speech recognition and processing. The latter supports cybersecurity and data protection[16], since it could enable the automatic detection of identity theft attempts and privacy violations.
LLM Review
We prompted ChatGPT to act as a university professor and review this wiki page; some of the useful feedback it provided is listed below:
- Clarification: The term "prosody" may be unfamiliar to some readers. It would be helpful to briefly explain what prosody refers to in the context of speech synthesis.
- Subheadings: Consider using subheadings to categorize the innovations discussed (e.g., "Advancements in Telecommunications" or "Pitch Detection Innovations"). This can make the content more organized and accessible.
- The key innovations section is informative, but it would be helpful to organize the information into bullet points or subheadings. This would make it easier for readers to scan and understand the major innovations in vocoder technology.
- The historical context section is informative but quite lengthy. Consider breaking it down into smaller, more digestible paragraphs and use subheadings for different eras to make it easier to follow.
- Practical Implications: Expand on the practical implications of future research in each application area. How might these advancements benefit society or specific industries?
We also asked the Grammarly LLM system to "find faults in the argumentation", in order to get general feedback on how well we explained ourselves. Interestingly, the answer was the following: "As an AI-powered assistant, I don't have personal opinions or the ability to find faults in an argument. However, I can provide a neutral analysis of the text. [...]".
For citations, ChatGPT recommended using complete citations instead of "[1]" and "[5]"; however, this is a wiki page where all of the citations appear in the References section, and numbered in-text references are the common convention for wiki pages.
While AI-generated feedback can be a valuable resource, it should be used judiciously. It is important to balance AI input with our own thinking and to exercise caution rather than rely too heavily on such feedback. LLMs operate based on the patterns and data they have been trained on, but they lack true understanding or context, so they may generate feedback that appears plausible but is ultimately irrelevant to the specific context.
References
1. Gold, B. (1990). A history of vocoder research at Lincoln Laboratory. The Lincoln Laboratory Journal, 3(2), 163-202.
2. Gordon, J. W., & Strawn, J. (1987). An introduction to the phase vocoder (No. 55). CCRMA, Department of Music, Stanford University.
3. Schroeder, M. R. (1993). A brief history of synthetic speech. Speech Communication, 13(1-2), 231-237.
4. Story, B. H. (2019). History of speech synthesis. In The Routledge Handbook of Phonetics, edited by William F. Katz & Peter F. Assmann, 9-33.
5. Gold, B., & Rabiner, L. (1969). Parallel processing techniques for estimating pitch periods of speech in the time domain. The Journal of the Acoustical Society of America, 46(2B), 442-448.
6. Gold, B. (1980, April). Formant representation of parameters for a channel vocoder. In ICASSP'80, IEEE International Conference on Acoustics, Speech, and Signal Processing (Vol. 5, pp. 128-130). IEEE.
7. Tamamori, A., Hayashi, T., Kobayashi, K., Takeda, K., & Toda, T. (2017, August). Speaker-dependent WaveNet vocoder. In Interspeech (Vol. 2017, pp. 1118-1122).
8. Morise, M., Yokomori, F., & Ozawa, K. (2016). WORLD: A vocoder-based high-quality speech synthesis system for real-time applications. IEICE Transactions on Information and Systems, 99(7), 1877-1884.
9. Schroeder, M. R. (1966). Vocoders: Analysis and synthesis of speech. Proceedings of the IEEE, 54(5), 720-734.
10. Mills, M. (2012). Media and prosthesis: The vocoder, the artificial larynx, and the history of signal processing. Qui Parle: Critical Humanities and Social Sciences, 21(1), 107-149.
11. Shannon, R. V., Zeng, F. G., Kamath, V., Wygonski, J., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science, 270(5234), 303-304.
12. Lee, S. G., Ping, W., Ginsburg, B., Catanzaro, B., & Yoon, S. (2022). BigVGAN: A universal neural vocoder with large-scale training. arXiv preprint arXiv:2206.04658.
13. Rocca, J. (2019). Understanding generative adversarial networks (GANs). Medium, 7, 20.
14. Zheng, N., Shi, Y., Kang, Y., & Meng, Q. (2021, June). A noise-robust signal processing strategy for cochlear implants using neural networks. In ICASSP 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 8343-8347). IEEE.
15. Yan, X., Yi, J., Tao, J., Wang, C., Ma, H., Wang, T., ... & Fu, R. (2022, October). An initial investigation for detecting vocoder fingerprints of fake audio. In Proceedings of the 1st International Workshop on Deepfake Detection for Audio Multimedia (pp. 61-68).
16. Lim, S. Y., Chae, D. K., & Lee, S. C. (2022). Detecting deepfake voice using explainable deep learning techniques. Applied Sciences, 12(8), 3926.
Team Members
- Alice Vanni
- Amber Lankheet
- Chenyi Lin
- Erin Shi
- Wenjun Meng