Commercial TTS - Google, Amazon, Apple and Microsoft (2010s)


Introduction

This article surveys commercial text-to-speech (TTS) development at Microsoft, Google, and Apple during the 2010s. It covers the historical context of each company's entry into speech synthesis, key innovations such as the Microsoft Speech API, Mulan, WaveNet, and Siri, and the impact of these systems on accessibility and intelligent personal assistants.

Historical Context

Microsoft

Microsoft seriously entered the field of text-to-speech with the recruitment of the speech scientist Dr. Xuedong Huang, one of the developers of the SPHINX-II system, a successor of Carnegie Mellon's Harpy system[1]. With Huang at the helm, Microsoft developed its Speech Application Programming Interface (SAPI) for speech recognition and speech synthesis applications inside the Windows 2000 and Windows XP operating systems[2][3]. An early application from this line of development is Microsoft Sam, a male text-to-speech voice used by Microsoft Narrator, a screen-reading application intended as an accessibility feature for the visually impaired[4]. The development of more natural-sounding text-to-speech systems then moved to Whistler, a trainable system that aimed to move away from concatenative and formant synthesis methods by generalizing from data instead[5]. Subsequent advancements included Hidden Markov Models, and Microsoft later moved to deep neural networks, as did much of the field[6][7][8]. Later versions of Windows added more natural-sounding voices with different accents to Narrator and other Microsoft services, but the Speech API remains a key part of their development.
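
A minimal sketch of driving a SAPI voice from Python through its COM automation interface, assuming a Windows machine with the pywin32 package installed; this is an illustration rather than anything from the original article:

    # Minimal sketch: speak text through the Microsoft Speech API (SAPI)
    # using its COM automation interface. Requires Windows + pywin32.
    import win32com.client

    # SAPI.SpVoice is the COM class behind voices such as Microsoft Sam.
    voice = win32com.client.Dispatch("SAPI.SpVoice")

    # List the synthesis voices installed on this machine.
    for token in voice.GetVoices():
        print(token.GetDescription())

    # Synthesize speech synchronously through the default voice.
    voice.Speak("Hello from the Microsoft Speech API.")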

Google

Google began exploring text-to-speech (TTS) in earnest when it acquired the British artificial intelligence company DeepMind in January 2014.[9] Though DeepMind retains a measure of control over its intellectual property, exclusive of Google, the foundations laid by the subsidiary have helped create some of the most widely recognized devices that use TTS today. In 2016, DeepMind revealed WaveNet, a deep neural network for generating raw audio waveforms.[10] Following the release of WaveNet, Google created and refined multiple products built on WaveNet's model, including Google Assistant, released in the same year, and eventually the Google Cloud Speech API, released in 2017. WaveNet's advantage over other TTS models was clear, as Google Assistant moved to the forefront of Intelligent Personal Assistants, overtaking both Siri and Cortana and being matched only by Alexa in some regards.[11]
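
The core building block of WaveNet is a stack of dilated causal convolutions with gated activations, which lets the receptive field grow exponentially with depth. A simplified PyTorch sketch of that idea follows; it is not DeepMind's implementation and omits WaveNet's skip connections, conditioning inputs, and softmax output over quantized samples:

    # Simplified sketch of WaveNet-style dilated causal convolutions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedCausalLayer(nn.Module):
        def __init__(self, channels, dilation):
            super().__init__()
            # Left padding of (kernel_size - 1) * dilation keeps the layer
            # causal: the output at time t depends only on inputs <= t.
            self.pad = dilation
            self.filter = nn.Conv1d(channels, channels, 2, dilation=dilation)
            self.gate = nn.Conv1d(channels, channels, 2, dilation=dilation)

        def forward(self, x):
            h = F.pad(x, (self.pad, 0))
            # Gated activation: tanh "filter" modulated by sigmoid "gate".
            out = torch.tanh(self.filter(h)) * torch.sigmoid(self.gate(h))
            return x + out  # residual connection

    # Doubling the dilation each layer (1, 2, 4, ..., 512) gives ten layers
    # a receptive field of roughly 1024 past samples.
    stack = nn.Sequential(*[GatedCausalLayer(16, 2 ** i) for i in range(10)])
    audio = torch.randn(1, 16, 4000)   # (batch, channels, samples)
    print(stack(audio).shape)          # torch.Size([1, 16, 4000])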

Key Innovations

Microsoft

The first application of speech synthesis by Microsoft was in Windows 2000, with the release of Microsoft Narrator[4]. Narrator was subsequently improved to sound more natural, for example through formant analysis using Hidden Markov Models[6] and other advancements in neural network-based TTS. Microsoft also demonstrated one of the first bilingual speech synthesis systems, called Mulan[12]. Mulan can switch between Mandarin and English within a sentence while maintaining intonation and voice quality. This was a major development for text-to-speech, as combining a tonal language with a stress-based language normally proves difficult. According to the authors, the technology was mostly intended to allow more natural-sounding rendering of English terms embedded in Chinese text.
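
As an illustration of the front-end problem a bilingual system faces (not Mulan's actual algorithm), the following sketch splits mixed Chinese/English text into language-tagged runs that could then be routed to language-specific pronunciation rules:

    # Illustration only (not Mulan's method): split mixed Chinese/English
    # text into language-tagged runs for a bilingual TTS front end.
    def split_language_runs(text):
        runs, current, lang = [], [], None
        for ch in text:
            if "\u4e00" <= ch <= "\u9fff":       # CJK Unified Ideographs
                tag = "zh"
            elif ch.isascii() and ch.isalpha():  # Latin letters
                tag = "en"
            else:                                # spaces, punctuation, digits
                tag = lang                       # stay with the current run
            if tag != lang and current:
                runs.append((lang, "".join(current)))
                current = []
            lang = tag or lang
            current.append(ch)
        if current:
            runs.append((lang, "".join(current)))
        return runs

    print(split_language_runs("请打开 Windows Media Player 播放音乐"))
    # [('zh', '请打开 '), ('en', 'Windows Media Player '), ('zh', '播放音乐')]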

In light of the introduction of voice assistants, Microsoft released Cortana in April 2014. Cortana was a personal digital assistant using speech recognition for user input and speech synthesis for system output, and was based on the Microsoft Speech API. However, by the end of 2023, Microsoft had phased Cortana out of most of its services in favour of Bing Chat AI and Windows Copilot[13].

Google

With the advent of WaveNet in 2016, Google updated its existing applications to take advantage of the new TTS model. Google Translate transitioned to neural machine translation, which improved its handling of grammar and longer sentences while also allowing it to better pronounce translated phrases for the recipient.[14] This continued with the creation of Google Assistant: a smoother personal assistant that provided faster information and a much more pleasant, engaging experience than its competitors Alexa and Siri.[15] The release of Google Assistant, in the same year that WaveNet was revealed, also prompted the reveal of Google Home, an ambient voice-recognition interface for Google Assistant based in an individual home, which became a direct competitor to Amazon's Alexa.

Finally, in 2017, Google released the Google Cloud Speech API, which allowed developers to convert text into human-like speech in more than 180 voices across more than 30 languages and variants; it has gone through multiple revisions since then, adding more voices and languages to the API.[16]
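
A minimal sketch of calling the service with the google-cloud-texttospeech Python client, assuming the package is installed and Google Cloud credentials are configured; the WaveNet voice name used here is just one example of the voices the service exposes:

    # Minimal sketch: synthesize speech with the Google Cloud
    # Text-to-Speech Python client.
    from google.cloud import texttospeech

    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text="Hello from Google Cloud."),
        voice=texttospeech.VoiceSelectionParams(
            language_code="en-US", name="en-US-Wavenet-D"  # example voice
        ),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.LINEAR16
        ),
    )
    # The response carries the raw audio bytes.
    with open("output.wav", "wb") as f:
        f.write(response.audio_content)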

Apple

Apple began its venture into text-to-speech (TTS) technology in the 1980s with the introduction of the MacinTalk speech synthesis system on the Macintosh in 1984. This early effort paved the way for the integration of TTS technology across Apple's ecosystem, making digital text audibly accessible to users. Over the following decades, Apple continued to refine and expand its TTS capabilities within its PlainTalk package, showing a commitment to making its platforms more inclusive and user-friendly. Notable advancements include the introduction of high-quality voices and multilingual support, which significantly improved the user experience.

With the surge of mobile technology, Apple leveraged its TTS technology to further enhance accessibility on its iOS devices. Siri, introduced in 2011, embodied a major leap in Apple's TTS and speech recognition capabilities, allowing users to interact with their devices using natural language. The integration of TTS technology in various Apple applications and features, such as VoiceOver, Speak Selection, and Speak Screen, has demonstrated Apple's ongoing efforts to support individuals with visual impairments and learning disabilities. Through consistent innovation, Apple has played a pivotal role in advancing TTS technology and promoting its application in enhancing accessibility and user interaction across its product lineup.
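
Apple's desktop synthesis stack is also scriptable: the macOS say command exposes the system voices on the command line. A minimal sketch of driving it from Python, assuming a macOS machine where the Samantha voice (a common US English default) is installed:

    # Minimal sketch: Apple's built-in speech synthesis via the macOS
    # `say` command-line tool. Assumes macOS with the Samantha voice.
    import subprocess

    # List the installed system voices.
    subprocess.run(["say", "-v", "?"], check=True)

    # Speak a phrase aloud with a chosen voice.
    subprocess.run(
        ["say", "-v", "Samantha",
         "Speech synthesis has shipped on the Macintosh since 1984."],
        check=True,
    )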

Impact

Microsoft

Although Narrator itself never became very popular as an accessibility feature, the dominance of Windows in the operating-system market and the dire need for good screen readers created a breeding ground for their development, in particular the popular Job Access With Speech (JAWS) and NonVisual Desktop Access (NVDA)[17]. This has had an enormous impact on the speech synthesis field for accessibility applications and on computer usage by visually impaired people.

Google

Owing to the positive reception of Google Translate and the Google Cloud Speech API, the two applications have been used in a variety of speech-based research. Google Translate has the potential to serve as a tool for self-directed language learning, offering a much more accessible way to learn multiple languages.[18] The Google Cloud Speech API has featured in comparative experiments against other speech recognition systems such as Kaldi,[19] as a Bluetooth navigator for special-needs individuals,[20] and in a speech recognition application for the speech-impaired.[21]

Apple

Apple has had a substantial impact on the field of speech synthesis, contributing to both the technology's development and its widespread adoption. Through its various product offerings, Apple has brought speech synthesis to the forefront, making it an integral part of modern user interfaces. The introduction of Siri, Apple's voice-activated personal assistant, in 2011 marked a significant milestone in promoting speech synthesis and natural language processing to a broader consumer audience. Siri's ability to understand and respond to natural language queries demonstrated a practical application of speech synthesis, setting a precedent for other tech companies to integrate similar functionalities in their offerings.

Moreover, Apple's continuous improvement and integration of speech synthesis in accessibility features like VoiceOver have been instrumental in aiding individuals with visual impairments or learning disabilities. The high-quality, natural-sounding voices provided by Apple's speech synthesis engines have set a high standard in the industry, pushing other companies to improve the quality and naturalness of synthesized speech. By making speech synthesis a core component of its user experience and accessibility initiatives, Apple has significantly influenced the perception and the technological trajectory of speech synthesis, fostering innovation and setting a benchmark for user-centric design in the tech industry.

Future research


LLM Review


Contributors

Brandi Hongell, Ömer Tarik Özyilmaz, Jocomin Galarneau, Xiaoling (River) Lin, Yuxing (Patrick) Ouyang

References

  1. Huang, X., Alleva, F., Hon, H. W., Hwang, M. Y., Lee, K. F., & Rosenfeld, R. (1993). The SPHINX-II speech recognition system: an overview. Computer Speech & Language, 7(2), 137-148.
  2. Microsoft Speech API. Wikipedia. https://en.wikipedia.org/wiki/Microsoft_Speech_API
  3. Këpuska, V., & Bohouta, G. (2017). Comparing speech recognition systems (Microsoft API, Google API and CMU Sphinx). Int. J. Eng. Res. Appl, 7(03), 20-24.
  4. Complete guide to Narrator. Microsoft Support. https://support.microsoft.com/en-us/windows/complete-guide-to-narrator-e4397a0d-ef4f-b386-d8ae-c172f109bdb1
  5. Huang, X., Acero, A., Adcock, J., Hon, H. W., Goldsmith, J., Liu, J., & Plumpe, M. (1996, October). Whistler: A trainable text-to-speech system. In Proceeding of Fourth International Conference on Spoken Language Processing. ICSLP'96 (Vol. 4, pp. 2387-2390). IEEE.
  6. Acero, A. (1999). Formant analysis and synthesis using hidden Markov models. In Sixth European Conference on Speech Communication and Technology.
  7. Dahl, G. E., Yu, D., Deng, L., & Acero, A. (2011). Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on audio, speech, and language processing, 20(1), 30-42.
  8. Microsoft's new neural text-to-speech service helps machines speak like people. Microsoft Azure Blog. https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/
  9. Hodson, H. (n.d.). DeepMind and Google: The battle to control artificial intelligence.
  10. Oord, A. van den, Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., & Kavukcuoglu, K. (2016). WaveNet: A Generative Model for Raw Audio (arXiv:1609.03499). arXiv. https://doi.org/10.48550/arXiv.1609.03499
  11. Berdasco, A., López, G., Diaz, I., Quesada, L., & Guerrero, L. A. (2019). User Experience Comparison of Intelligent Personal Assistants: Alexa, Google Assistant, Siri and Cortana. Proceedings, 31(1), Article 1. https://doi.org/10.3390/proceedings2019031051
  12. Chu, M., Peng, H., Zhao, Y., Niu, Z., & Chang, E. (2003, April). Microsoft Mulan-a bilingual TTS system. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03). (Vol. 1, pp. I-I). IEEE.
  13. End of support for Cortana. Microsoft Support. https://support.microsoft.com/en-us/topic/end-of-support-for-cortana-d025b39f-ee5b-4836-a954-0ab646ee1efa
  14. This is how Google Translate actually works. (2021, March 24). The Independent. https://www.independent.co.uk/tech/how-does-google-translate-work-b1821775.html
  15. Lynley, M. (2016, May 18). Google unveils Google Assistant, a virtual assistant that’s a big upgrade to Google Now. TechCrunch. https://techcrunch.com/2016/05/18/google-unveils-google-assistant-a-big-upgrade-to-google-now/
  16. Barri, E., Gkamas, A., Michos, E., Bouras, C., Koulouri, C., & Salgado, S. A. K. (2020). Text to Speech through Bluetooth for People with Special Needs Navigation.
  17. The Verge. https://www.theverge.com/23203911/screen-readers-history-blind-henter-curran-teh-nvda
  18. van Lieshout, C., & Cardoso, W. (2022). Google Translate as a tool for self-directed language learning. http://hdl.handle.net/10125/73460
  19. Kimura, T., Nose, T., Hirooka, S., Chiba, Y., & Ito, A. (2019). Comparison of Speech Recognition Performance Between Kaldi and Google Cloud Speech API. In J.-S. Pan, A. Ito, P.-W. Tsai, & L. C. Jain (Eds.), Recent Advances in Intelligent Information Hiding and Multimedia Signal Processing (pp. 109–115). Springer International Publishing. https://doi.org/10.1007/978-3-030-03748-2_13
  20. Barri, E., Gkamas, A., Michos, E., Bouras, C., Koulouri, C., & Salgado, S. A. K. (2020). Text to Speech through Bluetooth for People with Special Needs Navigation.
  21. Anggraini, N., Kurniawan, A., Wardhani, L. K., & Hakiem, N. (2018). Speech Recognition Application for the Speech Impaired using the Android-based Google Cloud Speech API. TELKOMNIKA (Telecommunication Computing Electronics and Control), 16(6), Article 6. https://doi.org/10.12928/telkomnika.v16i6.9638