Commercial TTS - Google, Amazon, Apple and Microsoft (2010s)

Introduction

The landscape of text-to-speech (TTS) technology has evolved significantly over the years, with key players like Microsoft, Google, Apple, and Amazon shaping its trajectory. The roots of modern TTS reach back several decades: HMM-based speech synthesis, later embodied in the HTS toolkit, emerged in the 1990s, and the Festival Speech Synthesis System marked a milestone in the late 1990s. However, it was in the 2010s that advances in neural network-based TTS began to redefine the field.

Microsoft, propelled by the expertise of speech scientist Dr. Xuedong Huang, made substantial strides in TTS. From the Microsoft Sam voice to the trainable Whistler system, Microsoft explored Hidden Markov Models and eventually embraced deep neural networks. The development of the Microsoft Speech API played a pivotal role in integrating natural-sounding voices into Windows operating systems.

Google, after acquiring DeepMind in 2014, unleashed WaveNet—a deep neural network that revolutionized TTS. WaveNet became the driving force behind products like Google Assistant and Google Cloud Speech API, propelling Google to the forefront of intelligent personal assistants.

Amazon, originating as an online bookstore, ventured into TTS with the Kindle 2 in 2009. However, it was the launch of Amazon Polly in 2016, leveraging deep learning for lifelike speech, that marked a significant leap. Amazon's virtual assistant, Alexa, further solidified their position in the TTS landscape.

Apple, with its MacInTalk system in the 1980s, laid the foundation for TTS integration into its ecosystem. Siri, introduced in 2011, represented a paradigm shift in user-device interaction. Apple's commitment to inclusivity and innovation continued with features like VoiceOver, Speak Selection, and Speak Screen.

In the subsequent sections, we will explore the key innovations brought forth by each company, examining how these developments have shaped the accessibility, user experience, and broader implications of TTS technology.

Historical Context

Microsoft

Microsoft seriously entered the field of text-to-speech with the recruitment of the speech scientist Dr. Xuedong Huang, one of the developers of the SPHINX-II system, a successor of Carnegie Mellon's Harpy system[1]. With Huang at the helm, Microsoft developed its Speech Application Programming Interface (API) for speech recognition and speech synthesis applications inside the Windows 2000 and Windows XP operating systems[2][3]. An early application from this line of development is Microsoft Sam, a male text-to-speech voice used by Microsoft Narrator, a screen-reading application intended as an accessibility feature for the visually impaired[4]. The development of more natural-sounding text-to-speech systems then moved to Whistler, a trainable system that aimed to move away from concatenative and formant synthesis methods by generalizing from data instead[5]. Later advancements incorporated Hidden Markov Models, and, like others in the field, Microsoft eventually moved to deep neural networks[6][7][8]. Subsequent versions of Windows added more natural-sounding voices with different accents to Narrator and other services, with the Microsoft Speech API remaining a key component of their development.
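
The Speech API is exposed as a COM automation interface, so any COM-capable language can drive it. The following is a minimal sketch, assuming a Windows machine with the pywin32 package installed; the voices available depend on the system (Microsoft Sam shipped with XP-era Windows, newer voices since):

```python
# Minimal sketch: drive the Windows Speech API (SAPI) via its COM interface.
# Assumes Windows with pywin32 installed (pip install pywin32).
import win32com.client

voice = win32com.client.Dispatch("SAPI.SpVoice")  # default system voice

# List the voices installed on this machine.
for token in voice.GetVoices():
    print(token.GetDescription())

# Speak synchronously through the default audio output.
voice.Speak("Hello from the Microsoft Speech API.")
```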

Google

Google began exploring text-to-speech (TTS) in earnest when it acquired the British artificial intelligence company DeepMind in January 2014.[9] Though DeepMind retains a degree of control over its intellectual property, separate from Google, the foundations laid by the subsidiary have helped create some of the most widely recognized devices that use TTS today. In 2016, DeepMind unveiled WaveNet, a deep neural network for generating raw audio waveforms.[10] Following the release of WaveNet, Google began to build and refine multiple products on WaveNet's model, including Google Assistant, released the same year, and eventually the Google Cloud Speech API, released in 2017. WaveNet's advantage over other TTS models was clear: Google Assistant moved to the forefront of intelligent personal assistants, overtaking both Siri and Cortana and being matched only by Alexa in some regards.[11]

Amazon

Amazon, established in 1994 by Jeff Bezos, began as an online bookstore, expanding into various domains as technology evolved[12]. It dove into TTS when it launched the Kindle 2 in 2009, which had a "Read-to-Me" feature[13]. This was a landmark moment as it integrated TTS technology into a mainstream commercial product.

Key Innovations

Microsoft

The first application of speech synthesis by Microsoft was in Windows 2000, with the release of Microsoft Narrator[4]. Narrator was subsequently improved to sound more natural, for example through formant analysis using Hidden Markov Models[6] and later advancements in neural network-based TTS. Microsoft also demonstrated one of the first bilingual speech synthesis systems, called Mulan[14]. Mulan is a model that can switch between Mandarin and English within a sentence while maintaining intonation and voice quality. This was a major development for text-to-speech, especially as combining a tonal language with a stress language normally proves difficult. According to the authors, the main application of this technology was to allow more natural-sounding rendering of English terms embedded in Chinese text.

With the rise of voice assistants, Microsoft released Cortana in April 2014. Cortana was a personal digital assistant that used speech recognition for user input and speech synthesis for system output, and was based on the Microsoft Speech API. However, by the end of 2023, Microsoft had phased Cortana out of most of its services in favour of Bing Chat AI and Windows Copilot[15].

Google

With the advent of WaveNet in 2016, Google began updating its existing applications around the new generation of neural models. Google Translate transitioned to neural machine translation, which improved its handling of grammar and longer sentences while also allowing it to pronounce translated phrases more naturally.[16] This continued with the creation of Google Assistant: a smoother personal assistant that retrieved information faster and offered a more pleasant, engaging experience than its competitors Alexa and Siri.[17] The release of Google Assistant, in the same year that WaveNet was revealed, also prompted the launch of Google Home, an ambient voice-recognition interface for Google Assistant in the home, which became a direct competitor to Amazon's Alexa.

Finally, in 2017, Google released the Google Cloud Speech API for speech recognition, followed in 2018 by the Cloud Text-to-Speech API, which allowed developers to turn text into human-like speech in more than 180 voices across more than 30 languages and variants; both have gone through multiple revisions since, adding further voices and language support.[18]
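
As a minimal sketch of what this looks like from the developer's side, assuming the google-cloud-texttospeech Python client is installed and Google Cloud credentials are configured (the voice name below is one of the published WaveNet voices), a synthesis request takes only a few lines:

```python
# Minimal sketch: synthesize a phrase with a WaveNet voice via Cloud Text-to-Speech.
# Assumes `pip install google-cloud-texttospeech` and configured credentials.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello from WaveNet."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # one of the WaveNet voices
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

# The response carries the raw encoded audio bytes.
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```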

Apple

Apple began its venture into text-to-speech (TTS) technology in the 1980s with the introduction of the MacInTalk text-to-speech system on the Macintosh in 1984. This early effort paved the way for the integration of TTS technology into Apple's ecosystem, making digital text audibly accessible to users. Over the decades, Apple continued to refine and expand its TTS capabilities within its PlainTalk package, showcasing a commitment to making its platforms more inclusive and user-friendly. Notable advancements include the introduction of high-quality voices and multilingual support, which significantly improved the user experience.

With the surge of mobile technology, Apple leveraged its TTS technology to further enhance accessibility on its iOS devices. Siri, introduced in 2011, embodied a major leap in Apple's TTS and speech recognition capabilities, allowing users to interact with their devices using natural language[19]. The integration of TTS technology in various Apple applications and features, such as VoiceOver, Speak Selection, and Speak Screen, has demonstrated Apple's ongoing efforts to support individuals with visual impairments and learning disabilities. Through consistent innovation, Apple has played a pivotal role in advancing TTS technology and promoting its application in enhancing accessibility and user interaction across its product lineup.
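
These system voices are also scriptable. As a small illustration, assuming a macOS machine, the built-in `say` utility, which fronts the same speech synthesis engine used by features such as VoiceOver and Speak Selection, can be driven from Python:

```python
# Minimal sketch: macOS speech synthesis via the built-in `say` utility.
# Assumes macOS; available voice names vary by system configuration.
import subprocess

# Speak a phrase aloud with a named system voice.
subprocess.run(
    ["say", "-v", "Samantha", "Hello from macOS speech synthesis."],
    check=True,
)

# Render the same phrase to an audio file instead of the speakers.
subprocess.run(
    ["say", "-v", "Samantha", "-o", "hello.aiff",
     "Hello from macOS speech synthesis."],
    check=True,
)
```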

Amazon

The Kindle 2 was just the tip of the iceberg. While it allowed text content to be read aloud, the voice was mechanical, lacking human-like intonation.

Amazon Polly was the next significant milestone. Launched in 2016 as part of the AI functionality of Amazon Web Services (AWS)[20], Polly leveraged deep learning to produce lifelike speech. It supported multiple languages and offered a plethora of voices. Beyond simple reading, Polly could whisper[21], vary speech rates, and even emulate different speaking styles[22].
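
For a sense of how these features surface to developers, here is a minimal sketch using the boto3 Python SDK, assuming AWS credentials with Polly access are configured; the whisper effect is requested through Polly's SSML extensions:

```python
# Minimal sketch: Amazon Polly synthesis, plain text and an SSML whisper effect.
# Assumes `pip install boto3` and AWS credentials with Polly access.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Plain-text request with one of Polly's standard voices.
response = polly.synthesize_speech(
    Text="Hello from Amazon Polly.",
    VoiceId="Joanna",
    OutputFormat="mp3",
)
with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())

# SSML request: whispered delivery at a slowed speech rate.
ssml = (
    '<speak><amazon:effect name="whispered">'
    '<prosody rate="slow">This part is whispered.</prosody>'
    "</amazon:effect></speak>"
)
response = polly.synthesize_speech(
    Text=ssml,
    TextType="ssml",
    VoiceId="Joanna",
    OutputFormat="mp3",
)
with open("whisper.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```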

Another major stride came with Alexa, Amazon's virtual assistant, introduced in 2014[23]. Alexa's voice interactions were driven by sophisticated TTS engines. The innovation didn't stop with English: Alexa soon became multilingual, speaking German, Japanese, and other languages[24].

Voice User Interfaces (VUIs) were another addition, wherein the device could recognize individual users and customize responses[25]. This personalization made interactions more user-centric and increased user engagement.

Impact

Microsoft

Although Narrator itself did not become very popular as an accessibility feature, the dominance of Windows in the operating-system market and the dire need for good screen readers created a breeding ground for their development, in particular the popular Job Access With Speech (JAWS) and NonVisual Desktop Access (NVDA)[26]. This has had an enormous impact on the speech synthesis field for accessibility applications and on the computer usage of visually impaired people.

Google

Following the positive reception of Google Translate and the Google Cloud Speech API, the two applications have been used in a variety of research based on speech recognition. Google Translate has the potential to be used as a tool for self-directed language learning, offering a much more accessible way to learn multiple languages.[27] The Google Cloud Speech API has been used in comparative experiments testing it against other speech recognition systems, such as Kaldi,[28] as a Bluetooth navigator for special-needs individuals,[29] and in speech recognition applications for the speech impaired.[30]
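
As a rough sketch of the kind of call such studies build on, assuming the google-cloud-speech Python client is installed, credentials are configured, and a short mono 16 kHz WAV file is at hand, a transcription request looks like this:

```python
# Minimal sketch: transcribe a short mono 16 kHz WAV file with Cloud Speech-to-Text.
# Assumes `pip install google-cloud-speech` and configured credentials.
from google.cloud import speech

client = speech.SpeechClient()

with open("utterance.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Print the top hypothesis for each recognized segment.
    print(result.alternatives[0].transcript)
```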

Apple

Apple has had a substantial impact on the field of speech synthesis, contributing to both the technology's development and its widespread adoption. Through its various product offerings, Apple has brought speech synthesis to the forefront, making it an integral part of modern user interfaces. The introduction of Siri, Apple's voice-activated personal assistant, in 2011 marked a significant milestone in promoting speech synthesis and natural language processing to a broader consumer audience. Siri's ability to understand and respond to natural language queries demonstrated a practical application of speech synthesis, setting a precedent for other tech companies to integrate similar functionalities in their offerings[31].

Apple's continuous improvement and integration of speech synthesis in accessibility features like VoiceOver have been instrumental in aiding individuals with visual impairments or learning disabilities[32]. By making speech synthesis a core component of its user experience and accessibility initiatives, Apple has significantly influenced the perception and the technological trajectory of speech synthesis, fostering innovation and setting a benchmark for user-centric design in the tech industry.

Amazon

Amazon's TTS features, especially in the Kindle, expanded accessibility for various user groups. For the visually impaired and those with reading disabilities such as dyslexia, the TTS capabilities of Kindle devices became invaluable, allowing for auditory consumption of textual content[33]. This aligned with a broader trend where TTS technologies were increasingly recognized for their potential to bridge accessibility gaps in education[34].

With Alexa and the Amazon Echo's ecosystem, voice-driven shopping emerged as a new trend. Users could conduct transactions, reorder products, and get product recommendations using voice commands. This introduced a more intuitive and hands-free shopping experience, evolving the e-commerce landscape[35].

The widespread adoption of Alexa and other voice-enabled devices brought forth concerns regarding data security and user privacy. There were rising questions about how voice data was stored, processed, and potentially utilized, sparking debates about user consent and the ethics of voice data collection[36].

Future research

As the evolution of commercial TTS systems by major tech giants in the 2010s has provided a solid foundation for voice technologies, future researchers may find it intriguing to delve into the socio-cultural implications of these advancements. For instance, how have these TTS technologies impacted individuals with disabilities, particularly in terms of accessibility and inclusion? Another pertinent line of exploration could be the integration of emotion and sentiment in TTS, understanding its impact on user experience and potential applications in areas like mental health support. Additionally, as concerns around privacy and data security grow, it might be worthwhile to investigate the ethical dimensions of TTS, especially how these companies are managing user data and the potential biases that might arise from the data sources they utilize. Lastly, with the emergence of new and localized languages, studying the challenges and opportunities in expanding TTS offerings to support indigenous and lesser-known languages would shed light on the technological efforts in preserving linguistic diversity.

LLM Review

Reviewed by GPT-4:

Your text provides a comprehensive overview of the evolution and contributions of major tech companies in the field of text-to-speech (TTS). As a professor of speech science reviewing this piece, I'd offer the following feedback:

Score: 90/100

Strengths:

1. Organization and Structure: Your text has a clear and logical structure, beginning with a broad overview and then diving into company-specific histories and innovations.

2. Comprehensive Detail: You provide a broad range of historical and technological details that showcase deep research and understanding.

3. Citations: The use of references enhances the credibility of your information.

4. Analysis: The "Impact" section critically evaluates the contributions and implications of TTS advancements.

Areas for Improvement:

1. Redundancy: There are some instances of repeated information, particularly regarding the historical context of each company's TTS efforts. For example, the mention of Dr. Xuedong Huang's contributions and Microsoft's efforts is repeated in the "Introduction" and "Historical Context" sections.

2. Flow and Transition: Some sections could benefit from smoother transitions to maintain reader engagement and ensure the narrative flow remains uninterrupted.

3. Further Elaboration: In the "Future research" section, you may want to include more about the potential ethical concerns related to deepfakes or synthesized voice impersonations.

4. Grammatical Consistency: Ensure that tense remains consistent throughout the document. For instance, the switch from past to present tense in certain parts can be jarring.

Overall: This is a well-researched and written piece that effectively captures the evolution of TTS technology among tech giants. By addressing the areas of improvement mentioned above, this text could be even more polished and impactful. Great job!

Contributors

Brandi Hongell, Ömer Tarik Özyilmaz, Jocomin Galarneau, Xiaoling (River) Lin, Yuxing (Patrick) Ouyang

References

  1. Huang, X., Alleva, F., Hon, H. W., Hwang, M. Y., Lee, K. F., & Rosenfeld, R. (1993). The SPHINX-II speech recognition system: an overview. Computer Speech & Language, 7(2), 137-148.
  2. https://en.wikipedia.org/wiki/Microsoft_Speech_API
  3. Këpuska, V., & Bohouta, G. (2017). Comparing speech recognition systems (Microsoft API, Google API and CMU Sphinx). Int. J. Eng. Res. Appl, 7(03), 20-24.
  4. https://support.microsoft.com/en-us/windows/complete-guide-to-narrator-e4397a0d-ef4f-b386-d8ae-c172f109bdb1
  5. Huang, X., Acero, A., Adcock, J., Hon, H. W., Goldsmith, J., Liu, J., & Plumpe, M. (1996, October). Whistler: A trainable text-to-speech system. In Proceeding of Fourth International Conference on Spoken Language Processing. ICSLP'96 (Vol. 4, pp. 2387-2390). IEEE.
  6. Acero, A. (1999). Formant analysis and synthesis using hidden Markov models. In Sixth European Conference on Speech Communication and Technology.
  7. Dahl, G. E., Yu, D., Deng, L., & Acero, A. (2011). Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on audio, speech, and language processing, 20(1), 30-42.
  8. https://azure.microsoft.com/en-us/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/
  9. Hodson, H. (n.d.). DeepMind and Google: The battle to control artificial intelligence.
  10. Oord, A. van den, Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., & Kavukcuoglu, K. (2016). WaveNet: A Generative Model for Raw Audio (arXiv:1609.03499). arXiv. https://doi.org/10.48550/arXiv.1609.03499
  11. Berdasco, A., López, G., Diaz, I., Quesada, L., & Guerrero, L. A. (2019). User Experience Comparison of Intelligent Personal Assistants: Alexa, Google Assistant, Siri and Cortana. Proceedings, 31(1), Article 1. https://doi.org/10.3390/proceedings2019031051
  12. Byers, A. (2007). Jeff Bezos: the founder of Amazon.com. The Rosen Publishing Group.
  13. Stone, B. (2013). The everything store: Jeff Bezos and the age of Amazon. Random House.
  14. Chu, M., Peng, H., Zhao, Y., Niu, Z., & Chang, E. (2003, April). Microsoft Mulan-a bilingual TTS system. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP'03). (Vol. 1, pp. I-I). IEEE.
  15. https://support.microsoft.com/en-us/topic/end-of-support-for-cortana-d025b39f-ee5b-4836-a954-0ab646ee1efa
  16. This is how Google Translate actually works. (2021, March 24). The Independent. https://www.independent.co.uk/tech/how-does-google-translate-work-b1821775.html
  17. Lynley, M. (2016, May 18). Google unveils Google Assistant, a virtual assistant that’s a big upgrade to Google Now. TechCrunch. https://techcrunch.com/2016/05/18/google-unveils-google-assistant-a-big-upgrade-to-google-now/
  18. Barri, E., Gkamas, A., Michos, E., Bouras, C., Koulouri, C., & Salgado, S. A. K. (2020). Text to Speech through Bluetooth for People with Special Needs Navigation.
  19. Capes, T., Coles, P., Conkie, A., Golipour, L., Hadjitarkhani, A., Hu, Q., ... & Zhang, H. (2017, August). Siri on-device deep learning-guided unit selection text-to-speech system. In Interspeech (pp. 4011-4015).
  20. https://techcrunch.com/2016/11/30/amazon-launches-amazon-ai-to-bring-its-machine-learning-smarts-to-developers/
  21. https://aws.amazon.com/blogs/aws/new-amazon-polly-speech-marks/
  22. https://aws.amazon.com/blogs/aws/amazon-polly-introduces-neural-text-to-speech-and-newscaster-style/
  23. Luger, E., & Sellen, A. (2016, May). "Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5286-5297).
  24. https://techcrunch.com/2019/10/11/alexa-now-speaks-spanish-including-in-multi-lingual-mode/
  25. Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018, April). Voice interfaces in everyday life. In proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1-12).
  26. https://www.theverge.com/23203911/screen-readers-history-blind-henter-curran-teh-nvda
  27. van Lieshout, C., & Cardoso, W. (2022). Google Translate as a tool for self-directed language learning. http://hdl.handle.net/10125/73460
  28. Kimura, T., Nose, T., Hirooka, S., Chiba, Y., & Ito, A. (2019). Comparison of Speech Recognition Performance Between Kaldi and Google Cloud Speech API. In J.-S. Pan, A. Ito, P.-W. Tsai, & L. C. Jain (Eds.), Recent Advances in Intelligent Information Hiding and Multimedia Signal Processing (pp. 109–115). Springer International Publishing. https://doi.org/10.1007/978-3-030-03748-2_13
  29. Barri, E., Gkamas, A., Michos, E., Bouras, C., Koulouri, C., & Salgado, S. A. K. (2020). Text to Speech through Bluetooth for People with Special Needs Navigation.
  30. Anggraini, N., Kurniawan, A., Wardhani, L. K., & Hakiem, N. (2018). Speech Recognition Application for the Speech Impaired using the Android-based Google Cloud Speech API. TELKOMNIKA (Telecommunication Computing Electronics and Control), 16(6), Article 6. https://doi.org/10.12928/telkomnika.v16i6.9638
  31. Aron, J. (2011). How innovative is Apple's new voice assistant, Siri?
  32. Sayago, S., & Ribera, M. (2020, December). Apple Siri (input)+ Voice Over (output)= a de facto marriage: An exploratory case study with blind people. In Proceedings of the 9th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (pp. 6-10).
  33. Schneps, M. H., Thomson, J. M., Sonnert, G., Pomplun, M., Chen, C., & Heffner-Wong, A. (2013). Shorter lines facilitate reading in those who struggle. PloS one, 8(8), e71161.
  34. Wood, E., Willoughby, T., Rushing, A., Bechtel, L., & Gilbert, J. (2005). Use of computer input devices by older adults. Journal of Applied Gerontology, 24(5), 419-438.
  35. https://searchengineland.com/survey-alexa-frequently-used-assistant-cortana-seen-accurate-269052
  36. Martin, K. (2019). Ethical implications and accountability of algorithms. Journal of business ethics, 160, 835-850.