Speech Recognition

Overview of innovations

Key events in the history of speech recognition, in roughly chronological order:

  • Bell Labs' Auditory Model (1950s): Early work on speech recognition started with Bell Laboratories' auditory model, which aimed to simulate human hearing and speech recognition processes.
  • IBM Shoebox (1961): IBM's Shoebox was a pioneering device that could recognize and respond to spoken digits and simple arithmetic commands. It was one of the earliest digital speech recognition systems.
  • DARPA Speech Understanding Research (1970s): DARPA's research program in the 1970s resulted in systems capable of recognizing a thousand words.
  • Dynamic Time Warping (1970s): DTW aligns two utterances by non-linearly warping their time axes, allowing recognizers to cope with differences in speaking speed (a minimal sketch follows this list).
  • Hidden Markov Models (1970s): HMMs revolutionized speech recognition by providing a statistical framework for modeling speech patterns, leading to substantial improvements in accuracy.
  • Carnegie Mellon's Harpy System (1976): Developed within DARPA's Speech Understanding Research program, Harpy recognized connected speech over a vocabulary of just over 1,000 words and demonstrated the feasibility of continuous speech recognition.
  • Dragon Dictate (1990): Dragon Systems introduced Dragon Dictate, one of the first commercially successful speech recognition software packages for personal computers, making speech recognition more accessible. Its 1997 successor, Dragon NaturallySpeaking, was among the first commercially successful packages to handle continuous speech.
  • Large Vocabulary Continuous Speech Recognition (1990s): LVCSR systems extended recognition to continuous speech over vocabularies of tens of thousands of words.
  • Introduction of Voice Assistants (2010s): Voice assistants such as Apple's Siri, Amazon's Alexa, and Google Assistant brought automatic speech recognition (ASR) into daily life for many people.
  • Deep Learning Revolution (2010s): Deep learning techniques, especially deep neural networks (DNNs), substantially improved speech recognition accuracy.
  • Development of End-to-End Models (2010s): End-to-end models, which map audio directly to text with a single ASR system, simplified the recognition pipeline.
  • Multimodal Speech Recognition (2010s): Speech recognition was combined with other input modalities, supporting "interaction with the virtual and physical environment through natural modes of communication".
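
To make the DTW idea above concrete, here is a minimal sketch in Python of the dynamic-programming alignment cost between two one-dimensional sequences. It is an illustration only, not code from any historical system; recognizers of that era aligned frames of multidimensional acoustic feature vectors (e.g. cepstral coefficients) rather than raw scalars, and the sequence values below are made up for the example.

<syntaxhighlight lang="python">
import numpy as np

def dtw_distance(x, y):
    """Minimal dynamic time warping cost between two 1-D sequences.

    Sketch only: a real ASR front end would compare frames of
    acoustic feature vectors, not single numbers.
    """
    n, m = len(x), len(y)
    # cost[i, j] = cheapest alignment of x[:i] with y[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # stretch x
                                 cost[i, j - 1],      # stretch y
                                 cost[i - 1, j - 1])  # match step
    return cost[n, m]

# Two renditions of the "same" pattern at different speaking rates.
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))  # 0.0: low cost despite different lengths
</syntaxhighlight>

Because the warping path may stretch or compress either time axis, the slow and fast renditions receive a low alignment cost even though they differ in length, which is exactly why DTW was useful for matching word templates spoken at varying speeds.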