Development of End-to-End Models

== Historical Context ==

In 2006, Alex Graves and his colleagues introduced '''Connectionist Temporal Classification''' (CTC)<ref name=":0">Graves, Alex, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. “Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks.” In Proceedings of the 23rd International Conference on Machine Learning (ICML). https://www.cs.toronto.edu/~graves/icml_2006.pdf</ref>, which is considered a precursor to end-to-end models. The CTC loss function makes it possible to train deep neural networks end-to-end for tasks such as ASR: the previously unavoidable step of segmenting the audio into chunks representing words or phones becomes redundant.<ref name=":0" />
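To make this concrete, below is a minimal sketch of CTC training using PyTorch's <code>torch.nn.CTCLoss</code>. It is illustrative rather than the setup of the cited paper: the toy acoustic model, dimensions, and random data are placeholder assumptions. The point to notice is that the loss compares unsegmented frame-level network outputs to an unsegmented transcript, so no frame-to-label alignment has to be supplied.

<syntaxhighlight lang="python">
# Minimal sketch of CTC training (illustrative; model and data are toy placeholders).
import torch
import torch.nn as nn

num_classes = 29   # e.g. 26 letters + space + apostrophe + CTC blank (index 0)
feat_dim = 80      # e.g. 80-dim log-mel filterbank features

class ToyAcousticModel(nn.Module):
    """A toy acoustic model: a bidirectional LSTM over feature frames,
    emitting per-frame log-probabilities over the label alphabet."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, 128, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(256, num_classes)

    def forward(self, x):                    # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)
        return self.proj(h).log_softmax(-1)  # (batch, time, num_classes)

model = ToyAcousticModel()
ctc_loss = nn.CTCLoss(blank=0)

# Fake batch: 4 utterances of 100 frames each, transcripts of 20 labels,
# with no information about which frames belong to which label.
feats = torch.randn(4, 100, feat_dim)
targets = torch.randint(1, num_classes, (4, 20))   # label ids, excluding blank
input_lengths = torch.full((4,), 100, dtype=torch.long)
target_lengths = torch.full((4,), 20, dtype=torch.long)

log_probs = model(feats).transpose(0, 1)   # CTCLoss expects (time, batch, classes)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow through the whole network: end-to-end training
</syntaxhighlight>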
From the late 2000s to the 2010s, [[Deep Learning Revolution|deep learning]] brought remarkable improvements to many research areas, and speech recognition developed rapidly in this boom. In 2011, Dong Yu, Li Deng, and colleagues at Microsoft Research proposed a [[Hidden Markov Models|hidden Markov model]] combined with a context-dependent deep neural network, named the context-dependent DNN-HMM (CD-DNN-[[Hidden Markov Models|HMM]])<ref>Dahl, G. E.; Yu, D.; Deng, L.; Acero, A. Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition. IEEE Trans. Audio Speech Lang. Process. 2011, 20, 30–42. https://ieeexplore.ieee.org/document/5740583</ref>. It achieved significant performance gains over the traditional HMM-GMM system on [[Large Vocabulary Continuous Speech Recognition|large vocabulary continuous speech recognition]] tasks, and deep-learning-based [[Large Vocabulary Continuous Speech Recognition|LVCSR]] has been widely studied since then.

[[Large Vocabulary Continuous Speech Recognition|LVCSR]] systems can be divided into two categories: [[Hidden Markov Models|HMM]]-based models and end-to-end models<ref>Bäckström, Tom; Räsänen, Okko; Zewoudie, Abraham; Pérez Zarazaga, Pablo; Koivusalo, Liisa; Das, Sneha; Gómez Mellado, Esteban; Bouafif Mansali, Mariem; Ramos, Daniel; Kadiri, Sudarsana; Alku, Paavo. ''Introduction to Speech Processing'', 2nd Edition, 2022. https://speechprocessingbook.aalto.fi/</ref>. In an [[Hidden Markov Models|HMM]]-based model, different modules use different technologies and play different roles: the [[Hidden Markov Models|HMM]] is mainly used for frame-level alignment, in the spirit of [[Dynamic Time Warping|dynamic time warping]], while a GMM or DNN is used to calculate the emission probabilities of the [[Hidden Markov Models|HMM]]'s hidden states.<ref>Miao, Y.; Gowayyed, M.; Metze, F. EESEN: End-to-End Speech Recognition Using Deep RNN Models and WFST-Based Decoding. In Proceedings of the 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, 13–17 December 2015; pp. 167–174. https://arxiv.org/abs/1507.08240</ref>

The way the [[Hidden Markov Models|HMM]]-based model is constructed and operated means that it faces the following difficulties in practical use. First, the training process is complex and difficult to optimize globally. [[Hidden Markov Models|HMM]]-based models often use different training methods and data sets to train different modules, and each module is optimized independently with its own objective function, which generally differs from the true [[Large Vocabulary Continuous Speech Recognition|LVCSR]] performance evaluation criteria. The optimality of each module therefore does not guarantee global optimality<ref>Zhang, Y.; Pezeshki, M.; Brakel, P.; Zhang, S.; Laurent, C.; Bengio, Y.; Courville, A. Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks. arXiv 2017. https://arxiv.org/abs/1701.02720</ref><ref>Graves, A.; Jaitly, N. Towards End-to-End Speech Recognition with Recurrent Neural Networks. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1764–1772. https://proceedings.mlr.press/v32/graves14.html</ref>. Second, the model relies on conditional independence assumptions. To simplify its construction and training, the [[Hidden Markov Models|HMM]]-based model assumes conditional independence within the [[Hidden Markov Models|HMM]] and between different modules, which does not match the actual behavior of speech in [[Large Vocabulary Continuous Speech Recognition|LVCSR]].

Due to these shortcomings of the [[Hidden Markov Models|HMM]]-based model, and encouraged by the progress of deep learning technology, researchers began exploring end-to-end models as an alternative to traditional systems. In 2014, Baidu's '''Deep Speech''', developed by a team led by Andrew Ng, demonstrated the potential of end-to-end models for [[Large Vocabulary Continuous Speech Recognition|LVCSR]]. The Deep Speech system used deep neural networks (DNNs) to map audio to text directly.<ref>Hannun, Awni, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, et al. “Deep Speech: Scaling up End-to-End Speech Recognition.” arXiv, December 19, 2014. https://arxiv.org/abs/1412.5567</ref> In 2015, a breakthrough came with the introduction of the '''Listen, Attend and Spell''' (LAS) model by Chan and his colleagues from Google Brain and Carnegie Mellon University. LAS used an attention mechanism to improve sequence-to-sequence mapping for automatic speech recognition (ASR) tasks.<ref>Chan, William, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. “Listen, Attend and Spell.” arXiv, August 19, 2015. https://arxiv.org/abs/1508.01211</ref> Attention mechanisms, originally developed for machine translation, have since become a critical component of end-to-end ASR models. In 2017, Vaswani and his colleagues introduced the '''Transformer'''<ref>Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. “Attention Is All You Need.” arXiv, June 12, 2017. https://arxiv.org/abs/1706.03762</ref>, which has revolutionized various natural language processing tasks, speech recognition included. Models like the Conformer and ESPnet's Transformer ASR have achieved state-of-the-art results on ASR tasks, and end-to-end ASR models have since been adopted by major technology companies, including Google, Amazon, and Microsoft, for their voice assistants and transcription services.
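The attention computation that LAS-style models and the Transformer build on can likewise be sketched in a few lines of PyTorch. The version below is the scaled dot-product form introduced with the Transformer (LAS itself used a closely related content-based variant), and the shapes and random inputs are illustrative assumptions. Each decoding step produces a soft alignment over all encoder frames, replacing the hard frame-level alignment of the HMM.

<syntaxhighlight lang="python">
# Minimal sketch of scaled dot-product attention (illustrative shapes and data).
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q: (batch, t_q, d); k, v: (batch, t_k, d)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, t_q, t_k)
    weights = scores.softmax(-1)   # soft alignment: each query attends over all keys
    return weights @ v, weights    # context vectors and alignment weights

# Example: one decoder ("speller") state attending over 50 encoded speech frames
# produced by the encoder ("listener").
encoder_out = torch.randn(1, 50, 64)
decoder_state = torch.randn(1, 1, 64)
context, align = scaled_dot_product_attention(decoder_state, encoder_out, encoder_out)
print(context.shape, align.shape)  # torch.Size([1, 1, 64]) torch.Size([1, 1, 50])
</syntaxhighlight>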