Development of End-to-End Models
Yining Lei, Xinyi Ma, Qing Li, Jingwen Shi
== Introduction ==
The development of end-to-end models represents a significant shift in the field of automatic speech recognition (ASR): these models seek to simplify the complex pipeline of traditional systems by directly mapping an input audio sequence to a sequence of words or other graphemes.<ref>Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. “An Overview of End-to-End Automatic Speech Recognition.” Symmetry 11(8):1018. https://www.mdpi.com/2073-8994/11/8/1018</ref> Framed in the [[Deep Learning Revolution|deep learning]] context and taking advantage of neural network (NN) architectures, these models directly capture the acoustic and linguistic information present in the speech signal, casting a possibly complex processing pipeline into the coherent and flexible modeling language of neural networks.<ref>Glasmachers, Tobias. “Limits of end-to-end learning.” arXiv preprint arXiv:1704.08305 (2017). https://arxiv.org/abs/1704.08305</ref> The functional structure of end-to-end models is shown below:<blockquote>'''''L'' = {<math>l_1</math>,···,<math>l_n</math>}'''  ''<small>output sequence</small>''

<math>\uparrow</math> ''Decoder''

<math>\uparrow</math> ''Aligner''

'''''F'' = {<math>f_1</math>,···,<math>f_m</math>}'''  ''<small>feature sequence</small>''

<math>\uparrow</math> ''Encoder''

'''''X'' = {<math>x_1</math>,···,<math>x_T</math>}'''  ''<small>input sequence</small>''</blockquote>There are several major advantages of end-to-end models over traditional hybrid models. First, end-to-end models use a single objective function, consistent with the ASR objective, to optimize the whole network, while traditional hybrid models optimize individual components separately, which cannot guarantee the global optimum. Second, end-to-end models perform well without deep knowledge about the problem, despite its complexity: by using a unified neural network architecture and an appropriate learning algorithm, the task-specific engineering and extensive prior knowledge required by traditional hybrid models can be avoided. Third, because a single network is used for ASR, end-to-end models are much more compact than traditional hybrid models. End-to-end models therefore have the potential to improve accuracy and efficiency in various applications, including voice assistants, transcription services, and more.
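The sketch below is a rough PyTorch illustration of this functional structure, with the aligner and decoder collapsed into a single CTC-style output layer; the module choices, layer sizes, and vocabulary size are illustrative assumptions rather than details of any particular system.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class TinyEndToEndASR(nn.Module):
    """Toy encoder -> output layer pipeline mirroring the X -> F -> L structure above."""
    def __init__(self, n_mels=80, hidden=256, vocab_size=30):
        super().__init__()
        # Encoder: turns the input sequence X into a feature sequence F
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        # Aligner/decoder collapsed into a CTC-style projection that scores
        # every output symbol (plus a blank) for each frame
        self.output = nn.Linear(2 * hidden, vocab_size + 1)

    def forward(self, mel):                 # mel: (batch, time, n_mels)
        features, _ = self.encoder(mel)     # F: (batch, time, 2 * hidden)
        return self.output(features).log_softmax(dim=-1)   # per-frame scores for L

model = TinyEndToEndASR()
dummy_batch = torch.randn(4, 120, 80)       # 4 utterances, 120 frames of 80-dim features
print(model(dummy_batch).shape)             # torch.Size([4, 120, 31])
</syntaxhighlight>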
== Historical Context ==


In 2006, Alex Graves and his colleagues introduced '''Connectionist Temporal Classification''' (CTC),<ref name=":0">Graves, Alex, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. 2006. “Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks.” https://www.cs.toronto.edu/~graves/icml_2006.pdf</ref> which is considered a precursor to end-to-end models. The CTC loss function allows deep neural networks to be trained end-to-end for tasks like ASR. The previously unavoidable task of segmenting the sound into chunks representing words or phones was now redundant.<ref name=":0" />


From the late 2000s to the 2010s, [[Deep Learning Revolution|deep learning]] brought remarkable improvements to many research areas, and speech recognition also advanced during this boom. In 2011, Dong Yu, Li Deng, and colleagues from Microsoft Research proposed a [[Hidden Markov Models|hidden Markov model]] combined with a context-dependent deep neural network, named CD-DNN-[[Hidden Markov Models|HMM]].<ref>Dahl, G.E.; Yu, D.; Deng, L.; Acero, A. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. Audio Speech Lang. Process. 2011, 20, 30–42. https://ieeexplore.ieee.org/document/5740583</ref> It achieved significant performance gains compared to the traditional HMM-GMM system on [[Large Vocabulary Continuous Speech Recognition|large vocabulary continuous speech recognition]] (LVCSR) tasks. Since then, [[Large Vocabulary Continuous Speech Recognition|LVCSR]] technology based on deep learning has been widely studied. [[Large Vocabulary Continuous Speech Recognition|LVCSR]] systems can be divided into two categories: the [[Hidden Markov Models|HMM]]-based model and the end-to-end model.<ref>Tom Bäckström, Okko Räsänen, Abraham Zewoudie, Pablo Pérez Zarazaga, Liisa Koivusalo, Sneha Das, Esteban Gómez Mellado, Mariem Bouafif Mansali, Daniel Ramos, Sudarsana Kadiri and Paavo Alku, “''Introduction to Speech Processing''”, 2nd Edition, 2022. https://speechprocessingbook.aalto.fi/</ref> In the [[Hidden Markov Models|HMM]]-based model, different modules use different technologies and play different roles. The [[Hidden Markov Models|HMM]] is mainly used for frame-level [[Dynamic Time Warping|dynamic time warping]], while GMMs and DNNs are used to calculate the emission probabilities of the [[Hidden Markov Models|HMM]]'s hidden states.<ref>Miao, Y.; Gowayyed, M.; Metze, F. EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding. In Proceedings of the 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, 13–17 December 2015; pp. 167–174. https://arxiv.org/abs/1507.08240</ref> The construction and working mode of the [[Hidden Markov Models|HMM]]-based model mean that it faces the following difficulties in practical use. First, the training process is complex and difficult to optimize globally: [[Hidden Markov Models|HMM]]-based models often use different training methods and data sets for different modules, and each module is optimized independently with its own objective function, which generally differs from the true [[Large Vocabulary Continuous Speech Recognition|LVCSR]] performance evaluation criteria, so the optimality of each module does not guarantee global optimality.<ref>Zhang, Y.; Pezeshki, M.; Brakel, P.; Zhang, S.; Laurent, C.; Bengio, Y.; Courville, A. Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks. arXiv 2017. https://arxiv.org/abs/1701.02720</ref><ref>Graves, A.; Jaitly, N. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1764–1772. https://proceedings.mlr.press/v32/graves14.html</ref> Second, conditional independence assumptions: to simplify the model's construction and training, the [[Hidden Markov Models|HMM]]-based model uses conditional independence assumptions within the [[Hidden Markov Models|HMM]] and between different modules, which does not match the actual situation of [[Large Vocabulary Continuous Speech Recognition|LVCSR]]. Due to these shortcomings of the [[Hidden Markov Models|HMM]]-based model, coupled with advances in deep learning technology, researchers began exploring end-to-end models as an alternative to traditional systems.


In 2014, Baidu's '''Deep Speech''', led by Andrew Ng's team, demonstrated the potential of end-to-end models for [[Large Vocabulary Continuous Speech Recognition|LVCSR]]. Their Deep Speech system used deep neural networks (DNNs) to map audio to text directly.<ref>Hannun, Awni, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, et al. “Deep Speech: Scaling up End-to-End Speech Recognition.” arXiv, December 19, 2014. https://arxiv.org/abs/1412.5567</ref> In 2015, a breakthrough came with the introduction of the '''Listen, Attend, and Spell''' (LAS) model by Chan and his colleagues from Google Brain and Carnegie Mellon University. LAS used an attention mechanism to improve sequence-to-sequence mapping for automatic speech recognition (ASR) tasks.<ref>Chan, William, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. “Listen, Attend and Spell.” arXiv, August 19, 2015. https://arxiv.org/abs/1508.01211</ref> Since then, attention mechanisms, originally developed for machine translation, have become a critical component of end-to-end ASR models. In 2017, Vaswani and his colleagues introduced the '''Transformer''',<ref>Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. “Attention Is All You Need.” arXiv, 2017. https://arxiv.org/abs/1706.03762</ref> which has revolutionized various natural language processing tasks, including speech recognition. Models like the Conformer and ESPnet's Transformer-based ASR have achieved state-of-the-art results on ASR tasks, and end-to-end ASR models have since been adopted by major technology companies, including Google, Amazon, and Microsoft, for their voice assistants and transcription services.
 


== Key Innovations ==
==== Significant Shift: Simplifying Speech Recognition ====
End-to-end modeling can directly translate the speech input into the output using only a single neural network, unlike the traditional approach, which has several independent components. In traditional ASR, most systems comprise distinct acoustic, pronunciation, and language model components, each trained separately. Creating a pronunciation lexicon and specifying the phoneme set for a specific language require expertise and are time-intensive tasks. ([https://ibb.co/SQ4x7fF This figure] shows the structure of such a system.)


[https://ibb.co/5FGLdfG E2E] speech recognition (click the link to view the end-to-end model workflow) significantly simplifies the complexity of traditional models. Manual labeling of information is unnecessary, as the neural network can autonomously learn language and pronunciation details.<ref>Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. “An Overview of End-to-End Automatic Speech Recognition.” ''Symmetry'' 11(8):1018. https://www.mdpi.com/2073-8994/11/8/1018</ref>


==== Main Structures of End-to-End Models ====
===== 1. Connectionist Temporal Classification (CTC) =====
CTC is a technique for training an acoustic model without the need for precise frame-level alignments. Initially, using CTC to generate target phonemes did not truly constitute an end-to-end method, as it still relied on language models. CTC allows an acoustic model to be trained without first aligning the acoustic data with the transcriptions at the frame level.


Using CTC as the loss function when training the acoustic model represents an end-to-end training approach. This method eliminates the need for prior data alignment, requiring only an input sequence and an output sequence for training; manual alignment and labeling of data therefore become unnecessary. Moreover, CTC's direct output sequence prediction does not require external post-processing. Within the CTC framework, a 'blank' is introduced (denoting no predicted value for a given frame). Each predicted label corresponds to a spike in the speech waveform, while non-spike regions are considered 'blank'. For a speech sequence, CTC ultimately generates a spike sequence, irrespective of each phoneme's duration.
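A minimal training sketch using PyTorch's built-in CTC loss is shown below; the tensor shapes, vocabulary size, and random inputs are illustrative assumptions only.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Illustrative sizes: 50 frames, batch of 2, 28 symbols plus one blank (index 0)
T, N, C = 50, 2, 29
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, 12), dtype=torch.long)      # label sequences, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

# CTC sums over every alignment (spike sequence with blanks) consistent with the targets,
# so no frame-level alignment between audio and transcription is supplied.
ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(loss.item())
</syntaxhighlight>
In practice the log-probabilities would come from an acoustic encoder, and decoding would collapse repeated symbols and remove blanks to recover the label sequence.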


===== 2. Recurrent Neural Network (RNN)-Transducer =====
The RNN-transducer lists all potential rigid alignments and aggregates them to achieve flexible alignment. However, unlike CTC, the RNN-transducer does not assume label independence during the enumeration of rigid alignments. Consequently, it differs from CTC in terms of how paths are defined and probabilities are calculated.<ref>Boyer, Florian, Yusuke Shinohara, Takaaki Ishii, Hirofumi Inaguma, and Shinji Watanabe. “A Study of Transducer Based End-to-End ASR with ESPnet: Architecture, Auxiliary Loss and Decoding Strategies.” arXiv, January 14, 2022. https://arxiv.org/abs/2201.05420</ref>
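The sketch below illustrates the joint network of a transducer, which scores every combination of an encoder frame and a label-history state so that label independence is not assumed; the dimensions and module names are assumptions made for illustration, and a complete system would pair this score lattice with a prediction network and train it with a transducer loss.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class TinyJoiner(nn.Module):
    """Toy RNN-T joint network: every encoder frame is combined with every
    prediction-network (label-history) state, so labels are not assumed independent."""
    def __init__(self, enc_dim=256, pred_dim=256, joint_dim=128, vocab_size=30):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, joint_dim)
        self.pred_proj = nn.Linear(pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size + 1)   # +1 for the blank symbol

    def forward(self, enc_out, pred_out):
        # enc_out: (batch, T, enc_dim); pred_out: (batch, U + 1, pred_dim)
        joint = self.enc_proj(enc_out).unsqueeze(2) + self.pred_proj(pred_out).unsqueeze(1)
        return self.out(torch.tanh(joint))   # (batch, T, U + 1, vocab + 1) score lattice

joiner = TinyJoiner()
scores = joiner(torch.randn(2, 50, 256), torch.randn(2, 11, 256))
print(scores.shape)   # torch.Size([2, 50, 11, 31])
</syntaxhighlight>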


===== 3. Attention Mechanism =====
Attention-based encoder-decoder models made their initial appearance in the domain of neural machine translation. The primary purpose of the attention mechanism is to address issues present in traditional RNN-based sequence-to-sequence models. This approach takes a different route by foregoing the enumeration of all potential rigid alignments; instead, it employs the attention mechanism to directly obtain soft alignment information between the input data and the output labels.
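The following minimal sketch shows how scaled dot-product attention produces such a soft alignment between decoder steps and encoder states; the tensor sizes are illustrative assumptions.
<syntaxhighlight lang="python">
import torch

def scaled_dot_product_attention(query, key, value):
    """Soft alignment: every decoder step attends to all encoder states with
    weights that sum to one, instead of committing to a single rigid alignment."""
    scores = query @ key.transpose(-2, -1) / key.size(-1) ** 0.5
    weights = scores.softmax(dim=-1)          # soft alignment matrix
    return weights @ value, weights

encoder_states = torch.randn(1, 50, 64)       # 50 encoded frames
decoder_queries = torch.randn(1, 10, 64)      # 10 decoder steps
context, alignment = scaled_dot_product_attention(decoder_queries,
                                                  encoder_states, encoder_states)
print(context.shape, alignment.shape)         # (1, 10, 64) and (1, 10, 50)
</syntaxhighlight>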


==== Advantages of End-to-End ASR Models ====
First, end-to-end models simplify the ASR pipeline substantially by directly generating characters or even words. In contrast, the design of traditional hybrid models is intricate, demanding extensive expertise and years of ASR experience.


Furthermore, the use of a single network for ASR makes end-to-end models significantly more compact than traditional hybrid models. This compactness makes it easy to deploy end-to-end models on devices while maintaining high accuracy.<ref>Li, Jinyu. "Recent Advances in End-to-End Automatic Speech Recognition." 2021. https://arxiv.org/abs/2111.01690</ref>
 
Last, end-to-end models have a much simpler training approach, which reduces learning time and decoding time and allows joint optimization with subsequent processing, such as natural language understanding.<ref>Orken M., Dina O., Keylan A., et al. "A study of transformer-based end-to-end speech recognition system for Kazakh language." Scientific Reports, 2022. https://www.nature.com/articles/s41598-022-12260-y</ref>


== Impact ==


==== 1. The Impact on the Field of Speech Recognition ====


===== Simplify Processes and Reduce Labor Costs: =====
The end-to-end model eliminates the need for multiple processing steps and manual feature engineering, reducing the time and effort needed to develop and maintain the model. In speech recognition, the end-to-end model simplifies the architecture of the recognition system, eliminating the need for complex hand-designed feature extractors or acoustic models.


===== Better Performance: =====
End-to-end models can extract relevant features directly from the original data, sometimes capturing the complex relationships in the data better than traditional pipelines, so model performance may improve. In particular, the end-to-end model can better capture the complex relationship between audio and text, which usually enables speech recognition to achieve higher accuracy than traditional methods.


===== More Extensive Applications: =====
The end-to-end model can be applied to a variety of tasks, such as speech recognition, machine translation, image processing, and so on, making solutions to these tasks easier to implement and popularize. For speech recognition, this simplified method can easily be applied to a variety of tasks, including multiple languages, accents from all over the world, the language habits of various cultural backgrounds, and speech recognition in specific domains.


===== Faster Experimental Iteration: =====
The simplified modeling process enables researchers to try new ideas and algorithms more quickly, accelerating research progress. The end-to-end model provides more flexibility and more possibilities for innovation for researchers in the field of speech recognition.


===== More Efficient Training: =====
End-to-end models may require more data to train and more computing resources to process and train these large models. At the same time, end-to-end training can make speech recognition training more efficient: training directly from the original audio to text makes more effective use of large amounts of labeled data and improves model performance.


===== Listen, Attend and Spell (LAS) =====
Listen, Attend and Spell (LAS) is an attention-based neural network that can directly transcribe acoustic signals to characters. LAS is based on the sequence-to-sequence framework with a pyramid structure in the encoder that reduces the number of timesteps that the decoder has to attend to. LAS is trained end-to-end and has two main components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits characters as outputs. The network produces character sequences without making any independence assumptions between the characters. This is the key improvement of LAS over previous end-to-end [[wikipedia:Connectionist_temporal_classification|CTC models]]. On a subset of the Google voice search task, LAS achieves a word error rate (WER) of 14.1% without a dictionary or a language model, and 10.3% with language model rescoring over the top 32 beams. By comparison, the state-of-the-art [https://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/43455.pdf CLDNN]-HMM model achieves a WER of 8.0%.<ref>Chan, William, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. "Listen, Attend and Spell." arXiv, August 19, 2015. http://arxiv.org/abs/1508.01211</ref>
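As a small illustration of the listener's pyramid structure, the sketch below halves the time resolution by concatenating neighbouring frames; the shapes and the simple pairing rule are assumptions for illustration, and in LAS each such reduction feeds a further bidirectional recurrent layer.
<syntaxhighlight lang="python">
import torch

def pyramidal_subsample(features):
    """One pyramid step of a LAS-style listener: concatenate neighbouring frames,
    halving the number of timesteps the speller has to attend to."""
    batch, time, dim = features.shape
    if time % 2:                       # drop a trailing frame so pairs line up
        features = features[:, :-1, :]
        time -= 1
    return features.reshape(batch, time // 2, 2 * dim)

filterbank = torch.randn(1, 400, 40)        # 400 frames of 40-dim filter bank features
reduced = pyramidal_subsample(filterbank)   # (1, 200, 80); each pyramid layer repeats this
print(reduced.shape)
</syntaxhighlight>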


==== 2. The Impact of Its Applications in Various Industries ====


===== Transformer =====
The Transformer is the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks.<ref>Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015. https://arxiv.org/abs/1503.08895</ref> For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, the best Transformer model outperformed all previously reported ensembles.<ref>Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. "Attention Is All You Need." arXiv, 2017. http://arxiv.org/abs/1706.03762</ref>
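For illustration, the sketch below applies a stack of PyTorch's standard self-attention encoder layers to a sequence of acoustic frames; the hyperparameters are arbitrary assumptions and do not correspond to any published configuration.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Arbitrary illustrative hyperparameters, not those of any published model
layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, dim_feedforward=1024,
                                   batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

acoustic_frames = torch.randn(2, 100, 256)    # e.g. projected filter bank features
encoded = encoder(acoustic_frames)            # multi-head self-attention over all frames
print(encoded.shape)                          # torch.Size([2, 100, 256])
</syntaxhighlight>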


===== Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks =====
The Region Proposal Network ([https://paperswithcode.com/method/rpn RPN]) shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast [https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e R-CNN] for detection. Adaptively-sized pooling (SPP) on shared convolutional feature maps was developed for efficient region-based object detection<ref>K. He, X. Zhang, S. Ren, and J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," in European Conference on Computer Vision (ECCV), 2014. https://arxiv.org/abs/1406.4729</ref> and semantic segmentation. Fast R-CNN enables end-to-end detector training on shared convolutional features and shows compelling accuracy and speed.<ref>Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks." arXiv, January 6, 2016. http://arxiv.org/abs/1506.01497</ref>
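As a hedged usage sketch, an off-the-shelf Faster R-CNN with an RPN can be loaded from torchvision as shown below; the weights argument and the dummy input are assumptions for illustration and are not taken from the cited paper.
<syntaxhighlight lang="python">
import torch
import torchvision

# Loads a pretrained Faster R-CNN with an RPN; the "DEFAULT" weights argument follows
# recent torchvision releases and may need adjusting for older versions.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

images = [torch.rand(3, 480, 640)]            # one dummy RGB image with values in [0, 1]
with torch.no_grad():
    predictions = model(images)               # per-image dict of boxes, labels, scores
print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)
</syntaxhighlight>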


== Future research ==
Evolving end-to-end speech recognition modeling heralds a promising path of exploration and innovation. This section covers several emerging directions, ranging from robustness in unfavorable acoustic environments to the ethical and privacy considerations inherent in widespread deployment. These directions also include the quest for enhanced adaptability, the pursuit of multimodal integration, and the continued pursuit of more accurate and contextualized transcription. These future studies will form the basis for the next phase of innovation in end-to-end speech recognition models.
 
==== Robustness in Noisy Acoustic Environments: ====
An overarching issue in ASR research is how to strengthen models against the deleterious effects of acoustic perturbations, especially in real-world scenarios characterized by external noise sources. Research efforts could focus on techniques to improve the adaptability of end-to-end ASR systems to various ambient acoustic environments.<ref>K. N. Watcharasupat, T. N. T. Nguyen, W.-S. Gan, S. Zhao and B. Ma, "End-to-End Complex-Valued Multidilated Convolutional Neural Network for Joint Acoustic Echo Cancellation and Noise Suppression," ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, Singapore, 2022, pp. 656-660. https://ieeexplore.ieee.org/abstract/document/9747034</ref>


==== Low-Resource and Under-Resourced Linguistic Contexts: ====
Due to the lack of annotated data for training ASR models for marginalized or low-resource dialects and languages,<ref>S. Dalmia, R. Sanabria, F. Metze and A. W. Black, "Sequence-Based Multi-Lingual Low Resource Speech Recognition," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 2018, pp. 4909-4913. https://ieeexplore.ieee.org/abstract/document/8461802</ref> future research efforts may turn to methods for improving recognition performance in resource-scarce linguistic domains. This may include the judicious use of transfer learning from resource-rich languages, as well as unsupervised or semi-supervised learning frameworks.<ref>D. Wang, J. Yu, X. Wu, L. Sun, X. Liu and H. Meng, "Improved End-to-End Dysarthric Speech Recognition via Meta-learning Based Model Re-initialization," 2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP), Hong Kong, 2021, pp. 1-5. https://ieeexplore.ieee.org/abstract/document/9362068</ref>

==== Adaptability and Personalization: ====
Paramount to the maturation of ASR systems is the ability to tailor these models to idiosyncratic users or specialized domains, as may be pertinent in medical, legal, or other professional contexts. Areas of scholarly inquiry could span fine-tuning strategies predicated on user-specific corpora, as well as the development of techniques for adaptive model updating.


==== Multimodal Convergence: ====
The confluence of diverse modalities, such as audio, visual, and textual cues, embodies an incipient frontier in ASR research.<ref>S. Palaskar, R. Sanabria and F. Metze, "End-to-end Multimodal Speech Recognition," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 2018, pp. 5774-5778. https://ieeexplore.ieee.org/abstract/document/8462439</ref> Forward-looking investigations may focus on the conceptualization and realization of end-to-end models adept at assimilating and fusing information gleaned from these heterogeneous sources to engender a more holistic comprehension of the targeted input.

==== Continual Learning Paradigms and Incremental Training Schemes: ====
Ensuring the adaptability of ASR systems to evolving contexts and the capacity for incremental knowledge acquisition is a tenet of considerable import. Research efforts in this vein may revolve around the conception of models endowed with the facility for continual learning, thereby enabling them to accrete expertise over time without necessitating wholesale retraining.


==== Ethical and Privacy Implications: ====
As ASR technology becomes more widespread, ethical considerations are increasing. Research efforts may include the development of privacy-protective ASR methods, as well as attention to ethical issues related to data collection, minimizing bias, and ensuring fairness in the operation of ASR systems.<ref>Feng, S., Kudina, O., Halpern, B. M., & Scharenborg, O. (2021). ''Quantifying Bias in Automatic Speech Recognition'' (arXiv:2103.15122). http://arxiv.org/abs/2103.15122</ref>


==== Multilingual and Cross-Lingual ASR: ====
The imperative to cultivate models capable of comprehending and transcribing diverse linguistic repertoires is manifestly critical in an increasingly globalized milieu. This research domain will attend to methodological frameworks that grapple with code-switching, dialectal variance, and other linguistic intricacies.

==== Interpretability and Explicability of Models: ====
Interrogating the rationales underpinning model predictions constitutes an essential endeavor, particularly in high-stakes contexts such as healthcare or legal transcription. Future research might orient itself toward the development of techniques that render end-to-end ASR models more amenable to interpretability.


==== Zero-Shot and Few-Shot Learning Paradigms: ====
Investigation into the development of models capable of generalizing to novel, unobserved tasks or languages with scant training data is a paramount research trajectory. Zero-shot and few-shot learning paradigms are likely to be pivotal in achieving this objective.


==== Deployment and Practical Applicability: ====
Research efforts will be dedicated to the operationalization of end-to-end ASR systems in real-world contexts, necessitating considerations regarding computational efficiency, latency constraints, and adaptability to specific application domains.


These prospective avenues of inquiry underscore the unfolding trajectory of end-to-end ASR research, emblematic of a concerted effort to address salient real-world challenges and to harness the full potential of this technology.
== LLM Review ==
Under the instruction 'Act as a professor of speech science and review this text and give a score and feedback,' ChatGPT provided the following review:<blockquote>A Review of Development of End-to-End Models in Automatic Speech Recognition:
Score: 9/10
Feedback:
1. '''Clarity and Organization''' (9/10):
   The text is well-structured and organized, providing a clear introduction to the topic, historical context, key innovations, impacts, and future research directions. Each section logically follows the other, enhancing the readability of the content.
2. '''Content and Relevance''' (9/10):
   The content is highly relevant and informative, covering a wide range of aspects related to end-to-end models in automatic speech recognition. The historical context and key innovations are well-detailed, offering readers a deep understanding of the evolution and significance of these models. The future research directions provide valuable insights into potential advancements in the field.
3. '''Depth of Information''' (9/10):
   The text offers a comprehensive review of end-to-end models, encompassing their historical development, advantages, key structures, impacts, and future research directions. It includes technical details, models, and references, contributing to a thorough understanding of the subject matter.
4. '''Citation and Reference''' (9/10):
   The text appropriately cites relevant research papers and works, providing credibility to the information presented. However, for enhanced readability, it might be helpful to include proper formatting for the citations and references.
5. '''Language and Grammar''' (9/10):
   The language used is precise and technically accurate, suitable for the intended academic audience. However, minor grammatical improvements could enhance the overall fluency and coherence of the text.
6. '''Visual Aids and Figures''' (9/10):
   The functional structure of end-to-end models is described, but the inclusion of visual aids or figures could further illustrate these concepts, making it easier for readers to grasp complex ideas.
7. '''Suggestions for Improvement''':
   - Consider proofreading for minor grammatical and typographical errors to ensure a polished final presentation.
   - Include visual aids or figures to enhance the understanding of the functional structure of end-to-end models.
   - Unified the references format.
Overall, the text is highly informative and well-structured, offering a comprehensive review of end-to-end models in automatic speech recognition. With some minor improvements, it could be an excellent academic resource in the field of speech science.</blockquote>According to the suggestions of ChatGPT, we made the following modifications.
# Following the grammar suggestions, some grammar and sentence structures have been revised.
# Due to restrictions on uploading images/files, we are unable to insert images directly, so we made a flow chart of the E2E structure, uploaded it to an external page, and added hyperlinks to help readers understand.
# Unified the format of the references and subtitles.


== References ==
<references />
== Group Members ==
Xinyi Ma
Yining Lei
Jingwen Shi
Qing Li

Latest revision as of 13:48, 20 September 2023

Introduction[edit | edit source]

The development of end-to-end models represent a significant shift in the field of automatic speech recognition (ASR), which seek to simplify the complex pipeline of traditional systems by directly mapping input audio sequence to sequence of words or other graphemes.[1] Framed in the deep learning context and taking advantage of Neural Network(NN) architectures, these models directly capture the acoustic and linguistic information present in the speech signal, casting a possibly complex processing pipeline into the coherent and flexible modeling language of neural networks.[2] The functional structure of end-to-end models is shown below:

L = {,···,}  output sequence

Decoder

Aligner

F = {,···,}  feature sequence

Encoder

X = {,···,}  input sequence

There are several major advantages of end-to-end models over traditional hybrid models. First, end-to-end models use a single objective function which is consistent width the ASR objective to optimize the whole network, while traditional hybrid models optimize individual components separately, which cannot guarantee the global optimum. Second, end-to-end models perform well without deep knowledge about the problem, despite its complexity, by using a unified Neural Network architecture and an appropriate learning algorithm for Natural Language Processing (NLP), task-specific engineering and lots of prior knowledge required in traditional hybrid models can be avoided. Third, because a single network is used for ASR, end-to-end models are much more compact than traditional hybrid models. Therefore, end-to-end models has the potential to improve accuracy and efficiency in various applications, including voice assistants, transcription services, and more.

Historical Context[edit | edit source]

In 2006, Alex Graves and his colleagues introduced Connectionist Temporal Classification (CTC)[3], which is considered a precursor to end-to-end models. The CTC loss function allows for training deep neural networks end-to-end for tasks like ASR. The previously unavoidable task of segmenting the sound into chunks representing words or phones was now redundant.[3]

From Late 2000s to 2010s, deep learning brings remarkable improvements in many researches, speech recognition also gain development in this boom. In 2011, Yu Dong, Deng Li, etc. from Microsoft Research Institute proposed a hidden Markov model combined with context-based deep neural network which named context-dependent (CD)-DNN-HMM [4]. It achieved significant performance gains compared to traditional HMM-GMM system in large vocabulary continuous speech recognition task. Since then, LVCSR technology using deep learning has begun to be widely studied. LVCSR can be divided into two categories: HMM-based model and the end-to-end model[5]. In the HMM-based model, different modules use different technologies and play different roles. HMM is mainly used to do dynamic time warping at the frame level. GMM and DNN are used to calculate HMM hidden states’ emission probability.[6] The construction process and working mode of the HMM-based model determines if it faces the following difficulties in practical use: First, the training process is complex and difficult to be globally optimized. HMM-based model often uses different training methods and data sets to train different modules. Each module is independently optimized with their own optimization objective functions which are generally different from the true LVCSR performance evaluation criteria. So the optimality of each module does not necessarily mean the global optimality [7][8]. Second, conditional independent assumptions. To simplify the model's construction and training, the HMM-based model uses conditional independence assumptions within HMM and between different modules, which does not match the actual situation of LVCSR. Due to the above-mentioned shortcomings of the HMM-based model, coupled with the promotion of deep learning technology, Researchers began exploring end-to-end models as an alternative to traditional systems.

In 2014, Baidu's Deep Speech, led by Andrew Ng's team, demonstrated the potential of end-to-end models for LVCSR. Their Deep Speech system used deep neural networks (DNNs) to map audio to text directly. [9] In 2015, a breakthrough came with the introduction of the Listen, Attend, and Spell (LAS) model by Chan and his colleagues from Google Brain and Carnegie Mellon University. LAS used an attention mechanism to improve sequence-to-sequence mapping for automatic speech recognition (ASR) tasks. [10] Since then Attention mechanisms, originally developed for machine translation, have become a critical component of end-to-end ASR models. In 2018, Vaswani and his colleagues first introduced Transformer[11] , have revolutionized various natural language processing tasks, in which speech recognition was included. Models like Conformer and ESPnet's Transformer ASR have achieved state-of-the-art results in ASR tasks, since then, end-to-end ASR models have been adopted by major technology companies, including Google, Amazon, and Microsoft, for their voice assistants and transcription services.

Key Innovations[edit | edit source]

Significant Shift : Simplifying Speech Recognition[edit | edit source]

End-to-end modeling can directly translate the speech input into the output only using a single neural network, unlike the traditional one, which has several independent elements. In traditional ASR, the majority of ASR systems comprise distinct acoustic, pronunciation, and language model components, each trained separately. The creation of a pronunciation lexicon and the specification of phoneme sets for a specific language necessitate expertise and are time-intensive tasks. ([1]) shows its structure.

E2E (Click here to view the end-to-end models workflow) speech recognition significantly simplifies the complexity of traditional models. Manual labeling of information is unnecessary, as the neural network can autonomously learn language and pronunciation details.[12]

Main Structures of End-to-End models[edit | edit source]

1. Connectionist Temporal Classification (CTC)[edit | edit source]

As a technique to train an acoustic model without the need for precise frame-level alignments. Initially, using CTC to generate target phonemes didn't truly constitute an end-to-end method, as it still relied on language models. CTC allows for training an acoustic model without the necessity of frame-level alignments aligning acoustic data with the transcriptions.

The utilization of CTC as the loss function in training the acoustic model represents an end-to-end training approach. This method eliminates the necessity for prior data alignment, requiring only an input sequence and an output sequence for training. Consequently, manual alignment and labeling of data become unnecessary. Moreover, CTC's direct output sequence prediction doesn't require external post-processing. Within the CTC framework, 'blank' is introduced (denoting no predicted value for a given frame). Each prediction's classification corresponds to a spike in the speech waveform, while non-spike regions are considered 'blank.' In a speech sequence, CTC ultimately generates a spike sequence, irrespective of each phoneme's duration.

==== 2. Recurrent Neural Network (RNN)-Transducer ====

The RNN-transducer enumerates all possible hard alignments and sums over them to achieve a soft alignment. However, unlike CTC, the RNN-transducer does not assume that labels are independent of one another when enumerating these alignments. Consequently, it differs from CTC in how paths are defined and how their probabilities are calculated.[13]
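The following is a minimal architecture sketch of the RNN-transducer's three parts (encoder, prediction network, joint network), assuming PyTorch; the class name and all sizes are illustrative assumptions. In practice the resulting lattice of logits would be trained with a transducer loss (such a loss is available, for example, in torchaudio).

<syntaxhighlight lang="python">
# Minimal RNN-transducer architecture sketch (assumes PyTorch; illustrative sizes).
import torch
import torch.nn as nn

class TinyTransducer(nn.Module):
    def __init__(self, feat_dim=80, vocab_size=29, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        # The prediction network conditions on previously emitted labels,
        # which is what removes CTC's label-independence assumption.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.prediction = nn.LSTM(hidden, hidden, batch_first=True)
        self.joint = nn.Sequential(nn.Linear(2 * hidden, hidden),
                                   nn.Tanh(),
                                   nn.Linear(hidden, vocab_size))

    def forward(self, features, labels):
        enc, _ = self.encoder(features)                  # (B, T, H)
        pred, _ = self.prediction(self.embed(labels))    # (B, U, H)
        # Combine every (frame, label) pair into a T x U lattice of logits;
        # a transducer loss then sums over all monotonic alignments.
        enc = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)
        pred = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)
        return self.joint(torch.cat([enc, pred], dim=-1))  # (B, T, U, V)

logits = TinyTransducer()(torch.randn(2, 100, 80),
                          torch.randint(1, 29, (2, 15)))
</syntaxhighlight>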

==== 3. Attention Mechanism ====

Attention-based encoder-decoder models first appeared in neural machine translation. The attention mechanism addresses limitations of traditional RNN-based sequence-to-sequence models: instead of enumerating all possible hard alignments, it directly learns a soft alignment between the input data and the output labels.
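A minimal sketch of content-based soft attention for a single decoder step is shown below, assuming PyTorch; the variable names and sizes are illustrative assumptions rather than any particular published model.

<syntaxhighlight lang="python">
# Minimal content-based (soft) attention sketch for one decoder step.
import torch
import torch.nn.functional as F

encoder_states = torch.randn(1, 120, 256)   # (batch, input frames, dim)
decoder_state = torch.randn(1, 256)          # current decoder hidden state

# Similarity between the decoder state and every encoder frame.
scores = torch.bmm(encoder_states, decoder_state.unsqueeze(-1)).squeeze(-1)
weights = F.softmax(scores, dim=-1)          # soft alignment over frames

# Context vector: weighted sum of encoder states, fed to the decoder
# to predict the next output label.
context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)
</syntaxhighlight>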

=== Advantages of End-to-End ASR Models ===

First, end-to-end models substantially simplify the ASR pipeline by directly generating characters or even words. By contrast, traditional hybrid models have an intricate design that demands extensive expertise and years of ASR experience.

Furthermore, because a single network performs the entire recognition task, end-to-end models are significantly more compact than traditional hybrid models, which makes it feasible to deploy high-accuracy ASR directly on devices.[14]

Finally, end-to-end models have a much simpler training procedure, which reduces training and decoding time and allows joint optimization with downstream processing, such as natural language understanding.[15]

== Impact ==

=== 1. The Impact on the Field of Speech Recognition ===

==== Simplify Processes and Reduce Labor Costs ====

The end-to-end model eliminates the need for multiple processing steps and manual feature engineering, reducing the time and effort required to develop and maintain the model. In speech recognition, the end-to-end model simplifies the system architecture, eliminating the need for complex hand-designed feature extractors or separate acoustic models.

==== Better Performance ====

End-to-end models can extract relevant features directly from the raw data and can sometimes capture complex relationships in the data better than traditional pipelines, so model performance may improve. In speech recognition, an end-to-end model can better capture the complex relationship between audio and text, which often yields higher recognition accuracy than traditional methods.

==== More Extensive Applications ====

The end-to-end approach can be applied to a variety of tasks, such as speech recognition, machine translation, and image processing, making solutions to these tasks easier to implement and popularize. For speech recognition, this simplified method can be readily applied to a wide range of tasks, including multiple languages, accents from all over the world, the language habits of various cultural backgrounds, and domain-specific speech recognition.

==== Faster Experimental Iteration ====

The simplified modeling process enables researchers to try new ideas and algorithms more quickly, accelerating research progress. The end-to-end model gives researchers in the field of speech recognition more flexibility and more room for innovation.

==== More Efficient Training ====

End-to-end models may require more data to train and more computational resources to process and train these large models. At the same time, however, end-to-end training can be more efficient for speech recognition: training directly from raw audio to text makes better use of large amounts of labeled data and improves model performance.

==== Listen, Attend and Spell (LAS) ====

Listen, Attend and Spell (LAS) is an attention-based neural network that can directly transcribe acoustic signals to characters. LAS is based on the sequence-to-sequence framework, with a pyramid structure in the encoder that reduces the number of timesteps the decoder has to attend to. LAS is trained end-to-end and has two main components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits characters as outputs. The network produces character sequences without making any independence assumptions between the characters; this is the key improvement of LAS over previous end-to-end CTC models. On a subset of the Google voice search task, LAS achieves a word error rate (WER) of 14.1% without a dictionary or a language model, and 10.3% with language model rescoring over the top 32 beams. By comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0%.[16]
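The sketch below illustrates the idea behind the pyramidal "listener": stacking adjacent frames between recurrent layers halves the time resolution the decoder must attend to. It assumes PyTorch; the class name and sizes are illustrative assumptions rather than the original LAS implementation.

<syntaxhighlight lang="python">
# Minimal sketch of one pyramidal BiLSTM layer (assumes PyTorch).
import torch
import torch.nn as nn

class PyramidalBLSTM(nn.Module):
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.blstm = nn.LSTM(in_dim * 2, hidden, batch_first=True,
                             bidirectional=True)

    def forward(self, x):                     # x: (batch, T, in_dim)
        b, t, d = x.shape
        x = x[:, : t - (t % 2), :]            # drop an odd trailing frame
        x = x.reshape(b, t // 2, 2 * d)       # stack frame pairs: T -> T/2
        out, _ = self.blstm(x)
        return out                            # (batch, T/2, 2 * hidden)

layer = PyramidalBLSTM(in_dim=80, hidden=256)
reduced = layer(torch.randn(4, 500, 80))      # 500 frames -> 250 steps
</syntaxhighlight>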

=== 2. The Impact of Its Applications in Various Industries ===

==== Transformer ====

The Transformer is the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks[17]. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers; on both the WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, the best Transformer model outperforms even all previously reported ensembles.[18]
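As a minimal sketch of how recurrence is replaced by self-attention, the following uses PyTorch's built-in Transformer encoder layer; the dimensions and the idea of feeding projected filterbank features are illustrative assumptions, not a specific ASR system.

<syntaxhighlight lang="python">
# Minimal self-attention encoder sketch (assumes PyTorch; illustrative sizes).
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

frames = torch.randn(2, 150, 256)   # (batch, time, model dim), e.g. projected
                                    # filterbank features plus positional info
encoded = encoder(frames)           # every frame attends to every other frame
</syntaxhighlight>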

==== Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks ====

The Region Proposal Network (RPN) shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are then used by Fast R-CNN for detection. Spatial pyramid pooling (SPP) over shared convolutional feature maps was developed for efficient region-based object detection[19] and semantic segmentation. Fast R-CNN enables end-to-end detector training on shared convolutional features and shows compelling accuracy and speed.[20]

== Future Research ==

Evolving end-to-end speech recognition modeling promises a rich path of exploration and innovation. This section covers several emerging directions, ranging from robustness in unfavorable acoustic environments to the ethical and privacy considerations inherent in widespread deployment. They also include the quest for greater adaptability, the pursuit of multimodal integration, and the drive toward more accurate and contextualized transcription. These lines of research will form the basis for the next phase of innovation in the end-to-end speech recognition paradigm.

=== Robustness in Noisy Acoustic Environments ===

An overarching issue in ASR research is how to protect models from the deleterious effects of acoustic perturbations, especially in real-world scenarios characterized by external noise sources. Research efforts could focus on techniques that improve the adaptability of end-to-end ASR systems to diverse ambient acoustic environments.[21]

=== Low-Resource and Under-Resourced Linguistic Contexts ===

Because annotated data for training recognition models is scarce for marginalized or low-resource languages and dialects, future research may turn to methods for improving recognition performance in resource-scarce settings. Besides unsupervised or semi-supervised learning frameworks, this may include the judicious use of transfer learning from resource-rich languages.[22]

=== Multimodal Convergence ===

The confluence of diverse modalities, such as audio, visual, and textual cues, is an emerging frontier in ASR research.[23] Forward-looking investigations may focus on the design and realization of end-to-end models capable of assimilating and fusing information from these heterogeneous sources to achieve a more holistic comprehension of the input.

=== Ethical and Privacy Implications ===

As ASR technology becomes more widespread, ethical considerations grow in importance. Research efforts may include the development of privacy-preserving ASR methods, as well as work on the ethics of data collection, minimizing bias, and ensuring fairness in the operation of ASR systems.[24]

=== Interpretability and Explainability of Models ===

Interrogating the rationales underpinning model predictions constitutes an essential endeavor, particularly in high-stakes contexts such as healthcare or legal transcription. Future research might orient itself toward the development of techniques that render end-to-end ASR models more amenable to interpretability.

=== Zero-Shot and Few-Shot Learning Paradigms ===

Developing models that can generalize to novel, unseen tasks or languages with scant training data is a key research direction. Zero-shot and few-shot learning paradigms are likely to be pivotal in achieving this objective.

=== Deployment and Practical Applicability ===

Research efforts will be dedicated to the operationalization of end-to-end ASR systems in real-world contexts, necessitating considerations regarding computational efficiency, latency constraints, and adaptability to specific application domains.

These prospective avenues of inquiry underscore the unfolding trajectory of end-to-end ASR research, emblematic of a concerted effort to address salient real-world challenges and to harness the full potential of this technology.

== LLM Review ==

Under the instruction 'Act as a professor of speech science and review this text and give a score and feedback,' ChatGPT provided the following review:

A Review of Development of End-to-End Models in Automatic Speech Recognition:

Score: 9/10

Feedback:

1. Clarity and Organization (9/10):

   The text is well-structured and organized, providing a clear introduction to the topic, historical context, key innovations, impacts, and future research directions. Each section logically follows the other, enhancing the readability of the content.

2. Content and Relevance (9/10):

   The content is highly relevant and informative, covering a wide range of aspects related to end-to-end models in automatic speech recognition. The historical context and key innovations are well-detailed, offering readers a deep understanding of the evolution and significance of these models. The future research directions provide valuable insights into potential advancements in the field.

3. Depth of Information (9/10):

   The text offers a comprehensive review of end-to-end models, encompassing their historical development, advantages, key structures, impacts, and future research directions. It includes technical details, models, and references, contributing to a thorough understanding of the subject matter.

4. Citation and Reference (9/10):

   The text appropriately cites relevant research papers and works, providing credibility to the information presented. However, for enhanced readability, it might be helpful to include proper formatting for the citations and references.

5. Language and Grammar (9/10):

   The language used is precise and technically accurate, suitable for the intended academic audience. However, minor grammatical improvements could enhance the overall fluency and coherence of the text.

6. Visual Aids and Figures (9/10):

   The functional structure of end-to-end models is described, but the inclusion of visual aids or figures could further illustrate these concepts, making it easier for readers to grasp complex ideas.

7. Suggestions for Improvement:

   - Consider proofreading for minor grammatical and typographical errors to ensure a polished final presentation.

   - Include visual aids or figures to enhance the understanding of the functional structure of end-to-end models.

   - Unified the references format.

Overall, the text is highly informative and well-structured, offering a comprehensive review of end-to-end models in automatic speech recognition. With some minor improvements, it could be an excellent academic resource in the field of speech science.

According to the suggestions of ChatGPT, we made the following modifications.

  1. Following the grammar suggestions, we revised some grammar and sentence structures.
  2. Due to restrictions on uploading images/files, we could not insert images directly, so we made a flow chart of the E2E structure, uploaded it to a web page, and added hyperlinks to help readers understand.
  3. We unified the format of the references and subheadings.

== References ==

  1. Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. "An Overview of End-to-End Automatic Speech Recognition." Symmetry 11(8):1018. https://www.mdpi.com/2073-8994/11/8/1018
  2. Glasmachers, Tobias. 2017. "Limits of End-to-End Learning." arXiv preprint arXiv:1704.08305. https://arxiv.org/abs/1704.08305
  3. Graves, Alex, Santiago Fernandez, Faustino Gomez, and Jurgen Schmidhuber. 2006. "Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks." https://www.cs.toronto.edu/~graves/icml_2006.pdf
  4. Dahl, G. E., D. Yu, L. Deng, and A. Acero. 2011. "Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition." IEEE Transactions on Audio, Speech, and Language Processing 20:30–42. https://ieeexplore.ieee.org/document/5740583
  5. Bäckström, Tom, Okko Räsänen, Abraham Zewoudie, Pablo Pérez Zarazaga, Liisa Koivusalo, Sneha Das, Esteban Gómez Mellado, Mariem Bouafif Mansali, Daniel Ramos, Sudarsana Kadiri, and Paavo Alku. 2022. Introduction to Speech Processing. 2nd Edition. https://speechprocessingbook.aalto.fi/
  6. Miao, Y., M. Gowayyed, and F. Metze. 2015. "EESEN: End-to-End Speech Recognition Using Deep RNN Models and WFST-Based Decoding." In Proceedings of the 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), Scottsdale, AZ, USA, 13–17 December 2015, pp. 167–174. https://arxiv.org/abs/1507.08240
  7. Zhang, Y., M. Pezeshki, P. Brakel, S. Zhang, C. Laurent, Y. Bengio, and A. Courville. 2017. "Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks." arXiv. https://arxiv.org/abs/1701.02720
  8. Graves, A., and N. Jaitly. 2014. "Towards End-to-End Speech Recognition with Recurrent Neural Networks." In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014, pp. 1764–1772. https://proceedings.mlr.press/v32/graves14.html
  9. Hannun, Awni, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, et al. 2014. "Deep Speech: Scaling up End-to-End Speech Recognition." arXiv, December 19, 2014. https://arxiv.org/abs/1412.5567
  10. Chan, William, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2015. "Listen, Attend and Spell." arXiv, August 19, 2015. https://arxiv.org/abs/1508.01211
  11. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. "Attention Is All You Need." arXiv. https://arxiv.org/abs/1706.03762
  12. Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. "An Overview of End-to-End Automatic Speech Recognition." Symmetry 11(8):1018. https://www.mdpi.com/2073-8994/11/8/1018
  13. Boyer, Florian, Yusuke Shinohara, Takaaki Ishii, Hirofumi Inaguma, and Shinji Watanabe. 2022. "A Study of Transducer Based End-to-End ASR with ESPnet: Architecture, Auxiliary Loss and Decoding Strategies." arXiv, January 14, 2022. http://arxiv.org/abs/2201.05420
  14. Li, Jinyu. 2021. "Recent Advances in End-to-End Automatic Speech Recognition." arXiv. https://arxiv.org/abs/2111.01690
  15. Orken, M., O. Dina, A. Keylan, et al. 2022. "A Study of Transformer-Based End-to-End Speech Recognition System for Kazakh Language." Scientific Reports. https://www.nature.com/articles/s41598-022-12260-y
  16. Chan, William, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. 2015. "Listen, Attend and Spell." arXiv, August 19, 2015. http://arxiv.org/abs/1508.01211
  17. Sukhbaatar, Sainbayar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. "End-to-End Memory Networks." In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, eds., Advances in Neural Information Processing Systems 28, pp. 2440–2448. Curran Associates, Inc. https://arxiv.org/abs/1503.08895
  18. Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. "Attention Is All You Need." arXiv. http://arxiv.org/abs/1706.03762
  19. He, K., X. Zhang, S. Ren, and J. Sun. 2014. "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition." In European Conference on Computer Vision (ECCV). https://arxiv.org/abs/1406.4729
  20. Ren, Shaoqing, Kaiming He, Ross Girshick, and Jian Sun. 2016. "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks." arXiv, January 6, 2016. http://arxiv.org/abs/1506.01497
  21. Watcharasupat, K. N., T. N. T. Nguyen, W.-S. Gan, S. Zhao, and B. Ma. 2022. "End-to-End Complex-Valued Multidilated Convolutional Neural Network for Joint Acoustic Echo Cancellation and Noise Suppression." In ICASSP 2022 – 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, pp. 656–660. https://ieeexplore.ieee.org/abstract/document/9747034
  22. Wang, D., J. Yu, X. Wu, L. Sun, X. Liu, and H. Meng. 2021. "Improved End-to-End Dysarthric Speech Recognition via Meta-Learning Based Model Re-Initialization." In 2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP), Hong Kong, pp. 1–5. https://ieeexplore.ieee.org/abstract/document/9362068
  23. Palaskar, S., R. Sanabria, and F. Metze. 2018. "End-to-End Multimodal Speech Recognition." In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, pp. 5774–5778. https://ieeexplore.ieee.org/abstract/document/8462439
  24. Feng, S., O. Kudina, B. M. Halpern, and O. Scharenborg. 2021. "Quantifying Bias in Automatic Speech Recognition." arXiv:2103.15122. http://arxiv.org/abs/2103.15122

== Group Members ==

Xinyi Ma

Yining Lei

Jingwen Shi

Qing Li