Development of End-to-End Models
Yining Lei, Xinyi Ma, Liqing, Jingwen Shi
Introduction
The development of End-to-End (E2E) models represents a significant shift in the field of automatic speech recognition (ASR). These models seek to simplify the complex pipeline of traditional systems by directly mapping an input audio sequence to a sequence of words or other graphemes.[1] Framed in the Deep Learning context and taking advantage of neural network (NN) architectures, E2E models capture the acoustic and linguistic information present in the speech signal directly, casting a possibly complex processing pipeline into the coherent and flexible modeling language of neural networks.[2] The functional structure of an E2E model is shown below.
L = {l_1, ···, l_n}    (output sequence)
↑
Decoder
↑
Aligner
↑
F = {f_1, ···, f_T}    (feature sequence)
↑
Encoder
↑
X = {x_1, ···, x_T}    (input sequence)
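The diagram is read bottom-up: the encoder transforms the raw input sequence X into a feature sequence F, the aligner resolves the length mismatch between F and the label sequence L, and the decoder emits the output symbols. The sketch below is only illustrative and is not taken from any of the cited systems: it assumes one common realization of this structure, a bidirectional LSTM encoder with a CTC loss playing the aligner role, written in PyTorch with hypothetical module names and dimensions.

import torch
import torch.nn as nn

class E2ESketch(nn.Module):
    def __init__(self, input_dim=80, hidden_dim=256, num_graphemes=29):
        super().__init__()
        # Encoder: maps the input sequence X = {x_1, ..., x_T}
        # to a feature sequence F = {f_1, ..., f_T}.
        self.encoder = nn.LSTM(input_dim, hidden_dim, num_layers=2,
                               bidirectional=True, batch_first=True)
        # Decoder: projects each feature vector to grapheme scores.
        self.decoder = nn.Linear(2 * hidden_dim, num_graphemes)
        # Aligner: CTC handles the alignment between the frame-level
        # scores and the (shorter) label sequence L = {l_1, ..., l_n}.
        self.aligner = nn.CTCLoss(blank=0, zero_infinity=True)

    def forward(self, x):                  # x: (batch, T, input_dim)
        features, _ = self.encoder(x)      # F: (batch, T, 2 * hidden_dim)
        logits = self.decoder(features)    # (batch, T, num_graphemes)
        return logits.log_softmax(dim=-1)

# Usage sketch: one training step on random data.
model = E2ESketch()
x = torch.randn(4, 200, 80)                 # 4 utterances, 200 feature frames each
labels = torch.randint(1, 29, (4, 30))      # grapheme indices (0 is the CTC blank)
log_probs = model(x).transpose(0, 1)        # CTCLoss expects (T, batch, classes)
input_lengths = torch.full((4,), 200, dtype=torch.long)
label_lengths = torch.full((4,), 30, dtype=torch.long)
loss = model.aligner(log_probs, labels, input_lengths, label_lengths)
loss.backward()

In an attention-based E2E model the separate aligner disappears: the attention mechanism inside the decoder learns the alignment between the feature sequence and the output symbols instead.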
Historical Context
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Key Innovations
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Impact
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Future research
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. [3]
References
- ↑ Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. “An Overview of End-to-End Automatic Speech Recognition.” Symmetry 11(8):1018.
- ↑ Glasmachers, Tobias. 2017. “Limits of End-to-End Learning.” arXiv preprint arXiv:1704.08305.
- ↑ Feng, S., O. Kudina, B. M. Halpern, and O. Scharenborg. 2021. “Quantifying Bias in Automatic Speech Recognition.” arXiv preprint arXiv:2103.15122. http://arxiv.org/abs/2103.15122
- ↑ Glantz, Richard. 1970. “SHOEBOX: A Personal File Handling System for Textual Data.” Proceedings of the November 17–19, 1970, Fall Joint Computer Conference, 535–545.