Development of End-to-End Models

Yining Lei, Xinyi Ma, Liqing, Jingwen Shi

Introduction

The development of End-to-End (E2E) models represents a significant shift in the field of automatic speech recognition (ASR). These models seek to simplify the complex pipeline of traditional systems by directly mapping an input audio sequence to a sequence of words or other graphemes.[1] Framed in the deep learning context and taking advantage of Neural Network (NN) architectures, these models directly capture the acoustic and linguistic information present in the speech signal, casting a possibly complex processing pipeline into the coherent and flexible modeling language of neural networks.[2] The functional structure of E2E models is shown below:

Input sequence X = {x1, ···, xT} → Encoder → Feature sequence F = {f1, ···, fN} → Aligner → Decoder → Output sequence L = {l1, ···, lM}
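To make this structure concrete, below is a minimal PyTorch sketch of the encoder-aligner-decoder skeleton. It is purely illustrative: the class name, layer sizes, and vocabulary size are assumptions for this example, not a reference implementation from the cited literature, and the aligner and decoder are collapsed into a single per-frame classifier as is common in CTC-style models.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class ToyE2EASR(nn.Module):
    """Illustrative encoder-aligner-decoder skeleton (hypothetical, not a production model)."""
    def __init__(self, n_mels=80, hidden=256, vocab_size=30):
        super().__init__()
        # Encoder: maps the input sequence X to a feature sequence F
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        # Aligner + decoder collapsed into a per-frame classifier; trained
        # with a CTC loss, this implicitly aligns frames to output labels L
        self.classifier = nn.Linear(2 * hidden, vocab_size)

    def forward(self, x):
        # x: (batch, time, n_mels) acoustic features
        f, _ = self.encoder(x)        # F: (batch, time, 2 * hidden)
        return self.classifier(f)     # per-frame logits over output labels

model = ToyE2EASR()
logits = model(torch.randn(1, 100, 80))  # one utterance, 100 frames
print(logits.shape)                      # torch.Size([1, 100, 30])
</syntaxhighlight>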

There are several major advantages of E2E models over traditional hybrid models. First, E2E models use a single objective function, consistent with the ASR objective, to optimize the whole network, while traditional hybrid models optimize individual components separately, which cannot guarantee a global optimum. Second, E2E models perform well without deep knowledge about the problem, despite its complexity: by using a unified neural network architecture and an appropriate learning algorithm, they avoid the task-specific engineering and extensive prior knowledge required in traditional hybrid models for Natural Language Processing (NLP) tasks. Third, because a single network is used for ASR, E2E models are much more compact than traditional hybrid models. Therefore, E2E models have the potential to improve accuracy and efficiency in various applications, including voice assistants, transcription services, and more.
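As a concrete illustration of the first advantage, the sketch below applies a single objective (here the CTC loss, one common E2E criterion) to the network's per-frame outputs, so that one backward pass optimizes the entire model jointly. The shapes, the blank index, and the target lengths are assumed toy values.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# A single global objective: CTC loss over the network's per-frame log
# probabilities. One backward pass propagates gradients through the whole
# model, so all layers are optimized jointly toward the ASR objective.
ctc = nn.CTCLoss(blank=0)

time_steps, batch, vocab = 100, 4, 30            # assumed toy dimensions
logits = torch.randn(time_steps, batch, vocab, requires_grad=True)
log_probs = logits.log_softmax(-1)               # (T, N, C), as CTCLoss expects
targets = torch.randint(1, vocab, (batch, 12))   # toy label sequences
input_lengths = torch.full((batch,), time_steps, dtype=torch.long)
target_lengths = torch.full((batch,), 12, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # single criterion, end-to-end gradient flow
print(loss.item())
</syntaxhighlight>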

Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have played key roles in the development of E2E models.

Historical Context

For a long time, the hidden Markov model (HMM)-Gaussian mixture model (GMM) framework was the mainstream approach to speech recognition. With the rise of deep learning in the early 2010s, however, the HMM-deep neural network (DNN) model and end-to-end models achieved performance beyond HMM-GMM. Both rely on deep learning techniques and achieve comparable performance. However, the HMM-DNN model is limited by various unfavorable factors inherited from the HMM, such as forced segmentation alignment of the data, the independence assumption, and separate training of multiple modules, while the end-to-end model offers a simplified architecture, joint training, and direct output, with no need to force data alignment. Therefore, end-to-end models have become a hot topic as well as an important research direction in the field of speech recognition.

Traditional ASR systems involve multiple stages, including feature extraction, acoustic modeling, phonetic decoding, and language modeling. These stages often require handcrafted engineering and are computationally expensive.

The traditional design for a spoken language understanding system is a pipeline structure with several different components, exemplified by the following sequence:

Audio (input) → Feature Extraction → Phoneme Detection → Word Composition → Text Transcript (output)

A clear limitation of this pipelined architecture is that each module has to be optimized separately under different criteria. The E2E approach consists of replacing the aforementioned chain with a single Neural Network (NN), allowing the use of a single optimization criterion for enhancing the system:

Audio (input) → NN → Transcript (output)

E2E models emerged as a response to streamline this process and leverage deep learning techniques to directly map audio to text.
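The sketch below illustrates, with assumed toy values, how such a single network's per-frame outputs can be collapsed directly into a transcript using greedy CTC-style decoding (take the best label per frame, merge repeats, drop blanks). The vocabulary here is hypothetical; real systems use much larger character or subword inventories.

<syntaxhighlight lang="python">
import torch

# Assumed toy vocabulary; index 0 is the CTC blank symbol.
vocab = ["_", "a", "c", "t", " "]
logits = torch.randn(20, len(vocab))   # per-frame outputs of the network

ids = logits.argmax(dim=-1).tolist()   # best label per frame
decoded, prev = [], None
for i in ids:
    if i != prev and i != 0:           # merge repeats, drop blanks
        decoded.append(vocab[i])
    prev = i
print("".join(decoded))
</syntaxhighlight>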

Key Innovations

E2E modeling directly translates the speech input into the output using only a single neural network, unlike traditional systems, which consist of several independent components.

In traditional ASR, the majority of systems comprise distinct acoustic, pronunciation, and language model components, each trained separately. The creation of a pronunciation lexicon and the specification of phoneme sets for a specific language require expertise and are time-intensive tasks.

E2E speech recognition significantly simplifies the complexity of traditional models. Manual labeling of information is unnecessary, as the neural network can autonomously learn language and pronunciation details.[3]

Advantages of E2E ASR

First, E2E models simplify the ASR pipeline substantially by directly generating characters or even words. Conversely, the design of traditional hybrid models is intricate, demanding extensive expertise and years of ASR experience.

Furthermore, the use of a single network for ASR makes E2E models significantly more compact than traditional hybrid models. This compactness allows E2E models to be deployed easily on devices while maintaining high accuracy.[4]

Finally, E2E models have a much simpler training approach, which reduces learning time and decoding time and allows joint optimization with subsequent processing, such as natural language understanding.[5]

Impact

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Future research

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. [6]

References


  1. Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. “An Overview of End-to-End Automatic Speech Recognition.” Symmetry 11(8):1018.
  2. Glasmachers, Tobias. 2017. “Limits of End-to-End Learning.” arXiv preprint arXiv:1704.08305.
  3. Wang, Dong, Xiaodong Wang, and Shaohe Lv. 2019. “An Overview of End-to-End Automatic Speech Recognition.” Symmetry 11(8):1018.
  4. Li, Jinyu. 2021. “Recent Advances in End-to-End Automatic Speech Recognition.” arXiv preprint arXiv:2111.01690. DOI:10.48550/arXiv.2111.01690.
  5. Orken, M., O. Dina, A. Keylan, et al. 2022. “A Study of Transformer-Based End-to-End Speech Recognition System for Kazakh Language.” Scientific Reports. DOI:10.1038/s41598-022-12260-y.
  6. Feng, S., O. Kudina, B. M. Halpern, and O. Scharenborg. 2021. “Quantifying Bias in Automatic Speech Recognition.” arXiv preprint arXiv:2103.15122. http://arxiv.org/abs/2103.15122.
