Large Vocabulary Continuous Speech Recognition


Introduction

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Historical Context

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Key Innovations

LVCSR has been developing steadily in a more end-to-end direction. A traditional LVCSR system comprises several separate modules, for instance an acoustic model, a language model, and a pronunciation dictionary. Over the past decade, to simplify the pipeline and improve efficiency, researchers have focused on using a single neural network to couple all of these modules into one unified model.
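As a concrete illustration, the sketch below (in PyTorch; the class name, layer sizes, and character vocabulary are illustrative assumptions, not taken from any particular system) shows how one network can map acoustic feature frames directly to character probabilities, folding the roles of the acoustic model and the pronunciation dictionary into a single set of weights:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class EndToEndASR(nn.Module):
    """One network in place of separate acoustic/pronunciation modules."""
    def __init__(self, n_feats=80, n_chars=29, hidden=256):
        super().__init__()
        # Bidirectional LSTM encoder over acoustic feature frames
        # (e.g., 80-dimensional log-mel filterbanks).
        self.encoder = nn.LSTM(n_feats, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        # A single projection to character logits replaces the separate
        # phoneme inventory and pronunciation lexicon of a hybrid system.
        self.classifier = nn.Linear(2 * hidden, n_chars)

    def forward(self, feats):
        # feats: (batch, time, n_feats) -> log-probs: (batch, time, n_chars)
        enc, _ = self.encoder(feats)
        return self.classifier(enc).log_softmax(dim=-1)
</syntaxhighlight>

An external language model can still be combined with such a network at decoding time, but it is no longer a required component of the recognizer itself.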

One such approach uses a Recurrent Neural Network (RNN) for phoneme recognition, i.e., for converting spoken audio into symbol sequences. A distinctive feature of this method is that it performs the conversion without complex alignment operations, generating the desired textual output directly. It can also operate in real-time scenarios with stringent timing requirements.[1]
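The reference abbreviated as [1] is not spelled out below, so the exact training criterion is unclear; Connectionist Temporal Classification (CTC) is one standard way to train such a model from unaligned (audio, transcript) pairs, and the sketch below shows it with PyTorch's built-in loss. All tensor shapes and sizes are illustrative.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# CTC marginalizes over all frame-to-label alignments, so no hand-made
# alignment between audio frames and output symbols is ever needed.
ctc_loss = nn.CTCLoss(blank=0)

T, B, C = 100, 4, 29  # frames, batch size, symbol inventory (index 0 = blank)
log_probs = torch.randn(T, B, C, requires_grad=True).log_softmax(-1)
targets = torch.randint(1, C, (B, 12))             # transcripts, no blanks
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 12, dtype=torch.long)

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients flow back into the acoustic network
</syntaxhighlight>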

The method is built as an extension of a neural machine translation model, which makes it similar to certain existing speech recognition techniques. What sets it apart is that it computes scores for all positions in both the input and output sequences and then uses those scores explicitly for alignment, which facilitates the generation of accurate transcriptions. In addition, the decoder state incorporates information about prior alignment choices, further improving recognition precision.
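This description matches attention-based encoder-decoder recognizers in general. Below is a minimal sketch (PyTorch; the module name and all dimensions are illustrative assumptions) of how a score can be computed for every input position and normalized into an explicit alignment, with the previous step's alignment fed back in so the decoder conditions on prior alignment choices:

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocationAwareAttention(nn.Module):
    """Additive attention that also conditions on the previous alignment."""
    def __init__(self, enc_dim=512, dec_dim=256, att_dim=128, k=32):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, att_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, att_dim, bias=False)
        # Convolve the previous attention weights so the score at each input
        # position "sees" where the decoder attended at the last step.
        self.loc_conv = nn.Conv1d(1, k, kernel_size=11, padding=5)
        self.W_loc = nn.Linear(k, att_dim, bias=False)
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, enc, dec_state, prev_align):
        # enc: (B, T, enc_dim); dec_state: (B, dec_dim); prev_align: (B, T)
        loc = self.loc_conv(prev_align.unsqueeze(1)).transpose(1, 2)
        scores = self.v(torch.tanh(
            self.W_enc(enc) + self.W_dec(dec_state).unsqueeze(1)
            + self.W_loc(loc)
        )).squeeze(-1)                     # one score per input position
        align = F.softmax(scores, dim=-1)  # explicit alignment over inputs
        context = torch.bmm(align.unsqueeze(1), enc).squeeze(1)
        return context, align
</syntaxhighlight>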

A key advantage of this method is its ability to perform decoding in an almost deterministic manner, making it suitable for real-time speech recognition without computationally intensive procedures (a minimal decoding sketch follows the list below). This suggests the method can be extended to large-vocabulary systems, and it opens up the possibility of searching directly for the most probable word sequences, rather than searching at the phoneme or frame level as is traditionally done in Hidden Markov Model (HMM)-based hybrid systems. Together, these properties improve both the speed and the accuracy of speech recognition.

Research on LVCSR has also been moving in several further directions:

1. Incremental learning: continuing to learn from new speech data on top of an existing model, so that the system adapts to new speakers, accents, and language change.

2. Multilingual and cross-lingual recognition: LVCSR technology has improved significantly in supporting multiple languages and cross-lingual recognition, which is important for international applications and multilingual environments.

3. Low-resource speech recognition: for resource-constrained recognition tasks, researchers continue to look for effective ways to build high-performing LVCSR systems when data is limited.
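As the concrete illustration promised above, the sketch below performs near-deterministic decoding as a greedy pass over CTC-style outputs: one argmax per frame, collapse repeats, drop blanks. This is a toy under stated assumptions (a blank symbol at index 0), not the decoding procedure of any specific system; an HMM-based hybrid system would instead search a frame-level lattice.

<syntaxhighlight lang="python">
import torch

def greedy_ctc_decode(log_probs, blank=0):
    """Near-deterministic decoding: one argmax per frame, no lattice search.

    log_probs: (time, n_symbols) output of the acoustic network.
    Collapses repeated symbols, then drops blanks (standard CTC convention).
    """
    best = log_probs.argmax(dim=-1).tolist()  # most probable symbol per frame
    out, prev = [], None
    for s in best:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out

# Illustrative usage with random stand-in "network output":
probs = torch.randn(50, 29).log_softmax(-1)
print(greedy_ctc_decode(probs))
</syntaxhighlight>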

Impact

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Future research

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. [2]

References


  1. [1]
  2. Feng, S., Kudina, O., Halpern, B. M., & Scharenborg, O. (2021). Quantifying Bias in Automatic Speech Recognition (arXiv:2103.15122). arXiv. http://arxiv.org/abs/2103.15122
  3. Glantz, Richard. "SHOEBOX: a personal file handling system for textual data." In Proceedings of the November 17-19, 1970, Fall Joint Computer Conference, 1970, 535-545.


Group members: Yan Liao, Jingxuan Yue, Chenyu Li