Large Vocabulary Continuous Speech Recognition
Introduction
Large Vocabulary Continuous Speech Recognition (LVCSR) is a sophisticated technology within the domain of Automatic Speech Recognition (ASR). LVCSR specifically focuses on recognizing a sequence of words from a vast and diverse vocabulary without prior knowledge of word boundaries.[1] An LVCSR system is composed of four indispensable components: Front-End Processing, Acoustic Model, Language Model, and Search & System Combination.[2] Collectively, these components work seamlessly to transform speech signals into coherent word sequences. With the advancement of deep learning, LVCSR has witnessed a substantial enhancement in accuracy. This breakthrough has broadened the applicability of LVCSR across various industries and applications, making it an indispensable tool in the realm of speech recognition technology.
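To make the division of labour between these four components concrete, the sketch below wires them together in Python. All function names and scoring rules are illustrative placeholders rather than any real toolkit's API; the point is only how front-end features, acoustic scores, language scores, and search combine into a transcription.

```python
import numpy as np

# Hypothetical component names; a real system would use a toolkit or an
# end-to-end neural model rather than these placeholder functions.

def front_end(waveform: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Front-end processing: slice the waveform into frames and compute
    simple spectral features (a stand-in for MFCC / filter-bank features)."""
    frames = [waveform[i:i + frame_len]
              for i in range(0, len(waveform) - frame_len + 1, hop)]
    return np.stack([np.abs(np.fft.rfft(f))[:40] for f in frames])

def acoustic_score(features: np.ndarray, words: list[str]) -> float:
    """Acoustic model: how well the feature frames match the words' sounds.
    Placeholder - a trained GMM-HMM or neural network would go here."""
    return -0.1 * len(features) * len(words)

def language_score(words: list[str]) -> float:
    """Language model: prior probability of the word sequence.
    Placeholder - an n-gram or neural language model would go here."""
    return -2.0 * len(words)

def decode(waveform: np.ndarray, candidates: list[list[str]]) -> list[str]:
    """Search: pick the candidate word sequence with the best combined score.
    A real decoder searches a huge lattice instead of rescoring a short list."""
    feats = front_end(waveform)
    return max(candidates, key=lambda w: acoustic_score(feats, w) + language_score(w))

# Usage with dummy audio and two hypothetical candidate transcriptions:
audio = np.random.randn(16000)  # one second of audio at 16 kHz
print(decode(audio, [["recognize", "speech"], ["wreck", "a", "nice", "beach"]]))
```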
Historical Context
The concept of Automatic Speech Recognition began to take shape in the 1950s and 1960s, with early research focused on constructing recognition systems for isolated word speech using rudimentary techniques.
In the 1970s, a groundbreaking development occurred when Hidden Markov Models (HMMs) were successfully applied to continuous speech recognition systems. This pivotal moment marked the evolution of speech recognition from basic pattern matching approaches to sophisticated statistical probability models.
The 1980s witnessed the emergence of LVCSR. Innovations such as Linear Predictive Coding (LPC) and the incorporation of statistical language modeling significantly enhanced the accuracy and resilience of these continuous speech recognition systems.[3]
Entering the 1990s, LVCSR applications gained momentum in the market, finding increasing utility in transcription services, customer support applications, and supplementary tools.
In the 21st century, fueled by advancements in deep learning and artificial intelligence, acoustic models continued to evolve. Deep Neural Networks (DNNs) and Recurrent Neural Networks (RNNs) gradually assumed a dominant role in the LVCSR field, further elevating the performance and precision of speech recognition.[4] This widespread adoption paved the way for applications such as real-time transcription and voice commands. Today, research in the LVCSR domain extends across diverse sectors, including healthcare, automotive, education, and entertainment, propelling innovation in human-computer interaction and assistive technologies.
Key Innovations
Development of End-to-End Models
LVCSR has been steadily evolving towards end-to-end models. Traditional LVCSR systems comprise several separate modules, such as acoustic models, language models, and pronunciation dictionaries. In recent years, to simplify the pipeline and improve efficiency, researchers have focused on using a single neural network to integrate all of these modules into a unified model.
One such approach employs a recurrent neural network (RNN) for phoneme recognition, i.e. converting spoken sounds into text. A distinctive feature of this method is its ability to perform this conversion without complex alignment operations, generating the desired textual output directly. Additionally, it can operate in real-time scenarios with stringent timing requirements.
The foundation of this method is an extension of a neural machine translation model, which shares similarities with certain existing speech recognition techniques. However, what sets it apart is its capability to calculate scores for all positions within both the input and output sequences, subsequently using these scores to assist in the recognition process. An innovative aspect of this approach is the explicit utilization of these scores for alignment purposes, facilitating the generation of accurate textual representations. Furthermore, the decoder state in this model incorporates information regarding prior alignment choices, enhancing the precision of speech understanding.
A key advantage of this method is its ability to perform decoding in an almost deterministic manner, making it suitable for real-time speech recognition without the need for computationally intensive procedures. This performance characteristic suggests the feasibility of extending the method to large-vocabulary speech recognition systems. It also opens up the possibility of directly searching for the most probable word sequences, as opposed to conducting searches at the phoneme or frame level, as traditionally done in Hidden Markov Model (HMM)-based hybrid systems. This feature significantly contributes to improving the speed and accuracy of speech recognition.[5]
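The sketch below illustrates the core of such an attention-based recurrent model in PyTorch: at every output step the decoder scores all encoder positions, normalises those scores into an alignment, and conditions the next prediction on the resulting context. Layer sizes, class names, and the use of teacher forcing are assumptions made for illustration, not the exact architecture of the cited paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of an attention-based encoder-decoder for speech recognition,
# in the spirit of Chorowski et al. (2014); dimensions are illustrative.

class AttentionASR(nn.Module):
    def __init__(self, n_feats=40, n_tokens=30, hidden=128):
        super().__init__()
        self.hidden = hidden
        self.encoder = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(n_tokens, hidden)
        self.decoder_cell = nn.LSTMCell(hidden + 2 * hidden, hidden)
        self.attn_score = nn.Linear(2 * hidden + hidden, 1)  # scores one input position
        self.out = nn.Linear(hidden, n_tokens)

    def forward(self, feats, targets):
        # feats: (B, T, n_feats) acoustic features; targets: (B, U) token ids.
        enc, _ = self.encoder(feats)                                   # (B, T, 2H)
        B, T, _ = enc.shape
        h = feats.new_zeros(B, self.hidden)
        c = feats.new_zeros(B, self.hidden)
        logits = []
        for u in range(targets.size(1)):
            # Score every encoder position against the current decoder state,
            # then normalise the scores into an alignment over the input.
            query = h.unsqueeze(1).expand(B, T, self.hidden)
            scores = self.attn_score(torch.cat([enc, query], dim=-1)).squeeze(-1)
            align = torch.softmax(scores, dim=-1)                      # (B, T)
            context = (align.unsqueeze(-1) * enc).sum(dim=1)           # (B, 2H)
            prev = self.embed(targets[:, u])                           # teacher forcing
            h, c = self.decoder_cell(torch.cat([prev, context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                              # (B, U, n_tokens)

# Usage with random features and token ids.
model = AttentionASR()
feats, targets = torch.randn(2, 50, 40), torch.randint(0, 30, (2, 7))
print(model(feats, targets).shape)   # torch.Size([2, 7, 30])
```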
Multimodal integration in speech recognition
Multimodal integration in speech recognition refers to the process of combining information from multiple sensory modalities, such as audio (speech signals) and visual (lip movements or facial expressions), to improve the accuracy and robustness of speech recognition systems.
In traditional speech recognition, the system relies solely on audio input to transcribe spoken words. However, this approach can be limited in noisy environments or when there are variations in speech articulation. Multimodal integration seeks to address these limitations by incorporating additional sources of information, such as visual cues from the speaker's mouth movements.
The idea behind multimodal integration is that different modalities can provide complementary information that helps disambiguate spoken words. For example, lip movements can provide information about the shape of the mouth and the position of the tongue, which can be useful for disambiguating similar-sounding words.[6]
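A minimal way to exploit this complementarity is early fusion: the audio and visual feature streams, assumed to be synchronised to the same frame rate, are concatenated and modelled jointly. The PyTorch sketch below shows the idea; the dimensions and module names are illustrative assumptions rather than a specific published system.

```python
import torch
import torch.nn as nn

# Early-fusion sketch: per-frame audio features and visual (lip) features are
# concatenated and classified jointly by a recurrent model.

class AudioVisualFusion(nn.Module):
    def __init__(self, audio_dim=40, visual_dim=64, hidden=128, n_phones=40):
        super().__init__()
        self.rnn = nn.GRU(audio_dim + visual_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_phones)

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (B, T, audio_dim), visual_feats: (B, T, visual_dim),
        # assumed already aligned to the same frame rate.
        fused = torch.cat([audio_feats, visual_feats], dim=-1)
        out, _ = self.rnn(fused)
        return self.classifier(out)    # per-frame phone scores

model = AudioVisualFusion()
audio = torch.randn(2, 100, 40)    # e.g. filter-bank features
visual = torch.randn(2, 100, 64)   # e.g. lip-region embeddings
print(model(audio, visual).shape)  # torch.Size([2, 100, 40])
```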
Impact
Over the past decade or so, several advances have been made to the design of modern LVCSR systems to the point where their application has broadened from early speaker-dependent dictation systems to speaker-independent automatic broadcast news transcription and indexing, lectures and meetings transcription, conversational telephone speech transcription, open-domain voice search, medical and legal speech recognition, and call center applications, to name a few. The commercial success of these systems is an impressive testimony to how far research in LVCSR has come.[7] LVCSR technology has significantly advanced the capabilities of speech recognition, enabling a wide range of applications and benefiting numerous sectors.
Improved Accuracy and Enhanced User Experience
- LVCSR has substantially improved the accuracy of speech recognition systems. It allows for the transcription of continuous speech and recognition of a vast vocabulary, resulting in more precise and natural language understanding. At the same time, LVCSR has made voice interactions more user-friendly and intuitive: users can communicate naturally and expect more accurate responses from speech-based systems.
- The higher accuracy of LVCSR has made it more practical and reliable for various applications, such as voice assistants like Siri, Google Assistant, and Alexa, and transcription services like Riverside and Otter.
Automation and Accessibility
- LVCSR technology enables automation in various sectors, reducing the need for manual intervention and streamlining processes, and improving accessibility for individuals with disabilities, especially those with hearing impairments or speech disabilities.
- LVCSR has found applications such as DeepScribe and Nuance DAX in medical informatics, where healthcare professionals work alongside speech recognition experts to develop systems for transcribing medical records, aiding in diagnosis, and facilitating medical research. Whispp, a smart speech amplifier, enables people with voice disorders to communicate smoothly and effectively, opening up new opportunities for them.
Research Opportunities and Diverse Applications
- LVCSR technology has spurred research in NLP, AI, and related fields, offering opportunities for innovation and development. LVCSR has also catalyzed extensive research in emerging areas and fostered interdisciplinary collaborations, generating numerous innovative research topics.
- Researchers have explored the intersection of neuroscience and LVCSR to better understand how the human brain processes spoken language. Linguists and acoustic modeling experts collaborate to improve speech recognition systems by enhancing phonetic models and language-specific features, making them more effective in recognizing dialects and accents. The intersection of NLP and LVCSR has led to significant progress in understanding and processing spoken language. Researchers in these fields work together to develop advanced algorithms for LVCSR and conversational AI.
Future research
Emotion recognition and speech generation
One direction is to combine emotion recognition technology with speech recognition in order to better understand and synthesize speech with emotional colouring. This has potential value in applications such as virtual assistants and automated telephone customer service.[8]
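As a rough illustration of how such a component might look, the sketch below pools frame-level scores from a small feed-forward network into an utterance-level emotion decision, loosely in the spirit of the DNN front end of the cited work. The feature dimension and the set of four emotion classes are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Frame-level DNN emotion classifier pooled to an utterance-level decision;
# all sizes and the number of emotion classes are illustrative.

class EmotionDNN(nn.Module):
    def __init__(self, feat_dim=40, hidden=256, n_emotions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_emotions),
        )

    def forward(self, frames):                  # frames: (T, feat_dim)
        frame_scores = self.net(frames)         # per-frame emotion scores
        return frame_scores.mean(dim=0)         # average-pool over the utterance

utterance = torch.randn(120, 40)                # 120 frames of acoustic features
print(EmotionDNN()(utterance).argmax().item()) # index of the predicted emotion
```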
Language modeling
Experts continue to investigate language modeling techniques specifically for conversational speech, for example discriminative language model training methods and language models that exploit conversational speech patterns. Much of the SRI Language Modeling Toolkit (SRILM; SRI International is an American nonprofit scientific research institute) was developed as a by-product of LVCSR research, and SRI often provides language modeling support to other sites in the LVCSR community.[9]
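The toy bigram model below illustrates the basic mechanism a statistical language model contributes to LVCSR, namely assigning higher probability to plausible word orders. It uses add-one smoothing purely for brevity; production systems rely on much larger smoothed n-gram models (for example trained with toolkits such as SRILM) or neural language models.

```python
from collections import Counter
import math

# Toy bigram language model with add-one (Laplace) smoothing, for illustration only.

class BigramLM:
    def __init__(self, sentences):
        self.unigrams = Counter()
        self.bigrams = Counter()
        for words in sentences:
            tokens = ["<s>"] + words + ["</s>"]
            self.unigrams.update(tokens)
            self.bigrams.update(zip(tokens, tokens[1:]))
        self.vocab_size = len(self.unigrams)

    def log_prob(self, words):
        tokens = ["<s>"] + words + ["</s>"]
        lp = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            # Add-one smoothing so unseen bigrams still get nonzero probability.
            p = (self.bigrams[(prev, cur)] + 1) / (self.unigrams[prev] + self.vocab_size)
            lp += math.log(p)
        return lp

# A plausible word order scores higher than a scrambled one.
lm = BigramLM([["call", "the", "doctor"], ["call", "me", "later"]])
print(lm.log_prob(["call", "the", "doctor"]) > lm.log_prob(["doctor", "the", "call"]))  # True
```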
Continuous adaptive and incremental learning
A further direction is to develop LVCSR systems with continuous adaptive capabilities that can adjust to changing environments and user needs, and that support incremental learning so that performance keeps improving as data accumulate.[10]
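One classic instance of such adaptation is MLLR, which re-estimates an affine transform of the acoustic model's Gaussian means from a small amount of speaker-specific data. The sketch below captures only the idea: it fits the transform by ordinary least squares instead of the maximum-likelihood estimation used in real MLLR, and all data in the example are synthetic.

```python
import numpy as np

# MLLR-style adaptation sketch: fit W, b so that W @ mu + b maps the model's
# Gaussian means towards a new speaker's acoustic space, then apply it.

def fit_affine_transform(model_means, adapted_targets):
    """Least-squares estimate of an affine transform (a simplification of MLLR)."""
    d = model_means.shape[1]
    X = np.hstack([model_means, np.ones((len(model_means), 1))])   # append bias column
    W_ext, *_ = np.linalg.lstsq(X, adapted_targets, rcond=None)    # (d+1, d) solution
    return W_ext[:d].T, W_ext[d]                                   # W: (d, d), b: (d,)

def adapt_means(model_means, W, b):
    """Apply mu' = W @ mu + b to every Gaussian mean in the acoustic model."""
    return model_means @ W.T + b

# Five hypothetical Gaussian means in a 3-dimensional feature space.
means = np.random.randn(5, 3)
targets = means * 1.1 + 0.5            # pretend the new speaker shifts the feature space
W, b = fit_affine_transform(means, targets)
print(np.allclose(adapt_means(means, W, b), targets, atol=1e-6))   # True
```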
LLM review
The foundation of this wiki page was established through an in-depth exploration of relevant academic research publications available online. Contributors conducted extensive research to gather accurate and up-to-date information regarding LVCSR technology, its evolution, and its real-world applications.
ChatGPT helped in elevating the language and style of the content to a professional standard. It was utilized to refine sentence structures, improve vocabulary usage, and enhance overall readability. Apart from that, ChatGPT assisted in structuring the content logically. It suggested headings, subheadings, and bullet points to enhance organization, ensuring that topics flowed seamlessly throughout the page. This aided in presenting complex concepts in a comprehensible manner.
Prior to publication, contributors meticulously fact-checked and validated all information provided by ChatGPT to ensure that the content was not only well-written but also accurate and reliable.
References
- ↑ Mitankin, P., Mihov, S., & Tinchev, T. (2009). Large Vocabulary Continuous Speech Recognition for Bulgarian. International Conference RANLP, 246–250.
- ↑ Saon, G., & Chien, J.-T. (2012). Large-Vocabulary Continuous Speech Recognition Systems: A Look at Some Recent Advances. IEEE Signal Processing Magazine, 29(6), 18–33. https://doi.org/10.1109/MSP.2012.2197156
- ↑ Sameti, H., Veisi, H., Bahrani, M., Babaali, B., & Hosseinzadeh, K. (2011). A large vocabulary continuous speech recognition system for Persian language. EURASIP Journal on Audio, Speech, and Music Processing, 2011(1), 6. https://doi.org/10.1186/1687-4722-2011-426795
- ↑ Variani, E., Bagby, T., McDermott, E., & Bacchiani, M. (2017). End-to-End Training of Acoustic Models for Large Vocabulary Continuous Speech Recognition with TensorFlow. Interspeech 2017, 1641–1645. https://doi.org/10.21437/Interspeech.2017-1284
- ↑ Chorowski, J., Bahdanau, D., Cho, K., & Bengio, Y. (2014). End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results (arXiv:1412.1602). arXiv. http://arxiv.org/abs/1412.1602
- ↑ Conneau, A., Lample, G., Ranzato, M., Denoyer, L., & Jégou, H. (2018). Word Translation Without Parallel Data (arXiv:1710.04087). arXiv. http://arxiv.org/abs/1710.04087
- ↑ Saon, G., & Chien, J.-T. (2012). Large-Vocabulary Continuous Speech Recognition Systems: A Look at Some Recent Advances. IEEE Signal Processing Magazine, 29(6), 18–33. doi:10.1109/msp.2012.2197156
- ↑ Han, Kun, Yu, Dong, & Tashev, Ivan. (2014). Speech Emotion Recognition Using Deep Neural Network and Extreme Learning Machine. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. doi:10.21437/Interspeech.2014-57
- ↑ Stolcke, A., Zheng, J., Wang, W., & Abrash, V. (2011). SRILM at sixteen: Update and outlook. In Proceedings of IEEE automatic speech recognition and understanding workshop (Vol. 5). ASRU: Waikoloa.
- ↑ J.E. Hamaker, "MLLR: A Speaker Adaptation Technique For LVCSR," ISIP course lecture, Mississippi State University, 1999.
Group members: Yanhua Liao, Jingxuan Yue, Chenyu Li