=== Neural Networks and Deep Learning algorithms ===
Neural networks and deep learning algorithms enable multimodal speech recognition systems to extract intricate patterns and relationships from diverse data sources, including audio, video, and text. As a result, these systems can deliver more precise, context-aware recognition even in noisy environments, which improves their robustness in real-world use.

The LAS (listen, attend and spell) model was introduced by researchers at [[wikipedia:Google|Google]]. By combining attention mechanisms with [[wikipedia:Recurrent_neural_network|recurrent neural networks (RNNs)]], the model significantly improves ASR accuracy, and it has become a fundamental building block for multimodal speech recognition systems that take both audio and visual cues into account (a minimal sketch of the attention step appears at the end of this section). On a subset of the Google voice search task, LAS achieves a [[wikipedia:Word_error_rate|word error rate (WER)]] of 14.1% without a [[wikipedia:Dictionary|dictionary]] or a [[wikipedia:Language_model|language model]], and 10.3% with language model rescoring over the top 32 beams.<ref>Chan, W., Jaitly, N., Le, Q. V., & Vinyals, O. (2015). Listen, attend and spell. ''arXiv preprint arXiv:1508.01211''.</ref>

Moreover, because multimodal systems receive data points from multiple input modes, they can adapt and learn from user interactions more effectively than unimodal systems. The field of multimodal speech recognition has therefore driven advances in machine learning algorithms, including deep learning models that process and integrate multiple modalities simultaneously. CD-DNN-HMM, an acoustic modeling technique that pairs context-dependent deep neural networks with [[wikipedia:HMM|hidden Markov models (HMMs)]], has demonstrated remarkable performance gains over the older Gaussian-mixture-model-based HMMs (GMM-HMMs) in multiple ASR tasks.<ref>Yao, K., Yu, D., Seide, F., Su, H., Deng, L., & Gong, Y. (2012). Adaptation of context-dependent deep neural networks for automatic speech recognition. ''2012 IEEE Spoken Language Technology Workshop (SLT)'', 366–369. <nowiki>https://doi.org/10.1109/SLT.2012.6424251</nowiki></ref>
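To make the attention step concrete, the following is a minimal sketch of content-based attention over encoder hidden states, the mechanism that lets a LAS-style decoder focus on the relevant part of the input at each output step. The function name, the plain dot-product scoring, and the toy shapes are illustrative assumptions; the published LAS model applies learned transformations to both states before scoring and uses a pyramidal BLSTM encoder.

<syntaxhighlight lang="python">
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Attention-weighted sum of encoder states (hypothetical helper).

    decoder_state:  (d,)   current decoder hidden state
    encoder_states: (T, d) one encoder hidden state per input time step
    """
    # Alignment scores: one similarity score per input time step.
    # Plain dot-product scoring is an assumption for brevity.
    scores = encoder_states @ decoder_state            # shape (T,)

    # Softmax turns the scores into weights that sum to 1,
    # i.e. "where to listen" across the input.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # shape (T,)

    # Context vector: weighted average of the encoder states,
    # used by the decoder when predicting the next output.
    return weights @ encoder_states                    # shape (d,)

# Toy usage: 6 encoder time steps, hidden size 4.
rng = np.random.default_rng(0)
h_enc = rng.normal(size=(6, 4))   # stand-in for listener outputs
s_dec = rng.normal(size=(4,))     # stand-in for speller state
print(attention_context(s_dec, h_enc).shape)  # (4,)
</syntaxhighlight>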