=== 2010s: Neural Networks and Deep Learning ===
The emergence of [[wikipedia:Artificial neural network|artificial neural networks]] and [[wikipedia:Deep_learning|deep learning]] marked a transformative shift in speech recognition and enabled the development of more accurate and versatile multimodal speech recognition systems. [[wikipedia:Artificial_neural_network|Artificial neural networks]] have been in use for over half a century, with applications in speech processing dating back almost as long. Early attempts at using small, shallow [[wikipedia:Neural_network|neural networks]] for speech recognition did not outperform generative models such as the [[wikipedia:Mixture_model|GMM]]-[[wikipedia:Hidden_Markov_model|HMM]]. Nevertheless, researchers continued to advance multimodal speech recognition by harnessing the capabilities of neural networks and deep learning. The following are some examples of their application in multimodal speech recognition.

* '''End-to-End Multimodal ASR:''' Building on the success of Transformers in [[wikipedia:Natural_language_processing|natural language processing (NLP)]], researchers extended these architectures to multimodal tasks, and [[Development of End-to-End Models|end-to-end]] multimodal automatic speech recognition (ASR) became a key focus. Such systems use deep learning to map input audio-visual data directly to [[wikipedia:Transcription_(linguistics)|transcriptions]], eliminating the intermediate steps of traditional ASR [[wikipedia:Pipeline_(computing)|pipelines]]. A notable example is [[wikipedia:LipNet|LipNet]], the first end-to-end sentence-level lipreading model, which simultaneously learns spatiotemporal visual features and a sequence model. According to Yannis M. Assael and his team, LipNet achieves 95.2% accuracy on sentence-level, overlapped-speaker split tasks on the GRID [[wikipedia:Text_corpus|corpus]]; a minimal sketch of this end-to-end audio-visual approach follows at the end of this section.<ref>Assael, Y. M., Shillingford, B., Whiteson, S., & de Freitas, N. (2016). ''LipNet: End-to-End Sentence-level Lipreading'' (arXiv:1611.01599). arXiv. <nowiki>https://doi.org/10.48550/arXiv.1611.01599</nowiki></ref>

To summarize, these investigations represent a selection of crucial contributions that paved the way for more accurate, robust, and context-aware multimodal systems, with applications ranging from virtual assistants to accessibility tools and beyond.
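
To make the end-to-end audio-visual idea above more concrete, below is a minimal sketch, not the actual LipNet architecture, of a PyTorch model that encodes lip-region video frames and acoustic features separately, fuses them, and emits per-frame character probabilities suitable for CTC training. The class name <code>AudioVisualASR</code>, all layer sizes, and the 80-dimensional acoustic input are illustrative assumptions rather than details taken from the cited work.

<syntaxhighlight lang="python">
# Minimal sketch of an end-to-end audio-visual ASR model (illustrative, not LipNet).
import torch
import torch.nn as nn


class AudioVisualASR(nn.Module):
    def __init__(self, num_chars=28, vis_dim=256, aud_dim=128, hidden=256):
        super().__init__()
        # Visual front end: 3D convolution over (time, height, width) of lip frames.
        self.visual_frontend = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep the time axis, pool away space
        )
        self.visual_proj = nn.Linear(32, vis_dim)
        # Audio front end: projection of per-frame acoustic features (assumed 80-dim filterbanks).
        self.audio_proj = nn.Linear(80, aud_dim)
        # Fusion + sequence model over the concatenated modalities.
        self.rnn = nn.GRU(vis_dim + aud_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_chars + 1)  # +1 for the CTC blank

    def forward(self, video, audio):
        # video: (batch, 3, T, H, W); audio: (batch, T, 80), assumed time-aligned.
        v = self.visual_frontend(video)                 # (batch, 32, T, 1, 1)
        v = v.squeeze(-1).squeeze(-1).transpose(1, 2)   # (batch, T, 32)
        v = self.visual_proj(v)                         # (batch, T, vis_dim)
        a = self.audio_proj(audio)                      # (batch, T, aud_dim)
        fused, _ = self.rnn(torch.cat([v, a], dim=-1))  # (batch, T, 2 * hidden)
        return self.classifier(fused).log_softmax(-1)   # per-frame character log-probs


# Toy usage: 75 video frames of 64x64 RGB lip crops plus 75 acoustic feature frames.
model = AudioVisualASR()
log_probs = model(torch.randn(2, 3, 75, 64, 64), torch.randn(2, 75, 80))
print(log_probs.shape)  # torch.Size([2, 75, 29])
</syntaxhighlight>

In a full system the log-probabilities would be trained against reference transcriptions with a CTC loss (for example <code>torch.nn.CTCLoss</code>), which is what lets such a model map audio-visual input to text without intermediate alignment steps.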