== Historical Context ==

=== 20th Century: Early Experiments with Lip Reading ===
The idea of multimodal speech recognition can be traced back to the mid-20th century. Around this time, pioneering researchers began to explore whether the accuracy of speech recognition in challenging acoustic environments could be improved by combining it with other modalities, which laid the foundation for multimodal approaches to ASR (Automatic [[wikipedia:Speech_recognition|Speech Recognition]]). As early as 1984, scholars conducted research on [[wikipedia:Automated_Lip_Reading|automated lip reading]] to enhance speech recognition.<ref>Petajan, E. (1984). Automatic Lipreading to Enhance Speech Recognition (Speech Reading).</ref> Petajan, in particular, is credited with developing one of the first audio-visual recognition systems. In his experiments, binary mouth images were used to extract mouth parameters such as the height, width, and area of the speaker's mouth, which were then used in the recognition system. The speech was first processed by the acoustic recognizer and then passed on to the visual recognizer for the final decision.<ref>Petajan, E., Bischoff, B., Bodoff, D., & Brooke, N. M. (1988, May). An improved automatic lipreading system to enhance speech recognition. In ''Proceedings of the SIGCHI Conference on Human Factors in Computing Systems'' (pp. 19-25).</ref> This visual analysis system was later used by Goldschen<ref>Goldschen, A. J., Garcia, O. N., & Petajan, E. D. (1997). Continuous Automatic Speech Recognition by Lipreading. In: Shah, M., Jain, R. (eds) Motion-Based Recognition. Computational Imaging and Vision, vol 9. Springer, Dordrecht. <nowiki>https://doi.org/10.1007/978-94-015-8935-2_14</nowiki></ref> to recognize continuous speech visually. The significant contributions of these forerunners paved the way for the integration of audio and visual information in speech recognition.

=== Late 20th Century – Early 21st Century: Integration of Audio and Visual Information ===
Towards the end of the 20th century, there was a growing emphasis on enhancing the [[wikipedia:Robustness|robustness]] of speech recognition systems in the face of various types of background noise in the audio channel. This development gained significant attention because speech recognition systems suffered notable performance setbacks when operating in noisy environments, dealing with unfavorable acoustic channel conditions, or contending with issues like [[wikipedia:Crosstalk|crosstalk]]. Moreover, researchers observed a degree of [[wikipedia:Orthogonality|orthogonality]] between the audio and video channels, which presented an opportunity to improve recognition performance by integrating the two. Consequently, two different approaches to combining audio and visual information were explored.<ref>Verma, A., Faruquie, T., Neti, C., & Basu, S. (n.d.). ''Late Integration in Audio-Visual Continuous Speech Recognition''.</ref>

* '''Early Integration:''' In the first approach, audio and visual features are computed from the acoustic and visual speech data respectively and combined before recognition. This approach has a drawback: it cannot treat the different kinds of information in audio and video separately, because a single common recognizer is used for both.<ref>Chen, T., & Rao, R. R. (1998). Audio-visual integration in multimodal communication. ''Proceedings of the IEEE'', ''86''(5), 837–852. <nowiki>https://doi.org/10.1109/5.664274</nowiki></ref>
* '''Late Integration:''' The other approach, known as late integration, uses separate systems for audio and video recognition and then merges the results of the two systems to produce the final outcome. It is better equipped to handle the diverse categories of information in audio and video because it keeps them separate until the very end, when they are combined; a minimal sketch contrasting the two strategies follows this list.<ref>Bregler, C., Manke, S., Hild, H., & Waibel, A. (1993, March). Bimodal sensor integration on the example of 'speechreading'. In ''IEEE International Conference on Neural Networks'' (pp. 667-671). IEEE.</ref>
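Purely as an illustration, the sketch below contrasts the two fusion strategies, assuming pre-computed, time-aligned audio and visual feature vectors and a placeholder recognizer; it is not drawn from any of the cited systems, and all function names, dimensions, and weights are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical per-frame features: 13 acoustic coefficients and 3 mouth
# parameters (height, width, area), time-aligned across 100 frames.
audio_feats = np.random.rand(100, 13)   # (frames, audio feature dim)
visual_feats = np.random.rand(100, 3)   # (frames, visual feature dim)

def recognize(features):
    """Placeholder recognizer: returns a word hypothesis with a score."""
    score = float(features.mean())      # stand-in for a real model's score
    return {"word": "hello", "score": score}

# Early integration: concatenate the two streams frame by frame and feed
# the joint feature vectors to a single, common recognizer.
joint_feats = np.concatenate([audio_feats, visual_feats], axis=1)
early_result = recognize(joint_feats)

# Late integration: run separate audio and visual recognizers and combine
# their (weighted) scores only at the decision stage.
audio_hyp = recognize(audio_feats)
visual_hyp = recognize(visual_feats)
audio_weight = 0.7                      # e.g. trust audio more in clean conditions
combined_score = (audio_weight * audio_hyp["score"]
                  + (1 - audio_weight) * visual_hyp["score"])
late_result = {"word": audio_hyp["word"], "score": combined_score}

print(early_result, late_result)
</syntaxhighlight>

In practice, the relative weighting of the two streams is often tied to an estimate of the acoustic conditions, which is where decision-level fusion derives much of its robustness to noise.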
In the early 21st century, researchers introduced innovative methods, including composite [[wikipedia:Feature_(machine_learning)|feature]] [[wikipedia:Vector|vectors]] and a [[Hidden Markov Models|hidden Markov model]] structure accommodating audio-visual [[wikipedia:Asynchrony|asynchrony]].<ref>Tomlinson, M. J., Russell, M. J., & Brooke, N. M. (1996). Integrating audio and visual information to provide highly robust speech recognition. ''1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings'', ''2'', 821–824. <nowiki>https://doi.org/10.1109/ICASSP.1996.543247</nowiki></ref> These techniques demonstrated substantial improvements in recognition accuracy, particularly in the presence of interfering noise, and marked the inception of multimodal approaches in which audio and visual information converge, heralding a new era in speech recognition technology characterized by increased accuracy and [[wikipedia:Resilience|resilience]] in diverse communication scenarios.

=== 2010s: Neural Networks and Deep Learning ===
As technology continued to evolve, the emergence of [[wikipedia:Artificial neural network|artificial neural networks]] and [[wikipedia:Deep_learning|deep learning]] marked a transformative shift in the field of speech recognition and enabled the development of more accurate and versatile multimodal speech recognition systems. [[wikipedia:Artificial_neural_network|Artificial neural networks]] have been in use for over half a century, with applications in speech processing dating back almost as long. Early attempts at using shallow and small [[wikipedia:Neural_network|neural networks]] for speech recognition did not outperform generative models like [[wikipedia:Mixture_model|GMM]]-[[wikipedia:Hidden_Markov_model|HMM]]. However, researchers continued to advance the field of multimodal speech recognition by harnessing the capabilities of neural networks and deep learning. The following are some examples of the application of artificial neural networks and deep learning in multimodal speech recognition.

* '''End-to-End Multimodal ASR:''' Building on the success of Transformers in [[wikipedia:Natural_language_processing|natural language processing (NLP)]], researchers have extended these architectures to multimodal tasks. Investigating [[Development of End-to-End Models|end-to-end]] multimodal automatic speech recognition (ASR) systems has been a key focus. These systems leverage deep learning to directly map input audio-visual data to [[wikipedia:Transcription_(linguistics)|transcriptions]], eliminating the need for intermediate steps in traditional ASR [[wikipedia:Pipeline_(computing)|pipelines]]; a minimal sketch of such a model follows below.
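As an illustration only, the sketch below shows the overall shape of an end-to-end audio-visual model in PyTorch: two modality encoders, frame-level fusion by concatenation, and a character-level output trained with a CTC loss. It is a minimal sketch under assumed input shapes, not the architecture of any published system, and all dimensions and names are hypothetical.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class TinyAVASR(nn.Module):
    """Toy end-to-end audio-visual ASR model (illustrative only)."""
    def __init__(self, n_audio=80, n_visual=512, n_chars=29):
        super().__init__()
        # Separate recurrent encoders for each modality.
        self.audio_enc = nn.LSTM(n_audio, 128, batch_first=True, bidirectional=True)
        self.visual_enc = nn.LSTM(n_visual, 128, batch_first=True, bidirectional=True)
        # Maps the fused representation to character logits (index 0 = CTC blank).
        self.classifier = nn.Linear(2 * 256, n_chars)

    def forward(self, audio, visual):
        # audio:  (batch, frames, n_audio)   e.g. log-mel filterbank features
        # visual: (batch, frames, n_visual)  e.g. per-frame lip-region embeddings
        a, _ = self.audio_enc(audio)
        v, _ = self.visual_enc(visual)
        fused = torch.cat([a, v], dim=-1)   # frame-level fusion by concatenation
        return self.classifier(fused)       # (batch, frames, n_chars)

# One dummy training step with CTC loss (assumes time-aligned streams).
model = TinyAVASR()
audio = torch.randn(2, 100, 80)
visual = torch.randn(2, 100, 512)
targets = torch.randint(1, 29, (2, 20))     # character indices; 0 is reserved for blank
logits = model(audio, visual).log_softmax(-1)
loss = nn.CTCLoss(blank=0)(
    logits.transpose(0, 1),                 # CTC expects (time, batch, classes)
    targets,
    input_lengths=torch.full((2,), 100, dtype=torch.long),
    target_lengths=torch.full((2,), 20, dtype=torch.long),
)
loss.backward()
</syntaxhighlight>

A practical system would additionally handle the differing audio and video frame rates, use stronger encoders (for example spatiotemporal convolutions or Transformers), and decode with beam search; the sketch only illustrates the direct mapping from audio-visual input to a transcription.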
Several pioneering systems embody this approach, such as [[wikipedia:LipNet|LipNet]], the first end-to-end sentence-level lipreading model, which simultaneously learns spatiotemporal visual features and a sequence model. According to the research of Yannis M. Assael and his team, LipNet achieves 95.2% accuracy on the sentence-level, overlapped-speaker split of the GRID [[wikipedia:Text_corpus|corpus]].<ref>Assael, Y. M., Shillingford, B., Whiteson, S., & de Freitas, N. (2016). ''LipNet: End-to-End Sentence-level Lipreading'' (arXiv:1611.01599). arXiv. <nowiki>https://doi.org/10.48550/arXiv.1611.01599</nowiki></ref>

To summarize, these investigations represent a selection of crucial contributions that paved the way for more accurate, robust, and context-aware multimodal systems, with applications ranging from virtual assistants to accessibility tools and beyond.