Multimodal Speech Recognition
Introduction
Speech perception by humans is a multi-channel process. People perceive speech not only through hearing but also via other channels, among which the visual channel, particularly lip movements, has a prominent influence. The famous McGurk effect[1] demonstrates the power of visual information: when hearing the sound /ba/ while seeing the lip movement for /ga/, many people perceive the sound as /da/. Numerous studies have also shown that lip movements help listeners disambiguate sounds in both noisy and clean environments[2].
Inspired by human multimodal speech perception, automatic speech recognition (ASR) has adopted a multimodal approach as well: the system is not trained solely on acoustic data but on integrated data from several modalities, e.g. a combination of acoustic and visual input. Multimodal ASR has become a popular research topic in recent years because it typically recognizes speech more accurately than unimodal ASR. On this page, we briefly introduce its development throughout history, some key innovations and impacts, as well as a few future research ideas.
Historical Context
20th Century: Early Experiments with Lip Reading
The idea of multimodal speech recognition can be traced back to the mid-20th century. Around this time, some pioneering researchers set out to explore whether the accuracy of speech recognition in challenging acoustic environments could be improved by combining it with other modalities, which laid the foundation for multimodal approaches to ASR (Automatic Speech Recognition).
As early as 1984, scholars conducted research on automated lip reading to enhance speech recognition.[3] Prominent scholars such as E. D. Petajan are renowned for developing one of the first audio-visual recognition systems. In his experiment, binary mouth images were used to extract mouth parameters such as the height, width, and area of the speaker's mouth, which were later used in the recognition system. The speech was first processed by the acoustic recognizer and then passed on to the visual recognizer for the final decision.[4] This visual analysis system was later used by Goldschen[5] to recognize continuous speech visually. The significant contributions made by these forerunners paved the way for the integration of audio and visual information in speech recognition.
Late 20th Century - Early 21st Century: Integration of Audio and Visual Information
Toward the end of the 20th century, there was a growing emphasis on enhancing the robustness of speech recognition systems in the face of various types of background noise in the audio channel. This development gained significant attention because speech recognition systems suffered notable performance setbacks when operating in noisy environments, dealing with unfavorable acoustic channel conditions, or contending with issues like crosstalk. Moreover, researchers observed a certain amount of orthogonality between the audio and video channels, presenting an opportunity to enhance recognition performance by integrating both channels. Consequently, two different approaches to combining audio and visual information were tried.[6]
- Early Integration: In the first approach, audio and visual features are computed from the acoustic and visual speech data respectively and combined before recognition. A drawback of this approach is that it cannot handle different categories or types of information in the audio and video streams, because it uses a common recognizer for both. [7]
- Late Integration: The other approach, known as late integration, uses separate recognition systems for audio and video and then merges their results to produce the final outcome. Because the two streams are kept separate until the very end, this approach is well equipped to handle diverse categories of information in audio and video. A minimal sketch contrasting the two fusion strategies follows below. [8]
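The contrast between the two strategies can be illustrated with a short sketch. This is a simplified illustration under our own assumptions, not code from the cited systems; all function and variable names (`early_fusion`, `late_fusion`, the classifier objects, the fusion weight) are hypothetical.

```python
import numpy as np

def early_fusion(audio_feats, visual_feats, classifier):
    """Early integration: concatenate per-frame features, then run one common recognizer."""
    fused = np.concatenate([audio_feats, visual_feats], axis=-1)  # shape (T, D_audio + D_visual)
    return classifier(fused)                                      # per-frame class scores

def late_fusion(audio_feats, visual_feats, audio_clf, visual_clf, w_audio=0.7):
    """Late integration: run separate recognizers, then merge their output scores."""
    audio_scores = audio_clf(audio_feats)      # (T, num_classes)
    visual_scores = visual_clf(visual_feats)   # (T, num_classes)
    # Weighted combination of the two streams; the audio weight can be lowered
    # when the acoustic channel is noisy, since the streams stay independent.
    return w_audio * audio_scores + (1.0 - w_audio) * visual_scores
```

Because the late-fusion weight can be tuned per condition (for example, lowered at poor signal-to-noise ratios), late integration handles modality-specific degradation more gracefully, which matches the advantage described above.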
By the early 21st century, researchers had introduced innovative methods, including composite feature vectors and a hidden Markov model structure accommodating audio-visual asynchrony.[9] These techniques demonstrated substantial improvements in recognition accuracy, particularly in the presence of interfering noise, and marked the inception of multimodal approaches in which audio and visual information converge, heralding a new era in speech recognition technology characterized by increased accuracy and resilience in diverse communication scenarios.
2010s: Neural Networks and Deep Learning
As technology continued to evolve, the emergence of artificial neural networks and deep learning marked a transformative shift in the field of speech recognition and enabled the development of more accurate and versatile multimodal speech recognition systems.
Artificial neural networks have been in use for over half a century, with applications in speech processing dating back almost as long. Early attempts at using shallow and small neural networks for speech recognition did not outperform generative models like GMM-HMM. However, researchers endeavored to advance the field of multimodal speech recognition by harnessing the capabilities of neural networks and deep learning. The following are some examples of the application of artificial neural networks and deep learning in multimodal speech recognition.
- End-to-End Multimodal ASR: Building on the success of Transformers in natural language processing (NLP), researchers have extended these architectures to multimodal tasks. Investigating end-to-end multimodal automatic speech recognition (ASR) systems has subsequently been a key focus. These systems leverage deep learning to directly map input audio-visual data to transcriptions, eliminating the need for intermediate steps in traditional ASR pipelines. A notable pioneering example is LipNet, the first end-to-end sentence-level lipreading model, which simultaneously learns spatiotemporal visual features and a sequence model. According to Yannis M. Assael and his team, LipNet achieves 95.2% accuracy on sentence-level, overlapped-speaker split tasks on the GRID corpus.[10]
To summarize, these investigations represent a selection of crucial contributions that paved the way for more accurate, robust, and context-aware multimodal systems, with applications ranging from virtual assistants to accessibility tools and beyond.
Key Innovations
Multimodal speech recognition incorporates multiple modes of input (such as audio, visual, and sometimes even tactile) to interpret and convert spoken language into text or execute commands. Compared to single-modal (typically audio-only) speech recognition systems, multimodal systems have introduced several innovations:
Data Fusion
The primary innovation in multimodal speech recognition is data fusion. By merging information from multiple sensory modalities, such as audio, visual, and tactile input, the speech recognition system can better comprehend and process speech and therefore improve recognition accuracy. For example, the effective integration of audio and visual cues, specifically the speaker's lip images, can enhance ASR accuracy, particularly in challenging, noisy environments[11]. Fusing data from multiple sensory modalities also enables multimodal speech recognition systems to support more complex application scenarios, such as sign language recognition, gesture recognition, and emotion analysis, enhancing the system's versatility and adaptability.
Robustness in Noisy Environments
One of the significant challenges in automatic speech recognition (ASR) is its performance degradation in noisy environments. Traditional ASR systems excel in quiet conditions but struggle when confronted with background noise. However, several studies have shown that multimodal speech recognition systems outperform single-modal systems in noisy environments[12][13][14]. This innovation provides an essential breakthrough in ensuring accurate speech recognition across various real-world scenarios, where noise interference is a common occurrence.
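Such robustness comparisons typically corrupt clean recordings with noise at a controlled signal-to-noise ratio (SNR). The snippet below is a generic sketch of that evaluation setup, not taken from the cited studies; it scales the noise so that 10·log10(P_speech / P_noise) equals the requested SNR in decibels.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add `noise` to `speech`, scaled so that the mixture has the requested SNR in dB."""
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]       # match lengths
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Required noise power after scaling: p_speech / 10**(snr_db / 10)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

Sweeping `snr_db` from clean conditions (e.g. 20 dB) down to 0 dB or below is the usual way the gap between audio-only and audio-visual systems is measured in studies like those cited above.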
Increased Contextual Understanding
By analyzing both the spoken word and the speaker's facial expressions or gestures, multimodal systems can better understand the context and emotion behind the speech, leading to more accurate results. One study created a database of various emotions expressed by people in speech-based interactions[15]; by combining facial expressions, body gestures, and acoustic analysis, the authors successfully integrated three modalities to enhance emotion recognition. The results show that multimodal approaches have a significant advantage in emotion recognition, especially when facial expressions, body gestures, and speech information are combined.
Neural Networks and Deep Learning Algorithms
Neural networks and deep learning algorithms empower multimodal speech recognition systems to extract intricate patterns and relationships from diverse data sources, including audio, video, and text. Consequently, more precise and context-aware speech recognition results can be delivered even in noisy environments, enhancing robustness in real-world usage. The LAS (listen, attend and spell) model was introduced by researchers at Google. By combining attention mechanisms with recurrent neural networks (RNNs), this model significantly improves the accuracy of ASR and has become a fundamental building block for multimodal speech recognition that takes both audio and visual cues into account. On a subset of the Google voice search task, LAS achieves a word error rate (WER) of 14.1% without a dictionary or a language model, and 10.3% with language model rescoring over the top 32 beams.[16]
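At the heart of LAS-style models is an attention step that, at each output step, compares the decoder state with every encoder frame and forms a weighted context vector. The function below is a simplified sketch of that idea under our own assumptions (LAS itself computes the attention energies with a small MLP; a plain dot product is used here for brevity, and all names are ours).

```python
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_states):
    """Content-based attention over encoder frames.

    decoder_state:  (batch, dim)       current decoder hidden state
    encoder_states: (batch, time, dim) encoder outputs ("listener" features)
    returns:        context (batch, dim), attention weights (batch, time)
    """
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(-1)).squeeze(-1)  # (batch, time)
    weights = F.softmax(scores, dim=-1)                                          # sum to 1 over time
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)         # (batch, dim)
    return context, weights
```

In an audio-visual variant, the same mechanism can attend over visual features as well, or over both streams, which is the basis of the modality-attention approaches discussed later on this page.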
Moreover, with more data points from multiple input modes, multimodal systems can adapt and learn from user interactions more effectively than unimodal systems. The field of multimodal speech recognition has therefore driven advancements in machine learning algorithms, including deep learning models that can process and integrate multiple modalities simultaneously. The context-dependent DNN-HMM (CD-DNN-HMM), a state-of-the-art acoustic modeling technique at the time, demonstrated remarkable performance gains compared to the older Gaussian-mixture-model-based HMMs (GMM-HMMs) in multiple ASR tasks[17].
Impact
Multimodal Speech Recognition's Influences on Speech Technology
Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are two key deep learning methods. In multimodal speech recognition, extracting and merging features from audio and image data plays a vital role. A common strategy is to use CNNs to process both audio and image data and extract important features: for audio, CNNs can extract features from spectrograms, while for images they identify objects and generate feature vectors capturing important visual details. After feature extraction, models like RNNs or Transformers are used to integrate these features, capturing the connections between audio and visual data. Training the audio and visual recognizers separately and then linking them together resulted in lower phone error rates; when a new bilinear DNN that allows joint training on audio and visual data was used, even lower error rates were achieved.[18]
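A minimal sketch of such a pipeline is shown below. It is a simplified illustration under our own assumptions (hypothetical layer sizes, a 32x32 lip crop, and audio and video already aligned to the same number of frames), not an implementation from the cited work.

```python
import torch
import torch.nn as nn

class AudioVisualEncoder(nn.Module):
    """Toy audio-visual encoder: CNN front-ends per modality, frame-wise concatenation, one RNN."""

    def __init__(self, n_mels=80, emb=128):
        super().__init__()
        # 1-D convolution over mel-spectrogram frames: input (batch, n_mels, time).
        self.audio_cnn = nn.Conv1d(n_mels, emb, kernel_size=3, padding=1)
        # Small 2-D CNN over per-frame lip crops: input (batch*time, 1, 32, 32).
        self.visual_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, emb),
        )
        self.rnn = nn.GRU(2 * emb, emb, batch_first=True)

    def forward(self, mel, lips):
        # mel: (batch, n_mels, time); lips: (batch, time, 1, 32, 32), one crop per audio frame.
        a = self.audio_cnn(mel).transpose(1, 2)              # (batch, time, emb)
        b, t = lips.shape[:2]
        v = self.visual_cnn(lips.reshape(b * t, 1, 32, 32))  # (batch*time, emb)
        v = v.reshape(b, t, -1)                              # (batch, time, emb)
        fused, _ = self.rnn(torch.cat([a, v], dim=-1))       # (batch, time, emb)
        return fused                                         # fed to a CTC or attention decoder
```

In real systems the video frame rate (commonly 25 fps) is lower than the acoustic frame rate, so visual features are usually upsampled (or audio features downsampled) before this kind of frame-wise concatenation.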
CTC (Connectionist Temporal Classification) is a technique for aligning audio and text, and it has found application in multimodal speech recognition. CTC allows the alignment of audio and text data, enabling the system to learn when and how to map audio features onto text labels. One experiment compared a CTC bidirectional-LSTM acoustic model with a sequence-to-sequence (S2S) model using visual semantic features and found that the CTC output stays closer to the acoustics of an utterance.[19]
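To make the mechanics concrete, the snippet below shows how a CTC objective is typically attached to a frame-level encoder using PyTorch's built-in `nn.CTCLoss`; the dimensions and random tensors are placeholders for illustration only.

```python
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 30, 12         # frames, batch size, output classes (incl. blank), target length
ctc = nn.CTCLoss(blank=0)          # index 0 is reserved for the CTC blank symbol

# Per-frame log-probabilities from an (audio or audio-visual) encoder, shape (T, N, C).
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, S), dtype=torch.long)   # label sequences, no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow back into the encoder; no frame-level alignment is needed
```

CTC sums over all monotonic alignments between the T frames and the S labels, which is why no hand-made frame-to-label alignment is required for training.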
Multimodal speech recognition systems need to adapt to varying environmental noise conditions, allowing the system to adjust automatically in different environments and thereby enhancing robustness. For speaker adaptation, models also need to adapt to the speech characteristics of different speakers; adaptive techniques can capture speaker-specific traits and improve recognition performance. Experiments have shown that a multimodal ASR (MMASR) model achieves significant gains (up to 4.2% WER improvement) compared to a traditional speech-to-text architecture in noisy environments.[20]
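Since word error rate (WER) is the metric behind these comparisons, it is worth showing how it is computed: a word-level edit distance (substitutions, deletions, insertions) divided by the reference length. The function below is a standard textbook implementation, not code from the cited paper.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("turn on the lights", "turn off the light"))  # 0.5 (two substitutions)
```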
Traditional multimodal speech recognition systems comprise many independent processing stages, including audio feature extraction, image feature extraction, and speech recognition. Each stage is designed individually, which can lead to greater system complexity and higher computational cost. The key advantage of end-to-end multimodal models lies in simplifying the entire multimodal speech recognition pipeline while enhancing performance: end-to-end models feed audio and image data into a single deep learning model, enabling it to learn automatically how to extract speech information from multimodal inputs. Understanding speech from visual signals alone has been of interest for decades. One group of researchers proposed a modality attention mechanism, integrated into an end-to-end attention-based AVSR system, to weigh information from the multimodal input; their results show a 36% improvement.[21]
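The modality-attention idea can be pictured as a learned, per-frame weighting over the modality embeddings before decoding. The module below is a simplified sketch under our own naming and dimensions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Fuse audio and visual embeddings with learned per-frame modality weights."""

    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # scores each modality embedding

    def forward(self, audio_emb, visual_emb):
        # Both inputs: (batch, time, dim), already aligned frame by frame.
        stacked = torch.stack([audio_emb, visual_emb], dim=2)   # (batch, time, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=2)     # (batch, time, 2, 1)
        return (weights * stacked).sum(dim=2)                   # (batch, time, dim)
```

When the acoustic channel is degraded, the learned weights can shift toward the visual stream frame by frame, which is the intuition behind the robustness gains reported for this kind of system.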
Multimodal Speech Recognition's Influence in Real Life
Firstly, multimodal speech recognition has promoted the development of natural human-computer interaction techniques. Users can engage with computing systems and devices in a more intuitive manner, combining speech, gestural inputs, and facial expressions. For example, users can interact with and explore immersive virtual worlds through voice and gestures.
In the healthcare sector, multimodal speech recognition has been useful in supporting communication between doctors and patients. The technology facilitates real-time transcription of spoken content and contributes to the smooth collection of medical diagnoses and patient records. For instance, a system can recognize the specific place on the body that a patient points to and combine this with the speech content, producing more accurate diagnoses in less time.
Also, the integration of multimodal speech recognition into smart home ecosystems and Internet of Things (IoT) devices has introduced significant change. Users can control household devices such as voice assistants, lighting systems, and smart locks via spoken commands and other sensory modalities. A smart lock, for example, can require a double "password" that combines a specific gesture with a voice command to open the door, so users' security can be better guaranteed.
Multimodal speech recognition is primarily driven by the pursuit of enhanced robustness. When dependent on audio signals alone, speech recognition systems are easily affected by background noise, differences in speakers' articulation, and other uncertainties. The integration of insights from diverse modalities enables these systems to deal with such challenges more effectively; without it, some user instructions cannot be executed by the machine in a noisy environment.
Future Research
In this section, we propose several directions for future research. To begin with, in terms of databases, it is necessary to design and build large-scale databases for low-resource languages. Because AVSR is a data-driven technology, the scarcity of audio-visual databases for low-resource languages limits the training and development of advanced AVSR for those languages. Currently, the dominant source language of large-scale datasets is still English, followed by Chinese, Russian, Arabic and a few other European languages. Moreover, future researchers could also work on improving database quality in various respects[22]: for example, building publicly and easily accessible databases for general purposes, building databases with multiple recording angles, and recording audio and video at high quality.
With regard to modalities, future research need not be limited to the bimodal case of audio and visual input; it could be trimodal, or even truly multimodal, spanning sight, hearing, touch, taste and smell[23]. One previous study has already suggested a trimodal setup combining audio, visual and aero-tactile information for speech perception in a noisy background[24]. In this study, an air puff was added to the audiovisual stimuli /pa/ and /ba/, and matched pairs (e.g. /pa/ with an air puff) had higher speech clarity than mismatched pairs, as reflected in the lower SNR that listeners needed for the matched pairs. With the abilities of machine learning and deep learning, it may only be a matter of time before researchers discover how to extract, represent and fuse such multimodal features in current speech recognition models.
What's more, researchers could explore whether multimodal speech recognition can be applied to a wider range of domains. Automatic speech recognition is classically used for healthcare purposes, such as transcribing clinical notes (Nuance) and recognizing whispered speech from patients (Whispp), but with the help of an extra modality, i.e. visual information, applying audiovisual speech recognition in forensic fields is relatively new. A good example is the use of audiovisual speech recognition to transcribe audio-visual speech materials and detect child abuse[25].
Finally, novel deep learning architectures might also be worth exploring in future research. These could be new models that better integrate features across different modalities and thus improve recognition performance. For example, the first hybrid CTC/Attention architecture for audio-visual recognition led to a 1.3% decrease in word error rate[26]. Other studies mention further approaches, for example, the integration of DNN-HMM and MSHMM models[27]. A sketch of the hybrid objective follows below.
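The hybrid CTC/Attention recipe trains both objectives jointly, roughly L = λ·L_CTC + (1 − λ)·L_attention, so the monotonic alignment bias of CTC regularizes the more flexible attention decoder. The snippet below is a minimal sketch with placeholder loss values; λ and the loss objects are illustrative, not taken from the cited paper.

```python
import torch

def hybrid_loss(ctc_loss, attention_loss, lam=0.3):
    """Interpolate the CTC and attention (cross-entropy) objectives for joint training."""
    return lam * ctc_loss + (1.0 - lam) * attention_loss

# Placeholder values standing in for the outputs of the two branches of a real model.
ctc_l, att_l = torch.tensor(2.4), torch.tensor(1.7)
print(hybrid_loss(ctc_l, att_l))   # tensor(1.9100): 0.3 * 2.4 + 0.7 * 1.7
```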
ChatGPT Review
We asked ChatGPT to review the content of our Wiki page "Multimodal Speech Recognition", and the prompts we used are as follows:
- Imagine you are a professor of the master program "Voice technology", and your students are working on a group assignment of creating their own Wiki page to introduce speech recognition. Here is the topic and content... Can you offer some comments to help them improve their work?
- Act as a student who has no previous knowledge about multimodal speech recognition and review this text. Advise me on how to make improvements of this text.
Feedback:
The student has made a commendable effort in creating a comprehensive wiki page on Multimodal Speech Recognition. This page is informative and well-structured, but with enhancements in specific areas, it can become an even more valuable resource. However, there are still some areas for improvement:
- Specific Examples: To enhance the page's informativeness, encourage the student to provide specific real-world examples or case studies that demonstrate how multimodal speech recognition has been applied in various domains. This will help readers better understand the practical applications of the technology.
- Balance of Detail: While the page contains a lot of valuable information, it's important to maintain a balance between providing in-depth details and keeping the content concise and digestible. Consider revising sections to make them more concise where necessary.
- Peer Review: Suggest seeking input from peers or experts in the field to gather diverse perspectives and feedback for further improvement. Collaborative editing can lead to valuable enhancements.
- Visual Elements: ChatGPT suggested adding images, diagrams or charts to illustrate key concepts. It will improve readability and help understanding.
Group Members
Xueying Liu, Ting Zhang, Weixi Lai, Weihao Jiang
References
1. Mcgurk, H., & Macdonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746–748. https://doi.org/10.1038/264746a0
2. Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2130–2134. https://doi.org/10.1109/ICASSP.2015.7178347
3. Petajan, E. (1984). Automatic Lipreading to Enhance Speech Recognition (Speech Reading).
4. Petajan, E., Bischoff, B., Bodoff, D., & Brooke, N. M. (1988, May). An improved automatic lipreading system to enhance speech recognition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 19–25).
5. Goldschen, A. J., Garcia, O. N., & Petajan, E. D. (1997). Continuous Automatic Speech Recognition by Lipreading. In Shah, M., & Jain, R. (Eds.), Motion-Based Recognition. Computational Imaging and Vision, vol. 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-8935-2_14
6. Verma, A., Faruquie, T., Neti, C., & Basu, S. (n.d.). Late integration in audio-visual continuous speech recognition.
7. Chen, T., & Rao, R. R. (1998). Audio-visual integration in multimodal communication. Proceedings of the IEEE, 86(5), 837–852. https://doi.org/10.1109/5.664274
8. Bregler, C., Manke, S., Hild, H., & Waibel, A. (1993, March). Bimodal sensor integration on the example of 'speechreading'. In IEEE International Conference on Neural Networks (pp. 667–671). IEEE.
9. Tomlinson, M. J., Russell, M. J., & Brooke, N. M. (1996). Integrating audio and visual information to provide highly robust speech recognition. 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, 2, 821–824. https://doi.org/10.1109/ICASSP.1996.543247
10. Assael, Y. M., Shillingford, B., Whiteson, S., & de Freitas, N. (2016). LipNet: End-to-End Sentence-level Lipreading (arXiv:1611.01599). arXiv. https://doi.org/10.48550/arXiv.1611.01599
11. Tomlinson, M. J., Russell, M. J., & Brooke, N. M. (1996). Integrating audio and visual information to provide highly robust speech recognition. 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, 2, 821–824. https://doi.org/10.1109/ICASSP.1996.543247
12. Chibelushi, C. C. (1996). Design issues for a digital audio-visual integrated database. IEE Colloquium on Integrated Audio-Visual Processing for Recognition, Synthesis and Communication, 1996, 7–7. https://doi.org/10.1049/ic:19961151
13. Stewart, D., Seymour, R., Pass, A., & Ming, J. (2014). Robust Audio-Visual Speech Recognition Under Noisy Audio-Video Conditions. IEEE Transactions on Cybernetics, 44(2), 175–184. https://doi.org/10.1109/TCYB.2013.2250954
14. Kashiwagi, Y., Suzuki, M., Minematsu, N., & Hirose, K. (2012). Audio-visual feature integration based on piecewise linear transformation for noise robust automatic speech recognition. 2012 IEEE Spoken Language Technology Workshop (SLT), 149–152. https://doi.org/10.1109/SLT.2012.6424213
15. Kessous, L., Castellano, G., & Caridakis, G. (2010). Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis. Journal on Multimodal User Interfaces, 3(1–2), 33–48. https://doi.org/10.1007/s12193-009-0025-5
16. Chan, W., Jaitly, N., Le, Q. V., & Vinyals, O. (2015). Listen, attend and spell. arXiv preprint arXiv:1508.01211.
17. Yao, K., Yu, D., Seide, F., Su, H., Deng, L., & Gong, Y. (2012). Adaptation of context-dependent deep neural networks for automatic speech recognition. 2012 IEEE Spoken Language Technology Workshop (SLT), 366–369. https://doi.org/10.1109/SLT.2012.6424251
18. Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2130–2134. https://doi.org/10.1109/ICASSP.2015.7178347
19. Palaskar, S., Sanabria, R., & Metze, F. (2018). End-to-end Multimodal Speech Recognition. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5774–5778. https://doi.org/10.1109/ICASSP.2018.8462439
20. Srinivasan, T., Sanabria, R., & Metze, F. (2019). Analyzing Utility of Visual Context in Multimodal Speech Recognition Under Noisy Conditions. arXiv. https://doi.org/10.48550/ARXIV.1907.00477
21. Zhou, P., Yang, W., Chen, W., Wang, Y., & Jia, J. (2019). Modality Attention for End-to-end Audio-visual Speech Recognition. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6565–6569. https://doi.org/10.1109/ICASSP.2019.8683733
22. Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. International Journal of Advanced Robotic Systems, 17(6), 172988142097608. https://doi.org/10.1177/1729881420976082
23. Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. International Journal of Advanced Robotic Systems, 17(6), 172988142097608. https://doi.org/10.1177/1729881420976082
24. Derrick, D., Hansmann, D., & Theys, C. (2019). Tri-modal speech: Audio-visual-tactile integration in speech perception. The Journal of the Acoustical Society of America, 146(5), 3495–3504. https://doi.org/10.1121/1.5134064
25. Vásquez-Correa, J. C., & Álvarez Muniain, A. (2023). Novel Speech Recognition Systems Applied to Forensics within Child Exploitation: Wav2vec2.0 vs. Whisper. Sensors, 23(4), 1843. https://doi.org/10.3390/s23041843
26. Petridis, S., Stafylakis, T., Ma, P., Tzimiropoulos, G., & Pantic, M. (2018). Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture (arXiv:1810.00108). arXiv. http://arxiv.org/abs/1810.00108
27. Noda, K., Yamaguchi, Y., Nakadai, K., Okuno, H. G., & Ogata, T. (2015). Audio-visual speech recognition using deep learning. Applied Intelligence, 42(4), 722–737. https://doi.org/10.1007/s10489-014-0629-7