Multimodal Speech Recognition


Introduction

Speech perception by humans is a multi-channel process. People perceive speech not only through hearing but also via other channels, among which the visual channel, particularly lip movements, has a prominent influence. The well-known McGurk effect[1] demonstrates the influence of visual information: when hearing the sound /ba/ while seeing the lip movement for /ga/, many people perceive the sound as /da/. Numerous studies have also shown that lip movements help listeners disambiguate sounds in both noisy and clean environments[2].

Inspired by the multimodal speech perception of humans, automatic speech recognition (ASR) has adopted a multimodal approach as well: instead of being trained solely on acoustic data, the recognizer is trained on integrated data from several modalities, e.g. a combination of acoustic and visual data. Multimodal ASR has become a popular research topic in recent years because of its better recognition performance compared with unimodal ASR. On this page, we briefly introduce its development throughout history, some key innovations and impacts, as well as a few future research ideas.

Historical Context

20th Century: Early Experiments with Lip Reading

The idea of multimodal speech recognition can be traced back to the mid-20th century. Around this time, pioneering researchers set out to explore whether the accuracy of speech recognition in challenging acoustic environments could be improved by combining it with other modalities, which laid the foundation for multimodal approaches to automatic speech recognition (ASR).

As early as 1984, scholars conducted research on automated lip reading to enhance speech recognition.[3] Petajan, E.D. is renowned for developing one of the first audio-visual recognition systems. In his experiment, binary mouth images were used to extract mouth parameters of the speaker, such as the height, width and area of the mouth, which were later used in the recognition system. The speech was processed by the acoustic recognizer first and then passed on to the visual recognizer for the final decision.[4] This visual analysis system was later used by Goldschen[5] to recognize continuous speech visually. The significant contributions made by these forerunners paved the way for the integration of audio and visual information in speech recognition.

Late 20th Century - Early 21st Century: Integration of Audio and Visual Information

In the late 20th century, there was a growing emphasis on enhancing the robustness of speech recognition systems in the face of various types of background noise in the audio channel. This development gained significant attention because speech recognition systems suffered notable performance setbacks when operating in noisy environments, dealing with unfavorable acoustic channel conditions, or contending with issues like crosstalk. Moreover, researchers observed a certain amount of orthogonality between the audio and video channels, presenting an opportunity to enhance recognition by integrating both channels. Two different approaches to combining audio and visual information have therefore been tried.[6]

  • Early Integration: In the first approach, audio and visual features are computed from the acoustic and visual speech data respectively and are then combined before recognition. A drawback of this approach is that it cannot handle different categories or types of information in audio and video, since it uses a common recognizer for both. [7]
  • Late Integration: The other approach, known as late integration, uses separate recognition systems for audio and video and then merges the results from these two systems to produce the final outcome. It is well suited to handling diverse categories of information in audio and video because it keeps them separate until the very end, when they are combined. [8] (A small sketch contrasting the two strategies follows this list.)
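
To make the contrast concrete, the following is a minimal, purely illustrative Python sketch of the two integration strategies for a single frame. The random vectors and random linear "recognizers" are stand-ins for trained models; all names, dimensions and stream weights are assumptions, not taken from the cited systems.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
audio_feat = rng.normal(size=16)    # stand-in acoustic feature vector for one frame
visual_feat = rng.normal(size=8)    # stand-in lip-shape feature vector

# Early integration: concatenate the features, then score them with ONE recognizer
# (here a random linear layer standing in for a trained joint model).
W_joint = rng.normal(size=(5, 24))
early_posteriors = softmax(W_joint @ np.concatenate([audio_feat, visual_feat]))

# Late integration: score each stream with its OWN recognizer, then combine the
# per-class posteriors (a weighted log-linear combination is one common choice).
W_audio, W_visual = rng.normal(size=(5, 16)), rng.normal(size=(5, 8))
p_audio, p_visual = softmax(W_audio @ audio_feat), softmax(W_visual @ visual_feat)
late_posteriors = softmax(0.7 * np.log(p_audio) + 0.3 * np.log(p_visual))

print(early_posteriors.round(2), late_posteriors.round(2))
```

The fixed 0.7/0.3 stream weights in the late-integration case are arbitrary; in practice such weights are tuned or learned, often depending on the estimated reliability of each channel.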

In the early 21st century, researchers introduced innovative methods, including composite feature vectors and a hidden Markov model structure accommodating audio-visual asynchrony.[9] These techniques demonstrated substantial improvements in recognition accuracy, particularly in the presence of interfering noise. They also marked the inception of multimodal approaches in which audio and visual information converge, heralding a new era in speech recognition technology characterized by increased accuracy and resilience in diverse communication scenarios.

2010s: Neural Networks and Deep Learning

As technology continued to evolve, the emergence of artificial neural networks and deep learning marked a transformative shift in the field of speech recognition and enabled the development of more accurate and versatile multimodal speech recognition systems.

Artificial neural networks have been in use for over half a century, with applications in speech processing dating back almost as long. Early attempts at using shallow and small neural networks for speech recognition did not outperform generative models like GMM-HMM. However, researchers endeavored to advance the field of multimodal speech recognition by harnessing the capabilities of neural networks and deep learning. The following are some examples of the application of artificial neural networks and deep learning in multimodal speech recognition.

  • LAS Model of Google: The LAS model (listen, attend and spell) was introduced by researchers at Google. By combining attention mechanisms with recurrent neural networks (RNNs), this model significantly improved the accuracy of ASR and became a fundamental building block for multimodal speech recognition that takes both audio and visual cues into account. On a subset of the Google voice search task, LAS achieves a word error rate (WER) of 14.1% without a dictionary or a language model, and 10.3% with language model rescoring over the top 32 beams.[10] (A minimal sketch of the attention step appears after this list.)
  • End-to-End Multimodal ASR: Building on the success of Transformers in natural language processing (NLP), researchers have extended these architectures to multimodal tasks. Investigating end-to-end multimodal ASR systems has subsequently been a key focus. These systems leverage deep learning to map input audio-visual data directly to transcriptions, eliminating the intermediate steps of traditional ASR pipelines. Pioneering work includes LipNet, the first end-to-end sentence-level lipreading model, which simultaneously learns spatiotemporal visual features and a sequence model. According to the research of Yannis M. Assael and his team, LipNet achieves 95.2% accuracy in sentence-level, overlapped speaker split tasks on the GRID corpus.[11]
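
As a rough illustration of the "attend" step that LAS-style models combine with RNNs, the following Python (PyTorch) sketch computes a context vector from encoder states with simple dot-product attention. The tensor names and sizes are assumptions made for illustration and do not reproduce the exact LAS architecture.

```python
import torch

def attend(decoder_state, encoder_states):
    """One dot-product attention step: weight the encoder states by their
    similarity to the current decoder state and return the weighted sum."""
    # decoder_state: (batch, dim); encoder_states: (batch, T, dim)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, T)
    weights = torch.softmax(scores, dim=1)                                     # attention over time
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # (batch, dim)
    return context, weights

# Toy usage with random tensors.
context, weights = attend(torch.randn(2, 256), torch.randn(2, 70, 256))
print(context.shape, weights.shape)  # torch.Size([2, 256]) torch.Size([2, 70])
```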

To summarize, these investigations represent a selection of crucial contributions that paved the way for more accurate, robust, and context-aware multimodal systems, with applications ranging from virtual assistants to accessibility tools and beyond.

Key Innovations

Impact

Multimodal speech recognition significantly influences the advancement of speech technology.

Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are two key deep learning methods. In multimodal speech recognition, extracting and merging features from audio and image data plays a vital role. A common strategy is to use CNNs to process both audio and image data and extract important features: for audio, CNNs can operate on spectrograms, while for images they identify objects and generate feature vectors capturing important visual details. After feature extraction, models such as RNNs or Transformers are used to integrate these features, capturing the connections between audio and visual data. Training audio recognition and visual recognition separately and subsequently linking the two together resulted in lower phone error rates; when a new bilinear DNN network that allowed joint training of the audio and visual streams was used, even lower error rates were achieved.[12] (A toy model in this style is sketched below.)
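
The following Python (PyTorch) sketch shows what such a pipeline can look like, assuming frame-synchronous mel-spectrogram and lip-image inputs. The class name, layer sizes and token inventory are invented for illustration and do not reproduce the bilinear DNN of the cited work.

```python
import torch
import torch.nn as nn

class AVFusionASR(nn.Module):
    """Illustrative audio-visual model: per-modality CNN encoders, feature
    concatenation, and a recurrent layer over time producing per-frame logits."""
    def __init__(self, n_mels=80, hidden=256, n_tokens=40):
        super().__init__()
        # Audio branch: 1-D convolutions over the mel-spectrogram frames.
        self.audio_enc = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Visual branch: a small 2-D CNN applied to each lip frame.
        self.visual_enc = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32-dim vector per frame
        )
        # Sequence model over the concatenated (audio + visual) features.
        self.rnn = nn.LSTM(128 + 32, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tokens)

    def forward(self, mels, lips):
        # mels: (batch, n_mels, T); lips: (batch, T, 1, H, W), same frame rate assumed
        a = self.audio_enc(mels).transpose(1, 2)                  # (batch, T, 128)
        b, t = lips.shape[:2]
        v = self.visual_enc(lips.flatten(0, 1)).view(b, t, -1)    # (batch, T, 32)
        fused, _ = self.rnn(torch.cat([a, v], dim=-1))            # fusion by concatenation
        return self.out(fused)                                    # per-frame token logits

# Toy usage with random tensors (shapes are illustrative assumptions).
model = AVFusionASR()
logits = model(torch.randn(2, 80, 50), torch.randn(2, 50, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 50, 40])
```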


CTC (Connectionist Temporal Classification) is a technique for aligning audio and text, and it has found application in multimodal speech recognition. CTC allows audio and text data to be aligned, enabling the system to learn when and how audio features correspond to text labels. One experiment compared a CTC bidirectional LSTM acoustic model with a sequence-to-sequence (S2S) model using visual semantic features and found that the CTC output stays closer to the acoustics of an utterance.[13]
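
For reference, the CTC objective is available directly in PyTorch; the snippet below shows a minimal, self-contained way to compute it on dummy model outputs. The sequence lengths and label inventory are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn

T, N, C = 50, 2, 30                                     # frames, batch size, classes (blank = index 0)
log_probs = torch.randn(T, N, C).log_softmax(dim=-1)    # stand-in for per-frame model outputs
targets = torch.randint(1, C, (N, 10))                  # label sequences (never contain the blank)
input_lengths = torch.full((N,), T, dtype=torch.long)   # number of frames per utterance
target_lengths = torch.full((N,), 10, dtype=torch.long) # number of labels per utterance

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```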


Multimodal speech recognition systems need to adapt to varying environmental noise conditions, allowing the system to adjust automatically in different environments and thereby enhancing robustness. For speaker adaptation, models also need to adapt to the speech characteristics of different speakers; adaptive techniques can capture speaker-specific speech traits and improve recognition performance. Experiments have shown that the MMASR model achieves significant gains (up to 4.2% WER improvement) over a traditional speech-to-text architecture in noisy environments.[14]
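
Noisy training or evaluation conditions such as these are commonly simulated by mixing clean speech with noise at a chosen signal-to-noise ratio. The Python sketch below shows one simple way to do that; the helper name and the crude length matching are illustrative assumptions, not a method from the cited work.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech + noise has roughly the requested SNR (in dB)."""
    noise = np.resize(noise, speech.shape)               # crude length matching for the example
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12                # avoid division by zero
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Toy usage: a 1-second sine "utterance" mixed with white noise at 5 dB SNR.
t = np.linspace(0, 1, 16000, endpoint=False)
noisy = mix_at_snr(np.sin(2 * np.pi * 220 * t), np.random.randn(16000), snr_db=5.0)
```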


Traditional multimodal speech recognition systems comprise many independent processing stages, including audio feature extraction, image feature extraction, and speech recognition. Each stage is designed individually, which can lead to increased system complexity and high computational cost. The key advantage of end-to-end multimodal models lies in simplifying the entire multimodal speech recognition pipeline while enhancing performance: audio and image data are fed into a single deep learning model, which automatically learns how to extract speech information from the multimodal inputs. Understanding speech from visual signals alone has been of interest for decades. One study proposed a multimodal attention method to obtain information from multimodal input, integrating a modality attention mechanism into an end-to-end attention-based AVSR system; the results show that the proposed method obtains a 36% improvement.[15]
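
The idea of modality attention can be sketched as learning a per-frame weight for each stream and fusing the streams as a weighted sum. The Python (PyTorch) toy module below illustrates this; the dimensions and names are chosen for illustration and are not the exact architecture of the cited system.

```python
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    """Fuse frame-synchronous audio and visual features with learned per-frame
    modality weights (a soft choice of which stream to trust at each frame)."""
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Linear(dim, 1)    # one scalar score per modality per frame

    def forward(self, audio_feats, visual_feats):
        # audio_feats, visual_feats: (batch, T, dim), assumed frame-synchronous
        stacked = torch.stack([audio_feats, visual_feats], dim=2)   # (batch, T, 2, dim)
        weights = torch.softmax(self.score(stacked), dim=2)         # weights over the 2 modalities
        return (weights * stacked).sum(dim=2)                       # (batch, T, dim) fused features

# Toy usage with random features.
fusion = ModalityAttentionFusion()
fused = fusion(torch.randn(2, 50, 128), torch.randn(2, 50, 128))
print(fused.shape)  # torch.Size([2, 50, 128])
```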


Multimodal speech recognition, as an evolving field of study, has also exerted its influence on major domains of everyday life. Firstly, it has promoted the development of natural human-computer interaction techniques: users can engage with computing systems and devices in a more intuitive manner by combining speech, gestural input, and facial expressions. For example, users can immersively interact with and explore virtual worlds through voice and gestures.


In the healthcare sector, multimodal speech recognition has been useful in improving communication between doctors and patients. This technology facilitates real-time transcription of spoken content and contributes to the smooth collection of medical diagnoses and patient records. For instance, a system can recognize the specific place on the body that a patient points to with a finger and combine it with the speech content to produce more accurate and time-saving medical diagnoses.


The integration of multimodal speech recognition into smart home ecosystems and Internet of Things (IoT) devices has introduced significant change. Users can control household appliances such as smart speakers, lighting systems, and smart locks via spoken commands and other sensory modalities. For example, a smart lock can require a double password, a gesture combined with a voice command in a specific combination, to open the door, so that users' security is better guaranteed.


Multimodal speech recognition is primarily driven by the pursuit of enhanced robustness. When exclusively dependent on audio signals, speech recognition systems are easily affected by background noise, differences in speakers' articulation, and other uncertainties. Integrating insights from diverse modalities enables these systems to deal with such challenges more effectively; without it, some instructions given by users to a machine cannot be executed in a noisy environment.

Future Research

In this section, we propose several directions for future research. To begin with, in terms of databases, it is necessary to design and build large-scale databases for low-resource languages. Although AVSR is a data-driven technology, audio-visual databases for low-resource languages are very rare, which limits the training and development of advanced AVSR for those languages. Currently, the dominant source language of large-scale datasets is still English, followed by Chinese, Russian, Arabic and a few other European languages. Moreover, future researchers could also improve database quality in various respects[16]: for example, building publicly and easily accessible databases for general purposes, building databases with multiple recording angles, and recording audio and video at high quality.

With regard to multimodality, future research may not be limited to the bimodal case, i.e. visual and audio; it could be trimodal, or even truly multimodal as in "sight - listening - touching - tasting - smelling"[17]. One previous study has already suggested a trimodal combination of audio, visual and aero-tactile information for speech perception in a noisy background[18]. In that study, an air puff was added to the audiovisual stimuli /pa/ and /ba/, and the matched pairs (e.g. /pa/ with an air puff) showed higher speech clarity than the mismatched pairs, reflected in the lower SNR that listeners needed for matched pairs. With the capabilities of machine learning and deep learning, it may just be a matter of time before we discover how to extract, represent and fuse such multimodal features in current speech recognition models.

What's more, researchers could explore whether multimodal speech recognition can be applied to a wider range of domains. Automatic speech recognition is classically used for healthcare purposes, such as transcribing clinical notes by Nuance and recognizing whispered speech from patients by Whispp, but with the help of an extra modality, i.e. visual information, applying audiovisual speech recognition in forensic fields is relatively new. A good example is the use of speech recognition to transcribe audio-visual speech materials and help detect child abuse[19].

Finally, novel deep learning architectures might also be worth implementing in future research. These could be new models that better integrate features from different modalities and thus improve the performance of speech recognition. For example, the first hybrid CTC/attention architecture for audio-visual recognition led to a decrease in word error rate of 1.3%[20]. Other new approaches are mentioned in further studies, for example the integration of DNN-HMM and MSHMM[21].

ChatGPT Review

References

  1. McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746–748. https://doi.org/10.1038/264746a0
  2. Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2130–2134. https://doi.org/10.1109/ICASSP.2015.7178347
  3. Petajan, E. (1984). Automatic Lipreading to Enhance Speech Recognition (Speech Reading).
  4. Petajan, E., Bischoff, B., Bodoff, D., & Brooke, N. M. (1988, May). An improved automatic lipreading system to enhance speech recognition. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 19-25).
  5. Goldschen, A.J., Garcia, O.N., Petajan, E.D. (1997). Continuous Automatic Speech Recognition by Lipreading. In: Shah, M., Jain, R. (eds) Motion-Based Recognition. Computational Imaging and Vision, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-015-8935-2_14
  6. Verma, A., Faruquie, T., Neti, C., & Basu, S. (n.d.). Late integration in audio-visual continuous speech recognition.
  7. Tsuhan Chen, & Rao, R. R. (1998). Audio-visual integration in multimodal communication. Proceedings of the IEEE, 86(5), 837–852. https://doi.org/10.1109/5.664274
  8. Bregler, C., Manke, S., Hild, H., & Waibel, A. (1993, March). Bimodal sensor integration on the example of 'speechreading'. In IEEE International Conference on Neural Networks (pp. 667-671). IEEE.
  9. Tomlinson, M. J., Russell, M. J., & Brooke, N. M. (1996). Integrating audio and visual information to provide highly robust speech recognition. 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, 2, 821–824 vol. 2. https://doi.org/10.1109/ICASSP.1996.543247
  10. Chan, W., Jaitly, N., Le, Q. V., & Vinyals, O. (2015). Listen, attend and spell. arXiv preprint arXiv:1508.01211.
  11. Assael, Y. M., Shillingford, B., Whiteson, S., & de Freitas, N. (2016). LipNet: End-to-End Sentence-level Lipreading (arXiv:1611.01599). arXiv. https://doi.org/10.48550/arXiv.1611.01599
  12. Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2130–2134. https://doi.org/10.1109/ICASSP.2015.7178347
  13. Palaskar, S., Sanabria, R., & Metze, F. (2018). End-to-end Multimodal Speech Recognition. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5774–5778. https://doi.org/10.1109/ICASSP.2018.8462439
  14. Srinivasan, T., Sanabria, R., & Metze, F. (2019). Analyzing Utility of Visual Context in Multimodal Speech Recognition Under Noisy Conditions. https://doi.org/10.48550/ARXIV.1907.00477
  15. Zhou, P., Yang, W., Chen, W., Wang, Y., & Jia, J. (2019). Modality Attention for End-to-end Audio-visual Speech Recognition. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 6565–6569. https://doi.org/10.1109/ICASSP.2019.8683733
  16. Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. International Journal of Advanced Robotic Systems, 17(6), 172988142097608. https://doi.org/10.1177/1729881420976082
  17. Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. International Journal of Advanced Robotic Systems, 17(6), 172988142097608. https://doi.org/10.1177/1729881420976082
  18. Derrick, D., Hansmann, D., & Theys, C. (2019). Tri-modal speech: Audio-visual-tactile integration in speech perception. The Journal of the Acoustical Society of America, 146(5), 3495–3504. https://doi.org/10.1121/1.5134064
  19. Vásquez-Correa, J. C., & Álvarez Muniain, A. (2023). Novel Speech Recognition Systems Applied to Forensics within Child Exploitation: Wav2vec2.0 vs. Whisper. Sensors, 23(4), 1843. https://doi.org/10.3390/s23041843
  20. Petridis, S., Stafylakis, T., Ma, P., Tzimiropoulos, G., & Pantic, M. (2018). Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture (arXiv:1810.00108). arXiv. http://arxiv.org/abs/1810.00108
  21. Noda, K., Yamaguchi, Y., Nakadai, K., Okuno, H. G., & Ogata, T. (2015). Audio-visual speech recognition using deep learning. Applied Intelligence, 42(4), 722–737. https://doi.org/10.1007/s10489-014-0629-7