Multimodal Speech Recognition

Introduction

Human speech perception is a multi-channel process. People perceive speech not only through hearing but also via other channels, among which the visual channel, particularly lip movements, has a prominent influence. The well-known McGurk effect[1] demonstrates the influence of visual information: when hearing the sound /ba/ while seeing the lip movement for /ga/, many people perceive the sound as /da/. Numerous studies have also shown that lip movements help listeners disambiguate sounds in both noisy and clean environments[2].

Inspired by human multimodal speech perception, automatic speech recognition (ASR) can also take a multimodal approach: instead of being trained solely on acoustic data, the system is trained on integrated data from several modalities, e.g. a combination of acoustic and visual data. Multimodal ASR has become an active research topic in recent years because it recognizes speech more accurately than unimodal ASR. On this page, we briefly introduce its historical development, some key innovations and their impact, as well as a few ideas for future research.

Historical Context

20th Century: Early Experiments with Lip Reading

The idea of multimodal speech recognition can be traced back to the mid-20th century. Around this time, pioneering researchers began to explore whether the accuracy of speech recognition in challenging acoustic environments could be improved by combining it with other modalities, which laid the foundation for multimodal approaches to ASR.

As early as 1984, researchers were studying automated lip reading as a way to enhance speech recognition.[3] Petajan, in particular, is known for significant contributions to computer lip reading: his work was dedicated to developing computer-based systems capable of automatically recognizing speech by analyzing lip movements.

Late 20th Century - Early 21st Century: Integration of Audio and Visual Information

Towards the end of the 20th century, increasing the robustness of speech recognition systems against different kinds of noise in the audio channel became a focus area. This is because the performance of speech recognition systems degrades considerably in the presence of background noise, poor acoustic channel characteristics, crosstalk, and similar conditions. Furthermore, it had been observed that the audio and video channels carry partly complementary (orthogonal) information, which can be exploited to improve recognition by combining the two channels. Two different approaches to combining audio and visual information have therefore been tried.

  • Early Integration: In the first approach, called "Early Integration" or "Feature Fusion", audio and visual features are computed from the acoustic and visual speech signals respectively and combined before recognition. However, because a single recognizer is used for both streams, this approach cannot model different classifications in the audio and video channels.
  • Late Integration: The other approach, called "Late Integration" or "Decision Fusion", uses separate recognizers for the audio and video channels and then combines the outputs of the two recognizers to obtain the final result (both strategies are sketched in the example after this list). These new approaches to integrating audio and visual information led to important breakthroughs in computer vision and speech recognition. Researchers introduced innovative methods, including composite feature vectors and a hidden Markov model structure that accommodates audio-visual asynchrony.[4] These techniques yielded substantial improvements in recognition accuracy, particularly in the presence of interfering noise, and marked the beginning of multimodal approaches in which audio and visual information converge, ushering in a new era of speech recognition technology characterized by increased accuracy and resilience in diverse communication scenarios.
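
The two fusion strategies can be contrasted in a minimal PyTorch sketch. The feature dimensions, network shapes and the equal decision weighting below are illustrative assumptions, not details of any historical system.

 # Minimal sketch: early (feature) fusion vs. late (decision) fusion.
 # All dimensions and the 0.5/0.5 decision weighting are assumptions.
 import torch
 import torch.nn as nn
 
 AUDIO_DIM, VISUAL_DIM, NUM_CLASSES = 40, 64, 10  # assumed sizes
 
 class EarlyFusion(nn.Module):
     """Concatenate audio and visual features, then use one shared classifier."""
     def __init__(self):
         super().__init__()
         self.classifier = nn.Sequential(
             nn.Linear(AUDIO_DIM + VISUAL_DIM, 128), nn.ReLU(),
             nn.Linear(128, NUM_CLASSES))
 
     def forward(self, audio, visual):
         return self.classifier(torch.cat([audio, visual], dim=-1))
 
 class LateFusion(nn.Module):
     """Run separate recognizers per modality and combine their decisions."""
     def __init__(self):
         super().__init__()
         self.audio_net = nn.Linear(AUDIO_DIM, NUM_CLASSES)
         self.visual_net = nn.Linear(VISUAL_DIM, NUM_CLASSES)
 
     def forward(self, audio, visual, audio_weight=0.5):
         log_p_audio = self.audio_net(audio).log_softmax(dim=-1)
         log_p_visual = self.visual_net(visual).log_softmax(dim=-1)
         # Weighted combination of the two recognizers' outputs.
         return audio_weight * log_p_audio + (1 - audio_weight) * log_p_visual
 
 audio = torch.randn(8, AUDIO_DIM)    # batch of 8 audio feature vectors
 visual = torch.randn(8, VISUAL_DIM)  # batch of 8 visual feature vectors
 print(EarlyFusion()(audio, visual).shape)  # torch.Size([8, 10])
 print(LateFusion()(audio, visual).shape)   # torch.Size([8, 10])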

2010s: Neural Networks and Deep Learning

As technology continued to evolve, the emergence of artificial neural networks and deep learning marked a transformative shift in the field of speech recognition and enabled the development of more accurate and versatile multimodal speech recognition systems.

Artificial neural networks have been in use for over half a century, with applications in speech processing dating back almost as long. Early attempts at using small, shallow neural networks for speech recognition did not outperform generative models such as the GMM-HMM. Nevertheless, researchers continued to advance multimodal speech recognition by harnessing the capabilities of neural networks and, later, deep learning.

  • LAS Model of Google: The LAS (Listen, Attend and Spell) model was introduced by researchers at Google. By combining attention mechanisms with recurrent neural networks (RNNs), it significantly improved the accuracy of ASR and became a fundamental building block for multimodal speech recognition systems that take both audio and visual cues into account (the attention step is sketched in the example after this list). On a subset of the Google voice search task, LAS achieves a word error rate (WER) of 14.1% without a dictionary or a language model, and 10.3% with language-model rescoring over the top 32 beams.[5]
  • End-to-End Multimodal ASR: Building on the success of Transformers in natural language processing (NLP), researchers have extended these architectures to multimodal tasks, and investigating end-to-end multimodal ASR systems has been a key focus. These systems use deep learning to map input audio-visual data directly to transcriptions, eliminating the intermediate steps of traditional ASR pipelines. A pioneering example is LipNet, the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. According to Yannis M. Assael and his team, LipNet achieves 95.2% accuracy on sentence-level, overlapped-speaker split tasks on the GRID corpus.[6]
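
As an illustration of the "attend" step that such attention-based models rely on, the following minimal PyTorch sketch shows a decoder state querying a sequence of encoded frames to produce a context vector. All layer sizes and tensor shapes are assumed for the example and do not reproduce the published LAS configuration.

 # Minimal sketch of content-based attention: a decoder state attends over
 # encoder outputs and returns a context vector. Dimensions are assumptions.
 import torch
 import torch.nn as nn
 
 class Attend(nn.Module):
     def __init__(self, enc_dim=256, dec_dim=256, att_dim=128):
         super().__init__()
         self.query_proj = nn.Linear(dec_dim, att_dim)
         self.key_proj = nn.Linear(enc_dim, att_dim)
         self.score = nn.Linear(att_dim, 1)
 
     def forward(self, decoder_state, encoder_outputs):
         # decoder_state: (batch, dec_dim); encoder_outputs: (batch, time, enc_dim)
         query = self.query_proj(decoder_state).unsqueeze(1)          # (batch, 1, att_dim)
         keys = self.key_proj(encoder_outputs)                        # (batch, time, att_dim)
         energies = self.score(torch.tanh(query + keys)).squeeze(-1)  # (batch, time)
         weights = energies.softmax(dim=-1)                           # attention over frames
         context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
         return context, weights
 
 enc = torch.randn(4, 100, 256)   # 4 utterances, 100 encoded frames each
 dec = torch.randn(4, 256)        # current decoder ("speller") state
 context, weights = Attend()(dec, enc)
 print(context.shape, weights.shape)  # torch.Size([4, 256]) torch.Size([4, 100])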

To summarize, these investigations represent a selection of crucial contributions that paved the way for more accurate, robust, and context-aware multimodal systems, with applications ranging from virtual assistants to accessibility tools and beyond.

Key Innovations

Impact

Training audio and visual recognizers separately and then linking the two together resulted in lower phone error rates. However, when a bilinear deep neural network was used, which allowed the audio and visual streams to be trained jointly, even lower error rates were achieved.
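
As a rough illustration of the idea behind joint training with a bilinear layer, the sketch below fuses an audio and a visual feature vector through PyTorch's nn.Bilinear before classification. The feature dimensions and the number of phone classes are assumed for the example; the sketch does not reproduce the published architecture.

 # Minimal sketch of bilinear audio-visual fusion: every pairwise product of
 # an audio and a visual feature contributes to the fused representation.
 import torch
 import torch.nn as nn
 
 AUDIO_DIM, VISUAL_DIM, FUSED_DIM = 40, 64, 128   # assumed sizes
 
 bilinear = nn.Bilinear(AUDIO_DIM, VISUAL_DIM, FUSED_DIM)
 classifier = nn.Linear(FUSED_DIM, 42)            # e.g. 42 phone classes (assumed)
 
 audio = torch.randn(8, AUDIO_DIM)
 visual = torch.randn(8, VISUAL_DIM)
 phone_logits = classifier(torch.relu(bilinear(audio, visual)))
 print(phone_logits.shape)  # torch.Size([8, 42])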



Future Research

In this section, we propose several directions for future research. To begin with, in terms of databases, it is necessary to design and build large-scale databases for low-resource languages. AVSR is a data-driven technology, yet audio-visual databases for low-resource languages are very rare, which limits the training and development of advanced AVSR for those languages. Currently, the dominant language of large-scale datasets is still English, followed by Chinese, Russian, Arabic and a few other European languages. Moreover, future researchers could also work on improving database quality in various respects[7]: for example, building easily and publicly accessible databases for general purposes, building databases recorded from multiple angles, and recording audio and video at high quality.

With regard to modalities, future research need not be limited to the bimodal case of audio plus visual; it could be trimodal, or even truly multimodal in the sense of “sight - listening - touching - tasting - smelling”[8]. One previous study has already suggested a trimodal combination of audio, visual and aero-tactile information for speech perception in noisy backgrounds[9]. In that study, air puffs were added to the audiovisual stimuli /pa/ and /ba/, and matched pairs (e.g. /pa/ with an air puff) yielded higher speech clarity than mismatched pairs, reflected in the lower SNR listeners needed for the matched pairs. With the capabilities of machine learning and deep learning, it may only be a matter of time before we discover how to extract, represent and fuse such multimodal features into current speech recognition models.

Furthermore, researchers could explore whether multimodal speech recognition can be applied in wider domains. Automatic speech recognition is already well established in healthcare, for example for transcribing clinical notes (Nuance) and recognizing whispered speech from patients (Whispp), but with the help of an extra modality, i.e. visual information, audiovisual speech recognition is now also being applied in forensic settings. A good example is the use of audiovisual speech recognition to transcribe audio-visual speech materials in order to detect child abuse[10].

Finally, novel deep learning architectures might also be worth exploring in future research. These could be new models that better integrate features across modalities and thus improve recognition performance. One example is the first hybrid CTC/attention architecture for audio-visual recognition, which reduced the word error rate by 1.3%[11]. Other studies mention further approaches, for example the integration of DNN-HMM and MSHMM models[12].
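
To illustrate the idea behind such a hybrid architecture, the sketch below combines a CTC loss on the encoder outputs with a cross-entropy loss on the attention decoder outputs, as hybrid CTC/attention training typically does. The interpolation weight and tensor shapes are assumptions for the example, not values taken from the cited work.

 # Minimal sketch of a hybrid CTC/attention objective: interpolate a CTC loss
 # on the shared encoder with a cross-entropy loss on the attention decoder.
 import torch
 import torch.nn.functional as F
 
 def hybrid_ctc_attention_loss(ctc_log_probs, input_lengths, targets,
                               target_lengths, decoder_logits, ctc_weight=0.3):
     # CTC branch: alignment-free loss over the encoder's frame-level outputs.
     ctc_loss = F.ctc_loss(ctc_log_probs, targets, input_lengths, target_lengths,
                           blank=0, zero_infinity=True)
     # Attention branch: token-level cross-entropy on the decoder outputs.
     att_loss = F.cross_entropy(decoder_logits.transpose(1, 2), targets)
     return ctc_weight * ctc_loss + (1 - ctc_weight) * att_loss
 
 T, N, C, S = 50, 4, 30, 12   # frames, batch, vocab (incl. blank), target length
 ctc_log_probs = torch.randn(T, N, C).log_softmax(-1)
 decoder_logits = torch.randn(N, S, C)
 targets = torch.randint(1, C, (N, S))          # labels 1..C-1 (0 is the blank)
 loss = hybrid_ctc_attention_loss(ctc_log_probs,
                                  torch.full((N,), T, dtype=torch.long),
                                  targets,
                                  torch.full((N,), S, dtype=torch.long),
                                  decoder_logits)
 print(loss)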

ChatGPT Review

References

  1. McGurk, H., & MacDonald, J. (1976). Hearing lips and seeing voices. Nature, 264(5588), 746–748. https://doi.org/10.1038/264746a0
  2. Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2130–2134. https://doi.org/10.1109/ICASSP.2015.7178347
  3. Petajan, E. (1984). Automatic Lipreading to Enhance Speech Recognition (Speech Reading).
  4. Tomlinson, M. J., Russell, M. J., & Brooke, N. M. (1996). Integrating audio and visual information to provide highly robust speech recognition. 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, 2, 821–824 vol. 2. https://doi.org/10.1109/ICASSP.1996.543247
  5. Chan, W., Jaitly, N., Le, Q. V., & Vinyals, O. (2015). Listen, Attend and Spell (arXiv:1508.01211). arXiv. https://doi.org/10.48550/arXiv.1508.01211
  6. Assael, Y. M., Shillingford, B., Whiteson, S., & de Freitas, N. (2016). LipNet: End-to-End Sentence-level Lipreading (arXiv:1611.01599). arXiv. https://doi.org/10.48550/arXiv.1611.01599
  7. Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. International Journal of Advanced Robotic Systems, 17(6), 172988142097608. https://doi.org/10.1177/1729881420976082
  8. Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. International Journal of Advanced Robotic Systems, 17(6), 172988142097608. https://doi.org/10.1177/1729881420976082
  9. Derrick, D., Hansmann, D., & Theys, C. (2019). Tri-modal speech: Audio-visual-tactile integration in speech perception. The Journal of the Acoustical Society of America, 146(5), 3495–3504. https://doi.org/10.1121/1.5134064
  10. Vásquez-Correa, J. C., & Álvarez Muniain, A. (2023). Novel Speech Recognition Systems Applied to Forensics within Child Exploitation: Wav2vec2.0 vs. Whisper. Sensors, 23(4), 1843. https://doi.org/10.3390/s23041843
  11. Petridis, S., Stafylakis, T., Ma, P., Tzimiropoulos, G., & Pantic, M. (2018). Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture (arXiv:1810.00108). arXiv. http://arxiv.org/abs/1810.00108
  12. Noda, K., Yamaguchi, Y., Nakadai, K., Okuno, H. G., & Ogata, T. (2015). Audio-visual speech recognition using deep learning. Applied Intelligence, 42(4), 722–737. https://doi.org/10.1007/s10489-014-0629-7