Multimodal Speech Recognition


Weixi Lai

Xueying Liu

Weihao Jiang

Ting Zhang

Introduction

Human speech perception is a multi-channel process. People perceive speech not only through hearing but also via other channels, among which the visual channel, particularly lip movements, has a prominent influence. The well-known McGurk effect demonstrates the influence of visual information: when hearing the sound /ba/ while seeing the lip movement for /ga/, many people perceive the sound as /da/. Numerous studies have also shown that lip movements help listeners disambiguate sounds in both noisy and clean environments (Mroueh et al., 2015).

Inspired by multimodal speech perception in humans, automatic speech recognition (ASR) can also be made multimodal: instead of being trained solely on acoustic data, the recognizer is trained on integrated data from several modalities, for example a combination of acoustic and visual features. Multimodal ASR has become a popular research topic in recent years because it typically outperforms unimodal ASR in recognition accuracy. On this page, we briefly introduce its historical development, some key innovations and their impact, and a few ideas for future research.
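
To illustrate what "integrated data from several modalities" can look like in practice, below is a minimal sketch of early (feature-level) audio-visual fusion in PyTorch. All names, feature dimensions, and layer sizes are illustrative assumptions, not a description of any particular published system.

<syntaxhighlight lang="python">
# Minimal sketch of early feature fusion for audio-visual ASR.
# Dimensions, layer sizes, and the phone inventory are illustrative assumptions.
import torch
import torch.nn as nn

class AVFusionASR(nn.Module):
    def __init__(self, audio_dim=39, visual_dim=64, hidden_dim=256, num_phones=40):
        super().__init__()
        # Audio (e.g. MFCC) and visual (e.g. lip-region embedding) streams are
        # concatenated frame by frame before a shared recurrent encoder.
        self.encoder = nn.GRU(audio_dim + visual_dim, hidden_dim,
                              num_layers=2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_phones)

    def forward(self, audio_feats, visual_feats):
        # audio_feats:  (batch, time, audio_dim)
        # visual_feats: (batch, time, visual_dim), assumed time-aligned with the audio
        fused = torch.cat([audio_feats, visual_feats], dim=-1)
        encoded, _ = self.encoder(fused)
        return self.classifier(encoded)   # per-frame phone logits

# Example with random tensors standing in for real features
model = AVFusionASR()
audio = torch.randn(2, 100, 39)    # 2 utterances, 100 frames of MFCCs
video = torch.randn(2, 100, 64)    # matching lip-region embeddings
logits = model(audio, video)       # shape: (2, 100, 40)
</syntaxhighlight>

Concatenating the two feature streams is only the simplest form of integration; later sections discuss fusion schemes that model the interaction between modalities more explicitly.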

Historical Context

Key Innovations

Impact

Training the audio and visual recognizers separately and subsequently linking the two together already resulted in lower phone error rates. However, a new bilinear DNN, which allowed the audio and visual streams to be trained jointly, achieved even lower error rates, as sketched below.
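
The following is a minimal sketch of the bilinear-fusion idea mentioned above, written in PyTorch. It is not a reimplementation of the bilinear DNN from the literature; the layer sizes and the phone inventory are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal sketch of bilinear fusion of audio and visual representations.
# Layer sizes and the phone inventory are illustrative assumptions.
import torch
import torch.nn as nn

class BilinearFusionClassifier(nn.Module):
    def __init__(self, audio_dim=256, visual_dim=256, fused_dim=512, num_phones=40):
        super().__init__()
        # The bilinear layer models multiplicative interactions between the
        # audio and visual representations instead of merely concatenating them.
        self.fusion = nn.Bilinear(audio_dim, visual_dim, fused_dim)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(fused_dim, num_phones),
        )

    def forward(self, audio_repr, visual_repr):
        # audio_repr:  (batch, audio_dim)  per-frame audio representation
        # visual_repr: (batch, visual_dim) per-frame visual representation
        return self.classifier(self.fusion(audio_repr, visual_repr))

# The two inputs could be the outputs of separately trained unimodal networks;
# the bilinear layer then lets the whole model be fine-tuned jointly end to end.
model = BilinearFusionClassifier()
logits = model(torch.randn(8, 256), torch.randn(8, 256))   # shape: (8, 40)
</syntaxhighlight>

Because the bilinear layer is differentiable, gradients flow into both the audio and the visual branches, which is what distinguishes this joint training from simply linking two independently trained recognizers.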

Future Research

ChatGPT Review

References
