Multimodal Speech Recognition

Weixi Lai

Xueying Liu

Weihao Jiang

Ting Zhang


== Introduction ==
Speech perception by humans is a multi-channel process. People perceive speech not only through hearing but also via other channels, among which the visual channel, particularly lip movements, has a prominent influence. The famous McGurk effect<ref>Mcgurk, H., & Macdonald, J. (1976). Hearing lips and seeing voices. ''Nature'', ''264''(5588), 746–748. <nowiki>https://doi.org/10.1038/264746a0</nowiki></ref> demonstrates this influence well: when hearing the sound /ba/ while seeing the lip movement /ga/, many people perceive the sound as /da/. Numerous studies have also shown that lip movements help listeners disambiguate sounds in both noisy and clean environments<ref>Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. ''2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'', 2130–2134. <nowiki>https://doi.org/10.1109/ICASSP.2015.7178347</nowiki></ref>.


Inspired by humans' multimodal speech perception, automatic speech recognition (ASR) can adopt a multimodal approach as well: instead of being trained solely on acoustic data, the system is trained on integrated data from several modalities, e.g. a combination of acoustic and visual data. Multimodal ASR has become a hot topic in recent years because it often recognizes speech more accurately than unimodal ASR. On this page, we briefly introduce its development throughout history, some key innovations and impacts, as well as a few future research ideas.


== Historical Context ==

== Key Innovations ==

== Impact ==
Training the audio and visual recognizers separately and subsequently linking the two together resulted in lower phone error rates. However, when a new bilinear DNN was used, which allows the audio and visual streams to be trained jointly, even lower error rates were achieved.
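As a rough illustration of this idea, the sketch below joins an audio encoder and a visual (lip) encoder through a bilinear layer so that one phone classification loss updates both streams together. This is a minimal PyTorch sketch under our own assumptions, not the architecture of the work described above; the feature dimensions, layer sizes and phone inventory are invented purely for illustration.

<syntaxhighlight lang="python">
# Minimal sketch of joint audio-visual training with a bilinear fusion layer.
# All dimensions and the phone inventory are illustrative assumptions.
import torch
import torch.nn as nn

class BilinearAVFusion(nn.Module):
    def __init__(self, audio_dim=40, visual_dim=64, hidden=256, n_phones=40):
        super().__init__()
        # Separate encoders per modality (the analogue of separately trained
        # audio and visual front-ends).
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        # The bilinear layer lets the two streams interact multiplicatively
        # instead of simply being linked after training.
        self.fusion = nn.Bilinear(hidden, hidden, hidden)
        self.classifier = nn.Linear(hidden, n_phones)

    def forward(self, audio_feats, visual_feats):
        a = self.audio_enc(audio_feats)
        v = self.visual_enc(visual_feats)
        joint = torch.relu(self.fusion(a, v))
        return self.classifier(joint)  # per-frame phone logits

# Toy usage: one batch of 8 frames with random features and labels.
model = BilinearAVFusion()
audio = torch.randn(8, 40)    # e.g. filterbank features (assumed dimension)
visual = torch.randn(8, 64)   # e.g. lip-region embeddings (assumed dimension)
labels = torch.randint(0, 40, (8,))
loss = nn.CrossEntropyLoss()(model(audio, visual), labels)
loss.backward()               # gradients reach both encoders jointly
</syntaxhighlight>

Because the fusion layer sits inside one differentiable model, gradients from the classification loss reach both encoders at once, which is the essential difference from training the two recognizers separately and linking them afterwards.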
== Future Research ==


In this section, we propose several directions for future research. To begin with, in terms of databases, it is necessary to design and build large-scale databases for low-resource languages. Audio-visual speech recognition (AVSR) is a data-driven technology, yet audio-visual databases for low-resource languages are very rare, which limits the training and development of advanced AVSR for those languages. Currently, the dominant source language of large-scale datasets is still English, followed by Chinese, Russian, Arabic and a few European languages. Moreover, future researchers could also work on improving database quality in various respects<ref>Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. ''International Journal of Advanced Robotic Systems'', ''17''(6), 172988142097608. <nowiki>https://doi.org/10.1177/1729881420976082</nowiki></ref>, for example by building publicly and easily accessible databases for general purposes, building databases with multiple recording angles, and recording audio and video at high quality.


With regard to modalities, future research need not be limited to the bimodal case, i.e. visual and audio; it could be trimodal, or even truly multimodal, as in “sight - listening - touching - tasting - smelling”<ref>Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. ''International Journal of Advanced Robotic Systems'', ''17''(6), 172988142097608. <nowiki>https://doi.org/10.1177/1729881420976082</nowiki></ref>. One previous study has already suggested a trimodal combination of audio, visual and aero-tactile information for speech perception against a noisy background<ref>Derrick, D., Hansmann, D., & Theys, C. (2019). Tri-modal speech: Audio-visual-tactile integration in speech perception. ''The Journal of the Acoustical Society of America'', ''146''(5), 3495–3504. <nowiki>https://doi.org/10.1121/1.5134064</nowiki></ref>. In this study, air puffs were added to the audiovisual stimuli /pa/ and /ba/, and the matched pairs (e.g. /pa/ with an air puff) showed higher speech clarity than the mismatched pairs, reflected in the lower SNR that listeners needed for matched pairs. With the abilities of machine learning and deep learning, it might just be a matter of time before we discover how to extract, represent and fuse such multimodal features into current speech recognition models.
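To make the idea of fusing more than two modalities concrete, the speculative PyTorch sketch below encodes audio, visual and aero-tactile features separately and combines them with a learned gate. The tactile feature dimension, the gating scheme and all sizes are assumptions made purely for illustration and do not come from the cited study.

<syntaxhighlight lang="python">
# Speculative sketch of trimodal (audio-visual-tactile) feature fusion.
# Dimensions, the gating mechanism and the class inventory are assumptions.
import torch
import torch.nn as nn

class TriModalFusion(nn.Module):
    def __init__(self, dims=(40, 64, 8), hidden=128, n_classes=40):
        super().__init__()
        # One small encoder per modality: audio, visual, aero-tactile.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in dims]
        )
        # Learned gate that weights each modality's contribution,
        # e.g. relying less on audio in noise.
        self.gate = nn.Linear(3 * hidden, 3)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, audio, visual, tactile):
        encoded = [enc(x) for enc, x in zip(self.encoders, (audio, visual, tactile))]
        stacked = torch.stack(encoded, dim=1)                 # (batch, 3, hidden)
        weights = torch.softmax(self.gate(torch.cat(encoded, dim=-1)), dim=-1)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # weighted sum over modalities
        return self.classifier(fused)

# Toy usage with random features for a batch of 4 frames.
model = TriModalFusion()
logits = model(torch.randn(4, 40), torch.randn(4, 64), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 40])
</syntaxhighlight>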


What’s more, researchers could explore whether multimodal speech recognition can be applied in wider domains. Automatic speech recognition is already well established for healthcare purposes, such as transcribing clinical notes (Nuance) and recognizing whispered speech from patients (Whispp), but with the help of an extra modality, i.e. visual information, using audiovisual speech recognition in forensic fields is still new. A good example is an audiovisual speech recognition system that was used to transcribe audio-visual speech materials and help detect child abuse<ref>Vásquez-Correa, J. C., & Álvarez Muniain, A. (2023). Novel Speech Recognition Systems Applied to Forensics within Child Exploitation: Wav2vec2.0 vs. Whisper. ''Sensors'', ''23''(4), 1843. <nowiki>https://doi.org/10.3390/s23041843</nowiki></ref>.


Finally, novel deep learning architectures might also be worth exploring in future research. These could be new models that better integrate features across modalities and thus improve recognition performance. For example, the first hybrid CTC/Attention architecture for audio-visual speech recognition reduced the word error rate by 1.3%<ref>Petridis, S., Stafylakis, T., Ma, P., Tzimiropoulos, G., & Pantic, M. (2018). ''Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture'' (arXiv:1810.00108). arXiv. <nowiki>http://arxiv.org/abs/1810.00108</nowiki></ref>. Other new approaches have been suggested as well, for example the integration of DNN-HMM and multi-stream HMM (MSHMM) models<ref>Noda, K., Yamaguchi, Y., Nakadai, K., Okuno, H. G., & Ogata, T. (2015). Audio-visual speech recognition using deep learning. ''Applied Intelligence'', ''42''(4), 722–737. <nowiki>https://doi.org/10.1007/s10489-014-0629-7</nowiki></ref>.
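The hybrid CTC/attention idea can be summarised as a weighted sum of two losses, L = λ·L_CTC + (1 − λ)·L_attention, computed from a shared encoder. The sketch below shows only this loss combination in PyTorch with random stand-in tensors; the shapes, vocabulary size and value of λ are illustrative assumptions rather than the settings of the cited paper.

<syntaxhighlight lang="python">
# Minimal sketch of the hybrid CTC/attention objective with random stand-ins
# for real encoder/decoder outputs. All shapes and the weight lam are assumed.
import torch
import torch.nn as nn

vocab_size, blank_id, lam = 32, 0, 0.2
batch, enc_T, dec_T, tgt_len = 4, 50, 10, 10

# CTC branch: frame-level log-probabilities from the shared encoder.
enc_log_probs = torch.randn(enc_T, batch, vocab_size).log_softmax(-1)
targets = torch.randint(1, vocab_size, (batch, tgt_len))   # no blank labels
input_lengths = torch.full((batch,), enc_T, dtype=torch.long)
target_lengths = torch.full((batch,), tgt_len, dtype=torch.long)
ctc_loss = nn.CTCLoss(blank=blank_id)(enc_log_probs, targets,
                                      input_lengths, target_lengths)

# Attention branch: token-level logits from the attention decoder.
dec_logits = torch.randn(batch, dec_T, vocab_size)
att_loss = nn.CrossEntropyLoss()(dec_logits.reshape(-1, vocab_size),
                                 targets.reshape(-1))

# Hybrid objective: the CTC term encourages monotonic alignment while the
# attention term models label dependencies.
loss = lam * ctc_loss + (1 - lam) * att_loss
print(float(loss))
</syntaxhighlight>

In practice, the weight λ balances the monotonic alignment enforced by CTC against the richer label dependencies captured by the attention decoder.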
== ChatGPT Review ==


== References ==
To insert a reference, type <nowiki><ref> and paste the source you exported from Zotero (or whatever reference manager you're using) in the pop-up box that appears. Make sure links in citations are clickable using proper formatting. Once you do this, a footnote will appear.</nowiki><ref>Glantz, Richard "SHOEBOX: a personal file handling system for textual data." In Proceedings of the November 17-19, 1970, Fall Joint Computer Conference 1970. 535-545. [https://dl.acm.org/doi/abs/10.1145/1478462.1478541]</ref> and a reference comes at the end automatically. Please use this method to cite for Wiki articles only, not for your thesis.
