== Future Research ==
In this section, we propose several directions for future research.

To begin with, in terms of databases, it is necessary to design and build large-scale databases for low-resource languages. Because AVSR is a data-driven technology, the scarcity of audio-visual databases for low-resource languages limits the training and development of advanced AVSR systems for those languages. Currently, the dominant source language of large-scale datasets is still English, followed by Chinese, Russian, Arabic and a few other European languages. Moreover, future research could also improve database quality in various respects<ref name="xia2020">Xia, L., Chen, G., Xu, X., Cui, J., & Gao, Y. (2020). Audiovisual speech recognition: A review and forecast. ''International Journal of Advanced Robotic Systems'', ''17''(6), 172988142097608. <nowiki>https://doi.org/10.1177/1729881420976082</nowiki></ref>: to name a few, building publicly and easily accessible databases for general purposes, building databases with multiple recording angles, and recording audio and video at high quality.

With regard to multimodality, future research need not be limited to the bimodal case, i.e. visual and audio; it could be trimodal, or even truly multimodal as in "sight - listening - touching - tasting - smelling"<ref name="xia2020" />. One previous study has already proposed a trimodal combination of audio, visual and aero-tactile information for speech perception in noisy backgrounds<ref>Derrick, D., Hansmann, D., & Theys, C. (2019). Tri-modal speech: Audio-visual-tactile integration in speech perception. ''The Journal of the Acoustical Society of America'', ''146''(5), 3495–3504. <nowiki>https://doi.org/10.1121/1.5134064</nowiki></ref>. In that study, air puffs were added to the audiovisual stimuli /pa/ and /ba/, and matched pairs (e.g. /pa/ with an air puff) were perceived more clearly than mismatched pairs, as shown by the lower SNR that listeners needed for the matched pairs. With the capabilities of machine learning and deep learning, it may only be a matter of time before we learn how to extract, represent and fuse such multimodal features into current speech recognition models; a minimal fusion sketch is given below.
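As a concrete illustration of feature-level fusion, the following is a minimal sketch in PyTorch-style Python of how pre-aligned audio, visual and tactile feature streams might be concatenated and projected before a shared recognition backbone. The class, parameter names and dimensions (TrimodalFusion, audio_dim, and so on) are hypothetical assumptions for illustration, not an implementation from the cited studies.

<syntaxhighlight lang="python">
# Hypothetical sketch: early (feature-level) fusion of audio, visual and
# tactile streams before a shared recognition backbone. Names and
# dimensions are illustrative assumptions, not taken from the cited papers.
import torch
import torch.nn as nn


class TrimodalFusion(nn.Module):
    def __init__(self, audio_dim=80, video_dim=512, tactile_dim=16, fused_dim=256):
        super().__init__()
        # Project the concatenated per-frame features into a shared space.
        self.proj = nn.Linear(audio_dim + video_dim + tactile_dim, fused_dim)

    def forward(self, audio, video, tactile):
        # audio:   (batch, time, audio_dim)   e.g. log-Mel filterbank frames
        # video:   (batch, time, video_dim)   e.g. lip-region CNN embeddings
        # tactile: (batch, time, tactile_dim) e.g. air-flow sensor features
        # All streams are assumed to be pre-aligned to the same frame rate.
        fused = torch.cat([audio, video, tactile], dim=-1)
        return torch.relu(self.proj(fused))


# Usage: the fused frames would then feed a standard ASR encoder/decoder.
fusion = TrimodalFusion()
frames = fusion(torch.randn(2, 100, 80), torch.randn(2, 100, 512), torch.randn(2, 100, 16))
print(frames.shape)  # torch.Size([2, 100, 256])
</syntaxhighlight>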
Furthermore, researchers could explore whether multimodal speech recognition can be applied to a wider range of domains. Automatic speech recognition is already well established in healthcare, for example for transcribing clinical notes by [https://www.nuance.com/dragon.html Nuance] and for recognizing whispered speech from patients by [https://whispp.com/ Whispp]; with the help of an extra modality, i.e. visual information, applying audiovisual speech recognition in forensic settings is comparatively new. A good example is an audiovisual speech recognition system used to transcribe audio-visual speech materials and to help detect child abuse<ref>Vásquez-Correa, J. C., & Álvarez Muniain, A. (2023). Novel Speech Recognition Systems Applied to Forensics within Child Exploitation: Wav2vec2.0 vs. Whisper. ''Sensors'', ''23''(4), 1843. <nowiki>https://doi.org/10.3390/s23041843</nowiki></ref>.

Finally, novel [[Deep Learning Revolution|deep learning architectures]] might also be worth implementing in future research. These could be new models that better integrate features across modalities and thus improve recognition performance. For example, the first hybrid CTC/attention architecture for audio-visual recognition led to a 1.3% decrease in word error rate<ref>Petridis, S., Stafylakis, T., Ma, P., Tzimiropoulos, G., & Pantic, M. (2018). ''Audio-Visual Speech Recognition With A Hybrid CTC/Attention Architecture'' (arXiv:1810.00108). arXiv. <nowiki>http://arxiv.org/abs/1810.00108</nowiki></ref>. Other new approaches have also been proposed, for example the integration of [[Hidden Markov Models|DNN-HMM and MSHMM]] models<ref>Noda, K., Yamaguchi, Y., Nakadai, K., Okuno, H. G., & Ogata, T. (2015). Audio-visual speech recognition using deep learning. ''Applied Intelligence'', ''42''(4), 722–737. <nowiki>https://doi.org/10.1007/s10489-014-0629-7</nowiki></ref>.
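To make the hybrid CTC/attention idea concrete, the following is a minimal sketch in PyTorch-style Python of the joint training objective commonly used with such architectures: an interpolation of a CTC loss on the encoder output and a cross-entropy loss on the attention decoder output. The class name, ctc_weight value and tensor shapes are illustrative assumptions, not the exact configuration from Petridis et al. (2018).

<syntaxhighlight lang="python">
# Hypothetical sketch of a hybrid CTC/attention training objective for an
# audio-visual model. Names, weights and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridCTCAttentionLoss(nn.Module):
    def __init__(self, ctc_weight=0.2, blank_id=0, pad_id=-100):
        super().__init__()
        self.ctc_weight = ctc_weight
        self.pad_id = pad_id
        self.ctc = nn.CTCLoss(blank=blank_id, zero_infinity=True)

    def forward(self, ctc_log_probs, input_lengths, attn_logits, targets, target_lengths):
        # ctc_log_probs: (time, batch, vocab) log-probabilities from the
        #                encoder's CTC head over the fused audio-visual frames.
        # attn_logits:   (batch, target_len, vocab) logits from the attention decoder.
        # targets:       (batch, target_len) token ids, padded with pad_id.
        # Padded positions beyond target_lengths are ignored by the CTC loss,
        # so clamping the pad value to 0 is harmless there.
        loss_ctc = self.ctc(ctc_log_probs, targets.clamp(min=0),
                            input_lengths, target_lengths)
        loss_att = F.cross_entropy(attn_logits.transpose(1, 2), targets,
                                   ignore_index=self.pad_id)
        # Interpolate the two objectives, as in joint CTC/attention training.
        return self.ctc_weight * loss_ctc + (1 - self.ctc_weight) * loss_att
</syntaxhighlight>

In this kind of setup, the CTC branch encourages a monotonic alignment between the fused audio-visual frames and the transcript, while the attention branch models dependencies between output tokens; the interpolation weight is a tunable hyperparameter.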