== Impact ==

=== Multimodal Speech Recognition's Influences on Speech Technology ===
Multimodal speech recognition builds on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), two key [[Introduction of Voice Assistants|deep learning]] methods. Audio and image data both play a vital role in feature extraction and fusion. A common strategy is to use CNNs to extract features from each modality: for audio they operate on spectrograms, while for images they identify objects and produce feature vectors that capture important visual details. After feature extraction, models such as RNNs or transformers fuse these features, capturing the connections between the audio and visual streams (a toy sketch of this pipeline is given further below). Training the audio and visual recognizers separately and then linking them together already lowered phone error rates; a bilinear deep neural network that trains both modalities jointly achieved even lower error rates.<ref>Mroueh, Y., Marcheret, E., & Goel, V. (2015). Deep multimodal learning for Audio-Visual Speech Recognition. ''2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'', 2130–2134. <nowiki>https://doi.org/10.1109/ICASSP.2015.7178347</nowiki></ref>

Connectionist Temporal Classification (CTC) is a technique for aligning audio with text, and it has found application in multimodal speech recognition: it lets a system learn when and how audio features correspond to text labels without frame-level annotations (a minimal CTC example is also sketched further below). One experiment compared a CTC bidirectional LSTM acoustic model with a sequence-to-sequence (S2S) model built on visual semantic features and found that the CTC output stays closer to the acoustics of an utterance.<ref>Palaskar, S., Sanabria, R., & Metze, F. (2018). End-to-end Multimodal Speech Recognition. ''2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'', 5774–5778. <nowiki>https://doi.org/10.1109/ICASSP.2018.8462439</nowiki></ref>

Multimodal speech recognition systems also need to adapt to varying environmental noise, adjusting automatically to different conditions and thereby enhancing robustness. Likewise, models need to adapt to the speech characteristics of different speakers; adaptive techniques capture speaker-specific traits and improve recognition performance. Experiments have shown that a multimodal ASR (MMASR) model yields significant gains (up to 4.2% WER improvement) over a traditional speech-to-text architecture in noisy environments.<ref>Srinivasan, T., Sanabria, R., & Metze, F. (2019). ''Analyzing Utility of Visual Context in Multimodal Speech Recognition Under Noisy Conditions''. <nowiki>https://doi.org/10.48550/ARXIV.1907.00477</nowiki></ref>

Traditional multimodal speech recognition systems comprise many independent processing stages, including audio feature extraction, visual feature extraction, and speech recognition. Because each stage is designed separately, the overall system becomes more complex and computationally expensive. The key advantage of [[Development of End-to-End Models|end-to-end]] multimodal models lies in simplifying the entire pipeline while improving performance: audio and image data are fed into a single deep learning model, which learns on its own how to extract speech information from the multimodal inputs.
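The fusion strategy described above, an encoder per modality followed by a recurrent layer that merges the two streams, can be illustrated with a minimal sketch. The example below assumes PyTorch; the layer sizes, feature dimensions, and the <code>SimpleAVFusion</code> name are hypothetical choices for illustration, not the architectures used in the cited papers.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class SimpleAVFusion(nn.Module):
    """Toy audio-visual model: one encoder per modality, fused by a bidirectional LSTM."""

    def __init__(self, n_mels=80, vis_dim=512, hidden=256, n_tokens=40):
        super().__init__()
        # Audio branch: 1-D convolution over log-mel spectrogram frames.
        self.audio_enc = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Visual branch: projection of per-frame visual features
        # (e.g. the output of an image CNN applied to mouth-region crops).
        self.visual_enc = nn.Linear(vis_dim, hidden)
        # Fusion: a recurrent layer over the concatenated streams captures
        # the connections between the audio and visual evidence.
        self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_tokens)

    def forward(self, audio, visual):
        # audio: (batch, time, n_mels); visual: (batch, time, vis_dim), time-aligned.
        a = self.audio_enc(audio.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        v = self.visual_enc(visual)                                # (batch, time, hidden)
        fused, _ = self.fusion(torch.cat([a, v], dim=-1))          # (batch, time, 2*hidden)
        return self.classifier(fused)                              # per-frame token logits


# Example forward pass with random, hypothetical data.
model = SimpleAVFusion()
logits = model(torch.randn(2, 100, 80), torch.randn(2, 100, 512))
print(logits.shape)  # torch.Size([2, 100, 40])
</syntaxhighlight>

In practice the visual features would typically come from a pretrained image CNN, and the audio and visual streams would first have to be resampled to a common frame rate.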
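A minimal sketch of CTC training follows, again assuming PyTorch and its built-in <code>CTCLoss</code>. Random log-probabilities stand in for the per-frame outputs of a model like the one above, and all dimensions are hypothetical; the point is only that CTC needs the transcript and the sequence lengths, not frame-level alignments.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Hypothetical dimensions, for illustration only:
# 100 fused frames, batch of 2, 40 output tokens (index 0 = CTC blank), 20-token transcripts.
T, N, C, S = 100, 2, 40, 20

# Stand-in for the per-frame log-probabilities produced by an audio-visual model.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=-1)
targets = torch.randint(1, C, (N, S))                   # token IDs of the reference transcripts
input_lengths = torch.full((N,), T, dtype=torch.long)   # number of frames per utterance
target_lengths = torch.full((N,), S, dtype=torch.long)  # number of tokens per transcript

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # CTC sums over all valid frame-to-token alignments, so no frame-level labels are needed
</syntaxhighlight>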
Understanding speech from visual signals alone has been of interest for decades. Building on this, researchers proposed a modality attention mechanism that learns how much to rely on each input stream and integrated it into an end-to-end attention-based audio-visual speech recognition (AVSR) system; the reported results show a 36% improvement.<ref>Zhou, P., Yang, W., Chen, W., Wang, Y., & Jia, J. (2019). Modality Attention for End-to-end Audio-visual Speech Recognition. ''ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'', 6565–6569. <nowiki>https://doi.org/10.1109/ICASSP.2019.8683733</nowiki></ref>

=== Multimodal Speech Recognition's Influence in Real Life ===
Firstly, multimodal speech recognition has promoted the development of natural human-computer interaction. Users can engage with computing systems and devices in a more intuitive manner by combining speech, gestural input, and facial expressions; for example, they can explore immersive virtual worlds through voice and gestures.

In the healthcare sector, multimodal speech recognition helps communication between doctors and patients. The technology enables real-time transcription of spoken content and supports the smooth collection of medical diagnoses and patient records. A system can, for instance, recognize the part of the body a patient points to and combine it with the spoken description, producing more accurate and time-saving diagnoses.

The integration of multimodal speech recognition into smart home ecosystems and Internet of Things (IoT) devices has also introduced significant change. Users can control devices such as [[Introduction of Voice Assistants|voice assistants]], lighting systems, and smart locks through spoken commands combined with other sensory modalities. A smart lock, for example, can require a double password, a gesture together with a voice phrase in a specific combination, before the door opens, which better protects the user's security.

Finally, multimodal speech recognition is primarily driven by the pursuit of enhanced robustness. Systems that depend on audio signals alone are easily affected by background noise, differences in articulation between speakers, and other uncertainties. Integrating information from several modalities lets these systems handle such challenges more effectively; without it, some user instructions simply cannot be executed in a noisy environment.