== Future research ==

=== Multimodal Fusion ===
[[Multimodal Speech Recognition (2010s)|Multimodal fusion]] refers to combining information from multiple modalities, such as speech, text, images and video, to improve model performance. One application is meeting summarization.<ref>Li, M., Zhang, L., Ji, H., & Radke, R. J. (2019, July). Keep meeting summaries on topic: Abstractive multi-modal meeting summarization. In ''Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics'' (pp. 2190-2196).</ref> Meetings are frequent and often lengthy, and the transcript alone is rarely sufficient for producing a useful summary. Multimodal fusion therefore draws on additional signals such as audio and video to form a more comprehensive view of the discussion; for example, speech intonation can reveal whether a passage involves emotion or disagreement.

=== Zero-shot and Few-shot Learning ===
The continued development of deep neural networks opens the door to zero-shot and few-shot learning for ASR systems, especially in English, allowing models to recognize speech with little or no in-domain training. One example is a study on AphasiaBank, the largest data source for aphasic speech recognition, which nonetheless contains under 100 hours of audio. Despite this, pre-training large models on a universal dataset showed a 22% zero-shot improvement on AphasiaBank.<ref>Xiao, A., Zheng, W., Keren, G., Le, D., Zhang, F., Fuegen, C., ... & Mohamed, A. (2021). Scaling ASR improves zero and few shot learning. ''arXiv preprint arXiv:2111.05948''.</ref> This bodes well for other ASR applications with small resource pools.

=== Improved Accuracy ===
The accuracy of automatic speech recognition systems will continue to improve as deep neural networks are trained and developed further. This is attributed to more robust model architectures as well as growing, higher-quality datasets used to train these models.<ref>Tao, J., Evanini, K., & Wang, X. (2014, December). The influence of automatic speech recognition accuracy on the performance of an automated speech assessment system. In ''2014 IEEE Spoken Language Technology Workshop (SLT)'' (pp. 294-299). IEEE.</ref>

=== Personalization ===
Automatic speech recognition is increasingly tailored to the individual user rather than shipped as an “out of the box” product. Personalization is currently often performed in a server-based training environment, which raises issues such as data privacy, update delays, and computing costs.<ref>Tomanek, K., Beaufays, F., Cattiau, J., Chandorkar, A., & Sim, K. C. (2021). On-device personalization of automatic speech recognition models for disordered speech. ''arXiv preprint arXiv:2106.10259''.</ref> Future research points towards on-device ASR personalization using limited data from the speaker as a possible remedy.
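As a rough illustration of what such on-device personalization might involve, the sketch below freezes a small stand-in acoustic model except for its output layer and fine-tunes that layer on a handful of utterances from one speaker. The model, data and hyperparameters are invented for illustration and are not taken from the cited work.

<syntaxhighlight lang="python">
# Hypothetical sketch of on-device ASR personalization (not the cited paper's method).
# A small "pre-trained" model is frozen except for its final projection layer, which is
# fine-tuned on a few utterances recorded from a single speaker.
import torch
import torch.nn as nn

class TinyASRModel(nn.Module):
    """Stand-in for a pre-trained acoustic model (features -> per-frame token logits)."""
    def __init__(self, n_feats=80, n_tokens=32):
        super().__init__()
        self.encoder = nn.LSTM(n_feats, 128, batch_first=True)  # pretend this is pre-trained
        self.head = nn.Linear(128, n_tokens)                     # speaker-adapted layer

    def forward(self, feats):
        hidden, _ = self.encoder(feats)
        return self.head(hidden).log_softmax(dim=-1)

model = TinyASRModel()

# Freeze the "pre-trained" encoder; only the small head is personalized on-device.
for p in model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
ctc_loss = nn.CTCLoss(blank=0)

# Placeholder personal data: 4 utterances of 100 frames with 80-dim features,
# each paired with a short token sequence (in practice: the speaker's own recordings).
feats = torch.randn(4, 100, 80)
targets = torch.randint(1, 32, (4, 10))
input_lengths = torch.full((4,), 100, dtype=torch.long)
target_lengths = torch.full((4,), 10, dtype=torch.long)

for epoch in range(5):  # a few quick passes, as might fit on a phone
    optimizer.zero_grad()
    log_probs = model(feats).transpose(0, 1)  # CTC expects (time, batch, tokens)
    loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
    loss.backward()
    optimizer.step()
</syntaxhighlight>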
=== Privacy and Security ===
The mass deployment of systems using ASR, notably [[Introduction of Voice Assistants|voice assistants]] such as Alexa and Siri, has brought security concerns around always-on microphones<ref>Sun, K., Chen, C., & Zhang, X. (2020, November). "Alexa, stop spying on me!" Speech privacy protection against voice assistants. In ''Proceedings of the 18th Conference on Embedded Networked Sensor Systems'' (pp. 298-311).</ref> and manipulated inputs.<ref>Abdullah, H., Warren, K., Bindschaedler, V., Papernot, N., & Traynor, P. (2021, May). SoK: The faults in our ASRs: An overview of attacks against automatic speech recognition and speaker identification systems. In ''2021 IEEE Symposium on Security and Privacy (SP)'' (pp. 730-747). IEEE.</ref> Ongoing research is needed to protect this ever-evolving space.

=== Ethical Considerations ===
There are many ethical considerations in the continued development of deep learning for ASR. One prominent and actively researched issue is bias: even well-trained ASR systems often struggle with large variations in speech due to characteristics such as age, gender, race, speech impairments, and accent.<ref>Feng, S., Kudina, O., Halpern, B. M., & Scharenborg, O. (2021). Quantifying bias in automatic speech recognition. ''arXiv preprint arXiv:2103.15122''.</ref> Further training and research with more diverse datasets are needed to combat this issue and make ASR accessible to a wider audience. Similarly, fairness will need to be taken into consideration. Fairness in ASR relates to how equally a system performs across different subgroups of a population.<ref>Veliche, I. E., & Fung, P. (2023, June). Improving fairness and robustness in end-to-end speech recognition through unsupervised clustering. In ''ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'' (pp. 1-5). IEEE.</ref>
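One simple way to make this notion of fairness concrete is to report the word error rate (WER) separately per subgroup and examine the gap between the best- and worst-served groups. The sketch below does this on invented placeholder transcripts; the subgroup labels and sentences are hypothetical and are not drawn from the cited study.

<syntaxhighlight lang="python">
# Illustrative sketch: measuring ASR fairness as the gap in word error rate (WER)
# between speaker subgroups. All data below are invented placeholders.
from collections import defaultdict

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via edit distance between word sequences."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# (subgroup, reference transcript, ASR hypothesis) placeholder examples.
results = [
    ("native",     "turn on the kitchen lights",  "turn on the kitchen lights"),
    ("native",     "set a timer for ten minutes", "set a timer for ten minutes"),
    ("non_native", "turn on the kitchen lights",  "turn on the kitten lights"),
    ("non_native", "set a timer for ten minutes", "set the time for ten minutes"),
]

errors = defaultdict(list)
for group, ref, hyp in results:
    errors[group].append(wer(ref, hyp))

per_group = {g: sum(v) / len(v) for g, v in errors.items()}
print("WER per subgroup:", per_group)
print("Fairness gap (max - min WER):", max(per_group.values()) - min(per_group.values()))
</syntaxhighlight>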