==== Yang, M., Tjandra, A., Liu, C., Zhang, D., Le, D., & Kalinli, O. (2023, June). Learning ASR pathways: A sparse multilingual ASR model. In ''ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'' (pp. 1-5). IEEE. ====
* Summary: This paper proposes ASR pathways, a sparse multilingual ASR model that activates language-specific sub-networks ("pathways") within a single shared parameter set. Language-specific pruning masks are learned through iterative magnitude pruning (IMP) or a Lottery Ticket Hypothesis (LTH)-style procedure, so the model avoids the performance drops on low-resource languages that language-agnostic pruning can cause, while the parameters shared across pathways still allow knowledge transfer. The approach shows promise for scaling speech recognition across diverse languages, including those with limited resources.
* RQ: How can neural network pruning be applied to multilingual Automatic Speech Recognition (ASR) without significantly degrading recognition performance on certain languages, given that language-agnostic pruning may discard important language-specific parameters?
* Hypothesis: A sparse multilingual ASR model, ASR pathways, can activate language-specific sub-networks or "pathways" for different languages. This enables language-specific optimization alongside shared learning of overlapping parameters, with joint multilingual training particularly benefiting lower-resource languages.
* Conclusion: ASR pathways, which uses sparse sub-networks tailored to specific languages within a unified parameter set, outperforms both dense models and language-agnostically pruned models. It also improves on monolingual sparse models for low-resource languages, demonstrating that this sparse multilingual framework achieves efficient and robust speech recognition across multiple languages.
* Critical observations: Language-specific pruning masks, obtained through IMP or LTH, are crucial to the model's success. They preserve essential language-specific parameters while still benefiting from shared knowledge, allowing the model to maintain or even improve performance across languages. The LTH approach in particular performed best, even with fewer total effective parameters, highlighting the importance of initial parameter selection in the pruning process (a minimal sketch of this masking scheme follows the list).
* Relevance: The parameters shared between language-specific pathways enable knowledge transfer during joint multilingual training, which is especially beneficial for languages with limited training data. The improved performance on low-resource languages compared to monolingual sparse models underlines the potential of this method to bring high-quality ASR to low-resource settings.
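To make the pathway mechanism concrete, below is a minimal, illustrative PyTorch sketch of language-specific binary masks over one shared layer, with a toy iterative-magnitude-pruning loop and LTH-style weight rewinding. Everything here is an assumption for illustration, not the paper's implementation: the class and function names (<code>PathwayLinear</code>, <code>magnitude_mask</code>), the pruning schedule, and the regression loss standing in for an actual ASR training objective such as CTC.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the (1 - sparsity) fraction of largest-magnitude weights."""
    n_keep = max(1, int(weight.numel() * (1.0 - sparsity)))
    threshold = weight.abs().flatten().topk(n_keep).values.min()
    return (weight.abs() >= threshold).float()

class PathwayLinear(nn.Module):
    """A linear layer whose weights are shared across languages,
    with one fixed binary mask (a 'pathway') per language."""
    def __init__(self, in_dim: int, out_dim: int, languages: list):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Start dense; the masks are tightened by the IMP loop below.
        self.masks = {lang: torch.ones_like(self.linear.weight) for lang in languages}

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        # Only this language's sub-network is active; entries kept by several
        # masks are the shared parameters that enable cross-lingual transfer.
        return F.linear(x, self.linear.weight * self.masks[lang], self.linear.bias)

langs = ["en", "fr"]
layer = PathwayLinear(16, 8, langs)
initial_state = {k: v.clone() for k, v in layer.state_dict().items()}  # for LTH rewinding
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

target_sparsity, n_rounds = 0.7, 3
for r in range(1, n_rounds + 1):
    for lang in langs:
        # Stand-in batch for this language; a real system would train an
        # ASR model (e.g. with a CTC loss) on that language's speech here.
        x, y = torch.randn(32, 16), torch.randn(32, 8)
        loss = F.mse_loss(layer(x, lang), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Prune a growing fraction of this language's pathway each round.
        layer.masks[lang] = magnitude_mask(layer.linear.weight, target_sparsity * r / n_rounds)
    # LTH-style rewind: reset the shared weights to their initial values, keep the masks.
    layer.load_state_dict(initial_state)
# After the final round, the masked model would be trained jointly on all
# languages with the masks held fixed (not shown).
</syntaxhighlight>

The design point the sketch makes is that every language's mask indexes into the same weight tensor, so memory stays close to that of a single dense model while each language effectively gets its own sub-network, and the overlap between masks is where cross-lingual knowledge transfer happens.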