== State-of-the-art ==
==== Thomas, B., Kessler, S., & Karout, S. (2022). ''Efficient Adapter Transfer of Self-Supervised Speech Models for Automatic Speech Recognition'' (arXiv:2202.03218). arXiv. <nowiki>http://arxiv.org/abs/2202.03218</nowiki> ====
* Summary: The authors insert adapter modules into a pre-trained wav2vec 2.0 model to perform downstream ASR tasks such as multilingual speech recognition. Compared with full fine-tuning, inserting adapters reduces the number of trainable parameters and increases the scalability of the model (a minimal sketch of the adapter idea follows this entry).
* RQ: Does applying adapters to self-supervised speech models yield the same benefits they have shown for NLP models?
* Hypothesis: A wav2vec 2.0 model tuned with adapter modules can perform downstream tasks with little performance degradation.
* Conclusion: Self-supervised speech models can be used in a more parameter-efficient manner without sacrificing performance. A monolingual model such as wav2vec 2.0 can be successfully adapted into a multilingual ASR model, and the multilingual model the authors trained themselves was also able to recognize English and French.
* Critical observations:
** Adapters perform slightly worse than full fine-tuning on English ASR.
** French ASR saw a slight performance increase with adapters.
** Multilingual pre-trained models with adapters also reach performance close to full fine-tuning.
** Adapters add only a small number of additional parameters per task.
* Relevance: This is the first paper to apply adapters to self-supervised speech models for ASR. It shows how adapters can serve as a faster and computationally cheaper way to tune a model for downstream tasks and multi-task settings. It is highly relevant to low-resource ASR because low-resource languages usually have little training data and are prone to overfitting under full fine-tuning; the adapter approach can help prevent such overfitting.
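The sketch below illustrates the general bottleneck-adapter idea referenced above: a small down-projection, non-linearity, and up-projection with a residual connection, trained while the pre-trained backbone stays frozen. The layer sizes, the ReLU activation, and the placement after a generic transformer layer are illustrative assumptions, not the exact configuration from the paper.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add.
    Dimensions here are illustrative, not the paper's settings."""
    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection lets the adapter start near an identity mapping.
        return x + self.up(self.act(self.down(x)))

# Hypothetical usage: freeze a pre-trained encoder layer and train only the
# adapter (and a task-specific head), so only a small fraction of parameters
# is updated per downstream task.
encoder = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
adapter = Adapter(hidden_dim=768)

for p in encoder.parameters():
    p.requires_grad = False           # backbone stays frozen

x = torch.randn(2, 50, 768)           # (batch, frames, features) dummy input
out = adapter(encoder(x))             # adapter applied after the frozen layer
</syntaxhighlight>

Because only the adapter weights are trainable, the per-task parameter count stays small, which is the parameter-efficiency benefit the paper reports.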