==== Jin et al. (2023): Voice-preserving zero-shot multiple accent conversion ====

'''Summary:''' Separating accent from speaker identity is usually the hardest part of accent conversion, because each speaker in the dataset typically has a single accent. Previous attempts to achieve this separation include:
* adversarial learning, in which a discriminator removes speaker-dependent information from the content embeddings;
* quantization of different speech features to obscure undesired information.

The main problem with conventional conversion approaches is that they often require utterances with the same text in both the source and the target accent, which severely limits their applicability. Other approaches instead require training or fine-tuning on the input utterances. The current paper uses a pronunciation encoder, an acoustic encoder, and a HiFi-GAN voice decoder. During training, the model minimises the reconstruction loss between input and output mel-spectrograms. The pronunciation encoder synthesizes accent-dependent pronunciation sequences using accent IDs. The acoustic encoder maps MFCCs and periodicity features to a single vector, while adversarial training removes accent information from this representation. Finally, the decoder reconstructs waveforms from the processed features (see the sketch at the end of this entry). The model is evaluated on audio quality, speaker similarity, and accent conversion effectiveness.

'''Results:''' The model maintains audio quality comparable to the original, preserves speaker similarity, and is effective at reproducing perceived nativeness. However, listeners struggled to identify synthesized accents when they were unfamiliar with the target language (e.g. a native US listener could not classify a Korean accent in English as such, but a bilingual Korean-American listener could). Overall, the paper presents one of the best-performing ACMs, able to preserve both speaker identity and acoustic quality during conversion.

'''Critical observations:''' I think this paper achieves a lot given that it is zero-shot, but I am somewhat critical of just how 'zero-shot' it truly is. The authors use a pre-trained acoustic model, and while they do not require accent labels or speaker IDs, their training set appears to contain over 24 hours of accented speech for every accent they synthesize. Additionally, none of their code is openly available, which is understandable for a private corporation like Meta, but still disappointing.
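Below is a minimal, illustrative PyTorch-style sketch of the training setup described above: a reconstruction loss between input and predicted mel-spectrograms plus an adversarial accent classifier attached to the acoustic embedding through gradient reversal. The module names, dimensions, and the simplified mel-level decoder are placeholders of my own, not the authors' implementation (which is not publicly available).

<syntaxhighlight lang="python">
# Illustrative sketch only (the authors' code is not public): reconstruct the
# input mel-spectrogram from a pronunciation sequence and an acoustic vector,
# while an accent classifier behind a gradient-reversal layer pushes accent
# information out of the acoustic embedding. Sizes are arbitrary placeholders,
# and a mel-predicting GRU stands in for the HiFi-GAN waveform decoder.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Flip the gradient so the encoder learns to *remove* accent cues.
        return -ctx.scale * grad, None


class AccentConversionSketch(nn.Module):
    def __init__(self, n_accents=4, n_mels=80, n_mfcc=13, d=256):
        super().__init__()
        # Pronunciation encoder: content features + accent ID -> pronunciation sequence
        self.accent_emb = nn.Embedding(n_accents, d)
        self.pron_enc = nn.GRU(n_mels + d, d, batch_first=True)
        # Acoustic encoder: MFCCs + periodicity -> a single acoustic vector
        self.acous_enc = nn.GRU(n_mfcc + 1, d, batch_first=True)
        # Adversarial accent classifier applied to the acoustic vector
        self.accent_clf = nn.Linear(d, n_accents)
        # Stand-in decoder: predicts mel-spectrograms instead of waveforms
        self.decoder = nn.GRU(2 * d, d, batch_first=True)
        self.out = nn.Linear(d, n_mels)

    def forward(self, mels, mfcc, periodicity, accent_id):
        # mels: (B, T, n_mels), mfcc: (B, T, n_mfcc), periodicity: (B, T, 1)
        acc = self.accent_emb(accent_id).unsqueeze(1).expand(-1, mels.size(1), -1)
        pron, _ = self.pron_enc(torch.cat([mels, acc], dim=-1))
        _, h = self.acous_enc(torch.cat([mfcc, periodicity], dim=-1))
        acoustic = h[-1]  # (B, d): single vector per utterance
        accent_logits = self.accent_clf(GradientReversal.apply(acoustic, 1.0))
        dec_in = torch.cat([pron, acoustic.unsqueeze(1).expand_as(pron)], dim=-1)
        dec, _ = self.decoder(dec_in)
        return self.out(dec), accent_logits


# Toy training step: reconstruction loss + adversarial accent loss.
model = AccentConversionSketch()
mels = torch.randn(2, 100, 80)
mfcc = torch.randn(2, 100, 13)
periodicity = torch.randn(2, 100, 1)
accent_id = torch.tensor([0, 2])
pred_mels, accent_logits = model(mels, mfcc, periodicity, accent_id)
loss = nn.functional.l1_loss(pred_mels, mels) \
     + nn.functional.cross_entropy(accent_logits, accent_id)
loss.backward()
</syntaxhighlight>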