== Impact on the field ==
Voice technology, encompassing Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) synthesis, has been significantly shaped by several of the key innovations in deep learning described in the previous sections.

=== Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) ===
RNNs and LSTMs are particularly suited to sequential data such as speech, making them ideal for ASR. ASR converts spoken language into written text, which requires modelling temporal dependencies in spoken sequences. The ability of LSTMs to capture long-term dependencies has made them an essential component of many state-of-the-art ASR systems.<ref>Graves, A., Mohamed, A. R., & Hinton, G. (2013, May). Speech recognition with deep recurrent neural networks. In ''2013 IEEE International Conference on Acoustics, Speech and Signal Processing'' (pp. 6645-6649). IEEE.</ref><ref>Graves, A., & Jaitly, N. (2014, June). Towards end-to-end speech recognition with recurrent neural networks. In ''International Conference on Machine Learning'' (pp. 1764-1772). PMLR.</ref>

=== Generative Adversarial Networks (GANs) ===
In TTS, GANs have been explored for generating high-quality speech waveforms. Models such as Parallel WaveGAN use adversarial training to convert mel-spectrograms into raw audio waveforms, producing more natural-sounding speech.<ref>Yamamoto, R., Song, E., & Kim, J. M. (2020, May). Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In ''ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'' (pp. 6199-6203). IEEE.</ref>

=== Transfer Learning and Pre-trained Models ===
Transfer learning, especially with large pre-trained models, is increasingly used in voice technology. Although most prominent in NLP, the principle of taking knowledge learned in one domain and applying it in another is finding its way into ASR and TTS. For instance, models pre-trained on vast text datasets can provide a foundational understanding of language, which can then be fine-tuned on specific voice datasets for improved performance.<ref>Li, X., Wang, C., Tang, Y., Tran, C., Tang, Y., Pino, J., ... & Auli, M. (2020). Multilingual speech translation with efficient finetuning of pretrained models. ''arXiv preprint arXiv:2010.12829''.</ref>
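To make the preceding subsections more concrete, the three sketches below illustrate, in turn, recurrent acoustic modelling, adversarial waveform generation, and fine-tuning of a pre-trained model. First, a minimal PyTorch sketch of a bidirectional LSTM acoustic model of the kind described under RNNs and LSTMs above: it maps log-mel frames to per-frame character logits suitable for CTC-style decoding. The layer sizes and the 29-symbol alphabet are illustrative assumptions, not taken from the cited papers.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class LSTMAcousticModel(nn.Module):
    """Bidirectional LSTM mapping acoustic frames to per-frame label logits."""
    def __init__(self, n_mels=80, hidden=256, n_labels=29):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_labels)

    def forward(self, frames):            # frames: (batch, time, n_mels)
        out, _ = self.lstm(frames)        # (batch, time, 2 * hidden)
        return self.proj(out)             # per-frame logits, e.g. for CTC decoding

model = LSTMAcousticModel()
logits = model(torch.randn(4, 100, 80))  # a batch of 100 log-mel frames each
print(logits.shape)                      # torch.Size([4, 100, 29])
</syntaxhighlight>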
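Next, a heavily simplified sketch of the adversarial training loop behind GAN vocoders: a toy generator upsamples mel frames into a waveform and is trained against a toy discriminator. Parallel WaveGAN itself uses stacks of dilated convolutions and an auxiliary multi-resolution STFT loss, both omitted here; the hop size, layer shapes, and random tensors are assumptions for illustration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

HOP = 256  # assumed hop size: one mel frame per 256 waveform samples

# Toy generator: upsamples an 80-band mel-spectrogram to a raw waveform.
G = nn.Sequential(
    nn.ConvTranspose1d(80, 1, kernel_size=2 * HOP, stride=HOP, padding=HOP // 2),
    nn.Tanh())
# Toy discriminator: scores waveforms as real or generated.
D = nn.Sequential(
    nn.Conv1d(1, 32, 15, stride=4, padding=7), nn.LeakyReLU(0.2),
    nn.Conv1d(32, 1, 15, stride=4, padding=7))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

mel = torch.randn(2, 80, 50)          # stand-in batch: 50 mel frames
real = torch.randn(2, 1, 50 * HOP)    # stand-in "real" waveform, matching length

# Discriminator step: push real audio towards 1, generated audio towards 0.
fake = G(mel).detach()
d_real, d_fake = D(real), D(fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: produce audio the discriminator scores as real.
d_gen = D(G(mel))
loss_g = bce(d_gen, torch.ones_like(d_gen))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
</syntaxhighlight>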
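Finally, the transfer-learning recipe above can be sketched in a few lines: take a pre-trained encoder, freeze its weights, and train a small task-specific head on the target voice data. The encoder below is a randomly initialised stand-in; in practice it would be restored from a checkpoint of a large pre-trained model, and the feature and label shapes are invented for the example.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Stand-in for a large pre-trained encoder; in practice its weights would be
# restored from a checkpoint rather than randomly initialised as here.
pretrained_encoder = nn.Sequential(
    nn.Linear(80, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU())

# Freeze the pre-trained weights so only the new task head is updated.
for p in pretrained_encoder.parameters():
    p.requires_grad = False

head = nn.Linear(512, 29)  # small task-specific layer (e.g. character logits)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

features = torch.randn(8, 80)          # stand-in batch of acoustic features
targets = torch.randint(0, 29, (8,))   # stand-in frame labels

logits = head(pretrained_encoder(features))
loss = nn.functional.cross_entropy(logits, targets)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(float(loss))
</syntaxhighlight>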
=== Multilingual Automatic Speech Recognition ===
Advancements in deep learning have led to innovations in multilingual ASR. In particular, they have facilitated the development of multilingual deep neural networks whose hidden layers are shared across languages. The output layers of these DNNs can model either a universal phone set based on the International Phonetic Alphabet or multiple sets of language-specific senones.<ref>Tong, S., Garner, P. N., & Bourlard, H. (2017). ''An investigation of deep neural networks for multilingual speech recognition training and adaptation'' (pp. 714-718).</ref> Both kinds of output have been used in many language-adaptive training techniques. Speech recognition systems built on these multilingual networks consistently yield substantial performance gains, particularly for low-resource languages.<ref>Huang, J. T., Li, J., Yu, D., Deng, L., & Gong, Y. (2013, May). Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In ''2013 IEEE International Conference on Acoustics, Speech and Signal Processing'' (pp. 7304-7308). IEEE.</ref>

=== Low-Latency Automatic Speech Recognition ===
Deep learning techniques such as Amortized Neural Networks (AmNets) have enabled advances in low-latency ASR. In one study, researchers applied AmNets to the Recurrent Neural Network Transducer (RNN-T) to reduce the computation and latency required for recognition.<ref>Macoskey, J., Strimel, G. P., Su, J., & Rastrow, A. (2021). Amortized neural networks for low-latency speech recognition. ''arXiv preprint arXiv:2108.01553''.</ref> Such advances allow ASR to run in near real time without sacrificing accuracy.
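As a rough illustration of the amortization idea, the sketch below routes each frame between a cheap and an expensive branch so that easy frames cost little compute. This is a simplified interpretation of the principle, not the RNN-T-based architecture of the cited AmNets paper; the soft gate and all layer sizes are illustrative assumptions.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class AmortizedEncoderLayer(nn.Module):
    """Route each frame through a cheap or an expensive branch.

    A tiny "arbitrator" scores every incoming frame; difficult frames get the
    high-capacity branch, easy frames the lightweight one, so average compute
    (and hence latency) drops without a uniform loss of accuracy.
    """
    def __init__(self, dim=256):
        super().__init__()
        self.arbitrator = nn.Linear(dim, 1)        # per-frame routing score
        self.cheap = nn.Linear(dim, dim)           # fast branch
        self.expensive = nn.Sequential(            # slower, higher-capacity branch
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                          # x: (batch, time, dim)
        route = torch.sigmoid(self.arbitrator(x))  # (batch, time, 1), in [0, 1]
        # A soft mixture keeps training differentiable; at inference the gate
        # can be hard-thresholded so easy frames skip the expensive branch.
        return route * self.expensive(x) + (1 - route) * self.cheap(x)

layer = AmortizedEncoderLayer()
out = layer(torch.randn(2, 100, 256))  # 100 streaming frames
print(out.shape)                       # torch.Size([2, 100, 256])
</syntaxhighlight>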
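Returning to the multilingual subsection above, the following sketch shows a DNN with hidden layers shared across languages and one language-specific senone output layer per language. The language codes, senone counts, and layer sizes are invented for illustration. Sharing the hidden layers lets acoustic evidence from high-resource languages shape the representation that low-resource heads consume, which is the source of the gains reported above.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class MultilingualDNN(nn.Module):
    """Hidden layers shared across languages, one senone output head each."""
    def __init__(self, senones_per_lang, n_feats=40, hidden=512):
        super().__init__()
        self.shared = nn.Sequential(               # shared across all languages
            nn.Linear(n_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        # One language-specific output layer per language; a single universal
        # IPA-based output layer would replace this dict in the other design.
        self.heads = nn.ModuleDict(
            {lang: nn.Linear(hidden, n) for lang, n in senones_per_lang.items()})

    def forward(self, feats, lang):
        return self.heads[lang](self.shared(feats))

model = MultilingualDNN({"en": 3000, "nl": 2500, "fy": 1800})
logits = model(torch.randn(16, 40), lang="fy")  # low-resource language reuses
print(logits.shape)                             # the shared layers: [16, 1800]
</syntaxhighlight>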