Deep Learning Revolution

== Introduction ==
Machine learning is the science of finding patterns in data, while deep learning, a subset of machine learning inspired by the structure of the human brain, uses layered neural networks to process raw data and learn features automatically. Deep learning has revolutionized the field of speech recognition. Where the field was previously dominated by Hidden Markov Models ([[Hidden Markov Models|HMMs]]), key innovations such as RNNs, LSTMs, and Transformers have dramatically reshaped the landscape of speech technology. These technologies have enabled applications such as voice assistants, transcription services, and voice-controlled devices. Future research in deep learning for speech recognition is likely to focus on making these foundations more robust, adapting to different accents and acoustic environments, and exploring multimodal fusion, such as combining audio and visual data for higher accuracy.


== Historical Context ==
Deep learning is based on the concept of the neural network, which was originally inspired by the neural structure of the human brain. In 1962, Frank Rosenblatt published "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms," in which he introduced the multilayer perceptron (MLP), often regarded as the precursor to modern deep learning.<ref>Tappert, C. C. (2019, December). Who is the father of deep learning?. In ''2019 International Conference on Computational Science and Computational Intelligence (CSCI)'' (pp. 343-348). IEEE.</ref> In 1970, Seppo Linnainmaa introduced what we now recognize as the backpropagation algorithm, a foundational technique that underpins modern deep learning frameworks such as PyTorch and TensorFlow.<ref>Schmidhuber, J. (2022). Annotated history of modern AI and Deep learning. ''arXiv preprint arXiv:2212.11279''.</ref> In 1986, David E. Rumelhart et al. published an experimental analysis of applying backpropagation to MLPs,<ref>Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. ''Nature'', ''323''(6088), 533-536.</ref> which made backpropagation more widely accepted and raised researchers' interest in MLPs. At the time, researchers were very optimistic about the future of speech recognition. However, early neural networks faced significant computational limitations, which hindered their scalability until roughly the past decade.
 
The rapid ascent of deep learning over the past decade can be attributed mainly to two factors. The first is the exponential surge in computational power, particularly from GPUs.<ref>Sejnowski, T. J. (2018). ''The deep learning revolution''. MIT press.</ref> A defining moment came in 2012 with the publication of AlexNet, a deep convolutional neural network that dramatically outperformed other methods in the ImageNet competition.<ref name=":02">Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. ''Advances in neural information processing systems'', ''25''.</ref> The primary insight from the AlexNet paper was the critical importance of a network's depth for achieving superior performance. This depth, however, came with computational intensity, underscoring the importance of training on GPUs. The second factor is the availability of vast datasets and the related big data infrastructure. In the domains of language, speech, and vision, a noticeable transition emerged around 2014-2015, when datasets appeared that were orders of magnitude larger than those prevalent in the preceding decade.<ref>Pablo Villalobos and Anson Ho (2022), "Trends in Training Dataset Sizes". Published online at epochai.org. Retrieved from: '<nowiki>https://epochai.org/blog/trends-in-training-dataset-sizes'</nowiki> [online resource]</ref>
 
== Key Innovations ==
Modern deep learning can be characterized by several key innovations.


=== Convolutional Neural Networks (CNNs) ===
One of the most influential network architectures in computer vision is the Convolutional Neural Network (CNN). While modern CNNs trace their origins to Yann LeCun’s LeNet-5, initially designed for digit recognition, successors such as AlexNet and ResNet have become far more sophisticated, achieving significant success in image classification challenges.<ref name=":02" /> Although CNNs are primarily associated with image processing, they have also found great success in speech recognition, where they operate on spectrograms of audio data. A spectrogram is a 2D visual representation of an audio signal's frequency content over time, so it can be processed by a CNN much like an image. From these spectrograms, CNNs can extract frequency patterns, phoneme cues, and other acoustic features.
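As a rough illustration (not drawn from any cited system), the following PyTorch sketch treats a batch of mel-spectrograms as single-channel images and passes them through a small convolutional stack; the layer sizes, number of mel bands, and class count are arbitrary choices made for the example.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Toy CNN that classifies a (1 x n_mels x n_frames) spectrogram."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # local time-frequency patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample time and frequency
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # collapse to one vector per clip
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spectrogram):
        x = self.features(spectrogram)
        return self.classifier(x.flatten(1))

# A batch of 4 fake spectrograms: 1 channel, 80 mel bands, 200 time frames.
dummy = torch.randn(4, 1, 80, 200)
logits = SpectrogramCNN()(dummy)
print(logits.shape)  # torch.Size([4, 10])
</syntaxhighlight>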


=== Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) ===
RNNs are a category of neural networks designed for sequential data. They are particularly useful in ASR because of their ability to model longer-distance context than word n-gram models.<ref>Arisoy, E., Sethy, A., Ramabhadran, B., & Chen, S. (2015, April). Bidirectional recurrent neural network language models for automatic speech recognition. In ''2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'' (pp. 5421-5425). IEEE.</ref> LSTMs, a specific type of RNN, address the vanishing gradient problems of traditional RNNs.<ref>Graves, A. (2012). Long short-term memory. ''Supervised sequence labelling with recurrent neural networks'', 37-45.</ref> LSTMs excel at handling and modeling sequential data, and speech signals are highly sequential. They can also model the probabilistic relationships between words in a sentence, which helps improve recognition accuracy.
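A minimal sketch of how an LSTM consumes a sequence of acoustic feature frames and produces per-frame scores (for instance over phoneme classes). The feature dimension, hidden size, and number of classes below are illustrative assumptions rather than values from a particular system.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Illustrative sizes only: 40-dimensional feature frames, 64 hidden units, 30 output classes.
lstm = nn.LSTM(input_size=40, hidden_size=64, num_layers=2, batch_first=True)
frame_classifier = nn.Linear(64, 30)

features = torch.randn(8, 120, 40)        # batch of 8 utterances, 120 frames each
outputs, (h_n, c_n) = lstm(features)      # outputs: (8, 120, 64), one hidden state per frame
frame_scores = frame_classifier(outputs)  # (8, 120, 30): per-frame class scores
print(frame_scores.shape)
</syntaxhighlight>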


=== Transformers and Attention Mechanisms ===
Traditional RNNs and LSTMs process sequences step by step, which makes them slow for some applications. The transformer architecture, introduced by Vaswani et al., uses a mechanism called “attention” to weigh the importance of different parts of an input sequence when generating an output sequence. This structure allows sequences to be processed in parallel, resulting in substantially reduced training times compared to earlier RNNs and LSTMs.<ref>Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. ''Advances in neural information processing systems'', ''30''.</ref> Transformers and attention mechanisms are especially useful for handling long audio sequences and complex language modeling.
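The core attention computation is compact enough to show directly. The sketch below implements the scaled dot-product attention of Vaswani et al.; the tensor shapes are chosen purely for illustration.

<syntaxhighlight lang="python">
import math
import torch

def scaled_dot_product_attention(query, key, value):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V."""
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)  # similarity of each query to each key
    weights = torch.softmax(scores, dim=-1)                  # attention weights sum to 1 per query
    return weights @ value

# One "sentence" of 6 positions with 16-dimensional representations.
q = k = v = torch.randn(1, 6, 16)   # self-attention: queries, keys, values from the same sequence
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 6, 16])
</syntaxhighlight>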


=== Generative Adversarial Networks (GANs) ===
GANs consist of two networks: a generator that creates data and a discriminator that evaluates it. Through their competition, GANs can produce remarkably realistic synthetic data, ranging from art pieces to high-resolution images.<ref>Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. ''Advances in neural information processing systems'', ''27''.</ref> GANs are not usually used directly for speech recognition, but they have been applied successfully to generating realistic audio data. They have proven particularly useful for accent adaptation in ASR training, as well as for audio enhancement and denoising.
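The two-network setup can be sketched as follows: a generator maps noise to synthetic feature vectors while a discriminator tries to tell them apart from real ones, and the two are trained in alternation. The dimensions and random "real" data below are placeholders, not a working speech GAN.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 128))     # noise -> fake feature vector
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))  # feature vector -> real/fake logit

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

real = torch.randn(16, 128)   # stand-in for real feature frames
noise = torch.randn(16, 32)

# Discriminator step: label real samples 1, generated samples 0.
fake = generator(noise).detach()
d_loss = bce(discriminator(real), torch.ones(16, 1)) + bce(discriminator(fake), torch.zeros(16, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: try to make the discriminator label generated samples as real.
g_loss = bce(discriminator(generator(noise)), torch.ones(16, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
print(float(d_loss), float(g_loss))
</syntaxhighlight>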


=== Transfer Learning and Pre-trained Models ===
Deep learning models, especially those used for NLP tasks, can have millions or even billions of parameters. Training such models from scratch requires extensive computational resources. Transfer learning circumvents this by leveraging pre-trained models. These models, trained on vast datasets, can be fine-tuned with a smaller amount of task-specific data, accelerating development and boosting performance.<ref>Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. ''arXiv preprint arXiv:1810.04805''.</ref> <ref>Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.</ref> Transfer learning and pre-trained models have been crucial to the enhancement of ASR systems, primarily by offering significant improvements in accuracy, efficiency, and the ability to adapt models to various tasks.
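A minimal sketch of the fine-tuning idea: reuse a pre-trained encoder, freeze its parameters, and train only a small task-specific head on a modest amount of new data. The encoder below is a stand-in module rather than an actual released pre-trained model, and all sizes are invented for the example.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Stand-in for a large pre-trained encoder (in practice, loaded from a checkpoint).
pretrained_encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))

# Freeze the pre-trained weights so only the new head is updated.
for param in pretrained_encoder.parameters():
    param.requires_grad = False

task_head = nn.Linear(256, 12)   # new layer for the downstream task (12 classes, arbitrary)
optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(32, 80)            # small amount of task-specific data
labels = torch.randint(0, 12, (32,))

logits = task_head(pretrained_encoder(features))
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
</syntaxhighlight>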


== Impact on the field ==
Voice technology, encompassing Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) synthesis, has been significantly impacted by some of the previous key innovations in deep learning.
 
=== Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) ===
RNNs and LSTMs are particularly suited to sequential data like speech, making them ideal for ASR. ASR involves converting spoken language into written text, which requires understanding temporal dependencies in spoken sequences. The ability of LSTMs to capture long-term dependencies in sequences has made them an essential component of many state-of-the-art ASR systems.<ref>Graves, A., Mohamed, A. R., & Hinton, G. (2013, May). Speech recognition with deep recurrent neural networks. In ''2013 IEEE international conference on acoustics, speech and signal processing'' (pp. 6645-6649). IEEE.</ref>
 
=== Generative Adversarial Networks (GANs) ===
In the field of TTS, GANs have been explored for generating high-quality speech waveforms. Neural vocoders such as Parallel WaveGAN use GAN training to convert mel-spectrograms into raw audio waveforms, producing more natural-sounding speech.<ref>Yamamoto, R., Song, E., & Kim, J. M. (2020, May). Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In ''ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'' (pp. 6199-6203). IEEE.</ref>
 
=== Transfer Learning and Pre-trained Models ===
Transfer learning, especially with large pre-trained models, is increasingly used in voice technologies. While it is more established in NLP, the principle of taking knowledge from one domain and applying it to another is finding its way into ASR and TTS. For instance, models pre-trained on vast text datasets can provide a foundational understanding of language, which can then be fine-tuned on specific voice datasets for improved performance.<ref>Li, X., Wang, C., Tang, Y., Tran, C., Tang, Y., Pino, J., ... & Auli, M. (2020). Multilingual speech translation with efficient finetuning of pretrained models. ''arXiv preprint arXiv:2010.12829''.</ref>
 
=== Multilingual Automatic Speech Recognition ===
Advancements in deep learning have led to innovations in multilingual Automatic Speech Recognition (ASR). In particular, they have enabled multilingual deep neural networks with hidden layers shared across multiple languages. The output layers of these DNNs can model either a universal phone set based on the International Phonetic Alphabet or multiple sets of language-specific senones.<ref>Tong, S., Garner, P. N., & Bourlard, H. (2017). ''An investigation of deep neural networks for multilingual speech recognition training and adaptation'' (No. CONF, pp. 714-718).</ref> These outputs have been used in many language-adaptive training techniques. Speech recognition systems leveraging such multilingual deep neural networks consistently yield substantial performance improvements, particularly for lower-resourced languages.<ref>Huang, J. T., Li, J., Yu, D., Deng, L., & Gong, Y. (2013, May). Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In ''2013 IEEE international conference on acoustics, speech and signal processing'' (pp. 7304-7308). IEEE.</ref>
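A simplified sketch of the shared-hidden-layer idea: one stack of hidden layers is shared across languages, while each language gets its own output layer (for instance over language-specific senones). The layer sizes, language codes, and senone counts below are invented for illustration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class MultilingualDNN(nn.Module):
    """Shared hidden layers with one language-specific output layer per language."""
    def __init__(self, n_inputs=40, n_hidden=256, senones_per_language=None):
        super().__init__()
        senones_per_language = senones_per_language or {"en": 500, "nl": 450, "zh": 600}
        self.shared = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        # One output head per language, all trained on top of the same shared layers.
        self.heads = nn.ModuleDict({
            lang: nn.Linear(n_hidden, n_out) for lang, n_out in senones_per_language.items()
        })

    def forward(self, frames, language):
        return self.heads[language](self.shared(frames))

model = MultilingualDNN()
dutch_scores = model(torch.randn(4, 40), language="nl")
print(dutch_scores.shape)  # torch.Size([4, 450])
</syntaxhighlight>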


=== Low-Latency Automatic Speech Recognition ===
Deep learning techniques such as Amortized Neural Networks (AmNets) have enabled advances in low-latency ASR. In one study, researchers applied AmNets to the Recurrent Neural Network Transducer (RNN-T) to reduce the computation and latency required for ASR.<ref>Macoskey, J., Strimel, G. P., Su, J., & Rastrow, A. (2021). Amortized neural networks for low-latency speech recognition. ''arXiv preprint arXiv:2108.01553''.</ref> Such advances allow ASR to run in virtually real time without sacrificing accuracy.
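The general mechanism behind low-latency recognition can be illustrated with a generic streaming loop (this is a simplification, not an implementation of AmNets or the RNN-T): features are processed in small chunks as they arrive, with the recurrent state carried across chunks so partial results can be emitted immediately.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Generic streaming illustration: sizes are arbitrary, data is random.
lstm = nn.LSTM(input_size=40, hidden_size=64, batch_first=True)
frame_classifier = nn.Linear(64, 30)

state = None
for step in range(5):                            # pretend 5 chunks of audio arrive over time
    chunk = torch.randn(1, 10, 40)               # 10 new feature frames
    with torch.no_grad():
        out, state = lstm(chunk, state)          # reuse the state from previous chunks
        partial_scores = frame_classifier(out)   # emit partial results immediately
    print("chunk", step, partial_scores.shape)
</syntaxhighlight>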


== Future research ==


=== Multimodal Fusion ===
[[Multimodal Speech Recognition (2010s)|Multimodal fusion]] refers to combining information from multiple modalities, including voice, text, image, and video, to improve model performance. One application is meeting summarization.<ref>Li, M., Zhang, L., Ji, H., & Radke, R. J. (2019, July). Keep meeting summaries on topic: Abstractive multi-modal meeting summarization. In ''Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics'' (pp. 2190-2196).</ref> Many employees are burdened by frequent meetings and lengthy content, and the meeting transcript alone is often insufficient for producing a good summary. Multimodal fusion therefore draws on additional signals, such as video and audio, to form a more comprehensive picture; for example, it can pick up on speech intonation and discern whether a discussion involves emotion or disagreement.
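In its simplest form, fusion can be sketched as encoding each modality separately and concatenating the resulting embeddings before a shared prediction layer. The feature sizes and the three-way output below are invented for illustration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy late-fusion example: separate encoders per modality, concatenated before a classifier.
audio_encoder = nn.Sequential(nn.Linear(40, 64), nn.ReLU())    # e.g. acoustic features
video_encoder = nn.Sequential(nn.Linear(512, 64), nn.ReLU())   # e.g. per-frame visual features
classifier = nn.Linear(64 + 64, 3)                             # e.g. neutral / agreement / disagreement

audio = torch.randn(8, 40)
video = torch.randn(8, 512)
fused = torch.cat([audio_encoder(audio), video_encoder(video)], dim=-1)
print(classifier(fused).shape)  # torch.Size([8, 3])
</syntaxhighlight>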
 
=== Zero-shot and Few-shot Learning ===
The continued development of deep neural networks allows for potential enhancements of zero-shot and few-shot learning in ASR systems, especially for English. This would allow ASR systems to recognize speech with little to no task-specific training. One example is a study conducted on AphasiaBank, the largest data source for aphasic speech recognition. Despite being the largest such resource, AphasiaBank contains under 100 hours of audio data. Even so, pre-training large models on a universal dataset yields a 22% zero-shot improvement on AphasiaBank.<ref>Xiao, A., Zheng, W., Keren, G., Le, D., Zhang, F., Fuegen, C., ... & Mohamed, A. (2021). Scaling asr improves zero and few shot learning. ''arXiv preprint arXiv:2111.05948''.</ref> This bodes well for other ASR applications with small resource pools.
 
=== Improved Accuracy ===
The accuracy of automatic speech recognition systems will continue to improve alongside the continued training and development of deep neural networks. This improvement is attributed to more robust model architectures as well as growing, higher-quality datasets used to train these models.<ref>Tao, J., Evanini, K., & Wang, X. (2014, December). The influence of automatic speech recognition accuracy on the performance of an automated speech assessment system. In ''2014 IEEE Spoken Language Technology Workshop (SLT)'' (pp. 294-299). IEEE.</ref>
 
=== Personalization ===
Automatic speech recognition continues to become more tailored to the individual user rather than being an “out of the box” product. Currently, personalization is often done in a server-based training environment; however, this poses issues such as data privacy, update delays, and computing costs.<ref>Tomanek, K., Beaufays, F., Cattiau, J., Chandorkar, A., & Sim, K. C. (2021). On-device personalization of automatic speech recognition models for disordered speech. ''arXiv preprint arXiv:2106.10259''.</ref> Future research points towards on-device ASR personalization using limited data from the speaker as a possible remedy.
 
=== Privacy and Security ===
The mass deployment of systems using ASR, most notably [[Introduction of Voice Assistants|voice assistants]] such as Alexa and Siri, has brought security concerns about always-on microphones<ref>Sun, K., Chen, C., & Zhang, X. (2020, November). "Alexa, stop spying on me!" speech privacy protection against voice assistants. In ''Proceedings of the 18th conference on embedded networked sensor systems'' (pp. 298-311).</ref> and manipulated inputs.<ref>Abdullah, H., Warren, K., Bindschaedler, V., Papernot, N., & Traynor, P. (2021, May). Sok: The faults in our asrs: An overview of attacks against automatic speech recognition and speaker identification systems. In ''2021 IEEE symposium on security and privacy (SP)'' (pp. 730-747). IEEE.</ref> Ongoing research is needed to protect this ever-evolving space.
 
=== Ethical Considerations ===
There are many ethical considerations in the continued development of deep learning for ASR. One prominent issue under ongoing research is bias. Even well-trained ASR systems often struggle with large variations in speech caused by characteristics such as age, gender, race, speech impairment, and accent.<ref>Feng, S., Kudina, O., Halpern, B. M., & Scharenborg, O. (2021). Quantifying bias in automatic speech recognition. ''arXiv preprint arXiv:2103.15122''.</ref> Further training and research with more diverse datasets is needed to combat this issue and make ASR accessible to a wider audience. Similarly, fairness must be taken into account. Fairness in ASR relates to how equally a system performs across different subgroups of a population.<ref>Veliche, I. E., & Fung, P. (2023, June). Improving Fairness and Robustness in End-to-End Speech Recognition Through Unsupervised Clustering. In ''ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)'' (pp. 1-5). IEEE.</ref>
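One common way to quantify this is to compute word error rate (WER) separately for each subgroup and compare the results. The sketch below uses a standard edit-distance WER and a tiny hypothetical result set; in practice the transcripts would come from a labeled evaluation corpus with subgroup annotations.

<syntaxhighlight lang="python">
def word_error_rate(reference, hypothesis):
    """Standard WER: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical per-subgroup results; real evaluations use many utterances per group.
results = {
    "group_a": [("turn the lights on", "turn the lights on")],
    "group_b": [("turn the lights on", "turn the light song")],
}
for group, pairs in results.items():
    wers = [word_error_rate(r, h) for r, h in pairs]
    print(group, sum(wers) / len(wers))
</syntaxhighlight>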


== LLM Review ==

== References ==
<references />

== Contributors ==
Yuxing (Patrick) Ouyang, Xiaoling (River) Lin, Brandi Hongell