Deep Learning Revolution
== Historical Context ==

Deep learning is built on the concept of the neural network, which was originally inspired by the neural structure of the human brain. In 1962, Frank Rosenblatt published ''Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms'', in which he introduced the multilayer perceptron (MLP), often regarded as the precursor to modern deep learning.<ref>Tappert, C. C. (2019, December). Who is the father of deep learning? In ''2019 International Conference on Computational Science and Computational Intelligence (CSCI)'' (pp. 343-348). IEEE.</ref> In 1970, Seppo Linnainmaa introduced what is now recognized as the backpropagation algorithm, a foundational technique that underpins modern deep learning frameworks such as PyTorch and TensorFlow.<ref>Schmidhuber, J. (2022). Annotated history of modern AI and deep learning. ''arXiv preprint arXiv:2212.11279''.</ref> In 1986, David E. Rumelhart et al. published an experimental analysis of applying backpropagation to MLPs,<ref>Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. ''Nature'', ''323''(6088), 533-536.</ref> which made backpropagation more widely accepted and renewed researchers' interest in MLPs. Researchers at the time were optimistic about the future of speech recognition; however, early neural networks faced significant computational limitations that hindered their scalability until the past decade.

The rapid ascent of deep learning over the past decade can be attributed mainly to two factors. The first is the exponential growth in computational power, particularly from GPUs.<ref>Sejnowski, T. J. (2018). ''The deep learning revolution''. MIT Press.</ref> A defining moment came in 2012, when a paper describing a deep convolutional neural network named AlexNet reported results that dramatically outperformed other methods in the ImageNet competition.<ref name=":02">Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. ''Advances in Neural Information Processing Systems'', ''25''.</ref> The primary insight of the AlexNet paper was that a neural network's depth is critical to achieving superior performance; that depth, however, comes with computational cost, underscoring the importance of training on GPUs. The second factor is the availability of vast datasets and the big data infrastructure needed to handle them. In language, speech, and vision, a noticeable transition emerged around 2014-2015, when datasets appeared that were orders of magnitude larger than those prevalent in the preceding decade.<ref>Villalobos, P., & Ho, A. (2022). Trends in training dataset sizes. Epoch AI. Retrieved from https://epochai.org/blog/trends-in-training-dataset-sizes</ref>
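
To make the link between the historical work and today's frameworks concrete, the sketch below trains a small multilayer perceptron in PyTorch, where the call to <code>loss.backward()</code> performs the backpropagation step descended from Linnainmaa's and Rumelhart et al.'s work. This is an illustrative example only; the toy data, layer sizes, and hyperparameters are arbitrary assumptions rather than anything drawn from the cited papers.

<syntaxhighlight lang="python">
# Minimal sketch: a multilayer perceptron trained with backpropagation via
# PyTorch autograd. All shapes and hyperparameters here are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 64 samples, 10 input features, binary labels.
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64, 1)).float()

# A small MLP with one hidden layer.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    logits = model(x)
    loss = loss_fn(logits, y)
    loss.backward()   # backpropagation: gradients of the loss w.r.t. all weights
    optimizer.step()  # gradient descent update using those gradients
</syntaxhighlight>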