Abstract
Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics, notably from calculus, approximation theory, optimization, and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final-year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: What is a deep neural network? How is a network trained? What is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also demonstrate the use of state-of-the-art software on a large-scale image classification problem. We finish with references to the current literature.
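As a taste of what the article covers, the sketch below shows how the three questions fit together: a small fully connected network with sigmoid activations is trained by the stochastic gradient method, with the chain rule (back propagation) supplying the gradients. It is written in the spirit of the short MATLAB code mentioned in the abstract, not a copy of it; the layer sizes, learning rate, and toy data set are illustrative assumptions.

```matlab
% Minimal sketch (illustrative, not the authors' code): a tiny network
% with sigmoid activations trained by the stochastic gradient method
% on a toy two-class problem in the plane.
rng(5000);                                   % reproducible initialization
x1 = [0.1 0.3 0.1 0.6 0.4 0.6 0.5 0.9 0.4 0.7];
x2 = [0.1 0.4 0.5 0.9 0.2 0.3 0.6 0.2 0.4 0.6];
y  = [ones(1,5) zeros(1,5); zeros(1,5) ones(1,5)];  % one-hot labels

sigma = @(z) 1./(1+exp(-z));                 % sigmoid activation
W2 = 0.5*randn(2,2);  b2 = 0.5*randn(2,1);   % layer 2 weights and biases
W3 = 0.5*randn(3,2);  b3 = 0.5*randn(3,1);   % layer 3
W4 = 0.5*randn(2,3);  b4 = 0.5*randn(2,1);   % output layer
eta = 0.05;                                  % learning rate (assumed)

for iter = 1:1e5
    k = randi(10);                           % one random training point
    x = [x1(k); x2(k)];
    % forward pass
    a2 = sigma(W2*x  + b2);
    a3 = sigma(W3*a2 + b3);
    a4 = sigma(W4*a3 + b4);
    % backward pass: chain rule for the squared-error cost
    d4 = a4.*(1-a4).*(a4 - y(:,k));
    d3 = a3.*(1-a3).*(W4'*d4);
    d2 = a2.*(1-a2).*(W3'*d3);
    % stochastic gradient updates
    W4 = W4 - eta*d4*a3';  b4 = b4 - eta*d4;
    W3 = W3 - eta*d3*a2';  b3 = b3 - eta*d3;
    W2 = W2 - eta*d2*x';   b2 = b2 - eta*d2;
end
```

Each iteration updates the weights using the gradient at a single randomly chosen training point; that random sampling is precisely the stochastic element of the stochastic gradient method.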
| Original language | English |
| --- | --- |
| Pages (from-to) | 860–891 |
| Number of pages | 32 |
| Journal | SIAM Review |
| Volume | 61 |
| Issue number | 4 |
| Early online date | 6 Nov 2019 |
| DOIs | |
| Publication status | E-pub ahead of print - 6 Nov 2019 |
Keywords
- back propagation
- chain rule
- convolution
- image classification
- neural network
- overfitting
- sigmoid
- stochastic gradient method