Deep learning: an introduction for applied mathematicians

Catherine F. Higham, Desmond J. Higham

Research output: Contribution to journal › Article › peer-review

Abstract

Multilayered artificial neural networks are becoming a pervasive tool in a host of application fields. At the heart of this deep learning revolution are familiar concepts from applied and computational mathematics, notably in calculus, approximation theory, optimization and linear algebra. This article provides a very brief introduction to the basic ideas that underlie deep learning from an applied mathematics perspective. Our target audience includes postgraduate and final year undergraduate students in mathematics who are keen to learn about the area. The article may also be useful for instructors in mathematics who wish to enliven their classes with references to the application of deep learning techniques. We focus on three fundamental questions: what is a deep neural network? How is a network trained? What is the stochastic gradient method? We illustrate the ideas with a short MATLAB code that sets up and trains a network. We also show the use of state-of-the-art software on a large-scale image classification problem. We finish with references to the current literature.
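The abstract's three questions (network, training, stochastic gradient) can be illustrated with a tiny sketch. The article's own demonstration is a short MATLAB code; what follows is a hedged Python analogue, not the authors' code: a two-layer sigmoid network on made-up toy data, trained by the stochastic gradient method with back propagation. All names, shapes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only (not the paper's MATLAB code): a tiny
# 2 -> 3 -> 1 sigmoid network trained by the stochastic gradient method.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 10 points in the unit square, labelled by a line
X = rng.random((10, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

# Weights and biases (assumed shapes: hidden layer of 3 units)
W1, b1 = rng.standard_normal((3, 2)), np.zeros(3)
W2, b2 = rng.standard_normal((1, 3)), np.zeros(1)

eta = 0.5  # learning rate (illustrative choice)
for step in range(10000):
    i = rng.integers(len(X))          # pick ONE training point at random
    x, t = X[i], y[i]
    # forward pass
    a1 = sigmoid(W1 @ x + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    # backward pass (chain rule) for the cost 0.5 * (a2 - t)^2
    d2 = (a2 - t) * a2 * (1 - a2)     # output-layer delta
    d1 = (W2.T @ d2) * a1 * (1 - a1)  # hidden-layer delta
    # stochastic gradient step on this single sample
    W2 -= eta * np.outer(d2, a1); b2 -= eta * d2
    W1 -= eta * np.outer(d1, x);  b1 -= eta * d1

# Evaluate on the training set
preds = (sigmoid(W2 @ sigmoid(W1 @ X.T + b1[:, None]) + b2[:, None]) > 0.5).ravel()
print(f"training accuracy: {np.mean(preds == y):.2f}")
```

The key design point, as in the article, is that each update uses the gradient at a single randomly chosen sample rather than the full sum over the data, which is what makes the method "stochastic" and cheap per step.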
Original language: English
Pages (from-to): 860-891
Number of pages: 32
Journal: SIAM Review
Issue number: 4
Early online date: 6 Nov 2019
Publication status: E-pub ahead of print - 6 Nov 2019

Keywords

  • back propagation
  • chain rule
  • convolution
  • image classification
  • neural network
  • overfitting
  • sigmoid
  • stochastic gradient method
