Deep Gate Recurrent Neural Network

Yuan Gao, Dorota Glowacka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper explores the possibility of using multiplicative gates to build two recurrent neural network structures. These two structures, the Deep Simple Gated Unit (DSGU) and the Simple Gated Unit (SGU), are designed for learning long-term dependencies. Compared to traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) architectures, both structures require fewer parameters and less computation time on sequence classification tasks. Unlike GRU and LSTM, which use more than one gate to control information flow in the network, SGU and DSGU use only a single multiplicative gate to control the flow of information. We show that this difference can accelerate learning in tasks that require long-dependency information. We also show that DSGU is more numerically stable than SGU. In addition, we propose a standard way of representing the inner structure of an RNN, called the RNN Conventional Graph (RCG), which helps to analyze the relationship between the input units and hidden units of an RNN.
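To make the single-gate idea concrete, the sketch below shows a minimal recurrent cell whose state update is controlled by one multiplicative gate, in the spirit of the SGU described above. This is an illustrative assumption, not the paper's formulation: the weight names (W_xh, W_hh, W_xg, W_hg) and the interpolation update rule are hypothetical, and the exact SGU and DSGU equations are given in the paper itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SingleGateCell:
    """Hypothetical recurrent cell with a single multiplicative gate.

    A sketch of the single-gate idea from the abstract, not the exact
    SGU/DSGU equations; weight names and the update rule are assumptions.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_size)
        # Weights for the candidate hidden state.
        self.W_xh = rng.uniform(-s, s, (input_size, hidden_size))
        self.W_hh = rng.uniform(-s, s, (hidden_size, hidden_size))
        # Weights for the single gate (an LSTM keeps three such gates,
        # a GRU two; here one gate controls all information flow).
        self.W_xg = rng.uniform(-s, s, (input_size, hidden_size))
        self.W_hg = rng.uniform(-s, s, (hidden_size, hidden_size))

    def step(self, x_t, h_prev):
        # Candidate state from the current input and the previous state.
        h_tilde = np.tanh(x_t @ self.W_xh + h_prev @ self.W_hh)
        # One multiplicative gate decides how much of the candidate
        # replaces the old state.
        g = sigmoid(x_t @ self.W_xg + h_prev @ self.W_hg)
        return (1.0 - g) * h_prev + g * h_tilde

# Example: run the cell over a random sequence of 10 input vectors.
cell = SingleGateCell(input_size=4, hidden_size=8)
h = np.zeros(8)
for x in np.random.default_rng(1).normal(size=(10, 4)):
    h = cell.step(x, h)
```

Using a single gate roughly halves the gate-related parameters relative to a GRU, which is consistent with the abstract's claim of fewer parameters and less computation time.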
Original language: English
Title of host publication: Proceedings of The 8th Asian Conference on Machine Learning
Editors: Robert J. Durrant, Kee-Eung Kim
Place of Publication: The University of Waikato, Hamilton, New Zealand
Publisher: PMLR
Pages: 350-365
Number of pages: 16
Volume: 63
Publication status: Published - 18 Nov 2016
Event: 8th Asian Conference on Machine Learning - Hamilton, New Zealand
Duration: 16 Nov 2016 – 18 Nov 2016
Internet address: http://www.acml-conf.org/2016/

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
Volume: 63
ISSN (Electronic): 2640-3498

Conference

Conference: 8th Asian Conference on Machine Learning
Abbreviated title: ACML 2016
Country/Territory: New Zealand
City: Hamilton
Period: 16/11/16 – 18/11/16
Internet address: http://www.acml-conf.org/2016/
