Deep Gate Recurrent Neural Network

Yuan Gao, Dorota Glowacka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper explores the possibility of using multiplicative gates to build two recurrent neural network structures for learning long-term dependencies: the Deep Simple Gated Unit (DSGU) and the Simple Gated Unit (SGU). Compared to the traditional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), both structures require fewer parameters and less computation time in sequence classification tasks. Unlike GRU and LSTM, which use more than one gate to control information flow in the network, SGU and DSGU use only one multiplicative gate. We show that this difference can accelerate learning in tasks that require long-range dependency information, and that DSGU is more numerically stable than SGU. In addition, we propose a standard way of representing the inner structure of RNNs, called the RNN Conventional Graph (RCG), which helps to analyze the relationship between the input units and hidden units of an RNN.
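To illustrate the single-gate idea described above, here is a minimal NumPy sketch of a recurrent cell with one multiplicative gate. The exact SGU/DSGU update equations are not given in this record, so the particular form below (one gate interpolating between the previous hidden state and a tanh candidate) is an assumption for illustration; the parameter names and shapes are likewise hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def single_gate_step(x_t, h_prev, params):
    """One step of a hypothetical single-multiplicative-gate recurrent cell.

    Assumed form (not the paper's exact equations): a single gate z_t
    interpolates between the previous hidden state and a tanh candidate,
    h_t = z_t * h_tilde + (1 - z_t) * h_prev.
    """
    Wz, Uz, Wh, Uh = params
    z_t = sigmoid(x_t @ Wz + h_prev @ Uz)      # the single multiplicative gate
    h_tilde = np.tanh(x_t @ Wh + h_prev @ Uh)  # candidate hidden state
    return z_t * h_tilde + (1.0 - z_t) * h_prev

# Toy usage: 3 timesteps, input dim 4, hidden dim 5.
rng = np.random.default_rng(0)
shapes = [(4, 5), (5, 5), (4, 5), (5, 5)]
params = tuple(rng.standard_normal(s) * 0.1 for s in shapes)
h = np.zeros(5)
for t in range(3):
    h = single_gate_step(rng.standard_normal(4), h, params)
```

A cell of this shape needs only two weight blocks (gate and candidate), versus three for a GRU and four for an LSTM, which is consistent with the abstract's claim of fewer parameters and less computation.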
Original language: English
Title of host publication: Proceedings of the 8th Asian Conference on Machine Learning
Editors: Robert J. Durrant, Kee-Eung Kim
Place of publication: The University of Waikato, Hamilton, New Zealand
Number of pages: 16
Publication status: Published - 18 Nov 2016
Event: 8th Asian Conference on Machine Learning - Hamilton, New Zealand
Duration: 16 Nov 2016 - 18 Nov 2016

Publication series

Name: Proceedings of Machine Learning Research
ISSN (Electronic): 2640-3498


Conference: 8th Asian Conference on Machine Learning
Abbreviated title: ACML 2016
Country/Territory: New Zealand


