Abstract
Humans readily generalise, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalise what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalises because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.
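The abstract describes binding learned relational predicates to arguments through oscillatory regularities in network firing, so that roles learned in one game can be re-bound to objects from another. The sketch below is a minimal, hypothetical illustration of that idea only (time-slot based role-filler binding); the unit names, the `bind` helper, and the `above` relation are assumptions for illustration and are not the authors' implementation.

```python
# Toy sketch: role-filler binding via temporal asynchrony.
# A relational role and its filler fire in the same time slot; distinct
# role-filler pairs occupy distinct slots, so the same learned roles can be
# re-bound to novel fillers (e.g. Pong objects) without retraining the roles.

from dataclasses import dataclass


@dataclass(frozen=True)
class Unit:
    name: str  # a relational role (e.g. "above-agent-role") or an object filler (e.g. "ball")


def bind(pairs):
    """Assign each (role, filler) pair to its own oscillatory time slot.

    Returns a schedule {time_slot: {units active in that slot}}.
    Units that co-fire within a slot are treated as bound together.
    """
    return {t: {role, filler} for t, (role, filler) in enumerate(pairs)}


# Relational structure acquired from Breakout: above(ball, paddle)
above_agent = Unit("above-agent-role")
above_patient = Unit("above-patient-role")
breakout = bind([(above_agent, Unit("breakout-ball")),
                 (above_patient, Unit("breakout-paddle"))])

# One-shot re-use in Pong: the same roles, new fillers, no retraining of the roles.
pong = bind([(above_agent, Unit("pong-ball")),
             (above_patient, Unit("pong-paddle"))])

for t, active in pong.items():
    print(f"slot {t}: {sorted(u.name for u in active)}")
```

The design point the sketch is meant to convey is that binding lives in the timing (which units co-fire), not in the role or filler representations themselves, which is what allows previously learned predicates to transfer to unseen arguments.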
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 42nd Annual Conference of the Cognitive Science Society |
| Editors | Stephanie Denison, Michael Mack, Yang Xu, Blair C. Armstrong |
| Publisher | Cognitive Science Society |
| Pages | 932-937 |
| Number of pages | 6 |
| ISBN (Print) | 9781713818977 |
| Publication status | Published - 30 Nov 2020 |
Keywords
- predicate learning
- generalisation
- neural networks
- symbolic-connectionism
- neural oscillations