How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and the relations between them. An account of how we might acquire such representations is therefore of central importance for theories of human cognition. We describe how a system can learn structured relational representations from initially unstructured inputs using comparison, sensitivity to time, and a modified Hebbian learning algorithm. We summarize how the model DORA (Discovery of Relations by Analogy) instantiates this approach, which we call predicate learning, and how the model captures several phenomena from cognitive development, relational reasoning, and language processing in the human brain. Predicate learning offers a link between models based on formal languages and models that learn from experience, and provides an existence proof for how structured representations might be learned in the first place. © 2018 Elsevier Inc.
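The abstract's "modified Hebbian learning algorithm" can be illustrated with a minimal sketch. The rule below is a common bounded variant of plain Hebbian learning, in which a connection weight moves toward the post-unit's activation in proportion to the pre-unit's activity; it is offered only as a hypothetical illustration of the general idea, not as DORA's exact update rule, and the function name and learning-rate value are assumptions for the example.

```python
def hebbian_update(w, pre, post, gamma=0.1):
    """One bounded Hebbian step: the weight w moves toward the
    post-synaptic activation in proportion to pre-synaptic activity.
    With activations in [0, 1], weights stay bounded in [0, 1].
    Illustrative sketch only; not DORA's exact learning rule.
    """
    return w + gamma * pre * (post - w)

# Two units that repeatedly fire together: their weight
# approaches 1 geometrically rather than growing without bound.
w = 0.0
for _ in range(50):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 3))  # close to 1 after 50 co-activations
```

The `(post - w)` term is what makes this a "modified" rule: unlike raw Hebbian growth (`gamma * pre * post`), it keeps weights from diverging and lets frequently co-active units saturate toward a stable value.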
- Title of host publication: Psychology of Learning and Motivation
- Editors: Kara D. Federmeier
- Publication status: Published - 2018
- Series name: Psychology of Learning and Motivation
- learning structured representations
- machine learning
- relational reasoning