Constructing Complex Systems Via Activity-Driven Unsupervised Hebbian Self-Organization

James A. Bednar

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)


How can an information processing system as complex and as powerful as the human cerebral cortex be constructed from the limited information available in the genome? Answering this scientific question has the potential to revolutionize how computing systems for manipulating real-world data are designed and built. Based on an extensive array of physiological, anatomical, and imaging data from the primary visual cortex (V1) of mammals, we propose a relatively simple biologically based developmental architecture that accounts for most of the demonstrated functional properties of V1 neurons. Given the overall similarity between cortical regions, and the absence of V1-specific circuitry in the model architecture, we expect similar principles to apply throughout the cerebral cortex. The architecture consists of a network of simple artificial V1 neurons with initially unspecific connections that are modified by Hebbian learning and homeostatic plasticity, driven by input patterns from other neural regions and ultimately from the external world. Through an unsupervised developmental process, the model neurons begin to display the major known functional properties of V1 neurons, including receptive fields and topographic maps selective for all of the major low-level visual feature dimensions, realistic specific lateral connectivity underlying surround modulation and adaptation such as visual aftereffects, realistic behavior with visual contrast, and realistic temporal responses. In each case these relatively complex properties emerge from interactions between simple neurons and between internal and external drivers for neural activity, without any requirement for supervised learning, top-down feedback or reinforcement, neuromodulation, or spike-timing dependent plasticity. 
The model also unifies explanations of a wide variety of phenomena previously considered distinct, with the same adaptation mechanisms leading to both long-term development and short-term plasticity (aftereffects), the same subcortical lateral interactions providing both gain control and an account of the time course of neural responses, and the same cortical lateral interactions leading to complex cell properties, map formation, and surround modulation. This relatively simple architecture thus sets a baseline for explanations of neural function, suggesting that most of the development and function of V1 can be understood as unsupervised learning, and setting the stage for demonstrating the additional effects of higher- or lower-level mechanisms. The architecture also represents a simple, scalable approach for specifying complex data-processing systems in general.
Original language: English
Title of host publication: Growing Adaptive Machines
Subtitle of host publication: Combining Development and Learning in Artificial Neural Networks
Editors: Tara Kowaliw, Nicolas Bredeche, Rene Doursat
Publisher: Springer Berlin Heidelberg
Number of pages: 25
ISBN (Electronic): 978-3-642-55337-0
ISBN (Print): 978-3-642-55336-3
Publication status: Published - 5 Jun 2014

Publication series

Name: Growing Adaptive Machines
ISSN (Print): 1860-949X
ISSN (Electronic): 1860-9503
