TY - CHAP
T1 - Constructing Complex Systems Via Activity-Driven Unsupervised Hebbian Self-Organization
AU - Bednar, James A.
PY - 2014/6/5
Y1 - 2014/6/5
N2 - How can an information processing system as complex and as powerful as the human cerebral cortex be constructed from the limited information available in the genome? Answering this scientific question has the potential to revolutionize how computing systems for manipulating real-world data are designed and built. Based on an extensive array of physiological, anatomical, and imaging data from the primary visual cortex (V1) of mammals, we propose a relatively simple biologically based developmental architecture that accounts for most of the demonstrated functional properties of V1 neurons. Given the overall similarity between cortical regions, and the absence of V1-specific circuitry in the model architecture, we expect similar principles to apply throughout the cerebral cortex. The architecture consists of a network of simple artificial V1 neurons with initially unspecific connections that are modified by Hebbian learning and homeostatic plasticity, driven by input patterns from other neural regions and ultimately from the external world. Through an unsupervised developmental process, the model neurons begin to display the major known functional properties of V1 neurons, including receptive fields and topographic maps selective for all of the major low-level visual feature dimensions, realistic specific lateral connectivity underlying surround modulation and adaptation phenomena such as visual aftereffects, realistic behavior with visual contrast, and realistic temporal responses. In each case these relatively complex properties emerge from interactions between simple neurons and between internal and external drivers of neural activity, without any requirement for supervised learning, top-down feedback or reinforcement, neuromodulation, or spike-timing-dependent plasticity.
The model also unifies explanations of a wide variety of phenomena previously considered distinct, with the same adaptation mechanisms leading to both long-term development and short-term plasticity (aftereffects), the same subcortical lateral interactions providing both gain control and accounting for the time course of neural responses, and the same cortical lateral interactions leading to complex cell properties, map formation, and surround modulation. This relatively simple architecture thus sets a baseline for explanations of neural function, suggesting that most of the development and function of V1 can be understood as unsupervised learning, and setting the stage for demonstrating the additional effects of higher- or lower-level mechanisms. The architecture also represents a simple, scalable approach for specifying complex data-processing systems in general.
U2 - 10.1007/978-3-642-55337-0_7
DO - 10.1007/978-3-642-55337-0_7
M3 - Chapter (peer-reviewed)
SN - 978-3-642-55336-3
VL - 557
T3 - Growing Adaptive Machines
SP - 201
EP - 225
BT - Growing Adaptive Machines
A2 - Kowaliw, Taras
A2 - Bredeche, Nicolas
A2 - Doursat, René
PB - Springer
ER -