Encoding words into a Potts attractor network

Sahar Pirmoradian, Alessandro Treves, Cognitive Neuroscience Sector SISSA

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


To understand the brain mechanisms underlying language phenomena, and sentence construction in particular, a number of approaches have been followed that are based on artificial neural networks, in which words are encoded as distributed patterns of activity. Still, issues such as the distinct encoding of semantic vs. syntactic features, word binding, and the learning processes through which words come to be encoded in this way have remained tough challenges. We explore a novel approach to address these challenges, which focuses first on encoding the words of an artificial language of intermediate complexity (BLISS) into a Potts attractor network. Such a network has the capability to spontaneously latch between attractor states, offering a simplified cortical model of sentence production. The network stores the BLISS vocabulary, and hopefully its grammar, in its semantic and syntactic subnetworks. Function and content words are encoded differently on the two subnetworks, as suggested by neuropsychological findings. We propose that a next step might describe the self-organization of a comparable representation of words through a model of a learning process.
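The abstract does not give implementation details, but the core idea of a Potts attractor network can be illustrated with a minimal sketch: units take one of S discrete states, word patterns are stored with a Hebbian covariance rule, and recall proceeds by each unit adopting the state with the largest input field. All names, sizes, and the specific learning rule below are illustrative assumptions, not the authors' model (which also includes latching dynamics and separate semantic/syntactic subnetworks, omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): N Potts units, S states each,
# P stored word patterns.
N, S, P = 50, 4, 5
a = 1.0 / S  # mean activity per state

# Each "word" is a random assignment of one state to every unit.
patterns = rng.integers(0, S, size=(P, N))

# Hebbian covariance couplings J[i, j, s, t]: support for unit i being in
# state s when unit j is in state t.
J = np.zeros((N, N, S, S))
for mu in range(P):
    xi = (patterns[mu][:, None] == np.arange(S)).astype(float) - a  # (N, S)
    J += np.einsum('is,jt->ijst', xi, xi)
for i in range(N):
    J[i, i] = 0.0  # no self-coupling

def recall(cue, steps=10):
    """Zero-temperature asynchronous recall: each unit moves to the state
    receiving the largest field from the current configuration."""
    sigma = cue.copy()
    for _ in range(steps):
        for i in range(N):
            # Field on unit i for every candidate state s.
            h = J[i, np.arange(N), :, sigma].sum(axis=0)
            sigma[i] = int(np.argmax(h))
    return sigma

# Corrupt a stored pattern on 10 of the 50 units, then let the network
# relax; at this low load it should fall back into the stored attractor.
cue = patterns[0].copy()
noisy = rng.choice(N, size=10, replace=False)
cue[noisy] = rng.integers(0, S, size=10)
overlap = np.mean(recall(cue) == patterns[0])
```

With a vocabulary this small relative to the network size, the corrupted cue is pulled back to the stored word pattern, which is the attractor property the paper builds on.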
Original language: English
Title of host publication: Progress in Neural Processing
Subtitle of host publication: Proceedings of the 13th Neural Computation and Psychology Workshop
Publisher: World Scientific Publishing
Number of pages: 14
ISBN (Electronic): 978-981-4458-85-6
ISBN (Print): 978-981-4458-83-2
Publication status: Published - 2013
