To understand the brain mechanisms underlying language phenomena, and sentence construction in particular, a number of approaches based on artificial neural networks have been followed, in which words are encoded as distributed patterns of activity. Still, issues such as the distinct encoding of semantic versus syntactic features, word binding, and the learning processes through which words come to be encoded in this way have remained tough challenges. We explore a novel approach to these challenges, which focuses first on encoding the words of an artificial language of intermediate complexity (BLISS) into a Potts attractor network. Such a network can spontaneously latch between attractor states, offering a simplified cortical model of sentence production. The network stores the BLISS vocabulary, and hopefully its grammar, in its semantic and syntactic subnetworks. Function and content words are encoded differently on the two subnetworks, as suggested by neuropsychological findings. We propose that a next step might describe the self-organization of a comparable representation of words through a model of a learning process.
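The abstract does not give implementation details, but the core mechanism it names, a Potts attractor network, can be sketched in a few lines. The following is a minimal illustration, not the authors' model: units take one of `S` Potts states, random patterns are stored with a standard Potts Hebbian rule, and zero-temperature asynchronous dynamics retrieve a stored pattern from a corrupted cue. All sizes (`N`, `S`, `P`) and the corruption level are arbitrary choices for the demo, and the sketch omits the latching dynamics and the semantic/syntactic subnetwork split described above.

```python
import numpy as np

rng = np.random.default_rng(0)
N, S, P = 80, 5, 3                     # units, Potts states per unit, stored patterns

# P random Potts patterns: each unit of each pattern is a state in 0..S-1
patterns = rng.integers(0, S, size=(P, N))

# Hebbian Potts couplings:
#   J_ij^{kl} = (1/N) * sum_mu (delta(xi_i^mu, k) - 1/S) (delta(xi_j^mu, l) - 1/S)
delta = (patterns[:, :, None] == np.arange(S)).astype(float) - 1.0 / S   # (P, N, S)
J = np.einsum('mik,mjl->ijkl', delta, delta) / N                          # (N, N, S, S)
J[np.arange(N), np.arange(N)] = 0.0    # no self-coupling

def recall(cue, sweeps=5):
    """Zero-temperature asynchronous dynamics: each unit adopts the Potts
    state with the largest local field h_i^k = sum_j J_ij^{k, sigma_j}."""
    sigma = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            h = J[i, np.arange(N), :, sigma].sum(axis=0)   # field for each state k
            sigma[i] = int(np.argmax(h))
    return sigma

# Cue: pattern 0 with 15 of its 80 units scrambled
cue = patterns[0].copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] = rng.integers(0, S, size=15)
print(np.mean(recall(cue) == patterns[0]))   # overlap with the stored pattern
```

With only three stored patterns on eighty units, the network is far below capacity, so the dynamics pull the noisy cue back toward the stored attractor; latching behavior, as described in the abstract, would additionally require adaptation or noise terms that make attractors unstable over time.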
Title of host publication: Progress in Neural Processing
Subtitle of host publication: Proceedings of the 13th Neural Computation and Psychology Workshop
Publisher: World Scientific Publishing
Number of pages: 14
ISBN (Print): 978-981-4458-83-2
Publication status: Published - 2013