Controlling the Magnification Factor of Self-Organizing Feature Maps

H.-U. Bauer, R. Der, Michael Herrmann

Research output: Contribution to journal › Article › peer-review

Abstract

The magnification exponents μ occurring in adaptive map-formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value μ = 1, as well as from the values that optimize, e.g., the mean square distortion error (μ = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the "perceptual magnet" effect, which are based on topographic maps, require negative magnification exponents μ < 0. We present an extension of the self-organizing feature map algorithm, which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.
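The abstract describes scaling each unit's learning step size by a local estimate of the input density to control the map's magnification exponent. The following is a minimal one-dimensional sketch of that idea, not the paper's exact algorithm: the density estimate from neighbor spacing, the stimulus distribution, and all parameter values are illustrative assumptions.

```python
import numpy as np

def train_som_1d(n_units=50, n_steps=20000, m=0.0, eps0=0.1, sigma=2.0, seed=0):
    """1-D self-organizing map with locally adapted learning step sizes.

    m = 0 recovers the standard Kohonen update; a nonzero m rescales the
    winner's step size by a local input-density estimate raised to the
    power m, which is the mechanism the paper uses to shift the map's
    magnification exponent. The density estimate below (inverse local
    weight spacing) is an illustrative stand-in for the paper's estimator.
    """
    rng = np.random.default_rng(seed)
    w = np.sort(rng.random(n_units))          # codebook on the unit interval
    for _ in range(n_steps):
        # non-uniform stimulus density: P(x) ~ 2x on [0, 1] (assumed example)
        x = np.sqrt(rng.random())
        s = int(np.argmin(np.abs(w - x)))     # winner (best-matching unit)
        # crude local density estimate from the spacing of neighboring weights
        lo, hi = max(s - 1, 0), min(s + 1, n_units - 1)
        spacing = (w[hi] - w[lo]) / max(hi - lo, 1)
        p_hat = 1.0 / max(spacing, 1e-6)
        eps = eps0 * (p_hat / n_units) ** m   # adaptive local step size
        # Gaussian neighborhood update around the winner
        h = np.exp(-0.5 * ((np.arange(n_units) - s) / sigma) ** 2)
        w += eps * h * (x - w)
    return np.sort(w)
```

With m = 0 the map exhibits the ordinary Kohonen magnification, placing more units where the stimulus density is higher; varying m rescales the winner's update and thereby the density of codebook vectors, in the spirit of the single control parameter described in the abstract.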
Original language: English
Pages (from-to): 757-771
Number of pages: 15
Journal: Neural Computation
Issue number: 4
Publication status: Published - 15 May 1996


