
2. Self-Organizing Neural Networks

A Kohonen network can be used to study data from high-dimensional spaces by projecting them into a two-dimensional plane. The projection is made such that points that are adjacent in the high-dimensional space are also adjacent in the Kohonen map; this explains the full name of the method: self-organizing topological feature maps.
Figure 1 shows the architecture of a Kohonen network: each column in this two-dimensional arrangement represents a neuron, and each box in such a column represents one weight of that neuron [5]. Each neuron has as many weights, wji (i = 1, ..., m), as there are input data, xi, for the object that is being mapped into the network.

Fig. 1. Architecture of a Kohonen neural network. The input object X = (x1, x2, ..., xm) is mapped into an n x n arrangement of neurons, j, each having a weight vector Wj = (wj1, wj2, ..., wjm).
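
The following minimal sketch (in Python with NumPy) sets up such an architecture as an n x n grid of neurons, each holding an m-dimensional weight vector. The sizes and variable names are illustrative assumptions, not taken from the text.

```python
import numpy as np

n = 10   # side length of the quadratic map (n x n neurons); illustrative choice
m = 4    # number of input variables per object X = (x1, ..., xm); illustrative choice

rng = np.random.default_rng(0)

# weights[row, col] holds the weight vector Wj of the neuron at position (row, col)
weights = rng.random((n, n, m))
```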

An object, a sample s, will be mapped into that neuron, sc, whose weights are most similar to the input data (Eq. 1):

s_c \leftarrow \min_j \left[ \sum_{i=1}^{m} \left( x_{si} - w_{ji} \right)^2 \right] \qquad (1)

The weights of this winning neuron, sc, are then adjusted so as to make them even more similar to the input data. In fact, the weights of every neuron are adjusted, but to a degree that decreases with increasing distance from the winning neuron.
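
A minimal sketch of one such training step, assuming the weights array from the sketch above: the winning neuron is selected according to Eq. 1, and every neuron's weights are then pulled toward the input by an amount that shrinks with grid distance from the winner. The Gaussian neighborhood function, learning rate, and neighborhood width are common but illustrative choices, not prescribed by the text.

```python
import numpy as np

def train_step(weights, x, learning_rate=0.1, sigma=2.0):
    """One Kohonen training step for input vector x on an (n, n, m) weight array."""
    n = weights.shape[0]

    # Eq. 1: the winning neuron minimizes the squared Euclidean distance to x
    dist2 = ((weights - x) ** 2).sum(axis=-1)            # shape (n, n)
    c = np.unravel_index(np.argmin(dist2), dist2.shape)  # (row, col) of the winner

    # Distance of every neuron from the winner, measured on the 2-D grid
    rows, cols = np.indices((n, n))
    grid_dist2 = (rows - c[0]) ** 2 + (cols - c[1]) ** 2

    # Gaussian neighborhood: the correction decreases with distance from the winner
    h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))[..., None]

    # Move each weight vector toward the input, scaled by the neighborhood factor
    weights += learning_rate * h * (x - weights)
    return c
```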
There are various ways of utilizing the two-dimensional maps obtained by a Kohonen network (a sketch of these three uses follows the list):

1. Representation: the two-dimensional map can be taken as a representation, an encoding, of the higher-dimensional information.

2. Similarity perception: objects that are mapped into the same or closely adjacent neurons can be considered similar.

3. Cluster analysis: points that form a group in such a map, clearly distinguished from other points, can be taken as a class or category of objects having certain features in common.
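
A short sketch of these three uses, assuming the weights array and the train_step() function from the sketches above; the data here are random, purely illustrative stand-ins for real objects.

```python
import numpy as np

def map_object(weights, x):
    """Return the (row, col) of the winning neuron -- the 2-D encoding of object x."""
    dist2 = ((weights - x) ** 2).sum(axis=-1)
    return np.unravel_index(np.argmin(dist2), dist2.shape)

rng = np.random.default_rng(1)
data = rng.random((200, 4))          # 200 illustrative objects with m = 4 variables
weights = rng.random((10, 10, 4))    # 10 x 10 map

for _ in range(20):                  # a few passes over the data
    for x in data:
        train_step(weights, x)       # weight update from the sketch above

coords = [map_object(weights, x) for x in data]
# 1. Representation: 'coords' is the two-dimensional encoding of the objects.
# 2. Similarity perception: objects mapped into the same or adjacent neurons are similar.
# 3. Cluster analysis: occupied regions of the map separated by empty neurons
#    suggest classes of objects sharing common features.
```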


