A Kohonen network can be used to study data from high-dimensional spaces by projecting them onto a two-dimensional plane. The projection is such that points that are adjacent in the high-dimensional space are also adjacent in the Kohonen map; this topology-preserving property explains the full name of the method: self-organizing topological feature maps.

Figure 1 shows the architecture of a Kohonen network: each column in this two-dimensional arrangement represents a neuron, and each box in such a column represents one weight of that neuron [5]. Each neuron, *j*, has as many weights, *w _{ji}*, as there are input variables, *m*.

*Fig. 1. Architecture of a Kohonen neural network. The input object X = (x _{1}, x_{2}, ..., x_{m}) is mapped into an n x n arrangement of neurons, j, each having a weight vector W_{j} = (w_{j1}, w_{j2}, ..., w_{jm}).*

An object, a sample, *s*, is mapped into that neuron, *s_{c}*, whose weights are most similar to the input data (Eq. 1):

(1)   s_c ← min_j { Σ_{i=1}^{m} (x_{si} − w_{ji})² },   j = 1, 2, ..., n × n

The weights of this winning neuron, *s_{c}*, are then adjusted so as to make them even more similar to the input data. In fact, the weights of every neuron are adjusted, but to a degree that decreases with increasing distance from the winning neuron.
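The two training steps described above — selecting the winning neuron by the smallest squared distance (Eq. 1) and adjusting all weights with a distance-dependent factor — can be sketched in plain Python. The function names `winner` and `update`, and the Gaussian form of the neighborhood factor, are illustrative assumptions; the original description only requires that the adjustment decrease with distance from the winner.

```python
import math

def winner(grid, x):
    """Return (row, col) of the neuron whose weight vector is most
    similar to the input x, i.e. has the smallest summed squared
    difference (Eq. 1)."""
    best, best_d = None, float("inf")
    for r, row in enumerate(grid):
        for c, w in enumerate(row):
            d = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
            if d < best_d:
                best, best_d = (r, c), d
    return best

def update(grid, x, win, eta=0.5, sigma=1.0):
    """Move every neuron's weights toward x, scaled by a factor that
    decays with the neuron's distance from the winning neuron `win`.
    The Gaussian decay is one common choice (an assumption here)."""
    wr, wc = win
    for r, row in enumerate(grid):
        for c, w in enumerate(row):
            dist2 = (r - wr) ** 2 + (c - wc) ** 2
            h = eta * math.exp(-dist2 / (2 * sigma ** 2))  # neighborhood factor
            row[c] = [wi + h * (xi - wi) for wi, xi in zip(w, x)]
```

A full training run would repeat these two steps over all input objects for many epochs, typically shrinking `eta` and `sigma` as training proceeds.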

There are various ways for utilizing the two-dimensional maps obtained by a Kohonen network:

1. Representation: the two-dimensional map can be taken as a representation, an encoding, of the higher-dimensional information.

2. Similarity perception: objects that are mapped into the same or closely adjacent neurons can be considered similar.

3. Cluster analysis: points that form a group in such a map, clearly distinguished from other points, can be taken as a class or category of objects having certain features in common.
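Uses 2 and 3 above amount to assigning each object to its winning neuron and reading groups off the map. A minimal sketch of that assignment step is given below; the helper name `map_objects` and the input format (name, vector) pairs are assumptions for illustration, and the distance criterion is the same squared-difference measure as in Eq. 1.

```python
from collections import defaultdict

def map_objects(grid, objects):
    """Assign every (name, vector) object to the coordinates of its most
    similar neuron; objects sharing a neuron form candidate clusters."""
    def winner(x):
        # Winning neuron: smallest summed squared difference (Eq. 1).
        return min(
            ((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
            key=lambda rc: sum(
                (xi - wi) ** 2 for xi, wi in zip(x, grid[rc[0]][rc[1]])
            ),
        )
    clusters = defaultdict(list)
    for name, x in objects:
        clusters[winner(x)].append(name)
    return dict(clusters)
```

Objects falling on the same or neighboring coordinates would then be read as similar, and well-separated groups of occupied neurons as clusters.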