A back-propagation network, or more precisely a multilayer neural network trained with the back-propagation algorithm, also follows a supervised learning strategy. The general principle of back-propagation learning is to adjust the weights of a neuron according to the error of its output signal, with the goal of minimizing the overall output error. A back-propagation network can be used to develop quantitative models that predict a certain output.
A back-propagation network usually consists of an input layer, one or more hidden layers, and one output layer.
The weights of each layer are adapted during training. The final output, i.e., the output of the output layer, is compared with the target vector, which yields the error between output and target. This error signal is then propagated backward through the network, from the output layer to the input layer, and the weights of each layer are adapted accordingly. In this way, the network learns the correct classification for a set of inputs.
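The training cycle described above can be sketched in a minimal pure-Python example for a network with one hidden layer. The XOR task, the layer sizes, the learning rate, and the number of epochs are illustrative assumptions, not values from the text; the sketch only demonstrates the principle of comparing the output with a target and adapting the weights of each layer from the back-propagated error.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)

# Illustrative training set (XOR): input pattern -> target output.
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]

n_in, n_hid = 2, 3           # assumed layer sizes
# One weight per connection plus a bias weight per neuron.
w_hid = [[random.uniform(-1.0, 1.0) for _ in range(n_in + 1)]
         for _ in range(n_hid)]
w_out = [random.uniform(-1.0, 1.0) for _ in range(n_hid + 1)]

def forward(x):
    """Propagate an input vector through the hidden and output layers."""
    xi = x + [1.0]                               # append bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, xi))) for row in w_hid]
    hi = h + [1.0]                               # append bias input
    y = sigmoid(sum(w * v for w, v in zip(w_out, hi)))
    return xi, h, hi, y

def total_error():
    """Squared error between network output and target over all patterns."""
    return sum((t - forward(x)[3]) ** 2 for x, t in data)

err_before = total_error()

lr = 0.5                                         # assumed learning rate
for epoch in range(5000):
    for x, t in data:
        xi, h, hi, y = forward(x)
        # Error of the output neuron (sigmoid derivative is y * (1 - y)).
        d_out = (t - y) * y * (1.0 - y)
        # Back-propagate the error signal to the hidden layer.
        d_hid = [h[j] * (1.0 - h[j]) * d_out * w_out[j] for j in range(n_hid)]
        # Adapt the weights of each layer using the error signals.
        for j in range(n_hid + 1):
            w_out[j] += lr * d_out * hi[j]
        for j in range(n_hid):
            for i in range(n_in + 1):
                w_hid[j][i] += lr * d_hid[j] * xi[i]

err_after = total_error()
print(f"squared error: {err_before:.3f} -> {err_after:.3f}")
```

After training, the total squared error between the output and the target vector has decreased, which is exactly the quantity the weight adaptation is designed to minimize.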