
3. Historical Perspective

The dramatic increase in interest in neural networks does much to disguise the fact that people have been developing them for nearly 50 years [4]. As early as the 1940s, McCulloch and Pitts [5][6] developed a model of the neuron as a logical threshold element with two possible states. They were forced to leave open the question of how neurons learn, a question to which Hebb (1949) suggested an answer [7]. The Hebb learning rule states that the strength of a synapse changes in proportion to the activity of the neurons on either side of it, that is, to the presynaptic and the postsynaptic activity.
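In the notation that later became customary, this rule is usually written as a weight-update formula (a minimal sketch; the learning-rate factor η is a later formalization and not part of Hebb's original verbal statement):

Δw_ij = η · x_i · y_j

where x_i is the activity of the presynaptic neuron, y_j the activity of the postsynaptic neuron, w_ij the strength of the synapse connecting them, and η a small positive learning rate.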
Rosenblatt [8] organized a number of such neurons into what is essentially a one-layered network (Fig. 7) and named this arrangement a perceptron.
The original enthusiasm for the development of models of biological nervous systems began to wane in the sixties for various reasons. The results with the networks of that time were hardly relevant to solving real-world problems and thus not very encouraging. With the advent of the computer, interest shifted more and more to problem-solving by directly programmed methods. The buzz-word invented for this, "artificial intelligence", shows only too clearly what was expected of this technology, although it was clear from the outset that the method chosen had very little to do with the processing of information as performed by the human brain.
It is not surprising, therefore, that research into neural networks suffered a heavy blow from none other than Marvin Minsky, one of the main proponents of artificial intelligence. In 1969, Minsky, together with Papert [9], published a very severe but probably justified criticism of the network models of the time. They showed in their theoretical study that perceptrons, at least as they were conceived at that time, offer only limited possibilities. Moreover, they speculated that extending the architecture of perceptrons to more layers would not bring about any significant improvement in results. As a consequence of this criticism from so influential a figure as Minsky, research funding for the modeling of biological nervous systems became virtually unavailable.
In the years that followed, very little work was done on neural network models; nevertheless, some important advances were made even during this period. The work of Albus [10], Amari [11], Grossberg [12], Kohonen [13], von der Malsburg [14], and Widrow and Hoff [15] deserves particular mention.
A decisive new impetus came in 1982 from a physicist, Hopfield [16], who was able to show that network models of binary neurons correspond formally to spin systems and can therefore be treated with the methods already developed for such systems.
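This correspondence is most easily seen in the energy function of such a network (given here in its standard form, with symbols chosen for illustration): for binary neurons with states s_i ∈ {−1, +1} and symmetric connection weights w_ij, the network energy

E = −(1/2) Σ_{i≠j} w_ij · s_i · s_j

has the same form as the Hamiltonian of an Ising spin system, with the neuron states playing the role of the spins and the connection weights that of the exchange couplings, so that methods from statistical physics can be carried over directly.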
The greatest boost for the application of neural networks then came with the publication of the "back-propagation" algorithm for learning in multilayered models, introduced by Rumelhart, Hinton, and Williams [17][18]. Even though the back-propagation algorithm had been suggested earlier [19], credit must go to the research group on "Parallel Distributed Processing" [20] for bringing it to the attention of the wider public.
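In essence, back-propagation adjusts every weight in the network in the direction of steepest descent of the output error; in the customary notation (a sketch, not the authors' own formulation), each weight is changed by Δw = −η · ∂E/∂w, where E is the error at the output layer and η a learning rate, and the derivatives needed for the hidden layers are obtained by propagating the error signal backwards through the network with the chain rule.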
Despite all the successes in the development of models for neural information processing, we must clearly acknowledge that we are still very far from understanding how the human brain works. The capabilities of artificial neural networks are still very rudimentary in comparison with those of the biological networks they are supposed to emulate. Nevertheless, even these elementary models of neural networks have opened up new ways of processing information. It is precisely to the possibilities they offer, especially in the area of chemistry, that we devote ourselves in this article.


