Preface to the First Edition
This volume is inspired by two great questions: “How does the brain work?” and “How can we build intelligent machines?” It provides no simple, single answer to either question because no single answer, simple or otherwise, exists. However, in hundreds of articles it charts the immense progress made in recent years in answering many related, but far more specific, questions.
The term neural networks has been used for a century or more to describe the networks of biological neurons that constitute the nervous systems of animals, whether invertebrates or vertebrates. Since the 1940s, and especially since the 1980s, the term has been used for a technology of parallel computation in which the computing elements are “artificial neurons” loosely modeled on simple properties of biological neurons, usually with some adaptive capability to change the strengths of connections between the neurons.
Brain theory is centered on “computational neuroscience,” the use of computational techniques to model biological neural networks, but also includes attempts to understand the brain and its function through a variety of theoretical constructs and computer analogies. In fact, as the following pages reveal, much of brain theory is not about neural networks per se, but focuses on structural and functional “networks” whose units are at scales both coarser and finer than that of the neuron. Computer scientists, engineers, and physicists have analyzed and applied artificial neural networks inspired by the adaptive, parallel computing style of the brain, but this Handbook will also sample non-neural approaches to the design and analysis of “intelligent” machines. In between the biologists and the technologists are the connectionists. They use artificial neural networks in psychology and linguistics and make related contributions to artificial intelligence, using neuron-like units which interact “in the style of the brain” at a more abstract level than that of individual biological neurons.
Many texts have described limited aspects of one subfield or another of brain theory and neural networks, but no truly comprehensive overview is available. The aim of this Handbook is to fill that gap, presenting the entire range of the following topics: detailed models of single neurons; analysis of a wide variety of neurobiological systems; “connectionist” studies; mathematical analyses of abstract neural networks; and technological applications of adaptive, artificial neural networks and related methodologies. The excitement, and the frustration, of these topics is that they span such a broad range of disciplines, including mathematics, statistical physics and chemistry, neurology and neurobiology, and computer science and electrical engineering, as well as cognitive psychology, artificial intelligence, and philosophy. Much effort, therefore, has gone into making the book accessible to readers with varied backgrounds (an undergraduate education in one of the above areas, for example, or the frequent reading of related articles at the level of the Scientific American) while still providing a clear view of much of the recent specialized research.
The heart of the book comes in Part III, in which the breadth of brain theory and neural networks is sampled in 266 articles, presented in alphabetical order by title. Each article meets the following requirements:
1. It is authoritative within its own subfield, yet accessible to students and experts in a wide range of other fields.
2. It is comprehensive, yet short enough that its concepts can be acquired in a single sitting.
3. It includes a list of references, limited to 15, to give the reader a well-defined and selective list of places to go to initiate further study.
4. It is as self-contained as possible, while providing cross-references to allow readers to explore particular issues of related interest.
Despite the fourth requirement, some articles are more self-contained than others. Some articles can be read with almost no prior knowledge; some can be read with a rather general knowledge of a few key concepts; others require fairly detailed understanding of material covered in other articles. For example, many articles on applications will make sense only if one understands the “backpropagation” technique for training artificial neural networks; and a number of studies of neuronal function will make sense only if one has at least some idea of the Hodgkin-Huxley equations. Whenever appropriate, therefore, the articles include advice on background articles.
Parts I and II of the book provide a more general approach to helping readers orient themselves. Part I: Background presents a perspective on the “landscape” of brain theory and neural networks, including an exposition of the key concepts for viewing neural networks as dynamic, adaptive systems. Part II: Road Maps then provides an entrée into the many articles of Part III, with “road maps” for 23 different themes. The “Meta-Map,” which introduces Part II, groups these themes under eight general headings which, in and of themselves, give some sense of the sweep of the Handbook:
Connectionism: Psychology, Linguistics, and Artificial Intelligence
Dynamics, Self-Organization, and Cooperativity
Learning in Artificial Neural Networks
Applications and Implementations
Biological Neurons and Networks
Sensory Systems
Plasticity in Development and Learning
Motor Control
A more detailed view of the structure of the book is provided in the introductory section “How to Use this Book.” The aim is to ensure that readers will not only turn to the book to get good brief reviews of topics in their own specialty, but also will find many invitations to browse widely—finding parallels amongst different subfields, or simply enjoying the discovery of interesting topics far from familiar territory.
Acknowledgments
My foremost acknowledgment is to Prue Arbib, who served as Editorial Assistant during the long and arduous process of eliciting and assembling the many, many contributions to Part III; we both thank Paulina Tagle for her help with our work. The initial plan for the book was drawn up in 1991, and it benefited from the advice of a number of friends, especially George Adelman, who shared his experience as Editor of the Encyclopedia of Neuroscience. Refinement of the plan and the choice of publishers occupied the first few months of 1992, and I thank Fiona Stevens of The MIT Press for her support of the project from that time onward.
As can be imagined, the plan for a book like this has developed through a time-consuming process of constraint satisfaction. The first steps were to draw up a list of about 20 topic areas (similar to, but not identical with, the 23 areas surveyed in Part II), to populate these areas with a preliminary list of over 100 articles and possible authors, and to recruit the first members of the Editorial Advisory Board to help expand the list of articles and focus on the search for authors. A very satisfying number of authors invited in the first round accepted my invitation, and many of these added their voices to the Editorial Advisory Board in suggesting further topics and authors for the Handbook.
I was delighted, stimulated, and informed as I read the first drafts of the articles; but I have also been grateful for the fine spirit of cooperation with which the authors have responded to editorial comments and reviews. The resulting articles not only are authoritative and accessible in themselves, but also have been revised to match the overall style of the Handbook and to meet the needs of a broad readership. With this I express my sincere thanks to the editorial advisors, the authors, and the hundreds of reviewers who so constructively contributed to the final polishing of the articles that now appear in Part III; to Doug Gordon and the copy editors and typesetters who transformed the diverse styles of the manuscripts into the style of the Handbook; and to the graduate students who helped so much with the proofreading.
Finally, I want to record a debt that did not reach my conscious awareness until well into the editing of this book. It is to Hiram Haydn, who for many years was editor of The American Scholar, which is published for general circulation by Phi Beta Kappa. In 1971 or so, Phi Beta Kappa conducted a competition to find authors to receive grants for books to be written, if memory serves aright, for the Bicentennial of the United States. I submitted an entry. Although I was not successful, Mr. Haydn, who had been a member of the jury, wrote to express his appreciation of that entry, and to invite me to write an article for the Scholar. What stays in my mind from the ensuing correspondence was the sympathetic way in which he helped me articulate the connections that were at best implicit in my draft, and find the right voice in which to “speak” with the readers of a publication so different from the usual scientific journal. I now realize that it is his example I have tried to follow as I have worked with these hundreds of authors in the quest to see the subject of brain theory and neural networks whole, and to share it with readers of diverse interests and backgrounds.
Michael A. Arbib
Los Angeles and La Jolla
January 1995