MIT CogNet: The Brain Sciences Connection, from The MIT Press

Selected Title Details  
Mar 1995
ISBN 0262510812
672 pp.
367 illus.
An Introduction to Neural Networks
James A. Anderson

An Introduction to Neural Networks falls into a new ecological niche for texts. Based on notes that have been class-tested for more than a decade, it is aimed at cognitive science and neuroscience students who need to understand brain function in terms of computational modeling, and at engineers who want to go beyond formal algorithms to applications and computing strategies. It is the only current text to approach networks from a broad neuroscience and cognitive science perspective, with an emphasis on the biology and psychology behind the assumptions of the models, as well as on what the models might be used for. It describes the mathematical and computational tools needed and provides an account of the author's own ideas.

Students learn how to teach arithmetic to a neural network and get a short course on linear associative memory and adaptive maps. They are introduced to the author's brain-state-in-a-box (BSB) model and are provided with some of the neurobiological background necessary for a firm grasp of the general subject.
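To make these ideas concrete, here is a minimal NumPy sketch (not code from the book) of the two models just mentioned: a linear associator trained by Hebbian outer-product learning, and the autoassociative BSB feedback dynamics. The dimensionality, the number of stored patterns, and the feedback constant are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                    # vector dimensionality (illustrative)

    # Random unit-length input/output pattern pairs (f_i, g_i).
    f = [v / np.linalg.norm(v) for v in rng.standard_normal((3, n))]
    g = [v / np.linalg.norm(v) for v in rng.standard_normal((3, n))]

    # Linear associator: the weight matrix is a sum of Hebbian outer products.
    W = sum(np.outer(gi, fi) for gi, fi in zip(g, f))

    # Recall: a stored input reproduces its paired output, up to crosstalk
    # from the other stored patterns (exact if the f_i are orthogonal).
    print(np.corrcoef(W @ f[0], g[0])[0, 1])  # close to 1.0

    # BSB: autoassociative feedback with activity clipped to the [-1, 1]
    # hypercube, so a noisy state is driven into a corner of the "box".
    A = sum(np.outer(fi, fi) for fi in f)
    x = f[0] + 0.1 * rng.standard_normal(n)   # noisy cue
    for _ in range(20):
        x = np.clip(x + 0.3 * (A @ x), -1.0, 1.0)

In the BSB picture the corners of the hypercube act as stable attractor states, which is what makes the model useful as a simple nonlinear autoassociative memory.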

The field now known as neural networks has split in recent years into two major groups, mirrored in the texts currently available: engineers primarily interested in practical applications of the new adaptive, parallel computing technology, and cognitive scientists and neuroscientists interested in scientific applications. As the gap between these two groups widens, Anderson notes that the academics have tended to drift into irrelevant, often excessively abstract research, while the engineers have lost contact with the source of ideas in the field. Neuroscience, he points out, provides a rich and valuable source of ideas about data representation, and setting up the data representation is the major part of neural network programming. Both cognitive science and neuroscience give insights into how this can be done effectively: cognitive science suggests what to compute, and neuroscience suggests how to compute it.

The programs and documentation that accompany the book are available online.

Table of Contents
 Introduction
 Acknowledgements
1 Properties of Single Neurons
2 Synaptic Integration and Neuron Models
3 Essential Vector Operations
4 Lateral Inhibition and Sensory Processing
5 Simple Matrix Operations
6 The Linear Associator: Background and Foundations
7 The Linear Associator: Simulations
8 Early Network Models: The Perceptron
9 Gradient Descent Algorithms
10 Representation of Information
11 Applications of Simple Associators: Concept Formation and Object Motion
12 Energy and Neural Networks: Hopfield Networks and Boltzmann Machines
13 Nearest Neighbor Models
14 Adaptive Maps
15 The BSB Model: A Simple Nonlinear Autoassociative Neural Network
16 Associative Computation
17 Teaching Arithmetic to a Neural Network
 Afterword
 Index
 
 


© 2010 The MIT Press