Neural Smithing

Supervised Learning in Feedforward Artificial Neural Networks
Overview

Artificial neural networks are nonlinear mapping systems whose structure is loosely based on principles observed in the nervous systems of humans and animals. The basic idea is that massive systems of simple units linked together in appropriate ways can generate many complex and interesting behaviors. This book focuses on the subset of feedforward artificial neural networks called multilayer perceptrons (MLPs). These are the most widely used neural networks, with applications as diverse as finance (forecasting), manufacturing (process control), and science (speech and image recognition).
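As a concrete illustration of the kind of network the book studies, the sketch below (not taken from the book) implements the forward pass of a small MLP: a layer of sigmoid hidden units feeding a linear output layer. The layer sizes, random weights, and function names here are illustrative assumptions, not the book's own code.

    # Minimal sketch of an MLP forward pass: simple units (weighted sums
    # passed through a nonlinearity) linked layer to layer.
    import numpy as np

    def sigmoid(a):
        # Logistic squashing function applied elementwise.
        return 1.0 / (1.0 + np.exp(-a))

    def mlp_forward(x, W1, b1, W2, b2):
        # Single hidden layer: input -> sigmoid hidden units -> linear output.
        h = sigmoid(W1 @ x + b1)   # hidden-layer activations
        y = W2 @ h + b2            # linear output layer
        return y

    rng = np.random.default_rng(0)
    x = rng.standard_normal(3)                          # 3 inputs
    W1, b1 = rng.standard_normal((5, 3)), np.zeros(5)   # 5 hidden units
    W2, b2 = rng.standard_normal((2, 5)), np.zeros(2)   # 2 outputs
    print(mlp_forward(x, W1, b1, W2, b2))

Training such a network, i.e., choosing the weights W1, b1, W2, b2 from examples, is the subject of the chapters on supervised learning and back-propagation listed below.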

This book presents an extensive and practical overview of almost every aspect of MLP methodology, progressing from an initial discussion of what MLPs are and how they might be used to an in-depth examination of technical factors affecting performance. The book can be used as a tool kit by readers interested in applying networks to specific problems, yet it also presents theory and references outlining the last ten years of MLP research.

Table of Contents

  Preface
  1. Introduction
  2. Supervised Learning
  3. Single-Layer Networks
  4. MLP Representational Capabilities
  5. Back-Propagation
  6. Learning Rate and Momentum
  7. Weight-Initialization Techniques
  8. The Error Surface
  9. Faster Variations of Back-Propagation
  10. Classical Optimization Techniques
  11. Genetic Algorithms and Neural Networks
  12. Constructive Methods
  13. Pruning Algorithms
  14. Factors Influencing Generalization
  15. Generalization Prediction and Assessment
  16. Heuristics for Improving Generalization
  17. Effects of Training with Noisy Inputs
  A. Linear Regression
  B. Principal Components Analysis
  C. Jitter Calculations
  D. Sigmoid-like Nonlinear Functions
  References
  Index