Neural Computation

July 1, 1997, Vol. 9, No. 5, Pages 1109-1126
(doi: 10.1162/neco.1997.9.5.1109)
© 1997 Massachusetts Institute of Technology
The Faulty Behavior of Feedforward Neural Networks with Hard-Limiting Activation Function
With the progress in hardware implementation of artificial neural networks, the ability to analyze their faulty behavior has become increasingly important to their diagnosis, repair, reconfiguration, and reliable application. This article studies the behavior of feedforward neural networks with a hard-limiting activation function under stuck-at faults. It is shown that stuck-at-M faults degrade the network's performance more than mixed stuck-at faults, which in turn degrade it more than stuck-at-0 faults. Furthermore, for the same percentage of faulty interconnections, the network's fault tolerance decreases as its size increases. The results of our analysis are validated by Monte Carlo simulations.
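The fault model described above can be illustrated with a small Monte Carlo experiment. The sketch below is not the authors' code; it is a minimal illustration under assumed conditions (random Gaussian weights, bipolar inputs, a single layer with a sign activation, and M = 1): a fraction of the interconnections is forced to a stuck value, and the fraction of output bits that flip relative to the fault-free network is averaged over random trials. All function names and parameters are illustrative.

```python
import numpy as np

def hard_limit(x):
    # Hard-limiting (sign) activation: outputs +1 or -1.
    return np.where(x >= 0, 1.0, -1.0)

def inject_stuck_at(W, frac, mode, M=1.0, rng=None):
    # Force a fraction `frac` of the interconnections (weights) to a stuck value.
    # mode: "zero" -> stuck-at-0; "M" -> stuck-at-(+/-M); "mixed" -> half of each.
    rng = rng or np.random.default_rng()
    Wf = W.copy()
    flat = Wf.ravel()
    idx = rng.choice(flat.size, size=int(frac * flat.size), replace=False)
    if mode == "zero":
        flat[idx] = 0.0
    elif mode == "M":
        flat[idx] = rng.choice([-M, M], size=idx.size)
    else:  # mixed stuck-at faults
        half = idx.size // 2
        flat[idx[:half]] = 0.0
        flat[idx[half:]] = rng.choice([-M, M], size=idx.size - half)
    return Wf

def monte_carlo_error(n_in=32, n_out=8, frac=0.2, mode="zero",
                      trials=200, seed=0):
    # Average fraction of output bits that disagree with the fault-free
    # network, over random weights, bipolar inputs, and fault locations.
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        W = rng.standard_normal((n_out, n_in))
        x = rng.choice([-1.0, 1.0], size=n_in)
        y_ok = hard_limit(W @ x)
        y_faulty = hard_limit(inject_stuck_at(W, frac, mode, rng=rng) @ x)
        errs.append(np.mean(y_ok != y_faulty))
    return float(np.mean(errs))
```

For example, comparing `monte_carlo_error(mode="zero")`, `monte_carlo_error(mode="mixed")`, and `monte_carlo_error(mode="M")` at the same fault percentage gives an empirical view of the ordering the article establishes analytically; raising `n_in` at a fixed `frac` likewise probes the size effect.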