Neural Computation

July 1, 1998, Vol. 10, No. 5, Pages 1067-1069
(doi: 10.1162/089976698300017340)
© 1998 Massachusetts Institute of Technology
Correction to Proof That Recurrent Neural Networks Can Robustly Recognize Only Regular Languages
Our earlier article, “The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction” (Casey, 1996), contains a corollary showing that a finite-dimensional recurrent neural network whose state variables are subject to noise can, when it performs an algorithmic computation, perform only a finite-state machine computation. The proof of the corollary is technically incorrect: the proof of the theorem on which the corollary rests was more general than the statement of the theorem, and it was the content of the proof, rather than the statement itself, that was used to establish the corollary. In this note, we state the theorem in the necessary generality and then give a corrected proof of the corollary.
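To illustrate the corollary's content (this sketch is not from the article itself), consider a one-unit recurrent network that recognizes the regular language "binary strings with an even number of 1s." The network below is a hypothetical minimal example: its high-gain tanh dynamics create two attractors, h ≈ +1 (even parity) and h ≈ −1 (odd parity), which play the role of the two states of the corresponding finite state machine. Bounded state noise cannot push the trajectory across the basin boundary at h = 0, so the computation is robust — exactly the regime in which the corollary says only finite-state computation is possible. The gain and noise levels are illustrative choices, not values from the article.

```python
import math
import random

def noisy_parity_rnn(bits, gain=5.0, noise=0.05, seed=0):
    """Recognize 'even number of 1s' with a one-unit recurrent net
    whose state is perturbed by bounded noise at every step.

    The attractors h ~ +1 (even) and h ~ -1 (odd) act as the states
    of a two-state finite state machine; noise of magnitude < the
    attractor basin half-width cannot flip the sign of h.
    """
    rng = random.Random(seed)
    h = 1.0  # start in the 'even parity' attractor
    for x in bits:
        # Input x=1 flips the sign of the state, x=0 preserves it;
        # the high-gain tanh re-saturates the state toward +/-1.
        h = math.tanh(gain * h * (1.0 - 2.0 * x))
        h += rng.uniform(-noise, noise)  # bounded state noise
    return h > 0.0  # True iff the number of 1s is even

# The noisy network agrees with the exact finite state machine
# on random strings, despite the perturbed state trajectory.
rng = random.Random(42)
for trial in range(200):
    bits = [rng.randint(0, 1) for _ in range(rng.randint(1, 30))]
    assert noisy_parity_rnn(bits, seed=trial) == (sum(bits) % 2 == 0)
```

The quantization into attractor basins is the mechanism the corollary formalizes: once noise is present, only state distinctions separated by such basins can be maintained over arbitrarily long inputs, which is why the robust computations are exactly the finite-state (regular-language) ones.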