
Neural Computation

January 1996, Vol. 8, No. 1, Pages 41-43
(doi: 10.1162/neco.1996.8.1.41)
© 1995 Massachusetts Institute of Technology
A Short Proof of the Posterior Probability Property of Classifier Neural Networks
Abstract

It is now well known that neural classifiers can learn to compute the a posteriori probabilities of classes in input space. This note offers a shorter proof than the traditional ones: only one class needs to be considered, and a straightforward minimization of the error function yields the main result. The method extends to any differentiable error function. We also present a simple visual proof of the same theorem, which stresses that the network must be perfectly trained and have sufficient plasticity.
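
For orientation, here is a hedged sketch of the standard one-class argument (the note's own proof may differ in detail). Let $t \in \{0,1\}$ indicate membership in the single class $C$ under consideration, and let $f(x)$ denote the network output trained under mean squared error. Conditioning on $x$ makes the cross term vanish, so the error decomposes as

\[
\mathbb{E}\!\left[(f(x)-t)^2\right]
= \mathbb{E}\!\left[\bigl(f(x)-\mathbb{E}[t \mid x]\bigr)^2\right]
+ \mathbb{E}\!\left[\operatorname{Var}(t \mid x)\right],
\]

and only the first term depends on $f$. Hence, if the network has enough plasticity to represent the minimizer and training drives the error to its minimum, the output converges to

\[
f^{*}(x) = \mathbb{E}[t \mid x] = P(C \mid x),
\]

the posterior probability of the class. This is where the two conditions stressed by the visual proof enter: without sufficient plasticity the network cannot realize $f^{*}$, and without perfect training it does not reach the minimum at which the identification holds.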