
Neural Computation

May 1995, Vol. 7, No. 3, Pages 549-564
(doi: 10.1162/neco.1995.7.3.549)
© 1995 Massachusetts Institute of Technology
Learning Population Codes by Minimizing Description Length

The minimum description length (MDL) principle can be used to train the hidden units of a neural network to extract a representation that is cheap to describe but nonetheless allows the input to be reconstructed accurately. We show how MDL can be used to develop highly redundant population codes. Each hidden unit has a location in a low-dimensional implicit space. If the hidden unit activities form a bump of a standard shape in this space, they can be cheaply encoded by the center of this bump. So the weights from the input units to the hidden units in an autoencoder are trained to make the activities form a standard bump. The coordinates of the hidden units in the implicit space are also learned, thus allowing flexibility, as the network develops a discontinuous topography when presented with different input classes.
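The core idea can be illustrated with a small sketch. Below is a hypothetical NumPy example (not the paper's exact objective or training procedure): hidden units are placed at learned coordinates in a 1-D implicit space, an activity pattern is summarized cheaply by the center of a best-fit standard Gaussian bump, and the description cost is the deviation of the actual activities from that fitted bump. All names, the bump width, and the weighted-mean fitting rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 20
# Hypothetical learned coordinates of hidden units in a 1-D implicit space.
coords = np.linspace(0.0, 1.0, n_hidden)

sigma = 0.1  # assumed fixed width of the "standard bump"

def bump(center):
    """Standard Gaussian bump of activity over the implicit space."""
    return np.exp(-0.5 * ((coords - center) / sigma) ** 2)

def fit_bump_center(activities):
    """Cheap code for an activity pattern: its activity-weighted mean position."""
    return np.sum(coords * activities) / np.sum(activities)

def description_cost(activities):
    """Squared deviation of the activities from the fitted standard bump.
    Patterns that already form a standard bump are nearly free to describe."""
    center = fit_bump_center(activities)
    return np.sum((activities - bump(center)) ** 2)

# A bump-shaped pattern is cheap to encode by its center alone...
true_bump = bump(0.4)
# ...while an arbitrary activity pattern is expensive.
noise = rng.random(n_hidden)
print(description_cost(true_bump), description_cost(noise))
```

In the paper's setting, the autoencoder weights (and the units' implicit coordinates) would be trained so that real inputs produce low-cost, bump-shaped hidden activities; this sketch only shows why a bump is cheap to describe.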