In this paper we analyze the statistical and convergence properties of Kohonen's self-organizing map of arbitrary dimension. Each feature in the map is treated as the sum of a number of random variables. We extend the Central Limit Theorem to a particular case and apply it to prove that, during learning, the feature space tends to a set of Gaussian-distributed stochastic processes. These processes eventually converge in the mean-square sense to the probabilistic centers of the input subsets, forming a quantization mapping with minimum mean-squared distortion, either globally or locally. We also show that the influence of the initial state on the value of the feature map diminishes as training progresses.
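For concreteness, the following is a minimal sketch of the setting described above, assuming the standard Kohonen update rule; since the abstract gives no formulas, the symbols $\alpha(t)$ (learning rate), $h_{ci}(t)$ (neighborhood function), and $V_i$ (the input subset mapped to node $i$) are illustrative notation, not necessarily the paper's. Under this reading, each weight vector evolves as

\[
\mathbf{w}_i(t+1) = \mathbf{w}_i(t) + \alpha(t)\,h_{ci}(t)\,\big[\mathbf{x}(t) - \mathbf{w}_i(t)\big],
\qquad c = \arg\min_j \|\mathbf{x}(t) - \mathbf{w}_j(t)\|,
\]

and the mean-square convergence claim can be read as each weight vector approaching the probabilistic center (conditional expectation) of the inputs it wins:

\[
\lim_{t \to \infty} E\big[\,\|\mathbf{w}_i(t) - \hat{\mathbf{x}}_i\|^2\,\big] = 0,
\qquad \hat{\mathbf{x}}_i = E[\mathbf{x} \mid \mathbf{x} \in V_i].
\]

Since each $\mathbf{w}_i(t)$ is built up as a weighted sum of the random inputs $\mathbf{x}(t)$, a Central Limit Theorem argument of the kind the abstract describes would make its distribution approximately Gaussian around $\hat{\mathbf{x}}_i$ during learning.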