
Neural Computation

October 1, 1997, Vol. 9, No. 7, Pages 1457-1482.
(doi: 10.1162/neco.1997.9.7.1457)
© 1997 Massachusetts Institute of Technology
Adaptive Online Learning Algorithms for Blind Separation: Maximum Entropy and Minimum Mutual Information

There are two major approaches to blind separation: maximum entropy (ME) and minimum mutual information (MMI). Both can be implemented by stochastic gradient descent to obtain the demixing matrix. The mutual information is a proper contrast function for blind separation, whereas the entropy is not. To justify the ME approach, we first elucidate the relation between ME and MMI by calculating the first derivative of the entropy, proving that mean subtraction is necessary when applying ME and that, at the solution points determined by the MMI, the ME update does not move the demixing matrix in directions that increase cross-talk.
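A minimal sketch of the ordinary-gradient ME update with the mean subtraction discussed above. The tanh score function and all parameter values here are illustrative stand-ins (tanh is a common choice for super-Gaussian sources), not the nonlinearity derived in the paper:

```python
import numpy as np

def me_gradient_step(W, x_batch, eta=0.05):
    """One ordinary-gradient step of the maximum-entropy (ME) update:
    dW = eta * ((W^T)^{-1} - E[phi(y) x^T]).

    Mean subtraction is applied to the inputs before computing the
    outputs.  phi(y) = tanh(y) is an illustrative score function,
    not the one derived in the paper.
    """
    x = x_batch - x_batch.mean(axis=1, keepdims=True)  # mean subtraction
    y = W @ x
    T = x.shape[1]
    grad = np.linalg.inv(W.T) - (np.tanh(y) @ x.T) / T
    return W + eta * grad

# Toy usage: two super-Gaussian (Laplace) sources, a fixed mixing matrix.
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix
x = A @ s
W = np.eye(2)
for _ in range(200):
    W = me_gradient_step(W, x)
```

Note that each step requires inverting the transposed demixing matrix; the natural-gradient form discussed next removes this inversion.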

Second, the natural gradient is introduced in place of the ordinary gradient to obtain efficient algorithms, because the parameter space of demixing matrices is a Riemannian space. The mutual information is estimated by applying the Gram-Charlier expansion to approximate the probability density functions of the outputs.
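A sketch of the natural-gradient update, which takes the equivariant form dW = eta * (I - E[phi(y) y^T]) W and needs no matrix inversion. The cubic nonlinearity phi(y) = y**3 is an illustrative choice suited to sub-Gaussian sources, not the Gram-Charlier-derived activation of the paper:

```python
import numpy as np

def natural_gradient_step(W, x_batch, eta=0.02):
    """Natural-gradient update: dW = eta * (I - E[phi(y) y^T]) W.

    Right-multiplying the ordinary gradient by W^T W yields this
    form, avoiding the inverse of W^T.  phi(y) = y**3 is an
    illustrative cubic nonlinearity, not the paper's.
    """
    y = W @ x_batch
    T = x_batch.shape[1]
    phi_y = y ** 3
    I = np.eye(W.shape[0])
    return W + eta * (I - (phi_y @ y.T) / T) @ W

# Toy usage: two unit-variance uniform (sub-Gaussian) sources.
rng = np.random.default_rng(1)
s = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, 5000))
A = np.array([[1.0, 0.5], [0.3, 1.0]])  # hypothetical mixing matrix
x = A @ s
W = np.eye(2)
for _ in range(1000):
    W = natural_gradient_step(W, x)
P = W @ A  # global system: approaches a scaled permutation at a solution
```

At a separating solution each row of the global matrix P has a single dominant entry, i.e., the outputs are scaled, permuted copies of the sources.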

Finally, we propose an efficient learning algorithm that incorporates an adaptive method for estimating the unknown cumulants. Computer simulations show that the convergence of the stochastic descent algorithms is improved by using the natural gradient and the adaptively estimated cumulants.
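In the spirit of the adaptive cumulant estimation described above, here is a sketch of an online tracker for the third and fourth cumulants of each output channel. The exponentially weighted recursion and the parameter `lam` are illustrative assumptions, not the paper's exact estimator; the tracked cumulants would then be used to shape the activation function:

```python
import numpy as np

class CumulantTracker:
    """Exponentially weighted online estimates of the 3rd cumulant and
    excess kurtosis (4th cumulant) of each output channel.

    Illustrative moving-average recursion, not the paper's exact
    estimator; assumes the outputs are roughly zero-mean and
    unit-variance.
    """

    def __init__(self, n, lam=0.02):
        self.lam = lam          # forgetting factor (assumed value)
        self.k3 = np.zeros(n)   # running 3rd-cumulant estimates
        self.k4 = np.zeros(n)   # running excess-kurtosis estimates

    def update(self, y_batch):
        m3 = (y_batch ** 3).mean(axis=1)
        m4 = (y_batch ** 4).mean(axis=1)
        self.k3 += self.lam * (m3 - self.k3)
        self.k4 += self.lam * ((m4 - 3.0) - self.k4)
        return self.k3, self.k4

# Toy usage: channel 0 is sub-Gaussian (uniform), channel 1 is
# super-Gaussian (Laplace); both unit variance.
rng = np.random.default_rng(2)
tracker = CumulantTracker(n=2)
for _ in range(500):
    batch = np.vstack([
        rng.uniform(-np.sqrt(3), np.sqrt(3), 200),
        rng.laplace(scale=1 / np.sqrt(2), size=200),
    ])
    k3, k4 = tracker.update(batch)
```

The sign of the tracked excess kurtosis distinguishes sub-Gaussian from super-Gaussian channels, which is the information an adaptive activation needs.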