Neural Computation

April 2010, Vol. 22, No. 4, Pages 998-1024
(doi: 10.1162/neco.2009.11-08-912)
© 2009 Massachusetts Institute of Technology
A Continuous Entropy Rate Estimator for Spike Trains Using a K-Means-Based Context Tree
Entropy rate quantifies the average rate at which a stochastic process generates information (Cover & Thomas, 2006). For decades, the temporal dynamics of spike trains generated by neurons have been studied as a stochastic process (Barbieri, Quirk, Frank, Wilson, & Brown, 2001; Brown, Frank, Tang, Quirk, & Wilson, 1998; Kass & Ventura, 2001; Metzner, Koch, Wessel, & Gabbiani, 1998; Zhang, Ginzburg, McNaughton, & Sejnowski, 1998). We propose here to estimate the entropy rate of a spike train from an inhomogeneous hidden Markov model of the spike intervals. The model is constructed by building a context tree structure to lay out the conditional probabilities of various subsequences of the spike train. For each state in the Markov chain, we assume a gamma distribution over the spike intervals, although any appropriate distribution may be employed as circumstances dictate. The entropy and its confidence intervals are calculated from bootstrap samples drawn from a large raw data sequence. The estimator was first tested on synthetic data generated by multiple-order Markov chains, and it always converged to the theoretical Shannon entropy rate (except in the case of a sixth-order model, where the calculations were terminated before convergence was reached). We also applied the method to experimental data and compared its performance with that of several other methods of entropy estimation.
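To make two of the abstract's ingredients concrete, the sketch below shows (a) the closed-form differential entropy of a gamma interspike-interval distribution and (b) a bootstrap confidence interval for an entropy estimate obtained by refitting the gamma model to resampled intervals. This is an illustrative simplification, not the paper's context-tree estimator: the function names, the method-of-moments fit, and the stdlib digamma approximation are all assumptions introduced here.

```python
import math
import random

def digamma(x):
    # Digamma psi(x) via the recurrence psi(x) = psi(x+1) - 1/x,
    # then an asymptotic series once x >= 6 (adequate for this sketch).
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0/12 - f * (1.0/120 - f / 252))

def gamma_entropy(k, theta):
    # Differential entropy (nats) of Gamma(shape=k, scale=theta):
    # h = k + ln(theta) + ln Gamma(k) + (1 - k) * psi(k)
    return k + math.log(theta) + math.lgamma(k) + (1 - k) * digamma(k)

def fit_gamma_moments(isis):
    # Method-of-moments fit: shape = mean^2 / var, scale = var / mean.
    m = sum(isis) / len(isis)
    v = sum((x - m) ** 2 for x in isis) / (len(isis) - 1)
    return m * m / v, v / m

def bootstrap_entropy(isis, n_boot=200, seed=0):
    # Resample the intervals with replacement, refit the gamma model,
    # and report the median entropy with a 95% percentile interval.
    rng = random.Random(seed)
    est = sorted(
        gamma_entropy(*fit_gamma_moments([rng.choice(isis) for _ in isis]))
        for _ in range(n_boot)
    )
    return est[n_boot // 2], (est[int(0.025 * n_boot)], est[int(0.975 * n_boot)])
```

As a check, intervals drawn from a known Gamma(2, 0.01) model give a bootstrap estimate close to the analytic entropy of roughly -3.03 nats; the paper's estimator additionally conditions these interval distributions on context-tree states rather than assuming a single renewal process.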