Abstract:
In this paper, we use mutual information (MI) to characterize
the distributions of phonetic and speaker/channel information over
a time-frequency space. The MI between the phonetic label and one,
two, or three features is estimated. Miller's bias formulas for
entropy and MI estimates are extended to include higher-order
terms. The MI for speaker/channel recognition is also estimated.
The results are complementary to those for phonetic classification.
Our results show how the phonetic information is spread locally,
and how the speaker/channel information is spread globally, in time
and frequency.
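The first-order Miller correction underlying the abstract can be illustrated as follows. This is a minimal Python sketch, assuming discrete (quantized) labels and features; the function names are illustrative and the higher-order correction terms derived in the paper are not reproduced here.

    import numpy as np

    def miller_madow_entropy(counts, n):
        """Plug-in entropy (in nats) with the first-order Miller bias correction.

        The plug-in estimate is low by roughly (K - 1) / (2N), where K is the
        number of occupied cells and N the number of samples, so that amount
        is added back.
        """
        counts = np.asarray(counts, dtype=float).ravel()
        p = counts[counts > 0] / n
        h_plugin = -np.sum(p * np.log(p))
        k = p.size                       # number of occupied cells
        return h_plugin + (k - 1) / (2.0 * n)

    def miller_madow_mi(x, y):
        """First-order bias-corrected estimate of I(X; Y) for discrete sequences.

        I(X; Y) = H(X) + H(Y) - H(X, Y), with each entropy estimated using
        the Miller correction above.
        """
        x = np.asarray(x)
        y = np.asarray(y)
        n = x.size
        _, xi = np.unique(x, return_inverse=True)
        _, yi = np.unique(y, return_inverse=True)
        joint = np.zeros((xi.max() + 1, yi.max() + 1))
        np.add.at(joint, (xi, yi), 1)    # joint contingency table
        hx = miller_madow_entropy(joint.sum(axis=1), n)
        hy = miller_madow_entropy(joint.sum(axis=0), n)
        hxy = miller_madow_entropy(joint, n)
        return hx + hy - hxy

    # Hypothetical usage: MI between a discrete label and a noisy quantized feature.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        labels = rng.integers(0, 10, size=5000)
        feature = (labels + rng.integers(0, 3, size=5000)) % 10
        print(miller_madow_mi(labels, feature))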