Abstract:
A neural model is described that uses oscillatory
correlation to segregate speech from interfering sound sources. The
core of the model is a two-layer neural oscillator network. A sound
stream is represented by a synchronized population of oscillators,
and different streams are represented by desynchronized oscillator
populations. The model has been evaluated using a corpus of speech
mixed with interfering sounds, and produces an improvement in
signal-to-noise ratio for every mixture.
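The oscillatory-correlation idea can be illustrated with a toy model. The sketch below is not the paper's two-layer relaxation-oscillator network; it uses a simple Kuramoto-style phase model (an assumption for illustration) in which oscillators belonging to the same "stream" are positively coupled and synchronize, while oscillators in different streams are pushed apart by weak negative coupling.

```python
import math

def simulate(n_per_group=3, k_within=2.0, k_between=-0.5,
             dt=0.01, steps=5000):
    """Toy phase-oscillator sketch of oscillatory correlation.

    Two groups of oscillators stand in for two sound streams.
    Positive within-group coupling synchronizes each group;
    negative between-group coupling desynchronizes the groups.
    """
    n = 2 * n_per_group
    group = [i // n_per_group for i in range(n)]   # stream label: 0 or 1
    theta = [0.1 * i for i in range(n)]            # distinct initial phases
    omega = [1.0] * n                              # identical natural frequencies
    for _ in range(steps):
        new_theta = []
        for i in range(n):
            coupling = 0.0
            for j in range(n):
                if j == i:
                    continue
                k = k_within if group[j] == group[i] else k_between
                coupling += k * math.sin(theta[j] - theta[i])
            new_theta.append(theta[i] + dt * (omega[i] + coupling / n))
        theta = new_theta
    return theta, group

def coherence(phases):
    """Phase coherence in [0, 1]: 1 = fully synchronized."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)
```

After integration, each group's internal coherence is close to 1 (a synchronized population, representing one stream), while the coherence over all oscillators together is low, because the two groups settle near antiphase (desynchronized populations, representing different streams).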
|