Abstract:
Diffusion networks are a natural extension of recurrent
neural networks in which the dynamics are probabilistic. In this
paper we derive the gradient of the log-likelihood of a path with
respect to the drift parameters for a diffusion network. This
gradient can be used to optimize a diffusion network in the
nonequilibrium regime for a wide variety of problems, including
reinforcement learning, filtering and prediction, signal detection,
and continuous path density estimation. An aspect of our work that is of
interest to computational neuroscience and hardware design is that the
resulting gradient is local in space and time, i.e., no time unfolding,
backpropagation of error signals, or Boltzmann phases are required.
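
For concreteness, here is a minimal sketch of the kind of path-likelihood gradient involved, assuming a scalar diffusion $dX_t = f(X_t;\theta)\,dt + \sigma\,dW_t$ with constant dispersion $\sigma$ (the symbols $f$, $\theta$, $\sigma$, and $T$ are illustrative notation, not necessarily the paper's). By Girsanov's theorem, the log-likelihood of a path relative to the driftless reference process is

\[
\log L(\theta) \;=\; \frac{1}{\sigma^{2}} \int_{0}^{T} f(X_t;\theta)\, dX_t \;-\; \frac{1}{2\sigma^{2}} \int_{0}^{T} f(X_t;\theta)^{2}\, dt,
\]

so its gradient with respect to the drift parameters is

\[
\frac{\partial}{\partial \theta} \log L(\theta) \;=\; \frac{1}{\sigma^{2}} \int_{0}^{T} \frac{\partial f(X_t;\theta)}{\partial \theta}\, \bigl( dX_t - f(X_t;\theta)\, dt \bigr).
\]

Each increment of this integral depends only on the current state and parameters, which illustrates the locality in space and time claimed above: the gradient can be accumulated online along the path, with no time unfolding or backward pass.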