Neural Computation

February 15, 1997, Vol. 9, No. 2, Pages 271-278
(doi: 10.1162/neco.1997.9.2.271)
© 1997 Massachusetts Institute of Technology
Using Expectation-Maximization for Reinforcement Learning
Abstract

We discuss Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent. We show circumstances under which applying the RPP is guaranteed to increase the mean return, even though it can make large changes in the values of the parameters. The proof is based on a mapping between the RPP and a form of the expectation-maximization procedure of Dempster, Laird, and Rubin (1977).
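
For readers who want the flavor of the result, here is a minimal sketch (mine, not from the paper) of one RPP batch update, assuming independent Bernoulli action units with firing probabilities p_i and nonnegative rewards: each probability is replaced by the reward-weighted mean of that unit's actions, p_i <- E[r a_i] / E[r], which is the EM-style averaging step the proof exploits. The function name rpp_update and the toy matching reward are hypothetical.

import numpy as np

def rpp_update(p, reward_fn, n_samples=1000, rng=None):
    # One batch update of the relative payoff procedure (RPP) for
    # independent Bernoulli action units with firing probabilities p[i].
    # Each probability becomes the reward-weighted mean of that unit's
    # actions, p_i <- E[r * a_i] / E[r]; this requires rewards r >= 0.
    rng = np.random.default_rng() if rng is None else rng
    actions = (rng.random((n_samples, p.size)) < p).astype(float)  # a ~ Bernoulli(p)
    rewards = np.array([reward_fn(a) for a in actions])            # r(a) >= 0
    return rewards @ actions / rewards.sum()                       # weighted mean action

# Toy usage: reward is the number of units matching a fixed target pattern.
target = np.array([1.0, 0.0, 1.0, 1.0])
reward = lambda a: float(np.sum(a == target))  # nonnegative by construction
p = np.full(4, 0.5)
for _ in range(25):
    p = rpp_update(p, reward)
print(np.round(p, 2))  # probabilities drift toward the target pattern

Note that this update can move each p_i arbitrarily far in a single step, unlike a small-step gradient method; that is exactly the large-change regime the abstract says the EM argument covers.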