
Neural Computation

June 2013, Vol. 25, No. 6, Pages 1512-1547
(doi: 10.1162/NECO_a_00452)
© 2013 Massachusetts Institute of Technology
Efficient Sample Reuse in Policy Gradients with Parameter-Based Exploration

The policy gradient approach is a flexible and powerful reinforcement learning method, particularly for problems with continuous actions such as robot control. A common challenge is reducing the variance of policy gradient estimates so that policy updates are reliable. In this letter, we combine the following three ideas to obtain a highly effective policy gradient method: (1) policy gradients with parameter-based exploration, a recently proposed policy search method with low variance of gradient estimates; (2) an importance sampling technique, which allows us to reuse previously gathered data in a consistent way; and (3) an optimal baseline, which minimizes the variance of gradient estimates while maintaining their unbiasedness. For the proposed method, we give a theoretical analysis of the variance of gradient estimates and demonstrate its usefulness through extensive experiments.
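To make the combination concrete, the following is a minimal NumPy sketch of the three ingredients named above, under simplifying assumptions not taken from the paper itself: the search distribution over policy parameters is an independent Gaussian with hyperparameters `mean` and `sigma`, the importance weights are the likelihood ratio between the current and the data-generating search distributions, and the baseline uses a variance-minimizing form analogous to the standard optimal constant baseline for likelihood-ratio gradients. Function and variable names are illustrative, not the authors'.

```python
import numpy as np

def gaussian_logpdf(theta, mean, sigma):
    """Log-density of an independent Gaussian search distribution."""
    return -0.5 * np.sum(((theta - mean) / sigma) ** 2
                         + np.log(2 * np.pi * sigma ** 2), axis=-1)

def pgpe_gradient(thetas, returns, mean, sigma, old_mean, old_sigma):
    """Importance-weighted PGPE-style gradient estimate w.r.t. `mean`.

    thetas  : (N, d) policy parameters sampled from the *old* distribution
    returns : (N,)   return observed for each sampled parameter vector
    """
    # (2) Importance weights: reuse samples drawn from the old distribution
    # in a consistent way by reweighting with the likelihood ratio.
    log_w = (gaussian_logpdf(thetas, mean, sigma)
             - gaussian_logpdf(thetas, old_mean, old_sigma))
    w = np.exp(log_w)
    # (1) Score function of the Gaussian search distribution w.r.t. its mean;
    # exploration happens in parameter space, so this replaces per-step
    # action noise.
    score = (thetas - mean) / sigma ** 2          # shape (N, d)
    g2 = np.sum(score ** 2, axis=1)               # squared score norms
    # (3) Variance-minimizing constant baseline (a simplification of the
    # paper's optimal baseline; subtracting it keeps the estimate unbiased).
    b = np.sum(w ** 2 * g2 * returns) / np.sum(w ** 2 * g2)
    return np.mean(w[:, None] * (returns - b)[:, None] * score, axis=0)

# Toy usage: quadratic return peaked at target = (2, 0); the estimated
# gradient should point from the current mean (origin) toward the target.
rng = np.random.default_rng(0)
mean = np.zeros(2)
sigma = np.ones(2)
thetas = mean + sigma * rng.standard_normal((5000, 2))
returns = -np.sum((thetas - np.array([2.0, 0.0])) ** 2, axis=1)
grad = pgpe_gradient(thetas, returns, mean, sigma, mean, sigma)
```

With on-distribution data the weights reduce to 1 and the estimator is a plain parameter-based likelihood-ratio gradient; the weights matter only when the search distribution has moved away from the one that generated the samples.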