Neural Computation

January 1994, Vol. 6, No. 1, Pages 147-160
(doi: 10.1162/neco.1994.6.1.147)
© 1994 Massachusetts Institute of Technology
Fast Exact Multiplication by the Hessian
Abstract

Just storing the Hessian $H$ (the matrix of second derivatives $\partial^2 E / \partial w_i \partial w_j$ of the error $E$ with respect to each pair of weights) of a large neural network is difficult. Since a common use of a large matrix like $H$ is to compute its product with various vectors, we derive a technique that directly calculates $Hv$, where $v$ is an arbitrary vector. To calculate $Hv$, we first define a differential operator $R_v\{f(w)\} = (\partial/\partial r)\, f(w + rv)\big|_{r=0}$, note that $R_v\{\nabla_w\} = Hv$ and $R_v\{w\} = v$, and then apply $R_v\{\cdot\}$ to the equations used to compute $\nabla_w$. The result is an exact and numerically stable procedure for computing $Hv$, which takes about as much computation, and is about as local, as a gradient evaluation. We then apply the technique to a one-pass gradient calculation algorithm (backpropagation), a relaxation gradient calculation algorithm (recurrent backpropagation), and two stochastic gradient calculation algorithms (Boltzmann machines and weight perturbation). Finally, we show that this technique can be used at the heart of many iterative techniques for computing various properties of $H$, obviating any need to calculate the full Hessian.
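As a rough illustration of the idea (not the paper's backpropagation derivation), the sketch below computes an exact Hessian-vector product by taking a forward-mode derivative of the gradient, which is one realization of the $R_v\{\cdot\}$ operator in modern automatic-differentiation terms. The error function, variable names, and the final power-iteration loop are all invented for this example; only the $Hv = (\partial/\partial r)\, \nabla E(w + rv)\big|_{r=0}$ identity comes from the abstract.

```python
import jax
import jax.numpy as jnp

# Stand-in error function E(w); any scalar, twice-differentiable
# function of the weights would do (this choice is illustrative only).
def error(w):
    return jnp.sum(jnp.tanh(w) ** 2) + 0.5 * jnp.dot(w, w)

def hvp(f, w, v):
    """Exact Hessian-vector product H v via the R-operator identity
    R_v{grad f} = d/dr grad f(w + r v) |_{r=0}, realized here as a
    forward-mode derivative (JVP) of the reverse-mode gradient.
    Costs roughly one extra gradient evaluation; H is never formed."""
    return jax.jvp(jax.grad(f), (w,), (v,))[1]

w = jnp.array([0.3, -1.2, 0.7])
v = jnp.array([1.0, 0.0, -2.0])

print(hvp(error, w, v))              # exact H v
print(jax.hessian(error)(w) @ v)     # agreement check against the explicit Hessian

# One of the "iterative techniques" the abstract alludes to: power iteration
# driven only by Hessian-vector products, estimating the largest-magnitude
# eigenvalue of H without ever materializing the matrix.
u = jnp.ones_like(w)
for _ in range(50):
    u = hvp(error, w, u)
    u = u / jnp.linalg.norm(u)
print(jnp.dot(u, hvp(error, w, u)))  # Rayleigh quotient ≈ extremal eigenvalue of H
```

The check against `jax.hessian` is only feasible here because the toy problem has three weights; for a large network the whole point of the technique is that the explicit Hessian is never needed.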