Abstract:
The generalization ability of a neural network can sometimes
be improved dramatically by regularization. To analyze the
improvement, one needs more refined results than the asymptotic
distribution of the weight vector. We study the simple case of
one-dimensional linear regression, where we derive expansions for
the optimal regularization parameter and the ensuing improvement.
It is possible to construct examples where it is best to use no
regularization.
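
The setting can be made concrete with a short numerical sketch. For one-dimensional linear regression with a ridge penalty lambda*w^2, the regularized estimate has the closed form w_hat = sum(x_i y_i) / (sum(x_i^2) + lambda), and the effect of lambda on generalization error can be checked by grid search. The penalty form, the sample sizes, and all names below are illustrative assumptions, not the paper's derivation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup (assumed, not from the paper):
    # n noisy observations y = w_true * x + noise.
    n, w_true, noise_std = 20, 0.3, 1.0
    x = rng.normal(size=n)
    y = w_true * x + noise_std * rng.normal(size=n)

    def ridge_weight(x, y, lam):
        # Closed-form 1-D ridge estimate minimizing
        # sum((y - w*x)^2) + lam * w^2.
        return x @ y / (x @ x + lam)

    # Estimate test error on a large fresh draw for a grid of
    # lambda values; lam = 0 recovers ordinary least squares.
    lams = np.linspace(0.0, 10.0, 101)
    x_test = rng.normal(size=10_000)
    y_test = w_true * x_test + noise_std * rng.normal(size=10_000)
    errs = [np.mean((y_test - ridge_weight(x, y, lam) * x_test) ** 2)
            for lam in lams]
    print("empirically best lambda:", lams[int(np.argmin(errs))])

Depending on the draw and the true weight, the minimizing lambda can be strictly positive or can sit at zero, matching the abstract's remark that no regularization is sometimes best.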