Abstract:
In many scientific and engineering applications, detecting and
understanding differences between two groups of examples can be
reduced to the classical problem of training a classifier that
labels new examples while making as few mistakes as possible.
In the traditional classification setting, the resulting
classifier is rarely analyzed in terms of the properties of the
input data captured by the discriminative model. However, such
analysis is crucial if we want to understand and visualize the
detected differences. We propose an approach to interpreting the
statistical model in the original feature space that allows us to
reason about the model in terms of the relevant changes to the
input vectors. For each point in the input space, we define a
discriminative direction to be the direction that moves the point
towards the other class while introducing as little irrelevant
change as possible with respect to the classifier function. We
derive the discriminative direction for kernel-based classifiers,
demonstrate the technique on several examples, and briefly discuss
its use in statistical shape analysis, the application that
originally motivated this work.
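
To make the idea concrete, the following is a minimal sketch, not the
paper's implementation: for one common kernel classifier, an SVM with
an RBF kernel, a natural approximation of the discriminative direction
at a point x is the gradient of the decision function
f(x) = sum_i alpha_i y_i k(x_i, x) + b, since moving along the gradient
changes the classifier output most directly. The function name and the
toy data below are illustrative assumptions, not from the paper.

    import numpy as np
    from sklearn.svm import SVC

    def discriminative_direction_rbf(clf, x, gamma):
        """Unit-length gradient of an RBF-SVM decision function at x.

        With k(s, x) = exp(-gamma * ||s - x||^2), the gradient of
        f(x) = sum_i c_i k(s_i, x) + b, where c_i = alpha_i * y_i, is
        grad f(x) = 2 * gamma * sum_i c_i k(s_i, x) * (s_i - x).
        Moving along +grad increases f (toward the positive class);
        use -grad to move a positive-class point toward the other class.
        """
        sv = clf.support_vectors_       # support vectors s_i, shape (n_sv, d)
        c = clf.dual_coef_.ravel()      # signed dual coefficients alpha_i * y_i
        diff = sv - x                   # (n_sv, d)
        k = np.exp(-gamma * np.sum(diff ** 2, axis=1))
        grad = 2.0 * gamma * (c * k) @ diff
        return grad / np.linalg.norm(grad)

    # Toy demo: two Gaussian blobs in 2-D.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
                   rng.normal(1.0, 0.5, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)
    clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
    print(discriminative_direction_rbf(clf, X[0], gamma=1.0))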