Abstract:
In this paper, we provide a description and evaluation of a
new method for extracting face motion data from standard video
sequences, which takes advantage of important constraints on face
structure and motion. Face motions are measured from video
recordings by deforming the surface of an ellipsoidal mesh fit to
the face. The mesh is initialized manually for a reference frame
and then projected automatically onto each new video frame.
Location changes (between successive frames) for each mesh node
are determined adaptively within a well-defined area around each
mesh node, using a two-dimensional cross-correlation analysis of
a two-dimensional wavelet transform of the frames. Position
parameters are propagated in three steps, from a coarser mesh
and a correspondingly higher scale of the wavelet transform to
the final fine mesh and a lower scale of the wavelet transform. The
resulting location changes of the mesh nodes represent the face
motion.
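
The following is a minimal illustrative sketch, not the authors' implementation, of the core per-node step summarized above: estimating the displacement of a patch around one mesh node between two consecutive frames by 2D cross-correlation of wavelet-transformed patches. All names (node_displacement, frame_prev, frame_next, half_win) and the choice of a single-level Haar transform are assumptions made for illustration.

    # Illustrative sketch only; names and parameter choices are hypothetical.
    import numpy as np
    import pywt                                # PyWavelets: 2D wavelet transform
    from scipy.signal import correlate2d       # 2D cross-correlation

    def node_displacement(frame_prev, frame_next, node_xy, half_win=16, wavelet="haar"):
        """Estimate the (dy, dx) shift, in pixels, of the patch centred on node_xy."""
        y, x = node_xy
        sl = np.s_[y - half_win:y + half_win, x - half_win:x + half_win]
        patch_prev, patch_next = frame_prev[sl], frame_next[sl]

        # Single-level 2D wavelet transform; keep the approximation band,
        # which plays the role of one (coarser) analysis scale.
        approx_prev, _ = pywt.dwt2(patch_prev, wavelet)
        approx_next, _ = pywt.dwt2(patch_next, wavelet)

        # Remove the mean so the correlation peak reflects structure, not brightness.
        approx_prev = approx_prev - approx_prev.mean()
        approx_next = approx_next - approx_next.mean()

        # Full 2D cross-correlation; the peak location gives the shift at this scale.
        corr = correlate2d(approx_next, approx_prev, mode="full")
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        centre = np.array(approx_prev.shape) - 1   # zero-lag position
        dy, dx = np.array(peak) - centre

        # The dwt2 approximation band is subsampled by 2, so rescale to pixel units.
        return 2 * dy, 2 * dx

In the coarse-to-fine scheme described above, such per-node estimates obtained at a higher wavelet scale on a coarser mesh would seed the search windows for the next, finer level.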