A Generative Model for Visual Cue Combination

Zhiyong Yang and Richard S. Zemel

Abstract:
We develop a hierarchical generative model to study cue combination. The model consists of four layers: global shape parameters at the top, followed by global cue-specific shape parameters, then local cue-specific parameters, and finally an intensity image at the bottom. Shape parameters are inferred from images by inverting this model. Inference produces a probability distribution at each level; using distributions rather than single estimates of the underlying variables at each stage preserves information about how valid each cue is for the given image, which makes the model more powerful than standard linear combination schemes and other existing combination models. The parameters of the model are determined using data from psychophysical experiments in which subjects estimate surface shape from intensity images containing texture information, shading information, or both cues, with varying degrees of noise. The model provides a good fit to our data on the combination of these two cues, and also gives a natural account of many aspects of cue combination.
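The layered structure described in the abstract can be made concrete with a small sketch. The NumPy code below draws one sample from a toy four-layer hierarchy of the same shape (global shape parameters, then global cue-specific parameters, then local cue-specific parameters, then an intensity image). The dimensionalities, Gaussian noise terms, and the linear rendering step are illustrative assumptions, not the model fitted in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_from_hierarchy(n_locations=16, noise=0.1):
        """Draw one sample from a toy four-layer generative hierarchy.

        All distributional choices here are illustrative stand-ins; only the
        layer structure follows the abstract.
        """
        # Layer 1: global shape parameters (here, a 4-dimensional Gaussian).
        z_global = rng.normal(0.0, 1.0, size=4)

        # Layer 2: global cue-specific shape parameters, one set per cue
        # (texture and shading), generated as noisy copies of the global layer.
        z_texture = z_global + rng.normal(0.0, 0.2, size=4)
        z_shading = z_global + rng.normal(0.0, 0.2, size=4)

        # Layer 3: local cue-specific parameters at each image location,
        # produced by (illustrative) linear maps of the cue-specific globals
        # plus local noise.
        map_texture = rng.normal(size=(n_locations, 4))
        map_shading = rng.normal(size=(n_locations, 4))
        local_texture = map_texture @ z_texture + rng.normal(0.0, noise, size=n_locations)
        local_shading = map_shading @ z_shading + rng.normal(0.0, noise, size=n_locations)

        # Layer 4: intensity image, rendered from both cues' local parameters
        # with additive pixel noise (a toy linear renderer).
        image = 0.5 * (local_texture + local_shading) + rng.normal(0.0, noise, size=n_locations)

        return z_global, (z_texture, z_shading), (local_texture, local_shading), image

    z, cue_globals, cue_locals, image = sample_from_hierarchy()

The inference step described in the abstract corresponds to inverting this sampling process: given an image, compute or approximate a posterior distribution over each layer rather than a single point estimate, so that the reliability each cue carries for that particular image is preserved. The sketch above covers only the generative direction.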

 
 

