Supervised and Unsupervised Learning of Independent Sources Using Symmetric Diffusion Networks

David A. Medler and James L. McClelland

Abstract:
Symmetric Diffusion Networks (SDNs) are a class of networks based on the principles of continuous, stochastic, adaptive, and interactive processing. SDNs also embody Bayesian principles; that is, they develop internal representations based on the statistics of the environment. Although these networks have many desirable properties, they are often difficult to train, especially on large data sets. Here, we systematically impose neurophysiologically inspired constraints on the networks, limiting the activation dynamics and constraining the network connectivity; specifically, we impose the architectural constraints of limited connectivity, excitatory connections between layers, and inhibitory connections within layers. Networks were trained on a "dual component task," in which two independent sources are juxtaposed to create a single pattern, under either a supervised or an unsupervised paradigm. Regardless of the training paradigm, each added constraint helped the networks learn the data set faster. Furthermore, analysis of the internal representations in the most constrained networks showed that they learned to separate the independent sources and to recover the repeating patterns within each source. It thus appears that these constraints make SDNs easier to train and increase their tendency to represent the underlying statistics of the environment.
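As a rough illustration (not the authors' implementation), the sketch below shows how the three architectural constraints might look in code: a random mask limits connectivity, between-layer weights are clamped non-negative (excitatory), within-layer weights are clamped non-positive (inhibitory), and the hidden units follow bounded, noisy, diffusion-style dynamics. All sizes, rates, and the 50% fan-in are hypothetical choices, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 16, 8

# Limited connectivity: each hidden unit receives input from only a random
# subset of visible units (the 50% fan-in here is a hypothetical choice).
mask = rng.random((n_hidden, n_visible)) < 0.5

# Excitatory between-layer weights: random magnitudes with the sign clamped
# positive, zeroed wherever the connectivity mask forbids a connection.
W_vh = np.abs(rng.normal(0.0, 0.1, (n_hidden, n_visible))) * mask

# Inhibitory within-layer weights: sign clamped negative, no self-connections.
W_hh = -np.abs(rng.normal(0.0, 0.1, (n_hidden, n_hidden)))
np.fill_diagonal(W_hh, 0.0)

def hidden_step(a, v, dt=0.05, tau=1.0, sigma=0.1):
    """One Euler-Maruyama step of noisy, bounded hidden-unit dynamics."""
    net = W_vh @ v + W_hh @ a
    drift = (np.tanh(net) - a) / tau                   # squashing limits activity
    noise = sigma * np.sqrt(dt) * rng.standard_normal(a.shape)
    return np.clip(a + dt * drift + noise, -1.0, 1.0)  # hard activation bounds

# Settle the hidden layer on one input vector.
v = rng.integers(0, 2, n_visible).astype(float)
a = np.zeros(n_hidden)
for _ in range(200):
    a = hidden_step(a, v)

Clamping weight signs after each update, rather than reparameterizing, is just one simple way to realize the excitatory/inhibitory split; the abstract does not specify the mechanism.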

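The "dual component task" itself is easy to picture: two independent sources each emit one of a small set of repeating patterns, and the two draws are juxtaposed into a single input vector. Below is a minimal, hypothetical generator; the pattern counts and widths are assumptions, not the paper's actual stimulus set.

import numpy as np

rng = np.random.default_rng(1)

# Two independent sources, each a small set of repeating binary patterns.
# Four patterns of width 8 per source is an illustrative choice only.
source_a = rng.integers(0, 2, (4, 8))
source_b = rng.integers(0, 2, (4, 8))

def sample_pattern():
    """Draw one pattern from each source independently and juxtapose them."""
    a = source_a[rng.integers(len(source_a))]
    b = source_b[rng.integers(len(source_b))]
    return np.concatenate([a, b])  # a single 16-unit training vector

training_batch = np.stack([sample_pattern() for _ in range(32)])

Because the two sources are sampled independently, a network that captures the statistics of this environment should come to represent each source separately, which is what the abstract reports for the most constrained networks.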
 
 


© 2010 The MIT Press