Abstract:
Neural network models indicate that changing the number of
available connections alters the complexity of the
representations that are learned. Ordinarily, complexity increases
with the number of connections, insofar as each synapse
constitutes a degree of freedom to be adjusted during
training. Hence, the widely held view that biological
synapses are pruned during development would predict that
representational complexity decreases in the course of maturation.
We present here a self-organizing network in which pruning of
synapses leads to the opposite effect, i.e. an increase in
complexity. We show that such an increase is consistent with
psychological data, even though it may conflict with orthodox
psychological theories that model development through selective
mechanisms. Using learning theory (an advanced branch of neural
network theory) to analyse our model, we also show that,
irrespective of architecture, complexity changes are most
beneficial when complexity rises during training. Hence we propose
that rising complexity constitutes a constraint on the types of
synapses that are pruned in biological brains. This hypothesis
prompts a re-evaluation of the existing literature on cortical
development, as synaptic proliferation and elimination can both be
part of the very same learning strategy.
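
As a concrete illustration of the conventional view restated above (and not of the model introduced in this paper), the following Python sketch treats each unpruned synapse as one adjustable degree of freedom and counts how many remain after magnitude-based pruning. The layer sizes, the 50% pruning fraction, and the helper name count_free_parameters are hypothetical choices made only for this example.

    import numpy as np

    def count_free_parameters(masks):
        # Count unpruned synapses: a crude proxy for complexity, treating each
        # remaining connection as one adjustable degree of freedom.
        return sum(int(mask.sum()) for mask in masks)

    rng = np.random.default_rng(0)

    # Toy two-layer network: 10 -> 20 -> 5 units, fully connected to start with.
    shapes = [(10, 20), (20, 5)]
    weights = [rng.normal(size=shape) for shape in shapes]
    masks = [np.ones(shape, dtype=bool) for shape in shapes]

    print("degrees of freedom before pruning:", count_free_parameters(masks))  # 300

    # Magnitude-based pruning: drop the 50% smallest-magnitude synapses per layer.
    for w, mask in zip(weights, masks):
        threshold = np.quantile(np.abs(w), 0.5)
        mask &= np.abs(w) >= threshold

    print("degrees of freedom after pruning: ", count_free_parameters(masks))  # ~150

On this simple parameter-counting measure, pruning can only decrease complexity; the abstract's claim is that a representation-level notion of complexity can nevertheless increase when the appropriate synapses are eliminated.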