January 2016, Vol. 28, No. 1, Pages 118-186
Neural associative networks are a promising computational paradigm for both modeling neural circuits of the brain and implementing associative memory and Hebbian cell assemblies in parallel VLSI or nanoscale hardware. Previous work has extensively investigated synaptic learning in linear models of the Hopfield type and simple nonlinear models of the Steinbuch/Willshaw type. Optimized Hopfield networks of size n can store a large number of about n²/k memories of size k (or associations between them) but require real-valued synapses, which are expensive to implement and can store at most C = 1/(2 ln 2) ≈ 0.72 bits per synapse. Willshaw networks can store a much smaller number of about n²/k² memories but get along with much cheaper binary synapses. Here I present a learning model employing synapses with discrete synaptic weights. For optimal discretization parameters, this model can store, up to a factor ζ close to one, the same number of memories as for optimized Hopfield-type learning—for example, ζ ≈ 0.64 for binary synapses, ζ ≈ 0.88 for 2-bit (four-state) synapses, ζ ≈ 0.96 for 3-bit (8-state) synapses, and ζ ≈ 0.99 for 4-bit (16-state) synapses. The model also provides the theoretical framework to determine optimal discretization parameters for computer implementations or brainlike parallel hardware including structural plasticity. In particular, as recently shown for the Willshaw network, it is possible to store 1 bit per computer bit and up to about log n bits per nonsilent synapse, whereas the absolute number of stored memories can be much larger than for the Willshaw model.
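To make the Steinbuch/Willshaw model referenced above concrete, the following is a minimal NumPy sketch of its binary clipped-Hebbian learning rule and one-step threshold retrieval. It illustrates the classical model with binary synapses only, not the discrete multi-state learning model introduced in this article; the pattern sizes n, k and memory count M are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100   # neurons per layer
k = 7     # active units per sparse memory pattern
M = 50    # number of hetero-associations u -> v to store

def sparse_pattern(n, k, rng):
    """Random binary pattern with exactly k ones."""
    x = np.zeros(n, dtype=np.uint8)
    x[rng.choice(n, size=k, replace=False)] = 1
    return x

us = [sparse_pattern(n, k, rng) for _ in range(M)]
vs = [sparse_pattern(n, k, rng) for _ in range(M)]

# Clipped Hebbian storage: a binary synapse w_ij is set to 1 if its
# pre- and postsynaptic units were ever coactive in any stored pair.
W = np.zeros((n, n), dtype=np.uint8)
for u, v in zip(us, vs):
    W |= np.outer(v, u)  # outer product v_i * u_j, clipped to {0, 1}

def retrieve(W, u):
    """One-step retrieval: threshold the dendritic potentials at the
    number of active input units (the Willshaw threshold)."""
    potentials = W @ u
    return (potentials >= u.sum()).astype(np.uint8)

# A stored input recalls all k bits of its associated output; at high
# memory load, additional spurious ones may appear in the result.
v_hat = retrieve(W, us[0])
print("target bits recovered:", int((v_hat & vs[0]).sum()), "of", k)
```

Because every synapse between a stored input/output pair is set to 1 during learning, the stored target units always reach the threshold; retrieval errors in this model are exclusively spurious extra ones, whose probability grows with the memory load M.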