Learning by stimulation avoidance scales to large neural networks

Conference Date
2017
Location
Lyon, France
ISBN
978-0-262-34633-7
Date Published
September 2017
Volume
14
Pages
275-282
DOI
10.7551/ecal_a_048
© 2017 Massachusetts Institute of Technology. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
Abstract

Spiking neural networks with spike-timing-dependent plasticity (STDP) can spontaneously learn to avoid external stimulation. This principle, called "Learning by Stimulation Avoidance" (LSA), can be used to reproduce learning experiments on cultured biological neural networks. LSA has promising potential, but its applications and limitations have not been studied extensively. This paper focuses on the scalability of LSA to large networks and shows that LSA works well in small networks (100 neurons) and can be scaled up to networks of approximately 3,000 neurons.
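
The core loop behind LSA can be illustrated with a small simulation: an external current is injected into part of the network until a designated output group fires, at which point the stimulation is removed; STDP then tends to reinforce the pathways whose activity removed the stimulus. The sketch below is a minimal toy model of this idea, not the paper's implementation; the leaky integrate-and-fire dynamics, the choice of input/output groups, and all parameter values are illustrative assumptions.

# Minimal toy sketch of Learning by Stimulation Avoidance (LSA), not the
# paper's implementation. A leaky integrate-and-fire network receives an
# external current on an "input" group until any neuron of an "output"
# group fires, which ends the stimulation. Pair-based STDP then tends to
# strengthen input->output pathways, so stimulation episodes shorten over
# trials. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N = 100                       # network size (the paper's "small network" scale)
N_IN, N_OUT = 10, 10          # stimulated group / group whose spike ends stimulation
V_TH, V_RESET = 1.0, 0.0      # LIF threshold and reset potential (arbitrary units)
DECAY = np.exp(-1.0 / 20.0)   # membrane leak per 1 ms step
TRACE_DECAY = np.exp(-1.0 / 20.0)  # STDP trace decay per step
A_PLUS, A_MINUS = 0.005, 0.006     # potentiation / depression amplitudes
W_MAX = 0.5

# Random excitatory weights; W[i, j] is the synapse from neuron i to neuron j.
W = rng.uniform(0.0, 0.02, size=(N, N))
np.fill_diagonal(W, 0.0)


def run_trial(W, stim_current=1.2, max_steps=500):
    """Stimulate the input group until an output neuron fires.

    Returns the number of steps the stimulation stayed on.
    W is modified in place by the STDP rule."""
    v = np.zeros(N)
    spikes = np.zeros(N, dtype=bool)
    pre_trace = np.zeros(N)    # trace of presynaptic spikes
    post_trace = np.zeros(N)   # trace of postsynaptic spikes
    for t in range(1, max_steps + 1):
        i_ext = np.zeros(N)
        i_ext[:N_IN] = stim_current          # external stimulation of the input group
        i_syn = W.T @ spikes                 # synaptic drive from last step's spikes
        v = DECAY * v + i_syn + i_ext
        spikes = v >= V_TH
        v[spikes] = V_RESET

        # Pair-based STDP: pre-before-post potentiates, post-before-pre depresses.
        pre_trace *= TRACE_DECAY
        post_trace *= TRACE_DECAY
        W += A_PLUS * np.outer(pre_trace, spikes)    # potentiate synapses onto neurons spiking now
        W -= A_MINUS * np.outer(spikes, post_trace)  # depress synapses from neurons spiking now
        np.clip(W, 0.0, W_MAX, out=W)
        np.fill_diagonal(W, 0.0)
        pre_trace[spikes] += 1.0
        post_trace[spikes] += 1.0

        if spikes[-N_OUT:].any():            # output group (last N_OUT neurons) fired
            return t                         # stimulation is removed, trial ends
    return max_steps


for trial in range(15):
    print(f"trial {trial:2d}: stimulation lasted {run_trial(W)} steps")

In this toy setting the number of steps the stimulation stays on should typically decrease over trials, which is the qualitative signature of LSA; the paper's question is whether this behaviour persists as the network size grows.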