## Neural Computation

The CA3 region of the hippocampus is a recurrent neural network that is essential for the storage and replay of sequences of patterns that represent behavioral events. Here we present a theoretical framework to calculate a sparsely connected network's capacity to store such sequences. As in CA3, only a limited subset of neurons in the network is active at any one time, pattern retrieval is subject to error, and the resources for plasticity are limited. Our analysis combines an analytical mean-field approach, stochastic dynamics, and cellular simulations of a discrete-time McCulloch-Pitts network with binary synapses. To maximize the number of sequences that can be stored in the network, we concurrently optimize the number of active neurons, that is, the pattern size, and the firing threshold. We find that for one-step associations (i.e., minimal sequences), the optimal pattern size is inversely proportional to the mean connectivity *c*, whereas the optimal firing threshold is independent of the connectivity. If the number of synapses per neuron is fixed, the maximum number *P* of stored sequences in a sufficiently large, nonmodular network is independent of its number *N* of cells. On the other hand, if the number of synapses scales as the network size to the power 3/2, the number of sequences *P* is proportional to *N*. In other words, sequential memory is scalable. Furthermore, we find that there is an optimal ratio *r* between silent and nonsilent synapses at which the storage capacity α = *P*/[*c*(1 + *r*)*N*] assumes a maximum. For long sequences, the capacity of sequential memory is about one order of magnitude below the capacity for minimal sequences, but otherwise behaves similarly to the case of minimal sequences. In a biologically inspired scenario, the information content per synapse is far below the theoretical optimum, suggesting that the brain trades off error tolerance against information content in encoding sequential memories.
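The network model and the capacity measure α = *P*/[*c*(1 + *r*)*N*] can be illustrated with a toy simulation of one-step associations. This is a minimal sketch only: all parameter values (`N`, `M`, connectivity, threshold) and the Hebbian learning rule are illustrative assumptions, not the optimized quantities derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not the paper's optimized values).
N = 1000        # number of neurons
M = 50          # pattern size: number of neurons active per pattern
c_struct = 0.1  # structural connectivity: probability a synapse exists
theta = 3       # firing threshold (chosen ad hoc for this toy setting)
P = 10          # number of stored one-step associations

# Sparse binary patterns: each is a random set of M active neurons.
patterns = [rng.choice(N, size=M, replace=False) for _ in range(P + 1)]

# Structural connectivity: entry (i, j) is True if synapse j -> i exists.
C = rng.random((N, N)) < c_struct

# Binary Hebbian learning: an existing synapse j -> i is switched on
# (made nonsilent) if j is active in pattern mu and i is active in the
# successor pattern mu + 1.
W = np.zeros((N, N), dtype=bool)
for mu in range(P):
    pre, post = patterns[mu], patterns[mu + 1]
    W[np.ix_(post, pre)] = True
W &= C  # only structurally present synapses can be potentiated

def step(x):
    """One discrete-time McCulloch-Pitts update with binary synapses."""
    h = W.astype(np.int32) @ x.astype(np.int32)  # summed binary input
    return h >= theta

# Retrieval of a minimal sequence: present pattern 0, expect pattern 1.
x = np.zeros(N, dtype=bool)
x[patterns[0]] = True
recalled = set(np.flatnonzero(step(x)))
overlap = len(recalled & set(patterns[1])) / M  # fraction correctly recalled

# Storage capacity in the abstract's sense, alpha = P / [c (1 + r) N],
# with c the nonsilent connectivity and r the silent-to-nonsilent ratio.
nonsilent = W.sum()
silent = (C & ~W).sum()
r = silent / nonsilent
c_nonsilent = nonsilent / (N * N)
alpha = P / (c_nonsilent * (1 + r) * N)
```

Note that *c*(1 + *r*)*N* equals the total (silent plus nonsilent) number of synapses per neuron, so α counts stored transitions per available synapse per neuron; in this toy setting retrieval is noisy, matching the abstract's point that pattern retrieval is subject to error.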