https://arxiv.org/abs/1602.08323v2 Deep Spiking Networks

We introduce an algorithm to do backpropagation on a spiking network. Our network is “spiking” in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a “spike”) when this potential crosses a threshold and the neuron is reset. Neurons only update their states when receiving signals from other neurons. Total computation of the network thus scales with the number of spikes caused by an input rather than network size. We show that the spiking Multi-Layer Perceptron behaves identically, during both prediction and training, to a conventional deep network of rectified-linear units, in the limiting case where we run the spiking network for a long time. We apply this architecture to a conventional classification problem (MNIST) and achieve performance very close to that of a conventional Multi-Layer Perceptron with the same architecture. Our network is a natural architecture for learning based on streaming event-based data, and is a stepping stone towards using spiking neural networks to learn efficiently on streaming data.
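
A minimal NumPy sketch of the accumulate-and-fire mechanism described in the abstract: a neuron integrates weighted input into a potential and emits a spike (resetting by subtracting the threshold) whenever the potential crosses it. The rate-coded input, the reset-by-subtraction rule, and all names/parameters here are assumptions for illustration, not the paper's exact formulation; the point is only to show why the long-run spike counts approximate a rectified-linear response.

```python
import numpy as np

def spiking_layer(input_rates, weights, n_steps=1000, threshold=1.0):
    """Accumulate weighted input into a potential; emit a spike and
    subtract the threshold each time the potential crosses it.
    Over many steps the spike counts approximate
    relu(input_rates @ weights) / threshold (the long-run limit
    claimed in the abstract)."""
    n_out = weights.shape[1]
    potential = np.zeros(n_out)
    spike_counts = np.zeros(n_out, dtype=int)
    # Spread the input evenly over the simulation steps (assumption:
    # simple rate coding, not the paper's exact event stream).
    drive = input_rates @ weights / n_steps
    for _ in range(n_steps):
        potential += drive
        fired = potential >= threshold
        spike_counts += fired
        potential[fired] -= threshold   # reset by subtraction
    return spike_counts
```

Negative net input never crosses the threshold, so those units simply stay silent, which is where the equivalence to rectified-linear units comes from.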

https://arxiv.org/pdf/1701.07879v1.pdf A Radically New Theory of how the Brain Represents and Computes with Probabilities

In contrast, our theory assumes: 1) binary neurons; 2) that any individual code is a sparse distributed code (SDC), i.e., comprises only a small subset of neurons; 3) only binary synapses; 4) signaling via waves of contemporaneously arriving first-spikes; that individual neurons 5) have completely flat tuning functions (all weights initially zero) and 6) are not noisy; and 7) that noise is a resource, generated and used to achieve the crucial property that more similar inputs map to more similar codes. This property controls a tradeoff between storage capacity and embedding the statistics of the input space (in the pattern of intersections over the codes), which manifests as particular correlation patterns. The theory, Sparsey, was introduced 20 years ago as a canonical cortical circuit/algorithm model, but its interpretation as a probabilistic model was not emphasized.
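
A hedged illustration of the "pattern of intersections over the codes" idea: similarity between two sparse distributed codes can be read off as the size of their intersection (number of shared active units). The code length, sparsity, and indices below are made up for illustration and are not Sparsey's actual macrocolumn parameters.

```python
import numpy as np

def code_overlap(code_a, code_b):
    """Similarity between two sparse distributed codes (SDCs),
    measured as the number of shared active units. The abstract's
    claim is that more similar inputs receive codes with larger
    intersections."""
    return int(np.sum(code_a & code_b))

# Hypothetical 20-unit codes with 4 active units each.
a = np.zeros(20, dtype=bool); a[[1, 5, 9, 14]] = True
b = np.zeros(20, dtype=bool); b[[1, 5, 9, 17]] = True   # code for a similar input
c = np.zeros(20, dtype=bool); c[[0, 7, 12, 19]] = True  # code for a dissimilar input
print(code_overlap(a, b), code_overlap(a, c))  # 3 vs. 0
```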

https://arxiv.org/pdf/1612.05596v2.pdf Neuromorphic Deep Learning Machines

Random backpropagation (BP) replaces the feedback weights with random ones and encourages the network to adjust its feed-forward weights to learn pseudo-inverses of the (random) feedback weights. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses error-modulated synaptic plasticity for learning deep representations in neuromorphic computing hardware. The rule requires only one addition and two comparisons per synaptic weight, using a two-compartment leaky Integrate & Fire (I&F) neuron, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that with eRBP, deep representations are rapidly learned, achieving classification accuracies nearly identical to those of artificial neural network simulations on GPUs, while remaining robust to quantization of neural and synaptic state during learning.
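
A hedged sketch of what a per-synapse update with "one addition and two comparisons" could look like: on a presynaptic spike, the error signal carried by the second compartment is added to the weight, gated by a boxcar window (two comparisons) on the postsynaptic neuron's state. The variable names, gate bounds, and learning-rate scaling are assumptions for illustration, not the exact eRBP rule.

```python
def erbp_update(weight, presyn_spike, error_signal, v_membrane,
                lr=1e-3, v_min=-1.0, v_max=1.0):
    """Error-modulated plasticity step in the spirit of eRBP (sketch).

    On a presynaptic spike, add the (scaled) error signal to the
    weight, gated by a boxcar on the postsynaptic membrane state:
    the gate costs two comparisons, the update itself one addition.
    """
    if not presyn_spike:
        return weight                   # no update without a spike event
    if v_min < v_membrane < v_max:      # two comparisons (boxcar gate)
        weight += lr * error_signal     # one addition (scaled by lr here)
    return weight
```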

http://journal.frontiersin.org/article/10.3389/fnins.2016.00508/full Training Deep Spiking Neural Networks Using Backpropagation

https://arxiv.org/abs/1705.11146 SuperSpike: Supervised learning in multi-layer spiking neural networks

https://www.youtube.com/watch?v=05MpycrocRI Geoffrey Hinton: “A Computational Principle that Explains Sex, the Brain, and Sparse Coding”

https://arxiv.org/abs/1802.02627v1 Going Deeper in Spiking Neural Networks: VGG and Residual Architectures

https://arxiv.org/abs/1803.09574 Long short-term memory and Learning-to-learn in networks of spiking neurons

We show here that SNNs attain similar capabilities if one includes adapting neurons in the network. Adaptation denotes an increase of the firing threshold of a neuron after preceding firing. A substantial fraction of neurons in the neocortex of rodents and humans has been found to be adapting. It turns out that if adapting neurons are integrated in a suitable manner into the architecture of SNNs, the performance of these enhanced SNNs, which we call LSNNs, on computations in the temporal domain approaches that of artificial neural networks with LSTM units. In addition, the computing and learning capabilities of LSNNs can be substantially enhanced through learning-to-learn (L2L) methods from machine learning, which have so far been applied primarily to LSTM networks and apparently never to SNNs.
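
A hedged sketch of an adapting neuron of the kind described above: each spike raises the effective firing threshold, which then decays back toward its baseline. Parameter values and the exact update form are illustrative, not the LSNN model from the paper.

```python
import numpy as np

def adaptive_lif(input_current, dt=1e-3, tau_m=20e-3, tau_a=200e-3,
                 v_thresh0=1.0, beta=0.5):
    """Leaky integrate-and-fire neuron with threshold adaptation:
    every spike increments an adaptation variable that raises the
    effective threshold and decays back with time constant tau_a."""
    v, a = 0.0, 0.0
    spikes = []
    for i_t in input_current:
        v += dt / tau_m * (-v + i_t)       # leaky membrane integration
        a *= np.exp(-dt / tau_a)           # adaptation decays toward zero
        if v >= v_thresh0 + beta * a:      # effective (adapted) threshold
            spikes.append(1)
            v = 0.0                        # reset membrane potential
            a += 1.0                       # raise threshold after firing
        else:
            spikes.append(0)
    return spikes
```

With a constant input, the spike train produced by this sketch thins out over time as the threshold adapts, which is the memory-like effect the abstract attributes to adapting neurons.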

https://dgyblog.com/projects-term/res/Theory%20and%20Tools%20for%20the%20Conversion%20of%20Analog%20to%20Spiking%20Convolutional%20Neural%20Networks.pdf Theory and Tools for the Conversion of Analog to Spiking Convolutional Neural Networks