Name Pruning

Intent

Motivation

Structure

<Diagram>

Discussion

Known Uses

Related Patterns

<Diagram>

References

http://openreview.net/pdf?id=SkC_7v5gx The Power of Sparsity in Convolutional Neural Networks

A surprisingly effective approach to trade accuracy for size and speed is to simply reduce the number of channels in each convolutional layer by a fixed fraction and retrain the network. In many cases this leads to significantly smaller networks with only minimal changes to accuracy. In this paper, we take a step further by empirically examining a strategy for deactivating connections between filters in convolutional layers in a way that allows us to harvest savings both in run-time and memory for many network architectures.
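The channel-reduction baseline described above is straightforward to sketch. The following is a minimal NumPy illustration (not the paper's code): it drops a fixed fraction of a convolutional layer's output channels, ranked by L1 norm, and removes the matching input channels from the next layer before the network would be retrained. The function name, the L1 ranking criterion, and the toy shapes are illustrative assumptions.

```python
import numpy as np

def prune_conv_channels(weight, next_weight, fraction=0.5):
    """Channel-pruning sketch: drop a fixed fraction of output channels
    from one conv layer (and the matching input channels of the next),
    keeping the filters with the largest L1 norm. Shapes follow the
    common (out_channels, in_channels, kH, kW) convention; the pruned
    network would then be retrained to recover accuracy."""
    out_channels = weight.shape[0]
    n_keep = max(1, int(round(out_channels * (1.0 - fraction))))

    # Rank filters by L1 norm and keep the strongest ones.
    l1 = np.abs(weight).reshape(out_channels, -1).sum(axis=1)
    keep = np.sort(np.argsort(l1)[-n_keep:])

    pruned = weight[keep]               # fewer output channels here
    next_pruned = next_weight[:, keep]  # fewer input channels downstream
    return pruned, next_pruned, keep

# Toy usage: two 3x3 conv layers with 3 -> 64 -> 128 channels.
w1 = np.random.randn(64, 3, 3, 3)
w2 = np.random.randn(128, 64, 3, 3)
w1_p, w2_p, kept = prune_conv_channels(w1, w2, fraction=0.5)
print(w1_p.shape, w2_p.shape)  # (32, 3, 3, 3) (128, 32, 3, 3)
```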

https://arxiv.org/abs/1701.04465v1 The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning

We also observed strong evidence for the hypotheses of Mozer & Smolensky (1989a) regarding the "dualist" nature of hidden units, i.e. that learned representations are divided between units which either participate in the output approximation or learn to cancel each other's influence.

https://arxiv.org/abs/1704.05119 Exploring Sparsity in Recurrent Neural Networks

We propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to that of the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still reducing the total number of parameters significantly. Pruning RNNs reduces the size of the model and can also help achieve significant inference-time speed-up using sparse matrix multiply. Benchmarks show that using our technique, model size can be reduced by 90% and speed-up is around 2x to 7x.
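As a rough illustration of pruning during training, the sketch below uses simple magnitude-based masking with a linear sparsity ramp; the paper itself uses a per-layer threshold schedule, so the function names, schedule shape, and hyperparameters here are placeholders rather than the authors' implementation.

```python
import numpy as np

def magnitude_mask(weights, sparsity):
    """Mask that zeroes out the smallest-magnitude fraction `sparsity` of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold

def sparsity_schedule(step, start_step, end_step, final_sparsity=0.9):
    """Ramp the target sparsity linearly from 0 to `final_sparsity` during training."""
    if step < start_step:
        return 0.0
    if step >= end_step:
        return final_sparsity
    progress = (step - start_step) / (end_step - start_step)
    return final_sparsity * progress

# Toy training-loop skeleton: after each (hypothetical) gradient update,
# recompute the mask and zero the pruned weights so they stay inactive.
W = np.random.randn(256, 256)
for step in range(1000):
    # ... gradient update on W would happen here ...
    mask = magnitude_mask(W, sparsity_schedule(step, start_step=100, end_step=800))
    W *= mask
```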

https://arxiv.org/abs/1810.04622v1 Pruning neural networks: is it time to nip it in the bud?

First, when time-constrained, it is better to train a simple, smaller network from scratch than to prune a large network. Second, it is the architectures obtained through the pruning process, not the learnt weights, that prove valuable. Such architectures are powerful when trained from scratch. Furthermore, these architectures are easy to approximate without any further pruning: we can prune once and obtain a family of new, scalable network architectures for different memory requirements.
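To illustrate the "prune once, obtain a family of architectures" idea, the sketch below scales a set of per-layer channel counts (assumed to have been read off a pruned model) by different budget ratios; each resulting architecture would then be trained from scratch. The helper function and the channel counts are hypothetical, not taken from the paper.

```python
def scale_architecture(channel_counts, budget_ratio):
    """Derive a smaller architecture by uniformly scaling per-layer channel counts.

    `channel_counts` might come from inspecting which channels survived pruning;
    scaling them gives a family of architectures for different memory budgets,
    each intended to be trained from scratch rather than fine-tuned."""
    return [max(1, int(round(c * budget_ratio))) for c in channel_counts]

# Hypothetical channel counts surviving pruning in a small conv net.
pruned_arch = [48, 96, 96, 192]
for ratio in (1.0, 0.75, 0.5):
    print(ratio, scale_architecture(pruned_arch, ratio))
```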