Scattering Network

References

https://arxiv.org/pdf/1605.02971v2.pdf Structured Receptive Fields in CNNs

Normal CNNs treat images and their filters as arrays of pixel values; this work instead aims for a CNN that treats images as functions in scale-space, so that the learned convolution kernels become functions as well.

An illustration of the basic building block in an RFNN: a linear combination of a limited basis filter set φ_m yields an arbitrary number of effective filters. The weights α_ij are learned by the network.
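A minimal sketch of that building block, assuming a tiny Gaussian-derivative basis for illustration (the paper derives its basis from scale-space theory); only the combination weights α_ij would be trained:

```python
# RFNN building block: effective filters are learned linear combinations
# of a small, fixed filter basis phi_m. The 3-filter Gaussian-derivative
# basis below is an illustrative assumption, not the paper's exact basis.
import numpy as np

def gaussian_derivative_basis(size=7, sigma=1.5):
    """Fixed basis: a Gaussian and its first x/y derivatives (M=3 filters)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g /= g.sum()
    gx = -xx / sigma**2 * g   # d/dx of the Gaussian
    gy = -yy / sigma**2 * g   # d/dy of the Gaussian
    return np.stack([g, gx, gy])            # shape: (M, size, size)

basis = gaussian_derivative_basis()         # phi_m: fixed, never trained
M = basis.shape[0]

n_effective = 16                            # arbitrary number of effective filters
alpha = np.random.randn(n_effective, M)     # alpha_ij: the only learned weights

# Each effective filter i is sum_m alpha[i, m] * phi_m.
effective_filters = np.tensordot(alpha, basis, axes=([1], [0]))
print(effective_filters.shape)              # (16, 7, 7)
```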

http://arxiv.org/abs/1306.5532v2 Deep Learning by Scattering

Scattering networks iteratively apply complex-valued unitary operators, and pooling is performed by the complex modulus.
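A minimal 1-D sketch of this recipe, with an illustrative Morlet-like wavelet standing in for the paper's operators:

```python
# One scattering layer in 1-D: convolve with a complex-valued wavelet,
# then pool with the complex modulus. The filter below is an
# illustrative assumption, not the paper's exact wavelet family.
import numpy as np

def morlet(n=64, xi=2.0, sigma=8.0):
    """Complex Morlet-like wavelet: a modulated Gaussian (unnormalized)."""
    t = np.arange(n) - n // 2
    return np.exp(1j * xi * t / sigma) * np.exp(-t**2 / (2 * sigma**2))

x = np.random.randn(512)                 # input signal
psi = morlet()

# Linear part: complex wavelet convolution (the unitary operator).
wx = np.convolve(x, psi, mode="same")    # complex-valued coefficients

# Nonlinear "pooling": the complex modulus discards the phase and
# yields a positive envelope that is stable to small translations.
sx = np.abs(wx)

# Iterating (convolve sx with the next wavelet, take the modulus again)
# produces the deeper scattering coefficients.
```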

http://arxiv.org/abs/1509.09187v1 Deep Haar Scattering Networks

Structured Haar scattering on a graph computes each layer S_{j+1}x by pairing the rows of the previous layer S_j x. For each pair of rows, it stores their sum and the absolute value of their difference in a row twice as long.
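A minimal sketch of one such layer, assuming consecutive rows are paired (the paper adapts the pairing to the graph structure):

```python
# One structured Haar scattering layer: rows of S_j x are paired, and
# each pair (a, b) is replaced by the row [a + b, |a - b|], so the next
# layer has half as many rows, each twice as long.
import numpy as np

def haar_scattering_layer(S):
    """S: array of shape (n_rows, row_len) with n_rows even."""
    a, b = S[0::2], S[1::2]                      # consecutive-row pairing
    return np.concatenate([a + b, np.abs(a - b)], axis=1)

S0 = np.random.randn(8, 4)      # S_0 x: 8 rows of length 4
S1 = haar_scattering_layer(S0)  # S_1 x: 4 rows of length 8
print(S0.shape, S1.shape)       # (8, 4) (4, 8)
```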

https://arxiv.org/abs/1610.02357v1 Deep Learning with Separable Convolutions

In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset.
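A minimal PyTorch sketch of the building block Xception substitutes for Inception modules (hyperparameters here are illustrative):

```python
# Depthwise separable convolution: a per-channel spatial convolution
# followed by a 1x1 pointwise convolution that mixes channels.
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixing information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 56, 56)
y = SeparableConv2d(32, 64)(x)
print(y.shape)  # torch.Size([1, 64, 56, 56])
```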

http://www.di.ens.fr/data/publications/papers/cvpr_13_sifre_mallat_final.pdf Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination (Sifre & Mallat, CVPR 2013)

https://arxiv.org/abs/1403.1687 Rigid-Motion Scattering for Texture Classification

Rigid-motion scattering is similar to translation scattering, but the deep wavelet modulus operators |W| are replaced with rigid-motion wavelet modulus operators |W̃|, whose convolutions are applied along the rigid-motion group.
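A rough sketch of the extra ingredient, assuming placeholder filters: first-layer modulus coefficients are indexed by the wavelet's rotation angle as well as position, and the next convolution also runs (periodically) along the angle axis:

```python
# Convolution along the rotation part of the rigid-motion group.
# The filters here are placeholders, not the paper's rigid-motion wavelets.
import numpy as np

n_angles, H, W = 8, 32, 32
# |W|x: modulus of first-layer wavelet coefficients, indexed by the
# rotation angle of the wavelet as well as spatial position.
U = np.abs(np.random.randn(n_angles, H, W))

h_angle = np.array([0.25, 0.5, 0.25])   # placeholder angular filter

# Periodic (circular) convolution along the angle axis via shifts;
# a spatial wavelet convolution would then act on the other two axes.
U_theta = sum(w * np.roll(U, k - 1, axis=0) for k, w in enumerate(h_angle))
print(U_theta.shape)                     # (8, 32, 32)
```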

https://joanbruna.github.io/stat212b/ Stat 212b (UC Berkeley, Spring 2016): Topics on Deep Learning (course site: https://bcourses.berkeley.edu/courses/1413088)

https://arxiv.org/ftp/arxiv/papers/1701/1701.02291.pdf QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures

https://arxiv.org/abs/1703.08961v1 Scaling the Scattering Transform: Deep Hybrid Networks

We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing the best results to date with predefined representations while being competitive with deep CNNs.
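A minimal sketch of such a hybrid, assuming the kymatio package for the fixed scattering frontend and an illustrative learned head (not the paper's exact model):

```python
# Hybrid network: fixed scattering transform as the first layers,
# followed by a small learned classifier. Assumes kymatio is installed
# and 32x32 RGB inputs (e.g. CIFAR-10-sized images).
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

J, L = 2, 8
scattering = Scattering2D(J=J, shape=(32, 32), L=L)  # fixed, nothing to learn
K = 1 + L * J + (L**2) * J * (J - 1) // 2            # 81 coefficients per channel

head = nn.Sequential(                                # learned part
    nn.BatchNorm2d(3 * K),
    nn.Conv2d(3 * K, 128, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 10),
)

x = torch.randn(4, 3, 32, 32)                        # batch of RGB images
s = scattering(x)                                    # (4, 3, 81, 8, 8)
s = s.view(s.size(0), -1, s.size(-2), s.size(-1))    # merge color and scattering channels
logits = head(s)
print(logits.shape)                                  # torch.Size([4, 10])
```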

https://arxiv.org/pdf/1809.06367v1.pdf Scattering Networks for Hybrid Representation Learning

For supervised learning, we demonstrate that the early layers of CNNs do not necessarily need to be learned, and can be replaced with a scattering network instead. Indeed, using hybrid architectures, we achieve the best results to date with predefined representations, while being competitive with end-to-end learned CNNs.