Scattering Network

References

Structured Receptive Fields in CNNs

Normal CNNs treat images and their filters as arrays of pixel values; we aim for a CNN that treats images as functions in scale-space. Thus, the learned convolution kernels become functions as well.

An illustration of the basic building block in an RFNN network: a linear combination of a limited basis filter set φm yields an arbitrary number of effective filters. The weights αij are learned by the network.

Deep Learning by Scattering
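The RFNN building block above (effective filters as learned linear combinations of a fixed basis φm) can be sketched in a few lines. The Gaussian-derivative basis and all shapes below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def gaussian_derivative_basis(size=5, sigma=1.0):
    # Fixed, non-learned basis: a Gaussian and its first derivatives.
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    g /= g.sum()
    gx = -x / sigma**2 * g   # derivative along x
    gy = -y / sigma**2 * g   # derivative along y
    return np.stack([g, gx, gy])            # basis phi_m, shape (3, size, size)

def effective_filters(alpha, basis):
    # Effective filter i is sum_m alpha[i, m] * phi_m.
    return np.tensordot(alpha, basis, axes=([1], [0]))

basis = gaussian_derivative_basis()
alpha = np.random.randn(8, 3)               # weights alpha_ij (learned; random here)
filters = effective_filters(alpha, basis)   # 8 effective filters, shape (8, 5, 5)
```

Only the small weight matrix alpha would be trained; the basis stays fixed, which is the point of the structured receptive field.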

Scattering networks iteratively apply complex-valued unitary operators, and the pooling is performed by a complex modulus.

Deep Haar Scattering Networks
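The complex-modulus pooling just described can be sketched as one scattering step; the 1-D signal and Morlet-like filter here are assumptions of the sketch:

```python
import numpy as np

def complex_wavelet(n=32, xi=0.5, sigma=4.0):
    # Morlet-like complex filter: modulated Gaussian (illustrative choice).
    t = np.arange(n) - n // 2
    return np.exp(1j * xi * t) * np.exp(-t**2 / (2 * sigma**2))

def scattering_step(x, psi):
    wx = np.convolve(x, psi, mode="same")  # complex-valued linear operator
    return np.abs(wx)                      # modulus "pooling": real, non-negative

x = np.random.randn(128)
s = scattering_step(x, complex_wavelet())  # same length as x, phase discarded
```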

Structured Haar scattering on a graph computes each layer S_{j+1}x by pairing the rows of the previous layer S_j x. For each pair of rows, it stores their sum and the absolute value of their difference in a row twice as long.

Xception: Deep Learning with Depthwise Separable Convolutions
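The pairing rule above (sum and absolute difference per pair of rows) is easy to write down; the adjacent-rows pairing and the layout of each output row are assumptions of this sketch:

```python
import numpy as np

def haar_scattering_layer(S):
    # S: (2k, n) array of rows. Pair adjacent rows; each pair (a, b)
    # yields one row of length 2n holding [a + b, |a - b|].
    a, b = S[0::2], S[1::2]
    return np.concatenate([a + b, np.abs(a - b)], axis=1)

S0 = np.random.randn(8, 4)      # 8 rows of length 4
S1 = haar_scattering_layer(S0)  # 4 rows of length 8
```

Each layer halves the number of rows and doubles their length, so the total number of coefficients is preserved.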

In this light, a separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset.

Rigid-Motion Scattering for Texture Classification
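The depthwise separable convolution underlying Xception factors a convolution into a per-channel spatial filter followed by a 1x1 channel-mixing step. A naive NumPy sketch (loops kept for clarity; all shapes assumed):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise, pointwise):
    # x: (C, H, W); depthwise: (C, k, k); pointwise: (C_out, C).
    C, H, W = x.shape
    k = depthwise.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    dw = np.empty_like(x)
    for c in range(C):                      # spatial filtering per channel
        for i in range(H):
            for j in range(W):
                dw[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * depthwise[c])
    # 1x1 convolution = channel mixing at every spatial location
    return np.einsum("oc,chw->ohw", pointwise, dw)

x = np.random.randn(3, 8, 8)
y = depthwise_separable_conv(x, np.random.randn(3, 3, 3), np.random.randn(16, 3))
```

The "maximally many towers" reading: every channel gets its own spatial tower, and the pointwise step is the shared channel recombination.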

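A rough sketch of the extra ingredient in rigid-motion scattering: after a first wavelet-modulus layer computed over R rotated filters, the coefficients carry an orientation axis, and a rigid-motion wavelet also convolves circularly along that axis. All names and shapes here are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def angle_convolution(u, h):
    # u: (R, H, W) modulus coefficients indexed by rotation angle;
    # h: (R,) filter applied circularly along the angle axis.
    R = u.shape[0]
    out = np.zeros_like(u)
    for r in range(R):
        for s in range(R):
            out[r] += h[s] * u[(r - s) % R]   # circular convolution over angles
    return out

u = np.abs(np.random.randn(8, 16, 16))        # 8 orientations of modulus coefficients
v = angle_convolution(u, np.ones(8) / 8)      # averaging along the rotation group
```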
Rigid-motion scattering is similar to the translation scattering of Figure 4, but deep wavelet modulus operators |W| are replaced with rigid-motion wavelet modulus operators |W̃|, where convolutions are applied along the rigid-motion group.

Spring 2016, Stats 212b (UC Berkeley): Topics on Deep Learning

QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures

Scaling the Scattering Transform: Deep Hybrid Networks

We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that the early layers do not necessarily need to be learned, obtaining the best results to date with predefined representations while remaining competitive with deep CNNs.

Scattering Networks for Hybrid Representation Learning

For supervised learning, we demonstrate that the early layers of CNNs do not necessarily need to be learned, and can be replaced with a scattering network instead. Indeed, using hybrid architectures, we achieve the best results to date with predefined representations, while remaining competitive with end-to-end learned CNNs.
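A toy version of the hybrid idea described above: a fixed, non-learned scattering-style feature extractor feeds a learned linear top layer, so only the classifier weights are fit. Everything here (the filter, the pooling width, the least-squares fit) is an assumption of the sketch:

```python
import numpy as np

def fixed_features(x, psi):
    # First-order scattering-style features: |x * psi|, then local averaging.
    wx = np.abs(np.convolve(x, psi, mode="same"))
    return wx.reshape(-1, 8).mean(axis=1)       # low-pass pooling over windows of 8

rng = np.random.default_rng(0)
t = np.arange(16) - 8
psi = np.exp(1j * 0.8 * t) * np.exp(-t**2 / 8)  # fixed complex wavelet (never trained)

X = rng.standard_normal((32, 64))               # 32 signals of length 64
F = np.stack([fixed_features(x, psi) for x in X])  # fixed representation, (32, 8)
y = rng.standard_normal(32)                     # toy regression targets
w, *_ = np.linalg.lstsq(F, y, rcond=None)       # learn only the top layer
```

The split mirrors the hybrid architecture: the front end is predefined, and all learning happens on top of it.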