complex_parameters [2017/12/28 11:52] → [2018/06/12 19:15], edited by admin
  
https://arxiv.org/pdf/1712.07811v1.pdf Multi-dimensional Graph Fourier Transform

https://arxiv.org/abs/1802.08245v1 Arbitrarily Substantial Number Representation for Complex Number

Researchers are often perplexed when their machine learning algorithms must deal with complex numbers. Various strategies are commonly employed to project complex numbers onto real numbers, but these frequently sacrifice the information the complex numbers contain. This paper proposes a new method and four techniques for representing complex numbers as real numbers without sacrificing that information. The proposed techniques can also recover the original complex number from its real-valued representation with little to no information loss. The demonstrated applicability of the proposed techniques is promising and warrants further exploration.
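The paper's four specific techniques are not reproduced here, but as a minimal sketch of the general idea (a lossless, invertible complex-to-real mapping), one can interleave real and imaginary parts into a single real vector:

```python
import numpy as np

def complex_to_real(z):
    """Interleave real and imaginary parts into one real vector (lossless)."""
    z = np.asarray(z, dtype=complex)
    out = np.empty(2 * z.size)
    out[0::2] = z.real
    out[1::2] = z.imag
    return out

def real_to_complex(x):
    """Invert complex_to_real exactly."""
    x = np.asarray(x, dtype=float)
    return x[0::2] + 1j * x[1::2]

z = np.array([1 + 2j, -0.5 + 0.25j])
roundtrip = real_to_complex(complex_to_real(z))
assert np.allclose(roundtrip, z)  # no information lost in the round trip
```

This toy mapping doubles the dimensionality; the paper's contribution is representations that avoid that cost, which this sketch does not attempt.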

https://arxiv.org/abs/1802.08235v1 Vector Field Based Neural Networks

The data points are interpreted as particles moving along a flow defined by a vector field, which intuitively represents the movement needed to enable classification. The architecture moves the data points from their original configuration to a new one by following the streamlines of the vector field, with the objective of reaching a final configuration where the classes are separable. The vector field is learned by solving an optimization problem through gradient descent.
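A toy sketch of the flow step, assuming a simple parametric vector field and explicit Euler integration (the paper's exact parameterization and training loop are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data points to be transported by the flow.
X = rng.normal(size=(8, 2))

# Hypothetical parametric vector field V(x) = tanh(x W + b); in training,
# W and b would be learned by gradient descent on a classification loss.
W = rng.normal(scale=0.1, size=(2, 2))
b = np.zeros(2)

def vector_field(x):
    return np.tanh(x @ W + b)

def flow(x, steps=10, h=0.1):
    """Move points along the field's streamlines with explicit Euler steps."""
    for _ in range(steps):
        x = x + h * vector_field(x)
    return x

X_new = flow(X)  # final configuration, same shape as X
```

A classifier (e.g. a linear one) would then be applied to `X_new` rather than `X`.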

https://arxiv.org/abs/1803.04386v2 Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches

We introduce flipout, an efficient method for decorrelating the gradients within a mini-batch by implicitly sampling pseudo-independent weight perturbations for each example. Empirically, flipout achieves the ideal linear variance reduction for fully connected networks, convolutional networks, and RNNs. We find significant speedups in training neural networks with multiplicative Gaussian perturbations.

https://eng.uber.com/differentiable-plasticity/

https://arxiv.org/abs/1711.01297v1 Implicit Weight Uncertainty in Neural Networks

http://mdolab.engin.umich.edu/sites/default/files/Martins2003CSD.pdf The Complex-Step Derivative Approximation
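The complex-step approximation evaluates an analytic function at a complex point, f'(x) ≈ Im f(x + ih) / h. Because there is no subtraction of nearly equal numbers, h can be made tiny without cancellation error, unlike finite differences. A short sketch:

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Complex-step approximation f'(x) ~ Im(f(x + i*h)) / h.

    Valid for functions that are analytic at x; no subtractive
    cancellation, so h can be far smaller than machine epsilon.
    """
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)  # analytic derivative: e^x (sin x + cos x)
x = 0.7
d = complex_step_derivative(f, x)
exact = np.exp(x) * (np.sin(x) + np.cos(x))
err = abs(d - exact)  # near machine precision
```

Compare with a forward finite difference, whose error bottoms out around sqrt(eps) because of cancellation in f(x+h) - f(x).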

https://github.com/facebookresearch/QuaterNet QuaterNet: A Quaternion-based Recurrent Model for Human Motion