Add deep dream research

Resisting Adversarials for Convolutional Neural Networks using Internal Projection

By forcing the network to redraw the relevant parts of the image and then comparing this redrawing to the original, we have the network give a "proof" of the presence of the object.
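The comparison step can be sketched as a similarity score between the original patch and the network's redrawing; a high score counts as the "proof". This is a minimal sketch assuming the redrawn region is available as an array (`proof_score` is a hypothetical helper, not from the source):

```python
import numpy as np

def proof_score(original, redrawn):
    # Hypothetical sketch: normalized cross-correlation between the
    # original image patch and the network's redrawing of it. Values
    # near 1 suggest the network can reproduce the object it claims
    # to see; values near 0 suggest an adversarial or spurious detection.
    a = original.astype(float).ravel()
    b = redrawn.astype(float).ravel()
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.dot(a, b) / a.size)

# Toy usage: a faithful redrawing scores near 1.
patch = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(proof_score(patch, patch))  # ~1.0
```

Any perceptual similarity measure (e.g. SSIM) could stand in for the cross-correlation here; the point is only that the redrawing, not the classification score, is what gets checked.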

We have analyzed the emergence of semantic parts in CNNs, investigating whether the network's filters learn to respond to semantic parts. We associated filter stimuli with ground-truth part bounding boxes in order to quantitatively evaluate different layers, network architectures and supervision levels. Despite promoting this emergence by providing favorable settings and multiple assists, we found that only 34 of the 123 semantic parts in the PASCAL-Part dataset [5] emerge in AlexNet [6] finetuned for object detection [7].

Knowledge Representation in Graphs using Convolutional Neural Networks

Knowledge Graphs (KG) are a flexible representation of complex relationships between entities, particularly useful for biomedical data. These KG, however, are very sparse, with many missing edges (facts), and visualising the mesh of interactions is nontrivial. Here we apply a compositional model to embed nodes and relationships into a vectorised semantic space to perform graph completion. A visualisation tool based on Convolutional Neural Networks and Self-Organising Maps (SOM) is proposed to extract high-level insights from the KG. We apply this technique to a subset of CTD, containing interactions of compounds with human genes / proteins, and show that the performance is comparable to that obtained by structural models.

Open Sourcing the Embedding Projector: A Tool for Visualizing High Dimensional Data

Visualizing Deep Neural Network Decisions: Prediction Difference Analysis

We have presented a new method for visualizing deep neural networks that improves on previous methods by using a more powerful conditional, multivariate model. The visualization shows which pixels of a specific input image are evidence for or against a node in the network. This signed information offers new insights for research on the networks themselves, as well as for acceptance and usability in domains like healthcare.
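The core idea of prediction difference analysis can be sketched on tabular features: a feature's relevance is the drop in the model's output when that feature is marginalized out by resampling. This minimal sketch implements the simpler *marginal* variant (the paper's contribution is the conditional one); the function names and toy model are assumptions for illustration:

```python
import numpy as np

def prediction_difference(f, x, data, n_samples=50, rng=None):
    # Marginal-variant sketch of prediction difference analysis:
    # relevance of feature i = f(x) minus the expected prediction when
    # feature i is resampled from its marginal distribution in `data`.
    # Positive values mark features that are evidence *for* the
    # prediction, negative values evidence *against* it.
    rng = rng or np.random.default_rng(0)
    base = f(x)
    relevance = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        samples = rng.choice(data[:, i], size=n_samples)
        xs = np.repeat(x[None, :], n_samples, axis=0)
        xs[:, i] = samples
        relevance[i] = base - np.mean([f(row) for row in xs])
    return relevance

# Toy model that only looks at feature 0.
f = lambda x: 1.0 / (1.0 + np.exp(-3.0 * x[0]))
data = np.random.default_rng(1).normal(size=(200, 3))
x = np.array([2.0, 0.0, 0.0])
rel = prediction_difference(f, x, data)
# Feature 0 carries all the evidence; features 1 and 2 score zero.
```

For images, the conditional version resamples a pixel patch given its surroundings instead of from the marginal, which is what makes the saliency maps sharper.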

The results indicate that the proposed visualization mechanism, based on modeling the conditional distribution, identifies more salient regions than one based on modeling the marginal distribution.

ActiVis: Visual Exploration of Industry-Scale Deep Neural Network Models

Data2Vis: Automatic Generation of Data Visualizations Using Sequence-to-Sequence Recurrent Neural Networks

We formulate visualization generation as a sequence-to-sequence translation problem in which a data specification is mapped to a visualization specification in a declarative language (Vega-Lite). To this end, we train a multilayered Long Short-Term Memory (LSTM) model with attention on a corpus of visualization specifications. Qualitative results show that our model learns the vocabulary and syntax of a valid visualization specification, appropriate transformations (count, bins, mean) and common data selection patterns that occur within data visualizations. Our model generates visualizations comparable to manually created ones in a fraction of the time, with the potential to learn more complex visualization strategies at scale.

VizML: A Machine Learning Approach to Visualization Recommendation
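To make the Data2Vis note above concrete, here is what its input/output pair looks like: a data specification on one side and a declarative Vega-Lite specification on the other. The field names are hypothetical; the spec shape follows standard Vega-Lite conventions:

```python
import json

# Hypothetical input side: a data specification listing fields and types.
data_spec = [
    {"name": "age", "type": "quantitative"},
    {"name": "income", "type": "quantitative"},
]

# The kind of Vega-Lite output a trained seq2seq model would emit:
# a mark plus per-channel encodings, including a transformation (mean).
vis_spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "mark": "point",
    "encoding": {
        "x": {"field": "age", "type": "quantitative"},
        "y": {"field": "income", "type": "quantitative",
              "aggregate": "mean"},
    },
}
print(json.dumps(vis_spec, indent=2))
```

Because both sides are short, structured token sequences, the problem reduces to learned translation, which is why an attention LSTM suffices.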