Graph Embedding

Aliases

This name identifies the pattern and should be representative of the concept it describes. The name should be a noun that can be used naturally within a sentence; we would like the pattern to be easy to reference in conversation between practitioners.

Intent

A single concise sentence that describes the meaning of the pattern.

Motivation

This section describes why the pattern is needed in practice. Other pattern languages refer to this as the Problem. In our pattern language, we express it as one or more questions and then provide further explanation behind each question.

Sketch

This section provides alternative descriptions of the pattern in the form of an illustration or an alternative formal expression. By looking at the sketch, a reader can quickly grasp the essence of the pattern.

Discussion

This is the main section of the pattern and goes into greater detail to explain it. We leverage the vocabulary that we describe in the theory section of this book. We do not provide detailed proofs but rather reference their sources. This section expounds on how the motivation is addressed. We also include additional questions that may be interesting topics for future research.

Known Uses

Here we review several projects or papers that have used this pattern.

Related Patterns

In this section we describe in a diagram how this pattern is conceptually related to other patterns. The relationships may be precise or fuzzy, so we provide further explanation of the nature of each relationship. We also describe other patterns that may not be conceptually related but work well in combination with this pattern.

Relationship to Canonical Patterns

Relationship to Other Patterns

Further Reading

We provide here some additional external material that will help in exploring this pattern in more detail.

References

To aid further reading, we list the sources that are referenced in the pattern's text.

https://arxiv.org/abs/1403.6652 DeepWalk: Online Learning of Social Representations

DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences.
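
As a rough sketch of this idea (not the paper's implementation), the code below generates truncated random walks with networkx and feeds them as "sentences" to gensim's skip-gram model; the graph, walk counts, and hyperparameters are all illustrative, and gensim >= 4 is assumed.

```python
# Minimal DeepWalk-style sketch: truncated random walks fed to skip-gram.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walk(graph, start, walk_length):
    """Generate one truncated random walk starting at `start`."""
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # Word2Vec expects string tokens

G = nx.karate_club_graph()
walks = [random_walk(G, n, walk_length=10) for n in G.nodes() for _ in range(5)]

# Treat each walk as a "sentence" and learn skip-gram embeddings (sg=1).
model = Word2Vec(walks, vector_size=32, window=4, min_count=0, sg=1, epochs=5)
embedding = model.wv[str(0)]  # latent representation of node 0
```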

http://arxiv.org/pdf/1605.05273v2.pdf Learning Convolutional Neural Networks for Graphs

Given a collection of graphs, learn a function that can be used for classification and regression problems on unseen graphs. The nodes of any two graphs are not necessarily in correspondence. For instance, each graph of the collection could model a chemical compound and the output could be a function mapping unseen compounds to their level of activity against cancer cells.

Given a large graph, learn graph representations that can be used to infer unseen graph properties such as node types and missing edges.

A node sequence is selected from a graph via a graph labeling procedure. For some nodes in the sequence, a local neighborhood graph is assembled and normalized. The normalized neighborhoods are used as receptive fields and combined with existing CNN components.
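
A hedged sketch of the neighborhood-assembly step described above, using node degree as a stand-in for the paper's graph labeling procedure; the actual method uses more sophisticated labelings and feeds the resulting receptive fields to a 1D CNN.

```python
# Toy PATCHY-SAN-style receptive-field construction.
import networkx as nx

def receptive_field(graph, node, k):
    """Assemble and normalize a k-node neighborhood around `node`."""
    # Breadth-first neighborhood assembly until k nodes are collected.
    neighborhood, frontier = [node], [node]
    while len(neighborhood) < k and frontier:
        nxt = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in neighborhood:
                    neighborhood.append(v)
                    nxt.append(v)
        frontier = nxt
    # Normalize: impose a canonical order via the labeling (here: degree).
    ordered = sorted(neighborhood[:k], key=lambda v: -graph.degree(v))
    ordered += [None] * (k - len(ordered))  # pad if the graph is too small
    return ordered

G = nx.karate_club_graph()
# Node sequence selected by the same labeling; fields feed a standard CNN.
sequence = sorted(G.nodes(), key=lambda v: -G.degree(v))[:8]
fields = [receptive_field(G, v, k=5) for v in sequence]
```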

http://arxiv.org/abs/1506.05163 Deep Convolutional Networks on Graph-Structured Data

We develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, which we test on large-scale classification problems, matching or improving over Dropout Networks with far fewer parameters to estimate.

http://arxiv.org/pdf/1607.00653v1.pdf node2vec: Scalable Feature Learning for Networks

Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks.
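
The heart of node2vec is a second-order biased walk controlled by a return parameter p and an in-out parameter q; the sketch below implements one such biased step (parameter values are illustrative), and the resulting walks are then consumed by skip-gram exactly as in the DeepWalk sketch above.

```python
# Sketch of node2vec's biased second-order walk step: the bias depends
# on the previous node (1/p to return, 1 to stay at distance one from
# the previous node, 1/q to move outward).
import random
import networkx as nx

def biased_step(graph, prev, curr, p=1.0, q=0.5):
    neighbors = list(graph.neighbors(curr))
    weights = []
    for nxt in neighbors:
        if nxt == prev:                    # returning to the previous node
            weights.append(1.0 / p)
        elif graph.has_edge(nxt, prev):    # stays within distance 1 of prev
            weights.append(1.0)
        else:                              # moving outward (exploration)
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights, k=1)[0]

G = nx.karate_club_graph()
walk = [0, next(iter(G.neighbors(0)))]
for _ in range(8):
    walk.append(biased_step(G, walk[-2], walk[-1]))
```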

https://arxiv.org/pdf/1608.00318v1.pdf A Neural Knowledge Language Model

We propose a Neural Knowledge Language Model (NKLM) which combines symbolic knowledge provided by knowledge graphs with RNN language models. At each time step, the model predicts a fact on which the observed word is supposed to be based. Then, a word is either generated from the vocabulary or copied from the knowledge graph.

http://snap.stanford.edu/node2vec

http://repository.hkbu.edu.hk/cgi/viewcontent.cgi?article=1000&context=vprd_ja The Graph Neural Network Model

http://tkipf.github.io/graph-convolutional-networks/

https://arxiv.org/abs/1511.05493 Gated Graph Sequence Neural Networks

Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modify to use gated recurrent units and modern optimization techniques and then extend to output sequences. The result is a flexible and broadly useful class of neural network models that has favorable inductive biases relative to purely sequence-based models (e.g., LSTMs) when the problem is graph-structured. We demonstrate the capabilities on some simple AI (bAbI) and graph algorithm learning tasks. We then show it achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.

http://openreview.net/pdf?id=SJg498clg Neural Graph Machines: Learning Neural Networks Using Graphs

In this work, we propose a training objective for neural networks, Neural Graph Machines, for combining the power of neural networks and label propagation. The new objective allows the neural networks to harness both labeled and unlabeled data by: (a) allowing the network to train using labeled data as in the supervised setting, (b) biasing the network to learn similar hidden representations for neighboring nodes on a graph, in the same vein as label propagation. Such architectures with the proposed objective can be trained efficiently using stochastic gradient descent and scaled to large graphs.
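
A minimal sketch of that objective as described: a supervised term on labeled nodes plus a regularizer that biases neighboring nodes toward similar hidden representations. All names here (ngm_loss, hidden, edges, alpha) are ours for illustration, not the paper's.

```python
# Toy Neural-Graph-Machines-style loss: supervised + graph regularizer.
import numpy as np

def ngm_loss(logits, labels, labeled_idx, hidden, edges, alpha=0.1):
    # (a) standard supervised cross-entropy on the labeled nodes
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    supervised = -np.mean(np.log(probs[labeled_idx, labels[labeled_idx]]))
    # (b) graph regularizer: neighboring nodes get similar hidden vectors,
    # propagating label information to unlabeled nodes (label propagation).
    neighbor = np.mean([np.sum((hidden[u] - hidden[v]) ** 2)
                        for u, v in edges])
    return supervised + alpha * neighbor

rng = np.random.default_rng(0)
loss = ngm_loss(rng.normal(size=(6, 3)), np.array([0, 1, 2, 0, 1, 2]),
                np.array([0, 1, 2]), rng.normal(size=(6, 4)),
                edges=[(0, 3), (1, 4), (2, 5)])
```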

https://arxiv.org/pdf/1611.08402v2.pdf Geometric deep learning on graphs and manifolds using mixture model CNNs

We propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image, graph, and 3D shape analysis and show that it consistently outperforms previous approaches.

https://arxiv.org/abs/1511.02136v6 Diffusion-Convolutional Neural Networks (talk: https://www.youtube.com/watch?v=5eTJ6yxtU5s)

We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on the GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.
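
A numpy sketch of the diffusion-convolution operation under our reading of the abstract: node features are diffused over successive hops of the degree-normalized transition matrix and weighted per hop and per feature. The hop count, features, and weights are illustrative.

```python
# Toy diffusion-convolution: Z = f(W * P^h X) stacked over hops h.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
P = A / A.sum(axis=1, keepdims=True)         # one-step transition matrix
X = np.eye(len(G))                           # toy node features
H, F = 3, X.shape[1]

# Stack hop-0..H-1 diffused features: shape (hops, nodes, features).
diffused = np.stack([np.linalg.matrix_power(P, h) @ X for h in range(H)])
W = np.random.default_rng(0).normal(size=(H, F))    # per-hop/feature weights
Z = np.tanh(np.einsum('hnf,hf->nhf', diffused, W))  # diffusion-conv output
```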

https://deepmind.com/blog/differentiable-neural-computers/ Differentiable neural computers

We showed how neural networks and memory systems can be combined to make learning machines that can store knowledge quickly and reason about it flexibly. These models, which we call differentiable neural computers (DNCs), can learn from examples like neural networks, but they can also store complex data like computers.

https://arxiv.org/abs/1612.02255 Knowledge Representation in Graphs using Convolutional Neural Networks

Knowledge Graphs (KGs) constitute a flexible representation of complex relationships between entities, particularly useful for biomedical data. These KGs, however, are very sparse, with many missing edges (facts), and the visualisation of the mesh of interactions is nontrivial. Here we apply a compositional model to embed nodes and relationships into a vectorised semantic space to perform graph completion. A visualisation tool based on Convolutional Neural Networks and Self-Organised Maps (SOM) is proposed to extract high-level insights from the KG. We apply this technique to a subset of CTD, containing interactions of compounds with human genes / proteins, and show that the performance is comparable to the one obtained by structural models.

https://arxiv.org/abs/1703.02618v1 Bootstrapped Graph Diffusions: Exposing the Power of Nonlinearity

We place classic linear graph diffusions in a self-training framework. Surprisingly, we observe that SSL using the resulting bootstrapped diffusions not only significantly improves over the respective non-bootstrapped baselines but also outperforms state-of-the-art non-linear SSL methods. Moreover, since the self-training wrapper retains the scalability of the base method, we obtain both higher quality and better scalability.
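
A hedged sketch of that self-training wrapper: run a linear diffusion (label propagation), promote confident predictions on unlabeled nodes to pseudo-labels, and diffuse again. Thresholds, iteration counts, and the toy transition matrix are all illustrative.

```python
# Bootstrapped linear diffusion (self-training wrapper), toy version.
import numpy as np

def diffuse(P, Y, steps=10):
    F = Y.copy()
    mask = Y.sum(axis=1) > 0
    for _ in range(steps):
        F = P @ F            # linear propagation step
        F[mask] = Y[mask]    # clamp the known labels
    return F

def bootstrap(P, Y, rounds=3, threshold=0.4):
    labels = Y.copy()
    for _ in range(rounds):
        F = diffuse(P, labels)
        conf = F.max(axis=1)
        # self-training step: confident unlabeled nodes become labeled
        for i in np.where((conf > threshold) & (labels.sum(axis=1) == 0))[0]:
            labels[i, F[i].argmax()] = 1.0
    return diffuse(P, labels)

P = np.full((3, 3), 0.5) - 0.5 * np.eye(3)    # toy transition matrix
Y = np.array([[1., 0.], [0., 0.], [0., 1.]])  # one labeled node per class
scores = bootstrap(P, Y)
```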

https://arxiv.org/abs/1703.05614v1 ParaGraphE: A Library for Parallel Knowledge Graph Embedding

ParaGraphE provides a library for parallel knowledge graph embedding, which implements several knowledge graph embedding methods on a single machine with multiple cores and a shared memory. The implemented methods in the current version include TransE[1], TransH[2], TransR[3], TransD[4], and SphereE (the sphere method in ManifoldE)[5].
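
For orientation, here is a sketch of the TransE scoring idea these methods build on: a triple (h, r, t) is plausible when h + r lies close to t, trained with a margin loss against corrupted triples. This is a toy restatement of the model, not ParaGraphE's parallel implementation.

```python
# TransE-style scoring and margin loss, toy version.
import numpy as np

def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)   # higher is more plausible

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 16))  # entity embeddings
R = rng.normal(size=(2, 16))  # relation embeddings
pos = transe_score(E[0], R[0], E[1])      # observed triple
neg = transe_score(E[0], R[0], E[3])      # corrupted tail
margin_loss = max(0.0, 1.0 + neg - pos)   # hinge loss with margin 1
```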

https://arxiv.org/abs/1703.06103 Modeling Relational Data with Graph Convolutional Networks

Knowledge bases play a crucial role in many applications, for example question answering and information retrieval. Despite the great effort invested in creating and maintaining them, even the largest representatives (e.g., Yago, DBPedia or Wikidata) are highly incomplete. We introduce relational graph convolutional networks (R-GCNs) and apply them to two standard knowledge base completion tasks: link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing attributes of entities). R-GCNs are a generalization of graph convolutional networks, a recent class of neural networks operating on graphs, and are developed specifically to deal with highly multi-relational data, characteristic of realistic knowledge bases. Our methods achieve competitive results on standard benchmarks for both tasks.
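
A single R-GCN layer can be read as a normalized per-relation message sum plus a self-connection; the numpy version below is a toy rendering of that update (shapes and weights illustrative), without the basis decomposition the paper uses to control parameter growth.

```python
# Toy R-GCN layer: one weight matrix per relation plus a self-loop.
import numpy as np

def rgcn_layer(H, adjacency_per_relation, W_rel, W_self):
    """H: (nodes, d_in); adjacency_per_relation: list of (nodes, nodes)."""
    out = H @ W_self
    for A_r, W_r in zip(adjacency_per_relation, W_rel):
        deg = np.maximum(A_r.sum(axis=1, keepdims=True), 1.0)  # c_{i,r}
        out += (A_r / deg) @ H @ W_r     # normalized per-relation messages
    return np.maximum(out, 0.0)          # ReLU

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))
A1 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
A2 = A1.T   # e.g. the inverse relation
out = rgcn_layer(H, [A1, A2],
                 [rng.normal(size=(8, 8)) for _ in range(2)],
                 rng.normal(size=(8, 8)))
```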

https://arxiv.org/pdf/1704.04478v1.pdf Graphical Models: An Extension to Random Graphs, Trees, and Other Objects

https://arxiv.org/abs/1704.04675v1 Graph Convolutional Encoders for Syntax-aware Neural Machine Translation

https://github.com/hechtlinger/graph_cnn A generalization of Convolutional Neural Networks to Graph-Structured Data

https://web.stanford.edu/class/cs224n/reports/2758630.pdf GraphNet: Recommendation system based on language and network structure

https://github.com/tkipf/gae Implementation of Graph Auto-Encoders in TensorFlow

https://arxiv.org/abs/1706.09916 Graph Convolutional Networks for Molecules

https://arxiv.org/abs/1707.01476v1 Convolutional 2D Knowledge Graph Embeddings

Analysis of our convolutional model suggests that it is particularly good at modelling nodes with high indegree and nodes with high PageRank and that 2D convolution applied on embeddings seems to induce contrasting pixel-level structures.

https://arxiv.org/pdf/1707.05005v1.pdf graph2vec: Learning Distributed Representations of Graphs

We propose a neural embedding framework named graph2vec to learn data-driven distributed representations of arbitrary sized graphs. graph2vec’s embeddings are learnt in an unsupervised manner and are task agnostic. Hence, they could be used for any downstream task such as graph classification, clustering and even seeding supervised representation learning approaches.
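
A hedged sketch of the underlying analogy: rooted subgraphs (here approximated by one round of Weisfeiler-Lehman relabeling) act as "words" and whole graphs as "documents" for doc2vec. Assumes gensim >= 4; the labeling scheme and sizes are illustrative.

```python
# graph2vec-style sketch: WL labels as words, graphs as documents.
import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def wl_words(graph):
    labels = {v: str(graph.degree(v)) for v in graph}  # initial labels
    words = list(labels.values())
    relabeled = {v: labels[v] + "|" +
                    "".join(sorted(labels[u] for u in graph.neighbors(v)))
                 for v in graph}                        # one WL iteration
    return words + list(relabeled.values())

graphs = [nx.cycle_graph(n) for n in (4, 5, 6)] + [nx.complete_graph(5)]
docs = [TaggedDocument(wl_words(g), [i]) for i, g in enumerate(graphs)]
model = Doc2Vec(docs, vector_size=16, min_count=0, epochs=50)
graph_embedding = model.dv[0]   # task-agnostic embedding of graph 0
```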

https://arxiv.org/abs/1708.04675v1 Learning Graph While Training: An Evolving Graph Convolutional Neural Network

Convolutional neural networks on graphs are an important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications they are highly diverse or not even well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to justify chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batches of arbitrarily shaped data together with their evolving graph Laplacians trained in a supervised fashion. Extensive experiments demonstrate superior performance in terms of both the acceleration of parameter fitting and significantly improved prediction accuracy on multiple graph-structured datasets.

https://github.com/palash1992/GEM

https://arxiv.org/abs/1706.02216 Inductive Representation Learning on Large Graphs GraphSAGE
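
GraphSAGE learns node embeddings inductively by sampling and aggregating features from a node's neighborhood; a minimal mean-aggregator step might look like the following (sample sizes, weights, and the normalization choice are illustrative, not the paper's settings).

```python
# Toy GraphSAGE mean-aggregator layer.
import numpy as np
import networkx as nx

def sage_layer(graph, X, W, num_samples=3, rng=np.random.default_rng(0)):
    out = np.zeros((X.shape[0], W.shape[1]))
    for v in graph.nodes():
        nbrs = list(graph.neighbors(v))
        sample = rng.choice(nbrs, size=min(num_samples, len(nbrs)),
                            replace=False)
        agg = X[sample].mean(axis=0)              # mean aggregation
        out[v] = np.concatenate([X[v], agg]) @ W  # combine self + neighbors
    norm = np.linalg.norm(out, axis=1, keepdims=True)
    return out / np.maximum(norm, 1e-12)          # L2-normalize embeddings

G = nx.karate_club_graph()
X = np.eye(len(G))
W = np.random.default_rng(1).normal(size=(2 * len(G), 16))
H = sage_layer(G, X, W)
```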

https://arxiv.org/abs/1710.06520 LASAGNE: Locality And Structure Aware Graph Node Embedding

https://openreview.net/pdf?id=Hy1d-ebAb Learning Deep Generative Models of Graphs

https://arxiv.org/abs/1711.08014 The Riemannian Geometry of Deep Generative Models

In this paper we have introduced methods for exploring the Riemannian geometry of manifolds learned by deep generative models. Our experiments show that the manifolds these models learn for real image data have relatively little curvature; consequently, straight lines in the latent space are relatively close to geodesic curves on the manifold. This fact may explain why traversal in the latent space results in visually plausible changes to the generated data: curvilinear distances in the original data metric are roughly preserved.

Also, even for the results presented here, the role of curvature should not be completely discounted: there are still differences between latent distances and geodesic distances that may have more nuanced effects in certain applications.

https://arxiv.org/pdf/1711.03189.pdf Deep Hyperspherical Learning

In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres. We introduce SphereNet, deep hyperspherical convolution networks that are distinct from conventional inner product based convolutional networks. In particular, SphereNet adopts SphereConv as its basic convolution operator and is supervised by generalized angular softmax loss - a natural loss formulation under SphereConv. We show that SphereNet can effectively encode discriminative representation and alleviate training difficulty, leading to easier optimization, faster convergence and comparable (even better) classification accuracy over convolutional counterparts.
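
A sketch of the angular response at the core of SphereConv, using what the paper calls the linear variant g(theta) = 1 - 2*theta/pi; a full SphereNet would apply such a response per filter and per input patch rather than a plain inner product.

```python
# SphereConv-style angular response (linear variant), toy version.
import numpy as np

def sphere_response(w, x):
    cos = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x) + 1e-12)
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    return 1.0 - 2.0 * theta / np.pi   # angular, not inner-product, response

w, x = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(sphere_response(w, x))           # angle pi/4 -> response 0.5
```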

https://arxiv.org/pdf/1801.02144v1.pdf Covariant Compositional Networks For Learning Graphs

Most existing neural networks for learning graphs address permutation invariance by conceiving of the network as a message passing scheme, where each node sums the feature vectors coming from its neighbors. We argue that this imposes a limitation on their representation power, and instead propose a new general architecture for representing objects consisting of a hierarchy of parts, which we call Covariant Compositional Networks (CCNs). Here, covariance means that the activation of each neuron must transform in a specific way under permutations, similarly to steerability in CNNs. We achieve covariance by making each activation transform according to a tensor representation of the permutation group, and derive the corresponding tensor aggregation rules that each neuron must implement. Experiments show that CCNs can outperform competing methods on standard graph learning benchmarks.

https://arxiv.org/abs/1802.08773 GraphRNN: A Deep Generative Model for Graphs

GraphRNN is a deep autoregressive model that approximates any distribution of graphs with minimal assumptions about their structure. GraphRNN learns to generate graphs by training on a representative set of graphs and decomposes the graph generation process into a sequence of node and edge formations, conditioned on the graph structure generated so far.

https://arxiv.org/abs/1804.01882v2 Hyperbolic Entailment Cones for Learning Hierarchical Embeddings

We view hierarchical relations as partial orders defined using a family of nested geodesically convex cones. We prove that these entailment cones admit an optimal shape with a closed-form expression in both the Euclidean and hyperbolic spaces. Moreover, they canonically define the embedding learning process. Experiments show significant improvements of our method over strong recent baselines both in terms of representational capacity and generalization. https://github.com/dalab/hyperbolic_cones

https://arxiv.org/abs/1804.00823v2 Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

In this work, we present a general end-to-end approach that maps an input graph to a sequence of vectors and uses an attention-based LSTM to decode the target sequence from these vectors. Specifically, to address the inevitable information loss in data conversion, we introduce a novel graph-to-sequence neural network model that follows the encoder-decoder architecture. Our method first uses an improved graph-based neural network to generate node and graph embeddings, with a novel aggregation strategy that incorporates edge-direction information into the node embeddings. We also propose an attention-based mechanism that aligns node embeddings with the decoding sequence to better cope with large graphs. Experimental results on the bAbI, Shortest Path, and Natural Language Generation tasks demonstrate that our model achieves state-of-the-art performance and significantly outperforms other baselines. We also show that, with the proposed aggregation strategy, our model is able to quickly converge to good performance.

https://arxiv.org/abs/1805.02682v1 Predicting Graph Categories from Structural Properties

In this work, we investigated whether the domain or generative process of a complex network can be accurately predicted using only a handful of simple graph features. Our results indicate that networks drawn from different domains (and network models) are trivial to distinguish using only a few graph features. In particular, we achieve 96.6% accuracy using a simple random forest model to predict the domain and/or generative process governing the formation of the network.
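
A hedged reconstruction of this kind of experiment in sklearn: compute a handful of simple graph features and fit a random forest to separate two generative processes. The feature set, graph sizes, and model settings below are our choices for illustration, not the paper's.

```python
# Predicting the generative process of a graph from simple features.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def features(g):
    degs = [d for _, d in g.degree()]
    return [g.number_of_nodes(), nx.density(g),
            nx.average_clustering(g), np.mean(degs), np.max(degs)]

graphs, labels = [], []
for seed in range(30):
    graphs.append(nx.erdos_renyi_graph(60, 0.1, seed=seed));   labels.append(0)
    graphs.append(nx.barabasi_albert_graph(60, 3, seed=seed)); labels.append(1)

X = np.array([features(g) for g in graphs])
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[:40], labels[:40])
print("held-out accuracy:", clf.score(X[40:], labels[40:]))
```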

https://www.arxiv-vanity.com/papers/1804.00823/ Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks

https://arxiv.org/abs/1806.09835v1 Graph-to-Sequence Learning using Gated Graph Neural Networks

Our architecture couples the recently proposed Gated Graph Neural Networks with an input transformation that allows nodes and edges to have their own hidden representations, while tackling the parameter explosion problem present in previous work.

https://arxiv.org/abs/1808.03253v1 Counterfactual Normalization: Proactively Addressing Dataset Shift and Improving Reliability Using Causal Mechanisms

https://arxiv.org/abs/1808.06099 Multi-dimensional Graph Convolutional Networks

In this paper, we study the problem of graph convolutional networks for multi-dimensional graphs and propose a multi-dimensional convolutional neural network model mGCN aiming to capture rich information in learning node-level representations for multi-dimensional graphs.

https://arxiv.org/pdf/1806.08804.pdf Hierarchical Graph Representation Learning with Differentiable Pooling

Current GNN methods are inherently flat and do not learn hierarchical representations of graphs, a limitation that is especially problematic for the task of graph classification, where the goal is to predict the label associated with an entire graph. Here we propose DIFFPOOL, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion. DIFFPOOL learns a differentiable soft cluster assignment for nodes at each layer of a deep GNN, mapping nodes to a set of clusters, which then form the coarsened input for the next GNN layer.
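
The coarsening step can be sketched directly from that description: a soft assignment matrix S maps n nodes to k clusters, pooling both features and adjacency for the next layer. The "GNNs" below are single linear message-passing layers purely for illustration.

```python
# Toy DIFFPOOL coarsening step.
import numpy as np

def softmax(Z):
    e = np.exp(Z - Z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric toy adjacency
X = rng.normal(size=(n, d))

Z = np.tanh(A @ X @ rng.normal(size=(d, d)))        # embedding "GNN"
S = softmax(A @ X @ rng.normal(size=(d, k)))        # soft assignment "GNN"
X_coarse = S.T @ Z                                  # pooled node features
A_coarse = S.T @ A @ S                              # coarsened adjacency
```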

https://openreview.net/forum?id=H1ERcs09KQ Hierarchically Clustered Representation Learning

https://arxiv.org/pdf/1809.10341v1.pdf Deep Graph Infomax

We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs—both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to graph representation learning, DGI does not rely on random walks, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.
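
A toy sketch of that objective as described: a bilinear discriminator scores (patch, summary) pairs from the real graph against pairs built from a corruption (here, row-shuffled features). The one-layer encoder and all sizes are illustrative assumptions.

```python
# DGI-style objective: real vs. corrupted (patch, summary) pairs.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d, h = 8, 5, 6
A = ((rng.random((n, n)) < 0.3) | np.eye(n, dtype=bool)).astype(float)
X = rng.normal(size=(n, d))
W, Wd = rng.normal(size=(d, h)), rng.normal(size=(h, h))

H_real = np.tanh(A @ X @ W)                   # patch representations
H_fake = np.tanh(A @ rng.permutation(X) @ W)  # corrupted graph (shuffled X)
s = sigmoid(H_real.mean(axis=0))              # high-level graph summary
loss = -np.mean(np.log(sigmoid(H_real @ Wd @ s)) +
                np.log(1 - sigmoid(H_fake @ Wd @ s)))
```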

https://openreview.net/forum?id=S1xiOjC9F7 Graph Matching Networks for Learning the Similarity of Graph Structured Objects

We propose a novel Graph Matching Network model that, given a pair of graphs as input, computes a similarity score between them by jointly reasoning over the pair through a new cross-graph attention-based matching mechanism.

https://openreview.net/forum?id=Byl8BnRcYm Capsules Graph Neural Network

When applying node embeddings learned from a GNN to generate graph embeddings, the scalar node representation typically used in GNNs may not suffice to preserve the node/graph relationships, resulting in sub-optimal graph embeddings.

By extracting node features in the form of capsules, a routing mechanism can be utilized to capture important statistical information at the graph level. As a result, our model generates multiple embeddings for each graph to capture significant graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs of various sizes, which also enables the model to focus on critical parts of the graphs.

https://openreview.net/forum?id=ryepUj0qtX Conditional Network Embeddings

https://arxiv.org/abs/1810.00826v1 How Powerful are Graph Neural Networks?

Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. An interesting direction for future work is to go beyond the neighborhood aggregation (or message passing) framework in order to pursue even more powerful architectures for learning with graphs.
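
The provably most expressive aggregator identified here is an injective sum over neighbors (rather than a mean or max, which can collapse distinct multisets) followed by an MLP, matching the Weisfeiler-Lehman test; a minimal sketch of that update, with epsilon and all sizes illustrative:

```python
# GIN-style layer: sum aggregation followed by an MLP.
import numpy as np

def gin_layer(A, H, W1, W2, eps=0.0):
    msg = (1.0 + eps) * H + A @ H          # injective sum, not mean/max
    return np.maximum(msg @ W1, 0.0) @ W2  # two-layer MLP

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
H = np.eye(3)
out = gin_layer(A, H, rng.normal(size=(3, 8)), rng.normal(size=(8, 8)))
```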