https://github.com/williamleif/graphqembed

https://arxiv.org/abs/1806.01822v2 Relational recurrent neural networks

The authors address the deficits of standard memory architectures at relational reasoning with a new memory module -- a Relational Memory Core (RMC) -- which employs multi-head dot product attention to allow memories to interact. They test the RMC on a suite of tasks that may profit from more capable relational reasoning across sequential information, and show large gains in RL domains (e.g. Mini PacMan), program evaluation, and language modeling, achieving state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord datasets. https://github.com/deepmind/sonnet/blob/master/sonnet/python/modules/relational_memory.py
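What follows is only a minimal sketch of the RMC's central mechanism as described above, not DeepMind's implementation: a fixed set of memory slots is updated with multi-head dot product attention, so every memory can attend to every other. Slot counts, widths, and the random stand-in weights are illustrative assumptions; the real module (see the Sonnet source linked above) also feeds inputs into the attention and gates the update LSTM-style, which is omitted here.

  # Sketch: memory slots interact via multi-head scaled dot-product attention.
  import numpy as np

  def softmax(x, axis=-1):
      e = np.exp(x - x.max(axis=axis, keepdims=True))
      return e / e.sum(axis=axis, keepdims=True)

  def multihead_memory_attention(memory, num_heads, rng):
      # memory: (num_slots, dim) array of memory slots; returns the same shape.
      num_slots, dim = memory.shape
      head_dim = dim // num_heads
      # Random projections stand in for learned query/key/value weights.
      Wq, Wk, Wv = (rng.standard_normal((dim, dim)) / np.sqrt(dim)
                    for _ in range(3))
      q = (memory @ Wq).reshape(num_slots, num_heads, head_dim)
      k = (memory @ Wk).reshape(num_slots, num_heads, head_dim)
      v = (memory @ Wv).reshape(num_slots, num_heads, head_dim)
      # Each slot queries all slots, per head: scores are (heads, slots, slots).
      scores = np.einsum('qhd,khd->hqk', q, k) / np.sqrt(head_dim)
      attended = np.einsum('hqk,khd->qhd', softmax(scores), v)
      return attended.reshape(num_slots, dim)

  rng = np.random.default_rng(0)
  memory = rng.standard_normal((8, 64))  # 8 memory slots of width 64
  updated = multihead_memory_attention(memory, num_heads=4, rng=rng)
  print(updated.shape)  # (8, 64): every slot is now a mixture of all slots
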
https://slideslive.com/38909774/embedding-symbolic-computation-within-neural-computation-for-ai-and-nlp Embedding Symbolic Computation within Neural Computation for AI and NLP (talk)

https://arxiv.org/abs/1809.11044 Relational Forward Models for Multi-Agent Learning

https://arxiv.org/abs/1811.12143 Learning to Reason with Third-Order Tensor Products
We combine Recurrent Neural Networks with Tensor Product Representations to learn combinatorial representations of sequential data. This improves symbolic interpretation and systematic generalisation. Our architecture is trained end-to-end through gradient descent on a variety of simple natural language reasoning tasks, significantly outperforming the latest state-of-the-art models in single-task and all-tasks settings. We also augment a subset of the data such that training and test data exhibit large systematic differences, and show that our approach generalises better than the previous state-of-the-art.
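The binding mechanism the paper builds on can be shown concretely: in a third-order Tensor Product Representation, a fact is stored as the outer product of three vectors (e.g. subject, relation, object) and retrieved by tensor contraction. Below is a toy sketch under illustrative assumptions; the one-hot vectors make unbinding exact here, whereas the paper learns dense representations end-to-end with an RNN.

  # Sketch: bind facts into a rank-3 tensor, recover them by contraction.
  import numpy as np

  dim = 4

  def onehot(i, n=dim):
      v = np.zeros(n)
      v[i] = 1.0
      return v

  # One-hot "embeddings" for entities and relations (orthogonal, so exact).
  mary, john, kitchen = onehot(0), onehot(1), onehot(2)
  went_to, is_in = onehot(0), onehot(1)

  # Bind two facts:  T = sum over facts of  subject (x) relation (x) object
  T = np.einsum('i,j,k->ijk', mary, went_to, kitchen)
  T += np.einsum('i,j,k->ijk', john, is_in, kitchen)

  # Unbind: contract T with a subject and a relation to recover the object.
  answer = np.einsum('ijk,i,j->k', T, mary, went_to)
  print(answer)  # [0. 0. 1. 0.] -- the 'kitchen' vector

With learned, non-orthogonal vectors the retrieval becomes approximate, which is one reason the paper trains the representations jointly with the recurrent controller.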