https://arxiv.org/abs/1807.08058v1 Learning Heuristics for Automated Reasoning through Deep Reinforcement Learning
  
We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, which can already solve formulas of impressive size, up to hundreds of thousands of variables. The main challenge is to find a representation that lends itself to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.
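
A rough sketch of the general idea of replacing a handwritten branching heuristic with a learned one. The per-variable features, the linear policy, and all names below are hypothetical stand-ins, not the paper's representation or training setup:

<code python>
# Minimal sketch: a learned policy scores unassigned variables and the QBF
# search solver branches on the argmax, in place of a handwritten heuristic.
# Features and weights here are illustrative; in the paper both the
# representation and the policy are learned via deep RL.
import numpy as np

rng = np.random.default_rng(0)

def variable_features(var, state):
    """Hypothetical per-variable features, e.g. occurrence counts and search depth."""
    return np.array([state["pos_occ"][var], state["neg_occ"][var], state["depth"]], float)

class LearnedHeuristic:
    def __init__(self, n_features=3):
        self.w = rng.normal(scale=0.1, size=n_features)  # weights an RL loop would train

    def score(self, var, state):
        return float(self.w @ variable_features(var, state))

    def pick_branch_variable(self, unassigned, state):
        # The solver queries the policy instead of a handwritten rule.
        return max(unassigned, key=lambda v: self.score(v, state))

state = {"pos_occ": {1: 4, 2: 1}, "neg_occ": {1: 0, 2: 5}, "depth": 3}
h = LearnedHeuristic()
print(h.pick_branch_variable([1, 2], state))
</code>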

https://arxiv.org/abs/1808.02822v1 Backprop Evolution

https://arxiv.org/abs/1808.06068 SeVeN: Augmenting Word Embeddings with Unsupervised Relation Vectors

https://arxiv.org/abs/1808.07980 Ontology Reasoning with Deep Neural Networks

https://arxiv.org/abs/1808.09333v1 Bridging Knowledge Gaps in Neural Entailment via Symbolic Models

We focus on filling these knowledge gaps in the Science Entailment task by leveraging an external structured knowledge base (KB) of science facts. Our new architecture combines standard neural entailment models with a knowledge lookup module. To facilitate this lookup, we propose a fact-level decomposition of the hypothesis and verify the resulting sub-facts against both the textual premise and the structured KB. Our model, NSnet, learns to aggregate predictions from these heterogeneous data formats.
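
A minimal sketch of that pipeline's data flow, with hypothetical stand-ins for the decomposition, the neural entailment scorer, the KB lookup, and the aggregation (NSnet learns all of these; the stubs below only illustrate the structure):

<code python>
# Sketch of the NSnet idea: decompose the hypothesis into sub-facts, score
# each against both the textual premise and a structured KB, and aggregate.
def decompose(hypothesis):
    # Hypothetical fact-level decomposition: one sub-fact per clause.
    return [s.strip() for s in hypothesis.split(";")]

def neural_entailment(premise, sub_fact):
    # Stand-in for a trained neural entailment model: crude token overlap.
    p, f = set(premise.lower().split()), set(sub_fact.lower().split())
    return len(p & f) / max(len(f), 1)

def kb_lookup(kb, sub_fact):
    # Stand-in for the knowledge lookup module: exact match against KB facts.
    return 1.0 if sub_fact.lower() in kb else 0.0

def nsnet_score(premise, hypothesis, kb):
    # NSnet learns to aggregate these heterogeneous predictions; here we just
    # take the max of the two sources and average over sub-facts.
    subs = decompose(hypothesis)
    return sum(max(neural_entailment(premise, s), kb_lookup(kb, s)) for s in subs) / len(subs)

kb = {"water is a liquid"}
print(nsnet_score("ice melts into water", "ice melts; water is a liquid", kb))
</code>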

https://arxiv.org/pdf/1806.01445v2.pdf Embedding Logical Queries on Knowledge Graphs

https://github.com/williamleif/graphqembed

https://arxiv.org/abs/1806.01822v2 Relational recurrent neural networks

We address these deficits by using a new memory module, a Relational Memory Core (RMC), which employs multi-head dot product attention to allow memories to interact. Finally, we test the RMC on a suite of tasks that may profit from more capable relational reasoning across sequential information, and show large gains in RL domains (e.g. Mini PacMan), program evaluation, and language modeling, achieving state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord datasets.

https://github.com/deepmind/sonnet/blob/master/sonnet/python/modules/relational_memory.py
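
A minimal NumPy sketch of the central mechanism, multi-head dot product attention applied across a matrix of memory slots so the memories can interact. Shapes, head count, and the random projections are illustrative; the Sonnet module linked above is the actual gated, recurrent implementation:

<code python>
# Multi-head dot-product attention over memory slots, the core RMC mechanism.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(memory, n_heads, rng):
    n_slots, d = memory.shape
    d_head = d // n_heads
    out = np.zeros_like(memory)
    for h in range(n_heads):
        # Per-head projections (randomly initialised here; learned in practice).
        Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d_head)) for _ in range(3))
        q, k, v = memory @ Wq, memory @ Wk, memory @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_head))   # slot-to-slot attention weights
        out[:, h*d_head:(h+1)*d_head] = attn @ v    # each slot attends to all slots
    return out

rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 16))   # 4 memory slots, 16 dims each
updated = multi_head_attention(memory, n_heads=4, rng=rng)
print(updated.shape)                # (4, 16)
</code>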

https://slideslive.com/38909774/embedding-symbolic-computation-within-neural-computation-for-ai-and-nlp

https://arxiv.org/abs/1809.11044 Relational Forward Models for Multi-Agent Learning

https://arxiv.org/abs/1811.12143 Learning to Reason with Third-Order Tensor Products

We combine Recurrent Neural Networks with Tensor Product Representations to learn combinatorial representations of sequential data. This improves symbolic interpretation and systematic generalisation. Our architecture is trained end-to-end through gradient descent on a variety of simple natural language reasoning tasks, significantly outperforming the latest state-of-the-art models in single-task and all-tasks settings. We also augment a subset of the data such that training and test data exhibit large systematic differences, and show that our approach generalises better than the previous state-of-the-art.
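
A minimal sketch of the Tensor Product Representation binding/unbinding operation that underlies this work, using a second-order (matrix) binding with illustrative random fillers and hand-picked orthonormal roles; the paper learns these end-to-end and uses third-order (three-way) bindings:

<code python>
# TPR binding: bind fillers to roles with outer products, superpose the
# bindings, and unbind by contracting with a role vector.
import numpy as np

rng = np.random.default_rng(0)
d = 8
fillers = {"cat": rng.normal(size=d), "mat": rng.normal(size=d)}
roles = {"subject": np.eye(2)[0], "object": np.eye(2)[1]}  # orthonormal roles

# Bind each filler to its role and superpose the results into one tensor.
T = np.outer(fillers["cat"], roles["subject"]) + np.outer(fillers["mat"], roles["object"])

# Unbind: with orthonormal roles, T @ role recovers the bound filler exactly.
recovered = T @ roles["subject"]
print(np.allclose(recovered, fillers["cat"]))  # True
</code>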