https://arxiv.org/pdf/1605.06640v2.pdf Programming with a Differentiable Forth Interpreter

https://arxiv.org/pdf/1710.04157v1.pdf Neural Program Meta-Induction

In this work, we have contrasted two techniques for using cross-task knowledge sharing to improve neural program induction, referred to as adapted program induction and meta program induction. Both techniques improve accuracy on a new task by using models that were trained on related tasks from the same family. However, adapted induction uses a transfer-learning-style approach, while meta induction uses a k-shot-learning-style approach.
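
As a rough illustration of the two styles, the sketch below contrasts fine-tuning a pretrained model (adapted induction) with conditioning a meta-trained model on k support examples (meta induction). It is a minimal sketch assuming a PyTorch-style model; the function and data names are placeholders, not the paper's architecture.

<code python>
# Hypothetical contrast of the two knowledge-sharing styles described
# above; model, optimiser, and data names are illustrative only.
import torch
import torch.nn as nn

def adapted_induction(pretrained: nn.Module, new_task_data, steps=100):
    """Transfer-learning style: fine-tune a model pretrained on
    related tasks, using the new task's I/O examples."""
    opt = torch.optim.Adam(pretrained.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        for x, y in new_task_data:
            opt.zero_grad()
            loss_fn(pretrained(x), y).backward()
            opt.step()
    return pretrained  # weights are now specialised to the new task

def meta_induction(meta_model: nn.Module, support_examples, query_x):
    """k-shot style: no weight updates; a meta-trained model conditions
    directly on the k support I/O examples."""
    ctx = torch.cat([torch.cat([x, y]) for x, y in support_examples])
    return meta_model(torch.cat([ctx, query_x]))
</code>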

https://arxiv.org/pdf/1712.08290v1.pdf CSGNet: Neural Shape Parser for Constructive Solid Geometry

We present a neural architecture that takes as input a 2D or 3D shape and induces a program to generate it. The instructions in our program are based on constructive solid geometry principles, i.e., a set of boolean operations on shape primitives defined recursively. Bottom-up techniques for this task that rely on primitive detection are inherently slow, since the search space over possible primitive combinations is large. In contrast, our model uses a recurrent neural network conditioned on the input shape to produce a sequence of instructions in a top-down manner and is significantly faster. It is also more effective as a shape detector than existing state-of-the-art detection techniques. We also demonstrate that our network can be trained on novel datasets without ground-truth program annotations through policy gradient techniques.
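
The executor below is a minimal sketch of this idea: a CSG program is a recursive tree of boolean operations over primitive occupancy masks. The toy grammar (union/intersect/subtract over circles and squares on a 64x64 grid) is invented for illustration and is not CSGNet's actual instruction set.

<code python>
# Toy CSG executor: programs are nested (op, left, right) trees whose
# leaves are primitive occupancy masks on a 2D grid.
import numpy as np

N = 64
ys, xs = np.mgrid[0:N, 0:N]

def circle(cx, cy, r):
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2

def square(cx, cy, half):
    return (np.abs(xs - cx) <= half) & (np.abs(ys - cy) <= half)

def execute(prog):
    """Recursively evaluate boolean combinations of primitives."""
    if isinstance(prog, np.ndarray):   # leaf: a primitive mask
        return prog
    op, left, right = prog
    a, b = execute(left), execute(right)
    if op == "union":     return a | b
    if op == "intersect": return a & b
    if op == "subtract":  return a & ~b
    raise ValueError(op)

# A square with a circular hole, unioned with a small circle.
shape = execute(("union",
                 ("subtract", square(32, 32, 20), circle(32, 32, 10)),
                 circle(8, 8, 5)))
print(shape.sum(), "occupied cells")
</code>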

https://arxiv.org/abs/1707.09627 Learning to Infer Graphics Programs from Hand-Drawn Images

We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, and simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network, measure similarity between drawings via their shared high-level geometric structure, and extrapolate drawings. Taken together, these results are a step towards agents that induce useful, human-readable programs from perceptual input.
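
A toy version of the trace-to-program step might look like the following: when the proposed primitives form a regular progression, the synthesizer emits a loop rather than literal commands. The trace format and the single rewrite rule are hypothetical simplifications of the paper's synthesis component.

<code python>
# Hypothetical trace-to-program compression: detect a constant offset
# between consecutive primitives and emit a for-loop.

def synthesize(trace):
    if len(trace) >= 3:
        name0, x0, y0 = trace[0]
        dx, dy = trace[1][1] - x0, trace[1][2] - y0
        regular = all(t == (name0, x0 + i * dx, y0 + i * dy)
                      for i, t in enumerate(trace))
        if regular:
            return (f"for i in range({len(trace)}): "
                    f"{name0}({x0} + i*{dx}, {y0} + i*{dy})")
    return "; ".join(f"{n}({x}, {y})" for n, x, y in trace)

trace = [("circle", 10, 20), ("circle", 20, 20), ("circle", 30, 20)]
print(synthesize(trace))
# -> for i in range(3): circle(10 + i*10, 20 + i*0)
</code>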

https://arxiv.org/abs/1801.03526 Neural Program Synthesis with Priority Queue Training

We consider the task of program synthesis in the presence of a reward function over the output of programs, where the goal is to find programs with maximal rewards. We employ an iterative optimization scheme, where we train an RNN on a dataset of the K best programs from a priority queue of the generated programs so far. Then, we synthesize new programs and add them to the priority queue by sampling from the RNN.
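
The training loop reduces to a simple schematic: keep the K highest-reward programs in a priority queue, fit the sampler to them, and refill the queue from the sampler. In this sketch a random mutator of queue members stands in for the RNN, and the reward is a toy function.

<code python>
# Schematic priority queue training (PQT); the "RNN" is replaced by a
# mutator over the current best programs, purely for illustration.
import heapq
import random

K = 10

def reward(program):           # toy reward: count of 'a' characters
    return program.count("a")

def sample_program(top_programs):
    """Stand-in for sampling from an RNN fitted to the queue: mutate
    one of the current best programs, or start from scratch."""
    if top_programs and random.random() < 0.8:
        base = list(random.choice(top_programs))
        base[random.randrange(len(base))] = random.choice("abc")
        return "".join(base)
    return "".join(random.choice("abc") for _ in range(8))

queue = []                     # min-heap of (reward, program) pairs
for step in range(500):
    prog = sample_program([p for _, p in queue])
    item = (reward(prog), prog)
    if len(queue) < K:
        heapq.heappush(queue, item)
    else:
        heapq.heappushpop(queue, item)   # keep only the K best

print(max(queue))              # best (reward, program) found
</code>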

http://proceedings.mlr.press/v70/bosnjak17a.html Programming with a Differentiable Forth Interpreter

The interpreter enables programmers to write program sketches with slots that can be filled with behaviour trained from program input-output data. We can optimise this behaviour directly through gradient descent techniques on user-specified objectives, and also integrate the program into any larger neural computation graph. We show empirically that our interpreter is able to effectively leverage different levels of prior program structure and learn complex behaviours such as sequence sorting and addition.
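
A minimal PyTorch sketch of the slot-filling idea, assuming the simplest possible "sketch": the program structure is fixed, one operation is left as a trainable soft choice over candidate ops, and gradient descent on input-output data fills the slot. This mirrors the mechanism, not the Forth machine itself.

<code python>
# A fixed program with one differentiable slot: a softmax mixture over
# candidate operations, trained end-to-end on I/O examples.
import torch

candidates = [torch.add, torch.sub, torch.mul, torch.maximum]
logits = torch.zeros(len(candidates), requires_grad=True)

def sketch(x, y):
    """Fixed structure: double x, then apply the SLOT op to (2x, y)."""
    w = torch.softmax(logits, dim=0)
    doubled = 2 * x
    return sum(wi * op(doubled, y) for wi, op in zip(w, candidates))

# Target behaviour the slot should learn: (2x) * y
xs, ys = torch.randn(256), torch.randn(256)
target = 2 * xs * ys
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    ((sketch(xs, ys) - target) ** 2).mean().backward()
    opt.step()

print(candidates[int(logits.argmax())])  # should converge to torch.mul
</code>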

https://arxiv.org/abs/1802.02696v1 Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction

Combinator abstraction dramatically reduces the number and complexity of programs that need to be interpreted by the core controller of CNPI, while still allowing the CNPI to represent and interpret arbitrarily complex programs through the collaboration of the core with the other components. We propose a small set of four combinators to capture the most pervasive programming patterns. Due to the finiteness and simplicity of this combinator set, and the offloading of some of the burden of interpretation from the core, we are able to construct a CNPI that is universal with respect to the set of all combinatorizable programs, which is adequate for solving most algorithmic tasks. Moreover, besides supervised training on execution traces, CNPI can be trained by policy gradient reinforcement learning with appropriately designed curricula.
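
The flavour of combinator abstraction can be shown with ordinary higher-order functions. The four combinators below (seq, cond, repeat_until, linrec) are an illustrative guess at "the most pervasive programming patterns", not the paper's exact set.

<code python>
# Illustrative combinators: each takes sub-programs and returns a new
# program (a function from state to state).

def seq(*fs):
    """Run sub-programs in order."""
    def run(state):
        for f in fs:
            state = f(state)
        return state
    return run

def cond(pred, then_f, else_f):
    """Branch on a predicate over the state."""
    return lambda state: then_f(state) if pred(state) else else_f(state)

def repeat_until(pred, body):
    """Loop a sub-program until the predicate holds."""
    def run(state):
        while not pred(state):
            state = body(state)
        return state
    return run

def linrec(pred, base, step):
    """Linear recursion: the step receives the recursive call."""
    def run(state):
        return base(state) if pred(state) else step(state, run)
    return run

print(repeat_until(lambda n: n == 0, lambda n: n - 1)(5))  # -> 0
fact = linrec(lambda n: n == 0, lambda n: 1,
              lambda n, rec: n * rec(n - 1))
print(fact(5))                                             # -> 120
</code>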

https://arxiv.org/abs/1803.09473v1 code2vec: Learning Distributed Representations of Code

We present a neural model for representing snippets of code as continuous distributed vectors. The main idea is to represent code as a collection of paths in its abstract syntax tree, and aggregate these paths, in a smart and scalable way, into a single fixed-length code vector, which can be used to predict semantic properties of the snippet. We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 14M methods. We show that code vectors trained on this dataset can predict method names from files that were completely unobserved during training. Furthermore, we show that our model learns useful method name vectors that capture semantic similarities, combinations, and analogies. Compared to previous techniques over the same dataset, our approach obtains a relative improvement of over 75%, being the first to successfully predict method names based on a large, cross-project corpus.
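
The aggregation step can be sketched in a few lines: each path context gets an embedding, and soft attention pools the bag into one fixed-length code vector. The embeddings below are random placeholders for the learned ones, and the path strings are made up.

<code python>
# Soft-attention pooling of path-context embeddings into a single
# code vector, in the spirit of code2vec's aggregation.
import numpy as np

rng = np.random.default_rng(0)
d = 8
contexts = ["x UP_Assign DOWN_Call fit",    # (leaf, AST path, leaf)
            "x UP_Assign DOWN_Name y",
            "y UP_Return DOWN_Name y"]
E = {c: rng.normal(size=d) for c in contexts}  # context embeddings
a = rng.normal(size=d)                         # global attention vector

def code_vector(ctxs):
    H = np.tanh(np.stack([E[c] for c in ctxs]))  # combined contexts
    scores = H @ a
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax attention
    return weights @ H                           # one vector per snippet

print(code_vector(contexts).shape)  # (8,) -- usable for name prediction
</code>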

https://deepmind.com/blog/learning-to-generate-images/ Learning to write programs that generate images

https://www.microsoft.com/en-us/research/publication/neural-guided-deductive-search-real-time-program-synthesis-examples/ Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples

In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating it on real-world customer scenarios, synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
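
Schematically, deduction enumerates branches that satisfy the spec by construction, and a learned ranker decides which branch to expand first. In the sketch below, a toy string-transformation DSL and a hand-written prior stand in for PROSE-style operators and the paper's neural scorer.

<code python>
# Best-first deductive search guided by a score function; the DSL
# (split on a separator, take a part) and the scorer are toy stand-ins.
import heapq

examples = [("hello world", "hello"), ("foo bar", "foo")]

def candidates():
    """Deductive step: enumerate programs of the form split+take."""
    for sep in [" ", "-", "_"]:
        for idx in [0, 1]:
            yield ("split_take", sep, idx)

def run(prog, s):
    _, sep, idx = prog
    parts = s.split(sep)
    return parts[idx] if idx < len(parts) else None

def score(prog):
    """Stand-in for the neural ranker: prefer common separators."""
    return {" ": 0.9, "-": 0.5, "_": 0.4}[prog[1]]

frontier = [(-score(p), p) for p in candidates()]
heapq.heapify(frontier)                  # best-scored branches first
while frontier:
    _, prog = heapq.heappop(frontier)
    if all(run(prog, x) == y for x, y in examples):
        print("found:", prog)            # consistent with all examples
        break
</code>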

https://arxiv.org/abs/1804.00218v1 Synthesis of Differentiable Functional Programs for Lifelong Learning