http://nlp.stanford.edu/pubs/bowman2016spinn.pdf A Fast Unified Model for Parsing and Sentence Understanding

We address these issues by introducing the Stack-augmented Parser-Interpreter Neural Network (SPINN), which combines parsing and interpretation within a single tree-sequence hybrid model by integrating tree-structured sentence interpretation into the linear sequential structure of a shift-reduce parser.
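
A minimal sketch of the control flow this describes (an illustration with toy, untrained weights, not the authors' implementation): SHIFT moves the next word vector from the buffer onto a stack, and REDUCE pops the top two stack entries and pushes their learned composition, so a binary parse is executed as a linear transition sequence.

```python
# Sketch of SPINN-style shift-reduce interpretation. The composition function
# here is a toy tanh layer with random weights; in SPINN it is a trained
# TreeLSTM-style composition.
import numpy as np

D = 8                                          # embedding / hidden size (arbitrary)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(D, 2 * D))     # untrained composition weights

def compose(left, right):
    """REDUCE: combine two child vectors into one parent vector."""
    return np.tanh(W @ np.concatenate([left, right]))

def spinn_encode(word_vectors, transitions):
    """Run SHIFT/REDUCE transitions; return the single vector left on the stack."""
    stack, buffer = [], list(word_vectors)
    for op in transitions:
        if op == "SHIFT":                      # move the next word onto the stack
            stack.append(buffer.pop(0))
        else:                                  # REDUCE: pop two subtrees, push their composition
            right, left = stack.pop(), stack.pop()
            stack.append(compose(left, right))
    assert len(stack) == 1 and not buffer
    return stack[0]

# "(w0 (w1 w2))" expressed as a transition sequence over a 3-word sentence.
words = [rng.normal(size=D) for _ in range(3)]
vec = spinn_encode(words, ["SHIFT", "SHIFT", "SHIFT", "REDUCE", "REDUCE"])
print(vec.shape)                               # (8,)
```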

https://arxiv.org/abs/1503.01007 Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets

In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks: algorithmically generated sequences that can only be learned by models with the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network coupled with a trainable memory.
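
The "trainable memory" is a stack whose push and pop operations are made continuous so the whole model stays differentiable. The sketch below (an illustration in the spirit of the paper, with assumed sizes and a hard-coded controller, not the authors' code) shows the core update: the new stack is a convex combination, weighted by the controller's action probabilities, of the PUSH, POP, and NO-OP versions of the old stack.

```python
# Soft stack update for a stack-augmented RNN: the controller emits a
# distribution over {PUSH, POP, NO-OP}, and the stack is updated as a
# probability-weighted mixture of the three hard updates.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stack_update(stack, action_logits, push_value):
    """stack: (depth,) array of stored scalars; returns the soft-updated stack."""
    a_push, a_pop, a_noop = softmax(action_logits)
    pushed = np.concatenate([[push_value], stack[:-1]])   # shift down, new value on top
    popped = np.concatenate([stack[1:], [0.0]])           # shift up, discard the top
    return a_push * pushed + a_pop * popped + a_noop * stack

# A near-one-hot PUSH followed by a near-one-hot POP roughly restores the stack.
stack = np.zeros(5)
stack = stack_update(stack, np.array([10.0, -10.0, -10.0]), push_value=1.0)  # ~PUSH 1.0
print(stack.round(3))   # top is ~1.0
stack = stack_update(stack, np.array([-10.0, 10.0, -10.0]), push_value=0.0)  # ~POP
print(stack.round(3))   # back to ~all zeros
```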

http://cs.stanford.edu/people/danqi/papers/emnlp2014.pdf A Fast and Accurate Dependency Parser using Neural Networks

http://arxiv.org/pdf/1506.02516v3.pdf Learning to Transduce with Unbounded Memory

https://iamtrask.github.io/2016/02/25/deepminds-neural-stack-machine/ DeepMind's Neural Stack Machine (blog post)

http://openreview.net/pdf?id=BkbY4psgg Making Neural Programming Architectures Generalize via Recursion

Recursion enables provably perfect generalization; to our knowledge, this is the first time verification has been applied to a neural network, providing provable guarantees about its behavior. We demonstrated that, via recursion, it is possible to achieve provably perfect generalization on programming tasks in a Neural Programming Architecture; we instantiated this in the Neural Programmer-Interpreter. In future work, we seek to implement more tasks with recursive structure. We also hope to decrease supervision, perhaps by training with only partial or non-recursive traces, and to integrate a notion of recursion into the models themselves by constructing novel Neural Programming Architectures.
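
A toy illustration of the underlying intuition (not the NPI model itself, and using a hypothetical trace format): in a recursive decomposition of grade-school addition, every call contributes a bounded-size step to the execution trace, so a model that learns the per-call behavior correctly can handle inputs of any length, which is what the recursive traces in the paper buy.

```python
# Recursive grade-school addition over digit lists (least-significant first),
# recording one fixed-size step per recursive call. The total trace grows with
# input length, but each call's contribution is constant-size.
def add_trace(a, b, carry=0, trace=None):
    if trace is None:
        trace = []
    if not a and not b and carry == 0:
        trace.append("RETURN")
        return [], trace
    da = a[0] if a else 0
    db = b[0] if b else 0
    s = da + db + carry
    trace.append(f"ADD1 {da}+{db}+{carry} -> write {s % 10}, carry {s // 10}")
    rest, _ = add_trace(a[1:], b[1:], s // 10, trace)
    return [s % 10] + rest, trace

digits, trace = add_trace([9, 9, 9], [1])   # 999 + 1
print(digits)                               # [0, 0, 0, 1]
print(trace)                                # four ADD1 steps and a RETURN
```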

https://arxiv.org/abs/1809.02836v1 Context-Free Transductions with Neural Stacks

This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Motivated by the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employing stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory.
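
For concreteness, the "intuitive stack-based strategy" for the string-reversal task is the classic pushdown-transducer one: push every input symbol, then pop to emit the output. The trained stack RNNs approximate a soft version of this discrete behavior; the snippet below is the discrete strategy itself, not the paper's model.

```python
# Hard stack strategy for string reversal: PUSH during the reading phase,
# POP during the output phase.
def reverse_with_stack(s):
    stack = []
    out = []
    for ch in s:          # reading phase: push each symbol
        stack.append(ch)
    while stack:          # output phase: pop to produce the reversal
        out.append(stack.pop())
    return "".join(out)

print(reverse_with_stack("abca"))   # "acba"
```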

https://dselsam.github.io/the-terpret-problem/ The TerpreT Problem