Learning Through Dialogue Interactions
Dialog Markets
Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data
Policy Networks with Two-Stage Training for Dialogue Systems
A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue Systems
Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning
A Deep Reinforcement Learning Chatbot
Deep Reinforcement Learning for Conversational AI
Deep Reinforcement Learning in Dialogue Systems
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments
End-to-End Optimization of Task-Oriented Dialogue Model with Deep Reinforcement Learning
Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time
Neural Relational Inference for Interacting Systems

In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.

Emergent Communication through Negotiation
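The NRI decoder described in the abstract above can be illustrated with a toy sketch: given a sampled latent interaction graph, each prediction step passes messages along the inferred edges and applies a residual state update. All shapes and weight names here are hypothetical stand-ins for the paper's learned neural message and update functions:

```python
import numpy as np

def nri_decoder_step(x, edge_types, msg_weight, upd_weight):
    """One toy message-passing step of an NRI-style decoder.

    x:          (N, D) current states of N interacting particles
    edge_types: (N, N) 0/1 matrix, a sampled latent interaction graph
    msg_weight: (2*D, D) linear message function (hypothetical)
    upd_weight: (2*D, D) linear update function (hypothetical)
    Returns the predicted next states, shape (N, D).
    """
    n, d = x.shape
    agg = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            if i != j and edge_types[i, j]:
                # message from sender j to receiver i, conditioned on both states
                pair = np.concatenate([x[j], x[i]])
                agg[i] += np.tanh(pair @ msg_weight)
    # residual update: next state = current state plus a learned correction
    return x + np.tanh(np.concatenate([x, agg], axis=1) @ upd_weight)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
edges = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]])  # particle 2 is isolated
x_next = nri_decoder_step(x, edges, rng.normal(size=(8, 4)), rng.normal(size=(8, 4)))
```

In the actual model the edge types are sampled with a Gumbel-softmax from an encoder over full trajectories; this sketch only shows how a fixed sampled graph conditions the dynamics.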

We introduce two communication protocols – one grounded in the semantics of the game, and one which is a priori ungrounded and is a form of cheap talk. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded channel. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation.

Learning Semantic Textual Similarity from Conversations

Our method trains an unsupervised model to predict conversational input-response pairs. The resulting sentence embeddings perform well on the semantic textual similarity (STS) benchmark and SemEval 2017's Community Question Answering (CQA) question similarity subtask. Performance is further improved by introducing multitask training combining the conversational input-response prediction task and a natural language inference task.

Interactive Language Acquisition with One-shot Visual Concept Learning through a Conversational Game
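The input-response prediction objective behind "Learning Semantic Textual Similarity from Conversations" above can be sketched with a toy dual encoder: embed the conversational input and each candidate response, then score candidates by dot product. The mean-of-word-vectors encoder here is a hypothetical stand-in for the paper's learned encoder:

```python
import numpy as np

def embed(sentence, vocab, word_vecs):
    """Toy sentence encoder: mean of word vectors (a stand-in for the
    encoder trained on conversational input-response pairs)."""
    idxs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return word_vecs[idxs].mean(axis=0) if idxs else np.zeros(word_vecs.shape[1])

def response_scores(context, candidates, vocab, word_vecs):
    """Score candidate responses by dot product with the context embedding,
    mirroring the input-response prediction training objective."""
    c = embed(context, vocab, word_vecs)
    scores = np.array([c @ embed(r, vocab, word_vecs) for r in candidates])
    e = np.exp(scores - scores.max())
    return e / e.sum()  # softmax over the candidate set

rng = np.random.default_rng(1)
vocab = {w: i for i, w in enumerate("how are you i am fine what time is it".split())}
vecs = rng.normal(size=(len(vocab), 8))
probs = response_scores("how are you", ["i am fine", "what time is it"], vocab, vecs)
```

The same encoder, once trained on this objective, is what produces the sentence embeddings evaluated on STS and CQA similarity.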

We highlight the perspective that conversational interaction serves as a natural interface both for language learning and for novel knowledge acquisition and propose a joint imitation and reinforcement approach for grounded language learning through an interactive conversational game. The agent trained with this approach is able to actively acquire information by asking questions about novel objects and use the just-learned knowledge in subsequent conversations in a one-shot fashion. Comparisons with other methods verify the effectiveness of the proposed approach.

Dialogue Act Recognition via CRF-Attentive Structured Network
Zero-Shot Adaptive Transfer for Conversational Language Understanding

We introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes the slot description for transferring reusable concepts across domains, and enjoys efficient training without any explicit concept alignments.

Interpretation of Natural Language Rules in Conversational Machine Reading
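The description-driven transfer in the Zero-Shot Adaptive Transfer abstract above can be sketched as follows: each slot is represented only by an embedding of its natural-language description, and tokens are tagged by similarity against those description embeddings, so a new domain's slot needs only a written description rather than aligned labeled data. The hash-seeded toy embedding and the threshold are illustrative assumptions, not the paper's model:

```python
import numpy as np

def toy_embed(text):
    """Deterministic toy embedding: mean of per-word hash-seeded vectors
    (a stand-in for a learned text encoder)."""
    vecs = []
    for w in text.lower().split():
        seed = int.from_bytes(w.encode(), "little") % (2 ** 32)
        vecs.append(np.random.default_rng(seed).normal(size=16))
    return np.mean(vecs, axis=0) if vecs else np.zeros(16)

def zero_shot_slot_tag(tokens, slot_descriptions, embed, threshold=0.3):
    """Tag each token with the best-matching slot, or 'O' for none.

    Slots are defined purely by their descriptions, so no explicit
    concept alignment between domains is required.
    """
    slot_vecs = {s: embed(d) for s, d in slot_descriptions.items()}
    tags = []
    for tok in tokens:
        t = embed(tok)
        best, best_sim = "O", threshold
        for slot, v in slot_vecs.items():
            sim = t @ v / (np.linalg.norm(t) * np.linalg.norm(v) + 1e-9)
            if sim > best_sim:
                best, best_sim = slot, sim
        tags.append(best)
    return tags

tokens = "fly to boston tomorrow".split()
slots = {"city": "name of a city such as boston",
         "date": "a day such as tomorrow"}
tags = zero_shot_slot_tag(tokens, slots, toy_embed)
```

With a trained encoder in place of `toy_embed`, the similarity scores become meaningful and the same loop transfers to unseen domains unchanged.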

In this paper, we formalise this task and develop a crowd-sourcing strategy to collect 32k task instances based on real-world rules and crowd-generated questions and scenarios. We analyse the challenges of this task and assess its difficulty by evaluating the performance of rule-based and machine-learning baselines. We observe promising results when no background knowledge is necessary, and substantial room for improvement whenever background knowledge is needed.

Training Millions of Personalized Dialogue Agents

However, the dataset used in Zhang et al. (2018) is synthetic and of limited size as it contains around 1k different personas. In this paper we introduce a new dataset providing 5 million personas and 700 million persona-based dialogues. Our experiments show that, at this scale, training using personas still improves the performance of end-to-end systems. In addition, we show that other tasks benefit from the wide coverage of our dataset by fine-tuning our model on the data from Zhang et al. (2018) and achieving state-of-the-art results.

Decoupling Strategy and Generation in Negotiation Dialogues

In this paper, we propose a modular approach based on coarse dialogue acts (e.g., propose(price=50)) that decouples strategy and generation. We show that we can flexibly set the strategy using supervised learning, reinforcement learning, or domain-specific knowledge without degeneracy, while our retrieval-based generation can maintain context-awareness and produce diverse utterances. We test our approach on the recently proposed DEALORNODEAL game, and we also collect a richer dataset based on real items on Craigslist. Human evaluation shows that our systems achieve higher task success rate and more human-like negotiation behavior than previous approaches.

Contextual Topic Modeling For Dialog Systems
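The coarse dialogue acts from "Decoupling Strategy and Generation in Negotiation Dialogues" above, e.g. propose(price=50), can be sketched as a small structured representation that a strategy module emits and a separate generator renders as text. The rule-based strategy below is a hypothetical illustration of the "domain-specific knowledge" option, not the paper's learned policy:

```python
from dataclasses import dataclass, field

@dataclass
class DialogueAct:
    """A coarse dialogue act such as propose(price=50): the strategy
    module reasons over these, and generation is handled separately."""
    name: str
    args: dict = field(default_factory=dict)

    def __str__(self):
        inner = ", ".join(f"{k}={v}" for k, v in self.args.items())
        return f"{self.name}({inner})"

def rule_based_strategy(listing_price, their_offer):
    """Hypothetical seller strategy: accept offers within 10% of the
    listing price, otherwise counter halfway between the two prices."""
    if their_offer >= 0.9 * listing_price:
        return DialogueAct("accept")
    counter = (listing_price + their_offer) // 2
    return DialogueAct("propose", {"price": counter})

act = rule_based_strategy(listing_price=100, their_offer=50)  # propose(price=75)
```

Because the strategy only manipulates acts, it can be swapped for a supervised or reinforcement-learned policy while the retrieval-based generator stays fixed.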

Our work on detecting conversation topics and keywords can be used to guide chatbots towards coherent dialog.

FlowQA: Grasping Flow in History for Conversational Machine Comprehension