https://arxiv.org/abs/1803.05268 Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning
https://github.com/davidmascharka/tbd-nets https://towardsdatascience.com/transparent-reasoning-how-mit-builds-neural-networks-that-can-explain-themselves-3aea291cd9cc

https://arxiv.org/abs/1809.06309v1 Commonsense for Generative Multi-Hop Question Answering Tasks

https://arxiv.org/abs/1809.07291v1 https://github.com/NPoe/input-optimization-nlp

https://arxiv.org/pdf/1805.04833.pdf Hierarchical Neural Story Generation

https://openreview.net/pdf?id=rJGgFjA9FQ Explaining AlphaGo: Interpreting Contextual Effects in Neural Networks

https://arxiv.org/pdf/1804.09160.pdf No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling

https://arxiv.org/abs/1810.02909v1 On the Art and Science of Machine Learning Explanations

https://arxiv.org/abs/1810.03993v1 Model Cards for Model Reporting

https://arxiv.org/abs/1810.05680v1 Bottom-up Attention, Models of http://salicon.net/

https://github.com/arviz-devs/arviz Python package to plot and analyse samples from probabilistic models
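
A minimal sketch of how the package is typically used, assuming its bundled "centered_eight" example dataset:

<code python>
# Sketch only: uses ArviZ's bundled "centered_eight" example dataset
# (posterior samples from the classic eight-schools model).
import matplotlib.pyplot as plt
import arviz as az

data = az.load_arviz_data("centered_eight")   # an InferenceData object

# Trace plot: per-variable sampled chains plus marginal densities.
az.plot_trace(data, var_names=["mu", "tau"])

# Tabular diagnostics: posterior means, credible intervals, ESS, r_hat.
print(az.summary(data, var_names=["mu", "tau"]))

plt.show()
</code>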

https://blog.goodaudience.com/holy-grail-of-ai-for-enterprise-explainable-ai-xai-6e630902f2a0

https://arxiv.org/abs/1809.10736 Controllable Neural Story Generation via Reinforcement Learning

We introduce a policy gradient reinforcement learning approach to open story generation that learns to achieve a given narrative goal state. In this work, the goal is for a story to end with a specific type of event, given in advance.
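
The paper's full approach works over event representations with rewards shaped toward the goal; the sketch below illustrates only the bare REINFORCE idea under toy assumptions (a hypothetical GRU event policy, a made-up event vocabulary, and a sparse terminal reward for ending on the goal event type):

<code python>
# Sketch only: generic REINFORCE for goal-driven sequence generation.
# EVENTS, GOAL_EVENT, and the GRU policy are illustrative assumptions,
# not the paper's actual event vocabulary or architecture.
import torch
import torch.nn as nn

EVENTS = 64      # toy event-vocabulary size (assumption)
GOAL_EVENT = 7   # desired final event type (assumption)
MAX_LEN = 10     # maximum story length in events

class EventPolicy(nn.Module):
    """GRU policy over event tokens: p(next event | story so far)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(EVENTS, 32)
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.head = nn.Linear(64, EVENTS)

    def forward(self, prev, hidden=None):
        out, hidden = self.rnn(self.embed(prev), hidden)
        return self.head(out[:, -1]), hidden

policy = EventPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(1000):
    prev = torch.zeros(1, 1, dtype=torch.long)   # start-of-story token
    hidden, log_probs = None, []
    for _ in range(MAX_LEN):
        logits, hidden = policy(prev, hidden)
        dist = torch.distributions.Categorical(logits=logits)
        event = dist.sample()                     # sample the next event
        log_probs.append(dist.log_prob(event))
        prev = event.unsqueeze(0)                 # feed back as input
    # Sparse terminal reward: 1 if the story ends on the goal event type.
    reward = 1.0 if event.item() == GOAL_EVENT else 0.0
    loss = -reward * torch.stack(log_probs).sum()   # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()
</code>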

https://arxiv.org/pdf/1802.07810.pdf Manipulating and Measuring Model Interpretability

Participants who were shown a clear model with a small number of features were better able to simulate the model's predictions. However, contrary to what one might expect when manipulating interpretability, we found no significant difference in multiple measures of trust across conditions. Even more surprisingly, increased transparency hampered people's ability to detect when a model has made a sizeable mistake. These findings emphasize the importance of studying how models are presented to people and empirically verifying that interpretable models achieve their intended effects on end users.

https://arxiv.org/abs/1703.04730 Understanding Black-box Predictions via Influence Functions

https://christophm.github.io/interpretable-ml-book/proto.html Prototypes and Criticisms (Interpretable Machine Learning book)