https://arxiv.org/abs/1810.02909v1 On the Art and Science of Machine Learning Explanations

https://arxiv.org/abs/1810.03993v1 Model Cards for Model Reporting

https://arxiv.org/abs/1810.05680v1 Bottom-up Attention, Models of (http://salicon.net/)

https://github.com/arviz-devs/arviz Python package to plot and analyse samples from probabilistic models
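
A minimal sketch of the kind of workflow ArviZ supports, using the eight-schools example posterior bundled with the package (fetched on first use); the variable names follow that dataset:

<code python>
import arviz as az

# Load a bundled example posterior (the "eight schools" model).
idata = az.load_arviz_data("centered_eight")

# Trace plot: marginal posterior densities plus per-chain draws.
az.plot_trace(idata, var_names=["mu", "tau"])

# Tabular diagnostics: mean, sd, HDI bounds, R-hat, effective sample size.
print(az.summary(idata, var_names=["mu", "tau"]))
</code>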

https://blog.goodaudience.com/holy-grail-of-ai-for-enterprise-explainable-ai-xai-6e630902f2a0

https://arxiv.org/abs/1809.10736 Controllable Neural Story Generation via Reinforcement Learning

We introduce a policy gradient reinforcement learning approach to open story generation that learns to achieve a given narrative goal state. In this work, the goal is for a story to end with a specific type of event, given in advance.
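
As a rough illustration of that idea (not the authors' code), the sketch below runs one REINFORCE update on a toy event-sequence policy, rewarding sampled stories that end with a designated goal event. The paper itself uses a shaped reward rather than this sparse terminal one, and the event vocabulary and network here are stand-ins:

<code python>
import torch
import torch.nn as nn

NUM_EVENTS = 100   # size of the event vocabulary (illustrative)
GOAL_EVENT = 7     # event type the story should end with (illustrative)

class EventPolicy(nn.Module):
    """Tiny recurrent policy over story events."""
    def __init__(self, num_events, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_events, hidden)
        self.rnn = nn.GRUCell(hidden, hidden)
        self.out = nn.Linear(hidden, num_events)

    def forward(self, event, h):
        h = self.rnn(self.embed(event), h)
        return self.out(h), h

policy = EventPolicy(NUM_EVENTS)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(start_event=0, max_len=20, hidden=64):
    event = torch.tensor([start_event])
    h = torch.zeros(1, hidden)
    log_probs = []
    for _ in range(max_len):
        logits, h = policy(event, h)
        dist = torch.distributions.Categorical(logits=logits)
        event = dist.sample()                   # sample the next story event
        log_probs.append(dist.log_prob(event))
        if event.item() == GOAL_EVENT:          # stop once the goal is reached
            break
    # Sparse terminal reward: 1 if the sampled story ended with the goal event.
    reward = 1.0 if event.item() == GOAL_EVENT else 0.0
    loss = -reward * torch.stack(log_probs).sum()   # REINFORCE estimator
    opt.zero_grad()
    loss.backward()
    opt.step()
    return reward
</code>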

https://arxiv.org/pdf/1802.07810.pdf Manipulating and Measuring Model Interpretability

Participants who were shown a clear model with a small number of features were better able to simulate the model’s predictions. However, contrary to what one might expect when manipulating interpretability, we found no significant difference in multiple measures of trust across conditions. Even more surprisingly, increased transparency hampered people’s ability to detect when a model has made a sizeable mistake. These findings emphasize the importance of studying how models are presented to people and empirically verifying that interpretable models achieve their intended effects on end users.

https://arxiv.org/abs/1703.04730 Understanding Black-box Predictions via Influence Functions
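
The paper estimates how a test point’s loss would change if one training point were upweighted or removed, via I(z, z_test) = -∇L(z_test)ᵀ H⁻¹ ∇L(z). A minimal sketch on a synthetic linear regression with an explicitly inverted Hessian (the paper uses implicit Hessian-vector products to scale to real models); the data and point choices below are purely illustrative:

<code python>
import torch

torch.manual_seed(0)
X = torch.randn(50, 3)
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(50)

def loss_fn(w, X, y):
    return ((X @ w - y) ** 2).mean()

# Fit to (near) optimality first; the influence approximation assumes this.
w = torch.zeros(3, requires_grad=True)
opt = torch.optim.LBFGS([w], max_iter=100)

def closure():
    opt.zero_grad()
    loss = loss_fn(w, X, y)
    loss.backward()
    return loss

opt.step(closure)

# Loss gradients at one training point and one "test" point.
g_train = torch.autograd.grad(loss_fn(w, X[0:1], y[0:1]), w)[0]
g_test = torch.autograd.grad(loss_fn(w, X[1:2], y[1:2]), w)[0]

# Full Hessian of the training loss at the fitted parameters.
H = torch.autograd.functional.hessian(lambda v: loss_fn(v, X, y), w.detach())

# Influence of upweighting training point 0 on the test point's loss.
influence = -g_test @ torch.linalg.solve(H, g_train)
print(float(influence))
</code>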

https://christophm.github.io/interpretable-ml-book/proto.html Prototypes and Criticisms (chapter of Christoph Molnar's Interpretable Machine Learning book)