rationalization [2018/10/17 16:06]
rationalization [2019/01/12 12:10]
https://arxiv.org/abs/1810.05680v1 Bottom-up Attention, Models of http://salicon.net/
https://github.com/arviz-devs/arviz ArviZ, a Python package to plot and analyse samples from probabilistic models
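As a quick illustration of the workflow ArviZ supports, here is a minimal sketch that wraps synthetic posterior samples as an `InferenceData` object and summarizes them (the variable name `mu` and the sample values are made up for the example):

```python
import numpy as np
import arviz as az

# Synthetic "posterior" samples: 4 chains x 500 draws for one parameter.
rng = np.random.default_rng(42)
samples = {"mu": rng.normal(loc=1.0, scale=0.5, size=(4, 500))}

idata = az.from_dict(posterior=samples)   # wrap raw arrays as InferenceData
summary = az.summary(idata)               # mean, sd, HDI, ESS, r_hat per variable
print(summary)
```

The same `idata` object can be passed to plotting functions such as `az.plot_posterior(idata)` or `az.plot_trace(idata)`.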
https://blog.goodaudience.com/holy-grail-of-ai-for-enterprise-explainable-ai-xai-6e630902f2a0 Holy Grail of AI for Enterprise: Explainable AI (XAI)
https://arxiv.org/abs/1809.10736 Controllable Neural Story Generation via Reinforcement Learning
We introduce a policy gradient reinforcement learning approach to open story generation that learns to achieve a given narrative goal state. In this work, the goal is for a story to end with a specific type of event, given in advance.
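The paper trains a full sequence model, but the underlying policy gradient (REINFORCE) idea can be sketched on a toy problem: a policy over two candidate "events" that is rewarded only when it samples the designated goal event. Everything here (the two-action setup, reward scheme, learning rate) is illustrative, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)                  # logits over two candidate "events"

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(1000):
    p = softmax(theta)
    a = rng.choice(2, p=p)           # sample an event from the policy
    r = 1.0 if a == 1 else 0.0       # reward only the "goal" event
    grad = -p
    grad[a] += 1.0                   # grad of log pi(a) w.r.t. the logits
    theta += 0.1 * r * grad          # REINFORCE update: r * grad log pi(a)

print(softmax(theta))                # probability mass shifts to the goal event
```

Because the update pushes probability toward actions that earned reward, the policy converges to sampling the goal event almost exclusively; the paper applies the same gradient signal to steer a story generator toward a target ending.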
https://arxiv.org/pdf/1802.07810.pdf Manipulating and Measuring Model Interpretability
Participants who were shown a clear model with a small number of features were better able to simulate the model's predictions. However, contrary to what one might expect when manipulating interpretability, we found no significant difference in multiple measures of trust across conditions. Even more surprisingly, increased transparency hampered people's ability to detect when a model has made a sizeable mistake. These findings emphasize the importance of studying how models are presented to people and empirically verifying that interpretable models achieve their intended effects on end users.