Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper.

https://blog.acolyer.org/2017/11/01/deepxplore-automated-whitebox-testing-of-deep-learning-systems/ DeepXplore: Automated Whitebox Testing of Deep Learning Systems

https://arxiv.org/abs/1802.08908 Scalable Private Learning with PATE
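
A minimal sketch of the noisy teacher-vote aggregation that PATE is built on, not code from the paper: each teacher is trained on a disjoint partition of the sensitive data, the teachers vote on a label for a public query, Gaussian noise is added to the vote histogram, and the noisy argmax becomes the label used to train the student. The teacher count, class count, and noise scale below are illustrative assumptions.

<code python>
import numpy as np

def gnmax_aggregate(teacher_votes, num_classes, sigma, rng):
    """GNMax-style aggregation: count teacher votes for one query, add Gaussian
    noise to the histogram, and return the noisy argmax as the student's label."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.normal(0.0, sigma, size=num_classes)  # noise provides differential privacy
    return int(np.argmax(counts))

# Illustrative numbers only: 250 teachers, 10 classes, sigma = 40.
rng = np.random.default_rng(0)
votes = rng.integers(0, 10, size=250)  # stand-in for real teacher predictions on one query
print(gnmax_aggregate(votes, num_classes=10, sigma=40.0, rng=rng))
</code>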

https://arxiv.org/abs/1803.04585 Categorizing Variants of Goodhart's Law

https://arxiv.org/abs/1606.06565 Concrete Problems in AI Safety

We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from (a toy example of the first case follows the list):

* having the wrong objective function ("avoiding side effects" and "avoiding reward hacking"),
* an objective function that is too expensive to evaluate frequently ("scalable supervision"), or
* undesirable behavior during the learning process ("safe exploration" and "distributional shift").
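
A purely illustrative toy, not taken from either paper above, of the "wrong objective function" case: an agent optimizes a cheap proxy metric, and driving the proxy up makes the true objective worse, a minimal instance of reward hacking / Goodhart's Law. Both functions below are assumed for illustration.

<code python>
import numpy as np

def true_utility(a):
    return a - 0.1 * a ** 2   # what the designer actually wants; peaks at a = 5

def proxy_reward(a):
    return a                  # the cheap, measurable stand-in: "more is better"

actions = np.linspace(0.0, 10.0, 1001)

a_proxy = actions[np.argmax(proxy_reward(actions))]   # what the agent optimizes
a_true  = actions[np.argmax(true_utility(actions))]   # what we wished it optimized

print(f"proxy-optimal action {a_proxy:.1f} -> true utility {true_utility(a_proxy):.2f}")
print(f"truth-optimal action {a_true:.1f} -> true utility {true_utility(a_true):.2f}")
# Maximizing the proxy drives the true utility to 0.0 instead of 2.5:
# once the proxy becomes the target, it stops being a good measure.
</code>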

https://arxiv.org/pdf/1801.05507.pdf GAZELLE: A Low Latency Framework for Secure Neural Network Inference

https://arxiv.org/abs/1808.07261 Increasing Trust in AI Services through Supplier's Declarations of Conformity

https://arxiv.org/abs/1810.08130 Private Machine Learning in TensorFlow using Secure Computation

https://ai.google/education/responsible-ai-practices?twitter=@bigdata

https://arxiv.org/abs/1812.00564v1 Split learning for health: Distributed deep learning without sharing raw patient data
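
A minimal sketch of the split-learning idea behind the link above, a toy reconstruction rather than code from the paper: the client keeps the raw data and the first layers, sends only the cut-layer activations ("smashed data") to the server, and the server runs the remaining layers, then returns the gradient of those activations so the client can finish backpropagation. Layer sizes, the cut point, the loss, and the assumption that labels sit on the server side are all illustrative.

<code python>
import torch
import torch.nn as nn

# Illustrative layer sizes and cut point; raw inputs never leave the client.
client_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU())                    # holds the raw data
server_net = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 2))  # holds the rest

opt_client = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_server = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)            # stays on the client (e.g. patient records)
y = torch.randint(0, 2, (8,))     # labels, assumed here to live with the server

# Client forward pass up to the cut layer; only these activations are transmitted.
smashed = client_net(x)
sent = smashed.detach().requires_grad_(True)   # what the server actually receives

# Server completes the forward pass, computes the loss, and updates its layers.
loss = loss_fn(server_net(sent), y)
opt_server.zero_grad()
loss.backward()
opt_server.step()

# Server returns only the gradient w.r.t. the cut-layer activations;
# the client uses it to finish backpropagation through its own layers.
opt_client.zero_grad()
smashed.backward(sent.grad)
opt_client.step()
</code>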