In this paper, we propose deep probabilistic logic (DPL) as a general framework for indirect supervision, by composing probabilistic logic with deep learning. DPL models label decisions as latent variables, represents prior knowledge on their relations using weighted first-order logical formulas, and alternates between learning a deep neural network for the end task and refining uncertain formula weights for indirect supervision, using variational EM. This framework subsumes prior indirect supervision methods as special cases, and enables novel combination via infusion of rich domain and linguistic knowledge. http://hanover.azurewebsites.net/
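
A minimal sketch of that alternating scheme, under heavy simplifying assumptions of my own (a logistic model stands in for the deep network, per-example rule votes in {-1, 0, +1} stand in for grounded first-order formulas, and the label is binary); this is an illustration of the control flow, not the authors' implementation:

<code python>
# Toy sketch of the variational-EM alternation described above. Every name
# and update rule here is my own simplification, not the DPL code.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: x in R^2; the true label is generated but never trained on.
X = rng.normal(size=(500, 2))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Stand-ins for grounded formulas: per-example votes in {-1, 0, +1} (0 = abstain).
rules = np.stack([
    np.sign(X[:, 0]),                        # informative rule
    np.sign(X[:, 1]),                        # weaker informative rule
    rng.choice([-1.0, 0.0, 1.0], size=500),  # pure-noise rule
], axis=1)
w = np.ones(rules.shape[1])  # uncertain formula weights, to be refined
theta = np.zeros(3)          # "deep network" stand-in: logistic regression

def net_logit(X, theta):
    return X @ theta[:2] + theta[2]

for step in range(50):
    # E-step: posterior over the latent label decisions, combining the
    # weighted logic factors with the current network's belief.
    q = 1.0 / (1.0 + np.exp(-(rules @ w + net_logit(X, theta))))

    # M-step, network: a few gradient steps on soft cross-entropy toward q.
    for _ in range(10):
        p = 1.0 / (1.0 + np.exp(-net_logit(X, theta)))
        grad = np.concatenate([X.T @ (p - q), [(p - q).sum()]]) / len(X)
        theta -= 0.5 * grad

    # M-step, formula weights: upweight rules that agree with the posterior,
    # normalizing by how often each rule actually votes.
    agreement = rules.T @ (2 * q - 1) / np.abs(rules).sum(axis=0).clip(min=1)
    w = np.clip(w + 0.1 * agreement, 0.0, 3.0)

acc = ((net_logit(X, theta) > 0) == y_true).mean()
print("refined rule weights:", np.round(w, 2), "end-model accuracy:", acc)
</code>

The point of the sketch is the loop structure: inference over latent labels, then coupled updates to the end model and to the uncertain formula weights, with no labeled data anywhere in training.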

https://openreview.net/forum?id=r1g7y2RqYX Label Propagation Networks

https://arxiv.org/abs/1810.02840 Training Complex Models with Multi-Task Weak Supervision

We show that by solving a matrix completion-style problem, we can recover the accuracies of these multi-task sources given their dependency structure, but without any labeled data, leading to higher-quality supervision for training an end model. Theoretically, we show that the generalization error of models trained with this approach improves with the number of unlabeled data points, and characterize the scaling with respect to the task and dependency structures. On three fine-grained classification problems, we show that our approach leads to average gains of 20.2 points in accuracy over a traditional supervised approach, 6.8 points over a majority vote baseline, and 4.1 points over a previously proposed weak supervision method that models tasks separately.
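
The paper's matrix-completion formulation handles dependent, multi-task sources; as a hedged, much-reduced sketch of the core idea (recovering source accuracies from unlabeled data alone), here is the special case of three conditionally independent binary sources with balanced classes, where pairwise agreement rates already identify each accuracy. The data generator and all names are invented for the example:

<code python>
# Toy illustration of label-free accuracy recovery. This is NOT the paper's
# algorithm, only the simplest second-moment identity it generalizes.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y = rng.choice([-1.0, 1.0], size=n)      # latent labels; used only to simulate sources
true_acc = np.array([0.85, 0.75, 0.65])  # ground truth, kept only for checking

# Each source emits y with probability acc_i, otherwise -y (no abstains here).
correct = rng.random((n, 3)) < true_acc
L = np.where(correct, y[:, None], -y[:, None])

# Under conditional independence, E[L_i * L_j] = a_i * a_j with a_i = 2*acc_i - 1,
# so each a_i is recoverable from the observed agreement matrix alone:
# a_i = sqrt(M_ij * M_ik / M_jk).
M = (L.T @ L) / n
a = np.sqrt(np.array([
    M[0, 1] * M[0, 2] / M[1, 2],
    M[0, 1] * M[1, 2] / M[0, 2],
    M[0, 2] * M[1, 2] / M[0, 1],
]))
print("recovered accuracies:", np.round((a + 1) / 2, 3))  # close to true_acc
</code>

The matrix-completion view in the paper generalizes this kind of second-moment reasoning to sources with a known dependency structure and multiple tasks.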

https://colinraffel.com/publications/nips2018realistic.pdf Realistic Evaluation of Deep Semi-Supervised Learning Algorithms. https://github.com/brain-research/realistic-ssl-evaluation

https://arxiv.org/abs/1810.10525 Toward an AI Physicist for Unsupervised Learning