https://arxiv.org/pdf/1703.09039.pdf Efficient Processing of Deep Neural Networks: A Tutorial and Survey

https://arxiv.org/pdf/1803.06333.pdf Snap Machine Learning

Our library, named Snap Machine Learning (Snap ML), combines recent advances in machine learning systems and algorithms in a nested manner to reflect the hierarchical architecture of modern distributed systems. This allows us to effectively leverage available network, memory and heterogeneous compute resources. On a terabyte-scale publicly available dataset for click-through-rate prediction in computational advertising, we demonstrate the training of a logistic regression classifier in 1.53 minutes, a 46x improvement over the fastest reported performance.
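
As a rough single-node illustration of the workload Snap ML accelerates (sparse logistic regression for click-through-rate prediction), the scikit-learn sketch below trains on synthetic data. It is not Snap ML's own API, and the dataset sizes and parameters are made up for the example.

<code python>
# Sketch of the CTR-style workload: sparse features, binary click labels,
# L2-regularized logistic regression. Snap ML targets the same task at
# terabyte scale across GPUs and nodes; this is only a tiny reference run.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_rows, n_feats = 100_000, 1_000

# Sparse feature matrix standing in for hashed CTR features.
X = sparse_random(n_rows, n_feats, density=0.01, format="csr", random_state=0)

# Synthetic click labels generated from a hidden linear model plus noise.
w_true = rng.standard_normal(n_feats)
y = (X @ w_true + 0.1 * rng.standard_normal(n_rows) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# "saga" handles large sparse inputs well on a single machine.
clf = LogisticRegression(solver="saga", max_iter=100)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
</code>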

https://arxiv.org/abs/1808.02513 Rethinking Numerical Representations for Deep Neural Networks

We show that inference using these custom numeric representations on production-grade DNNs, including GoogLeNet and VGG, achieves an average speedup of 7.6x with less than 1% degradation in inference accuracy relative to a state-of-the-art baseline platform representing the most sophisticated hardware using single-precision floating point.
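
The paper explores custom hardware number formats; the NumPy sketch below only simulates one simple reduced-precision scheme (symmetric linear quantization of weights) to show why narrow representations can keep the output of a layer close to the single-precision result. The layer sizes and data are invented for the example, not taken from the paper.

<code python>
# Compare a full-precision matrix-vector product against the same product
# computed with weights rounded to an 8-, 6-, or 4-bit symmetric grid.
import numpy as np

def quantize(x, bits):
    """Round x to a symmetric linear grid with the given number of bits."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784)).astype(np.float32)  # toy fully connected layer
x = rng.standard_normal(784).astype(np.float32)         # toy input activation

full = W @ x  # single-precision reference output
for bits in (8, 6, 4):
    approx = quantize(W, bits) @ x
    rel_err = np.linalg.norm(full - approx) / np.linalg.norm(full)
    print(f"{bits}-bit weights: relative output error {rel_err:.3%}")
</code>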