Here we describe the concept of generative compression, the compression of data using generative models, and show its potential to produce more accurate and visually pleasing reconstructions at much deeper compression levels for both image and video data. We also demonstrate that generative compression is orders-of-magnitude more resilient to bit error rates (e.g. from noisy wireless channels) than traditional variable-length entropy coding schemes.
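A toy sketch of the pipeline this abstract describes (the linear "generator" and all names here are illustrative assumptions, not the paper's model): an image is encoded to a compact latent code, the code is sent as fixed-length bits, and a generative model reconstructs the image. Because the code is fixed-length, a flipped bit perturbs only one latent coordinate instead of desynchronizing the stream, which is the intuition behind the resilience claim above.

<code python>
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((784, 32)) / np.sqrt(32)  # stand-in "generator": latent -> image
E = np.linalg.pinv(G)                             # toy encoder: image -> latent

x = G @ rng.standard_normal(32)   # an "image" the generator can represent
z = E @ x                         # compact latent code (32 floats)

# Fixed-length 8-bit quantization of each latent coordinate.
q = np.round((np.clip(z, -4, 4) + 4) / 8 * 255).astype(np.uint8)

# Simulate a noisy channel: flip one bit in the fixed-length code.
# Only a single latent coordinate is perturbed.
q_noisy = q.copy()
q_noisy[0] ^= 1 << 3

z_hat = q_noisy.astype(float) / 255 * 8 - 4
x_hat = G @ z_hat                 # decode with the generative model
print("relative reconstruction error:",
      np.linalg.norm(x - x_hat) / np.linalg.norm(x))
</code>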
  
https://arxiv.org/abs/1704.00648 Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations

We present a new approach to learning compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state of the art for both.
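A minimal NumPy sketch of the soft-to-hard idea (a toy scalar illustration, not the authors' code; the names and the annealing schedule are assumptions): each value is softly assigned to the codebook centers via a softmax over negative squared distances, and raising the annealing parameter sigma drives the soft assignment toward hard nearest-neighbour quantization.

<code python>
import numpy as np

def soft_quantize(z, centers, sigma):
    """Each z_i becomes a convex combination of centers, weighted by a
    softmax over negative squared distances. As sigma grows, this anneals
    toward hard nearest-neighbour quantization (but stays differentiable)."""
    d2 = (z[:, None] - centers[None, :]) ** 2   # squared distances to centers
    w = np.exp(-sigma * d2)
    w /= w.sum(axis=1, keepdims=True)           # softmax assignment weights
    return w @ centers

def hard_quantize(z, centers):
    idx = np.argmin((z[:, None] - centers[None, :]) ** 2, axis=1)
    return centers[idx]

z = np.random.randn(8)
centers = np.linspace(-2, 2, 4)                 # 4-level codebook
for sigma in (0.5, 5.0, 50.0):                  # toy annealing schedule
    gap = np.abs(soft_quantize(z, centers, sigma) - hard_quantize(z, centers)).max()
    print(f"sigma={sigma:5.1f}  max gap to hard quantization: {gap:.4f}")
</code>

Running this shows the gap between the soft and hard quantizers shrinking as sigma increases, which is what lets training start with a smooth objective and end with genuinely discrete codes.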
https://openreview.net/forum?id=BJRZzFlRb&noteId=BJRZzFlRb Compressing Word Embeddings via Deep Compositional Code Learning