We propose a new architecture for distributed image compression from a group of distributed data sources. The work is motivated by practical needs in data-driven codec design, low power consumption, robustness, and data privacy. The proposed architecture, which we refer to as Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC), is able to train distributed encoders and one joint decoder on correlated data sources. Its compression capability is much better than that of training codecs separately. Meanwhile, the performance of our distributed system with 10 distributed sources is within 2 dB peak signal-to-noise ratio (PSNR) of that of a single codec trained on all data sources. We experiment with distributed sources of different correlations and show how our data-driven methodology matches well with the Slepian-Wolf Theorem in Distributed Source Coding (DSC). To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.
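For context, the Slepian-Wolf Theorem invoked above characterizes the achievable rate region for lossless coding of two correlated sources X1 and X2 that are encoded separately but decoded jointly (the classical result the abstract's setup of distributed encoders with one joint decoder parallels):

```latex
R_1 \ge H(X_1 \mid X_2), \qquad
R_2 \ge H(X_2 \mid X_1), \qquad
R_1 + R_2 \ge H(X_1, X_2)
```

In particular, the achievable sum rate with separate encoders equals the joint entropy H(X_1, X_2), i.e. no loss relative to joint encoding, which is the theoretical motivation for training distributed encoders against a single shared decoder.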
|Original language||English (US)|
|Title of host publication||Proceedings - DCC 2020|
|Subtitle of host publication||Data Compression Conference|
|Editors||Ali Bilgin, Michael W. Marcellin, Joan Serra-Sagrista, James A. Storer|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Number of pages||10|
|State||Published - Mar 2020|
|Event||2020 Data Compression Conference, DCC 2020 - Snowbird, United States|
Duration: Mar 24 2020 → Mar 27 2020
|Name||Data Compression Conference Proceedings|
|Conference||2020 Data Compression Conference, DCC 2020|
|Period||3/24/20 → 3/27/20|
Bibliographical note
Funding Information: This work was supported in part by Office of Naval Research Grant No. N00014-18-1-2244. We provide our implementation at https://github.com/dem123456789/Distributed-Recurrent-Autoencoder-for-Scalable-Image-Compression
© 2020 IEEE.