Abstract
Low-rank tensor decomposition has many applications in signal processing and machine learning, and is becoming increasingly important for analyzing big data. A significant challenge is the computation of intermediate products, which can be much larger than the final result of the computation, or even than the original tensor. We propose a scheme that allows memory-efficient in-place updates of intermediate matrices. Motivated by recent advances in big tensor decomposition from multiple compressed replicas, we also consider the related problem of memory-efficient tensor compression. The resulting algorithms can be parallelized, and can exploit, but do not require, sparsity.
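The "intermediate products" the abstract refers to are easiest to see in CP decomposition via alternating least squares, where each factor update requires a matricized-tensor-times-Khatri-Rao-product (MTTKRP). The NumPy sketch below illustrates that general blow-up; it is not the authors' algorithm. The naive route materializes a (JK) × R Khatri-Rao product, while a slice-wise accumulation updates the I × R result in place and never forms an intermediate larger than J × R.

```python
# Minimal sketch of the MTTKRP memory blow-up (illustrative; not the
# paper's algorithm). Mode-1 MTTKRP: M[i,r] = sum_{j,k} X[i,j,k]*B[j,r]*C[k,r].
import numpy as np
from scipy.linalg import khatri_rao

I, J, K, R = 40, 50, 60, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Naive: materialize the (J*K) x R Khatri-Rao product. This intermediate
# can dwarf the I x R result and, for sparse X, even the tensor itself.
M_naive = X.reshape(I, J * K) @ khatri_rao(B, C)

# Memory-efficient alternative: accumulate frontal slices into the result,
# so the largest temporary is only J x R.
M = np.zeros((I, R))
for k in range(K):
    M += X[:, :, k] @ (B * C[k, :])  # (I x J) @ (J x R)

assert np.allclose(M, M_naive)
```

For I = J = K = 1000 and R = 10, the Khatri-Rao intermediate alone holds 10^7 doubles (about 80 MB), versus 10^4 entries for the result, which is the kind of blow-up a memory-efficient in-place scheme avoids.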
Original language | English (US) |
---|---|
Title of host publication | Conference Record of the 48th Asilomar Conference on Signals, Systems and Computers |
Editors | Michael B. Matthews |
Publisher | IEEE Computer Society |
Pages | 581-585 |
Number of pages | 5 |
ISBN (Electronic) | 9781479982974 |
DOIs | |
State | Published - Apr 24 2015 |
Event | 48th Asilomar Conference on Signals, Systems and Computers, ACSSC 2014 - Pacific Grove, United States |
Duration | Nov 2 2014 → Nov 5 2014 |
Publication series
Name | Conference Record - Asilomar Conference on Signals, Systems and Computers |
---|---|
Volume | 2015-April |
ISSN (Print) | 1058-6393 |
Other
Other | 48th Asilomar Conference on Signals, Systems and Computers, ACSSC 2014
---|---|
Country/Territory | United States |
City | Pacific Grove |
Period | 11/2/14 → 11/5/14 |
Bibliographical note
Publisher Copyright: © 2014 IEEE.