TY - JOUR
T1 - Decentralized sparsity-regularized rank minimization
T2 - Algorithms and applications
AU - Mardani, Morteza
AU - Mateos, Gonzalo
AU - Giannakis, Georgios B.
PY - 2013
Y1 - 2013
N2 - Given a limited number of entries from the superposition of a low-rank matrix plus the product of a known compression matrix times a sparse matrix, recovery of the low-rank and sparse components is a fundamental task subsuming compressed sensing, matrix completion, and principal components pursuit. This paper develops algorithms for decentralized sparsity-regularized rank minimization over networks, when the nuclear- and ℓ1-norm are used as surrogates to the rank and nonzero entry counts of the sought matrices, respectively. While nuclear-norm minimization has well-documented merits when centralized processing is viable, non-separability of the singular-value sum challenges its decentralized minimization. To overcome this limitation, leveraging an alternative characterization of the nuclear norm yields a separable, yet non-convex cost minimized via the alternating-direction method of multipliers. Interestingly, if the decentralized (non-convex) estimator converges, under certain conditions it provably attains the global optimum of its centralized counterpart. As a result, this paper bridges the performance gap between centralized and in-network decentralized, sparsity-regularized rank minimization. This, in turn, facilitates (stable) recovery of the low-rank and sparse model matrices through reduced-complexity per-node computations, and affordable message passing among single-hop neighbors. Several application domains are outlined to highlight the generality and impact of the proposed framework. These include unveiling traffic anomalies in backbone networks, and predicting network-wide path latencies. Simulations with synthetic and real network data confirm the convergence of the novel decentralized algorithm, and its centralized performance guarantees.
KW - Decentralized optimization
KW - Lasso
KW - low rank
KW - matrix completion
KW - networks
KW - nuclear norm
KW - sparsity
UR - http://www.scopus.com/inward/record.url?scp=84885088639&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84885088639&partnerID=8YFLogxK
U2 - 10.1109/TSP.2013.2279080
DO - 10.1109/TSP.2013.2279080
M3 - Article
AN - SCOPUS:84885088639
SN - 1053-587X
VL - 61
SP - 5374
EP - 5388
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 21
M1 - 6584818
ER -