TY - GEN

T1 - Sparsity-promoting adaptive algorithm for distributed learning in diffusion networks

AU - Chouvardas, Symeon

AU - Slavakis, Konstantinos

AU - Kopsinis, Yannis

AU - Theodoridis, Sergios

PY - 2012/11/27

Y1 - 2012/11/27

N2 - In this paper, a sparsity-promoting adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale, i.e., at each time instant and at each node, a closed convex set, namely a hyperslab, is constructed around the current measurement point. This defines the region in which the solution lies. The algorithm seeks a solution in the intersection of these hyperslabs by a sequence of projections. Sparsity is encouraged in two complementary ways: a) by employing extra projections onto a weighted ℓ1 ball, which complies with our desire to constrain the respective weighted ℓ1 norm, and b) by adopting variable metric projections onto the hyperslabs, which implicitly quantify data mismatch. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys a number of elegant convergence properties. Finally, numerical examples verify the validity of the proposed scheme, compared to other algorithms, which have been developed in the context of sparse adaptive learning.

AB - In this paper, a sparsity-promoting adaptive algorithm for distributed learning in diffusion networks is developed. The algorithm follows the set-theoretic estimation rationale, i.e., at each time instant and at each node, a closed convex set, namely a hyperslab, is constructed around the current measurement point. This defines the region in which the solution lies. The algorithm seeks a solution in the intersection of these hyperslabs by a sequence of projections. Sparsity is encouraged in two complementary ways: a) by employing extra projections onto a weighted ℓ1 ball, which complies with our desire to constrain the respective weighted ℓ1 norm, and b) by adopting variable metric projections onto the hyperslabs, which implicitly quantify data mismatch. A combine-adapt cooperation strategy is adopted. Under some mild assumptions, the scheme enjoys a number of elegant convergence properties. Finally, numerical examples verify the validity of the proposed scheme, compared to other algorithms, which have been developed in the context of sparse adaptive learning.

KW - Adaptive distributed learning

KW - diffusion networks

KW - projections

KW - sparsity

UR - http://www.scopus.com/inward/record.url?scp=84869807144&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84869807144&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84869807144

SN - 9781467310680

T3 - European Signal Processing Conference

SP - 1084

EP - 1088

BT - Proceedings of the 20th European Signal Processing Conference, EUSIPCO 2012

T2 - 20th European Signal Processing Conference, EUSIPCO 2012

Y2 - 27 August 2012 through 31 August 2012

ER -