Algorithms for computing principal and minor invariant subspaces of large matrices

Research output: Contribution to journal › Conference article › peer-review


Abstract

Computing all the eigenvalues and eigenvectors of a large matrix is a time-consuming operation. Many applications in signal processing, control, and applied mathematics require only the minimum and/or maximum eigenpairs. In this paper, new methods for computing the smallest and largest eigenvalues of a symmetric matrix are developed. These methods are modifications of the Rayleigh quotient iteration aimed at circumventing some of its drawbacks, such as nonconvergence or slow convergence. In this approach, the Rayleigh quotient is sequentially minimized over several orthogonal vectors. At each iteration, a new vector is formed as a linear combination of the current iterate and an orthogonal vector derived from the gradient of a Ritz functional. The proposed methods have global convergence and a cubic convergence rate. These methods are also generalized to solve high-resolution temporal and spatial frequency tracking problems. The eigenstructure tracking algorithm has update complexity O(n²p), where n is the data dimension and p is the dimension of the minor or major subspace. The performance of these algorithms is tested with several examples. Simulations involving large matrices show that the convergence behavior is independent of the size of the matrices.
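The abstract gives no pseudocode, so the following is only a minimal NumPy sketch of the general idea it describes: at each step the Rayleigh quotient is minimized over the span of the current iterate x and a gradient direction g of the Ritz functional, via a small projected eigenproblem. The routine name smallest_eigenpair, the starting vector, and the stopping rule are assumptions for illustration, not the authors' exact algorithm; taking the largest Ritz value instead would target the maximum eigenpair.

import numpy as np

def smallest_eigenpair(A, tol=1e-10, max_iter=500):
    # Start from a normalized random vector.
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        rho = x @ A @ x              # Rayleigh quotient (Ritz value) of the current iterate
        g = A @ x - rho * x          # gradient of the Ritz functional; orthogonal to x
        if np.linalg.norm(g) < tol:  # converged: x is numerically an eigenvector
            break
        g /= np.linalg.norm(g)
        S = np.column_stack([x, g])  # two-dimensional search space span{x, g}
        H = S.T @ A @ S              # projected 2-by-2 eigenproblem
        _, V = np.linalg.eigh(H)
        x = S @ V[:, 0]              # Ritz vector with the smallest Ritz value
        x /= np.linalg.norm(x)
    return x @ A @ x, x

# Example: smallest eigenpair of a 1000-by-1000 diagonal test matrix
A = np.diag(np.arange(1.0, 1001.0))
lam, v = smallest_eigenpair(A)

Each iteration here costs one matrix-vector product plus O(n) work, which is consistent with the low per-update cost the abstract emphasizes for large matrices.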

Original language: English (US)
Pages (from-to): V669-V672
Journal: Proceedings - IEEE International Symposium on Circuits and Systems
Volume: 5
State: Published - 2003
Event: Proceedings of the 2003 IEEE International Symposium on Circuits and Systems - Bangkok, Thailand
Duration: May 25 2003 - May 28 2003
