LAL  7.5.0.1-89842e6

Detailed Description

Routines to compute eigenvalues and eigenvectors.

Author
Creighton, T. D.

Description

LALSSymmetricEigenVectors() and LALDSymmetricEigenVectors() compute the eigenvalues and eigenvectors of the square matrix *matrix. The eigenvalues are stored in *values, which must be pre-allocated with length equal to each dimension of *matrix. The eigenvectors are computed in place and returned in *matrix: on return, the i-th column of *matrix is the eigenvector associated with the i-th eigenvalue in *values. If you do not want the input matrix to be overwritten, make a copy of it before calling these routines.

Algorithm

A square matrix \(\mathsf{M}^a{}_b\) is said to have an eigenvalue \(\lambda\) and associated eigenvector \(\mathsf{x}^a\) if the following equation holds:

\begin{equation} \label{eq_eigenvalue} \mathsf{M}^a{}_b \mathsf{x}^b = \lambda\mathsf{x}^a \end{equation}

Generically an \(N\times N\) matrix has \(N\) distinct eigenvalues each with a unique (up to a multiplicative constant) associated eigenvector. In certain cases, though, the system is degenerate: that is, some of the \(N\) eigenvalues may be the same, and the corresponding eigenvectors may not be unique.

We are concerned with the particular case of real symmetric matrices, which arise in a broad class of problems. These matrices have several useful properties. First, the eigenvalues of any Hermitian matrix (including any real symmetric matrix) are all real. Second, for any matrix that commutes with its Hermitian conjugate (including Hermitian and unitary matrices, or, for real matrices, symmetric and orthogonal ones), the eigenvectors \({\mathsf{x}_{(i)}}^a\) of distinct eigenvalues \(\lambda_{(i)}\) are orthogonal: that is, \((\mathsf{x}_{(i)}^T)_a \mathsf{x}_{(j)}{}^a=0\) if \(\lambda_{(i)}\neq\lambda_{(j)}\). Moreover, if two or more linearly independent eigenvectors share the same eigenvalue, then any linear combination of them is also an eigenvector with that eigenvalue, so we are free to choose a basis of eigenvectors that is orthonormal. Third, for any matrix that commutes with its conjugate, the complete set of eigenvectors spans the entire \(N\)-dimensional vector space, so the orthonormal basis above is a complete orthonormal basis.

If we construct a square matrix \(\mathsf{X}^a{}_b\) whose columns are the orthonormal eigenvectors, \(X^i{}_j={x_{(j)}}^i\), then it is clear (from the orthonormality of the \({\mathsf{x}_{(j)}}^a\)) that \(\mathsf{X}^a{}_b\) is orthogonal. Furthermore, the eigenvalue Eq. \eqref{eq_eigenvalue} gives:

\begin{equation} {(\mathsf{X}^{-1}){}^a{}_b}\mathsf{M}^b{}_c\mathsf{X}^c{}_d = \left[\begin{array}{cccc} \lambda_{(1)} & 0 & \cdots & 0 \\ 0 & \lambda_{(2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_{(N)} \end{array}\right] \;, \end{equation}

where \(\lambda_{(i)}\) are the eigenvalues of the corresponding eigenvectors (with the possibility that some of these eigenvalues have the same value). That is, the matrix \(\mathsf{M}^b{}_c\) can be diagonalized by an orthogonal similarity transformation, and the transformation matrix gives us the eigenvectors. The process of solving the eigenvalue equation is thus equivalent to the problem of diagonalizing the matrix.

For a general \(N\times N\) symmetric matrix, there is no finite algorithm that exactly diagonalizes the matrix. Most numerical eigenvalue solvers use a process of iterative orthogonal transformations, where each transformation is chosen to reduce the sum of the squares of the off-diagonal elements: the matrix uniformly converges to a diagonal form under successive transformations. This convergence can be accelerated if one first transforms the matrix into a tridiagonal form (where the only nonzero elements are on the diagonal or immediately adjacent to it); this transformation can be accomplished deterministically in a finite number of steps. This is the approach advocated in Chapter 11 of [22], and taken in this module.

The routines LALSSymmetricEigenVectors() and LALDSymmetricEigenVectors() first call LALSSymmetricToTriDiagonal() and LALDSymmetricToTriDiagonal() to reduce the matrix to tridiagonal form, then call LALSTriDiagonalToDiagonal() and LALDTriDiagonalToDiagonal() to reduce it iteratively to diagonal form.

Prototypes

void LALSSymmetricEigenVectors (LALStatus *stat, REAL4Vector *values, REAL4Array *matrix)
 
void LALDSymmetricEigenVectors (LALStatus *stat, REAL8Vector *values, REAL8Array *matrix)
 

Function Documentation

◆ LALSSymmetricEigenVectors()

void LALSSymmetricEigenVectors (LALStatus *stat, REAL4Vector *values, REAL4Array *matrix)
See also
See Module Eigen.c for documentation

Definition at line 734 of file Eigen.c.

◆ LALDSymmetricEigenVectors()

void LALDSymmetricEigenVectors (LALStatus *stat, REAL8Vector *values, REAL8Array *matrix)
See also
See Module Eigen.c for documentation

Definition at line 772 of file Eigen.c.