LAL  7.5.0.1-89842e6

Detailed Description

Routines to compute matrix determinants and inverses.

Author
Creighton, T. D.

Description

LALDMatrixDeterminant() computes the determinant *det of the square matrix *matrix. The internal computations will corrupt *matrix; if you don't want the input matrix to be changed, make a copy of it first.

LALSMatrixInverse() and LALDMatrixInverse() compute the inverse *inverse of the square matrix *matrix. If the pointer det is non-NULL, then the determinant is also computed and returned in *det. The array *inverse must already be allocated to the correct size. The internal computations will corrupt *matrix; if you don't want the input matrix to be changed, make a copy of it first.

LALDMatrixDeterminantErr() computes the determinant det[0] of the square matrix *matrix. If the pointer matrixErr is non-NULL, then *matrixErr is taken to store the uncertainties in the corresponding components of *matrix, and the resulting uncertainty in the determinant (computed by linear error propagation) is returned in det[1]. This routine is not highly optimized, and uses an internal matrix in its computations, so the contents of *matrix and *matrixErr will not be changed.
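
For illustration, here is a minimal usage sketch of the double-precision inverse routine. It assumes that the prototypes above are declared in MatrixUtils.h and that arrays are created with XLALCreateREAL8ArrayL() and XLALDestroyREAL8Array() from AVFactories.h; the numerical values are arbitrary test data, not part of the module.

#include <stdio.h>
#include <lal/LALStdlib.h>
#include <lal/AVFactories.h>
#include <lal/MatrixUtils.h>

int main( void )
{
  static LALStatus status;   /* LALStatus must start out zeroed */
  REAL8 det;

  /* Allocate a 2x2 matrix and its inverse (assumed factory routines). */
  REAL8Array *matrix  = XLALCreateREAL8ArrayL( 2, 2, 2 );
  REAL8Array *inverse = XLALCreateREAL8ArrayL( 2, 2, 2 );

  /* Fill the matrix [[4,7],[2,6]] in row-major order. */
  matrix->data[0] = 4.0; matrix->data[1] = 7.0;
  matrix->data[2] = 2.0; matrix->data[3] = 6.0;

  /* The call corrupts *matrix, so copy it first if the input is still needed. */
  LALDMatrixInverse( &status, &det, matrix, inverse );

  printf( "det = %g\n", det );                          /* 4*6 - 7*2 = 10 */
  printf( "inverse[0][0] = %g\n", inverse->data[0] );   /* 6/10 = 0.6 */

  XLALDestroyREAL8Array( matrix );
  XLALDestroyREAL8Array( inverse );
  return 0;
}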

Algorithm

A linear system of equations is written in matrix terms as:

\begin{equation} \label{eq_linear_system} \mathsf{M}^a{}_b \mathsf{x}^b = \mathsf{v}^a \;, \end{equation}

where \(\mathsf{M}^a{}_b\) is a known matrix, \(\mathsf{v}^a\) a known vector, and \(\mathsf{x}^b\) an unknown vector that we wish to solve for. A standard method for solving this is the method of LU decomposition, based on the fact that any non-singular square matrix \(\mathsf{M}^a{}_b\) can be decomposed into the product of a lower-triangular matrix \(\mathsf{L}^a{}_b\) and an upper-triangular matrix \(\mathsf{U}^a{}_b\):

\begin{equation} \mathsf{M}^a{}_b = \mathsf{L}^a{}_c \mathsf{U}^c{}_b = \left[\begin{array}{cccc} 1 & 0 & \cdots & 0 \\ L^2{}_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ L^N{}_1 & L^N{}_2 & \cdots & 1 \end{array}\right] \cdot \left[\begin{array}{cccc} U^1{}_1 & U^1{}_2 & \cdots & U^1{}_N \\ 0 & U^2{}_2 & \cdots & U^2{}_N \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & U^N{}_N \end{array}\right] \;. \end{equation}
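To make the decomposition concrete, the following sketch performs an in-place Doolittle LU decomposition of a plain row-major C array. It is illustrative only: the helper name lu_decompose() is hypothetical, and it omits the row pivoting that LALSLUDecomp() and LALDLUDecomp() perform, so it fails on matrices that require row interchanges.

/* In-place Doolittle LU decomposition of the n-by-n row-major matrix a[].
 * On return, a[] holds U on and above the diagonal and the sub-diagonal
 * part of L (whose unit diagonal is not stored).  Returns 0 on success,
 * -1 if a zero pivot is encountered. */
static int lu_decompose( double *a, int n )
{
  for ( int k = 0; k < n; k++ ) {
    for ( int j = k; j < n; j++ ) {          /* row k of U */
      double sum = 0.0;
      for ( int s = 0; s < k; s++ )
        sum += a[k*n + s] * a[s*n + j];
      a[k*n + j] -= sum;
    }
    if ( a[k*n + k] == 0.0 )
      return -1;                             /* singular, or needs pivoting */
    for ( int i = k + 1; i < n; i++ ) {      /* column k of L */
      double sum = 0.0;
      for ( int s = 0; s < k; s++ )
        sum += a[i*n + s] * a[s*n + k];
      a[i*n + k] = ( a[i*n + k] - sum ) / a[k*n + k];
    }
  }
  return 0;
}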

The linear system can then be broken up as \(\mathsf{L}^a{}_b \mathsf{y}^b = \mathsf{v}^a\) and \(\mathsf{U}^b{}_c \mathsf{x}^c = \mathsf{y}^b\), where these two equations are trivially invertible:

\begin{eqnarray} y^i & = & v^i - \sum_{j=1}^{i-1} L^i{}_j y^j \;,\qquad i=1,2,\ldots,N \;, \\ x^i & = & \frac{1}{U^i{}_i}\left( y^i - \sum_{j=i+1}^N U^i{}_j x^j \right) \;,\qquad i=N,N-1,\ldots,1 \;, \end{eqnarray}

where the calculations are arranged so that the computation of a given \(y^i\) or \(x^i\) requires only those values that have been computed already. This process of solving the linear system is called backsubstitution.
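A direct transcription of these two recursions into C might look as follows; lu_solve() is a hypothetical helper operating on the packed factors produced by the lu_decompose() sketch above, not one of the LAL routines.

/* Solve L y = v then U x = y, given the packed LU factors lu[] of an
 * n-by-n matrix.  y[] is n doubles of scratch space; the solution is
 * written to x[]. */
static void lu_solve( const double *lu, int n, const double *v,
                      double *y, double *x )
{
  /* forward substitution: y^i = v^i - sum_{j<i} L^i_j y^j */
  for ( int i = 0; i < n; i++ ) {
    double sum = v[i];
    for ( int j = 0; j < i; j++ )
      sum -= lu[i*n + j] * y[j];
    y[i] = sum;
  }
  /* back substitution: x^i = ( y^i - sum_{j>i} U^i_j x^j ) / U^i_i */
  for ( int i = n - 1; i >= 0; i-- ) {
    double sum = y[i];
    for ( int j = i + 1; j < n; j++ )
      sum -= lu[i*n + j] * x[j];
    x[i] = sum / lu[i*n + i];
  }
}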

The determinant of \(\mathsf{M}^a{}_b\) is simply the product of the diagonal elements of the decomposed matrix: \(|\mathsf{M}^a{}_b|=\prod_{i=1}^N U^i{}_i\). The inverse matrix \((\mathsf{M}^{-1}){}^a{}_b\) can be computed by performing a column-by-column backsubstitution of the identity matrix.
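Continuing the same illustrative sketch (again, not the LAL implementation, and with no pivoting, hence no permutation sign to track), the determinant and the inverse follow directly from the packed factors:

/* Determinant: product of the diagonal elements of U. */
static double lu_determinant( const double *lu, int n )
{
  double det = 1.0;
  for ( int i = 0; i < n; i++ )
    det *= lu[i*n + i];
  return det;
}

/* Inverse: back-substitute each column of the identity matrix.
 * e[], y[] and col[] are n doubles of scratch space each. */
static void lu_inverse( const double *lu, int n, double *inv,
                        double *e, double *y, double *col )
{
  for ( int j = 0; j < n; j++ ) {
    for ( int i = 0; i < n; i++ )
      e[i] = ( i == j ) ? 1.0 : 0.0;   /* j-th basis vector */
    lu_solve( lu, n, e, y, col );
    for ( int i = 0; i < n; i++ )
      inv[i*n + j] = col[i];           /* column j of the inverse */
  }
}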

The routines in Module DetInverse.c first use LALSLUDecomp() or LALDLUDecomp() to perform an LU decomposition of the matrix, then either compute the determinant from the diagonal elements, or use LALSLUBackSub() or LALDLUBackSub() to determine the inverse by back-substitution of basis vectors. The routines that compute the determinant will also handle any singular matrix error code returned by the LU decomposition routines, returning zero as the determinant.

Since the diagonal components \(L^i{}_i\) are conventionally set to 1, they do not need to be explicitly stored. Therefore both matrices \(\mathsf{L}^a{}_b\) and \(\mathsf{U}^a{}_b\) can be stored in-place, in the same memory block used for the input matrix \(\mathsf{M}^a{}_b\). This is the procedure used by LALSLUDecomp() and LALDLUDecomp(); hence on return the routines in this module leave the input *matrix in this decomposed state. However, these routines also permute the rows of the input matrix, and the information on this permutation is not returned by the routines in Module DetInverse.c, so the information in *matrix will be irretrievably mangled.

Computing the determinant is dominated by the cost of the LU decomposition, which requires of order \(N^3/3\) operations. Computing the inverse requires an additional \(N\) back-substitutions, but since most of the vector elements are zero, this reduces the average cost of each back-substitution from \(N^2\) to \(2N^2/3\), for a total operation count of order \(N^3\).

Computing determinant uncertainties:
To determine the dependence of the determinant on any one element of the matrix, we take advantage of the fact that the determinant \(|\mathsf{M}^a{}_b|\) can be written as:

\begin{eqnarray} |\mathsf{M}^a{}_b| & = & \sum_{j=1}^N M^i{}_j |(\mathsf{C}^i{}_j){}^a{}_b| \quad\mbox{for any }i\in[1,N] \;,\\ & = & \sum_{i=1}^N M^i{}_j |(\mathsf{C}^i{}_j){}^a{}_b| \quad\mbox{for any }j\in[1,N] \;, \end{eqnarray}

where

\begin{equation} (\mathsf{C}^i{}_j){}^a{}_b = \left[\begin{array}{ccccccc} M^1{}_1 & \cdots & M^1{}_{j-1} & 0 & M^1{}_{j+1} & \cdots & M^1{}_N \\ \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\ M^{i-1}{}_1 & \cdots & M^{i-1}{}_{j-1} & 0 & M^{i-1}{}_{j+1} & \cdots & M^{i-1}{}_N \\ 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\ M^N{}_1 & \cdots & M^N{}_{j-1} & 0 & M^N{}_{j+1} & \cdots & M^N{}_N \end{array}\right] \end{equation}

is called the co-matrix of the element \(M^i{}_j\).

Assuming all matrix elements are statistically independent, linear error propagation can give us the uncertainty in the determinant:

\begin{eqnarray} E\left(|\mathsf{M}^a{}_b|\right) & = & \sqrt{\sum_{i,j=1}^N \left[ \frac{\partial|\mathsf{M}^a{}_b|}{\partial M^i{}_j} E\left(M^i{}_j\right)\right]^2 } \\ & = & \sqrt{\sum_{i,j=1}^N \left[ |(\mathsf{C}^i{}_j){}^a{}_b| E\left(M^i{}_j\right)\right]^2 } \;. \end{eqnarray}

The routine LALDMatrixDeterminantErr() thus simply computes the determinant multiple times, successively zeroing out rows and columns of *matrix and replacing the element at their intersection with the corresponding element of *matrixErr, and takes the square root of the sum of the squares of these determinants. As mentioned above, it uses an internal matrix for all computations, so the input parameters are unchanged. Evaluating the uncertainty requires \(N^2\) determinants, so the operation count scales as \(N^5\): this routine should not be used for large matrices!
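
The procedure can be sketched as follows for plain row-major C arrays; determinant() is a hypothetical helper (for instance the product of the LU diagonal from the earlier sketches), and the function name determinant_error() is likewise illustrative rather than part of the LAL API.

#include <math.h>
#include <stdlib.h>
#include <string.h>

double determinant( const double *a, int n );   /* hypothetical helper */

/* Propagate the element uncertainties err[] through the determinant of the
 * n-by-n matrix a[], both stored row-major, following the zeroing procedure
 * described above.  Returns the uncertainty in the determinant. */
double determinant_error( const double *a, const double *err, int n )
{
  double sum2 = 0.0;
  double *work = malloc( n * n * sizeof( *work ) );
  for ( int i = 0; i < n; i++ )
    for ( int j = 0; j < n; j++ ) {
      memcpy( work, a, n * n * sizeof( *work ) );
      for ( int k = 0; k < n; k++ ) {      /* zero row i and column j... */
        work[i*n + k] = 0.0;
        work[k*n + j] = 0.0;
      }
      work[i*n + j] = err[i*n + j];        /* ...and place E(M^i_j) at (i,j) */
      double d = determinant( work, n );   /* |co-matrix| scaled by the error */
      sum2 += d * d;
    }
  free( work );
  return sqrt( sum2 );
}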

Prototypes

void LALSMatrixInverse (LALStatus *stat, REAL4 *det, REAL4Array *matrix, REAL4Array *inverse)
 
void LALDMatrixDeterminant (LALStatus *stat, REAL8 *det, REAL8Array *matrix)
 
void LALDMatrixInverse (LALStatus *stat, REAL8 *det, REAL8Array *matrix, REAL8Array *inverse)
 
void LALDMatrixDeterminantErr (LALStatus *stat, REAL8 det[2], REAL8Array *matrix, REAL8Array *matrixErr)
 

Function Documentation

◆ LALSMatrixInverse()

void LALSMatrixInverse ( LALStatus *stat,
REAL4 *det,
REAL4Array *matrix,
REAL4Array *inverse
)
See also
See Module DetInverse.c for documentation

Definition at line 616 of file DetInverse.c.

◆ LALDMatrixDeterminant()

void LALDMatrixDeterminant ( LALStatus *stat,
REAL8 *det,
REAL8Array *matrix
)
See also
See Module DetInverse.c for documentation

Definition at line 691 of file DetInverse.c.

◆ LALDMatrixInverse()

void LALDMatrixInverse ( LALStatus *stat,
REAL8 *det,
REAL8Array *matrix,
REAL8Array *inverse
)
See also
See Module DetInverse.c for documentation

Definition at line 734 of file DetInverse.c.

◆ LALDMatrixDeterminantErr()

void LALDMatrixDeterminantErr ( LALStatus *stat,
REAL8 det[2],
REAL8Array *matrix,
REAL8Array *matrixErr
)
See also
See Module DetInverse.c for documentation

Definition at line 808 of file DetInverse.c.