§3.2 Linear Algebra

§3.2(i) Gaussian Elimination

To solve the system

3.2.1 Ax=b,

with Gaussian elimination, where A is a nonsingular n×n matrix and b is an n×1 vector, we start with the augmented matrix

3.2.2 \begin{bmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \cdots & a_{nn} & b_n \end{bmatrix}.

By repeatedly subtracting multiples of each row from the subsequent rows we obtain a matrix of the form

3.2.3 \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} & y_1 \\ 0 & u_{22} & \cdots & u_{2n} & y_2 \\ \vdots & \ddots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & u_{nn} & y_n \end{bmatrix}.

During this reduction process we store the multipliers \ell_{jk} that are used in each column to eliminate other elements in that column. This yields a lower triangular matrix of the form

3.2.4 L = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ \ell_{21} & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ \ell_{n1} & \cdots & \ell_{n,n-1} & 1 \end{bmatrix}.

If we denote by U the upper triangular matrix comprising the elements u_{jk} in (3.2.3), then we have the factorization, or triangular decomposition,

3.2.5 A=LU.

With y = [y_1, y_2, \dots, y_n]^T, the process of solution can then be regarded as first solving the equation Ly=b for y (forward elimination), followed by the solution of Ux=y for x (back substitution).

For more details see Golub and Van Loan (1996, pp. 87–100).

Example

3.2.6 \begin{bmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 5 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \\ 0 & -1 & -5 \\ 0 & 0 & 18 \end{bmatrix}.

In solving Ax = [1,1,1]^T, we obtain by forward elimination y = [1,-1,3]^T, and by back substitution x = [\tfrac{1}{6},\tfrac{1}{6},\tfrac{1}{6}]^T.
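The worked example above can be reproduced in a few lines of code. The following is a minimal sketch, assuming NumPy is available; lu_decompose is an illustrative helper rather than a library routine, and it performs no pivoting.

```python
# Doolittle LU factorization without pivoting, then forward elimination and
# back substitution, as in (3.2.5), applied to the matrix of (3.2.6).
import numpy as np

def lu_decompose(A):
    """Return unit lower triangular L and upper triangular U with A = LU."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for j in range(k + 1, n):
            L[j, k] = U[j, k] / U[k, k]      # multiplier ell_{jk}
            U[j, k:] -= L[j, k] * U[k, k:]   # eliminate below the pivot
    return L, U

A = np.array([[1., 2., 3.], [2., 3., 1.], [3., 1., 2.]])
b = np.array([1., 1., 1.])

L, U = lu_decompose(A)
y = np.linalg.solve(L, b)   # solve L y = b (forward elimination)
x = np.linalg.solve(U, y)   # solve U x = y (back substitution)
print(L)        # [[1,0,0],[2,1,0],[3,5,1]]
print(U)        # [[1,2,3],[0,-1,-5],[0,0,18]]
print(y, x)     # [1,-1,3] and [1/6,1/6,1/6]
```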

In practice, if any of the multipliers \ell_{jk} are unduly large in magnitude compared with unity, then Gaussian elimination is unstable. To avoid instability the rows are interchanged at each elimination step in such a way that the absolute value of the element that is used as a divisor, the pivot element, is not less than that of the other available elements in its column. Then |\ell_{jk}| \le 1 in all cases. This modification is called Gaussian elimination with partial pivoting.

For more information on pivoting see Golub and Van Loan (1996, pp. 109–123).
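In practice one normally calls a library routine rather than coding the elimination by hand. As an illustration, assuming SciPy is available, scipy.linalg.lu_factor computes the factorization with partial pivoting (row interchanges) and lu_solve performs the forward elimination and back substitution; the tiny-pivot matrix below is just an illustrative example where elimination without pivoting would be unstable.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1e-20, 1.0], [1.0, 1.0]])   # tiny leading pivot
b = np.array([1.0, 2.0])

lu, piv = lu_factor(A)        # LU factorization with row interchanges
x = lu_solve((lu, piv), b)    # forward elimination + back substitution
print(x)                      # close to [1, 1]
```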

Iterative Refinement

When the factorization (3.2.5) is available, the accuracy of the computed solution x can be improved with little extra computation. Because of rounding errors, the residual vector r=b-Ax is nonzero as a rule. We solve the system Aδx=r for δx, taking advantage of the existing triangular decomposition of A to obtain an improved solution x+δx.
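A sketch of a single refinement step, assuming SciPy is available; the factors from lu_factor are reused, so the correction costs only one forward and one back substitution.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)      # initial computed solution
r = b - A @ x                   # residual, nonzero as a rule
dx = lu_solve((lu, piv), r)     # correction from the existing factorization
x_refined = x + dx
print(np.linalg.norm(b - A @ x_refined))
```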

§3.2(ii) Gaussian Elimination for a Tridiagonal Matrix

Tridiagonal matrices are ones in which the only nonzero elements occur on the main diagonal and two adjacent diagonals. Thus

3.2.7 A = \begin{bmatrix} b_1 & c_1 & & & 0 \\ a_2 & b_2 & c_2 & & \\ & \ddots & \ddots & \ddots & \\ & & a_{n-1} & b_{n-1} & c_{n-1} \\ 0 & & & a_n & b_n \end{bmatrix}.

Assume that A can be factored as in (3.2.5), but without partial pivoting. Then

3.2.8 L = \begin{bmatrix} 1 & & & & 0 \\ \ell_2 & 1 & & & \\ & \ddots & \ddots & & \\ & & \ell_{n-1} & 1 & \\ 0 & & & \ell_n & 1 \end{bmatrix},
3.2.9 U = \begin{bmatrix} d_1 & u_1 & & & 0 \\ & d_2 & u_2 & & \\ & & \ddots & \ddots & \\ & & & d_{n-1} & u_{n-1} \\ 0 & & & & d_n \end{bmatrix},

where u_j = c_j, j = 1,2,\dots,n-1, d_1 = b_1, and

3.2.10 \ell_j = a_j / d_{j-1},
d_j = b_j - \ell_j c_{j-1},
j = 2,\dots,n.

Forward elimination for solving Ax=f then becomes y_1 = f_1,

3.2.11 y_j = f_j - \ell_j y_{j-1},
j = 2,\dots,n,

and back substitution is x_n = y_n / d_n, followed by

3.2.12 x_j = (y_j - u_j x_{j+1}) / d_j,
j = n-1, \dots, 1.
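The recurrences (3.2.10)–(3.2.12) translate directly into a short routine. The sketch below assumes NumPy; solve_tridiagonal is an illustrative name, and, as assumed above, no pivoting is performed.

```python
import numpy as np

def solve_tridiagonal(a, b, c, f):
    """Solve Ax = f, where A has subdiagonal a[1:], diagonal b, superdiagonal c[:-1]."""
    n = len(b)
    d = np.empty(n)
    ell = np.empty(n)
    y = np.empty(n)
    d[0], y[0] = b[0], f[0]
    for j in range(1, n):                 # factorization and forward elimination
        ell[j] = a[j] / d[j - 1]          # (3.2.10)
        d[j] = b[j] - ell[j] * c[j - 1]
        y[j] = f[j] - ell[j] * y[j - 1]   # (3.2.11)
    x = np.empty(n)
    x[-1] = y[-1] / d[-1]
    for j in range(n - 2, -1, -1):        # back substitution (3.2.12), u_j = c_j
        x[j] = (y[j] - c[j] * x[j + 1]) / d[j]
    return x

# Example: a symmetric tridiagonal system with diagonal 2 and off-diagonals -1
diag = np.full(5, 2.0)
off = np.full(5, -1.0)
print(solve_tridiagonal(off, diag, off, np.ones(5)))
```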

For more information on solving tridiagonal systems see Golub and Van Loan (1996, pp. 152–160).

§3.2(iii) Condition of Linear Systems

The p-norm of a vector x = [x_1,\dots,x_n]^T is given by

3.2.13 \|x\|_p = \left( \sum_{j=1}^{n} |x_j|^p \right)^{1/p},
p = 1, 2, \dots,
\|x\|_\infty = \max_{1 \le j \le n} |x_j|.

The Euclidean norm is the case p=2.

The p-norm of a matrix A = [a_{jk}] is

3.2.14 \|A\|_p = \max_{x \ne 0} \frac{\|Ax\|_p}{\|x\|_p}.

The cases p = 1, 2, and ∞ are the most important:

3.2.15 \|A\|_1 = \max_{1 \le k \le n} \sum_{j=1}^{n} |a_{jk}|,
\|A\|_\infty = \max_{1 \le j \le n} \sum_{k=1}^{n} |a_{jk}|,
\|A\|_2 = \sqrt{\rho(A A^T)},

where ρ(AA^T) is the largest of the absolute values of the eigenvalues of the matrix AA^T; see §3.2(iv). (We are assuming that the matrix A is real; if not, A^T is replaced by A^H, the transpose of the complex conjugate of A.)

The sensitivity of the solution vector x in (3.2.1) to small perturbations in the matrix A and the vector b is measured by the condition number

3.2.16 \kappa(A) = \|A\|_p \, \|A^{-1}\|_p,

where \|\cdot\|_p is one of the matrix norms. For any norm (3.2.14) we have κ(A) ≥ 1. The larger the value of κ(A), the more ill-conditioned the system.

Let x* denote a computed solution of the system (3.2.1), with r=b-Ax* again denoting the residual. Then we have the a posteriori error bound

3.2.17 \frac{\|x^* - x\|_p}{\|x\|_p} \le \kappa(A) \frac{\|r\|_p}{\|b\|_p}.
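The following sketch, assuming NumPy, evaluates the condition number (3.2.16) in the 2-norm for the matrix of (3.2.6) and checks the a posteriori bound (3.2.17).

```python
import numpy as np

A = np.array([[1., 2., 3.], [2., 3., 1.], [3., 1., 2.]])
b = np.ones(3)

x_star = np.linalg.solve(A, b)          # computed solution
x_exact = np.full(3, 1.0 / 6.0)         # exact solution from the example above
r = b - A @ x_star                      # residual

kappa = np.linalg.cond(A, p=2)          # kappa(A) = ||A||_2 * ||A^{-1}||_2
lhs = np.linalg.norm(x_star - x_exact) / np.linalg.norm(x_exact)
rhs = kappa * np.linalg.norm(r) / np.linalg.norm(b)
print(kappa, lhs <= rhs)                # the bound (3.2.17) holds
```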

For further information see Brezinski (1999) and Trefethen and Bau (1997, Chapter 3).

§3.2(iv) Eigenvalues and Eigenvectors

If A is an n×n matrix, then a real or complex number λ is called an eigenvalue of A, and a nonzero vector x a corresponding (right) eigenvector, if

3.2.18 Ax=λx.

A nonzero vector y is called a left eigenvector of A corresponding to the eigenvalue λ if y^T A = λ y^T or, equivalently, A^T y = λ y. A normalized eigenvector has Euclidean norm 1; compare (3.2.13) with p=2.

The polynomial

3.2.19 p_n(\lambda) = \det[\lambda I - A]

is called the characteristic polynomial of A and its zeros are the eigenvalues of A. The multiplicity of an eigenvalue is its multiplicity as a zero of the characteristic polynomial (§3.8(i)). To an eigenvalue of multiplicity m, there correspond m linearly independent eigenvectors provided that A is nondefective, that is, A has a complete set of n linearly independent eigenvectors.
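As a small illustration, assuming NumPy, the coefficients of the characteristic polynomial (3.2.19) and its zeros can be compared with the eigenvalues computed directly.

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])
coeffs = np.poly(A)                 # [1, -4, 3], i.e. lambda^2 - 4*lambda + 3
print(np.roots(coeffs))             # zeros: 3 and 1
print(np.linalg.eigvals(A))         # the same eigenvalues, computed directly
```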

§3.2(v) Condition of Eigenvalues

If A is nondefective and λ is a simple zero of p_n(λ), then the sensitivity of λ to small perturbations in the matrix A is measured by the condition number

3.2.20 \kappa(\lambda) = \frac{1}{|y^T x|},

where x and y are the normalized right and left eigenvectors of A corresponding to the eigenvalue λ. Because |y^T x| = |cos θ|, where θ is the angle between y and x, we always have κ(λ) ≥ 1. When A is a symmetric matrix, the left and right eigenvectors coincide, yielding κ(λ) = 1, and the calculation of its eigenvalues is a well-conditioned problem.
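A sketch of the computation of κ(λ), assuming NumPy; the left eigenvectors of A are obtained as right eigenvectors of A^T, and the test matrix is an arbitrary nonsymmetric example with sensitive eigenvalues.

```python
import numpy as np

A = np.array([[1.0, 100.0], [0.0, 2.0]])   # nonsymmetric example

lam, X = np.linalg.eig(A)        # eigenvalues and right eigenvectors (columns of X)
lamT, Y = np.linalg.eig(A.T)     # right eigenvectors of A^T = left eigenvectors of A

for k in range(len(lam)):
    x = X[:, k] / np.linalg.norm(X[:, k])
    j = np.argmin(np.abs(lamT - lam[k]))     # left eigenvector for the same eigenvalue
    y = Y[:, j] / np.linalg.norm(Y[:, j])
    print(lam[k], 1.0 / abs(y @ x))          # kappa(lambda) >= 1
```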

§3.2(vi) Lanczos Tridiagonalization of a Symmetric Matrix

Let A be an n×n symmetric matrix. Define the Lanczos vectors v_j and coefficients α_j and β_j by v_0 = 0, a normalized vector v_1 (perhaps chosen randomly), α_1 = v_1^T A v_1, β_1 = 0, and for j = 1,2,\dots,n-1 by the recursive scheme

3.2.21 u = A v_j - \alpha_j v_j - \beta_j v_{j-1},
\beta_{j+1} = \|u\|_2,
v_{j+1} = u / \beta_{j+1},
\alpha_{j+1} = v_{j+1}^T A v_{j+1}.

Then v_j^T v_k = \delta_{j,k}, j,k = 1,2,\dots,n. The tridiagonal matrix

3.2.22 B = \begin{bmatrix} \alpha_1 & \beta_2 & & & 0 \\ \beta_2 & \alpha_2 & \beta_3 & & \\ & \ddots & \ddots & \ddots & \\ & & \beta_{n-1} & \alpha_{n-1} & \beta_n \\ 0 & & & \beta_n & \alpha_n \end{bmatrix}

has the same eigenvalues as A. Its characteristic polynomial can be obtained from the recursion

3.2.23 p_{k+1}(\lambda) = (\lambda - \alpha_{k+1}) p_k(\lambda) - \beta_{k+1}^2 p_{k-1}(\lambda),
k = 0, 1, \dots, n-1,

with p_{-1}(λ) = 0, p_0(λ) = 1.

For numerical information see Stewart (2001, pp. 347–368).
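The recursion (3.2.21) can be sketched as follows, assuming NumPy and ignoring the loss of orthogonality that occurs in floating-point arithmetic (in practice reorthogonalization is used); the function name lanczos is illustrative.

```python
import numpy as np

def lanczos(A, v1):
    """Run (3.2.21) for a symmetric A; return B of (3.2.22) and the Lanczos vectors."""
    n = A.shape[0]
    alpha = np.zeros(n)
    beta = np.zeros(n)                   # beta[0] corresponds to beta_1 = 0
    V = np.zeros((n, n))
    V[:, 0] = v1 / np.linalg.norm(v1)
    alpha[0] = V[:, 0] @ A @ V[:, 0]
    for j in range(n - 1):
        v_prev = V[:, j - 1] if j > 0 else np.zeros(n)
        u = A @ V[:, j] - alpha[j] * V[:, j] - beta[j] * v_prev
        beta[j + 1] = np.linalg.norm(u)
        V[:, j + 1] = u / beta[j + 1]
        alpha[j + 1] = V[:, j + 1] @ A @ V[:, j + 1]
    B = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(beta[1:], -1)
    return B, V

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M + M.T                              # symmetric test matrix
B, V = lanczos(A, rng.standard_normal(5))
print(np.sort(np.linalg.eigvalsh(A)))
print(np.sort(np.linalg.eigvalsh(B)))    # same eigenvalues as A
```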

Lanczos’ method is related to Gauss quadrature considered in §3.5(v). When the matrix A is replaced by a scalar x, the recurrence relation in the first line of (3.2.21) with u = β_{j+1} v_{j+1} is similar to the one in (3.5.30_5). Also, the recurrence relations in (3.2.23) and (3.5.30) are similar, as well as the matrix B in (3.2.22) and the Jacobi matrix J_n in (3.5.31).

§3.2(vii) Computation of Eigenvalues

Many methods are available for computing eigenvalues; see Golub and Van Loan (1996, Chapters 7, 8), Trefethen and Bau (1997, Chapter 5), and Wilkinson (1988, Chapters 8, 9).