3 Numerical Methods

§3.1 Arithmetics and Error Measures


§3.1(i) Floating-Point Arithmetic

Computer arithmetic is described here for the binary system, with base 2; another frequently used system is the hexadecimal system, with base 16.

A nonzero normalized binary floating-point machine number x is represented as

3.1.1   x=(-1)^{s}\cdot(b_{0}.b_{1}b_{2}\dots b_{p-1})\cdot 2^{E},\qquad b_{0}=1,

where s is equal to 1 or 0, each b_{j}, j\geq 1, is either 0 or 1, b_{1} is the most significant bit, p (\in\NatNumber) is the number of significant bits b_{j}, b_{{p-1}} is the least significant bit, E is an integer called the exponent, b_{0}.b_{1}b_{2}\dots b_{{p-1}} is the significand, and f=.b_{1}b_{2}\dots b_{{p-1}} is the fractional part.

The set of machine numbers \Real _{{{\rm fl}}} is the union of \{0\} and the set

3.1.2   (-1)^{s}2^{E}\sum _{j=0}^{p-1}b_{j}2^{-j},

with b_{0}=1 and all allowable choices of E, p, s, and b_{j}.

Let E_{{{\rm min}}}\leq E\leq E_{{{\rm max}}} with E_{{{\rm min}}}<0 and E_{{{\rm max}}}>0. For given values of E_{{{\rm min}}}, E_{{{\rm max}}}, and p, the format width in bits N of a computer word is the total number of bits: the sign (one bit), the significant bits b_{1},b_{2},\dots,b_{{p-1}} (p-1 bits), and the bits allocated to the exponent (the remaining N-p bits). The integers p, E_{{{\rm min}}}, and E_{{{\rm max}}} are characteristics of the machine. The machine epsilon \epsilon _{M}, that is, the distance between 1 and the next larger machine number with E=0, is given by \epsilon _{M}=2^{{-p+1}}. The machine precision is \frac{1}{2}\epsilon _{M}=2^{{-p}}. The lower and upper bounds for the absolute values of the nonzero machine numbers are given by

3.1.3   N_{\rm min}\equiv 2^{E_{\rm min}}\leq|x|\leq 2^{E_{\rm max}+1}\left(1-2^{-p}\right)\equiv N_{\rm max}.

Underflow (overflow) after computing x\neq 0 occurs when |x| is smaller (larger) than N_{{{\rm min}}} (N_{{{\rm max}}}).
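These quantities are easy to check numerically. A minimal sketch in Python, assuming the IEEE double-precision characteristics quoted under the IEEE Standard below (p=53, E_{\rm min}=-1022, E_{\rm max}=1023), evaluates \epsilon _{M}, N_{\rm min}, and N_{\rm max} and compares them with the machine's own constants in sys.float_info:

```python
# Sketch, assuming IEEE double-precision characteristics (see §3.1(i)).
import sys

p, E_min, E_max = 53, -1022, 1023

eps_M = 2.0 ** (-p + 1)        # machine epsilon: distance from 1 to next float
N_min = 2.0 ** E_min           # smallest positive normalized machine number
# N_max = 2^{E_max+1}(1 - 2^{-p}), rewritten as 2^{E_max}(2 - 2^{-p+1})
# so that no intermediate value overflows:
N_max = 2.0 ** E_max * (2.0 - eps_M)

print(eps_M == sys.float_info.epsilon,
      N_min == sys.float_info.min,
      N_max == sys.float_info.max)       # → True True True
```

Magnitudes below N_{\rm min} underflow (gradually, via IEEE subnormal numbers, which the normalized model above excludes); magnitudes above N_{\rm max} overflow.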

IEEE Standard

The current standard is the ANSI/IEEE Standard 754; see IEEE (1985, §§1–4). In the case of normalized binary representation the memory positions for single precision (N=32, p=24, E_{{{\rm min}}}=-126, E_{{{\rm max}}}=127) and double precision (N=64, p=53, E_{{{\rm min}}}=-1022, E_{{{\rm max}}}=1023) are as in Figure 3.1.1. The respective machine precisions are \frac{1}{2}\epsilon _{M}=0.596\times 10^{{-7}} and \frac{1}{2}\epsilon _{M}=0.111\times 10^{{-15}}.

Figure 3.1.1: Floating-point arithmetic. Memory positions in single and double precision, in the case of binary representation. Single precision: 1 sign bit s, 8 exponent bits E, 23 bits f (N=32, p=24). Double precision: 1 sign bit s, 11 exponent bits E, 52 bits f (N=64, p=53).
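The memory layout of Figure 3.1.1 can be inspected directly with Python's struct module. A sketch (the helper name fields is illustrative) that unpacks a double into its sign bit s, unbiased exponent E, and 52-bit fractional part f:

```python
# Sketch: unpack the double-precision layout of Figure 3.1.1
# (1 sign bit, 11 exponent bits, 52 fraction bits).
import struct

def fields(x):
    """Return (s, E, f) for a normalized double x: sign bit,
    unbiased exponent, and fractional part as a 52-bit integer."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    s = bits >> 63
    E = ((bits >> 52) & 0x7FF) - 1023    # stored exponent minus the bias 1023
    f = bits & ((1 << 52) - 1)
    return s, E, f

# x = -1.5 = (-1)^1 * (1.100...0)_2 * 2^0, so s = 1, E = 0, f = 2^51:
print(fields(-1.5))                      # → (1, 0, 2251799813685248)
```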

Rounding

Let x be any positive number with

3.1.4   x=(1.b_{1}b_{2}\dots b_{p-1}b_{p}b_{p+1}\dots)\cdot 2^{E},

N_{{{\rm min}}}\leq x\leq N_{{{\rm max}}}, and

3.1.5
x_{{-}}=(1.b_{1}b_{2}\dots b_{{p-1}})\cdot 2^{E},
x_{{+}}=((1.b_{1}b_{2}\dots b_{{p-1}})+\epsilon _{M})\cdot 2^{E}.

Then rounding by chopping or rounding down of x gives x_{{-}}, with maximum relative error \epsilon _{M}. Symmetric rounding or rounding to nearest of x gives x_{{-}} or x_{{+}}, whichever is nearer to x, with maximum relative error equal to the machine precision \frac{1}{2}\epsilon _{M}=2^{{-p}}.

Negative numbers x are rounded in the same way as -x.
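The two rounding modes and their error bounds can be illustrated by truncating the significand delivered by math.frexp. The helpers chop and round_nearest below are illustrative, not standard; p=5 bits is chosen small enough to make the errors visible. Note that frexp normalizes to m\in[0.5,1) rather than [1,2), which leaves the relative error bounds unchanged:

```python
# Sketch (illustrative helpers): round a positive x to p significant bits.
import math

def chop(x, p):
    """x_-: keep only the leading p significant bits of x > 0 (round down)."""
    m, E = math.frexp(x)                 # x = m * 2^E with m in [0.5, 1)
    return math.ldexp(math.floor(m * 2 ** p), E - p)

def round_nearest(x, p):
    """Round x > 0 to the nearer of x_- and x_+."""
    m, E = math.frexp(x)
    return math.ldexp(round(m * 2 ** p), E - p)

p = 5                                    # eps_M = 2^{-p+1} = 1/16 here
print(chop(math.pi, p), round_nearest(math.pi, p))   # → 3.125 3.125
```

For x=\pi both modes give 3.125; the relative error, about 0.0053, respects both bounds \epsilon _{M}=2^{-4} and \frac{1}{2}\epsilon _{M}=2^{-5}.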

For further information see Goldberg (1991) and Overton (2001).

§3.1(ii) Interval Arithmetic

Interval arithmetic is intended for bounding the total effect of rounding errors of calculations with machine numbers. With this arithmetic the computed result can be proved to lie in a certain interval, which leads to validated computing with guaranteed and rigorous inclusion regions for the results.

Let G be the set of closed intervals \{[a,b]\}. The elementary arithmetical operations on intervals are defined as follows:

3.1.6   I*J=\{x*y\,|\,x\in I,\,y\in J\},\qquad I,J\in G,

where *\in\{+,-,\cdot,/\}, with appropriate roundings of the end points of I*J when machine numbers are being used. Division is possible only if the divisor interval does not contain zero.
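A sketch of (3.1.6) over exact Python numbers, ignoring the directed rounding of end points that a genuine interval package performs; interval_op is an illustrative name. For these four operations on intervals not straddling a forbidden divisor, the extreme values of x*y occur at the end points, so scanning the four corner products suffices:

```python
# Sketch of interval arithmetic per (3.1.6); end-point rounding omitted.
def interval_op(op, I, J):
    """I * J = { x op y : x in I, y in J } for closed intervals (a, b)."""
    a, b = I
    c, d = J
    if op == '/' and c <= 0 <= d:
        raise ZeroDivisionError("divisor interval contains zero")
    f = {'+': lambda x, y: x + y, '-': lambda x, y: x - y,
         '*': lambda x, y: x * y, '/': lambda x, y: x / y}[op]
    corners = [f(x, y) for x in (a, b) for y in (c, d)]
    return (min(corners), max(corners))

print(interval_op('*', (-1, 2), (3, 4)))   # → (-4, 8)
```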

A basic text on interval arithmetic and analysis is Alefeld and Herzberger (1983), and for applications and further information see Moore (1979) and Petković and Petković (1998). The last reference includes analogs for arithmetic in the complex plane \Complex.

§3.1(iii) Rational Arithmetics

Computer algebra systems use exact rational arithmetic with rational numbers p/q, where p and q are multi-length integers. During the calculations common divisors are removed from the rational numbers, and the final results can be converted to decimal representations of arbitrary length. For further information see Matula and Kornerup (1980).
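Python's standard library behaves like the rational arithmetic described above: the fractions module removes common divisors automatically, and the decimal module converts final results to decimal representations of any requested length. A brief sketch:

```python
# Sketch: exact rational arithmetic with automatic reduction.
from decimal import Decimal, getcontext
from fractions import Fraction

x = Fraction(2, 6) + Fraction(1, 6)      # 1/3 + 1/6, stored reduced as 1/2
y = Fraction(1, 3) * Fraction(3, 7)      # 3/21, stored reduced as 1/7

getcontext().prec = 30                   # 30-digit decimal conversion
print(x, y, Decimal(y.numerator) / Decimal(y.denominator))
```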

§3.1(iv) Level-Index Arithmetic

To eliminate overflow or underflow in finite-precision arithmetic, numbers are represented by using generalized logarithms \mathop{\ln\/}\nolimits _{{\ell}}(x) given by

3.1.7
\mathop{\ln\/}\nolimits _{{0}}(x)=x,
\mathop{\ln\/}\nolimits _{{\ell}}(x)=\mathop{\ln\/}\nolimits\!\left(\mathop{\ln\/}\nolimits _{{\ell-1}}(x)\right),\ell=1,2,\dots,

with x\geq 0 and \ell the unique nonnegative integer such that a\equiv\mathop{\ln\/}\nolimits _{{\ell}}(x)\in[0,1). In level-index arithmetic x is represented by \ell+a (or -(\ell+a) for negative numbers). Also in this arithmetic generalized precision can be defined, which includes absolute error and relative precision (§3.1(v)) as special cases.
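A sketch of the representation, following (3.1.7); the helper name level_index is illustrative, not standard. Iterated logarithms are taken until the value drops into [0,1), so for x<1 the level is \ell=0 and a=x:

```python
# Sketch: level-index representation ell + a of x >= 0, per (3.1.7).
import math

def level_index(x):
    """Return ell + a, where ell is the smallest nonnegative integer
    with a = ln_ell(x) in [0, 1)."""
    assert x >= 0
    ell = 0
    while x >= 1:
        x = math.log(x)
        ell += 1
    return ell + x

# x = 10: ln(10) ~ 2.30 >= 1, ln(ln(10)) ~ 0.834 in [0, 1), so ell = 2.
print(level_index(10.0))
```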

For further information see Clenshaw and Olver (1984) and Clenshaw et al. (1989). For applications see Lozier (1993).

For further references on level-index arithmetic (and also other arithmetics) see Anuta et al. (1996). See also Hayes (2009).

§3.1(v) Error Measures

If x^{{*}} is an approximation to a real or complex number x, then the absolute error is

3.1.8   \epsilon _{a}=\left|x^{*}-x\right|.

If x\neq 0, the relative error is

3.1.9   \epsilon _{r}=\left|\frac{x^{*}-x}{x}\right|=\frac{\epsilon _{a}}{\left|x\right|}.

The relative precision is

3.1.10   \epsilon _{\mathit{rp}}=\left|\mathop{\ln\/}\nolimits\!\left(x^{*}/x\right)\right|,

where xx^{{*}}>0 for real variables, and xx^{{*}}\neq 0 for complex variables (with the principal value of the logarithm).

The mollified error is

3.1.11   \epsilon _{m}=\frac{\left|x^{*}-x\right|}{\max\left(\left|x\right|,1\right)}.
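The four measures (3.1.8)–(3.1.11) for real arguments can be collected in a few lines; error_measures is an illustrative name, and the relative precision branch assumes x\,x^{*}>0 as required above:

```python
# Sketch: error measures of (3.1.8)-(3.1.11) for real x, x* with x != 0
# and x*x > 0 (needed for the relative precision).
import math

def error_measures(x_star, x):
    eps_a = abs(x_star - x)                # absolute error        (3.1.8)
    eps_r = eps_a / abs(x)                 # relative error        (3.1.9)
    eps_rp = abs(math.log(x_star / x))     # relative precision    (3.1.10)
    eps_m = eps_a / max(abs(x), 1)         # mollified error       (3.1.11)
    return eps_a, eps_r, eps_rp, eps_m

print(error_measures(3.14, math.pi))
```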

For error measures for complex arithmetic see Olver (1983).