§3.6 Linear Difference Equations

§3.6(i) Introduction

Many special functions satisfy second-order recurrence relations, or difference equations, of the form

3.6.1   a_{n}w_{n+1}-b_{n}w_{n}+c_{n}w_{n-1}=d_{n},

or equivalently,

3.6.2   a_{n}\Delta^{2}w_{n-1}+(2a_{n}-b_{n})\Delta w_{n-1}+(a_{n}-b_{n}+c_{n})w_{n-1}=d_{n},

where \Delta w_{n-1}=w_{n}-w_{n-1}, \Delta^{2}w_{n-1}=\Delta w_{n}-\Delta w_{n-1}, and n\in\mathbb{Z}. If d_{n}=0, \forall n, then the difference equation is homogeneous; otherwise it is inhomogeneous.

Difference equations are simple and attractive for computation. In practice, however, problems of severe instability often arise, and in §§3.6(ii)–3.6(vii) we show how these difficulties may be overcome.

§3.6(ii) Homogeneous Equations

Given numerical values of w_{0} and w_{1}, the solution w_{n} of the equation

3.6.3   a_{n}w_{n+1}-b_{n}w_{n}+c_{n}w_{n-1}=0,

with a_{n}\neq 0, \forall n, can be computed recursively for n=2,3,\dots. Unless exact arithmetic is being used, however, each step of the calculation introduces rounding errors. These errors have the effect of perturbing the solution by unwanted small multiples of w_{n} and of an independent solution g_{n}, say. This is of little consequence if the wanted solution is growing in magnitude at least as fast as any other solution of (3.6.3), and the recursion process is stable.
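
For example, a minimal Python sketch of the forward recursion (the function name is illustrative; the coefficient sequences are assumed indexable for n=1,2,\dots):

def forward_recursion(a, b, c, w0, w1, N):
    """Compute w_0, ..., w_N of (3.6.3) by forward recursion, assuming a_n != 0."""
    w = [w0, w1]
    for n in range(1, N):
        w.append((b[n] * w[n] - c[n] * w[n - 1]) / a[n])   # (3.6.3) solved for w_{n+1}
    return w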

But suppose that w_{n} is a nontrivial solution such that

3.6.4   w_{n}/g_{n}\to 0, \qquad n\to\infty.

Then w_{n} is said to be a recessive (equivalently, minimal or distinguished) solution as n\to\infty, and it is unique except for a constant factor. In this situation the unwanted multiples of g_{n} grow more rapidly than the wanted solution, and the computations are unstable. Stability can be restored, however, by backward recursion, provided that c_{n}\neq 0, \forall n: starting from w_{N} and w_{{N+1}}, with N large, equation (3.6.3) is applied to generate in succession w_{{N-1}},w_{{N-2}},\dots,w_{0}. The unwanted multiples of g_{n} now decay in comparison with w_{n}, hence are of little consequence.
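
Correspondingly, a minimal Python sketch of the backward recursion, assuming values (or sufficiently accurate approximations) of w_N and w_{N+1} are available:

def backward_recursion(a, b, c, wN, wN1, N):
    """Generate w_{N-1}, ..., w_0 of (3.6.3) by backward recursion, assuming c_n != 0."""
    w = [0.0] * (N + 2)
    w[N], w[N + 1] = wN, wN1
    for n in range(N, 0, -1):
        w[n - 1] = (b[n] * w[n] - a[n] * w[n + 1]) / c[n]   # (3.6.3) solved for w_{n-1}
    return w[:N + 1]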

The values of w_{N} and w_{{N+1}} needed to begin the backward recursion may be available, for example, from asymptotic expansions (§2.9). However, there are alternative procedures that do not require w_{N} and w_{{N+1}} to be known in advance. These are described in §§ 3.6(iii) and 3.6(v).

§3.6(iii) Miller’s Algorithm

Because the recessive solution of a homogeneous equation is the fastest growing solution in the backward direction, it occurred to J.C.P. Miller (Bickley et al. (1952, pp. xvi–xvii)) that arbitrary “trial values” can be assigned to w_{N} and w_{N+1}, for example, 1 and 0. A “trial solution” is then computed by backward recursion, in the course of which the components of the unwanted solution g_{n} present in the trial values die away relative to the wanted solution. It therefore remains to apply a normalizing factor \Lambda. The process is then repeated with a higher value of N, and the normalized solutions compared. If agreement is not within a prescribed tolerance, the cycle is continued.

The normalizing factor \Lambda can be the true value of w_{0} divided by its trial value, or \Lambda can be chosen to satisfy a known property of the wanted solution of the form

3.6.5   \sum_{n=0}^{\infty}\lambda_{n}w_{n}=1,

where the \lambda’s are constants. The latter method is usually superior when the true value of w_{0} is zero or pathologically small.
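
As a concrete illustration (anticipating Example 1 of §3.6(vi)), the Bessel functions J_{n}(x) satisfy the standard identity J_{0}(x)+2\sum_{k\geq 1}J_{2k}(x)=1, so in (3.6.5) one may take \lambda_{0}=1, \lambda_{2k}=2, \lambda_{2k+1}=0. A minimal Python sketch of Miller's algorithm with this normalization (the function name and the choice of starting index N are illustrative):

def miller_bessel_j(x, n_max, N):
    """Approximate J_0(x), ..., J_{n_max}(x) by Miller's algorithm: trial values
    w_{N+1} = 0, w_N = 1 are recurred backward with w_{n-1} = (2n/x) w_n - w_{n+1},
    and the result is normalized by the trial value of J_0 + 2*(J_2 + J_4 + ...)."""
    w = [0.0] * (N + 2)
    w[N] = 1.0
    for n in range(N, 0, -1):
        w[n - 1] = (2.0 * n / x) * w[n] - w[n + 1]
    s = w[0] + 2.0 * sum(w[2:N + 1:2])      # trial value of the normalizing sum (3.6.5)
    return [wn / s for wn in w[:n_max + 1]]

# miller_bessel_j(1.0, 10, 30) should reproduce J_n(1) for n = 0, ..., 10;
# repeating with N = 40 and comparing the results checks that N was taken large enough.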

For further information on Miller’s algorithm, including examples, convergence proofs, and error analyses, see Wimp (1984, Chapter 4), Gautschi (1967, 1997b), and Olver (1964a). See also Gautschi (1967) and Gil et al. (2007a, Chapter 4) for the computation of recessive solutions via continued fractions.

§3.6(iv) Inhomogeneous Equations

Similar principles apply to equation (3.6.1) when a_{n}c_{n}\neq 0, \forall n, and d_{n}\neq 0 for some, or all, values of n. If, as n\to\infty, the wanted solution w_{n} grows (decays) in magnitude at least as fast as any solution of the corresponding homogeneous equation, then forward (backward) recursion is stable.

A new problem arises, however, if, as n\to\infty, the asymptotic behavior of w_{n} is intermediate to those of two independent solutions f_{n} and g_{n} of the corresponding homogeneous equation (the complementary functions). More precisely, assume that f_{0}\neq 0, g_{n}\neq 0 for all sufficiently large n, and as n\to\infty

3.6.6   f_{n}/g_{n}\to 0, \qquad w_{n}/g_{n}\to 0.

Then computation of w_{n} by forward recursion is unstable. If it also happens that f_{n}/w_{n}\to 0 as n\to\infty, then computation of w_{n} by backward recursion is unstable as well. However, w_{n} can be computed successfully in these circumstances by boundary-value methods, as follows.

Let us assume the normalizing condition is of the form w_{0}=\lambda, where \lambda is a constant, and take w_{N}^{(N)}=0, where N is an arbitrary positive integer. Applying (3.6.1) for n=1,2,\dots,N-1 then yields a tridiagonal system of algebraic equations for the unknowns w_{1}^{(N)},w_{2}^{(N)},\dots,w_{N-1}^{(N)}, which can be solved as in §3.2(ii).
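
As an illustration of this boundary-value formulation, here is a minimal Python sketch in which a dense solve stands in for the tridiagonal elimination of §3.2(ii); the function name is illustrative:

import numpy as np

def boundary_value_solve(a, b, c, d, lam, N):
    """Solve a_n w_{n+1} - b_n w_n + c_n w_{n-1} = d_n for n = 1, ..., N-1,
    with w_0 = lam and w_N^{(N)} = 0, as a linear system for
    w_1^{(N)}, ..., w_{N-1}^{(N)}."""
    A = np.zeros((N - 1, N - 1))
    rhs = np.array([d[n] for n in range(1, N)], dtype=float)
    for i, n in enumerate(range(1, N)):
        A[i, i] = -b[n]                    # coefficient of w_n
        if i > 0:
            A[i, i - 1] = c[n]             # coefficient of w_{n-1}
        if i < N - 2:
            A[i, i + 1] = a[n]             # coefficient of w_{n+1}
    rhs[0] -= c[1] * lam                   # known w_0 = lam moved to the right-hand side
    w_inner = np.linalg.solve(A, rhs)
    return np.concatenate(([lam], w_inner, [0.0]))   # w_0, w_1^{(N)}, ..., w_{N-1}^{(N)}, w_N^{(N)}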

§3.6(v) Olver’s Algorithm

To apply the method just described, a succession of values can be prescribed for the arbitrary integer N and the results compared. However, a more powerful procedure combines the solution of the algebraic equations with the determination of the optimum value of N. It applies equally to the computation of the recessive solution of the homogeneous equation (3.6.3) and to the computation of any solution w_{n} of the inhomogeneous equation (3.6.1) for which the conditions of §3.6(iv) are satisfied.

Suppose again that f_{0}\neq 0, w_{0} is given, and we wish to calculate w_{1},w_{2},\dots,w_{M} to a prescribed relative accuracy \epsilon for a given value of M. We first compute, by forward recurrence, the solution p_{n} of the homogeneous equation (3.6.3) with initial values p_{0}=0, p_{1}=1. At the same time we construct a sequence e_{n}, n=0,1,\dots, defined by

3.6.8   a_{n}e_{n}=c_{n}e_{n-1}-d_{n}p_{n},

beginning with e_{0}=w_{0}. (This part of the process is equivalent to forward elimination.) The computation is continued until a value N (\geq M) is reached for which

3.6.9   \left|\frac{e_{N}}{p_{N}p_{N+1}}\right|\leq\epsilon\min_{1\leq n\leq M}\left|\frac{e_{n}}{p_{n}p_{n+1}}\right|.

Then w_{n} is generated by backward recursion from

3.6.10   p_{n+1}w_{n}=p_{n}w_{n+1}+e_{n},

starting with w_{N}=0. (This part of the process is back substitution.)
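
A minimal Python sketch of the whole procedure (the function name is illustrative; the coefficient sequences are assumed indexable for n=1,2,\dots):

def olver(a, b, c, d, w0, M, eps, N_max=1000):
    """Sketch of Olver's algorithm for (3.6.1): forward recurrence for p_n and e_n,
    stopping at the first N >= M satisfying (3.6.9), then back substitution (3.6.10).
    Returns w_0, ..., w_N."""
    p = [0.0, 1.0]                                         # p_0 = 0, p_1 = 1
    e = [w0]                                               # e_0 = w_0
    bound = None
    for n in range(1, N_max + 1):
        p.append((b[n] * p[n] - c[n] * p[n - 1]) / a[n])   # (3.6.3) solved for p_{n+1}
        e.append((c[n] * e[n - 1] - d[n] * p[n]) / a[n])   # (3.6.8) solved for e_n
        if n == M:
            # right-hand side of the stopping criterion (3.6.9)
            bound = eps * min(abs(e[k] / (p[k] * p[k + 1])) for k in range(1, M + 1))
        if n >= M and abs(e[n] / (p[n] * p[n + 1])) <= bound:
            N = n
            break
    else:
        raise RuntimeError("no N <= N_max satisfies (3.6.9)")
    w = [0.0] * (N + 1)                                    # w_N = 0 starts the back substitution
    for k in range(N - 1, -1, -1):
        w[k] = (p[k] * w[k + 1] + e[k]) / p[k + 1]         # (3.6.10)
    return w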

An example is included in the next subsection. For further information, including a more general form of normalizing condition, other examples, convergence proofs, and error analyses, see Olver (1967a), Olver and Sookne (1972), and Wimp (1984, Chapter 6).

§3.6(vi) Examples

Example 1. Bessel Functions

The difference equation

3.6.11   w_{n+1}-2nw_{n}+w_{n-1}=0, \qquad n=1,2,\dots,

is satisfied by J_{n}(1) and Y_{n}(1), where J_{n}(x) and Y_{n}(x) are the Bessel functions of the first and second kinds, respectively. For large n,

3.6.12   J_{n}(1)\sim\frac{1}{(2\pi n)^{1/2}}\left(\frac{e}{2n}\right)^{n},
3.6.13   Y_{n}(1)\sim-\left(\frac{2}{\pi n}\right)^{1/2}\left(\frac{2n}{e}\right)^{n};

see §10.19(i). Thus Y_{n}(1) is dominant and can be computed by forward recursion, whereas J_{n}(1) is recessive and has to be computed by backward recursion. The backward recursion can be carried out using independently computed values of J_{N}(1) and J_{N+1}(1), or by use of Miller's algorithm (§3.6(iii)) or Olver's algorithm (§3.6(v)).

Example 2. Weber Function

The Weber function \mathbf{E}_{n}(1) satisfies

3.6.14   w_{n+1}-2nw_{n}+w_{n-1}=-(2/\pi)\left(1-(-1)^{n}\right),

for n=1,2,\dots, and as n\to\infty

3.6.15   \mathbf{E}_{2n}(1)\sim\frac{2}{(4n^{2}-1)\pi},
3.6.16   \mathbf{E}_{2n+1}(1)\sim\frac{2}{(2n+1)\pi};

see §11.11(ii). Thus the asymptotic behavior of the particular solution \mathbf{E}_{n}(1) is intermediate to those of the complementary functions J_{n}(1) and Y_{n}(1); moreover, the conditions for Olver's algorithm are satisfied. We apply the algorithm to compute \mathbf{E}_{n}(1) to 8S for the range n=1,2,\dots,10, beginning with the value \mathbf{E}_{0}(1)=-0.56865\;663 obtained from the Maclaurin series expansion (§11.10(iii)).

In the notation of §3.6(v) we have M=10 and \epsilon=\tfrac{1}{2}\times 10^{-8}. The least value of N that satisfies (3.6.9) is found to be 16. The results of the computations are displayed in Table 3.6.1. The values of w_{n} for n=1,2,\dots,10 are the wanted values of \mathbf{E}_{n}(1). (It should be observed that for n>10, however, the w_{n} are progressively poorer approximations to \mathbf{E}_{n}(1): the underlined digits are in error.)
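
For instance, with the olver() sketch given in §3.6(v), this computation can be reproduced as follows (array indexing from 0; the length 50 is merely a safe upper bound for the coefficient sequences):

import math

n_top = 50
a = [1.0] * n_top
b = [2.0 * n for n in range(n_top)]
c = [1.0] * n_top
d = [-(2.0 / math.pi) * (1.0 - (-1.0) ** n) for n in range(n_top)]
w = olver(a, b, c, d, w0=-0.56865663, M=10, eps=0.5e-8)
# w[1], ..., w[10] should agree with the E_n(1) column of Table 3.6.1,
# and len(w) - 1 (the index N of (3.6.9)) should come out as 16.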

Table 3.6.1: Weber function w_{n}=\mathbf{E}_{n}(1) computed by Olver's algorithm.
n     p_{n}                  e_{n}                  e_{n}/(p_{n}p_{n+1})    w_{n}
0     0.00000 000            −0.56865 663                                   −0.56865 663
1     0.10000 000 ×10¹       0.70458 291            0.35229 146             0.43816 243
2     0.20000 000 ×10¹       0.70458 291            0.50327 351 ×10⁻¹       0.17174 195
3     0.70000 000 ×10¹       0.96172 597 ×10¹       0.34347 356 ×10⁻¹       0.24880 538
4     0.40000 000 ×10²       0.96172 597 ×10¹       0.76815 174 ×10⁻³       0.47850 795 ×10⁻¹
5     0.31300 000 ×10³       0.40814 124 ×10³       0.42199 534 ×10⁻³       0.13400 098
6     0.30900 000 ×10⁴       0.40814 124 ×10³       0.35924 754 ×10⁻⁵       0.18919 443 ×10⁻¹
7     0.36767 000 ×10⁵       0.47221 340 ×10⁵       0.25102 029 ×10⁻⁵       0.93032 343 ×10⁻¹
8     0.51164 800 ×10⁶       0.47221 340 ×10⁵       0.11324 804 ×10⁻⁷       0.10293 811 ×10⁻¹
9     0.81496 010 ×10⁷       0.10423 616 ×10⁸       0.87496 485 ×10⁻⁸       0.71668 638 ×10⁻¹
10    0.14618 117 ×10⁹       0.10423 616 ×10⁸       0.24457 824 ×10⁻¹⁰      0.65021 292 ×10⁻²
11    0.29154 738 ×10¹⁰      0.37225 201 ×10¹⁰      0.19952 026 ×10⁻¹⁰      0.58373 946 ×10⁻¹
12    0.63994 242 ×10¹¹      0.37225 201 ×10¹⁰      0.37946 279 ×10⁻¹³      0.44851 3\underline{87} ×10⁻²
13    0.15329 463 ×10¹³      0.19555 304 ×10¹³      0.32057 909 ×10⁻¹³      0.49269 \underline{383} ×10⁻¹
14    0.39792 611 ×10¹⁴      0.19555 304 ×10¹³      0.44167 174 ×10⁻¹⁶      0.327\underline{92 861} ×10⁻²
15    0.11126 602 ×10¹⁶      0.14186 384 ×10¹⁶      0.38242 250 ×10⁻¹⁶      0.425\underline{50 628} ×10⁻¹
16    0.33340 012 ×10¹⁷      0.14186 384 ×10¹⁶      0.39924 861 ×10⁻¹⁹      0.\underline{00000 000}

§3.6(vii) Linear Difference Equations of Other Orders

Similar considerations apply to the first-order equation

3.6.17   a_{n}w_{n+1}-b_{n}w_{n}=d_{n}.

Thus in the inhomogeneous case it may sometimes be necessary to recur backwards to achieve stability. For analyses and examples see Gautschi (1997b).
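
A minimal Python sketch of backward recursion for (3.6.17), assuming the wanted solution decays relative to solutions of the corresponding homogeneous equation so that a trial value (for example 0) at a large index N suffices; the function name is illustrative:

def first_order_backward(a, b, d, N, wN=0.0):
    """Generate w_{N-1}, ..., w_0 of (3.6.17) by backward recursion from a value
    (or trial value) w_N, assuming b_n != 0."""
    w = [0.0] * (N + 1)
    w[N] = wN
    for n in range(N - 1, -1, -1):
        w[n] = (a[n] * w[n + 1] - d[n]) / b[n]   # (3.6.17) solved for w_n
    return w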

For a difference equation of order k (\geq 3),

3.6.18   a_{n,k}w_{n+k}+a_{n,k-1}w_{n+k-1}+\dots+a_{n,0}w_{n}=d_{n},

or for systems of k first-order inhomogeneous equations, boundary-value methods are the rule rather than the exception. Typically k-\ell conditions are prescribed at the beginning of the range, and \ell conditions at the end. Here \ell is an integer with 0\leq\ell\leq k, and its actual value depends on the asymptotic behavior of the wanted solution in relation to those of the other solutions. Within this framework forward and backward recursion may be regarded as the special cases \ell=0 and \ell=k, respectively.
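
A minimal dense-matrix sketch of such a boundary-value formulation for (3.6.18), with the equation imposed for n=0,1,\dots and with k-\ell values prescribed at the start and \ell at the end (the function name and argument conventions are illustrative; in practice a banded solver would be used):

import numpy as np

def kth_order_boundary_value(coeffs, d, start_vals, end_vals):
    """Solve (3.6.18) as a linear system: coeffs[n] = (a_{n,0}, ..., a_{n,k}),
    the equation is imposed for n = 0, ..., len(coeffs) - 1, start_vals gives
    the first k - l values of w, and end_vals gives the last l values."""
    k = len(coeffs[0]) - 1
    n_eq = len(coeffs)                       # equations for n = 0, ..., n_eq - 1
    n_unknowns = n_eq + k                    # unknowns w_0, ..., w_{n_eq + k - 1}
    A = np.zeros((n_unknowns, n_unknowns))
    rhs = np.zeros(n_unknowns)
    row = 0
    for i, v in enumerate(start_vals):       # k - l prescribed initial values
        A[row, i] = 1.0
        rhs[row] = v
        row += 1
    for n in range(n_eq):                    # the difference equation itself
        for j in range(k + 1):
            A[row, n + j] = coeffs[n][j]
        rhs[row] = d[n]
        row += 1
    for i, v in enumerate(end_vals):         # l prescribed final values
        A[row, n_unknowns - len(end_vals) + i] = 1.0
        rhs[row] = v
        row += 1
    return np.linalg.solve(A, rhs)           # dense solve for illustration only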

For further information see Wimp (1984, Chapters 7–8), Cash and Zahar (1994), and Lozier (1980).