
§3.11 Approximation Techniques


§3.11(i) Minimax Polynomial Approximations

Let f(x) be continuous on a closed interval [a,b]. Then there exists a unique nth degree polynomial p_{n}(x), called the minimax (or best uniform) polynomial approximation to f(x) on [a,b], that minimizes \max_{a\leq x\leq b}\left|\epsilon_{n}(x)\right|, where \epsilon_{n}(x)=f(x)-p_{n}(x).

A sufficient condition for p_{n}(x) to be the minimax polynomial is that \left|\epsilon_{n}(x)\right| attains its maximum at n+2 distinct points in [a,b] and \epsilon_{n}(x) changes sign at these consecutive maxima.

If we have a sufficiently close approximation

3.11.1   p_{n}(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\dots+a_{0}

to f(x), then the coefficients a_{k} can be computed iteratively. Assume that f'(x) is continuous on [a,b] and let x_{0}=a, x_{n+1}=b, and x_{1},x_{2},\dots,x_{n} be the zeros of \epsilon_{n}'(x) in (a,b), arranged so that

3.11.2   x_{0}<x_{1}<x_{2}<\dots<x_{n}<x_{n+1}.

Also, let

3.11.3   m_{j}=(-1)^{j}\epsilon_{n}(x_{j}),   j=0,1,\dots,n+1.

(Thus the m_{j} are approximations to m, where \pm m is the maximum value of \left|\epsilon _{n}(x)\right| on [a,b].)

Then (in general) a better approximation to p_{n}(x) is given by

3.11.4   \sum_{k=0}^{n}(a_{k}+\delta a_{k})x^{k},

where

3.11.5   \sum_{k=0}^{n}x_{j}^{k}\delta a_{k}=(-1)^{j}(m_{j}-m),   j=0,1,\dots,n+1.

This is a set of n+2 equations for the n+2 unknowns \delta a_{0},\delta a_{1},\dots,\delta a_{n} and m.

The iterative process converges locally and quadratically (§3.8(i)).
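As an illustration, the correction step (3.11.3)–(3.11.5) amounts to solving a single (n+2)\times(n+2) linear system. The following Python/numpy sketch (the function name and interface are ours, not part of the reference) performs one such step; between steps the interior x_{j} must be recomputed as the zeros of \epsilon_{n}'(x).

```python
import numpy as np

def minimax_correction(f, a, x):
    # One correction step (3.11.3)-(3.11.5).  a = [a_0, ..., a_n] are the
    # current coefficients; x = [x_0, ..., x_{n+1}] holds the endpoints
    # and the current zeros of eps_n'(x), increasing as in (3.11.2).
    x = np.asarray(x, dtype=float)
    n = len(a) - 1
    eps = f(x) - np.polyval(a[::-1], x)              # eps_n(x_j)
    # (3.11.5) rearranged: sum_k x_j^k da_k + (-1)^j m = eps_n(x_j),
    # an (n+2) x (n+2) system for da_0, ..., da_n and m.
    signs = (-1.0) ** np.arange(n + 2)
    A = np.hstack([np.vander(x, n + 1, increasing=True), signs[:, None]])
    sol = np.linalg.solve(A, eps)
    return np.asarray(a) + sol[:-1], sol[-1]         # new coefficients, m
```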

A method for obtaining a sufficiently accurate first approximation is described in the next subsection.

For the theory of minimax approximations see Meinardus (1967). For examples of minimax polynomial approximations to elementary and special functions see Hart et al. (1968). See also Cody (1970) and Ralston (1965).

§3.11(ii) Chebyshev-Series Expansions

The Chebyshev polynomials T_{n} are given by

3.11.6   T_{n}(x)=\cos(n\arccos x),   -1\leq x\leq 1.

They satisfy the recurrence relation

3.11.7   T_{n+1}(x)-2xT_{n}(x)+T_{n-1}(x)=0,   n=1,2,\dots,

with initial values T_{0}(x)=1, T_{1}(x)=x. They enjoy an orthogonal property with respect to integrals:

3.11.8   \int_{-1}^{1}\frac{T_{j}(x)T_{k}(x)}{\sqrt{1-x^{2}}}\,dx=\begin{cases}\pi,&j=k=0,\\ \tfrac{1}{2}\pi,&j=k\neq 0,\\ 0,&j\neq k,\end{cases}

as well as an orthogonal property with respect to sums, as follows. When n>0 and 0\leq j\leq n, 0\leq k\leq n,

3.11.9   {\sum}''_{\ell=0}^{n}T_{j}(x_{\ell})T_{k}(x_{\ell})=\begin{cases}n,&j=k=0\text{ or }n,\\ \tfrac{1}{2}n,&j=k\neq 0\text{ or }n,\\ 0,&j\neq k,\end{cases}

where x_{\ell}=\cos(\pi\ell/n) and the double prime means that the first and last terms are to be halved.
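For illustration, here is a short Python/numpy check of the recurrence (3.11.7) and the discrete orthogonality (3.11.9) (a minimal sketch; the function name is ours):

```python
import numpy as np

def chebyshev_T_values(n, x):
    # T_0(x), ..., T_n(x) via the recurrence (3.11.7), vectorized over x.
    t = np.empty((n + 1,) + np.shape(x))
    t[0] = 1.0
    if n >= 1:
        t[1] = x
    for k in range(1, n):
        t[k + 1] = 2.0 * x * t[k] - t[k - 1]
    return t

n = 5
xl = np.cos(np.pi * np.arange(n + 1) / n)   # x_l = cos(pi*l/n)
T = chebyshev_T_values(n, xl)
w = np.ones(n + 1); w[[0, -1]] = 0.5        # double prime: halve end terms
print(np.sum(w * T[2] * T[3]))              # ~0     (j != k)
print(np.sum(w * T[2] * T[2]))              # ~n/2   (j = k != 0 or n)
```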

For these and further properties of Chebyshev polynomials, see Chapter 18, Gil et al. (2007a, Chapter 3), and Mason and Handscomb (2003).

Chebyshev Expansions

If f is continuously differentiable on [-1,1], then with

3.11.10   c_{n}=\frac{2}{\pi}\int_{0}^{\pi}f(\cos\theta)\cos(n\theta)\,d\theta,   n=0,1,2,\dots,

the expansion

3.11.11   f(x)={\sum}'_{n=0}^{\infty}c_{n}T_{n}(x),   -1\leq x\leq 1,

converges uniformly. Here the single prime on the summation symbol means that the first term is to be halved. In fact, (3.11.11) is the Fourier-series expansion of f(\cos\theta); compare (3.11.6) and §1.8(i).

Furthermore, if f\in C^{\infty}[-1,1], then the convergence of (3.11.11) is usually very rapid; compare (1.8.7) with k arbitrary.

For general intervals [a,b] we rescale:

3.11.12   f(x)={\sum}'_{n=0}^{\infty}d_{n}T_{n}\!\left(\frac{2x-a-b}{b-a}\right).

Because the series (3.11.12) converges rapidly we obtain a very good first approximation to the minimax polynomial p_{n}(x) for [a,b] if we truncate (3.11.12) at its (n+1)th term. This is because in the notation of §3.11(i)

3.11.13   \epsilon_{n}(x)=d_{n+1}T_{n+1}\!\left(\frac{2x-a-b}{b-a}\right),

approximately, and the right-hand side enjoys exactly those properties concerning its maxima and minima that are required for the minimax approximation; compare Figure 18.4.3.

More precisely, it is known that for the interval [a,b], the ratio of the maximum value of the remainder

3.11.14   \left|\sum_{k=n+1}^{\infty}d_{k}T_{k}\!\left(\frac{2x-a-b}{b-a}\right)\right|

to the maximum error of the minimax polynomial p_{n}(x) is bounded by 1+L_{n}, where L_{n} is the nth Lebesgue constant for Fourier series; see §1.8(i). Since L_{0}=1, L_{n} is a monotonically increasing function of n, and (for example) L_{1000}=4.07\dots, in practice the gain in replacing a truncated Chebyshev-series expansion by the corresponding minimax polynomial approximation is hardly worthwhile. Moreover, the set of minimax approximations p_{0}(x),p_{1}(x),p_{2}(x),\dots,p_{n}(x) requires the calculation and storage of \frac{1}{2}(n+1)(n+2) coefficients, whereas the corresponding set of Chebyshev-series approximations requires only n+1 coefficients.

Calculation of Chebyshev Coefficients

The c_{n} in (3.11.11) can be calculated from (3.11.10), but in general it is more efficient to make use of the orthogonal property (3.11.9). Also, in cases where f(x) satisfies a linear ordinary differential equation with polynomial coefficients, the expansion (3.11.11) can be substituted in the differential equation to yield a recurrence relation satisfied by the c_{n}.
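For example, the discrete orthogonality (3.11.9) gives the coefficients of the polynomial interpolating f at the points x_{\ell}=\cos(\pi\ell/N), which approximate the c_{n} of (3.11.10). A Python/numpy sketch (the function name is ours):

```python
import numpy as np

def chebyshev_coeffs(f, N):
    # Approximate c_0, ..., c_N of (3.11.11) by the double-primed sums
    # c_k ~ (2/N) sum'' f(x_l) T_k(x_l) at x_l = cos(pi*l/N), using
    # T_k(cos(theta)) = cos(k*theta); compare (3.11.9) and (3.11.10).
    l = np.arange(N + 1)
    fx = f(np.cos(np.pi * l / N))
    w = np.ones(N + 1); w[[0, -1]] = 0.5    # halve first and last terms
    k = l[:, None]
    return (2.0 / N) * np.sum(w * fx * np.cos(np.pi * k * l / N), axis=1)

c = chebyshev_coeffs(np.exp, 16)   # rapidly decreasing coefficients
```

The result is used with (3.11.11), halving the first term (and, for the interpolating polynomial, the last retained term as well).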

For details and examples of these methods, see Clenshaw (1957, 1962) and Miller (1966). See also Mason and Handscomb (2003, Chapter 10) and Fox and Parker (1968, Chapter 5).

Summation of Chebyshev Series: Clenshaw’s Algorithm

For the expansion (3.11.11), numerical values of the Chebyshev polynomials T_{n}(x) can be generated by application of the recurrence relation (3.11.7). A more efficient procedure is as follows. Let c_{n}T_{n}(x) be the last term retained in the truncated series. Beginning with u_{n+1}=0, u_{n}=c_{n}, we apply

3.11.15   u_{k}=2xu_{k+1}-u_{k+2}+c_{k},   k=n-1,n-2,\dots,0.

Then the sum of the truncated expansion equals \frac{1}{2}(u_{0}-u_{2}). For error analysis and modifications of Clenshaw’s algorithm, see Oliver (1977).
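A direct transcription of (3.11.15) in Python (a minimal sketch; the function name is ours):

```python
def clenshaw(c, x):
    # Sum the truncated series sum'_{k=0}^{n} c_k T_k(x) via (3.11.15).
    n = len(c) - 1
    u1, u2 = 0.0, 0.0                        # u_{k+1}, u_{k+2}
    for k in range(n, 0, -1):                # k = n, n-1, ..., 1
        u1, u2 = 2.0 * x * u1 - u2 + c[k], u1
    u0 = 2.0 * x * u1 - u2 + c[0]
    return 0.5 * (u0 - u2)                   # (u_0 - u_2)/2
```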

Complex Variables

If x is replaced by a complex variable z and f(z) is analytic, then the expansion (3.11.11) converges within an ellipse with foci z=\pm 1. However, in general (3.11.11) affords no advantage in \mathbb{C} for numerical purposes compared with the Maclaurin expansion of f(z).

For further details on Chebyshev-series expansions in the complex plane, see Mason and Handscomb (2003, §5.10).

§3.11(iii) Minimax Rational Approximations

Let f be continuous on a closed interval [a,b] and w be a continuous nonvanishing function on [a,b]: w is called a weight function. Then the minimax (or best uniform) rational approximation

3.11.16   R_{k,\ell}(x)=\frac{p_{0}+p_{1}x+\dots+p_{k}x^{k}}{1+q_{1}x+\dots+q_{\ell}x^{\ell}}

of type [k,\ell] to f on [a,b] minimizes the maximum value of \left|\epsilon_{k,\ell}(x)\right| on [a,b], where

3.11.17   \epsilon_{k,\ell}(x)=\frac{R_{k,\ell}(x)-f(x)}{w(x)}.

The theory of polynomial minimax approximation given in §3.11(i) can be extended to the case when p_{n}(x) is replaced by a rational function R_{k,\ell}(x). There exists a unique solution of this minimax problem, and there are at least k+\ell+2 values x_{j}, a\leq x_{0}<x_{1}<\dots<x_{k+\ell+1}\leq b, such that m_{j}=m, where

3.11.18   m_{j}=(-1)^{j}\epsilon_{k,\ell}(x_{j}),   j=0,1,\dots,k+\ell+1,

and \pm m is the maximum of \left|\epsilon_{k,\ell}(x)\right| on [a,b].

A collection of minimax rational approximations to elementary and special functions can be found in Hart et al. (1968).

A widely implemented and used algorithm for calculating the coefficients p_{j} and q_{j} in (3.11.16) is Remez’s second algorithm. See Remez (1957), Werner et al. (1967), and Johnson and Blair (1973).

Example

With w(x)=1 and 14-digit computation, we obtain the following rational approximation of type [3,3] to the Bessel function J_{0}(x) (§10.2(ii)) on the interval 0\leq x\leq j_{0,1}, where j_{0,1} is the first positive zero of J_{0}(x):

3.11.19   R_{3,3}(x)=\frac{p_{0}+p_{1}x+p_{2}x^{2}+p_{3}x^{3}}{1+q_{1}x+q_{2}x^{2}+q_{3}x^{3}},

with coefficients given in Table 3.11.1.

Table 3.11.1: Coefficients p_{j}, q_{j} for the minimax rational approximation R_{3,3}(x).

j    p_{j}                   q_{j}
0     0.99999 99891 7854
1    -0.34038 93820 9347    -0.34039 05233 8838
2    -0.18915 48376 3222     0.06086 50162 9812
3     0.06658 31942 0166    -0.01864 47680 9090

The error curve is shown in Figure 3.11.1.

Figure 3.11.1: Error R_{3,3}(x)-J_{0}(x) of the minimax rational approximation R_{3,3}(x) to the Bessel function J_{0}(x) for 0\leq x\leq j_{0,1} (=2.40482\ldots).

§3.11(iv) Padé Approximations

Let

3.11.20   f(z)=c_{0}+c_{1}z+c_{2}z^{2}+\cdots

be a formal power series. The rational function

3.11.21   \frac{N_{p,q}(z)}{D_{p,q}(z)}=\frac{a_{0}+a_{1}z+\dots+a_{p}z^{p}}{b_{0}+b_{1}z+\dots+b_{q}z^{q}}

is called a Padé approximant at zero of f if

3.11.22   N_{p,q}(z)-f(z)D_{p,q}(z)=O\!\left(z^{p+q+1}\right),   z\to 0.

It is denoted by [p/q]_{f}(z). Thus if b_{0}\neq 0, then the Maclaurin expansion of (3.11.21) agrees with (3.11.20) up to, and including, the term in z^{p+q}.

The requirement (3.11.22) implies

3.11.23
a_{0}=c_{0}b_{0},
a_{1}=c_{1}b_{0}+c_{0}b_{1},
\vdots
a_{p}=c_{p}b_{0}+c_{p-1}b_{1}+\dots+c_{p-q}b_{q},
0=c_{p+1}b_{0}+c_{p}b_{1}+\dots+c_{p-q+1}b_{q},
\vdots
0=c_{p+q}b_{0}+c_{p+q-1}b_{1}+\dots+c_{p}b_{q},

where c_{j}=0 if j<0. With b_{0}=1, the last q equations give b_{1},\dots,b_{q} as the solution of a system of linear equations. The first p+1 equations then yield a_{0},\dots,a_{p}.
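For instance, (3.11.23) can be transcribed directly: with b_{0}=1, solve the q\times q system for b_{1},\dots,b_{q}, then form a_{0},\dots,a_{p}. A Python/numpy sketch (the function name is ours):

```python
import numpy as np

def pade(c, p, q):
    # [p/q] Pade approximant from c = [c_0, ..., c_{p+q}] via (3.11.23),
    # normalized so that b_0 = 1.  Returns (a, b), the coefficients of
    # N_{p,q} and D_{p,q} in (3.11.21).
    c = np.asarray(c, dtype=float)
    cc = lambda m: c[m] if m >= 0 else 0.0               # c_j = 0 for j < 0
    b = np.ones(q + 1)
    if q > 0:
        M = np.array([[cc(p + i - j) for j in range(1, q + 1)]
                      for i in range(1, q + 1)])
        b[1:] = np.linalg.solve(M, -c[p + 1:p + q + 1])  # last q equations
    a = np.array([sum(cc(k - j) * b[j] for j in range(q + 1))
                  for k in range(p + 1)])                # first p+1 equations
    return a, b

a, b = pade([1, 1, 1/2], 1, 1)   # [1/1] of e^z: (1 + z/2)/(1 - z/2)
```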

The array of Padé approximants

3.11.24
[0/0]_{f}   [0/1]_{f}   [0/2]_{f}   \cdots
[1/0]_{f}   [1/1]_{f}   [1/2]_{f}   \cdots
[2/0]_{f}   [2/1]_{f}   [2/2]_{f}   \cdots
\vdots      \vdots      \vdots      \ddots

is called a Padé table. Approximants with the same denominator degree are located in the same column of the table.

For convergence results for Padé approximants, and the connection with continued fractions and Gaussian quadrature, see Baker and Graves-Morris (1996, §4.7).

The Padé approximants can be computed by Wynn’s cross rule. Any five approximants arranged in the Padé table as

      N
   W  C  E
      S

satisfy

3.11.25   (N-C)^{-1}+(S-C)^{-1}=(W-C)^{-1}+(E-C)^{-1}.

Starting with the first column [n/0]_{f}, n=0,1,2,\dots, and initializing the preceding column by [n/-1]_{f}=\infty, n=1,2,\dots, we can compute the lower triangular part of the table via (3.11.25). Similarly, the upper triangular part follows from the first row [0/n]_{f}, n=0,1,2,\dots, by initializing [-1/n]_{f}=0, n=1,2,\dots.
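The following Python/numpy sketch (names ours; degenerate cases where neighboring entries coincide are ignored) fills the lower triangular part with values [p/q]_{f}(z) at a fixed point z, solving (3.11.25) for E:

```python
import numpy as np

def pade_table_lower(c, z, cols):
    # T[(p, q)] ~ [p/q]_f(z).  Column 0 holds the partial sums [n/0]_f(z);
    # column -1 is initialized to infinity, so its reciprocal term drops out.
    T, s = {}, 0.0
    rows = len(c) - 1
    for n in range(rows + 1):
        s += c[n] * z**n
        T[(n, 0)], T[(n, -1)] = s, np.inf
    for q in range(cols):
        for p in range(q + 1, rows - q):    # rows with N, S, W available
            C, N, S, W = T[(p, q)], T[(p-1, q)], T[(p+1, q)], T[(p, q-1)]
            invW = 0.0 if np.isinf(W) else 1.0 / (W - C)
            T[(p, q+1)] = C + 1.0 / (1.0/(N - C) + 1.0/(S - C) - invW)
    return T
```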

For the recursive computation of [n+k/k]_{f} by Wynn’s epsilon algorithm, see (3.9.11) and the subsequent text.

Laplace Transform Inversion

Numerical inversion of the Laplace transform (§1.14(iii))

3.11.26   F(s)=\mathscr{L}\left(f;s\right)=\int_{0}^{\infty}e^{-st}f(t)\,dt

requires f=\mathscr{L}^{-1}F to be obtained from numerical values of F. A general procedure is to approximate F by a rational function R (vanishing at infinity) and then approximate f by r=\mathscr{L}^{-1}R. When F has an explicit power-series expansion, a possible choice of R is a Padé approximation to F. See Luke (1969b, §16.4) for several examples involving special functions.

For further information on Padé approximations, see Baker and Graves-Morris (1996, §4.7), Brezinski (1980, pp. 9–39 and 126–177), and Lorentzen and Waadeland (1992, pp. 367–395).

§3.11(v) Least Squares Approximations

Suppose a function f(x) is approximated by the polynomial

3.11.27   p_{n}(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\dots+a_{0}

that minimizes

3.11.28   S=\sum_{j=1}^{J}\left(f(x_{j})-p_{n}(x_{j})\right)^{2}.

Here x_{j}, j=1,2,\dots,J, is a given set of distinct real points and J\geq n+1. From the equations \partial S/\partial a_{k}=0, k=0,1,\dots,n, we derive the normal equations

3.11.29   \begin{bmatrix}X_{0}&X_{1}&\cdots&X_{n}\\ X_{1}&X_{2}&\cdots&X_{n+1}\\ \vdots&\vdots&\ddots&\vdots\\ X_{n}&X_{n+1}&\cdots&X_{2n}\end{bmatrix}\begin{bmatrix}a_{0}\\ a_{1}\\ \vdots\\ a_{n}\end{bmatrix}=\begin{bmatrix}F_{0}\\ F_{1}\\ \vdots\\ F_{n}\end{bmatrix},

where

3.11.30   X_{k}=\sum_{j=1}^{J}x_{j}^{k},   F_{k}=\sum_{j=1}^{J}f(x_{j})x_{j}^{k}.

(3.11.29) is a system of n+1 linear equations for the coefficients a_{0},a_{1},\dots,a_{n}. The matrix is symmetric and positive definite, but the system is ill-conditioned when n is large because the lower rows of the matrix are approximately proportional to one another. If J=n+1, then p_{n}(x) is the Lagrange interpolation polynomial for the set x_{1},x_{2},\dots,x_{J} (§3.3(i)).
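As a sketch (function name ours; x and fx are arrays of the points and function values), the normal equations (3.11.29)–(3.11.30) can be formed and solved directly, bearing in mind the ill-conditioning just noted:

```python
import numpy as np

def ls_poly_coeffs(x, fx, n):
    # Solve (3.11.29) for [a_0, ..., a_n], with X_k, F_k as in (3.11.30).
    # For large n, orthogonal basis functions (discussed below) or a QR
    # factorization of the design matrix are preferable to this approach.
    X = np.array([np.sum(x**k) for k in range(2*n + 1)])      # X_0..X_{2n}
    A = np.array([[X[k + l] for l in range(n + 1)] for k in range(n + 1)])
    F = np.array([np.sum(fx * x**k) for k in range(n + 1)])   # F_0..F_n
    return np.linalg.solve(A, F)
```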

More generally, let f(x) be approximated by a linear combination

3.11.31   \Phi_{n}(x)=a_{n}\phi_{n}(x)+a_{n-1}\phi_{n-1}(x)+\dots+a_{0}\phi_{0}(x)

of given functions \phi _{k}(x), k=0,1,\dots,n, that minimizes

3.11.32   \sum_{j=1}^{J}w(x_{j})\left(f(x_{j})-\Phi_{n}(x_{j})\right)^{2},

w(x) being a given positive weight function, and again J\geq n+1. Then (3.11.29) is replaced by

3.11.33   \begin{bmatrix}X_{00}&X_{01}&\cdots&X_{0n}\\ X_{10}&X_{11}&\cdots&X_{1n}\\ \vdots&\vdots&\ddots&\vdots\\ X_{n0}&X_{n1}&\cdots&X_{nn}\end{bmatrix}\begin{bmatrix}a_{0}\\ a_{1}\\ \vdots\\ a_{n}\end{bmatrix}=\begin{bmatrix}F_{0}\\ F_{1}\\ \vdots\\ F_{n}\end{bmatrix},

with

3.11.34   X_{k\ell}=\sum_{j=1}^{J}w(x_{j})\phi_{k}(x_{j})\phi_{\ell}(x_{j}),

and

3.11.35   F_{k}=\sum_{j=1}^{J}w(x_{j})f(x_{j})\phi_{k}(x_{j}).

Since X_{k\ell}=X_{\ell k}, the matrix is again symmetric.

If the functions \phi _{k}(x) are linearly independent on the set x_{1},x_{2},\dots,x_{J}, that is, the only solution of the system of equations

3.11.36   \sum_{k=0}^{n}c_{k}\phi_{k}(x_{j})=0,   j=1,2,\dots,J,

is c_{0}=c_{1}=\dots=c_{n}=0, then the approximation \Phi _{n}(x) is determined uniquely.

Now suppose that X_{k\ell}=0 when k\neq\ell, that is, the functions \phi_{k}(x) are orthogonal with respect to weighted summation on the discrete set x_{1},x_{2},\dots,x_{J}. Then the system (3.11.33) is diagonal and hence well-conditioned.

A set of functions \phi _{0}(x),\phi _{1}(x),\dots,\phi _{n}(x) that is linearly independent on the set x_{1},x_{2},\dots,x_{J} (compare (3.11.36)) can always be orthogonalized in the sense given in the preceding paragraph by the Gram–Schmidt procedure; see Gautschi (1997a).
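A minimal sketch of such an orthogonalization with respect to the discrete weighted inner product (classical Gram–Schmidt; in finite-precision work the modified variant is numerically preferable):

```python
import numpy as np

def orthogonalize(phi, w):
    # phi[k, j] = phi_k(x_j); w[j] = w(x_j).  Returns psi with
    # sum_j w[j] * psi[k, j] * psi[l, j] = 0 whenever k != l.
    psi = np.array(phi, dtype=float)
    for k in range(len(psi)):
        for i in range(k):
            psi[k] -= (np.sum(w * psi[k] * psi[i])
                       / np.sum(w * psi[i] * psi[i])) * psi[i]
    return psi
```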

Example. The Discrete Fourier Transform

We take n complex exponentials \phi_{k}(x)=e^{ikx}, k=0,1,\dots,n-1, and approximate f(x) by the linear combination (3.11.31). The functions \phi_{k}(x) are orthogonal on the set x_{0},x_{1},\dots,x_{n-1}, x_{j}=2\pi j/n, with respect to the weight function w(x)=1, in the sense that

3.11.37   \sum_{j=0}^{n-1}\phi_{k}(x_{j})\overline{\phi_{\ell}(x_{j})}=n\delta_{k,\ell},   k,\ell=0,1,\dots,n-1,

\delta_{k,\ell} being Kronecker’s symbol and the bar denoting complex conjugate. In consequence we can solve the system

3.11.38   f_{j}=\sum_{k=0}^{n-1}a_{k}\phi_{k}(x_{j}),   j=0,1,\dots,n-1,

and obtain

3.11.39   a_{k}=\frac{1}{n}\sum_{j=0}^{n-1}f_{j}\overline{\phi_{k}(x_{j})},   k=0,1,\dots,n-1.

With this choice of a_{k} and f_{j}=f(x_{j}), the corresponding sum (3.11.32) vanishes.

The pair of vectors \{\mathbf{f},\mathbf{a}\}

3.11.40   \mathbf{f}=[f_{0},f_{1},\dots,f_{n-1}]^{\rm T},   \mathbf{a}=[a_{0},a_{1},\dots,a_{n-1}]^{\rm T},

is called a discrete Fourier transform pair.
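A small numerical check of the pair (3.11.38)–(3.11.39) in Python/numpy:

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
f = rng.standard_normal(n)                     # f_j = f(x_j)
x = 2 * np.pi * np.arange(n) / n
phi = np.exp(1j * np.outer(np.arange(n), x))   # phi[k, j] = e^{i k x_j}
a = (phi.conj() @ f) / n                       # (3.11.39)
assert np.allclose(phi.T @ a, f)               # recovers (3.11.38)
```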

The Fast Fourier Transform

The direct computation of the discrete Fourier transform (3.11.38), that is, of

3.11.41   f_{j}=\sum_{k=0}^{n-1}a_{k}\omega_{n}^{jk},   \omega_{n}=e^{2\pi i/n},   j=0,1,\dots,n-1,

requires approximately n^{2} multiplications. The method of the fast Fourier transform (FFT) exploits the structure of the matrix \boldsymbol{\Omega} with elements \omega_{n}^{jk}, j,k=0,1,\dots,n-1. If n=2^{m}, then \boldsymbol{\Omega} can be factored into m matrices, the rows of which contain only a few nonzero entries, the nonzero entries being equal apart from signs. As a consequence of this structure, the number of operations can be reduced to nm=n\log_{2}n.

The property

3.11.42   \omega_{n}^{2(k-(n/2))}=\omega_{n/2}^{k}

is of fundamental importance in the FFT algorithm. If n is not a power of 2, then modifications are possible. For the original reference see Cooley and Tukey (1965). For further details and algorithms, see Van Loan (1992).
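A recursive radix-2 sketch of (3.11.41), using the even/odd splitting that (3.11.42) makes possible (the function name is ours; under this sign convention the result equals len(a)*np.fft.ifft(a)):

```python
import numpy as np

def fft_pow2(a):
    # Evaluate f_j = sum_k a_k * omega_n^{jk}, omega_n = e^{2 pi i/n},
    # for n = 2^m, by splitting into even- and odd-indexed coefficients.
    a = np.asarray(a, dtype=complex)
    n = len(a)
    if n == 1:
        return a
    even, odd = fft_pow2(a[0::2]), fft_pow2(a[1::2])
    w = np.exp(2j * np.pi * np.arange(n // 2) / n)    # omega_n^j
    return np.concatenate([even + w * odd, even - w * odd])
```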

For further information on least squares approximations, including examples, see Gautschi (1997a, Chapter 2) and Björck (1996, Chapters 1 and 2).

§3.11(vi) Splines

Splines are defined piecewise and usually by low-degree polynomials. Given n+1 distinct points x_{k} in the real interval [a,b], with (a=)x_{0}<x_{1}<\cdots<x_{n-1}<x_{n}(=b), on each subinterval [x_{k},x_{k+1}], k=0,1,\dots,n-1, a low-degree polynomial is defined whose coefficients are determined by, for example, the values f_{k} and f_{k}' of a function f and its derivative at the nodes x_{k} and x_{k+1}. Together these polynomials define a function, the spline, on [a,b]. By taking more derivatives into account, the smoothness of the spline increases.
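For example, prescribing f_{k} and f_{k}' at the nodes yields the cubic Hermite spline; a Python/numpy sketch (names ours):

```python
import numpy as np

def hermite_spline(xk, fk, dfk, x):
    # Evaluate the piecewise cubic that matches f_k and f'_k at the nodes.
    xk, fk, dfk = map(np.asarray, (xk, fk, dfk))
    i = np.clip(np.searchsorted(xk, x) - 1, 0, len(xk) - 2)
    h = xk[i + 1] - xk[i]
    t = (np.asarray(x) - xk[i]) / h              # local variable in [0, 1]
    h00 = (1 + 2*t) * (1 - t)**2                 # Hermite basis cubics
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t)
    h11 = t**2 * (t - 1)
    return h00*fk[i] + h*h10*dfk[i] + h01*fk[i+1] + h*h11*dfk[i+1]
```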

For splines based on Bernoulli and Euler polynomials, see §24.17(ii).

For many applications a spline function is a more adaptable approximating tool than the Lagrange interpolation polynomial involving a comparable number of parameters; see §3.3(i), where a single polynomial is used for interpolating f(x) on the complete interval [a,b]. Multivariate functions can also be approximated in terms of multivariate polynomial splines. See de Boor (2001), Chui (1988), and Schumaker (1981) for further information.

In computer graphics a special type of spline is used which produces a Bézier curve. A cubic Bézier curve is defined by four points. Two are endpoints: (x_{0},y_{0}) and (x_{3},y_{3}); the other points, (x_{1},y_{1}) and (x_{2},y_{2}), are control points. The curve is tangent at (x_{0},y_{0}) to the line through (x_{0},y_{0}) and (x_{1},y_{1}); similarly, at (x_{3},y_{3}) it is tangent to the line through (x_{2},y_{2}) and (x_{3},y_{3}). The curve is described by x(t) and y(t), which are cubic polynomials in t\in[0,1]. A complete spline results from composing several Bézier curves. A special application area of Bézier curves is mathematical typography and the design of type fonts. See Knuth (1986, pp. 116–136).
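A minimal sketch of the standard Bernstein form of a cubic Bézier curve (the function name is ours):

```python
import numpy as np

def bezier_cubic(P, t):
    # P is a 4 x 2 array: endpoints P[0], P[3]; control points P[1], P[2].
    # Returns the curve points (x(t), y(t)) for parameter values t in [0, 1].
    t = np.asarray(t, dtype=float)[..., None]
    return ((1 - t)**3 * P[0] + 3*(1 - t)**2 * t * P[1]
            + 3*(1 - t) * t**2 * P[2] + t**3 * P[3])

pts = bezier_cubic(np.array([[0, 0], [1, 2], [3, 2], [4, 0]]),
                   np.linspace(0.0, 1.0, 50))
```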