Institute for Reliable Computing
Head: Prof. Dr. Siegfried M. Rump
INTLAB is the Matlab toolbox for self-validating algorithms. It comprises interval arithmetic for real and complex vectors and matrices, automatic differentiation (gradients and Hessians), verified solvers for linear and nonlinear systems, and accurate summation and dot products.
INTLAB has several thousand users in more than 40 countries. INTLAB is used in many areas, from verification of chaos to population biology, from controller design to computer-assisted proofs, from PDEs to Petri nets. Here are some selected references to INTLAB.
The philosophy of INTLAB is that everything is written in Matlab code to ensure best portability. For some architectures and old versions of Matlab (before 5.3), one routine for switching the rounding mode is necessary (rounding is already an integral part of Matlab 5.3 and later under Windows). For a number of other architectures the rounding routine is included in the package (see change rounding mode). A prerequisite for running INTLAB is IEEE 754 arithmetic and the possibility to permanently switch the rounding mode. This is true for a large number of PCs, workstations and mainframes.
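Switching the rounding direction is what makes rigorous enclosures possible: each operation is bounded from below and from above. As a rough language-neutral illustration (a Python sketch, not INTLAB code; INTLAB instead switches the IEEE 754 rounding direction, which gives tighter bounds), the same effect can be imitated by widening each rounded result by one unit in the last place:

```python
# Sketch (not INTLAB): enclosing a sum without switching the hardware
# rounding mode, by widening the rounded result one ulp in each direction.
# Requires Python >= 3.9 for math.nextafter.
import math

def add_enclose(a, b):
    s = a + b                            # rounded to nearest
    lo = math.nextafter(s, -math.inf)    # guaranteed <= exact a+b
    hi = math.nextafter(s, math.inf)     # guaranteed >= exact a+b
    return lo, hi

lo, hi = add_enclose(0.1, 0.2)
print(lo <= 0.3 <= hi)  # True: the enclosure contains the exact sum
```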
Moreover, INTLAB makes extensive use of BLAS routines. This ensures fast computing times, comparable to pure floating-point arithmetic. Interval vector and matrix operations are very fast in INTLAB; however, nonlinear computations and loops may slow down the system significantly due to interpretation overhead and extensive use of the operator concept.
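The speed comes from expressing an interval matrix product through a few ordinary (BLAS) matrix products. For a point matrix A and an interval matrix given by midpoint mB and radius rB, the product has midpoint roughly A*mB and radius |A|*rB. A small pure-Python sketch of this midpoint-radius scheme (illustrative only; INTLAB additionally accounts for the rounding errors of these products, which is omitted here):

```python
# Sketch of midpoint-radius interval matrix multiplication (point * interval):
# mC = A*mB, rC = |A|*rB. INTLAB adds outward rounding terms, omitted here.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def absmat(A):
    return [[abs(x) for x in row] for row in A]

A  = [[1.0, -2.0], [3.0, 4.0]]
mB = [[0.5, 0.0], [0.0, 0.5]]          # midpoint of interval matrix B
rB = [[1e-12, 1e-12], [1e-12, 1e-12]]  # radius of B

mC = matmul(A, mB)           # midpoint of the product
rC = matmul(absmat(A), rB)   # radius of the product
```

Both mC and rC are ordinary matrix products, which is why timings stay close to pure floating point.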
Consider, for example, the following code for timing of arithmetic operations (pure floating point, interval multiplication of two point matrices, point matrix times interval matrix and multiplication of two nondegenerate interval matrices):
  n=200; A=2*rand(n)-1; intA=midrad(A,1e-12); k=100;
  tic; for i=1:k, A*A; end, toc/k
  tic; for i=1:k, intval(A)*A; end, toc/k
  tic; for i=1:k, A*intA; end, toc/k
  tic; for i=1:k, intA*intA; end, toc/k
The result in seconds on a 2.4 GHz Pentium IV is as follows (computing times for complex matrices are similar):
  dimension   pure floating point   verified A*A   verified A*intA   verified intA*intA
  -------------------------------------------------------------------------------------
  n=100       0.0013                0.037          0.0080            0.0114
  n=200       0.012                 0.025          0.051             0.074
  n=500       0.18                  0.36           0.66              0.92

  Table 4.1. Real matrix multiplication
For more information see the papers INTLAB - INTerval LABoratory and Fast and Parallel Interval Arithmetic. These papers provide information about the background, implementation and timing of INTLAB.
We demonstrate the use of the Hessian toolbox by a model problem from
http://www.sor.princeton.edu/~rvdb/ampl/nlmodels/cute/bdqrtic.mod
# Source: Problem 61 in
# A.R. Conn, N.I.M. Gould, M. Lescrenier and Ph.L. Toint,
# "Performance of a multifrontal scheme for partially separable
# optimization",
# Report 88/4, Dept of Mathematics, FUNDP (Namur, B), 1988.
# Copyright (C) 2001 Princeton University
# All Rights Reserved
The function of the model problem is
  function y = f(x)
    N = length(x);
    I = 1:N-4;
    y = sum( (-4*x(I)+3.0).^2 ) + sum( ( x(I).^2 + 2*x(I+1).^2 + ...
          3*x(I+2).^2 + 4*x(I+3).^2 + 5*x(N).^2 ).^2 );

with initial approximation x=ones(N,1) for N=1000. The function f(x) in 1000 unknowns is to be minimized. Note that the above function can be called in various ways. We give a couple of examples with input and output. Ordinary use:
  >> y = f(ones(5,1))
  y =
     226

With first derivative:
  >> yy = f(gradientinit(ones(5,1)))
  gradient value yy.x =
     226
  gradient derivative(s) yy.dx =
      68   120   180   240   300

With second derivative:
  >> yy = f(hessianinit(ones(5,1)))
  Hessian value yy.x =
     226
  Hessian first derivative(s) yy.dx =
      68   120   180   240   300
  Hessian second derivative(s) yy.hx =
     100    16    24    32    40
      16   152    48    64    80
      24    48   252    96   120
      32    64    96   368   160
      40    80   120   160   500

But also verified error bounds for gradient and Hessian are easily computed:
  >> yy = f(intval(hessianinit(ones(5,1))))
  intval Hessian value yy.x =
     226.0000
  intval Hessian first derivative(s) yy.dx =
      68.0000   120.0000   180.0000   240.0000   300.0000
  intval Hessian second derivative(s) yy.hx =
     100.0000    16.0000    24.0000    32.0000    40.0000
      16.0000   152.0000    48.0000    64.0000    80.0000
      24.0000    48.0000   252.0000    96.0000   120.0000
      32.0000    64.0000    96.0000   368.0000   160.0000
      40.0000    80.0000   120.0000   160.0000   500.0000

We use a special format in our Hessian toolbox where uncertain digits are replaced by "_". Since all digits are correct in the above computation, no "_" occurs (for more details see help disp_).
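The values above are easy to cross-check. Below is a Python transcription of f (with 0-based indexing) together with a non-rigorous finite-difference gradient check; the helper grad_fd is an ad hoc name for this sketch and is not part of INTLAB:

```python
# Python transcription of the model function f (0-based indexing) and a
# finite-difference gradient check; illustrative only, not INTLAB code.
def f(x):
    N = len(x)
    s1 = sum((-4.0 * x[i] + 3.0) ** 2 for i in range(N - 4))
    s2 = sum((x[i] ** 2 + 2 * x[i + 1] ** 2 + 3 * x[i + 2] ** 2
              + 4 * x[i + 3] ** 2 + 5 * x[N - 1] ** 2) ** 2
             for i in range(N - 4))
    return s1 + s2

def grad_fd(func, x, h=1e-6):
    # central differences: no rigor, just a plausibility check
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((func(xp) - func(xm)) / (2 * h))
    return g

print(f([1.0] * 5))           # -> 226.0, matching yy.x above
print(grad_fd(f, [1.0] * 5))  # close to [68, 120, 180, 240, 300]
```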
The given starting vector is x=ones(1000,1). Recall that y=f(hessianinit(x)) produces 1000 elements in y.x, 1e6 elements in y.dx and 1e9 elements in y.hx. In full storage this would mean 8 GByte of memory. In our implementation Hessians are stored sparse, using a special storage scheme that allows efficient computations. The following is executable code to calculate an inclusion of a stationary point of f, first performing a simple Newton iteration and then a verification step for the resulting nonlinear system. The error estimates are completely rigorous.
  >> n = 1000;
  >> tic
     X = verifynlss(@f,ones(n,1),'hSparseSPD');
     t = toc
     maxrelerr = max(relerr(X))
  t =
     23.8040
  maxrelerr =
    5.5992e-013

The inclusions of all components of a stationary point are accurate to about 13 decimal digits. We solved the nonlinear system treating the Hessian as sparse. Finally, we can verify that the Hessian for all x in X is symmetric positive definite by showing that it is diagonally dominant with positive diagonal. Therefore, a strict local minimum of f is included in X.
  >> y = f(hessianinit(X)); M = gershgorin(y.hx); all( M.mid>M.rad )
  ans =
     (1,1)        1
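The check in the last line is a standard use of Gershgorin's theorem: for a symmetric matrix, if every Gershgorin disc has midpoint (the diagonal entry) larger than its radius (the sum of the off-diagonal moduli in that row), all eigenvalues are positive and the matrix is positive definite. A plain-Python sketch of this test on a small example matrix (not the INTLAB gershgorin routine):

```python
# Gershgorin test for positive definiteness of a symmetric matrix:
# disc i has midpoint M[i][i] and radius sum_{j != i} |M[i][j]|;
# if midpoint > radius for every row, all eigenvalues are positive.
def gershgorin_spd(M):
    n = len(M)
    return all(M[i][i] > sum(abs(M[i][j]) for j in range(n) if j != i)
               for i in range(n))

M = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
print(gershgorin_spd(M))  # True: every disc lies right of zero
```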
For more information and examples try demohessian, see also "A challenging model problem in global optimization".
As another example of how to write code, consider the following nonlinear system solver for full Jacobian in INTLAB code:
  function [ X , xs ] = verifynlss(f,xs)
  %VERIFYNLSS   Verified solution of nonlinear system
  %
  %   [ X , xs ] = verifynlss(f,xs)
  %
  % f is name of function, to be called by f(x), xs is an approximation
  % optional output xs  improved approximation
  %

    % floating point Newton iteration
    n = length(xs);
    xsold = xs;
    k = 0;
    while ( norm(xs-xsold)>1e-10*norm(xs) & k<10 ) | k<1
      k = k+1;                    % at most 10, at least 1 iteration performed
      xsold = xs;
      x = initvar(xs);
      y = feval(f,x);
      xs = xs - y.dx\y.x;
    end

    % interval iteration
    R = inv(y.dx);
    Z = - R * feval(f,intval(xs));
    X = Z;
    E = 0.1*rad(X)*hull(-1,1) + midrad(0,realmin);
    ready = 0; k = 0;
    while ~ready & k<10
      k = k+1;
      Y = hull( X + E , 0 );      % epsilon inflation
      Yold = Y;
      x = initvar(xs+Y);
      y = feval(f,x);             % f(x) and Jacobian by
      C = eye(n) - R * y.dx;      % automatic differentiation
      i = 0;
      while ~ready & i<2          % improved interval iteration
        i = i+1;
        X = Z + C * Y;
        ready = all(in0(X,Y));
        Y = intersect(X,Yold);
      end
    end
    if ready
      X = xs+Y;                   % verified inclusion
    else
      X = NaN;                    % inclusion failed
    end

  Algorithm 4.12. Verified solution of nonlinear systems
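The structure of the interval iteration (residual Z, preconditioner R, epsilon inflation, and the contraction test in0) can be illustrated in one dimension with the Krawczyk operator K(X) = xs - R*f(xs) + (1 - R*f'(X))*(X - xs): if K(X) lies in the interior of the candidate interval X, then X contains a unique zero of f. The following is a self-contained Python sketch with crude outward rounding via nextafter; the class Iv and all names are ad hoc for this sketch, not INTLAB:

```python
# 1-D Krawczyk verification sketch for f(x) = x^2 - 2 (zero: sqrt(2)).
# Interval arithmetic with outward rounding via nextafter (Python >= 3.9).
import math

class Iv:
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi
    def __add__(self, o):
        o = o if isinstance(o, Iv) else Iv(o)
        return Iv(math.nextafter(self.lo + o.lo, -math.inf),
                  math.nextafter(self.hi + o.hi, math.inf))
    def __sub__(self, o):
        o = o if isinstance(o, Iv) else Iv(o)
        return Iv(math.nextafter(self.lo - o.hi, -math.inf),
                  math.nextafter(self.hi - o.lo, math.inf))
    def __mul__(self, o):
        o = o if isinstance(o, Iv) else Iv(o)
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Iv(math.nextafter(min(p), -math.inf),
                  math.nextafter(max(p), math.inf))
    def __rsub__(self, s):            # scalar - interval
        return Iv(s) - self
    def in0(self, o):                 # contained in the interior of o?
        return o.lo < self.lo and self.hi < o.hi

f  = lambda X: X * X - Iv(2.0)       # f evaluated in interval arithmetic
df = lambda X: Iv(2.0) * X           # f' evaluated in interval arithmetic

xs = 1.4142135623730951              # floating-point approximate zero
R  = 1.0 / (2.0 * xs)                # approximate inverse of f'(xs)
X  = Iv(xs) + Iv(-1e-10, 1e-10)      # candidate box (epsilon inflation)
K  = Iv(xs) - Iv(R) * f(Iv(xs)) + (1.0 - Iv(R) * df(X)) * (X - Iv(xs))
print(K.in0(X))  # True: X verifiably contains a unique zero of f
```

The Matlab code above does the same for systems of equations, with the Jacobian supplied by automatic differentiation.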
The example is taken from the first paper cited above.
The fastest summation and dot product algorithms in the West and the East are presented in two recent papers:

S.M. Rump, T. Ogita, and S. Oishi: Accurate Floating-point Summation I: Faithful Rounding, and Accurate Floating-point Summation II: Sign, K-fold Faithful and Rounding to Nearest, submitted for publication in SISC, 2005-2007.
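The basic building block of such summation algorithms is the error-free transformation: two floating-point numbers a and b are transformed into s = fl(a+b) and an exact error term e with a + b = s + e. A Python sketch of Knuth's TwoSum, the kind of kernel these algorithms are built on:

```python
# TwoSum (Knuth): s = fl(a+b) and the exact rounding error e, so a+b == s+e.
# Six floating-point operations, no branches, no access to the rounding mode.
def two_sum(a, b):
    s = a + b
    z = s - a
    e = (a - (s - z)) + (b - z)
    return s, e

s, e = two_sum(1e16, 1.0)
print(s, e)  # 1e+16 1.0 : the 1.0 lost in s is recovered exactly in e
```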
Around 1984 I designed a method to compute an approximate inverse of extremely ill-conditioned matrices using only working precision together with a precise dot product. Our algorithms for accurate summation and dot products give a convenient formulation of this method to compute a preconditioner for extremely ill-conditioned matrices. For a given matrix A consider
  format short e
  n = dim(A);       % dimension of A
  kmax = 5;         % maximum number of residuals
  R = inv(A);
  for k=2:kmax
    R = accdot( inv( accdot(R,A) ) , R , k );
    disp([ k norm( accdot(R,A,-1,speye(n)) , 1 ) ])
  end

Note that at the beginning of step k the approximate inverse R of A is stored in k-1 parts. After step k, RES = accdot(R,A) is the faithful rounding of sum(R{i})*A to double precision, invRES = inv(RES) is the ordinary inverse of RES in double precision, and the new R is the product invRES*sum(R{i}) stored in k parts.
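The same idea can be imitated in Python if exact rational arithmetic plays the role of accdot: keep R exactly as the accumulated product of floating-point inverses, round R*A once to double precision, invert that rounding in ordinary floating point, and repeat. A sketch on an ill-conditioned Hilbert matrix (all names and the setup are ad hoc for this sketch, not INTLAB; a smaller number of steps suffices here):

```python
# Residual iteration for an approximate inverse, with exact rational
# arithmetic (Fraction) standing in for the accurate dot product accdot.
from fractions import Fraction

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def inv_float(A):                       # Gauss-Jordan in double precision
    n = len(A)
    M = [[float(x) for x in row] + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        d = M[c][c]
        M[c] = [x / d for x in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def residual(R, A):                     # exact || I - R*A ||_inf
    n = len(A)
    P = matmul(R, A)
    return max(sum(abs(P[i][j] - (i == j)) for j in range(n))
               for i in range(n))

n = 8
A = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]  # Hilbert
R = [[Fraction(x) for x in row] for row in inv_float(A)]            # step 1
for k in range(2, 4):
    RES = [[Fraction(float(x)) for x in row] for row in matmul(R, A)]
    R = matmul([[Fraction(x) for x in row] for row in inv_float(RES)], R)
print(residual(R, A) < Fraction(1, 10**10))  # True: residual driven to ~eps
```

Here Fraction keeps R "in parts" implicitly: the only roundings to double precision are RES and the ordinary inverse inv_float(RES), exactly as in the accdot formulation above.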
For a really extreme example consider (see randmat.m)
  A = [  -5046135670319638,  -3871391041510136,  -5206336348183639,  -6745986988231149 ; ...
          -640032173419322,   8694411469684959,   -564323984386760,  -2807912511823001 ; ...
        -16935782447203334, -18752427538303772,  -8188807358110413, -14820968618548534 ; ...
         -1069537498856711, -14079150289610606,   7074216604373039,   7257960283978710 ];

taken from
  >> norm(A,inf)*norm(double(inv(sym(A,'f'))),inf)
  ans =
    1.3826e+065

That means, perturbing a matrix entry in the 70th decimal place changes the inverse in about the 5th decimal place. The result of the above code is
     2.0000e+000  5.8213e+000
     3.0000e+000  2.2687e+000
     4.0000e+000  2.0039e+000
     5.0000e+000  2.5074e-015

So after 5 iterations the approximate inverse R, stored in 5 parts, is a good enough preconditioner to produce a residual ||I-RA|| ~ eps.
Amazingly, although really extremely ill-conditioned matrices are inverted in pure (double precision) floating point, the result is by no means 'random' numbers; it still contains enough usable information to produce a perfect preconditioner.
If you want to try even more ill-conditioned examples, visit randmat and use, for example, A=randmat(n,1e100) or more. In [143] an example of a matrix is shown with condition number exceeding 10^300! Note that the first step is to compute the Matlab-inverse inv(A) of this matrix. It still contains enough information to eventually produce a perfect approximation to A^-1.
Most routines in INTLAB are self-explanatory, and/or information can be obtained by the Matlab help command. Additional information is offered in some demo programs such as demointlab, demointval, demogradient and so forth (see readme.txt for more information).
In his thesis "Interval Analysis in Matlab", Gareth I. Hargreaves offers some basic facts about interval arithmetic and a nice introduction to much of INTLAB, together with a tutorial. Moreover, some routines he wrote in INTLAB are presented, among them an algorithm to find all zeros of a nonlinear function within a box and a computation of Viswanath's constant, the common limit of random Fibonacci recurrences.
Hargreaves' thesis (supervisor Nicholas J. Higham) is a report (No. 416) of the Department of Mathematics of the University of Manchester and can be downloaded from there (http://www.ma.man.ac.uk/~nareports or http://www.maths.man.ac.uk/~hargreaves/ ), or directly from here (narep416.pdf, narep416.ps).
Jiri Rohn from Prague offers a number of verification routines using INTLAB. He also wrote a primer giving a short introduction to INTLAB.
INTLAB LICENSE INFORMATION
Copyright (c) 1998 - 2008 Siegfried M. Rump @ TUHH, Institute for Reliable Computing
All rights reserved.
===> INTLAB is free for private use and for purely academic purposes provided proper reference is given acknowledging that the software package INTLAB has been developed by Siegfried M. Rump at Hamburg University of Technology, Germany. For any other use of INTLAB a license is required.
===> For commercial use of INTLAB as well as its use in any connection with the development or distribution of a commercial product a license is required. Especially any use of a commercial product which needs INTLAB or parts of INTLAB to work properly requires an INTLAB license. This is independent of whether the commercial product is used privately or for commercial purposes. To obtain an INTLAB license contact the author Siegfried M. Rump (∂tu-harburg.de).
===> Neither the name of the university nor the name of the author may be used to endorse or promote products derived with or from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED AS IS AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATIONS, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
DISCLAIMER: Extensive tests have been performed to ensure reliability of the algorithms. However, neither an error-free processor nor an error-free program can be guaranteed.
Given a complex interval midrad(a,r) with midpoint "a" and radius "r", the current implementation of f(midrad(a,r)) takes f(a) as the midpoint of the result interval and calculates the radius of the result interval by formulas developed by N. C. Boersken (see the INTLAB documentation). It does not take into account that the input interval midrad(a,r) may cross a branch cut.
Therefore, for a given input interval X and some x in X, it does not necessarily hold that f(x) is in f(X), where f(x) follows the usual definition of the principal branch of the function. However, for every x in X there is some y in f(X) such that f^-1(y)=x.
It is possible to change the definition of the complex functions in order to satisfy f(x) in f(X). However, this enlarges radii dramatically where it might not be expected, e.g. for sqrt(tocmplx(midrad(-4,1e-100))).
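The blow-up is easy to see with the principal square root: two points that are almost identical but on opposite sides of the negative real axis have square roots that are far apart, so any enclosure of f(X) satisfying f(x) in f(X) must be huge. A plain Python illustration (using cmath, not INTLAB):

```python
# Points straddling the branch cut of sqrt along the negative real axis:
# the inputs differ by 2e-12, the principal square roots differ by about 4.
import cmath

a = cmath.sqrt(complex(-4.0,  1e-12))   # approximately +2j
b = cmath.sqrt(complex(-4.0, -1e-12))   # approximately -2j
print(abs(a - b))                        # about 4.0
```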
It seems likely that arguments crossing the branch cut are either not intended by the user, or that results grow so large that the computation quickly yields +/- inf or NaN.
For the moment we decided to issue a warning in case an input argument crosses the branch cut. This may not be the perfect solution; you may change the warning into an error message (only in the functions "sqrt" and "log").
The package is thoroughly tested under Matlab versions 5.3, 6.0, 6.5, 6.5.1, 7.0, 7.0.1, 7.0.4, 7.1, 7.2, 2006b, 2007a, 2007b, 2008a under Windows. It is also tested under Unix.
In Matlab Version 7.2 (2006a) there is a problem with constant*sparse: the result is correct, but the operation may take very long; this is fixed in subsequent releases of Matlab. In recent releases Matlab uses IMKL as the default BLAS library. This may cause problems, see the FAQ; the library has to be changed to Atlas.
Moreover, there are known Matlab problems with access to array components of variables of user-defined data types. This occurs in Matlab versions 7.0.1 (SP1) and 7.0.4 (SP2), but is resolved in 7.1 (SP3). Fortunately, these cases are rare, and no incorrect result is possible because Matlab runs into a hard error (segmentation violation).
Moreover, in Matlab versions 7.0 to 7.0.4 the rounding mode may be reset to nearest after displaying a figure. INTLAB routines do not make assumptions about the current rounding mode but always set it to the correct value, so this does not cause problems within INTLAB routines.
For INTLAB and other software for self-validating methods the mailing list
validated_numerics@tu-harburg.de
has been established. To subscribe to this list, send an e-mail to
with the text 'subscribe validated_numerics' inside, similarly to unsubscribe.
If you have problems with INTLAB, let me know.
Institute for Reliable Computing
Hamburg University of Technology
Schwarzenbergstr. 95
21071 Hamburg
Germany