11 Minimisation
The routines for minimisation (or optimisation) in this library are:
 PDA_DNLS1
Minimises the sum of squares of M nonlinear functions.
 PDA_DNLS1E
Minimises the sum of squares of M nonlinear functions (easy version).
 PDA_DQED
Solves bounded nonlinear least-squares and nonlinear-equation problems.
 PDA_LMDIF1 (MINPACK/NETLIB)
Minimises the sum of the squares of m nonlinear functions in n variables; a simple interface
to PDA_LMDIF.
 PDA_LMDIF (MINPACK/NETLIB)
Minimises the sum of the squares of m nonlinear functions in n variables with a modified
Levenberg-Marquardt algorithm. Needs function values only; the Jacobian is calculated by a
forward-difference approximation.
 PDA_UNCMND (NMS/TIBER)
Minimises a smooth nonlinear function of n variables. Needs function values only.
 PDA_SA (module SIMANN from OPT/NETLIB)
Continuous simulated annealing global optimisation algorithm. Simple constraints can
be specified.
 PDA_SUMSL (module SUMSL from TOMS)
Minimises a general unconstrained objective function using analytic gradients and a
Hessian approximation from a secant update.
 PDA_SUBPLX (module SUBPLEX from OPT/NETLIB)
Subplex method to solve unconstrained optimisation problems. The method is well
suited for optimising objective functions that are noisy or are discontinuous at the
solution.
11.1 Overview
There are two sorts of minimisation: minimising an arbitrary function, and minimising a sum
of squares. The former is more general, but the least-squares routines are usually more efficient. For the
programmer the differences are as follows:
For a least-squares fit, your merit function is a vector of residuals between the measurements
and the current model guess. For a general minimisation your merit function is a scalar;
essentially you have to add up the squared residuals within your merit function.
Perhaps more important is the amount of workspace you have to provide to the fit
algorithm. In least-squares fits this scales with n*m, where n is the number of parameters
to be fitted and m is the number of residuals to be added up (e.g. channels in a spectrum).
m is quite large and depends on the data set at hand. The general minimisation in
principle does not know that there is a spectrum with m pixels, and its workspace scales with
the square of n, which is more or less constant for any given application.
Another difference is that least-squares fits tend to return the Jacobian matrix, the
derivatives of each residual with respect to each fit parameter. This is quite valuable
if you want to know the variances of the fit parameters and the covariances between them.
With the general minimisation you would have to work out the Hessian matrix and invert
it, which means you have to be able to work out the derivatives of your merit function
with respect to each fit parameter.
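To make the first difference concrete, the two merit-function shapes can be sketched as follows for fitting a straight line to the data. All names here (RESIDS, SCALAR, the common block DBLOCK, and the fixed data size of 1000) are invented for illustration; they are not part of this library.

```fortran
*  Least-squares form: fill a vector of M residuals.
      SUBROUTINE RESIDS( M, N, X, FVEC )
      INTEGER M, N
      DOUBLE PRECISION X(N), FVEC(M)
      INTEGER I
      DOUBLE PRECISION DATA(1000)
      COMMON / DBLOCK / DATA
      DO 1 I = 1, M
*        Residual = measurement minus straight-line model value.
         FVEC(I) = DATA(I) - ( X(1) + X(2) * DBLE(I) )
    1 CONTINUE
      END

*  General form: return the scalar sum of the squared residuals.
      SUBROUTINE SCALAR( N, X, FVAL )
      INTEGER N
      DOUBLE PRECISION X(N), FVAL
      INTEGER I
      DOUBLE PRECISION DATA(1000)
      COMMON / DBLOCK / DATA
      FVAL = 0D0
      DO 1 I = 1, 1000
         FVAL = FVAL + ( DATA(I) - ( X(1) + X(2) * DBLE(I) ) ) ** 2
    1 CONTINUE
      END
```

A least-squares routine would be handed RESIDS and form the sum of squares internally; a general minimiser would be handed SCALAR.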
Two other issues in choosing a minimisation algorithm are whether only the merit function can be
provided or also its first derivatives, and whether the fit parameters have to be constrained. It
appears that Starlink applications mainly use unconstrained fits or at most simple bounds, i.e. hard
constant limits on parameters. This library contains function-only algorithms, and also the SUMSL
algorithm, which requires both function values and gradients. The only constrainable algorithm is the
simulated annealing SIMANN.
PDA_UNCMND is a general unconstrained minimisation using function values only. A
quasi-Newton algorithm with line search is used.
PDA_LMDIF/PDA_LMDIF1 is an unconstrained least-squares minimisation using
residuals only (no derivatives). A modified Levenberg-Marquardt algorithm is used; the
Jacobian is calculated by a forward-difference approximation.
SIMANN is a simulated annealing algorithm. It uses function values only and can be
used for non-smooth functions as well. It should also have a fair chance of escaping
local minima and going on to find the global minimum.
SUMSL is a general unconstrained minimisation using function values and gradients. A
trust-region algorithm is used.
SUBPLEX is a generalisation of, and improvement on, the simplex algorithm. It should be
very robust with any function to minimise, but may also be rather inefficient.
11.2 Replacing calls to E04DGF and E04DKF
E04DGF performs an unconstrained minimisation using function values and first derivatives; E04DKF
is just an auxiliary routine. To replace these, PDA_UNCMND would be used, which does not make
use of derivatives.
The existing NAG code might look as shown below. Two modules are involved: one controls the NAG
fit routine, the other serves it by providing the value and gradient of the merit function (the function
to be minimised). Information is passed to the merit function in two ways. Scalars and constant-size
arrays are in a common block, while one variable-length array is passed as the USER argument.
Its length is passed in IUSER. The controlling function may also call the merit function
directly.
      SUBROUTINE DOAFIT( ... )

      INTEGER N
      INTEGER IUSER
      INTEGER ITER, IFAIL
      INTEGER IWORK(N+1)
      REAL USER(IUSER)
      DOUBLE PRECISION X(N), FVAL, FGRAD(N)
      DOUBLE PRECISION WORK(13*N)

      EXTERNAL MERIT

      COMMON / ABLOCK / constant-size arrays

      CALL MERIT( 0, N, X, FVAL, FGRAD, 0, IUSER, USER )

      IFAIL = 1
      CALL E04DGF( N, MERIT, ITER, FVAL, FGRAD, X, IWORK, WORK,
     :   IUSER, USER, IFAIL )
      IF ( IFAIL .NE. 0 ) THEN
         An error has occurred
      END IF

      END


      SUBROUTINE MERIT( MODE, N, X, FVAL, FGRAD, NSTATE, IUSER, USER )

      INTEGER MODE, N, NSTATE
      INTEGER IUSER
      REAL USER(IUSER)
      DOUBLE PRECISION X(N), FVAL, FGRAD(N)

      COMMON / ABLOCK / constant-size arrays

      FVAL = ...
      DO 1 I = 1, N
         FGRAD(I) = ...
    1 CONTINUE

      END
PDA_UNCMND has separate arguments for the guess and the fit result. With PDA_UNCMND the
merit function is simpler in that it need not calculate the gradient. It also has a simpler interface and
cannot be passed a variable-length array as above. Thus a pointer to such an array must be passed in
the common block. In order to ease dereferencing this pointer, the merit function is split into two
modules. MERIT1 receives the passed arguments from its caller and the common block
from the master routine DOAFIT. Apart from that, MERIT1 does nothing but call MERIT2; in this call the
array pointer is dereferenced.
      SUBROUTINE DOAFIT( ... )

      INTEGER N
      INTEGER IUSER, POINTR
      INTEGER ITER, IFAIL
      INTEGER IWORK(N+1)
      REAL USER(IUSER)
      DOUBLE PRECISION GUESS(N), FIT(N), FVAL
      DOUBLE PRECISION WORK( N*(N+10) )

      EXTERNAL MERIT1

      COMMON / BBLOCK / constant-size arrays,
     :   IUSER, POINTR

      POINTR = %LOC(USER)

      CALL MERIT1( N, GUESS, FVAL )

      IFAIL = 0
      CALL PDA_UNCMND( N, GUESS, MERIT1, FIT, FVAL, IFAIL, WORK, N*(N+10) )
      IF ( IFAIL .LT. 0 .OR. IFAIL .GT. 3 ) THEN
         An error has occurred
      END IF

      END


      SUBROUTINE MERIT1( N, X, FVAL )

      INTEGER N
      INTEGER IUSER, POINTR
      DOUBLE PRECISION X(N), FVAL

      COMMON / BBLOCK / constant-size arrays,
     :   IUSER, POINTR

      CALL MERIT2( N, X, FVAL, IUSER, %VAL(POINTR) )

      END


      SUBROUTINE MERIT2( N, X, FVAL, IUSER, USER )

      INTEGER N
      INTEGER IUSER
      REAL USER(IUSER)
      DOUBLE PRECISION X(N), FVAL

      FVAL = ...

      END
An alternative to a common block is a subroutine with SAVE variables acting as a reservoir:
one routine calls the reservoir routine to set values and another calls it to retrieve
them.
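A minimal sketch of such a reservoir routine follows; the names RESERV and STORE are invented for illustration.

```fortran
*  Reservoir for a single value. SAVE preserves STORE between calls.
*  Call with SETVAL = .TRUE. to store VALUE, .FALSE. to retrieve it.
      SUBROUTINE RESERV( SETVAL, VALUE )
      LOGICAL SETVAL
      DOUBLE PRECISION VALUE
      DOUBLE PRECISION STORE
      SAVE STORE
      IF ( SETVAL ) THEN
         STORE = VALUE
      ELSE
         VALUE = STORE
      END IF
      END
```

The same pattern extends to arrays and to several values; it avoids the common block at the cost of an extra call per access.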
%LOC and %VAL are not standard Fortran 77. The only way around this would be to use a different
programming language, such as Fortran 90 or C.
11.3 Replacing calls to E04FDF and E04GBF
These routines find the unconstrained minimum of a sum of squares. E04FDF uses function values
only, while E04GBF uses first derivatives as well. Either would be replaced by PDA_LMDIF or
PDA_LMDIF1, which use function values only.
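As a sketch, and assuming PDA_LMDIF1 keeps the calling sequence of the original MINPACK LMDIF1 (this should be checked against the PDA routine description, as the Starlink version may differ), a replacement call might look like the following. The names DOLSQ and FCN, and the problem sizes, are invented for illustration.

```fortran
      SUBROUTINE DOLSQ( ... )

      INTEGER M, N, LWA
      PARAMETER ( M = 100, N = 3 )
*     MINPACK requires LWA at least M*N + 5*N + M.
      PARAMETER ( LWA = M*N + 5*N + M )
      INTEGER INFO, IWA(N)
      DOUBLE PRECISION X(N), FVEC(M), TOL, WA(LWA)
      EXTERNAL FCN

      TOL = 1D-7
      CALL PDA_LMDIF1( FCN, M, N, X, FVEC, TOL, INFO, IWA, WA, LWA )
      IF ( INFO .LT. 1 .OR. INFO .GT. 3 ) THEN
         An error has occurred
      END IF

      END


*  FCN fills FVEC with the M residuals at the current X.
*  Setting IFLAG negative aborts the fit.
      SUBROUTINE FCN( M, N, X, FVEC, IFLAG )
      INTEGER M, N, IFLAG
      DOUBLE PRECISION X(N), FVEC(M)
      FVEC(1) = ...
      END
```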
11.4 Replacing calls to E04HCF
This routine checks a user-supplied gradient function. It is obsolete when NAG is not used, especially
as most of the minimisation algorithms in this library do not use derivatives.
11.5 Replacing calls to E04JAF and E04KDF
These are fairly general, easy-to-use minimisations that allow simple bounds. E04JAF uses only
function values, while E04KDF also uses first derivatives. There is no direct equivalent in this library.
PDA_UNCMND might be used in a number of cases. PDA_SUBPLX is a robust general algorithm but
cannot be constrained. PDA_SA is the only constrainable algorithm in this library. Migration hints are
not yet available.
11.6 Replacing calls to E04YCF
This routine is a follow-up to E04FCF, E04FDF and E04GBF that converts the Jacobian of a least-squares
minimisation into the covariance matrix of the fitted parameters. See the documentation of E04YCF, of
the minimisation routine used previously, and of PDA_LMDIF. Migration hints are not yet
available.
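In outline, the standard conversion is as follows: if $J$ is the $m \times n$ Jacobian of the residuals $r_i$ at the solution, the covariance matrix of the fitted parameters is approximately

\[
C \;\approx\; s^2 \, \bigl( J^{\mathrm{T}} J \bigr)^{-1},
\qquad
s^2 \;=\; \frac{1}{m-n} \sum_{i=1}^{m} r_i^2 .
\]

Note that PDA_LMDIF does not return the Jacobian directly but in a factored, column-permuted form (see its documentation), so the matrix must be reconstructed before forming $J^{\mathrm{T}} J$.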