Namespace for all classes in the NAG Library for .NET.

Classes

  Class    Description
A00
The methods in this chapter provide information about the NAG Library.
a00ac enables you to check if a valid key is available for the library licence management system.
Information about the precise implementation of the NAG Library in use will be needed when communicating with the NAG Response Centre (see the Library Overview).
C05
This chapter is concerned with the calculation of zeros of continuous functions of one or more variables. The majority of problems considered are for real-valued functions of real variables, in which case complex equations must be expressed in terms of the equivalent larger system of real equations.
C05..::..c05qdCommunications
Communications Class for c05qd
C05..::..c05rdCommunications
Communications Class for c05rd
C06
This chapter is concerned with the following tasks.
(a) Calculating the discrete Fourier transform of a sequence of real or complex data values.
(b) Calculating the discrete convolution or the discrete correlation of two sequences of real or complex data values using discrete Fourier transforms.
(c) Calculating the inverse Laplace transform of a user-supplied method.
(d) Direct summation of orthogonal series.
(e) Acceleration of convergence of a sequence of real values.
C09
This chapter is concerned with the analysis of datasets (or functions or operators) in terms of frequency and scale components using wavelet transforms. Wavelet transforms have been applied in many fields from time series analysis to image processing and the localization in either frequency or scale that they provide is useful for data compression or denoising. In general the standard wavelet transform uses dilation and scaling of a chosen function ψ(t) (called the mother wavelet) such that
ψ_{a,b}(t) = (1/√a) ψ((t - b)/a)
where a gives the scaling and b determines the translation. Wavelet methods can be divided into continuous transforms and discrete transforms. In the continuous case, the pair a and b are real numbers with a > 0. For the discrete transform, a and b can be chosen as a = 2^{-j}, b = k 2^{-j} for integers j and k, giving
ψ_{j,k}(t) = 2^{j/2} ψ(2^j t - k).
The continuous real valued, one-dimensional wavelet transform (CWT) is included in this chapter. The discrete wavelet transform (DWT) at a single level together with its inverse and the multi-level DWT with inverse are also provided for one, two and three dimensions. The Maximal Overlap DWT (MODWT) together with its inverse and the multi-level MODWT with inverse are provided for one dimension. The choice of wavelet for CWT includes the Morlet wavelet and derivatives of a Gaussian while the DWT and MODWT offer the orthogonal wavelets of Daubechies and a selection of biorthogonal wavelets.
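As a minimal illustration of the discrete dyadic case described above, the following self-contained C# sketch applies one level of the Haar DWT to a short signal. It is a generic example for orientation only and does not call the NAG c09 methods.

    // Generic single-level Haar DWT, for illustration only; not the NAG c09 interface.
    using System;

    class HaarDemo
    {
        static void Main()
        {
            double[] x = { 4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0 };
            int n = x.Length / 2;
            double[] approx = new double[n];  // scaling (low-pass) coefficients
            double[] detail = new double[n];  // wavelet (high-pass) coefficients
            double s = Math.Sqrt(2.0) / 2.0;  // Haar filter coefficient 1/sqrt(2)
            for (int k = 0; k < n; k++)
            {
                approx[k] = s * (x[2 * k] + x[2 * k + 1]);
                detail[k] = s * (x[2 * k] - x[2 * k + 1]);
            }
            Console.WriteLine("approx: " + string.Join(", ", approx));
            Console.WriteLine("detail: " + string.Join(", ", detail));
        }
    }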
C09..::..C09Communications
Communications Class
D01
This chapter provides methods for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.
E01
This chapter is concerned with the interpolation of a function of one or more variables. When provided with the value of the function (and possibly one or more of its lowest-order derivatives) at each of a number of values of the variable(s), the NAG Library methods provide either an interpolating function or an interpolated value. For some of the interpolating functions, there are supporting NAG Library methods to evaluate, differentiate or integrate them.
E02
The main aim of this chapter is to assist you in finding a function which approximates a set of data points. Typically the data contain random errors, as of experimental measurement, which need to be smoothed out. To seek an approximation to the data, it is first necessary to specify for the approximating function a mathematical form (a polynomial, for example) which contains a number of unspecified coefficients: the appropriate fitting method then derives for the coefficients the values which provide the best fit of that particular form. The chapter deals mainly with curve and surface fitting (i.e., fitting with functions of one and of two variables) when a polynomial or a cubic spline is used as the fitting function, since these cover the most common needs. However, fitting with other functions and/or more variables can be undertaken by means of general linear or nonlinear methods (some of which are contained in other chapters) depending on whether the coefficients in the function occur linearly or nonlinearly. Cases where a graph rather than a set of data points is given can be treated simply by first reading a suitable set of points from the graph.
The chapter also contains methods for evaluating, differentiating and integrating polynomial and spline curves and surfaces, once the numerical values of their coefficients have been determined.
There is, too, a method for computing a Padé approximant of a mathematical function (see [Padé Approximants]).
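To make concrete the idea of choosing coefficients that give the best fit, here is a small generic C# example that fits a straight line y = a + b*x to data by least squares via the 2x2 normal equations. It illustrates the principle only and is not one of the E02 chapter methods.

    using System;

    class LineFitDemo
    {
        // Generic least-squares fit of y = a + b*x, solving the normal equations directly.
        static void Main()
        {
            double[] x = { 0, 1, 2, 3, 4 };
            double[] y = { 1.1, 2.9, 5.2, 6.8, 9.1 };
            int n = x.Length;
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (int i = 0; i < n; i++)
            {
                sx += x[i]; sy += y[i];
                sxx += x[i] * x[i]; sxy += x[i] * y[i];
            }
            double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
            double a = (sy - b * sx) / n;                          // intercept
            Console.WriteLine($"best-fit line: y = {a:F3} + {b:F3} x");
        }
    }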
E04
An optimization problem involves minimizing a function (called the objective function) of several variables, possibly subject to restrictions on the values of the variables defined by a set of constraint functions. Most methods in the Library are concerned with function minimization only, since the problem of maximizing a given objective function F(x) is equivalent to minimizing -F(x). Some methods allow you to specify whether you are solving a minimization or maximization problem, carrying out the required transformation of the objective function in the latter case.
In general, methods in this chapter find a local minimum of a function f, that is, a point x* such that f(x) ≥ f(x*) for all x near x*.
The E05 class contains methods to find the global minimum of a function f. At a global minimum x*, f(x) ≥ f(x*) for all x.
The H class contains methods typically regarded as belonging to the field of operations research.
This introduction is only a brief guide to the subject of optimization designed for the casual user. Anyone with a difficult or protracted problem to solve will find it beneficial to consult a more detailed text, such as Gill et al. (1981) or Fletcher (1987).
If you are unfamiliar with the mathematics of the subject you may find some sections difficult at first reading; if so, you should concentrate on [Types of Optimization Problems], [Geometric Representation and Terminology], [Scaling], [Analysis of Computed Results] and [Recommendations on Choice and Use of Available Methods].
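The maximization-to-minimization equivalence mentioned above can be illustrated with a short generic C# sketch; the ObjectiveFn delegate used here is hypothetical and is not a NAG E04 delegate type.

    using System;

    class MinMaxDemo
    {
        delegate double ObjectiveFn(double[] x);  // hypothetical delegate, for illustration

        // Maximizing F is equivalent to minimizing -F.
        static ObjectiveFn Negate(ObjectiveFn f) => x => -f(x);

        static void Main()
        {
            // F(x) = -(x - 3)^2 has its maximum at x = 3.
            ObjectiveFn F = x => -(x[0] - 3.0) * (x[0] - 3.0);
            ObjectiveFn G = Negate(F);  // minimize G to maximize F

            // Crude grid search, just to show the two problems share a solution.
            double best = double.MaxValue, argBest = 0.0;
            for (double t = 0.0; t <= 6.0; t += 0.01)
            {
                double g = G(new[] { t });
                if (g < best) { best = g; argBest = t; }
            }
            Console.WriteLine($"minimizer of -F (= maximizer of F): {argBest:F2}");
        }
    }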
E04..::..e04dgOptions
Options Class for e04dg
E04..::..e04mfOptions
Options Class for e04mf
E04..::..e04ncOptions
Options Class for e04nc
E04..::..e04nfOptions
Options Class for e04nf
E04..::..e04nkOptions
Options Class for e04nk
E04..::..e04nqOptions
Options Class for e04nq
E04..::..e04ucOptions
Options Class for e04uc
E04..::..e04ufOptions
Options Class for e04uf
E04..::..e04ugOptions
Options Class for e04ug
E04..::..e04usOptions
Options Class for e04us
E04..::..e04vhOptions
Options Class for e04vh
E04..::..e04wdOptions
Options Class for e04wd
E05
Global optimization involves finding the absolute maximum or minimum value of a function (the objective function) of several variables, possibly subject to restrictions (defined by a set of bounds or constraint functions) on the values of the variables. Such problems can be much harder to solve than local optimization problems (which are discussed in E04 class) because it is difficult to determine whether a potential optimum found is global, and because of the nonlocal methods required to avoid becoming trapped near local optima. Most optimization methods in the NAG Library are concerned with function minimization only, since the problem of maximizing a given objective function F is equivalent to minimizing -F. In e05jb, (E05SAF not in this release) and (E05SBF not in this release), you may specify whether you are solving a minimization or maximization problem; in the latter case, the required transformation of the objective function will be carried out automatically. In what follows we refer exclusively to minimization problems.
This introduction is a brief guide to the subject of global optimization, designed for the casual user. For further details you may find it beneficial to consult a more detailed text, such as Neumaier (2004). Furthermore, much of the material in the E04 class is relevant in this context also. In particular, it is strongly recommended that you read the E04 class Chapter Introduction.
E05..::..e05jbOptions
Options Class for e05jb
E05..::..e05ucOptions
Options Class for e05uc
E05..::..e05usOptions
Options Class for e05us
F01
This chapter provides facilities for four types of problem:
(i) Matrix Inversion
(ii) Matrix Factorizations
(iii) Matrix Arithmetic and Manipulation
(iv) Matrix Functions
F06
This chapter is concerned with basic linear algebra methods which perform elementary algebraic operations involving scalars, vectors and matrices. It includes methods which conform to the specifications of the BLAS (Basic Linear Algebra Subprograms).
F07
This chapter provides methods for the solution of systems of simultaneous linear equations, and associated computations. It provides methods for
  • matrix factorizations;
  • solution of linear equations;
  • estimating matrix condition numbers;
  • computing error bounds for the solution of linear equations;
  • matrix inversion;
  • computing scaling factors to equilibrate a matrix.
Methods are provided for both real and complex data.
For a general introduction to the solution of systems of linear equations, you should turn first to (F04 not in this release). The decision trees, in [Decision Trees] in the F04 class Chapter Introduction, direct you to the most appropriate methods in (F04 not in this release) or the F07 class for solving your particular problem. In particular, (F04 not in this release) and the F07 class contain Black Box (or driver) methods which enable some standard types of problem to be solved by a call to a single method. Where possible, methods in (F04 not in this release) call F07 class methods to perform the necessary computational tasks.
There are two types of driver methods in this chapter: simple drivers, which just return the solution to the linear equations; and expert drivers, which also return condition and error estimates and, in many cases, also allow equilibration. Simple and expert drivers are provided for both real and complex matrices, distinguished by systematic naming conventions.
The methods in this chapter (the F07 class) handle only dense and band matrices (not matrices with more specialised structures, or general sparse matrices).
The methods in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.
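As a conceptual illustration of the task a simple driver performs (factorize A, then solve Ax = b), the following generic C# sketch uses plain Gaussian elimination with partial pivoting. It is not an F07 method and makes no use of the NAG API.

    using System;

    class SolveDemo
    {
        static void Main()
        {
            double[,] a = { { 2, 1, 1 }, { 4, -6, 0 }, { -2, 7, 2 } };
            double[] b = { 5, -2, 9 };
            int n = b.Length;
            // Forward elimination with partial pivoting.
            for (int k = 0; k < n; k++)
            {
                int p = k;
                for (int i = k + 1; i < n; i++)
                    if (Math.Abs(a[i, k]) > Math.Abs(a[p, k])) p = i;
                for (int j = 0; j < n; j++) { var t = a[k, j]; a[k, j] = a[p, j]; a[p, j] = t; }
                { var t = b[k]; b[k] = b[p]; b[p] = t; }
                for (int i = k + 1; i < n; i++)
                {
                    double m = a[i, k] / a[k, k];
                    for (int j = k; j < n; j++) a[i, j] -= m * a[k, j];
                    b[i] -= m * b[k];
                }
            }
            // Back substitution.
            double[] x = new double[n];
            for (int i = n - 1; i >= 0; i--)
            {
                double s = b[i];
                for (int j = i + 1; j < n; j++) s -= a[i, j] * x[j];
                x[i] = s / a[i, i];
            }
            Console.WriteLine(string.Join(", ", x));  // prints 1, 1, 2
        }
    }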
F08
This chapter provides methods for the solution of linear least squares problems, eigenvalue problems and singular value problems, as well as associated computations. It provides methods for:
  • solution of linear least squares problems
  • solution of symmetric eigenvalue problems
  • solution of nonsymmetric eigenvalue problems
  • solution of singular value problems
  • solution of generalized linear least squares problems
  • solution of generalized symmetric-definite eigenvalue problems
  • solution of generalized nonsymmetric eigenvalue problems
  • solution of generalized singular value problems
  • matrix factorizations associated with the above problems
  • estimating condition numbers of eigenvalue and eigenvector problems
  • estimating the numerical rank of a matrix
  • solution of the Sylvester matrix equation
Methods are provided for both real and complex data.
For a general introduction to the solution of linear least squares problems, you should turn first to (F04 not in this release). The decision trees, at the end of (F04 not in this release), direct you to the most appropriate methods in (F04 not in this release) or the F08 class. (F04 not in this release) and the F08 class contain Black Box (or driver) methods which enable standard linear least squares problems to be solved by a call to a single method.
For a general introduction to eigenvalue and singular value problems, you should turn first to (F02 not in this release). The decision trees, at the end of (F02 not in this release), direct you to the most appropriate methods in (F02 not in this release) or the F08 class. (F02 not in this release) and the F08 class contain Black Box (or driver) methods which enable standard types of problem to be solved by a call to a single method. Often methods in (F02 not in this release) call F08 class methods to perform the necessary computational tasks.
The methods in this chapter (the F08 class) handle only dense, band, tridiagonal and Hessenberg matrices (not matrices with more specialised structures, or general sparse matrices). The tables in [Recommendations on Choice and Use of Available Methods] and the decision trees in [Decision Trees] direct you to the most appropriate methods in the F08 class.
The methods in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.
It is not expected that you will need to read all of the following sections, but rather you will pick out those sections relevant to your particular problem.
G01
This chapter covers three topics:
  • plots, descriptive statistics, and exploratory data analysis;
  • statistical distribution functions and their inverses;
  • testing for Normality and other distributions.
G02
This chapter is concerned with two techniques – correlation analysis and regression modelling – both of which are concerned with determining the inter-relationships among two or more variables.
Other chapters of the NAG Library which cover similar problems are the E02 class and the E04 class. E02 class methods may be used to fit linear models by criteria other than least squares, and also for polynomial regression; E04 class methods may be used to fit nonlinear models and linearly constrained linear models.
G02..::..g02qgOptions
Options Class for g02qg
G03
This chapter is concerned with methods for studying multivariate data. A multivariate dataset consists of several variables recorded on a number of objects or individuals. Multivariate methods can be classified as those that seek to examine the relationships between the variables (e.g., principal components), known as variable-directed methods, and those that seek to examine the relationships between the objects (e.g., cluster analysis), known as individual-directed methods.
Multiple regression is not included in this chapter as it involves the relationship of a single variable, known as the response variable, to the other variables in the dataset, the explanatory variables. Routines for multiple regression are provided in G02 class.
G05
This chapter is concerned with the generation of sequences of independent pseudorandom and quasi-random numbers from various distributions, and models.
G05..::..G05State
Class for holding the state of the Random number generator.
G07
This chapter deals with the estimation of unknown parameters of a univariate distribution. It includes both point and interval estimation using maximum likelihood and robust methods.
G13
This chapter provides facilities for investigating and modelling the statistical structure of series of observations collected at points in time. The models may then be used to forecast the series.
The chapter covers the following models and approaches.
1. Univariate time series analysis, including autocorrelation functions and autoregressive moving average (ARMA) models.
2. Univariate spectral analysis.
3. Transfer function (multi-input) modelling, in which one time series is dependent on other time series.
4. Bivariate spectral methods including coherency, gain and input response functions.
5. Vector ARMA models for multivariate time series.
6. Kalman filter models.
7. GARCH models for volatility.
8. Inhomogeneous Time Series.
H
This chapter provides methods to solve certain integer programming, transportation and shortest path problems. Additionally ‘best subset’ methods are included.
H..::..h02bbOptions
Options Class for h02bb
H..::..h02cbOptions
Options Class for h02cb
H..::..h02ceOptions
Options Class for h02ce
S
This chapter is concerned with the provision of some commonly occurring physical and mathematical functions.
X01
This chapter is concerned with the provision of mathematical constants required by other methods within the Library.
X02
This chapter is concerned with parameters which characterise certain aspects of the computing environment in which the NAG Library is implemented. They relate primarily to floating-point arithmetic, but also to integer arithmetic, the elementary functions and exception handling. The values of the parameters vary from one implementation of the Library to another, but within the context of a single implementation they are constants.
The parameters are intended for use primarily by other methods in the Library, but users of the Library may sometimes need to refer to them directly.
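The following short C# sketch illustrates one such quantity, the machine epsilon for double-precision arithmetic (the gap between 1.0 and the next larger representable number), computed directly rather than obtained from the X02 methods.

    using System;

    class MachEpsDemo
    {
        // Halve eps until adding eps/2 to 1.0 no longer changes the result.
        static void Main()
        {
            double eps = 1.0;
            while (1.0 + eps / 2.0 > 1.0) eps /= 2.0;
            Console.WriteLine($"machine epsilon (double): {eps:E}");  // about 2.22E-16
        }
    }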
X04
This chapter contains utility methods concerned with input and output to or from an external file.
PrintManager
Utility class to control the output of error messages and monitoring information. See the examples in the Library Introduction.
DataReader
DataReader class; an I/O stream for reading NAG data files.

Structures

  Structure    Description
Complex
Struct to denote a complex value as two doubles.

Delegates

  Delegate    Description
C05..::..C05AU_F
f must evaluate the function f whose zero is to be determined.
C05..::..C05AW_F
f must evaluate the function f whose zero is to be determined.
C05..::..C05AY_F
f must evaluate the function f whose zero is to be determined.
C05..::..C05QB_FCN
fcn must return the values of the functions f_i at a point x.
C05..::..C05QC_FCN
fcn must return the values of the functions f_i at a point x, unless iflag=0 on entry to c05qc.
C05..::..C05RB_FCN
Depending upon the value of iflag, fcn must either return the values of the functions f_i at a point x or return the Jacobian at x.
C05..::..C05RC_FCN
Depending upon the value of iflag, fcn must either return the values of the functions f_i at a point x or return the Jacobian at x.
D01..::..D01AH_F
f must return the value of the integrand f at a given point.
D01..::..D01AJ_F
f must return the value of the integrand f at a given point.
D01..::..D01AK_F
f must return the value of the integrand f at a given point.
D01..::..D01AL_F
f must return the value of the integrand f at a given point.
D01..::..D01AM_F
f must return the value of the integrand f at a given point.
D01..::..D01AN_G
g must return the value of the function g at a given point x.
D01..::..D01AP_G
g must return the value of the function g at a given point x.
D01..::..D01AQ_G
g must return the value of the function g at a given point x.
D01..::..D01AR_FUN
fun must return the value of the integrand f at a specified point.
D01..::..D01AS_G
g must return the value of the function g at a given point x.
D01..::..D01BD_F
f must return the value of the integrand f at a given point.
D01..::..D01DA_F
f must return the value of the integrand f at a given point.
D01..::..D01DA_PHI1
phi1 must return the lower limit of the inner integral for a given value of y.
D01..::..D01DA_PHI2
phi2 must return the upper limit of the inner integral for a given value of y.
D01..::..D01FC_FUNCTN
functn must return the value of the integrand f at a given point.
D01..::..D01GD_VECFUN
vecfun must evaluate the integrand at a specified set of points.
D01..::..D01GD_VECREG
vecreg must evaluate the limits of integration in any dimension for a set of points.
D01..::..D01JA_F
f must return the value of the integrand f at a given point.
D01..::..D01PA_FUNCTN
functn must return the value of the integrand f at a given point.
E04..::..E04AB_FUNCT
You must supply this method to calculate the value of the function F(x) at any point x in [a,b]. It should be tested separately before being used in conjunction with e04ab.
E04..::..E04BB_FUNCT
You must supply this method to calculate the values of F(x) and dF/dx at any point x in [a,b].
It should be tested separately before being used in conjunction with e04bb.
E04..::..E04CB_FUNCT
funct must evaluate the function F at a specified point. It should be tested separately before being used in conjunction with e04cb.
E04..::..E04CB_MONIT
monit may be used to monitor the optimization process. It is invoked once every iteration.
If no monitoring is required, monit may be the dummy monitoring method E04CBK supplied by the NAG Library.
E04..::..E04DG_OBJFUN
objfun must calculate the objective function F(x) and possibly its gradient as well for a specified n-element vector x.
E04..::..E04FC_LSQFUN
lsqfun must calculate the vector of values f_i(x) at any point x. (However, if you do not wish to calculate the residuals at a particular x, there is the option of setting a parameter to cause e04fc to terminate immediately.)
E04..::..E04FC_LSQMON
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters.
If iprint<0, the dummy method E04FDZ can be used as lsqmon.
E04..::..E04FY_LSFUN1
You must supply this method to calculate the vector of values f_i(x) at any point x. It should be tested separately before being used in conjunction with e04fy (see the E04 class).
E04..::..E04GD_LSQFUN
lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i(x)/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting a parameter to cause e04gd to terminate immediately.)
E04..::..E04GD_LSQMON
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters.
If iprint<0, the dummy method E04FDZ can be used as lsqmon.
E04..::..E04GY_LSFUN2
You must supply this method to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i(x)/∂x_j at any point x. It should be tested separately before being used in conjunction with e04gy (see the E04 class).
E04..::..E04GZ_LSFUN2
You must supply this method to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i(x)/∂x_j at any point x. It should be tested separately before being used in conjunction with e04gz.
E04..::..E04HC_FUNCT
funct must evaluate the function and its first derivatives at a given point. (The minimization methods mentioned in [Description] give you the option of resetting parameters of funct to cause the minimization process to terminate immediately. e04hc will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04..::..E04HD_FUNCT
funct must evaluate the function and its first derivatives at a given point. (e04lb gives you the option of resetting parameters of funct to cause the minimization process to terminate immediately. e04hd will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04..::..E04HD_H
h must evaluate the second derivatives of the function at a given point. (As with funct, a parameter can be set to cause immediate termination.)
E04..::..E04HE_LSQFUN
lsqfun must calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i(x)/∂x_j at any point x. (However, if you do not wish to calculate the residuals or first derivatives at a particular x, there is the option of setting a parameter to cause e04he to terminate immediately.)
E04..::..E04HE_LSQHES
lsqhes must calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). (As with lsqfun, there is the option of causing e04he to terminate immediately.)
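The following generic C# sketch shows how the matrix B(x) defined above can be accumulated from the residuals f_i(x) and their Hessians G_i(x). The array layout and method name are illustrative only and do not reproduce the actual lsqhes parameter list.

    using System;

    class SumHessDemo
    {
        // B(x) = sum over i of f_i(x) * G_i(x): residual-weighted sum of residual Hessians.
        static double[,] AccumulateB(double[] f, double[][,] G, int n)
        {
            var B = new double[n, n];
            for (int i = 0; i < f.Length; i++)
                for (int r = 0; r < n; r++)
                    for (int c = 0; c < n; c++)
                        B[r, c] += f[i] * G[i][r, c];
            return B;
        }

        static void Main()
        {
            // Two residuals in two variables, with constant Hessians, purely for illustration.
            double[] f = { 0.5, -2.0 };
            double[][,] G = {
                new double[,] { { 2, 0 }, { 0, 0 } },
                new double[,] { { 0, 1 }, { 1, 2 } }
            };
            var B = AccumulateB(f, G, 2);
            Console.WriteLine($"{B[0, 0]} {B[0, 1]}; {B[1, 0]} {B[1, 1]}");  // 1 -2; -2 -4
        }
    }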
E04..::..E04HE_LSQMON
If iprint ≥ 0, you must supply lsqmon, which is suitable for monitoring the minimization process. lsqmon must not change the values of any of its parameters.
If iprint<0, the dummy method E04FDZ can be used as lsqmon.
E04..::..E04HY_LSFUN2
You must supply this method to calculate the vector of values f_i(x) and the Jacobian matrix of first derivatives ∂f_i(x)/∂x_j at any point x. It should be tested separately before being used in conjunction with e04hy (see the E04 class).
E04..::..E04HY_LSHES2
You must supply this method to calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). It should be tested separately before being used in conjunction with e04hy (see the E04 class).
E04..::..E04JC_MONFUN
monfun may be used to monitor the optimization process. It is invoked every time a new trust-region radius is chosen.
If no monitoring is required, monfun may be the dummy monitoring method E04JCP supplied by the NAG Library.
E04..::..E04JC_OBJFUN
objfun must evaluate the objective function F at a specified vector x.
E04..::..E04JY_FUNCT1
You must supply funct1 to calculate the value of the function F(x) at any point x. It should be tested separately before being used with e04jy (see the E04 class).
E04..::..E04KD_FUNCT
funct must evaluate the function F(x) and its first derivatives ∂F(x)/∂x_j at a specified point. (However, if you do not wish to calculate F or its first derivatives at a particular x, there is the option of setting a parameter to cause e04kd to terminate immediately.)
E04..::..E04KD_MONIT
If iprint ≥ 0, you must supply monit, which is suitable for monitoring the minimization process. monit must not change the values of any of its parameters.
If iprint<0, a monit with the correct parameter list must still be supplied, although it will not be called.
E04..::..E04KY_FUNCT2
You must supply funct2 to calculate the values of the function F(x) and its first derivatives ∂F(x)/∂x_j at any point x. It should be tested separately before being used in conjunction with e04ky (see the E04 class).
E04..::..E04KZ_FUNCT2
You must supply this method to calculate the values of the function F(x) and its first derivatives ∂F(x)/∂x_j at any point x. It should be tested separately before being used in conjunction with e04kz (see the E04 class).
E04..::..E04LB_FUNCT
funct must evaluate the function F(x) and its first derivatives ∂F(x)/∂x_j at any point x. (However, if you do not wish to calculate F(x) or its first derivatives at a particular x, there is the option of setting a parameter to cause e04lb to terminate immediately.)
E04..::..E04LB_H
h must calculate the second derivatives of F at any point x. (As with funct, there is the option of causing e04lb to terminate immediately.)
E04..::..E04LB_MONIT
If iprint ≥ 0, you must supply monit, which is suitable for monitoring the minimization process. monit must not change the values of any of its parameters.
If iprint<0, a monit with the correct parameter list should still be supplied, although it will not be called.
E04..::..E04LY_FUNCT2
You must supply this method to calculate the values of the function F(x) and its first derivatives ∂F(x)/∂x_j at any point x. It should be tested separately before being used in conjunction with e04ly (see the E04 class).
E04..::..E04LY_HESS2
You must supply this method to evaluate the elements H_{ij} = ∂²F/∂x_i∂x_j of the matrix of second derivatives of F(x) at any point x. It should be tested separately before being used in conjunction with e04ly (see the E04 class).
E04..::..E04NF_QPHESS
In general, you need not provide a version of qphess, because a ‘default’ method with name E04NFU/E54NFU is included in the Library. However, the algorithm of e04nf requires only the product of H or H^T H and a vector x; and in some cases you may obtain increased efficiency by providing a version of qphess that avoids the need to define the elements of the matrices H or H^T H explicitly.
qphess is not referenced if the problem is of type FP or LP, in which case qphess may be the method E04NFU/E54NFU.
E04..::..E04NK_QPHX
For QP problems, you must supply a version of qphx to compute the matrix product Hx. If H has zero rows and columns, it is most efficient to order the variables x = (y, z)^T so that
Hx = [H1 0; 0 0] [y; z] = [H1 y; 0],
where the nonlinear variables y appear first as shown. For FP and LP problems, qphx will never be called by e04nk and hence qphx may be the dummy method E04NKU/E54NKU.
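A minimal C# sketch of the partitioned product described above follows, assuming the leading ncolh variables are the nonlinear variables y. It shows only the arithmetic qphx has to perform, not the actual delegate signature.

    using System;

    class QphxSketch
    {
        // Compute Hx = (H1*y, 0) when only the leading ncolh variables (y) enter the
        // quadratic term, i.e., H = [H1 0; 0 0] after reordering.
        static double[] PartitionedHx(double[,] h1, double[] x, int ncolh)
        {
            int n = x.Length;
            var hx = new double[n];                 // trailing entries stay zero
            for (int i = 0; i < ncolh; i++)
                for (int j = 0; j < ncolh; j++)
                    hx[i] += h1[i, j] * x[j];       // H1 * y
            return hx;
        }

        static void Main()
        {
            double[,] h1 = { { 2, 1 }, { 1, 3 } };  // Hessian of the quadratic part
            double[] x = { 1.0, 2.0, 5.0, -1.0 };   // y = (1, 2), z = (5, -1)
            Console.WriteLine(string.Join(", ", PartitionedHx(h1, x, 2)));  // 4, 7, 0, 0
        }
    }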
E04..::..E04NQ_QPHX
For QP problems, you must supply a version of qphx to compute the matrix product Hx for a given vector x. If H has rows and columns of zeros, it is most efficient to order x so that the nonlinear variables appear first. For example, if x = (y, z)^T and only y enters the objective quadratically then
Hx = [H1 0; 0 0] [y; z] = [H1 y; 0].
In this case, ncolh should be the dimension of y, and qphx should compute H1 y. For FP and LP problems, qphx will never be called by e04nq and hence qphx may be the dummy method E04NSH.
E04..::..E04UC_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by e04uc and confun may be the dummy method E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E04..::..E04UC_OBJFUN
objfun must calculate the objective function F(x) and (optionally) its gradient g(x) = ∂F/∂x for a specified n-vector x.
E04..::..E04UG_CONFUN
confun must calculate the vector F(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂F/∂x) for a specified n1 (≤ n) element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by e04ug and confun may be the dummy method E04UGM. (E04UGM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E04..::..E04UG_OBJFUN
objfun must calculate the nonlinear part of the objective function f(x) and (optionally) its gradient (= ∂f/∂x) for a specified n1 (≤ n) element vector x. If there are no nonlinear objective variables (i.e., nonln=0), objfun will never be called by e04ug and objfun may be the dummy method E04UGN. (E04UGN is included in the NAG Library.)
E04..::..E04US_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by e04us and confun may be the dummy method E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E04..::..E04US_OBJFUN
objfun must calculate either the i-th element of the vector f(x) = (f_1(x), f_2(x), ..., f_m(x))^T or all m elements of f(x) and (optionally) its Jacobian (= ∂f/∂x) for a specified n-element vector x.
E04..::..E04VH_USRFUN
usrfun must define the nonlinear portion f(x) of the problem functions F(x) = f(x) + Ax, along with its gradient elements G_{ij}(x) = ∂f_i(x)/∂x_j. (A dummy method is needed even if f ≡ 0 and all functions are linear.)
In general, usrfun should return all function and gradient values on every entry except perhaps the last. This provides maximum reliability and corresponds to the default option setting, Derivative Option=1.
The elements of G(x) are stored in the array g (of length leng) in the order specified by the input arrays igfun and jgvar.
In practice it is often convenient not to code gradients. e04vh is able to estimate them by finite differences, using a call to usrfun for each variable x_j for which some ∂f_i(x)/∂x_j needs to be estimated. However, this reduces the reliability of the optimization algorithm, and it can be very expensive if there are many such variables x_j.
As a compromise, e04vh allows you to code as many gradients as you like. This option is implemented as follows. Just before usrfun is called, each element of the derivative array g is initialized to a specific value. On exit, any element retaining that value must be estimated by finite differences (a sketch of this mechanism is given after the rules of thumb below).
Some rules of thumb follow:
(i) for maximum reliability, compute all gradients;
(ii) if the gradients are expensive to compute, specify optional parameter Nonderivative Linesearch and use the value of the input parameter needg to avoid computing them on certain entries. (There is no need to compute gradients if needg=0 on entry to usrfun.);
(iii) if not all gradients are known, you must specify Derivative Option=0. You should still compute as many gradients as you can. (It often happens that some of them are constant or zero.);
(iv) again, if the known gradients are expensive, don't compute them if needg=0 on entry to usrfun;
(v) use the input parameter status to test for special actions on the first or last entries;
(vi) while usrfun is being developed, use the optional parameter Verify Level to check the computation of gradients that are supposedly known;
(vii) usrfun is not called until the linear constraints and bounds on x are satisfied. This helps confine x to regions where the functions fix are likely to be defined. However, be aware of the optional parameter Minor Feasibility Tolerance if the functions have singularities on the constraint boundaries;
(viii) set status=-1 if some of the functions are undefined. The linesearch will shorten the step and try again;
(ix) set status ≤ -2 if you want e04vh to stop.
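The following generic C# sketch illustrates the mechanism described above: gradient elements left at a sentinel value are filled in by forward differences. It is not the e04vh implementation, only the idea.

    using System;

    class FiniteDiffSketch
    {
        // Gradient slots the user left untouched (still equal to the sentinel) are
        // estimated by forward differences; user-supplied values are kept.
        const double Unset = double.NaN;

        static void Main()
        {
            Func<double[], double> f = x => x[0] * x[0] + Math.Sin(x[1]);
            double[] x0 = { 1.5, 0.3 };
            double[] g = { Unset, Unset };

            // Suppose the user codes only the first gradient element.
            g[0] = 2.0 * x0[0];

            // Estimate the remaining elements by forward differences.
            double h = 1e-7, f0 = f(x0);
            for (int j = 0; j < x0.Length; j++)
            {
                if (!double.IsNaN(g[j])) continue;   // user-supplied, keep it
                double[] xh = (double[])x0.Clone();
                xh[j] += h;
                g[j] = (f(xh) - f0) / h;
            }
            Console.WriteLine(string.Join(", ", g));
        }
    }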
E04..::..E04VJ_USRFUN
usrfun must define the problem functions Fx. This method is passed to e04vj as the external parameter usrfun.
E04..::..E04WD_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian, ∂c/∂x, for a specified n-vector x. If there are no nonlinear constraints (i.e., ncnln=0), e04wd will never call confun, so it may be the dummy method E04WDP. (E04WDP is included in the NAG Library). If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
If all constraint gradients (Jacobian elements) are known (i.e., Derivative Level=2 or 3), any constant elements may be assigned to cjac once only at the start of the optimization. An element of cjac that is not subsequently assigned in confun will retain its initial value throughout. Constant elements may be loaded in cjac during the first call to confun (signalled by the value of nstate=1). The ability to preload constants is useful when many Jacobian elements are identically zero, in which case cjac may be initialized to zero and nonzero elements may be reset by confun.
It must be emphasized that, if Derivative Level<2, unassigned elements of cjac are not treated as constant; they are estimated by finite differences, at nontrivial expense.
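A simplified C# sketch of the preloading scheme described above follows; the parameter list here is hypothetical and much reduced compared with the actual E04WD_CONFUN delegate.

    using System;

    class ConfunSketch
    {
        // Constant Jacobian elements are assigned once on the first call (nstate == 1)
        // and retained thereafter; only the varying elements are reassigned every call.
        static void Confun(int nstate, double[] x, double[] c, double[,] cjac)
        {
            if (nstate == 1)
            {
                cjac[0, 1] = 3.0;                   // d c_1 / d x_2 is constant
            }
            c[0] = x[0] * x[0] + 3.0 * x[1];        // c_1(x)
            cjac[0, 0] = 2.0 * x[0];                // varying element
        }

        static void Main()
        {
            var cjac = new double[1, 2];
            var c = new double[1];
            Confun(1, new[] { 2.0, 1.0 }, c, cjac);     // first call: constants loaded
            Confun(0, new[] { -1.0, 4.0 }, c, cjac);    // later call: constants retained
            Console.WriteLine($"c = {c[0]}, cjac = [{cjac[0, 0]}, {cjac[0, 1]}]");
        }
    }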
E04..::..E04WD_OBJFUN
objfun must calculate the objective function F(x) and (optionally) its gradient g(x) = ∂F/∂x for a specified n-vector x.
E04..::..E04XA_OBJFUN
If mode=0 or 2, objfun must calculate the objective function; otherwise if mode=1, objfun must calculate the objective function and the gradients.
E04..::..E04YA_LSQFUN
lsqfun must calculate the vector of values f_i(x) and their first derivatives ∂f_i(x)/∂x_j at any point x. (The minimization methods mentioned in [Description] give you the option of resetting a parameter to terminate immediately. e04ya will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04..::..E04YB_LSQFUN
lsqfun must calculate the vector of values f_i(x) and their first derivatives ∂f_i(x)/∂x_j at any point x. (e04he gives you the option of resetting parameters of lsqfun to cause the minimization process to terminate immediately. e04yb will also terminate immediately, without finishing the checking process, if the parameter in question is reset.)
E04..::..E04YB_LSQHES
lsqhes must calculate the elements of the symmetric matrix
B(x) = Σ_{i=1}^{m} f_i(x) G_i(x),
at any point x, where G_i(x) is the Hessian matrix of f_i(x). (As with lsqfun, a parameter can be set to cause immediate termination.)
E05..::..E05JB_MONIT
monit may be used to monitor the optimization process. It is invoked upon every successful completion of the procedure in which a sub-box is considered for splitting. It will also be called just before e05jb exits if that splitting procedure was not successful.
If no monitoring is required, monit may be the dummy monitoring method E05JBK supplied by the NAG Library.
E05..::..E05JB_OBJFUN
objfun must evaluate the objective function F(x) for a specified n-vector x.
E05..::..E05UC_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by e05uc and confun may be the dummy method E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E05..::..E05UC_OBJFUN
objfun must calculate the objective function F(x) and (optionally) its gradient g(x) = ∂F/∂x for a specified n-vector x.
E05..::..E05UC_START
start must calculate the npts starting points to be used by the local optimizer. If you do not wish to write a method specific to your problem then E05UCZ may be used as the actual argument. E05UCZ is supplied in the NAG Library and uses the NAG quasi-random number generators to distribute starting points uniformly across the domain. It is affected by the value of repeat.
E05..::..E05US_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by e05us and confun may be the dummy method E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.
E05..::..E05US_OBJFUN
objfun must calculate either the i-th element of the vector f(x) = (f_1(x), f_2(x), ..., f_m(x))^T or all m elements of f(x) and (optionally) its Jacobian (= ∂f/∂x) for a specified n-element vector x.
E05..::..E05US_START
start must calculate the npts starting points to be used by the local optimizer. If you do not wish to write a method specific to your problem then E05UCZ may be used as the actual argument. E05UCZ is supplied in the NAG Library and uses the NAG quasi-random number generators to distribute starting points uniformly across the domain. It is affected by the value of repeat1.
F01..::..F01EF_F
The method f evaluates f(z_i) at a number of points z_i.
F01..::..F01FF_F
The method f evaluates f(z_i) at a number of points z_i.
G02..::..G02EF_MONFUN
You may define your own function or specify the NAG defined default function G02EFH.
G02..::..G02HB_UCV
ucv must return the value of the function u for a given value of its argument. The value of u must be non-negative.
G02..::..G02HD_CHI
If isigma>0, chi must return the value of the weight function χ for a given value of its argument. The value of χ must be non-negative.
G02..::..G02HD_PSI
psi must return the value of the weight function ψ for a given value of its argument.
G02..::..G02HF_PSI
psi must return the value of the ψ function for a given value of its argument.
G02..::..G02HF_PSP
psp must return the value of ψ′(t) = (d/dt)ψ(t) for a given value of its argument.
G02..::..G02HL_UCV
ucv must return the values of the functions u and w and their derivatives for a given value of its argument.
G02..::..G02HM_UCV
ucv must return the values of the functions u and w for a given value of its argument.
H..::..H02CB_MONIT
monit may be used to print out intermediate output and to affect the course of the computation. Specifically, it allows you to specify a realistic value for the cut-off value (see [Description]) and to terminate the algorithm. If you do not require any intermediate output, have no estimate of the cut-off value and require an exhaustive tree search then monit may be the dummy method H02CBU.
H..::..H02CB_QPHESS
In general, you need not provide a version of qphess, because a ‘default’ method with name e04nfu is included in the Library. However, the algorithm of h02cb requires only the product of H or H^T H and a vector x; and in some cases you may obtain increased efficiency by providing a version of qphess that avoids the need to define the elements of the matrices H or H^T H explicitly. qphess is not referenced if the problem is of type FP or LP, in which case qphess may be the method e04nfu.
H..::..H02CE_MONIT
To provide feed-back on the progress of the branch and bound process. Additionally monit provides, via its parameter halt, the ability to terminate the process. (You might choose to do this when an integer solution is found, rather than search for a better solution.) If you do not require any intermediate output then monit may be the dummy method (H02CEY not in this release).
H..::..H02CE_QPHX
For QP problems, you must supply a version of qphx to compute the matrix product Hx. If H has rows and columns consisting entirely of zeros, it is most efficient to order the variables x = (y, z)^T so that
Hx = [H1 0; 0 0] [y; z] = [H1 y; 0],
where the nonlinear variables y appear first as shown. For LP problems, qphx will never be called by h02ce.
PrintManager..::..MessageLogger
Delegate type to use for messages.