e04us is designed to minimize an arbitrary smooth sum of squares function subject to constraints (which may include simple bounds on the variables, linear constraints and smooth nonlinear constraints) using a sequential quadratic programming (SQP) method. As many first derivatives as possible should be supplied by you; any unspecified derivatives are approximated by finite differences. See the description of the optional parameter Derivative Level, in [Description of the Optional Parameters]. It is not intended for large sparse problems.
e04us may also be used for unconstrained, bound-constrained and linearly constrained optimization.

Syntax

C#
public static void e04us(
	int m,
	int n,
	int nclin,
	int ncnln,
	double[,] a,
	double[] bl,
	double[] bu,
	double[] y,
	E04..::..E04US_CONFUN confun,
	E04..::..E04US_OBJFUN objfun,
	out int iter,
	int[] istate,
	double[] c,
	double[,] cjac,
	double[] f,
	double[,] fjac,
	double[] clamda,
	out double objf,
	double[,] r,
	double[] x,
	E04..::..e04usOptions options,
	out int ifail
)
Visual Basic
Public Shared Sub e04us ( _
	m As Integer, _
	n As Integer, _
	nclin As Integer, _
	ncnln As Integer, _
	a As Double(,), _
	bl As Double(), _
	bu As Double(), _
	y As Double(), _
	confun As E04..::..E04US_CONFUN, _
	objfun As E04..::..E04US_OBJFUN, _
	<OutAttribute> ByRef iter As Integer, _
	istate As Integer(), _
	c As Double(), _
	cjac As Double(,), _
	f As Double(), _
	fjac As Double(,), _
	clamda As Double(), _
	<OutAttribute> ByRef objf As Double, _
	r As Double(,), _
	x As Double(), _
	options As E04..::..e04usOptions, _
	<OutAttribute> ByRef ifail As Integer _
)
Visual C++
public:
static void e04us(
	int m, 
	int n, 
	int nclin, 
	int ncnln, 
	array<double,2>^ a, 
	array<double>^ bl, 
	array<double>^ bu, 
	array<double>^ y, 
	E04..::..E04US_CONFUN^ confun, 
	E04..::..E04US_OBJFUN^ objfun, 
	[OutAttribute] int% iter, 
	array<int>^ istate, 
	array<double>^ c, 
	array<double,2>^ cjac, 
	array<double>^ f, 
	array<double,2>^ fjac, 
	array<double>^ clamda, 
	[OutAttribute] double% objf, 
	array<double,2>^ r, 
	array<double>^ x, 
	E04..::..e04usOptions^ options, 
	[OutAttribute] int% ifail
)
F#
static member e04us : 
        m : int * 
        n : int * 
        nclin : int * 
        ncnln : int * 
        a : float[,] * 
        bl : float[] * 
        bu : float[] * 
        y : float[] * 
        confun : E04..::..E04US_CONFUN * 
        objfun : E04..::..E04US_OBJFUN * 
        iter : int byref * 
        istate : int[] * 
        c : float[] * 
        cjac : float[,] * 
        f : float[] * 
        fjac : float[,] * 
        clamda : float[] * 
        objf : float byref * 
        r : float[,] * 
        x : float[] * 
        options : E04..::..e04usOptions * 
        ifail : int byref -> unit 

Parameters

m
Type: System..::..Int32
On entry: m, the number of subfunctions associated with F(x).
Constraint: m>0.
n
Type: System..::..Int32
On entry: n, the number of variables.
Constraint: n>0.
nclin
Type: System..::..Int32
On entry: nL, the number of general linear constraints.
Constraint: nclin ≥ 0.
ncnln
Type: System..::..Int32
On entry: nN, the number of nonlinear constraints.
Constraint: ncnln ≥ 0.
a
Type: array<System..::..Double,2>
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: dim1 ≥ max(1,nclin)
Note: the second dimension of the array a must be at least n if nclin>0, and at least 1 otherwise.
On entry: the ith row of a contains the ith row of the matrix AL of general linear constraints in (1). That is, the ith row contains the coefficients of the ith general linear constraint, for i=1,2,…,nclin.
If nclin=0, the array a is not referenced.
bl
Type: array<System..::..Double>
An array of size [n+nclin+ncnln]
On entry: bl must contain the lower bounds and bu the upper bounds for all the constraints, in the following order. The first n elements of each array must contain the bounds on the variables, the next nL elements the bounds for the general linear constraints (if any) and the next nN elements the bounds for the general nonlinear constraints (if any). To specify a nonexistent lower bound (i.e., lj=-∞), set bl[j-1] ≤ -bigbnd, and to specify a nonexistent upper bound (i.e., uj=+∞), set bu[j-1] ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl[j-1]=bu[j-1]=β, say, where |β| < bigbnd.
Constraints:
  • bl[j-1] ≤ bu[j-1], for j=1,2,…,n+nclin+ncnln;
  • if bl[j-1]=bu[j-1]=β, |β| < bigbnd.
bu
Type: array<System..::..Double>
An array of size [n+nclin+ncnln]
On entry: bl must contain the lower bounds and bu the upper bounds for all the constraints, in the following order. The first n elements of each array must contain the bounds on the variables, the next nL elements the bounds for the general linear constraints (if any) and the next nN elements the bounds for the general nonlinear constraints (if any). To specify a nonexistent lower bound (i.e., lj=-∞), set bl[j-1] ≤ -bigbnd, and to specify a nonexistent upper bound (i.e., uj=+∞), set bu[j-1] ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl[j-1]=bu[j-1]=β, say, where |β| < bigbnd.
Constraints:
  • bl[j-1] ≤ bu[j-1], for j=1,2,…,n+nclin+ncnln;
  • if bl[j-1]=bu[j-1]=β, |β| < bigbnd.
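As an illustration of these rules, the following C# sketch uses hypothetical values (assuming n=2, nclin=1 and ncnln=1, with bigbnd at its default of 1.0e20) to set a one-sided bound, a fixed variable, a one-sided linear constraint and a two-sided nonlinear constraint.

// Hypothetical bounds for n = 2 variables, 1 linear and 1 nonlinear constraint.
// Order: variables first, then linear constraints, then nonlinear constraints.
double bigbnd = 1.0e20;               // default Infinite Bound Size
double[] bl = new double[2 + 1 + 1];
double[] bu = new double[2 + 1 + 1];
bl[0] = 0.4;  bu[0] = bigbnd;         // 0.4 <= x1, no upper bound
bl[1] = 1.0;  bu[1] = 1.0;            // x2 fixed at 1.0 (equality: bl = bu)
bl[2] = 1.0;  bu[2] = bigbnd;         // linear constraint >= 1.0, no upper bound
bl[3] = 0.0;  bu[3] = 0.09;           // nonlinear constraint in the range [0.0, 0.09]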
y
Type: array<System..::..Double>
An array of size [m]
On entry: the coefficients of the constant vector y of the objective function.
confun
Type: NagLibrary..::..E04..::..E04US_CONFUN
confun must calculate the vector c(x) of nonlinear constraint functions and (optionally) its Jacobian (= ∂c/∂x) for a specified n-element vector x. If there are no nonlinear constraints (i.e., ncnln=0), confun will never be called by e04us and confun may be the dummy method E04UDM. (E04UDM is included in the NAG Library.) If there are nonlinear constraints, the first call to confun will occur before the first call to objfun.

A delegate of type E04US_CONFUN.

Note:  confun should be tested separately before being used in conjunction with e04us. See also the description of the optional parameter Verify.
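The following is a minimal sketch of a constraint delegate for a single nonlinear constraint c1(x) = 0.49 x2 − x1 x2 (the constraint used in [Example]). The argument list (mode, ncnln, n, needc, x, c, cjac, nstate) and the mode/needc conventions shown in the comments are assumptions based on the descriptions in this document; the E04US_CONFUN documentation should be consulted for the authoritative signature.

// Hedged sketch of a confun delegate: one nonlinear constraint c1(x) = 0.49*x2 - x1*x2.
// The argument list is assumed; see E04US_CONFUN for the exact signature.
static void MyConfun(ref int mode, int ncnln, int n, int[] needc,
                     double[] x, double[] c, double[,] cjac, int nstate)
{
    if (needc[0] > 0)                    // constraint 1 has been requested
    {
        if (mode == 0 || mode == 2)      // constraint value required
            c[0] = 0.49 * x[1] - x[0] * x[1];
        if (mode == 1 || mode == 2)      // constraint Jacobian required
        {
            cjac[0, 0] = -x[1];          // d c1 / d x1
            cjac[0, 1] = 0.49 - x[0];    // d c1 / d x2
        }
    }
}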
objfun
Type: NagLibrary..::..E04..::..E04US_OBJFUN
objfun must calculate either the ith element of the vector f(x) = (f1(x), f2(x), …, fm(x))^T or all m elements of f(x) and (optionally) its Jacobian (= ∂f/∂x) for a specified n-element vector x.

A delegate of type E04US_OBJFUN.

Note:  objfun should be tested separately before being used in conjunction with e04us. See also the description of the optional parameter Verify.
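A corresponding sketch of an objective delegate for the subfunctions fi(x) = x1 + (0.49 − x1) e^(−x2(ai−8)) of [Example] is given below. The argument list (mode, m, n, x, f, fjac, nstate) is again an assumption (the real delegate may also carry an argument indicating that only a single subfunction is required, as described above); see the E04US_OBJFUN documentation for the authoritative signature. The array aData, holding the values ai from the data table, is assumed to be available to the delegate (for example, as a static field).

// Hedged sketch of an objfun delegate for f_i(x) = x1 + (0.49 - x1)*exp(-x2*(aData[i] - 8)).
// The argument list is assumed; see E04US_OBJFUN for the exact signature.
static void MyObjfun(ref int mode, int m, int n, double[] x,
                     double[] f, double[,] fjac, int nstate)
{
    for (int i = 0; i < m; i++)
    {
        double e = Math.Exp(-x[1] * (aData[i] - 8.0));
        if (mode == 0 || mode == 2)      // subfunction values required
            f[i] = x[0] + (0.49 - x[0]) * e;
        if (mode == 1 || mode == 2)      // subfunction Jacobian required
        {
            fjac[i, 0] = 1.0 - e;                               // d f_i / d x1
            fjac[i, 1] = -(0.49 - x[0]) * (aData[i] - 8.0) * e; // d f_i / d x2
        }
    }
}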
iter
Type: System..::..Int32%
On exit: the number of major iterations performed.
istate
Type: array<System..::..Int32>
An array of size [n+nclin+ncnln]
On entry: need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, the elements of istate corresponding to the bounds and linear constraints define the initial working set for the procedure that finds a feasible point for the linear constraints and bounds. The active set at the conclusion of this procedure and the elements of istate corresponding to nonlinear constraints then define the initial working set for the first QP subproblem. More precisely, the first n elements of istate refer to the upper and lower bounds on the variables, the next nL elements refer to the upper and lower bounds on AL x, and the next nN elements refer to the upper and lower bounds on c(x). Possible values for istate[j-1] are as follows:
istate[j-1]  Meaning
0  The corresponding constraint is not in the initial QP working set.
1  This inequality constraint should be in the working set at its lower bound.
2  This inequality constraint should be in the working set at its upper bound.
3  This equality constraint should be in the initial working set. This value must not be specified unless bl[j-1]=bu[j-1].
The values -2, -1 and 4 are also acceptable but will be modified by the method. If e04us has been called previously with the same values of n, nclin and ncnln, istate already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The method also adjusts (if necessary) the values supplied in x to be consistent with istate.
Constraint: -2 ≤ istate[j-1] ≤ 4, for j=1,2,…,n+nclin+ncnln.
On exit: the status of the constraints in the QP working set at the point returned in x. The significance of each possible value of istate[j-1] is as follows:
istate[j-1]  Meaning
-2  This constraint violates its lower bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
-1  This constraint violates its upper bound by more than the appropriate feasibility tolerance (see the optional parameters Linear Feasibility Tolerance and Nonlinear Feasibility Tolerance). This value can occur only when no feasible point can be found for a QP subproblem.
0  The constraint is satisfied to within the feasibility tolerance, but is not in the QP working set.
1  This inequality constraint is included in the QP working set at its lower bound.
2  This inequality constraint is included in the QP working set at its upper bound.
3  This constraint is included in the QP working set as an equality. This value of istate can occur only when bl[j-1]=bu[j-1].
c
Type: array<System..::..Double>
An array of size [max(1,ncnln)]
On exit: if ncnln>0, c[i-1] contains the value of the ith nonlinear constraint function ci at the final iterate, for i=1,2,…,ncnln.
If ncnln=0, the array c is not referenced.
cjac
Type: array<System..::..Double,2>
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: dim1 ≥ max(1,ncnln)
Note: the second dimension of the array cjac must be at least n if ncnln>0, and at least 1 otherwise.
On entry: in general, cjac need not be initialized before the call to e04us. However, if Derivative Level=3, you may optionally set the constant elements of cjac (see parameter nstate in the description of confun). Such constant elements need not be re-assigned on subsequent calls to confun.
On exit: if ncnln>0, cjac contains the Jacobian matrix of the nonlinear constraint functions at the final iterate, i.e., cjac[i-1,j-1] contains the partial derivative of the ith constraint function with respect to the jth variable, for i=1,2,…,ncnln and j=1,2,…,n. (See the discussion of parameter cjac under confun.)
If ncnln=0, the array cjac is not referenced.
f
Type: array<System..::..Double>
An array of size [m]
On exit: f[i-1] contains the value of the ith function fi at the final iterate, for i=1,2,…,m.
fjac
Type: array<System..::..Double,2>
An array of size [dim1, n]
Note: dim1 must satisfy the constraint: dim1 ≥ m
On entry: in general, fjac need not be initialized before the call to e04us. However, if Derivative Level=3, you may optionally set the constant elements of fjac (see parameter nstate in the description of objfun). Such constant elements need not be re-assigned on subsequent calls to objfun.
On exit: the Jacobian matrix of the functions f1, f2, …, fm at the final iterate, i.e., fjac[i-1,j-1] contains the partial derivative of the ith function with respect to the jth variable, for i=1,2,…,m and j=1,2,…,n. (See also the discussion of parameter fjac under objfun.)
clamda
Type: array<System..::..Double>
An array of size [n+nclin+ncnln]
On entry: need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, clamda[j-1] must contain a multiplier estimate for each nonlinear constraint with a sign that matches the status of the constraint specified by the istate array, for j=n+nclin+1,…,n+nclin+ncnln. The remaining elements need not be set. Note that if the jth constraint is defined as ‘inactive’ by the initial value of the istate array (i.e., istate[j-1]=0), clamda[j-1] should be zero; if the jth constraint is an inequality active at its lower bound (i.e., istate[j-1]=1), clamda[j-1] should be non-negative; if the jth constraint is an inequality active at its upper bound (i.e., istate[j-1]=2), clamda[j-1] should be non-positive. If necessary, the method will modify clamda to match these rules.
On exit: the values of the QP multipliers from the last QP subproblem. clamda[j-1] should be non-negative if istate[j-1]=1 and non-positive if istate[j-1]=2.
objf
Type: System..::..Double%
On exit: the value of the objective function at the final iterate.
r
Type: array<System..::..Double,2>
An array of size [dim1, n]
Note: dim1 must satisfy the constraint: dim1 ≥ n
On entry: need not be initialized if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, r must contain the upper triangular Cholesky factor R of the initial approximation of the Hessian of the Lagrangian function, with the variables in the natural order. Elements not in the upper triangular part of r are assumed to be zero and need not be assigned.
On exit: if Hessian=NO, r contains the upper triangular Cholesky factor R of Q^T H̃ Q, an estimate of the transformed and reordered Hessian of the Lagrangian at x (see (6) in e04uf). If Hessian=YES, r contains the upper triangular Cholesky factor R of H, the approximate (untransformed) Hessian of the Lagrangian, with the variables in the natural order.
x
Type: array<System..::..Double>
An array of size [n]
On entry: an initial estimate of the solution.
On exit: the final estimate of the solution.
options
Type: NagLibrary..::..E04..::..e04usOptions
An Object of type E04.e04usOptions. Used to configure optional parameters to this method.
ifail
Type: System..::..Int32%
On exit: ifail=0 unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

Description

e04us is designed to solve the nonlinear least squares programming problem – the minimization of a smooth nonlinear sum of squares function subject to a set of constraints on the variables. The problem is assumed to be stated in the following form:
minimize over x∈R^n:  F(x) = (1/2) Σ_{i=1}^{m} (yi − fi(x))^2   subject to   l ≤ ( x, AL x, c(x) ) ≤ u,   (1)
where F(x) (the objective function) is a nonlinear function which can be represented as the sum of squares of m subfunctions y1−f1(x), y2−f2(x), …, ym−fm(x), the yi are constant, AL is an nL by n constant matrix, and c(x) is an nN-element vector of nonlinear constraint functions. (The matrix AL and the vector c(x) may be empty.) The objective function and the constraint functions are assumed to be smooth, i.e., at least twice-continuously differentiable. (The method of e04us will usually solve (1) if any isolated discontinuities are away from the solution.)
Note that although the bounds on the variables could be included in the definition of the linear constraints, we prefer to distinguish between them for reasons of computational efficiency. For the same reason, the linear constraints should not be included in the definition of the nonlinear constraints. Upper and lower bounds are specified for all the variables and for all the constraints. An equality constraint can be specified by setting li=ui. If certain bounds are not present, the associated elements of l or u can be set to special values that will be treated as −∞ or +∞. (See the description of the optional parameter Infinite Bound Size.)
You must supply an initial estimate of the solution to (1), together with methods that define f(x) = (f1(x), f2(x), …, fm(x))^T and c(x) and as many first partial derivatives as possible; unspecified derivatives are approximated by finite differences.
The subfunctions are defined by the array y and objfun, and the nonlinear constraints are defined by confun. On every call, these methods must return appropriate values of f(x) and c(x). You should also provide the available partial derivatives. Any unspecified derivatives are approximated by finite differences; see the description of the optional parameter Derivative Level for a discussion. Note that if there are any nonlinear constraints, then the first call to confun will precede the first call to objfun.
For maximum reliability, it is preferable for you to provide all partial derivatives (see Chapter 8 of Gill et al. (1981) for a detailed discussion). If all gradients cannot be provided, it is similarly advisable to provide as many as possible. While developing objfun and confun, the optional parameter Verify should be used to check the calculation of any known gradients.
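As a concrete illustration of the calling sequence (using the C# signature given in [Syntax]), a call for the problem of [Example] might be sketched as follows. The delegate names MyObjfun and MyConfun are hypothetical, and the construction of the options object with a parameterless constructor is an assumption; the shipped example program e04use.cs should be treated as the authoritative template.

// Hedged sketch of a call to e04us for the problem in [Example]; not a verified program.
int m = 44, n = 2, nclin = 1, ncnln = 1;
double bigbnd = 1.0e20;                               // default Infinite Bound Size
double[,] a = new double[1, 2] { { 1.0, 1.0 } };      // linear constraint x1 + x2 >= 1.0
double[] bl = { 0.4, -4.0, 1.0, 0.09 };               // lower bounds: variables, linear, nonlinear
double[] bu = { bigbnd, bigbnd, bigbnd, bigbnd };     // no upper bounds
double[] y = new double[m];                           // fill with the data values from [Example]
double[] x = { 0.4, 0.0 };                            // initial estimate

int[] istate = new int[n + nclin + ncnln];
double[] c = new double[Math.Max(1, ncnln)];
double[,] cjac = new double[Math.Max(1, ncnln), n];
double[] f = new double[m];
double[,] fjac = new double[m, n];
double[] clamda = new double[n + nclin + ncnln];
double[,] r = new double[n, n];
int iter, ifail;
double objf;

// Assumption: the options object can be created with default settings like this.
E04.e04usOptions options = new E04.e04usOptions();

E04.e04us(m, n, nclin, ncnln, a, bl, bu, y, MyConfun, MyObjfun,
          out iter, istate, c, cjac, f, fjac, clamda, out objf, r, x,
          options, out ifail);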

References

Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Hock W and Schittkowski K (1981) Test Examples for Nonlinear Programming Codes. Lecture Notes in Economics and Mathematical Systems 187 Springer–Verlag

Error Indicators and Warnings

Note: e04us may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDA, LDCJ, LDFJ, LDR) In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
ifail<0
A negative value of ifail indicates an exit from e04us because you set mode<0 in objfun or confun. The value of ifail will be the same as your setting of mode.
ifail=1
The final iterate x satisfies the first-order Kuhn–Tucker conditions (see [] in e04uf) to the accuracy requested, but the sequence of iterates has not yet converged. e04us was terminated because no further improvement could be made in the merit function (see [Description of the Printed Output]).
This value of ifail may occur in several circumstances. The most common situation is that you ask for a solution with accuracy that is not attainable with the given precision of the problem (as specified by the optional parameter Function Precision (default value = ε^0.9, where ε is the machine precision)). This condition will also occur if, by chance, an iterate is an ‘exact’ Kuhn–Tucker point, but the change in the variables was significant at the previous iteration. (This situation often happens when minimizing very simple functions, such as quadratics.)
If the four conditions listed in [Parameters] for ifail=0 are satisfied, x is likely to be a solution of (1) even if ifail=1.
ifail=2
e04us has terminated without finding a feasible point for the linear constraints and bounds, which means that either no feasible point exists for the given value of the optional parameter Linear Feasibility Tolerance (default value = √ε, where ε is the machine precision), or no feasible point could be found in the number of iterations specified by the optional parameter Minor Iteration Limit (default value = max(50, 3(n+nL+nN))). You should check that there are no constraint redundancies. If the data for the constraints are accurate only to an absolute precision σ, you should ensure that the value of the optional parameter Linear Feasibility Tolerance is greater than σ. For example, if all elements of AL are of order unity and are accurate to only three decimal places, Linear Feasibility Tolerance should be at least 10^−3.
ifail=3
No feasible point could be found for the nonlinear constraints. The problem may have no feasible solution. This means that there has been a sequence of QP subproblems for which no feasible point could be found (indicated by I at the end of each line of intermediate printout produced by the major iterations; see [Description of the Printed Output]). This behaviour will occur if there is no feasible point for the nonlinear constraints. (However, there is no general test that can determine whether a feasible point exists for a set of nonlinear constraints.) If the infeasible subproblems occur from the very first major iteration, it is highly likely that no feasible point exists. If infeasibilities occur when earlier subproblems have been feasible, small constraint inconsistencies may be present. You should check the validity of constraints with negative values of istate. If you are convinced that a feasible point does exist, e04us should be restarted at a different starting point.
ifail=4
The limiting number of iterations (as determined by the optional parameter Major Iteration Limit (default value = max(50, 3(n+nL)+10nN))) has been reached.
If the algorithm appears to be making satisfactory progress, then Major Iteration Limit may be too small. If so, either increase its value and rerun e04us or, alternatively, rerun e04us using the optional parameter Warm Start. If the algorithm seems to be making little or no progress however, then you should check for incorrect gradients or ill-conditioning as described under ifail=6.
Note that ill-conditioning in the working set is sometimes resolved automatically by the algorithm, in which case performing additional iterations may be helpful. However, ill-conditioning in the Hessian approximation tends to persist once it has begun, so that allowing additional iterations without altering r is usually inadvisable. If the quasi-Newton update of the Hessian approximation was reset during the latter major iterations (i.e., an R occurs at the end of each line of intermediate printout; see [Description of the Printed Output]), it may be worthwhile to try a Warm Start at the final point as suggested above.
ifail=5
Not used by this method.
ifail=6
x does not satisfy the first-order Kuhn–Tucker conditions (see [] in e04uf), and no improved point for the merit function (see [Description of the Printed Output]) could be found during the final linesearch.
This sometimes occurs because an overly stringent accuracy has been requested, i.e., the value of the optional parameter Optimality Tolerance (default value = εr^0.8, where εr is the value of the optional parameter Function Precision (default value = ε^0.9, where ε is the machine precision)) is too small. In this case you should apply the four tests described under ifail=0 to determine whether or not the final solution is acceptable (see Gill et al. (1981) for a discussion of the attainable accuracy).
If many iterations have occurred in which essentially no progress has been made and e04us has failed completely to move from the initial point then user-supplied delegates objfun and/or confun may be incorrect. You should refer to comments under ifail=7 and check the gradients using the optional parameter Verify (default value=0). Unfortunately, there may be small errors in the objective and constraint gradients that cannot be detected by the verification process. Finite difference approximations to first derivatives are catastrophically affected by even small inaccuracies. An indication of this situation is a dramatic alteration in the iterates if the finite difference interval is altered. One might also suspect this type of error if a switch is made to central differences even when Norm Gz and Violtn (see [Description of the Printed Output]) are large.
Another possibility is that the search direction has become inaccurate because of ill-conditioning in the Hessian approximation or the matrix of constraints in the working set; either form of ill-conditioning tends to be reflected in large values of Mnr (the number of iterations required to solve each QP subproblem; see [Description of the Printed Output]).
If the condition estimate of the projected Hessian (Cond Hz; see [Description of Monitoring Information]) is extremely large, it may be worthwhile rerunning e04us from the final point with the optional parameter Warm Start. In this situation, istate and clamda should be left unaltered and r should be reset to the identity matrix.
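A minimal sketch of resetting r to the identity before such a Warm Start rerun (with istate and clamda left as returned by the previous call) is:

// Reset the Cholesky factor r to the identity before rerunning with Warm Start;
// istate and clamda are left unaltered.
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        r[i, j] = (i == j) ? 1.0 : 0.0;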
If the matrix of constraints in the working set is ill-conditioned (i.e., Cond T is extremely large; see [Description of Monitoring Information]), it may be helpful to run e04us with a relaxed value of the optional parameter Feasibility Tolerance (default value = √ε, where ε is the machine precision). (Constraint dependencies are often indicated by wide variations in size in the diagonal elements of the matrix T, whose diagonals will be printed if Major Print Level ≥ 30.)
ifail=7
The user-supplied derivatives of the subfunctions and/or nonlinear constraints appear to be incorrect.
Large errors were found in the derivatives of the subfunctions and/or nonlinear constraints. This value of ifail will occur if the verification process indicated that at least one Jacobian element had no correct figures. You should refer to the printed output to determine which elements are suspected to be in error.
As a first step, you should check that the code for the subfunction and constraint values is correct – for example, by computing the subfunctions at a point where the correct value of F(x) is known. However, care should be taken that the chosen point fully tests the evaluation of the subfunctions. It is remarkable how often the values x=0 or x=1 are used to test function evaluation procedures, and how often the special properties of these numbers make the test meaningless.
Special care should be used in this test if computation of the subfunctions involves subsidiary data communicated in storage. Although the first evaluation of the subfunctions may be correct, subsequent calculations may be in error because some of the subsidiary data has accidentally been overwritten.
Gradient checking will be ineffective if the objective function uses information computed by the constraints, since they are not necessarily computed before each function evaluation.
Errors in programming the subfunctions may be quite subtle in that the subfunction values are ‘almost’ correct. For example, a subfunction may not be accurate to full precision because of the inaccurate calculation of a subsidiary quantity, or the limited accuracy of data upon which the subfunction depends. A common error on machines where numerical calculations are usually performed in double precision is to include even one single precision constant in the calculation of the subfunction; since some compilers do not convert such constants to double precision, half the correct figures may be lost by such a seemingly trivial error.
ifail=8
Not used by this method.
ifail=9
An input parameter is invalid.
overflow
If overflow occurs then either an element of C is very large, or the singular values or singular vectors have been incorrectly supplied.
ifail=-9000
An error occurred; see message report.
ifail=-6000
Invalid Parameters value
ifail=-4000
Invalid dimension for array value
ifail=-8000
Negative dimension for array value

Accuracy

If ifail=0 on exit, then the vector returned in the array x is an estimate of the solution to an accuracy of approximately Optimality Tolerance (default value = ε^0.8, where ε is the machine precision).

Parallelism and Performance

None.

Further Comments

Description of the Printed Output

This section describes the intermediate printout and final printout produced by e04us. The intermediate printout is a subset of the monitoring information produced by the method at every iteration (see [Description of Monitoring Information]). You can control the level of printed output (see the description of the optional parameter Major Print Level). Note that the intermediate printout and final printout are produced only if Major Print Level ≥ 10 (by default no output is produced by e04us).
The following line of summary output (<80 characters) is produced at every major iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be 1 in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see [Algorithmic Details] in e04uf).
Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.
Step is the step αk taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., αk=1) will be taken as the solution is approached.
Merit Function is the value of the augmented Lagrangian merit function (see (12) in e04uf) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see [] in e04uf). As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or e04us terminates with ifail=3 (no feasible point could be found for the nonlinear constraints).
If there are no nonlinear constraints present (i.e., ncnln=0) then this entry contains Objective, the value of the objective function Fx. The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz is ‖Z^T gFR‖, the Euclidean norm of the projected gradient (see [] in e04uf). Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Cond Hz is a lower bound on the condition number of the projected Hessian approximation HZ (HZ = Z^T HFR Z = RZ^T RZ; see (6) and (11) in e04uf). The larger this number, the more difficult the problem.
M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see [] in e04uf).
I is printed if the QP subproblem has no feasible point.
C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that x is close to a Kuhn–Tucker point (see [] in e04uf).
L is printed if the linesearch has produced a relative change in x greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.
R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of R indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, R is modified so that its diagonal condition estimator is bounded.
The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Varbl gives the name (V) and index j, for j=1,2,,n, of the variable.
State gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance, State will be ++ or -- respectively.
A key is sometimes printed before State.
A Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound then there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change.
D Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance.
Value is the value of the variable at the final iteration.
Lower Bound is the lower bound specified for the variable. None indicates that bl[j-1] ≤ -bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu[j-1] ≥ bigbnd.
Lagr Mult is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless bl[j-1] ≤ -bigbnd and bu[j-1] ≥ bigbnd, in which case the entry will be blank. If x is optimal, the multiplier should be non-negative if State is LL and non-positive if State is UL.
Slack is the difference between the variable Value and the nearer of its (finite) bounds bl[j-1] and bu[j-1]. A blank entry indicates that the associated variable is not bounded (i.e., bl[j-1] ≤ -bigbnd and bu[j-1] ≥ bigbnd).
The meaning of the printout for linear and nonlinear constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, bl[j-1] and bu[j-1] are replaced by bl[n+j-1] and bu[n+j-1] respectively, and with the following changes in the heading:
L Con gives the name (L) and index j, for j=1,2,…,nL, of the linear constraint.
N Con gives the name (N) and index (j−nL), for j=nL+1,…,nL+nN, of the nonlinear constraint.
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.

Example

This example is based on Problem 57 in Hock and Schittkowski (1981) and involves the minimization of the sum of squares function
F(x) = (1/2) Σ_{i=1}^{44} (yi − fi(x))^2,
where
fi(x) = x1 + (0.49 − x1) e^(−x2(ai−8))
and
 i    yi    ai      i    yi    ai
 1   0.49     8    23   0.41    22
 2   0.49     8    24   0.40    22
 3   0.48    10    25   0.42    24
 4   0.47    10    26   0.40    24
 5   0.48    10    27   0.40    24
 6   0.47    10    28   0.41    26
 7   0.46    12    29   0.40    26
 8   0.46    12    30   0.41    26
 9   0.45    12    31   0.41    28
10   0.43    12    32   0.40    28
11   0.45    14    33   0.40    30
12   0.43    14    34   0.40    30
13   0.43    14    35   0.38    30
14   0.44    16    36   0.41    32
15   0.43    16    37   0.40    32
16   0.43    16    38   0.40    34
17   0.46    18    39   0.41    36
18   0.45    18    40   0.38    36
19   0.42    20    41   0.40    38
20   0.42    20    42   0.40    38
21   0.43    20    43   0.39    40
22   0.41    22    44   0.39    42
subject to the bounds
x1 ≥ 0.4,   x2 ≥ −4.0,
to the general linear constraint
x1 + x2 ≥ 1.0
and to the nonlinear constraint
0.49 x2 − x1 x2 ≥ 0.09.
The initial point, which is infeasible, is
x0 = (0.4, 0.0)^T
and F(x0) = 0.002241.
The optimal solution (to five figures) is
x* = (0.41995, 1.28484)^T,
and F(x*) = 0.01423. The nonlinear constraint is active at the solution.
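For reference, the data in the table above and the starting point might be set up as follows (a minimal C# sketch; the shipped example program e04use.cs reads these values from the data file e04use.d instead).

// Observations y_i and abscissae a_i, i = 1,...,44, from the table above.
double[] y = { 0.49, 0.49, 0.48, 0.47, 0.48, 0.47, 0.46, 0.46, 0.45, 0.43,
               0.45, 0.43, 0.43, 0.44, 0.43, 0.43, 0.46, 0.45, 0.42, 0.42,
               0.43, 0.41, 0.41, 0.40, 0.42, 0.40, 0.40, 0.41, 0.40, 0.41,
               0.41, 0.40, 0.40, 0.40, 0.38, 0.41, 0.40, 0.40, 0.41, 0.38,
               0.40, 0.40, 0.39, 0.39 };
double[] aData = { 8, 8, 10, 10, 10, 10, 12, 12, 12, 12,
                   14, 14, 14, 16, 16, 16, 18, 18, 20, 20,
                   20, 22, 22, 22, 24, 24, 24, 26, 26, 26,
                   28, 28, 30, 30, 30, 32, 32, 34, 36, 36,
                   38, 38, 40, 42 };
double[] x0 = { 0.4, 0.0 };   // infeasible starting point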

Example program (C#): e04use.cs

Example program data: e04use.d

Example program results: e04use.r

Algorithmic Details

e04us implements a sequential quadratic programming (SQP) method incorporating an augmented Lagrangian merit function and a BFGS (Broyden–Fletcher–Goldfarb–Shanno) quasi-Newton approximation to the Hessian of the Lagrangian, and is based on e04wd. The documents for e04nc, e04uf and e04wd should be consulted for details of the method.

Description of Monitoring Information

This section describes the long line of output (>80 characters) which forms part of the monitoring information produced by e04us. (See also the description of the optional parameters Major Print Level, Minor Print Level and Monitoring File.) You can control the level of printed output.
When Major Print Level ≥ 5 and Monitoring File ≥ 0, the following line of output is produced at every major iteration of e04us on the unit number specified by optional parameter Monitoring File. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Maj is the major iteration count.
Mnr is the number of minor iterations required by the feasibility and optimality phases of the QP subproblem. Generally, Mnr will be 1 in the later iterations, since theoretical analysis predicts that the correct active set will be identified near the solution (see [Algorithmic Details] in e04uf).
Note that Mnr may be greater than the optional parameter Minor Iteration Limit if some iterations are required for the feasibility phase.
Step is the step αk taken along the computed search direction. On reasonably well-behaved problems, the unit step (i.e., αk=1) will be taken as the solution is approached.
Nfun is the cumulative number of evaluations of the objective function needed for the linesearch. Evaluations needed for the estimation of the gradients by finite differences are not included. Nfun is printed as a guide to the amount of work required for the linesearch.
Merit Function is the value of the augmented Lagrangian merit function (see (12) in e04uf) at the current iterate. This function will decrease at each iteration unless it was necessary to increase the penalty parameters (see [] in e04uf). As the solution is approached, Merit Function will converge to the value of the objective function at the solution.
If the QP subproblem does not have a feasible point (signified by I at the end of the current output line) then the merit function is a large multiple of the constraint violations, weighted by the penalty parameters. During a sequence of major iterations with infeasible subproblems, the sequence of Merit Function values will decrease monotonically until either a feasible subproblem is obtained or e04us terminates with ifail=3 (no feasible point could be found for the nonlinear constraints).
If there are no nonlinear constraints present (i.e., ncnln=0) then this entry contains Objective, the value of the objective function Fx. The objective function will decrease monotonically to its optimal value when there are no nonlinear constraints.
Norm Gz is ‖Z^T gFR‖, the Euclidean norm of the projected gradient (see [] in e04uf). Norm Gz will be approximately zero in the neighbourhood of a solution.
Violtn is the Euclidean norm of the residuals of constraints that are violated or in the predicted active set (not printed if ncnln is zero). Violtn will be approximately zero in the neighbourhood of a solution.
Nz is the number of columns of Z (see [] in e04uf). The value of Nz is the number of variables minus the number of constraints in the predicted active set; i.e., Nz = n − (Bnd + Lin + Nln).
Bnd is the number of simple bound constraints in the current working set.
Lin is the number of general linear constraints in the current working set.
Nln is the number of nonlinear constraints in the predicted active set (not printed if ncnln is zero).
Penalty is the Euclidean norm of the vector of penalty parameters used in the augmented Lagrangian merit function (not printed if ncnln is zero).
Cond H is a lower bound on the condition number of the Hessian approximation H.
Cond Hz is a lower bound on the condition number of the projected Hessian approximation HZ (HZ = Z^T HFR Z = RZ^T RZ; see (6) and (11) in e04uf). The larger this number, the more difficult the problem.
Cond T is a lower bound on the condition number of the matrix of predicted active constraints.
Conv is a three-letter indication of the status of the three convergence tests (2)–(4) defined in the description of the optional parameter Optimality Tolerance. Each letter is T if the test is satisfied and F otherwise. The three tests indicate whether:
(i) the sequence of iterates has converged;
(ii) the projected gradient (Norm Gz) is sufficiently small; and
(iii) the norm of the residuals of constraints in the predicted active set (Violtn) is small enough.
If any of these indicators is F when e04us terminates with ifail=0, you should check the solution carefully.
M is printed if the quasi-Newton update has been modified to ensure that the Hessian approximation is positive definite (see [] in e04uf).
I is printed if the QP subproblem has no feasible point.
C is printed if central differences have been used to compute the unspecified objective and constraint gradients. If the value of Step is zero then the switch to central differences was made because no lower point could be found in the linesearch. (In this case, the QP subproblem is resolved with the central difference gradient and Jacobian.) If the value of Step is nonzero then central differences were computed because Norm Gz and Violtn imply that x is close to a Kuhn–Tucker point (see [] in e04uf).
L is printed if the linesearch has produced a relative change in x greater than the value defined by the optional parameter Step Limit. If this output occurs frequently during later iterations of the run, optional parameter Step Limit should be set to a larger value.
R is printed if the approximate Hessian has been refactorized. If the diagonal condition estimator of R indicates that the approximate Hessian is badly conditioned then the approximate Hessian is refactorized using column interchanges. If necessary, R is modified so that its diagonal condition estimator is bounded.

See Also