ConSolve

[[Category:Solvers]]
==Purpose==

Solve general constrained nonlinear optimization problems.

''conSolve'' solves problems of the form

<math>
\begin{array}{cccccc}
\min\limits_{x} & f(x) & & & & \\
\mathrm{s.t.} & x_{L} & \leq & x & \leq & x_{U} \\
 & b_{L} & \leq & Ax & \leq & b_{U} \\
 & c_{L} & \leq & c(x) & \leq & c_{U} \\
\end{array}
</math>

where <math>x, x_{L}, x_{U} \in \mathbb{R}^{n}</math>, <math>c(x), c_{L}, c_{U} \in \mathbb{R}^{m_{1}}</math>, <math>A \in \mathbb{R}^{m_{2}\times n}</math> and <math>b_{L}, b_{U} \in \mathbb{R}^{m_{2}}</math>.
==Calling Syntax==

<syntaxhighlight lang="matlab">
Result = conSolve(Prob, varargin)
Result = tomRun('conSolve', Prob);
</syntaxhighlight>
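As a usage sketch (assuming a standard TOMLAB installation; ''conAssign'' and ''tomRun'' are TOMLAB base-module routines, and the problem data below are purely illustrative), a small nonlinearly constrained problem can be set up and passed to conSolve as follows:

```matlab
% Minimize x1^2 + x2^2 subject to the linear constraint x1 + x2 >= 1
% and the nonlinear constraint x1*x2 >= 0.1.
f  = @(x) x(1)^2 + x(2)^2;        % objective f(x)
g  = @(x) [2*x(1); 2*x(2)];       % gradient g(x)
c  = @(x) x(1)*x(2);              % nonlinear constraint c(x)
dc = @(x) [x(2), x(1)];           % constraint Jacobian, one row per constraint

x_L = [-10; -10];   x_U = [10; 10];   % variable bounds
A   = [1 1];  b_L = 1;    b_U = inf;  % linear constraint bounds
c_L = 0.1;    c_U = inf;              % nonlinear constraint bounds
x_0 = [0.5; 0.5];                     % starting point

% conAssign builds the Prob structure in the standard TOMLAB format;
% empty arguments ([]) skip the Hessian, patterns and second derivatives.
Prob = conAssign(f, g, [], [], x_L, x_U, 'conDemo', x_0, [], 0, ...
                 A, b_L, b_U, c, dc, [], [], c_L, c_U);

Result = tomRun('conSolve', Prob, 1);   % third argument is the print level
```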
==Inputs==
{| class="wikitable"
!''Prob''
!colspan="2" | Problem description structure. The following fields are used:
|-valign="top"
|||''Solver.Alg''||Choice of algorithm. Also affects how derivatives are obtained. See the fields below and the algorithm selection table in the Description section.
|-
|||||0,1,2: Schittkowski SQP.
|-
|||||3,4: Han-Powell SQP.
|-
|||''x_0''||Starting point.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''A''||Constraint matrix for linear constraints.
|-
|||''c_L''||Lower bounds on the general constraints.
|-
|||''c_U''||Upper bounds on the general constraints.
|-
|||''NumDiff''||How to obtain derivatives (gradient, Hessian).
|-
|||''ConsDiff''||How to obtain the constraint derivative matrix.
|-valign="top"
|||''SolverQP''||Name of the solver used for QP subproblems. If empty, the default solver is used. See GetSolver.m and tomSolve.m.
|-valign="top"
|||''f_Low''||A lower bound on the optimal function value, see ''LineParam.fLowBnd'' below. Used in convergence tests, f_k(x_k) <= f_Low. Only a feasible point x_k is accepted.
|-
|||''FUNCS.f''||Name of m-file computing the objective function f(x).
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector g(x).
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix H(x).
|-
|||''FUNCS.c''||Name of m-file computing the vector of constraint functions c(x).
|-
|||''FUNCS.dc''||Name of m-file computing the matrix of constraint normals <math>\partial c(x)/\partial x</math>.
|-
|||''PriLevOpt''||Print level.
|-valign="top"
|||''optParam''||Structure with optimization parameters, see Table 141. Fields that are used: bTol, cTol, eps_absf, eps_g, eps_x, eps_Rank, IterPrint, MaxIter, QN_InitMatrix, size_f, size_x, xTol and wait.
|-valign="top"
|||''LineParam''||Structure with line search parameters. See Table 140.
|-valign="top"
|||||''fLowBnd'': A lower bound on the global optimum of f(x). The user might also give a lower bound estimate in Prob.f_Low. conSolve computes LineParam.fLowBnd as: LineParam.fLowBnd = max(Prob.f_Low, Prob.LineParam.fLowBnd).
|-valign="top"
|''varargin''||colspan="2"|Other parameters sent directly to low level routines.
|}
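The ''optParam'' fields listed above are set directly on the Prob structure before calling the solver. For example (a sketch using the tolerance fields named in the table; the values are illustrative):

```matlab
Prob.optParam.MaxIter = 500;    % maximum number of iterations
Prob.optParam.eps_g   = 1e-7;   % gradient convergence tolerance
Prob.optParam.eps_x   = 1e-8;   % convergence tolerance on x
Prob.optParam.cTol    = 1e-6;   % nonlinear constraint feasibility tolerance
Prob.optParam.bTol    = 1e-7;   % linear constraint feasibility tolerance
```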


==Outputs==

{| class="wikitable"
!''Result''
!colspan="2"|Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''c_k''||Value of constraints at optimum.
|-
|||''cJac''||Constraint Jacobian at optimum.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''bState''||State of each linear constraint, described in Table 151.
|-
|||''cState''||State of each nonlinear constraint.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||Flag giving exit status.
|-
|||''ExitText''||Text string giving ''ExitFlag'' and ''Inform'' information.
|-valign="top"
|||''Inform''||Code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Small search direction.
|-
|||||3: Iteration points are close and small search direction.
|-
|||||4: Gradient of merit function small.
|-
|||||5: Iteration points are close and gradient of merit function small.
|-
|||||6: Small search direction and gradient of merit function small.
|-
|||||7: Iteration points are close, small search direction and gradient of merit function small.
|-
|||||8: Small search direction ''p'' and constraints satisfied.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||103: Iteration points are close, but constraints not fulfilled. Penalty weights too large to be able to continue. The problem may be infeasible.
|-
|||||104: Search direction is zero and constraints infeasible. The problem is very likely infeasible.
|-
|||||105: Merit function is infinity.
|-
|||||106: Penalty weights too high.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
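A sketch of inspecting the returned structure (field names as in the table above; Inform codes 1-8 indicate convergence, 101-106 indicate a stop without the convergence tests being satisfied):

```matlab
if Result.Inform <= 8
    fprintf('Converged after %d iterations: f(x*) = %g\n', ...
            Result.Iter, Result.f_k);
    disp(Result.x_k');            % optimal point as a row vector
else
    fprintf('Stopped, Inform = %d: %s\n', Result.Inform, Result.ExitText);
end
```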
==Description==

The routine ''conSolve'' implements two SQP algorithms for general constrained minimization problems. One implementation, ''Solver.Alg'' = 0, 1, 2, is based on the SQP algorithm by Schittkowski with an Augmented Lagrangian merit function. The other, ''Solver.Alg'' = 3, 4, is an implementation of the Han-Powell SQP method.

The Hessian in the QP subproblems is determined in one of several ways, depending on the input parameters. The following table shows how the algorithm and Hessian method are selected.

{| class="wikitable"
!Solver.Alg
!NumDiff
!AutoDiff
!isempty(FUNCS.H)
!Hessian computation
!Algorithm
|-
|0||0||0||0||Analytic Hessian||Schittkowski SQP
|-
|0||any||any||any||BFGS||Schittkowski SQP
|-
|1||0||0||0||Analytic Hessian||Schittkowski SQP
|-
|1||0||0||1||Numerical differences H||Schittkowski SQP
|-
|1||> 0||0||any||Numerical differences g, H||Schittkowski SQP
|-
|1||< 0||0||any||Numerical differences H||Schittkowski SQP
|-
|1||any||1||any||Automatic differentiation||Schittkowski SQP
|-
|2||0||0||any||BFGS||Schittkowski SQP
|-
|2||&ne; 0||0||any||BFGS, numerical gradient g||Schittkowski SQP
|-
|2||any||1||any||BFGS, automatic diff gradient||Schittkowski SQP
|-
|3||0||0||0||Analytic Hessian||Han-Powell SQP
|-
|3||0||0||1||Numerical differences H||Han-Powell SQP
|-
|3||> 0||0||any||Numerical differences g, H||Han-Powell SQP
|-
|3||< 0||0||any||Numerical differences H||Han-Powell SQP
|-
|3||any||1||any||Automatic differentiation||Han-Powell SQP
|-
|4||0||0||any||BFGS||Han-Powell SQP
|-
|4||&ne; 0||0||any||BFGS, numerical gradient g||Han-Powell SQP
|-
|4||any||1||any||BFGS, automatic diff gradient||Han-Powell SQP
|}
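For example, to request the Han-Powell SQP with a BFGS Hessian approximation and a numerically differenced gradient (the ''Solver.Alg'' = 4, ''NumDiff'' &ne; 0 row of the table), the relevant fields would be set as follows (a sketch; ''qpSolve'' is one of the TOMLAB QP solvers):

```matlab
Prob.Solver.Alg = 4;          % Han-Powell SQP with BFGS update
Prob.NumDiff    = 1;          % numerical-difference gradient
Prob.SolverQP   = 'qpSolve';  % QP subproblem solver (optional; default used if empty)
Result = tomRun('conSolve', Prob, 1);
```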
==M-files Used==

ResultDef.m, tomSolve.m, LineSearch.m, iniSolve.m, endSolve.m, ProbCheck.m

==See Also==

nlpSolve, sTrustr
''Revision as of 14:19, 17 July 2011.''