[[Category:Solvers]]
==Purpose==

Solve general constrained nonlinear optimization problems. ''conSolve'' solves problems of the form

<math>
\begin{array}{cccccc}
\min\limits_{x} & f(x) & & & & \\
s/t & x_{L} & \leq & x & \leq & x_{U} \\
 & b_{L} & \leq & Ax & \leq & b_{U} \\
 & c_{L} & \leq & c(x) & \leq & c_{U} \\
\end{array}
</math>


where <math>x,x_{L}</math>, <math>x_{U}\in \mathbb{R}^{n}</math>, <math>c(x)</math>, <math>c_{L}</math>, <math>c_{U}\in \mathbb{R}^{m_{1}}</math>, <math>A \in \mathbb{R}^{m_{2}\times n}</math> and <math>b_{L},b_{U}\in \mathbb{R}^{m_{2}}</math>.
 


==Calling Syntax==


<source lang="matlab">
Result = conSolve(Prob, varargin)
Result = tomRun('conSolve', Prob);
</source>
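A minimal end-to-end sketch (not part of the original reference): the problem is assembled with ''conAssign'', assuming the standard TOMLAB argument order, with the objective, gradient and nonlinear constraint given as function handles; unused arguments are left empty.

<source lang="matlab">
% Sketch: minimize (x1-2)^2 + (x2-1)^2  subject to  x1^2 + x2^2 <= 2.
f  = @(x) (x(1)-2)^2 + (x(2)-1)^2;      % objective f(x)
g  = @(x) [2*(x(1)-2); 2*(x(2)-1)];     % gradient g(x)
c  = @(x) x(1)^2 + x(2)^2;              % nonlinear constraint c(x)
dc = @(x) [2*x(1), 2*x(2)];             % constraint Jacobian dc(x)/dx

x_0 = [0; 0];  x_L = [-10; -10];  x_U = [10; 10];
c_L = -inf;    c_U = 2;                 % c_L <= c(x) <= c_U

% Argument order assumed from the standard conAssign assembly routine:
Prob = conAssign(f, g, [], [], x_L, x_U, 'conSolveDemo', x_0, ...
                 [], [], [], [], [], c, dc, [], [], c_L, c_U);

Result = tomRun('conSolve', Prob, 1);   % solve with print level 1
</source>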
==Inputs==

{| class="wikitable"
!''Prob''
!colspan="2" | Problem description structure. The following fields are used:
|-valign="top"
|||''Solver.Alg''||Choice of algorithm. Also affects how derivatives are obtained. See following fields.
|-
|||||0,1,2: Schittkowski SQP.
|-
|||||3,4: Han-Powell SQP.
|-
|||''x_0''||Starting point.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''A''||Constraint matrix for linear constraints.
|-
|||''c_L''||Lower bounds on the general constraints.
|-
|||''c_U''||Upper bounds on the general constraints.
|-
|||''NumDiff''||How to obtain derivatives (gradient, Hessian).
|-
|||''ConsDiff''||How to obtain the constraint derivative matrix.
|-valign="top"
|||''SolverQP''||Name of the solver used for QP subproblems. If empty, the default solver is used. See GetSolver.m and tomSolve.m.
|-
|||''f_Low''||A lower bound on the optimal function value, see LineParam.fLowBnd below.
|-
||||||Used in convergence tests, f_k(x_k) <= f_Low. Only a feasible point x_k is accepted.
|-
|||''FUNCS.f''||Name of m-file computing the objective function ''f''(''x'').
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector ''g''(''x'').
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix ''H''(''x'').
|-
|||''FUNCS.c''||Name of m-file computing the vector of constraint functions ''c''(''x'').
|-
|||''FUNCS.dc''||Name of m-file computing the matrix of constraint normals &part;''c''(''x'')/&part;''x''.
|-
|||''PriLevOpt''||Print level.
|-valign="top"
||||''optParam''||Structure with optimization parameters, see [[TOMLAB Appendix A]]. Fields that are used: ''bTol'', ''cTol'', ''eps_absf'', ''eps_g'', ''eps_x'', ''eps_Rank'', ''IterPrint'', ''MaxIter'', ''QN_InitMatrix'', ''size_f'', ''size_x'', ''xTol'' and ''wait''.
|-
|||''LineParam''||Structure with line search parameters. See [[TOMLAB Appendix A]].
|-valign="top"
|||''fLowBnd''||A lower bound on the global optimum of f(x). The user might also give a lower bound estimate in ''Prob.f_Low''. conSolve computes LineParam.fLowBnd as: LineParam.fLowBnd = max(Prob.f_Low, Prob.LineParam.fLowBnd).
|-
|''varargin''|| colspan="2" | Other parameters directly sent to low level routines.
|}
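Most of these fields have defaults, so a typical call only overrides a few of them. A small sketch using the field names documented above (the choice of ''qpSolve'' as QP subsolver is only an example):

<source lang="matlab">
Prob.Solver.Alg       = 0;          % Schittkowski SQP
Prob.SolverQP         = 'qpSolve';  % solver for the QP subproblems
Prob.optParam.MaxIter = 500;        % iteration limit
Prob.optParam.eps_x   = 1e-8;       % convergence tolerance on x
Prob.PriLevOpt        = 1;          % print level
</source>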


==Outputs==

{| class="wikitable"
!''Result''
!colspan="2"|Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''c_k''||Value of constraints at optimum.
|-
|||''cJac''||Constraint Jacobian at optimum.
|-
|||''xState''||State of each variable, described in [[TOMLAB Appendix B]].
|-
|||''bState''||State of each linear constraint, described in [[TOMLAB Appendix B]].
|-
|||''cState''||State of each nonlinear constraint.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||Flag giving exit status.
|-
|||''ExitText''||Text string giving ''ExitFlag'' and ''Inform'' information.
|-valign="top"
|||''Inform''||Code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Small search direction.
|-
|||||3: Iteration points are close and small search direction.
|-
|||||4: Gradient of merit function small.
|-
|||||5: Iteration points are close and gradient of merit function small.
|-
|||||6: Small search direction and gradient of merit function small.
|-
|||||7: Iteration points are close, small search direction and gradient of merit function small.
|-
|||||8: Small search direction ''p'' and constraints satisfied.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||103: Iteration points are close, but constraints not fulfilled. Penalty weights are too large to be able to continue. The problem may be infeasible.
|-
|||||104: Search direction is zero and constraints are infeasible. The problem is very likely infeasible.
|-
|||||105: Merit function is infinity.
|-
|||||106: Penalty weights too high.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
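The ''ExitFlag'' and ''Inform'' fields can be used to act on the outcome programmatically. A sketch, assuming the usual TOMLAB convention that ''ExitFlag'' = 0 signals a successful run:

<source lang="matlab">
Result = tomRun('conSolve', Prob);      % Prob assembled beforehand

if Result.ExitFlag == 0
    fprintf('Converged, Inform = %d: %s\n', Result.Inform, Result.ExitText);
    x_opt = Result.x_k;                 % optimal point
    f_opt = Result.f_k;                 % function value at optimum
elseif any(Result.Inform == [103 104])
    disp('Constraints could not be satisfied; the problem may be infeasible.');
end
</source>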
 
==Description==

The routine ''conSolve'' implements two SQP algorithms for general constrained minimization problems. One implementation, ''Solver.Alg'' = 0, 1, 2, is based on the SQP algorithm by Schittkowski with an Augmented Lagrangian merit function. The other, ''Solver.Alg'' = 3, 4, is an implementation of the Han-Powell SQP method.

The Hessian in the QP subproblems is determined in one of several ways, depending on the input parameters. The following table shows how the algorithm and Hessian method are selected.

{| class="wikitable"
!Solver.Alg
!NumDiff
!AutoDiff
!isempty(FUNCS.H)
!Hessian computation
!Algorithm
|-
|0||0||0||0||Analytic Hessian||Schittkowski SQP
|-
|0||any||any||any||BFGS||Schittkowski SQP
|-
|1||0||0||0||Analytic Hessian||Schittkowski SQP
|-
|1||0||0||1||Numerical differences H||Schittkowski SQP
|-
|1|| > 0||0||any||Numerical differences g,H||Schittkowski SQP
|-
|1|| < 0||0||any||Numerical differences H||Schittkowski SQP
|-
|1||any||1||any||Automatic differentiation||Schittkowski SQP
|-
|2||0||0||any||BFGS||Schittkowski SQP
|-
|2|| &ne; 0||0||any||BFGS, numerical gradient g||Schittkowski SQP
|-
|2||any||1||any||BFGS, automatic diff gradient||Schittkowski SQP
|-
|3||0||0||0||Analytic Hessian||Han-Powell SQP
|-
|3||0||0||1||Numerical differences H||Han-Powell SQP
|-
|3|| > 0||0||any||Numerical differences g,H||Han-Powell SQP
|-
|3|| < 0||0||any||Numerical differences H||Han-Powell SQP
|-
|3||any||1||any||Automatic differentiation||Han-Powell SQP
|-
|4||0||0||any||BFGS||Han-Powell SQP
|-
|4|| &ne; 0||0||any||BFGS, numerical gradient g||Han-Powell SQP
|-
|4||any||1||any||BFGS, automatic diff gradient||Han-Powell SQP
|}
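For instance, to use the BFGS quasi-Newton variant of the Schittkowski SQP with an analytically supplied gradient (the ''Solver.Alg'' = 2, ''NumDiff'' = 0 row above), only the gradient m-file or handle needs to be provided:

<source lang="matlab">
Prob.Solver.Alg = 2;    % Schittkowski SQP with BFGS Hessian approximation
Prob.NumDiff    = 0;    % gradient supplied analytically via Prob.FUNCS.g
Result = tomRun('conSolve', Prob);
</source>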
==M-files Used==

''ResultDef.m'', ''tomSolve.m'', ''LineSearch.m'', ''iniSolve.m'', ''endSolve.m'', ''ProbCheck.m''.

==See Also==


[[nlpSolve|''nlpSolve'']], [[sTrustr|''sTrustr'']]

''Latest revision as of 08:09, 10 January 2012''
