TOMLAB Solver Reference
{{Part Of Manual|title=the TOMLAB Manual|link=[[TOMLAB|TOMLAB Manual]]}}
[[Category:Manuals]]

Detailed descriptions of the TOMLAB solvers, driver routines and some utilities are given in the following sections. Also see the M-file help for each solver. All solvers except those in the TOMLAB Base Module are described in separate manuals.


*[[MilpSolve]]


====minlpSolve====

Branch & Bound algorithm for Mixed-Integer Nonlinear Programming (MINLP) with convex or nonconvex subproblems, using NLP relaxation (formulated as minlp-IP).
*[[multiMINLP]]


====nlpSolve====

Solve general constrained nonlinear optimization problems.

*[[nlpSolve]]


====pdcoTL====
 
'''Purpose'''
 
''pdcoTL'' solves linearly constrained convex nonlinear optimization problems of the kind
 
 
<math>
\begin{array}{cccccc}
\min\limits_x & f(x) \\
\mathrm{s/t} & x_L & \leq & x  & \leq & x_U \\
{}          & b_L & \leq & Ax & \leq & b_U \\
\end{array}
</math>
 
 
where <math>f(x)</math> is a convex nonlinear function, <math>x,x_L,x_U \in \mathbb{R}^n</math>, <math>A\in \mathbb{R}^{m \times n}</math> and <math>b_L, b_U \in \mathbb{R}^m</math>.
 
'''Calling  Syntax'''
 
Result=tomRun('pdco',Prob,...);
 
'''Description  of Inputs'''
 
{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''x_0''||Initial ''x ''vector, used if non-empty.
|-
|||''A''||The linear constraint matrix.
|-
|||''b_L,b_U''||Lower and upper bounds for the linear constraints.
|-
|||''PriLevOpt''||Print level in the ''pdco'' solver. If > 0: prints summary information.
|-
|||''SOL''||Structure with SOL special parameters:
|-
|||''pdco''||Options structure with fields as defined  by ''pdcoSet''.
|-valign="top"
|||''d1''||Primal regularization vector. Must be a positive vector (length ''n'') or a scalar, in which case ''D''<sub>1</sub> = ''diag''(''d''<sub>1</sub>) is used. Default: 10<sup>-4</sup>.
|-valign="top"
|||''d2''||Dual regularization vector. Must be a positive vector (length ''m'') or a scalar, in which case ''D''<sub>2</sub> = ''diag''(''d''<sub>2</sub>) is used. Default: 10<sup>-4</sup>.
|-
|||''y0''||Initial dual parameters for linear constraints (default 0).
|-valign="top"
|||''z0''||Initial dual parameters for simple bounds (default 1''/N'').
|-
|||||''xsize'', ''zsize'' are used to scale (''x, y, z''). Good estimates should improve the performance of the barrier method.
|-
|||''xsize''||Estimate of the biggest x at the solution. (default 1''/N '')
|-
|||''zsize''||Estimate of the biggest z at the solution. (default 1''/N '')
|-
|||''optParam''||Structure with optimization parameters. The following fields are used:
|-
|||''MaxIter''||Maximum number of iterations. (''Prob.SOL.pdco.MaxIter'').
|-valign="top"
|||''MinorIter''||Maximum number of iterations in ''LSQR'' (''Prob.SOL.pdco.LSQRMaxIter'').
|-
|||''eps_x''||Accuracy for satisfying ''x''<sub>1</sub> .* ''z''<sub>1</sub> = 0, ''x''<sub>2</sub> .* ''z''<sub>2</sub> = 0, where ''z'' = ''z''<sub>1</sub> - ''z''<sub>2</sub> and ''z''<sub>1</sub>, ''z''<sub>2</sub> > 0. (''Prob.SOL.pdco.OptTol'')
|-valign="top"
|||''bTol''||Accuracy for satisfying ''Ax'' + ''D''<sub>2</sub>''r'' = ''b'', ''A''<sup>T</sup>''y'' + ''z'' = ∇''f''(''x'') and ''x'' - ''x''<sub>1</sub> = ''b''<sub>L</sub>, ''x'' + ''x''<sub>2</sub> = ''b''<sub>U</sub>, where ''x''<sub>1</sub>, ''x''<sub>2</sub> > 0. (''Prob.SOL.pdco.FeaTol'')
|-
|||''wait''||0 - solve the problem with default internal parameters; 1 - pause: allows interactive resetting of parameters. (''Prob.SOL.pdco.wait'')
|}
 
'''Description  of Outputs'''
 
{|
|''Result''||colspan="2"|Structure with result from optimization. The following fields are set by ''pdcoTL''
|-
|||''x_k''||Solution vector
|-
|||''f_k''||Function value at optimum
|-
|||''g_k''||Gradient of the function at the solution
|-
|||''H_k''||Hessian of the function at the solution, diagonal only
|-
|||''x_0''||Initial  solution vector
|-
|||''f_0''||Function value at start, x = x_0
|-
|||''xState''||State of variables. Free == 0; On lower == 1; On upper == 2; Fixed == 3;
|-valign="top"
|||''bState''||State of linear constraints.  Free == 0; Lower == 1; Upper == 2; Equality == 3;
|-
|||''v_k''||Lagrangian multipliers (original bounds + constraints)
|-valign="top"
|||''y_k''||Lagrangian multipliers (for bounds + dual solution vector). The full [''z''; ''y''] vector as returned from ''pdco'', including slacks and extra linear constraints after rewriting the constraints: ''-inf < b_L < A*x < b_U < inf''; non-inf lower AND upper bounds.
|-
|||''ExitFlag''||Tomlab Exit status from ''pdco ''MEX
|-
|||''Inform''||''pdco'' information parameter:
|-
|||0||Solution found
|-
|||1||Too many iterations
|-
|||2||Linesearch failed too often
|-
|||''Iter''||Number of iterations
|-
|||''FuncEv''||Number of function evaluations
|-
|||''GradEv''||Number of gradient evaluations
|-
|||''HessEv''||Number of Hessian evaluations
|-
|||''Solver''||Name of the solver ('pdco')
|-
|||''SolverAlgorithm''||Description of the solver
|}
 
'''Description'''
 
''pdco'' implements a primal-dual barrier method developed at the Stanford Systems Optimization Laboratory (SOL).
 
The problem (19) is first reformulated into SOL PDCO form:
 
 
<math>
\begin{array}{cccccc}
\min\limits_x & f(x) \\
\mathrm{s/t} & x_L & \leq & x  & \leq & x_U \\
{}          &    &      & Ax &  =  & b \\
{}          & \multicolumn{5}{l}{r \mathrm{\ unconstrained}} \\
\end{array}
</math>
 
 
The problem actually solved by ''pdco ''is
 
 
<math>
\begin{array}{lllcll}
\min\limits_{x,r} &
\multicolumn{5}{l}{\phi(x) + \frac{1}{2}\|D_1 x\|^2 + \frac{1}{2}\|r\|^2 } \\
\\
\mathrm{s/t}  & x_L & \leq & x      & \leq & x_U  \\
{}            & Ax  &  +  & D_{2}r &  =  & b    \\
\end{array}
</math>
 
 
where <math>D_1</math> and <math>D_2</math> are positive-definite diagonal matrices defined from <math>d_1</math>, <math>d_2</math> given in Prob.SOL.d1 and Prob.SOL.d2.
 
In particular, <math>d_2</math> indicates the accuracy required for satisfying each row of <math>Ax=b</math>. See ''pdco'' for a detailed discussion of <math>D_1</math> and <math>D_2</math>. Note that in ''pdco'', the objective <math>f(x)</math> is denoted <math>\phi(x)</math>, <math>bl == x\_L</math> and <math>bu == x\_U</math>.
 
'''Examples'''
 
Problems 14 and 15 in ''tomlab/testprob/con_prob.m'' are good examples of the use of ''pdcoTL''.
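A minimal calling sketch, assuming the problem is built with the standard TOMLAB ''conAssign'' routine; ''pdex_f'', ''pdex_g'', ''pdex_H'' and all numeric data below are placeholders, not part of the manual:

```matlab
% Hedged sketch, not a definitive example: pdex_f/pdex_g/pdex_H are
% hypothetical m-files for the objective, gradient and (diagonal) Hessian.
x_L = zeros(3,1); x_U = 10*ones(3,1); x_0 = ones(3,1);
A = [1 1 1]; b_L = 1; b_U = 3;               % one linear constraint row
Prob = conAssign('pdex_f','pdex_g','pdex_H',[],x_L,x_U,'pdcoExample',x_0,...
                 [],[],A,b_L,b_U);
Prob.SOL.d1 = 1e-4;                          % primal regularization (default)
Prob.SOL.d2 = 1e-4;                          % dual regularization (default)
Result = tomRun('pdco',Prob,1);              % print level 1: summary output
x = Result.x_k; f = Result.f_k;
```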
 
'''M-files  Used'''
 
''pdcoSet.m'', ''pdco.m'', ''Tlsqrmat.m''
 
'''See Also'''
 
''pdscoTL.m''
 
====pdscoTL====
 
'''Purpose'''
 
''pdscoTL'' solves linearly constrained convex nonlinear optimization problems of the kind
 
 
<math>
\begin{array}{cccccc}
\min\limits_x & f(x) \\
\mathrm{s/t} & x_L & \leq & x  & \leq & x_U \\
{}          & b_L & \leq & Ax & \leq & b_U \\
\end{array}
</math>
 
where <math>f(x)</math> is a convex separable nonlinear function, <math>x,x_L,x_U \in \mathbb{R}^n</math>, <math>A\in \mathbb{R}^{m \times n}</math> and <math>b_L, b_U \in \mathbb{R}^m</math>.
 
'''Calling  Syntax'''
 
Result=tomRun('pdsco',Prob,...);


'''Description  of Inputs'''
{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''x_0''||Initial ''x ''vector, used if non-empty.
|-
|||''A''||The linear constraints coefficient matrix.
|-
|||''b_L,b_U''||Lower and upper bounds for the linear constraints.
|-valign="top"
|||''HessPattern''||Non-zero pattern for the objective function. Only the diagonal is needed. Default if empty is the unit matrix.
|-
|||''PriLevOpt''||Print level in the ''pdsco'' solver. If > 0: prints summary information.
|-
|||''SOL''||Structure with SOL special parameters:
|-
|||''pdco''||Options structure with fields as defined  by ''pdscoSet''.
|-
|||''gamma''||Primal regularization parameter.
|-
|||''delta''||Dual regularization parameter.
|-
|||''y0''||Initial  dual parameters for linear constraints (default 0)
|-
|||''z0''||Initial  dual parameters for simple bounds (default 1''/N '')
|-
|||||''xsize'',''zsize ''are used to scale (''x, y, z''). Good estimates should improve the performance of the barrier method.
|-
|||''xsize''||Estimate of the biggest ''x ''at the solution. (default 1''/N '')
|-
|||''zsize''||Estimate of the biggest ''z ''at the solution. (default 1''/N '')
|-
|||''optParam''||Structure with optimization parameters. The following fields are used:
|-
|||''MaxIter''||Maximum number of iterations. (''Prob.SOL.pdco.MaxIter'').
|-
|||''MinorIter''||Maximum number of iterations in ''LSQR ''(''Prob.SOL.pdco.LSQRMaxIter'').
|-
|||''eps_x''||Accuracy for satisfying ''x'' .* ''z'' = 0.
|-valign="top"
|||''bTol''||Accuracy for satisfying ''Ax'' + ''r'' = ''b'', ''A''<sup>T</sup>''y'' + ''z'' = ∇''f''(''x'') and ''x'' - ''x''<sub>1</sub> = ''b''<sub>L</sub>, ''x'' + ''x''<sub>2</sub> = ''b''<sub>U</sub>, where ''x''<sub>1</sub>, ''x''<sub>2</sub> > 0. (''Prob.SOL.pdco.FeaTol'')
|-valign="top"
|||''wait''||0 - solve the problem with default internal parameters; 1 - pause: allows interactive resetting of parameters. (''Prob.SOL.pdco.wait'')
|}


'''Description  of Outputs'''

{|
|''Result''||colspan="2"|Structure with result from optimization. The following fields are set by ''pdscoTL'':
|-
|||''x_k''||Solution vector
|-
|||''f_k''||Function value at optimum
|-
|||''g_k''||Gradient of the function at the solution
|-
|||''H_k''||Hessian of the function at the solution, diagonal only
|-
|||''x_0''||Initial  solution vector
|-
|||''f_0''||Function value at start, x = x_0
|-
|||''xState''||State of variables. Free == 0; On lower == 1; On upper == 2; Fixed == 3;
|-
|||''bState''||State of linear constraints.  Free == 0; Lower == 1; Upper == 2; Equality == 3;
|-
|||''v_k''||Lagrangian multipliers (original bounds + constraints)
|-valign="top"
|||''y_k''||Lagrangian multipliers (for bounds + dual solution vector). The full [''z''; ''y''] vector as returned from ''pdsco'', including slacks and extra linear constraints after rewriting the constraints: ''-inf < b_L < A*x < b_U < inf''; non-inf lower AND upper bounds.
|-
|||''ExitFlag''||Tomlab Exit status from ''pdsco ''MEX
|-
|||''Inform''||''pdsco'' information parameter:
|-
|||0||Solution found
|-
|||1||Too many iterations
|-
|||2||Linesearch failed too often
|-
|||''Iter''||Number of iterations
|-
|||''FuncEv''||Number of function evaluations
|-
|||''GradEv''||Number of gradient evaluations
|-
|||''HessEv''||Number of Hessian evaluations
|-
|||''Solver''||Name of the solver ('pdsco')
|-
|||''SolverAlgorithm''||Description of the solver
|}


'''Description'''

''pdsco'' implements a primal-dual barrier method developed at the Stanford Systems Optimization Laboratory (SOL). The problem (20) is first reformulated into SOL PDSCO form:


<math>
\begin{array}{clll}
\min\limits_x  & f(x) \\
\mathrm{s/t} & x  & \geq & 0 \\
{}          & Ax &  =  & b. \\
\end{array}
</math>
 
 
The problem actually solved by ''pdsco ''is
 
 
<math>
\begin{array}{lllcll}
\min\limits_{x,r} &
\multicolumn{5}{l}{f(x) +
\frac{1}{2}\|\gamma x\|^2 + \frac{1}{2}\|r / \delta \|^2 } \\
\\
\mathrm{s/t}  &    &      & x & \geq & 0  \\
{}            & Ax  &  +  & r &  =  & b  \\
{}            & \multicolumn{5}{l}{r \mathrm{\ unconstrained}} \\
\end{array}
</math>
 
 
where <math>\gamma</math> is the primal regularization parameter, typically small but 0 is allowed. Furthermore, <math>\delta</math> is the dual regularization parameter, typically small or 1; must be strictly greater than zero.
 
With positive <math>\gamma,\delta</math> the primal-dual solution <math>(x,y,z)</math> is bounded and unique.
 
See ''pdsco'' for a detailed discussion of <math>\gamma</math> and <math>\delta</math>. Note that in ''pdsco'', the objective <math>f(x)</math> is denoted <math>\phi(x)</math>, <math>bl == x\_L</math> and <math>bu == x\_U</math>.
 
'''Examples'''
 
Problems 14 and 15 in ''tomlab/testprob/con_prob.m'' are good examples of the use of ''pdscoTL''.
 
'''M-files  Used'''
 
''pdscoSet.m'', ''pdsco.m'', ''Tlsqrmat.m''
 
'''See Also'''
 
''pdcoTL.m''
 
====qpSolve====
 
'''Purpose'''

Solve general quadratic programming problems.

''qpSolve'' solves problems of the form


<math>
\begin{array}{cccccl}
\min\limits_{x} & f(x) & = & \frac{1}{2}x^{T}Fx + c^{T}x &  &  \\
\mathrm{s/t} & x_{L} & \leq  & x  & \leq  & x_{U} \\
{} & b_{L} & \leq  & Ax  & \leq  & b_{U} \\
\end{array}
</math>
 
 
where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>, <math>F\in \mathbb{R}^{n\times n}</math>, <math>c \in \mathbb{R}^{n}</math>, <math>A\in \mathbb{R}^{m\times n}</math> and <math>b_{L},b_{U}\in \mathbb{R}^{m}</math>.
 
'''Calling  Syntax'''
 
Result = qpSolve(Prob) or

Result = tomRun('qpSolve', Prob, 1);
 
'''Description  of Inputs'''
 
{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''QP.F''||Constant matrix, the Hessian.
|-
|||''QP.c''||Constant vector.
|-
|||''A''||Constraint matrix for linear constraints.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-valign="top"
|||''optParam''||Structure with special fields for optimization parameters, see Table 141. Fields used are: ''eps_f'', ''eps_Rank'', ''MaxIter'', ''wait'', ''bTol'' and ''PriLev''.
|}
 
'''Description  of Outputs'''
 
{|
|''Result''||colspan="2"|Structure with result from optimization.  The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||0: OK, see ''Inform ''for type of convergence.
|-
|||||2: Cannot find a feasible starting point ''x_0''.
|-
|||||3: Rank problems. Cannot find any solution point.
|-
|||||4: Unbounded solution.
|-
|||||10: Errors in input parameters.
|-valign="top"
|||''Inform''||If ''ExitFlag'' > 0, ''Inform'' = ''ExitFlag''; otherwise ''Inform'' shows the type of convergence:
|-
|||||0: Unconstrained solution.
|-
|||||1: ''λ'' ≥ 0.
|-
|||||2: ''λ'' ≥ 0. No second order Lagrange mult. estimate available.
|-
|||||3: ''? ''and 2nd order Lagr. mult.  positive, problem is not negative definite.
|-
|||||4: Negative definite problem. 2nd order Lagr.  mult. positive, but releasing variables leads to the same working set.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
 
'''Description'''
 
Implements an active set strategy for Quadratic Programming. For negative definite problems it computes eigenvalues and uses directions of negative curvature to proceed. To find an initial feasible point, the Phase 1 LP problem is solved by calling ''lpSimplex''. The routine is the standard QP solver used by ''nlpSolve'', ''sTrustr'' and ''conSolve''.
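A minimal calling sketch, assuming the standard TOMLAB ''qpAssign'' routine is used to build the ''Prob'' structure; all numeric data below are made-up placeholders:

```matlab
% Hedged sketch, not a definitive example.
F = [2 0; 0 2]; c = [-2; -4];          % QP.F (Hessian) and QP.c
A = [1 1]; b_L = -inf; b_U = 3;        % one linear constraint row
x_L = [0; 0]; x_U = [10; 10]; x_0 = [0; 0];
Prob = qpAssign(F, c, A, b_L, b_U, x_L, x_U, x_0, 'qpExample');
Result = tomRun('qpSolve', Prob, 1);   % print level 1: summary output
x = Result.x_k;                        % optimal point
```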
 
'''M-files  Used'''
 
''ResultDef.m'', ''lpSimplex.m'', ''tomSolve.m'', ''iniSolve.m'', ''endSolve.m''
 
'''See Also'''
 
''qpBiggs'', ''qpe'', ''qplm'', ''nlpSolve'', ''sTrustr ''and ''conSolve''
 
====slsSolve====
 
'''Purpose'''

Find a Sparse Least Squares (sls) solution to a constrained least squares problem with the use of any suitable TOMLAB NLP solver.

''slsSolve'' solves problems of the type:
 
 
<math>
\begin{array}{cccccc}
\min\limits_x & \frac{1}{2}r(x)^T r(x)\\
\mbox{subject to} & x_L & \leq &    x & \leq & x_U \\
{}                & b_L & \leq &  Ax & \leq & b_U \\
{}                & c_L & \leq & c(x) & \leq & c_U \\
\end{array}
</math>


where <math>x,x_L,x_U \in \mathbb{R}^{n}</math>, <math>r(x) \in \mathbb{R}^{m}</math>, <math>A \in \mathbb{R}^{m_1 \times n}</math>, <math>b_L,b_U \in \mathbb{R}^{m_1}</math> and <math>c(x),c_L,c_U \in \mathbb{R}^{m_2}</math>.
 
The use of ''slsSolve'' is mainly for large, sparse problems, where the structure in the Jacobians of the residuals and the nonlinear constraints is utilized by a sparse NLP solver, e.g. ''SNOPT''.
 
'''Calling  Syntax'''
 
Result=slsSolve(Prob,PriLev)
 
'''Description  of Inputs'''
 
{|
|''Prob''||colspan="2"|Problem description structure. Should be created in the '''cls''' format, preferably by calling ''Prob=clsAssign(...)'' if using the '''TQ''' format.
|-
|||colspan="2"|slsSolve uses two special fields in ''Prob'':
|-valign="top"
|||''SolverL2''||Text  string with  name of the NLP solver used for solving the reformulated problem. Valid choices are ''conSolve'', ''nlpSolve'', ''sTrustr'', ''clsSolve''. Suitable SOL solvers, if available: ''minos'', ''snopt'', ''npopt''.
|-valign="top"
|||''L2Type''||Set to 1 for standard constrained formulation. Currently this is the only allowed choice.
|-
|||colspan="2"|All other fields should be set as expected  by the nonlinear solver selected. In particular:
|-
|||''A''||Linear constraint matrix.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''c_L''||Lower bounds on the nonlinear constraints.
|-
|||''c_U''||Upper bounds on the nonlinear constraints.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-
|||''ConsPattern''||The nonzero pattern of the constraint Jacobian.
|-
|||''JacPattern''||The nonzero pattern of the residual Jacobian.
|-
|||||Note that  ''Prob.LS.y ''must be of correct length if ''JacPattern ''is empty (but ConsPattern is not). ''slsSolve ''will create the new ''Prob.ConsPattern ''to be used by the nonlinear solver using the information in the supplied ''ConsPattern ''and ''JacPattern''.
|-
|||''PriLev''||Print level in ''slsSolve''. Default value is 2.
|-
|||0||Silent except for error messages.
|-
|||> 1||Print summary information about problem transformation. ''slsSolve'' calls ''PrintResult(Result,PriLev)''.
|-
|||2||Standard output in PrintResult.
|}
 
'''Description  of Outputs'''
 
{|
|''Result''||colspan="2"|Structure with results from optimization. The contents of ''Result'' depend on which nonlinear solver was used.
|-
|||colspan="2"|''slsSolve ''transforms the following fields of ''Result ''back to the format of the original problem:
|-
|||''x_k''||Optimal point.
|-
|||''r_k''||Residual at optimum.
|-
|||''J_k''||Jacobian of residuals at optimum.
|-
|||''c_k''||Nonlinear constraint vector at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''g_k''||The gradient vector is calculated as <math>J_k^T \cdot r_k</math>.
|-
|||''cJac''||Jacobian of nonlinear constraints at optimum.
|-
|||''x_0''||Starting point.
|-
|||''xState''||State of variables at optimum.
|-
|||''cState''||State of constraints at optimum.
|-
|||''Result.Prob''||The problem structure defining the reformulated problem.
|}
 
'''Description'''
 
The constrained least squares problem is solved in slsSolve by rewriting  the problem as a general constrained optimization problem. A set of ''m ''(the number of residuals) extra variables <math>z=(z_1,z_2,\ldots,z_m)</math> are added at the end of the vector of unknowns. The reformulated problem
 
 
<math>
\begin{array}{cccccc}
\min \limits_x & \multicolumn{5}{l}{\frac{1}{2} z^T z} \\
\mbox{subject to} & x_L & \leq & (x_1,x_2,\ldots,x_n) & \leq & x_U \\
{}                & b_L & \leq &  Ax                & \leq & b_U \\
{}                & c_L & \leq & c(x)                & \leq & c_U \\
{}                & 0  & \leq & r(x) - z            & \leq &  0 \\
\end{array}
</math>
 
 
is then solved by the solver given by ''Prob.SolverL2''.
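A minimal calling sketch, assuming ''clsAssign'' is used to create the problem; the residual routine names and data below are hypothetical placeholders:

```matlab
% Hedged sketch: sls_r/sls_J are hypothetical m-files computing the residual
% vector r(x) and its Jacobian; the bounds and starting point are made up.
x_L = -10*ones(2,1); x_U = 10*ones(2,1); x_0 = zeros(2,1);
Prob = clsAssign('sls_r', 'sls_J', [], x_L, x_U, 'slsExample', x_0);
Prob.SolverL2 = 'snopt';      % NLP solver for the reformulated problem
Prob.L2Type   = 1;            % standard constrained formulation
Result = slsSolve(Prob, 2);   % PriLev 2: standard PrintResult output
```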
 
'''Examples'''
 
''slsDemo.m''
 
'''M-files  Used'''
 
''iniSolve.m'', ''GetSolver.m''
 
====sTrustr====
 
'''Purpose'''

Solve optimization problems constrained by a convex feasible region.

''sTrustr'' solves problems of the form
 
 
<math>
\begin{array}{cccccc}
\min\limits_{x} & f(x) &  &  &  &  \\
s/t & x_{L} & \leq  & x    & \leq  & x_{U} \\
& b_{L} & \leq  & Ax  & \leq  & b_{U} \\
& c_{L} & \leq  & c(x) & \leq  & c_{U} \\
\end{array}
</math>
 


where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>, <math>c(x),c_{L},c_{U}\in \mathbb{R}^{m_{1}}</math>, <math>A\in \mathbb{R}^{m_{2}\times n}</math> and <math>b_{L},b_{U}\in \mathbb{R}^{m_{2}}</math>.

'''Calling Syntax'''

Result = sTrustr(Prob, varargin)

'''Description  of Inputs'''
 
{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''A ''||Constraint matrix for linear constraints.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''c_L''||Lower bounds on the general constraints.
|-
|||''c_U''||Upper bounds on the general constraints.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-
|||''FUNCS.f''||Name of m-file computing the objective function ''f ''(''x'').
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector ''g''(''x'').
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix ''H ''(''x'').
|-
|||''FUNCS.c''||Name of m-file computing the vector of constraint functions ''c''(''x'').
|-
|||''FUNCS.dc''||Name of m-file computing the matrix of constraint normals <math>\partial c(x)/\partial x</math>.
|-
|||''optParam''||Structure with special fields for optimization parameters, see Table 141.
|-
|||||Fields used are: ''eps_f'', ''eps_g'', ''eps_c'', ''eps_x'', ''eps_Rank'', ''MaxIter'', ''wait'', ''size_x'', ''size_f'', ''xTol'', ''LowIts'', ''PriLev'', ''method'' and ''QN_InitMatrix''.
|-
|||''PartSep''||Structure with special fields for partially separable functions, see Table 142.
|-
|||''varargin''||Other parameters directly sent to low level routines.
|}
 
'''Description  of Outputs'''
 
{|
|''Result''||colspan="2"|Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''c_k''||Value of constraints at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''cJac''||Constraint Jacobian at optimum.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||Flag giving exit status.
|-
|||''Inform''||Binary code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Projected gradient small.
|-
|||||3: Iteration points are close and projected gradient small.
|-
|||||4: Relative function value reduction low for ''LowIts ''iterations.
|-
|||||5: Iteration points are close and relative function value reduction low for LowIts iterations.
|-
|||||6: Projected gradient small and relative function value reduction low for LowIts iterations.
|-
|||||7:  Iteration  points are close, projected gradient  small and relative  function value reduction low for LowIts iterations.
|-
|||||8: Too small trust region.
|-
|||||9: Trust region small. Iteration points close.
|-
|||||10: Trust region and projected gradient small.
|-
|||||11: Trust region and projected gradient small, iterations close.
|-
|||||12: Trust region small, Relative f(x) reduction low.
|-
|||||13: Trust region small, Relative f(x) reduction low. Iteration points are close.
|-
|||||14: Trust region small, Relative f(x) reduction low. Projected gradient small.
|-
|||||15:  Trust  region small, Relative  f(x)  reduction low. Iteration  points close, Projected gradient small.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||103: Convergence to saddle point (eigenvalues computed).
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm ''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
 
'''Description'''
 
The routine ''sTrustr'' is a solver for general constrained optimization, which uses a structural trust region algorithm combined with an initial trust region radius algorithm (''itrr''). The feasible region defined by the constraints must be convex. The code is based on the algorithms in [13] and [67]. BFGS or DFP is used for the Quasi-Newton update if the analytical Hessian is not used. ''sTrustr'' calls the internal routine ''itrr''.
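A minimal calling sketch, assuming ''conAssign'' is used to build the problem; all routine names and data below are hypothetical placeholders:

```matlab
% Hedged sketch: str_f/str_g/str_H/str_c/str_dc are hypothetical m-files for
% the objective, gradient, Hessian, constraints and constraint normals.
x_L = [-5; -5]; x_U = [5; 5]; x_0 = [1; 1];
A = [1 1]; b_L = -inf; b_U = 4;          % linear constraint
c_L = 0; c_U = inf;                      % nonlinear constraint bounds
Prob = conAssign('str_f','str_g','str_H',[],x_L,x_U,'sTrustrExample',x_0,...
                 [],[],A,b_L,b_U,'str_c','str_dc',[],[],c_L,c_U);
Result = sTrustr(Prob);
```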
 
'''M-files  Used'''
 
''qpSolve.m'', ''tomSolve.m'', ''iniSolve.m'', ''endSolve.m''
 
'''See Also'''
 
''conSolve'', ''nlpSolve'', ''clsSolve''
 
====Tfmin====
 
'''Purpose'''
 
Minimize a function of one variable. Find the minimum x in [x_L, x_U] for the function Func within tolerance xTol. Solves using Brent's minimization algorithm. Reference: "Computer Methods for Mathematical Computations", Forsythe, Malcolm, and Moler, Prentice-Hall, 1976.
 
 
'''Calling  Syntax'''
 
[x, nFunc] = Tfmin(Func, x_L, x_U, xTol, Prob)
 
'''Description  of Inputs'''
 
{|
|''Variable''||Description
|-
|''Func''||Function of x to be minimized. Func must be defined as:
|-
|||f = Func(x) if no 5th argument Prob is given or
|-
|||f = Func(x, Prob) if 5th argument Prob is given.
|-
|''x_L''||Lower bound on x.
|-
|''x_U''||Upper bound on x.
|-
|''xTol''||Tolerance on accuracy of minimum.
|-valign="top"
|''Prob''||Structure (or any Matlab variable) sent to Func. If many parameters are to be sent to Func set them in Prob as a structure.  Example for parameters a and b:
|-
|||Prob.user.a = a; Prob.user.b = b;
|-
|||[x, nFunc] = Tfmin('myFunc',0,1,1E-5,Prob); In myFunc:
|-
|||function f = myFunc(x, Prob)
|-
|||a = Prob.user.a;
|-
|||b = Prob.user.b;
|-
|||f = "matlab expression dependent of x, a and b";
|}
 
'''Description  of Outputs'''
 
{|
|''Variable''||Description
|-
|''x''||Solution.
|-
|''nFunc''||Number of calls to Func.
|}
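The parametric example in the table above, gathered into one runnable piece (the objective and file name ''tfminExample'' are placeholders):

```matlab
% Hedged sketch: minimize (x - a)^2 over [0, 1], passing a via Prob.
Prob.user.a = 0.3;
[x, nFunc] = Tfmin('tfminExample', 0, 1, 1e-5, Prob);

% In tfminExample.m (hypothetical file name):
% function f = tfminExample(x, Prob)
% a = Prob.user.a;
% f = (x - a)^2;
```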
 
====Tfzero====
 
'''Purpose'''


Tfzero, TOMLAB  fzero routine.
Tfzero, TOMLAB  fzero routine.


Tfzero, TOMLAB fzero routine.\\ \\Find a zero for <math>f(x)</math> in the interval <math>[x_L, x_U]</math>. Tfzero searches for a zero of a function <math>f(x)</math> between the given scalar values <math>x_L</math> and <math>x_U</math> until the width of the interval (xLow, xUpp) has collapsed to within a tolerance specified by the stopping criterion, <math>abs(xLow-xUpp) <= 2.*(RelErr*abs(xLow)+AbsErr)</math>. The method used is an efficient combination of bisection and the secant rule and is due to T. J. Dekker.
*[[Tfzero]]
 
'''Calling  Syntax'''
 
[xLow, xUpp, ExitFlag] = Tfzero(x_L, x_U, Prob, x_0, RelErr, AbsErr)
 
'''Description  of Inputs'''
 
{|
|''Variable''||Description
|-
|''x_L''||Lower limit  on the zero x to f(x).
|-
|''x_U''||Upper limit  on the zero x to f(x).
|-valign="top"
|''Prob''||Structure, sent to Matlab routine ZeroFunc. The function name should be set in Prob.FUNCS.f0. Only the function will be used, not the gradient.
|-
|''x_0''||An initial guess on the zero to f(x). If empty, x_0 is set as the middle point in [x_L, x_U].
|-
|''RelErr''||Relative error tolerance, default 1E-7.
|-
|''AbsErr''||Absolute error tolerance, default 1E-14.
|}
 
'''Description  of Outputs'''
 
{|
|''Variable''||Description
|-
|''xLow''||Lower limit on the zero x to f(x).
|-
|''xUpp''||Upper limit on the zero x to f(x).
|-
|''ExitFlag''||Status flag 1,2,3,4,5.
|-
|||1: xLow is within the requested tolerance of a zero. The interval (xLow, xUpp) collapsed to the requested tolerance, the function changes sign in (xLow, xUpp), and f(x) decreased in magnitude as (xLow, xUpp) collapsed.
|-
|||2: f(xLow) = 0. However, the interval (xLow, xUpp) may not have collapsed to the requested tolerance.
|-
|||3: xLow may be near a singular point of f(x). The interval (xLow, xUpp) collapsed to the requested tolerance and the function changes sign in (xLow, xUpp), but f(x) increased in magnitude as (xLow, xUpp) collapsed, i.e. ''abs''(''f''(''xLow'')) > ''max''(''abs''(''f''(''xLow - IN'')), ''abs''(''f''(''xUpp - IN''))).
|-
|||4:  No change in sign of f(x)  was found although the interval (xLow, xUpp) collapsed to the requested tolerance.  The user must examine this case and decide whether xLow is near a local minimum of f(x), or xLow is near a zero of even multiplicity, or neither of these.
|-
|||5:  Too  many (> 500)  function evaluations used.
|}
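A minimal calling sketch; the zero function is named in ''Prob.FUNCS.f0'' as documented in the inputs table, and ''tfzExample'' is a hypothetical placeholder:

```matlab
% Hedged sketch: find a zero of a hypothetical function tfzExample
% (e.g. f(x) = x^2 - 2) on the interval [0, 2].
Prob.FUNCS.f0 = 'tfzExample';
[xLow, xUpp, ExitFlag] = Tfzero(0, 2, Prob, [], 1e-7, 1e-14);

% In tfzExample.m (hypothetical file name):
% function f = tfzExample(x, Prob)
% f = x^2 - 2;
```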


====ucSolve====

'''Purpose'''

Solve unconstrained nonlinear optimization problems with simple bounds on the variables.

''ucSolve'' solves problems of the form
 
 
<math>
\begin{array}{cccccc}
\min\limits_{x} & f(x) &  &  &  &  \\
s/t & x_{L} & \leq  & x & \leq  & x_{U} \\
\end{array}
</math>
 
 
where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>.
 
'''Calling  Syntax'''
 
Result = ucSolve(Prob, varargin)
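A minimal calling sketch, assuming ''conAssign'' is used to create the problem structure; ''uc_f'', ''uc_g'' and the data below are hypothetical placeholders:

```matlab
% Hedged sketch: uc_f/uc_g are hypothetical m-files for the objective and
% gradient; bounds and starting point are made-up example data.
x_L = -10*ones(2,1); x_U = 10*ones(2,1); x_0 = [-1.2; 1];
Prob = conAssign('uc_f', 'uc_g', [], [], x_L, x_U, 'ucExample', x_0);
Prob.Solver.Alg = 2;            % safeguarded BFGS, inverse Hessian updates
Result = ucSolve(Prob);
x = Result.x_k;
```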
 
'''Description  of Inputs'''
 
{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-
|||''FUNCS.f''||Name of m-file computing the objective function ''f ''(''x'').
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector ''g''(''x'').
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix ''H ''(''x'').
|-
|||''f_Low''||Lower bound on function value.
|-
|||''Solver.Alg ''||Solver algorithm to be run:
|-
|||||0: Gives default, either Newton or BFGS.
|-
|||||1: Newton with subspace minimization, using SVD.
|-
|||||2: Safeguarded BFGS with inverse Hessian updates (standard).
|-
|||||3: Safeguarded BFGS with Hessian updates.
|-
|||||4: Safeguarded DFP with inverse Hessian updates.
|-
|||||5: Safeguarded DFP with Hessian updates.
|-
|||||6: Fletcher-Reeves CG.
|-
|||||7: Polak-Ribiere CG.
|-
|||||8: Fletcher conjugate descent CG-method.
|-
|||''Solver.Method''||Method used to solve equation system:
|-
|||||0: SVD (default).
|-
|||||1: LU-decomposition.
|-
|||||2: LU-decomposition with pivoting.
|-
|||||3: Matlab built in QR.
|-
|||||4: Matlab inversion.
|-
|||||5: Explicit  inverse.
|-
|||''Solver.Method''||Restart or not for C-G method:
|-
|||||0: Don't use restart in CG-method each n:th step.
|-
|||||1: Use restart in CG-method each n:th step.
|-
|||''LineParam ''||Structure with line search parameters,  see routine ''LineSearch ''and Table 140.
|-
|||''optParam ''||Structure with special fields for optimization parameters,  see Table 141.
|-
|||||Fields used are: ''eps_absf'', ''eps_f'', ''eps_g'', ''eps_x'', ''eps_Rank'', ''MaxIter'', ''wait'', ''size_x'', ''size_f'', ''xTol'', ''LineSearch'', ''LineAlg'', ''IterPrint'' and ''QN_InitMatrix''.
|-
|||''PriLevOpt''||Print level.
|-
|''varargin''||colspan="2"|Other parameters directly sent to low level routines.
|}
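The conjugate-gradient variants (''Solver.Alg'' 6-8) differ only in how the coefficient used to combine the new negative gradient with the previous search direction is computed, and the restart option periodically resets the direction to steepest descent. The following sketch illustrates the general idea in Python; it is an illustration only, not TOMLAB code, and the function names, the simple Armijo backtracking line search (standing in for ''LineSearch''), and the safeguards are assumptions of this example:

```python
import numpy as np

def cg_minimize(f, grad, x0, n_restart=10, beta_rule="PR",
                tol=1e-8, max_iter=500):
    """Nonlinear conjugate-gradient sketch (illustrative, not TOMLAB code).

    beta_rule "FR" = Fletcher-Reeves, "PR" = Polak-Ribiere.  The search
    direction is reset to steepest descent every n_restart iterations,
    mimicking the "restart each n:th step" option."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for k in range(1, max_iter + 1):
        if np.linalg.norm(g) < tol:
            break
        if g.dot(d) >= 0:
            d = -g                          # safeguard: ensure descent
        # Simple Armijo backtracking line search (stands in for LineSearch.m).
        alpha = 1.0
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * g.dot(d):
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if k % n_restart == 0:
            d = -g_new                      # periodic restart: steepest descent
        else:
            if beta_rule == "FR":           # Fletcher-Reeves
                beta = g_new.dot(g_new) / g.dot(g)
            else:                           # Polak-Ribiere
                beta = g_new.dot(g_new - g) / g.dot(g)
            d = -g_new + max(beta, 0.0) * d # clip beta at 0 for safety
        x, g = x_new, g_new
    return x

# Usage on a convex quadratic f(x) = 0.5 x'Ax - b'x (minimizer solves Ax = b):
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = cg_minimize(lambda x: 0.5 * x.dot(A).dot(x) - b.dot(x),
                     lambda x: A.dot(x) - b, np.zeros(2))
```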
 
'''Description  of Outputs'''
 
{|
|''Result''||colspan="2"|Structure with result from optimization.  The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''B_k''||Quasi-Newton approximation of the Hessian at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||0 if convergence to a local minimum, otherwise an error.
|-
|||''Inform''||Binary code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Projected gradient small.
|-
|||||4: Relative function value reduction low for ''LowIts ''iterations.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||104: Convergence to a saddle point.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
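Since ''Inform'' is described as a binary code, a plausible reading is that the low convergence codes act as bit flags that may combine (e.g. ''Inform'' = 3 would mean both criteria 1 and 2 held), while the 100-series values are reported as single codes. A small hypothetical decoder under that assumption (this interpretation and the function name are this example's, not TOMLAB's):

```python
# Hypothetical decoder for the Inform code listed above, assuming values
# below 100 combine as bit flags (1, 2 and 4) and 100-series values are
# single termination codes.
CONV_BITS = {
    1: "Iteration points are close.",
    2: "Projected gradient small.",
    4: "Relative function value reduction low for LowIts iterations.",
}
TERM_CODES = {
    101: "Maximum number of iterations reached.",
    102: "Function value below given estimate.",
    104: "Convergence to a saddle point.",
}

def decode_inform(inform):
    """Return the list of messages encoded by an Inform value."""
    if inform in TERM_CODES:
        return [TERM_CODES[inform]]
    return [msg for bit, msg in CONV_BITS.items() if inform & bit]
```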
 
'''Description'''
 
The solver ''ucSolve'' includes several of the most popular search step methods for unconstrained optimization. The search step methods included in ''ucSolve'' are: the Newton method, the quasi-Newton BFGS and DFP methods, the Fletcher-Reeves and Polak-Ribiere conjugate-gradient methods, and the Fletcher conjugate descent method. The quasi-Newton methods may either update the inverse Hessian (standard) or the Hessian itself. The Newton method and the quasi-Newton methods updating the Hessian use a subspace minimization technique to handle rank problems, see Lindström \[53\]. The quasi-Newton algorithms also use safeguarding techniques to avoid rank problems in the updated matrix. The line search algorithm in the routine ''LineSearch'' is a modified version of an algorithm by Fletcher \[20\]. Bound constraints are treated as described in Gill, Murray and Wright \[28\]. The accuracy in the line search is critical for the performance of the quasi-Newton BFGS and DFP methods and for the CG methods. If the accuracy parameter ''Prob.LineParam.sigma'' is set to the default value 0.9, ''ucSolve'' changes it automatically according to:
 
{|
|''Prob.Solver.Alg''||''Prob.LineParam.sigma''
|-
|4,5 (DFP)||0.2
|-
|6,7,8 (CG)||0.01
|}
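The safeguarded quasi-Newton updates mentioned above can be sketched as follows: one BFGS update of the inverse Hessian approximation, skipped when the curvature condition fails. This is a generic illustration in Python, not TOMLAB code; the curvature test shown is one standard safeguard for keeping the approximation positive definite, not necessarily ''ucSolve'''s exact rule:

```python
import numpy as np

def bfgs_inverse_update(H_inv, s, y, curvature_tol=1e-10):
    """One safeguarded BFGS update of an inverse Hessian approximation.

    s = x_new - x_old and y = g_new - g_old.  The update is skipped when
    the curvature s'y is not safely positive, which would otherwise
    destroy positive definiteness (a generic safeguard)."""
    sy = s.dot(y)
    if sy <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return H_inv                      # skip update: curvature too small
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    # Standard BFGS inverse update: H+ = V H V' + rho s s'
    return V @ H_inv @ V.T + rho * np.outer(s, s)

# The updated matrix is symmetric and satisfies the secant condition
# H_inv_new @ y == s:
H_new = bfgs_inverse_update(np.eye(3), np.array([1.0, 2.0, 3.0]),
                            np.array([0.5, 1.0, 2.0]))
```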
 
'''M-files  Used'''
 
''ResultDef.m'', ''LineSearch.m'', ''iniSolve.m'', ''tomSolve.m'', ''endSolve.m''
 
'''See Also'''

''clsSolve''
==Additional solvers==

Documentation for the following solvers is only available at http://tomopt.com and in the m-file help.

*goalSolve - For sparse multi-objective goal attainment problems, with linear and nonlinear constraints.
*Tlsqr - Solves large, sparse linear least squares problems, as well as unsymmetric linear systems.
*lsei - For linearly constrained least squares problems with both equality and inequality constraints.
*Tnnls - Also for linearly constrained least squares problems with both equality and inequality constraints.
*qld - For convex quadratic programming problems.