TOMLAB Solver Reference

From TomWiki
*[[slsSolve]]


====sTrustr====
 
'''Purpose'''


Solve optimization problems constrained by a convex feasible region.

''sTrustr'' solves problems of the form
 
 
<math>
\begin{array}{cccccc}
\min\limits_{x} & f(x) &  &  &  &  \\
s/t & x_{L} & \leq  & x    & \leq  & x_{U} \\
& b_{L} & \leq  & Ax  & \leq  & b_{U} \\
& c_{L} & \leq  & c(x) & \leq  & c_{U} \\
\end{array}
</math>
 
 
where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>, <math>c(x),c_{L},c_{U}\in \mathbb{R}^{m_{1}}</math>, <math>A\in \mathbb{R}^{m_{2}\times n}</math> and <math>b_{L},b_{U}\in \mathbb{R}^{m_{2}}</math>.
 
'''Calling  Syntax'''
 
Result = sTrustr(Prob, varargin)
 
'''Description  of Inputs'''
 
{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''A ''||Constraint matrix for linear constraints.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''c_L''||Lower bounds on the general constraints.
|-
|||''c_U''||Upper bounds on the general constraints.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-
|||''FUNCS.f''||Name of m-file computing the objective function ''f ''(''x'').
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector ''g''(''x'').
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix ''H ''(''x'').
|-
|||''FUNCS.c''||Name of m-file computing the vector of constraint functions ''c''(''x'').
|-
|||''FUNCS.dc''||Name of m-file computing the matrix of constraint normals &part;''c''(''x'')/''dx''.
|-
|||''optParam''||Structure with special fields for optimization parameters, see Table 141.
|-
|||||Fields used are: ''eps_f'', ''eps_g'', ''eps_c'', ''eps_x'', ''eps_Rank'', ''MaxIter'', ''wait'', ''size_x'', ''size_f'', ''xTol'', ''LowIts'', ''PriLev'', ''method'' and ''QN_InitMatrix''.
|-
|||''PartSep''||Structure with special fields for partially separable functions, see Table 142.
|-
|||''varargin''||Other parameters directly sent to low level routines.
|}
 
'''Description  of Outputs'''
 
{|
|''Result''||colspan="2"|Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''c_k''||Value of constraints at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''cJac''||Constraint Jacobian at optimum.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||Flag giving exit status.
|-
|||''Inform''||Binary code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Projected gradient small.
|-
|||||3: Iteration points are close and projected gradient small.
|-
|||||4: Relative function value reduction low for ''LowIts ''iterations.
|-
|||||5: Iteration points are close and relative function value reduction low for LowIts iterations.
|-
|||||6: Projected gradient small and relative function value reduction low for LowIts iterations.
|-
|||||7:  Iteration  points are close, projected gradient  small and relative  function value reduction low for LowIts iterations.
|-
|||||8: Too small trust region.
|-
|||||9: Trust region small. Iteration points close.
|-
|||||10: Trust region and projected gradient small.
|-
|||||11: Trust region and projected gradient small, iterations close.
|-
|||||12: Trust region small, Relative f(x) reduction low.
|-
|||||13: Trust region small, Relative f(x) reduction low. Iteration points are close.
|-
|||||14: Trust region small, Relative f(x) reduction low. Projected gradient small.
|-
|||||15:  Trust  region small, Relative  f(x)  reduction low. Iteration  points close, Projected gradient small.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||103: Convergence to saddle point (eigenvalues computed).
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
 
'''Description'''
 
The routine ''sTrustr'' is a solver for general constrained optimization, which uses a structural trust region algorithm combined with an initial trust region radius algorithm (''itrr''). The feasible region defined by the constraints must be convex. The code is based on the algorithms in [13] and [67]. BFGS or DFP is used for the Quasi-Newton update, if the analytical Hessian is not used. ''sTrustr'' calls the internal routine ''itrr''.
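As an illustrative sketch (not taken from the manual), a small convex problem can be assembled with ''conAssign'' and passed to ''sTrustr''; the ''ex_f'', ''ex_g'' and ''ex_H'' m-file names are hypothetical, and the ''conAssign'' argument order should be verified against its m-file help:

 % Sketch: minimize a convex objective subject to simple bounds and one
 % linear constraint x1 + x2 <= 2. 'ex_f', 'ex_g', 'ex_H' are hypothetical
 % m-files computing f(x), g(x) and H(x).
 x_0 = [1; 1];  x_L = [-10; -10];  x_U = [10; 10];
 A = [1 1];  b_L = -inf;  b_U = 2;
 Prob = conAssign('ex_f', 'ex_g', 'ex_H', [], x_L, x_U, ...
                  'sTrustr example', x_0, [], [], A, b_L, b_U);
 Result = sTrustr(Prob);
 x_k = Result.x_k;    % optimal point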
 
'''M-files  Used'''
 
''qpSolve.m'', ''tomSolve.m'', ''iniSolve.m'', ''endSolve.m''
 
'''See Also'''
 
''conSolve'', ''nlpSolve'', ''clsSolve''



Revision as of 07:34, 11 July 2011



This page is part of the TOMLAB Manual. See TOMLAB Manual.

Detailed descriptions of the TOMLAB solvers, driver routines and some utilities are given in the following sections. Also see the M-file help for each solver. All solvers except those in the TOMLAB Base Module are described in separate manuals.

For a description of solvers called using the MEX-file interface, see the M-file help, e.g. for the MINOS solver minosTL.m. For more details, see the User's Guide for the particular solver.
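As a brief sketch of that workflow, an assembled Prob structure can be dispatched to any installed solver through the ''tomRun'' driver; the solver name and print level below are illustrative choices:

 % Sketch: run an already assembled Prob structure with a named solver
 % via the tomRun driver ('minos' and print level 1 are examples).
 Result = tomRun('minos', Prob, 1);
 ExitFlag = Result.ExitFlag;    % exit status, 0 on convergence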

'''clsSolve'''

Solves dense and sparse nonlinear least squares optimization problems with linear inequality and equality constraints and simple bounds on the variables.

'''conSolve'''

Solve general constrained nonlinear optimization problems.

'''cutPlane'''

Solve mixed integer linear programming problems (MIP).

'''DualSolve'''

Solve linear programming problems when a dual feasible solution is available.

'''expSolve'''

Solve exponential fitting problems for a given number of terms p.

'''glbDirect'''

Solve box-bounded global optimization problems.

'''glbSolve'''

Solve box-bounded global optimization problems.

'''glcCluster'''

Solve general constrained mixed-integer global optimization problems using a hybrid algorithm.

'''glcDirect'''

Solve global mixed-integer nonlinear programming problems.

'''glcSolve'''

Solve general constrained mixed-integer global optimization problems.

'''infLinSolve'''

Finds a linearly constrained minimax solution of a function of several variables with the use of any suitable TOMLAB solver. The decision variables may be binary or integer.

'''infSolve'''

Find a constrained minimax solution with the use of any suitable TOMLAB solver.

'''linRatSolve'''

Finds a linearly constrained solution of a function of the ratio of two linear functions with the use of any suitable TOMLAB solver. Binary and integer variables are not supported.

'''lpSimplex'''

Solve general linear programming problems.

'''L1Solve'''

Find a constrained L1 solution of a function of several variables with the use of any suitable nonlinear TOMLAB solver.

'''MilpSolve'''

Solve mixed integer linear programming problems (MILP).

'''minlpSolve'''

Branch & Bound algorithm for Mixed-Integer Nonlinear Programming (MINLP) with convex or nonconvex sub-problems using NLP relaxation (formulated as minlp-IP).

'''mipSolve'''

Solve mixed integer linear programming problems (MIP).

'''multiMin'''

multiMin solves general constrained mixed-integer global optimization problems. It tries to find all local minima by a multi-start method using a suitable nonlinear programming subsolver.

'''multiMINLP'''

multiMINLP solves general constrained mixed-integer global nonlinear optimization problems.

'''nlpSolve'''

Solve general constrained nonlinear optimization problems.

'''pdcoTL'''

pdcoTL solves linearly constrained convex nonlinear optimization problems.

'''pdscoTL'''

pdscoTL solves linearly constrained convex nonlinear optimization problems.

'''qpSolve'''

Solve general quadratic programming problems.

'''slsSolve'''

Find a Sparse Least Squares (sls) solution to a constrained least squares problem with the use of any suitable TOMLAB NLP solver.

'''sTrustr'''

Solve optimization problems constrained by a convex feasible region.

====Tfmin====

'''Purpose'''

Minimize a function of one variable. Find the minimum ''x'' in [''x_L, x_U''] for the function ''Func'' within tolerance ''xTol''. Solves using Brent's minimization algorithm. Reference: "Computer Methods for Mathematical Computations", Forsythe, Malcolm, and Moler, Prentice-Hall, 1976.


'''Calling Syntax'''

[x, nFunc] = Tfmin(Func, x_L, x_U, xTol, Prob)

'''Description of Inputs'''

{|
|''Func''||Function of ''x'' to be minimized. ''Func'' must be defined as:

''f = Func(x)'' if no 5th argument ''Prob'' is given, or

''f = Func(x, Prob)'' if the 5th argument ''Prob'' is given.
|-
|''x_L''||Lower bound on ''x''.
|-
|''x_U''||Upper bound on ''x''.
|-
|''xTol''||Tolerance on accuracy of minimum.
|-
|''Prob''||Structure (or any Matlab variable) sent to ''Func''. If many parameters are to be sent to ''Func'', set them in ''Prob'' as a structure. Example for parameters ''a'' and ''b'':

 Prob.user.a = a; Prob.user.b = b;
 [x, nFunc] = Tfmin('myFunc',0,1,1E-5,Prob);

In ''myFunc'':

 function f = myFunc(x, Prob)
 a = Prob.user.a;
 b = Prob.user.b;
 f = "matlab expression dependent of x, a and b";
|}

'''Description of Outputs'''

{|
|''x''||Solution.
|-
|''nFunc''||Number of calls to ''Func''.
|}
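Putting the pieces above together, a complete call might look as follows (a sketch; ''myFunc'' is the hypothetical user m-file described in the input table):

 % Sketch: minimize f(x) = (x - a)^2 + b on [0, 1] within tolerance 1E-5,
 % passing the parameters a and b through the Prob structure.
 Prob.user.a = 0.3;
 Prob.user.b = 2;
 [x, nFunc] = Tfmin('myFunc', 0, 1, 1E-5, Prob);
 
 % myFunc.m:
 % function f = myFunc(x, Prob)
 % f = (x - Prob.user.a)^2 + Prob.user.b;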

====Tfzero====

'''Purpose'''

Tfzero, TOMLAB fzero routine.

Find a zero of the function ''f''(''x'') in the interval [''x_L'', ''x_U'']. ''Tfzero'' searches for a zero of a function between the given scalar values ''x_L'' and ''x_U'' until the width of the interval (''xLow'', ''xUpp'') has collapsed to within a tolerance specified by the stopping criterion. The method used is an efficient combination of bisection and the secant rule and is due to T. J. Dekker.

'''Calling Syntax'''

[xLow, xUpp, ExitFlag] = Tfzero(x_L, x_U, Prob, x_0, RelErr, AbsErr)

'''Description of Inputs'''

{|
|''x_L''||Lower limit on the zero ''x'' of ''f''(''x'').
|-
|''x_U''||Upper limit on the zero ''x'' of ''f''(''x'').
|-
|''Prob''||Structure, sent to the Matlab routine ''ZeroFunc''. The function name should be set in ''Prob.FUNCS.f0''. Only the function will be used, not the gradient.
|-
|''x_0''||An initial guess on the zero of ''f''(''x''). If empty, ''x_0'' is set as the middle point in [''x_L, x_U''].
|-
|''RelErr''||Relative error tolerance, default 1E-7.
|-
|''AbsErr''||Absolute error tolerance, default 1E-14.
|}

'''Description of Outputs'''

{|
|''xLow''||Lower limit on the zero ''x'' of ''f''(''x'').
|-
|''xUpp''||Upper limit on the zero ''x'' of ''f''(''x'').
|-
|''ExitFlag''||Status flag 1,2,3,4,5:
|-
|||1: ''xLow'' is within the requested tolerance of a zero. The interval (''xLow'', ''xUpp'') collapsed to the requested tolerance, the function changes sign in (''xLow'', ''xUpp''), and ''f''(''x'') decreased in magnitude as (''xLow'', ''xUpp'') collapsed.
|-
|||2: ''f''(''xLow'') = 0. However, the interval (''xLow'', ''xUpp'') may not have collapsed to the requested tolerance.
|-
|||3: ''xLow'' may be near a singular point of ''f''(''x''). The interval (''xLow'', ''xUpp'') collapsed to the requested tolerance and the function changes sign in (''xLow'', ''xUpp''), but ''f''(''x'') increased in magnitude as (''xLow'', ''xUpp'') collapsed, i.e. abs(f(xLow)) > max(abs(f(xLow_IN)), abs(f(xUpp_IN))).
|-
|||4: No change in sign of ''f''(''x'') was found although the interval (''xLow'', ''xUpp'') collapsed to the requested tolerance. The user must examine this case and decide whether ''xLow'' is near a local minimum of ''f''(''x''), or ''xLow'' is near a zero of even multiplicity, or neither of these.
|-
|||5: Too many (> 500) function evaluations used.
|}
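A minimal sketch of a call, assuming a hypothetical m-file ''myZeroFunc'' whose name is set in ''Prob.FUNCS.f0'':

 % Sketch: find the zero of f(x) = cos(x) - x in [0, 2]; with an empty
 % x_0 the midpoint of the interval is used as the initial guess.
 Prob.FUNCS.f0 = 'myZeroFunc';
 [xLow, xUpp, ExitFlag] = Tfzero(0, 2, Prob, [], 1E-7, 1E-14);
 
 % myZeroFunc.m:
 % function f = myZeroFunc(x, Prob)
 % f = cos(x) - x;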

====ucSolve====

'''Purpose'''

Solve unconstrained nonlinear optimization problems with simple bounds on the variables.

''ucSolve'' solves problems of the form

<math>
\begin{array}{cccccc}
\min\limits_{x} & f(x) &  &  &  &  \\
s/t & x_{L} & \leq  & x & \leq  & x_{U} \\
\end{array}
</math>

where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>.

'''Calling Syntax'''

Result = ucSolve(Prob, varargin)

'''Description of Inputs'''

{|
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-
|||''FUNCS.f''||Name of m-file computing the objective function ''f''(''x'').
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector ''g''(''x'').
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix ''H''(''x'').
|-
|||''f_Low''||Lower bound on function value.
|-
|||''Solver.Alg''||Solver algorithm to be run:
|-
|||||0: Gives default, either Newton or BFGS.
|-
|||||1: Newton with subspace minimization, using SVD.
|-
|||||2: Safeguarded BFGS with inverse Hessian updates (standard).
|-
|||||3: Safeguarded BFGS with Hessian updates.
|-
|||||4: Safeguarded DFP with inverse Hessian updates.
|-
|||||5: Safeguarded DFP with Hessian updates.
|-
|||||6: Fletcher-Reeves CG.
|-
|||||7: Polak-Ribiere CG.
|-
|||||8: Fletcher conjugate descent CG-method.
|-
|||''Solver.Method''||Method used to solve the equation system:
|-
|||||0: SVD (default).
|-
|||||1: LU-decomposition.
|-
|||||2: LU-decomposition with pivoting.
|-
|||||3: Matlab built-in QR.
|-
|||||4: Matlab inversion.
|-
|||||5: Explicit inverse.
|-
|||''Solver.Method''||Restart or not for the C-G method:
|-
|||||0: No restart in the CG-method.
|-
|||||1: Use restart in the CG-method each n:th step.
|-
|||''LineParam''||Structure with line search parameters, see routine ''LineSearch'' and Table 140.
|-
|||''optParam''||Structure with special fields for optimization parameters, see Table 141.
|-
|||||Fields used are: ''eps_absf'', ''eps_f'', ''eps_g'', ''eps_x'', ''eps_Rank'', ''MaxIter'', ''wait'', ''size_x'', ''size_f'', ''xTol'', ''LineSearch'', ''LineAlg'', ''IterPrint'' and ''QN_InitMatrix''.
|-
|||''PriLevOpt''||Print level.
|-
|||''varargin''||Other parameters directly sent to low level routines.
|}

'''Description of Outputs'''

{|
|''Result''||colspan="2"|Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''B_k''||Quasi-Newton approximation of the Hessian at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||0 if convergence to local min. Otherwise errors.
|-
|||''Inform''||Binary code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Projected gradient small.
|-
|||||4: Relative function value reduction low for ''LowIts'' iterations.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||104: Convergence to a saddle point.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}

'''Description'''

The solver ''ucSolve'' includes several of the most popular search step methods for unconstrained optimization. The search step methods included in ''ucSolve'' are: the Newton method, the quasi-Newton BFGS and DFP methods, the Fletcher-Reeves and Polak-Ribiere conjugate-gradient methods, and the Fletcher conjugate descent method. The quasi-Newton methods may either update the inverse Hessian (standard) or the Hessian itself. The Newton method and the quasi-Newton methods updating the Hessian use a subspace minimization technique to handle rank problems, see Lindström [53]. The quasi-Newton algorithms also use safeguarding techniques to avoid rank problems in the updated matrix. The line search algorithm in the routine ''LineSearch'' is a modified version of an algorithm by Fletcher [20]. Bound constraints are treated as described in Gill, Murray and Wright [28]. The accuracy in the line search is critical for the performance of the quasi-Newton BFGS and DFP methods and for the CG methods. If the accuracy parameter ''Prob.LineParam.sigma'' is set to the default value 0.9, ''ucSolve'' changes it automatically according to:

{|
|''Prob.Solver.Alg''||''Prob.LineParam.sigma''
|-
|4,5 (DFP)||0.2
|-
|6,7,8 (CG)||0.01
|}
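As a hedged sketch, an unconstrained problem can be set up with ''conAssign'' (leaving the constraint arguments empty) and solved with a chosen algorithm; the m-file names below are hypothetical and the ''conAssign'' argument order should be checked against its m-file help:

 % Sketch: solve an unconstrained 2-variable problem with safeguarded
 % BFGS (Solver.Alg = 2). 'uc_f', 'uc_g', 'uc_H' are hypothetical m-files
 % computing f(x), g(x) and H(x).
 x_0 = [-1.2; 1];
 Prob = conAssign('uc_f', 'uc_g', 'uc_H', [], [], [], 'uc example', x_0);
 Prob.Solver.Alg = 2;    % safeguarded BFGS with inverse Hessian updates
 Result = ucSolve(Prob);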

'''M-files Used'''

''ResultDef.m'', ''LineSearch.m'', ''iniSolve.m'', ''tomSolve.m'', ''endSolve.m''

'''See Also'''

''clsSolve''

====Additional solvers====

Documentation for the following solvers is only available at http://tomopt.com and in the m-file help.

*''goalSolve'' - For sparse multi-objective goal attainment problems, with linear and nonlinear constraints.
*''Tlsqr'' - Solves large, sparse linear least squares problems, as well as unsymmetric linear systems.
*''lsei'' - For linearly constrained least squares problems with both equality and inequality constraints.
*''Tnnls'' - Also for linearly constrained least squares problems with both equality and inequality constraints.
*''qld'' - For convex quadratic programming problems.