TOMLAB Solver Reference
This page is part of the TOMLAB Manual. See TOMLAB Manual.
Detailed descriptions of the TOMLAB solvers, driver routines and some utilities are given in the following sections. Also see the M-file help for each solver. All solvers except for the TOMLAB Base Module are described in separate manuals.
For a description of solvers called using the MEX-file interface, see the M-file help, e.g. for the MINOS solver minosTL.m. For more details, see the User's Guide for the particular solver.
clsSolve
Solves dense and sparse nonlinear least squares optimization problems with linear inequality and equality constraints and simple bounds on the variables.
conSolve
Solve general constrained nonlinear optimization problems.
cutPlane
Solve mixed integer linear programming problems (MIP).
DualSolve
Solve linear programming problems when a dual feasible solution is available.
expSolve
Solve exponential fitting problems for a given number of terms p.
glbDirect
Solve box-bounded global optimization problems.
glbSolve
Solve box-bounded global optimization problems.
glcCluster
Solve general constrained mixed-integer global optimization problems using a hybrid algorithm.
glcDirect
Solve global mixed-integer nonlinear programming problems.
glcSolve
Solve general constrained mixed-integer global optimization problems.
infLinSolve
Finds a linearly constrained minimax solution of a function of several variables with the use of any suitable TOMLAB solver. The decision variables may be binary or integer.
infSolve
Find a constrained minimax solution with the use of any suitable TOMLAB solver.
linRatSolve
Finds a linearly constrained minimum of the ratio of two linear functions with the use of any suitable TOMLAB solver. Binary and integer variables are not supported.
lpSimplex
Solve general linear programming problems.
L1Solve
Find a constrained L1 solution of a function of several variables with the use of any suitable nonlinear TOMLAB solver.
MilpSolve
Solve mixed integer linear programming problems (MILP).
minlpSolve
Branch-and-bound algorithm for mixed-integer nonlinear programming (MINLP) with convex or nonconvex subproblems, using NLP relaxation (formulated as minlp-IP).
mipSolve
Solve mixed integer linear programming problems (MIP).
multiMin
multiMin solves general constrained mixed-integer global optimization problems. It tries to find all local minima by a multi-start method using a suitable nonlinear programming subsolver.
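The multi-start idea behind multiMin can be illustrated outside of MATLAB. The following is a minimal, hypothetical Python sketch (not TOMLAB code): draw random starting points in the box, run a local solver from each, and merge the distinct local minima found. A plain gradient descent stands in for the nonlinear programming subsolver.

```python
import random

def num_grad(f, x, h=1e-6):
    """Central-difference gradient for a function of one variable."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def local_descent(f, x, lr=0.01, g_tol=1e-7, max_iter=20000):
    """Plain gradient descent standing in for the NLP subsolver."""
    for _ in range(max_iter):
        g = num_grad(f, x)
        if abs(g) < g_tol:
            break
        x -= lr * g
    return x

def multi_start(f, x_lo, x_hi, n_starts=40, merge_tol=1e-3, seed=0):
    """Run the local solver from random points in [x_lo, x_hi] and
    keep only the distinct local minima found."""
    rng = random.Random(seed)
    minima = []
    for _ in range(n_starts):
        x_star = local_descent(f, rng.uniform(x_lo, x_hi))
        if not any(abs(x_star - m) < merge_tol for m in minima):
            minima.append(x_star)
    return sorted(minima)

# (x^2 - 1)^2 has two local minima, at x = -1 and x = +1;
# a multi-start run should recover both.
minima = multi_start(lambda x: (x * x - 1.0) ** 2, -2.0, 2.0)
```

The merge tolerance is what makes this a "find all local minima" heuristic rather than a single global search: solutions closer than merge_tol are treated as the same minimum.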
multiMINLP
multiMINLP solves general constrained mixed-integer global nonlinear optimization problems.
nlpSolve
Solve general constrained nonlinear optimization problems.
pdcoTL
pdcoTL solves linearly constrained convex nonlinear optimization problems.
pdscoTL
pdscoTL solves linearly constrained convex nonlinear optimization problems.
qpSolve
Solve general quadratic programming problems.
slsSolve
Find a Sparse Least Squares (sls) solution to a constrained least squares problem with the use of any suitable TOMLAB NLP solver.
sTrustr
Solve optimization problems constrained by a convex feasible region.
Tfmin
Minimize a function of one variable. Finds the minimum x in [x_L, x_U] for the function Func within tolerance xTol, using Brent's minimization algorithm. Reference: Forsythe, Malcolm, and Moler, Computer Methods for Mathematical Computations, Prentice-Hall, 1976.
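Brent's algorithm combines golden-section search with successive parabolic interpolation. As a language-agnostic illustration (a Python sketch, not the TOMLAB implementation), the golden-section component alone shows the bracketing idea: keep shrinking [x_lo, x_hi] while guaranteeing the minimum stays inside.

```python
import math

def golden_section_min(func, x_lo, x_hi, x_tol=1e-8):
    """Golden-section search for a unimodal func on [x_lo, x_hi]:
    shrink the bracket by the golden ratio until it is below x_tol."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, about 0.618
    a, b = x_lo, x_hi
    c = b - invphi * (b - a)               # interior points a < c < d < b
    d = a + invphi * (b - a)
    fc, fd = func(c), func(d)
    while (b - a) > x_tol:
        if fc < fd:                        # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = func(c)
        else:                              # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = func(d)
    return 0.5 * (a + b)

# Minimize (x - 2)^2 on [0, 5]; the minimizer is x = 2.
x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

Brent's full method accepts the parabolic-interpolation step only when it falls safely inside the bracket, falling back to the golden-section step above otherwise.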
Tfzero
Tfzero, the TOMLAB fzero routine, finds a zero of a function of one variable.
ucSolve
Purpose
Solve unconstrained nonlinear optimization problems with simple bounds on the variables.
ucSolve solves problems of the form

$$\min_{x} \; f(x) \quad \text{subject to} \quad x_L \leq x \leq x_U,$$

where $x, x_L, x_U \in \mathbb{R}^n$.
Calling Syntax
Result = ucSolve(Prob, varargin)
Description of Inputs
Prob | Problem description structure. The following fields are used:
x_L | Lower bounds on the variables.
x_U | Upper bounds on the variables.
x_0 | Starting point.
FUNCS.f | Name of m-file computing the objective function f(x).
FUNCS.g | Name of m-file computing the gradient vector g(x).
FUNCS.H | Name of m-file computing the Hessian matrix H(x).
f_Low | Lower bound on the function value.
Solver.Alg | Solver algorithm to be run:
 | 0: Default, either Newton or BFGS.
 | 1: Newton with subspace minimization, using SVD.
 | 2: Safeguarded BFGS with inverse Hessian updates (standard).
 | 3: Safeguarded BFGS with Hessian updates.
 | 4: Safeguarded DFP with inverse Hessian updates.
 | 5: Safeguarded DFP with Hessian updates.
 | 6: Fletcher-Reeves CG.
 | 7: Polak-Ribiere CG.
 | 8: Fletcher conjugate descent CG method.
Solver.Method | Method used to solve the equation system:
 | 0: SVD (default).
 | 1: LU decomposition.
 | 2: LU decomposition with pivoting.
 | 3: Matlab built-in QR.
 | 4: Matlab inversion.
 | 5: Explicit inverse.
Solver.Method | Restart or not for the CG methods:
 | 0: No restart.
 | 1: Use restart in the CG method every n:th step.
LineParam | Structure with line search parameters; see the routine LineSearch and Table 140.
optParam | Structure with special fields for optimization parameters; see Table 141.
 | Fields used are: eps_absf, eps_f, eps_g, eps_x, eps_Rank, MaxIter, wait, size_x, xTol, size_f, LineSearch, LineAlg, IterPrint and QN_InitMatrix.
PriLevOpt | Print level.
varargin | Other parameters sent directly to low level routines.
Description of Outputs
Result | Structure with result from optimization. The following fields are changed:
x_k | Optimal point.
f_k | Function value at the optimum.
g_k | Gradient value at the optimum.
H_k | Hessian value at the optimum.
B_k | Quasi-Newton approximation of the Hessian at the optimum.
v_k | Lagrange multipliers.
x_0 | Starting point.
f_0 | Function value at the start.
xState | State of each variable, described in Table 150.
Iter | Number of iterations.
ExitFlag | 0 if converged to a local minimum; otherwise an error occurred.
Inform | Binary code telling the type of convergence:
 | 1: Iteration points are close.
 | 2: Projected gradient small.
 | 4: Relative function value reduction low for LowIts iterations.
 | 101: Maximum number of iterations reached.
 | 102: Function value below given estimate.
 | 104: Convergence to a saddle point.
Solver | Solver used.
SolverAlgorithm | Solver algorithm used.
Prob | Problem structure used.
Description
The solver ucSolve includes several of the most popular search step methods for unconstrained optimization. The search step methods included in ucSolve are: the Newton method, the quasi-Newton BFGS and DFP methods, the Fletcher-Reeves and Polak-Ribiere conjugate-gradient methods, and the Fletcher conjugate descent method. The quasi-Newton methods may update either the inverse Hessian (standard) or the Hessian itself. The Newton method and the quasi-Newton methods updating the Hessian use a subspace minimization technique to handle rank problems, see Lindström \[53\]. The quasi-Newton algorithms also use safeguarding techniques to avoid rank problems in the updated matrix. The line search algorithm in the routine LineSearch is a modified version of an algorithm by Fletcher \[20\]. Bound constraints are treated as described in Gill, Murray and Wright \[28\]. The accuracy in the line search is critical for the performance of the quasi-Newton BFGS and DFP methods and for the CG methods. If the accuracy parameter Prob.LineParam.sigma is set to the default value 0.9, ucSolve changes it automatically according to:
Prob.Solver.Alg | Prob.LineParam.sigma |
4,5 (DFP) | 0.2 |
6,7,8 (CG) | 0.01 |
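The inverse-Hessian update used by the BFGS options can be sketched outside of MATLAB. The hypothetical Python fragment below (not TOMLAB code) applies the standard BFGS inverse update H ← (I − ρ s yᵀ) H (I − ρ y sᵀ) + ρ s sᵀ with ρ = 1/(sᵀy), including a simple curvature safeguard of the kind the text refers to:

```python
def mat_vec(M, v):
    """Matrix-vector product for plain nested lists."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def bfgs_inverse_update(H, s, y):
    """One BFGS update of the inverse Hessian approximation H, given the
    step s = x_new - x_old and gradient change y = g_new - g_old.
    The safeguard skips the update when s'y is not sufficiently positive."""
    n = len(s)
    sy = sum(si * yi for si, yi in zip(s, y))
    if sy <= 1e-12:  # safeguard: bad curvature, keep H unchanged
        return H
    rho = 1.0 / sy
    Hy = mat_vec(H, y)
    yHy = sum(y[i] * Hy[i] for i in range(n))
    # Expanded form of (I - rho*s*y') * H * (I - rho*y*s') + rho*s*s'
    return [[H[i][j]
             - rho * (s[i] * Hy[j] + Hy[i] * s[j])
             + rho * (1.0 + rho * yHy) * s[i] * s[j]
             for j in range(n)] for i in range(n)]

# Start from the identity and apply one update.
H0 = [[1.0, 0.0], [0.0, 1.0]]
s, y = [0.1, -0.2], [0.3, 0.1]
H1 = bfgs_inverse_update(H0, s, y)
# H1 satisfies the secant condition: H1 * y = s.
```

The secant condition H1·y = s is what makes the update a quasi-Newton method: the new inverse-Hessian approximation is consistent with the most recently observed gradient change.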
M-files Used
ResultDef.m, LineSearch.m, iniSolve.m, tomSolve.m, endSolve.m
See Also
clsSolve
Additional solvers
Documentation for the following solvers is only available at http://tomopt.com and in the m-file help.
- goalSolve - For sparse multi-objective goal attainment problems, with linear and nonlinear constraints.
- Tlsqr - Solves large, sparse linear least squares problems, as well as unsymmetric linear systems.
- lsei - For linearly constrained least squares problems with both equality and inequality constraints.
- Tnnls - Also for linearly constrained least squares problems with both equality and inequality constraints.
- qld - For convex quadratic programming problems.