TOMLAB: Difference between revisions

From TomWiki
==Introduction==

===What is TOMLAB?===

TOMLAB is a general purpose development, modeling and optimal control environment in Matlab for research, teaching and practical solution of optimization problems.

TOMLAB has grown out of the need for advanced, robust and reliable tools to be used in the development of algorithms and software for the solution of many different types of applied optimization problems.

There are many good tools available in the area of numerical analysis, operations research and optimization, but because of the different languages and systems, as well as a lack of standardization, it is a time-consuming and complicated task to use these tools. Often one has to rewrite the problem formulation, rewrite the function specifications, or make some new interface routine to make everything work. Therefore the first obvious and basic design principle in TOMLAB is: ''Define your problem once, run all available solvers''. The system takes care of all interface problems, whether between languages or due to different demands on the problem specification.

In the process of optimization one sometimes wants to graphically view the problem and the solution process, especially for ill-conditioned nonlinear problems. Sometimes it is not clear which solver is best for the particular type of problem, and tests on different solvers can be of use. In teaching one wants to view the details of the algorithms and look more carefully at the different algorithmic steps. When developing new algorithms, tests on thousands of problems are necessary to fully assess the pros and cons of the new algorithm. One might want to solve a practical problem very many times, with slightly different conditions for the run, or solve a control problem looping in real-time, solving the optimization problem in each time slot.

All these issues and many more are addressed with the TOMLAB optimization environment. TOMLAB gives easy access to a large set of standard test problems, optimization solvers and utilities.

===The Organization of This Guide===

'''[[User's Guide Section 2|Overall Design]]''' presents the general design of TOMLAB.

'''[[User's Guide Section 3|Problem Types and Solver Routines]]''' contains strict mathematical definitions of the optimization problem types. All solver routines available in TOMLAB are described.

'''[[User's Guide Section 4|Defining Problems in TOMLAB]]''' describes the input format and modeling environment. The functionality of the modeling engine TomSym is discussed in 4.3 and also in Appendix C.

'''[[User's Guide Section 5|Section 5]], [[User's Guide Section 6|Section 6]], [[User's Guide Section 7|Section 7]] and [[User's Guide Section 8|Section 8]]''' contain examples of the process of defining problems and solving them. All test examples are available as part of the TOMLAB distribution.

'''[[User's Guide Section 9|Section 9]]''' shows how to set up and define multi-layer optimization problems in TOMLAB.

'''[[User's Guide Section 10|Section 10]]''' contains detailed descriptions of many of the functions in TOMLAB. The TOM solvers, originally developed by the Applied Optimization and Modeling (TOM) group, are described together with the TOMLAB driver routine and utility functions. Other solvers, like the Stanford Optimization Laboratory (SOL) solvers, are not described, but documentation is available for each solver.

'''[[User's Guide Section 12|Section 12]]''' describes the utility functions that can be used, for example ''tomRun'' and ''SolverList''.

'''[[User's Guide Section 13|Section 13]]''' introduces the different options for derivatives and automatic differentiation.

'''[[User's Guide Section 14|Section 14]]''' discusses a number of special system features such as partially separable functions and user-supplied parameter information for the function computations.

'''Appendix A''' contains tables describing all elements defined in the problem structure. Some subfields are either empty, or filled with information if the particular type of optimization problem is defined. To be able to set different parameter options for the optimization solution, and change problem dependent information, the user should consult the tables in this Appendix.

'''Appendix F''' gives some motivation for the development of TOMLAB.
===Further  Reading===
TOMLAB  has been discussed in several papers and at several conferences. The main paper on TOMLAB  v1.0 is \[42\].
The use of TOMLAB  for nonlinear programming and parameter estimation is presented in \[45\], and the use of linear and discrete optimization is discussed in \[46\]. Global optimization routines are also implemented,  one is described in \[8\].
In all these papers TOMLAB was divided into two toolboxes, the NLPLIB TB and the OPERA TB. TOMLAB v2.0 was discussed in \[43\], \[40\] and \[41\]. TOMLAB v4.0 and how to solve practical optimization problems with TOMLAB is discussed in \[44\].
The use of TOMLAB for costly global optimization with industrial applications is discussed in \[9\]; costly global optimization with financial applications in \[37, 38, 39\]. Applications of global optimization for robust control are presented in \[25, 26\]. The use of TOMLAB for exponential fitting and nonlinear parameter estimation is discussed in e.g. \[49, 4, 22, 23, 47, 48\].
The manuals for the add-on solver packages are also recommended reading material.
==Overall  Design==
The scope of TOMLAB is large and broad, and therefore there is a need for a well-designed system. It is also natural to use the power of the Matlab language, to make the system flexible and easy to use and maintain. The system makes use of Matlab structure arrays, and of the ability in Matlab to execute code defined as string expressions and to execute functions specified by a string.
===Structure Input  and Output===
Normally, when solving an optimization problem, a direct call to a solver is made with a long list of parameters in the call.  The parameter list is solver-dependent and makes it difficult  to make additions and changes to the system.
TOMLAB solves the problem in two steps. First the problem is defined and stored in a Matlab structure. Then the solver is called with a single argument, the problem structure. Solvers that were not originally developed for the TOMLAB environment need the usual long list of parameters. This is handled by the driver routine ''tomRun.m'', which can call any available solver, hiding the details of the call from the user. The solver output is collected in a standardized result structure and returned to the user.
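A minimal sketch of this two-step principle follows. The user function names ''myf'' and ''myg'' are hypothetical placeholders, and the solver names are just examples; any solver listed in Section 3 could be substituted in the ''tomRun'' call.

<pre>
% Step 1: define the problem once, stored in a Matlab structure
% (conAssign is described in Section 2.4; empty arguments give defaults).
Prob = conAssign('myf', 'myg', [], [], x_L, x_U, 'MyProblem', x_0);

% Step 2: run any available solver on the same structure.
Result1 = tomRun('ucSolve', Prob);
Result2 = tomRun('snopt',  Prob);
</pre>

Both calls return the standardized ''Result'' structure, so switching solvers requires no change to the problem definition.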
===Introduction  to Solver and Problem Types===
TOMLAB  solves a number of different types of optimization problems. The currently defined types are listed in Table 1.
The global variable ''probType ''contains the current type of optimization problem to be solved. An optimization solver is defined to be of type ''solvType'', where ''solvType ''is any of the ''probType ''entries in Table 1. It is clear that a solver of a certain ''solvType ''is able to solve a problem defined to be of another type.  For example, a constrained nonlinear programming solver should  be able to solve unconstrained problems, linear and quadratic programs and constrained nonlinear least squares problems. In the graphical user interface and menu system an additional variable ''optType ''is defined to keep track of what type of problem is defined as the main subject.  As an example, the user may select the type of optimization to be quadratic programming (''optType ''== 2), then select a particular problem that is a linear programming problem (''probType ''== 8) and then as the solver choose a constrained NLP solver like MINOS (''solvType ''== 3).
{|
|+Table 1: The different types of optimization problems defined in TOMLAB.
|-
|'''probType'''<hr />||'''Number'''<hr />||'''Type of optimization problem'''<hr />
|-
|'''uc'''||1||Unconstrained optimization (incl. bound constraints).
|-
|'''qp'''||2||Quadratic programming.
|-
|'''con'''||3||Constrained nonlinear optimization.
|-
|'''ls'''||4||Nonlinear least squares problems (incl. bound constraints).
|-
|'''lls'''||5||Linear least squares problems.
|-
|'''cls'''||6||Constrained nonlinear least squares problems.
|-
|'''mip'''||7||Mixed-Integer programming.
|-
|'''lp'''||8||Linear programming.
|-
|'''glb'''||9||Box-bounded global optimization.
|-
|'''glc'''||10||Global mixed-integer nonlinear programming.
|-
|'''miqp'''||11||Constrained mixed-integer quadratic programming.
|-
|'''minlp'''||12||Constrained mixed-integer nonlinear optimization.
|-
|'''lmi'''||13||Semi-definite programming with Linear Matrix  Inequalities.
|-
|'''bmi'''||14||Semi-definite programming with Bilinear Matrix  Inequalities.
|-
|'''exp'''||15||Exponential fitting  problems.
|-
|'''nts'''||16||Nonlinear Time Series.
|-
|'''lcp'''||22||Linear Mixed-Complementarity Problems.
|-
|'''mcp'''||23||Nonlinear Mixed-Complementarity Problems.
|-
|}
Please note that since the actual numbers used for ''probType ''may change in future releases, it is recommended to use the text abbreviations. See help for ''checkType ''for further information.
Define ''probSet ''to be a set of defined optimization  problems belonging to a certain class of problems of type ''probType''. Each ''probSet ''is physically stored in one file, an ''Init  File''. In Table 2 the currently defined problem sets are listed, and new ''probSet ''sets are easily added.
{|
|+Table 2: Defined test problem sets in TOMLAB. '''probSets''' marked with ''*'' are not part of the standard distribution.
|-
|'''probSet'''<hr />||'''probType'''<hr />||'''Description  of test problem set'''<hr />
|-
|'''uc'''||1||Unconstrained test problems.
|-
|'''qp'''||2||Quadratic programming test problems.
|-
|'''con'''||3||Constrained test problems.
|-
|'''ls'''||4||Nonlinear least squares test problems.
|-
|'''lls'''||5||Linear least squares problems.
|-
|'''cls'''||6||Linear constrained nonlinear least squares problems.
|-
|'''mip'''||7||Mixed-integer programming problems.
|-
|'''lp'''||8||Linear programming problems.
|-
|'''glb'''||9||Box-bounded global optimization test problems.
|-
|'''glc'''||10||Global MINLP  test problems.
|-
|'''miqp'''||11||Constrained mixed-integer quadratic problems.
|-
|'''minlp'''||12||Constrained mixed-integer nonlinear problems.
|-
|'''lmi'''||13||Semi-definite programming with Linear Matrix  Inequalities.
|-
|'''bmi'''||14||Semi-definite programming with Bilinear Matrix  Inequalities.
|-
|'''exp'''||15||Exponential fitting  problems.
|-
|'''nts'''||16||Nonlinear time series problems.
|-
|'''lcp'''||22||Linear mixed-complementarity problems.
|-
|'''mcp'''||23||Nonlinear mixed-complementarity problems.
|-
|'''mgh'''||4||More, Garbow, Hillstrom nonlinear least squares problems.
|-
|'''chs'''||3||Hock-Schittkowski constrained test problems.
|-
|'''uhs'''||1||Hock-Schittkowski unconstrained test problems.
|-
|}
The names of the predefined Init Files that do the problem setup, and the corresponding low level routines, are constructed in two parts: the first part is the abbreviation of the relevant ''probSet'' (see Table 2), and the second part denotes the computed task, shown in Table 3. The user normally does not have to define the more complicated functions ''o_d2c'' and ''o_d2r''. It is, however, recommended to supply this information when using solvers which can utilize second order information, such as TOMLAB /KNITRO and TOMLAB /CONOPT.
{|
|+Table 3: Names for predefined Init  Files and low level m-files in TOMLAB.
|-
|'''Generic name'''<hr/>||'''Purpose''' (''o'' is any ''probSet'', e.g. ''o''='''con''')<hr/>
|-
|''o_''prob||Init File that either defines a string matrix with the problem names or the ''Prob'' structure for a selected problem.
|-
|''o_''f||Compute objective function ''f ''(''x'').
|-
|''o_''g||Compute the gradient vector ''g''(''x'').
|-
|''o_''H||Compute the Hessian matrix ''H ''(''x'').
|-
|''o_''c||Compute the vector of constraint functions ''c''(''x'').
|-
|''o_''dc||Compute the matrix of constraint normals, <math>$\partial c(x)/dx$</math>.
|-
|''o_''d2c||Compute the 2nd part of the 2nd derivative matrix of the Lagrangian function, <math>$\sum_{i} \lambda_{i}\, \partial^2 c_{i}(x)/dx^2$</math>.
|-
|''o_''r||Compute the residual vector ''r''(''x'').
|-
|''o_''J ||Compute the Jacobian matrix ''J ''(''x'').
|-
|''o_''d2r||Compute the 2nd part of the Hessian matrix, <math>$\sum_{i} r_{i}(x)\, \partial^2 r_{i}(x)/dx^2$</math>.
|-
|}
The Init File has two modes of operation; either return a string matrix with the names of the problems in the ''probSet'' or a structure with all information about the selected problem. All fields in the structure, named ''Prob'', are presented in tables in Appendix A. Using a structure makes it easy to add new items without too many changes in the rest of the system. For further discussion about the definition of optimization problems in TOMLAB, see Section 4.
There are default values for everything that is possible to set defaults for, and all routines are written  in a way that makes it possible for the user to just set an input argument empty and get the default.
===The Process of Solving Optimization Problems===
A flow-chart of the process of optimization in TOMLAB is shown in Figure 1. It is inefficient to use an interactive system. This is symbolized with the ''Standard User'' box in the figure, which directly runs the ''Optimization Driver'', ''tomRun''. The direct solver call is possible for all TOMLAB solvers, if the user has executed ''ProbCheck'' prior to the call. See Section 3 for a list of the TOMLAB solvers.
Depending on the type of problem, the user needs to supply the ''low-level  ''routines that calculate the objective function, constraint functions for constrained problems, and also if possible, derivatives.  To simplify this coding process so that the work has to be performed only once, TOMLAB  provides ''gateway ''routines that ensure that any solver can obtain the values in the correct format.
For example, when working with a least squares problem, it is natural to code the function that computes the vector of residual functions <math>$r_i(x_1, x_2, \ldots)$</math>, since a dedicated least squares solver probably operates on the residual, while a general nonlinear solver needs a scalar function, in this case <math>$f(x) = \frac{1}{2} r^T(x) r(x)$</math>. Such issues are automatically handled by the gateway functions.
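For illustration, a user residual routine might look as follows. The function name ''my_r'', the model and the data fields are hypothetical, but the calling convention (the point ''x'' and the ''Prob'' structure as arguments) follows the low level routines described below; the gateway functions then form <math>$f(x) = \frac{1}{2} r^T(x) r(x)$</math> automatically when a general NLP solver is used.

<pre>
function r = my_r(x, Prob)
% Residual vector r_i(x) = model(x, t_i) - y_i, one element per data point.
% The data series t, y are assumed stored in Prob.user (see Section 2.4).
t = Prob.user.t;
y = Prob.user.y;
r = x(1)*exp(x(2)*t) - y;
</pre>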
===Low Level Routines and Gateway Routines===
''Low level routines ''are the routines that compute:
*The objective function value
*The gradient vector
*The Hessian matrix (second derivative matrix)
*The residual vector (for nonlinear least squares problems)
*The Jacobian matrix (for nonlinear least squares problems)
*The vector of constraint functions
*The matrix of constraint normals (the constraint Jacobian)
*The second part of the second derivative of the Lagrangian function.

The last three routines are only needed for constrained problems.
The names of these routines are defined in the structure fields ''Prob.FUNCS.f'', ''Prob.FUNCS.g'', ''Prob.FUNCS.H ''etc.
It is the task of the ''Assign'' routine to set the names of the low level m-files. This is done, for example, by a call to the routine ''conAssign'' with the names as arguments. There are ''Assign'' routines for all problem types handled by TOMLAB. As an example, see 'help conAssign' in MATLAB.
<pre>
Prob = conAssign('f', 'g', 'H', HessPattern, x_L,  x_U, Name,x_0,...
pSepFunc, fLowBnd, A,  b_L,  b_U, 'c', 'dc', 'd2c', ConsPattern,...
c_L,  c_U, x_min,  x_max, f_opt, x_opt);
</pre>
Only the low level routines relevant for a certain type of optimization problem need to be coded. There are dummy routines for the others. Numerical differentiation is automatically used for the gradient, Jacobian and constraint gradient if the corresponding user routine is not present or left out when calling ''conAssign''. However, the solver always needs more time to estimate the derivatives than if the user supplies code for them. The numerical accuracy is also lower for estimated derivatives, so it is recommended that the user always tries to code the derivatives, if possible. Another option is automatic differentiation with TOMLAB /MAD.
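As a sketch of this behavior, derivative routines can simply be left empty in the ''conAssign'' call (the objective file name ''myf'' is a hypothetical placeholder); numerical differentiation is then used automatically:

<pre>
% Gradient and Hessian arguments left empty: the defaults apply and
% derivatives are estimated numerically during the solver run.
Prob   = conAssign('myf', [], [], [], x_L, x_U, 'NoDerivExample', x_0);
Result = tomRun('ucSolve', Prob);
</pre>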
TOMLAB uses gateway routines (''nlp_f'', ''nlp_g'', ''nlp_H'', ''nlp_c'', ''nlp_dc'', ''nlp_d2c'', ''nlp_r'', ''nlp_J'', ''nlp_d2r''). These routines extract the search directions and line search steps, count iterations, handle separable functions, keep track of the kind of differentiation wanted, etc. They also handle the separable NLLS case and NLLS weighting. By the use of global variables, unnecessary evaluations of the user supplied routines are avoided.
To get a picture of how the low-level routines are used in the system, consider Figures 2 and 3. Figure 2 illustrates the chain of calls when computing the objective function value in ''ucSolve'' for a nonlinear least squares problem defined in ''mgh_prob'', ''mgh_r'' and ''mgh_J''. Figure 3 illustrates the chain of calls when computing the numerical approximation of the gradient (by use of the routine ''fdng'') in ''ucSolve'' for an unconstrained problem defined in ''uc_prob'' and ''uc_f''. Information about a problem is stored in the structure variable ''Prob'', described in detail in the tables in Appendix A. This variable is an argument to all low level routines.
<pre>
ucSolve <==> nlp_f <==> ls_f <==> nlp_r <==> mgh_r
</pre>
Figure 2: The chain of calls when computing the objective function value in ''ucSolve'' for a nonlinear least squares problem defined in ''mgh_prob'', ''mgh_r'' and ''mgh_J''.
<pre>
ucSolve <==> nlp_g <==> fdng <==> nlp_r <==> uc_f
</pre>
Figure 3: The chain of calls when computing the numerical approximation of the gradient in ''ucSolve'' for an unconstrained problem defined in ''uc_prob'' and ''uc_f''.

In the field element ''Prob.user'', problem specific information needed to evaluate the low level routines is stored. This field is most often used if problem related questions are asked when generating the problem. It is often the case that the user wants to supply the low-level routines with additional information besides the variables ''x'' that are optimized. Any unused fields could be defined in the structure ''Prob'' for that purpose. To avoid potential conflicts with future TOMLAB releases, it is recommended to use subfields of ''Prob.user''. If the user wants to send some variables a, B and C, then, after creating the ''Prob'' structure, these extra variables are added to the structure:
<pre>
Prob.user.a=a;
Prob.user.B=B;
Prob.user.C=C;
</pre>
Then, because the ''Prob ''structure is sent to all low-level routines, in any of these routines the variables are picked out from the structure:
<pre>
a = Prob.user.a;
B = Prob.user.B;
C = Prob.user.C;
</pre>
A more detailed description of how to define new problems is given in sections 5, 6 and 8. The usage of ''Prob.user'' is described in Section 14.2.
Different solvers all have different demands on how information should be supplied, i.e. the function to optimize, the gradient vector, the Hessian matrix etc. To be able to code the problem only once, and then use this formulation to run all types of solvers, interface routines that return information in the format needed for the relevant solver were developed.
Table 4 describes the low level test functions and the corresponding Init  File routine needed for the predefined constrained optimization  ('''con''') problems.  For the predefined unconstrained optimization  ('''uc''')  problems, the global optimization ('''glb, glc''') problems and the quadratic programming problems ('''qp''') similar routines have been defined.
To conclude, the system design is flexible and easy to expand in many different ways.
{|
|+Table 4: Generally constrained nonlinear ('''con''') test problems.
|-
|'''Function'''<hr />||'''Description'''<hr/>
|-
|''con_prob''||Init File. Does the initialization  of the '''con '''test problems.
|-
|''con_f''||Compute the objective function ''f ''(''x'') for '''con ''' test problems.
|-
|''con_g''||Compute the gradient ''g''(''x'') for '''con ''' test problems.
|-
|''con_H''||Compute the Hessian matrix ''H ''(''x'') of ''f ''(''x'') for '''con '''test problems.
|-
|''con_c''||Compute the constraint residuals ''c''(''x'') for '''con '''test problems.
|-
|''con_dc''||Compute the derivative of the constraint residuals for '''con '''test problems.
|-
|''con_d2c''||Compute the 2nd part of the 2nd derivative matrix of the Lagrangian function, <math>$\sum_{i} \lambda_{i}\, \partial^2 c_{i}(x)/dx^2$</math>, for '''con ''' test problems.
|-
|''con_fm''||Compute the merit function value at the current iterate ''x_k''.
|-
|''con_gm''||Compute the gradient of the merit function at ''x_k''.
|}
==Problem Types and Solver Routines==
Section 3.1 defines all problem types in TOMLAB. Each problem definition is accompanied by brief suggestions on suitable solvers. This is followed in Section 3.2 by a complete list of the available solver routines in TOMLAB and the various available extensions,  such as /SOL and /CGO.
===Problem Types Defined in TOMLAB===
The '''unconstrained optimization '''('''uc''') problem is defined as
<math>
\label{eq:uc}
\begin{array}{ll}
\min\limits_{x} & f(x) \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x & \leq & x_{U}, \\
\end{array}
\end{array}
</math>
where <math>$x, x_L, x_U \in \MATHSET{R}^n$</math> and <math>$f(x) \in \MATHSET{R}$</math> . For unbounded variables, the corresponding elements of <math>$x_L,x_U$</math> may be set to <math>$\pm \infty$</math>.
Recommended solvers: '''TOMLAB /KNITRO and TOMLAB /SNOPT'''.
The TOMLAB Base Module routine ''ucSolve'' includes several of the most popular search step methods for unconstrained optimization. Bound constraints are treated as described in Gill et al. \[28\]. The search step methods for unconstrained optimization included in ''ucSolve'' are: the Newton method, the quasi-Newton BFGS and DFP methods, the Fletcher-Reeves and Polak-Ribiere conjugate-gradient methods, and the Fletcher conjugate descent method. The quasi-Newton methods may either update the inverse Hessian (standard) or the Hessian itself. The Newton method and the quasi-Newton methods updating the Hessian use a subspace minimization technique to handle rank problems, see Lindström \[53\]. The quasi-Newton algorithms also use safeguarding techniques to avoid rank problems in the updated matrix.
Another TOMLAB  Base Module solver suitable for unconstrained problems is the structural trust region algorithm ''sTrustr'', combined with an initial  trust region radius algorithm.  The code is based on the algorithms in \[13\] and \[67\], and treats partially  separable functions. Safeguarded BFGS or DFP are used for the quasi-Newton update, if the analytical Hessian is not used. The set of constrained nonlinear solvers could also be used for unconstrained problems.
The '''quadratic  programming  '''('''qp''') problem is defined as
<math>
\label{eq:qp}
\begin{array}{ll}
\min\limits_{x} & f(x) = \frac{1}{2} x^T F x + c^T x \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x    & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U} \\
\end{array}
\end{array}
</math>
where <math>$c, x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$F \in \MATHSET{R}^{n \times n}$</math>, <math>$A \in \MATHSET{R}^{m_1 \times n}$</math>, and <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math>.
Recommended solvers: '''TOMLAB /KNITRO,  TOMLAB /SNOPT and TOMLAB /MINLP'''.
A positive semidefinite ''F ''-matrix gives a convex QP, otherwise the problem is nonconvex. Nonconvex quadratic programs are solved with a standard active-set method \[54\], implemented in the TOM routine ''qpSolve''. ''qpSolve ''explicitly  treats both inequality and equality constraints, as well as lower and upper bounds on the variables (simple bounds). It converges to a local minimum for indefinite quadratic programs.
In TOMLAB ''MINOS ''in the general or the QP-version (''QP-MINOS''),  or the dense QP solver ''QPOPT ''may be used. In the TOMLAB  /SOL extension the ''SQOPT ''solver is suitable for both dense and large, sparse convex QP and ''SNOPT ''works fine for dense or sparse nonconvex QP.
For very large-scale problems, an interior point solver is recommended,  such as TOMLAB  /KNITRO or TOMLAB /BARNLP.
TOMLAB  /CPLEX, TOMLAB  /Xpress and TOMLAB  /XA should always be considered  for large-scale QP problems.
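A small convex QP can be set up and solved as sketched below. This assumes the ''qpAssign'' problem-definition routine with the argument order (F, c, A, b_L, b_U, x_L, x_U, x_0, Name); consult 'help qpAssign' in MATLAB for the exact signature, and the problem data here are purely illustrative.

<pre>
% min 0.5*x'*F*x + c'*x   s/t   x >= 0,  x1 + x2 <= 1
F   = [2 0; 0 2];   c   = [-2; -2];
A   = [1 1];        b_L = -inf;        b_U = 1;
x_L = [0; 0];       x_U = [inf; inf];  x_0 = [0; 0];

Prob   = qpAssign(F, c, A, b_L, b_U, x_L, x_U, x_0, 'SmallQP');
Result = tomRun('qpSolve', Prob);
</pre>

Since ''F'' is positive definite the QP is convex, so ''qpSolve'' returns a global minimum; any of the recommended QP solvers could replace ''qpSolve'' in the ''tomRun'' call.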
The '''constrained nonlinear optimization '''problem ('''con''') is defined as
<math>
\label{eq:con}
\begin{array}{ll}
\min\limits_{x} & f(x) \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x    & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U} \\
c_{L} & \leq  & c(x) & \leq & c_{U} \\
\end{array}
\end{array}
</math>
Recommended solvers: '''TOMLAB /SNOPT, TOMLAB /NPSOL and TOMLAB /KNITRO'''.
For general constrained nonlinear optimization a sequential quadratic programming (SQP) method by Schittkowski \[69\] is implemented in the TOMLAB  Base Module solver ''conSolve''.  ''conSolve ''also includes an implementation of the Han-Powell SQP method. There are also a TOMLAB  Base Module routine ''nlpSolve ''implementing the Filter SQP by Fletcher and Leyffer presented in \[21\].
Another constrained solver in TOMLAB  is the structural trust region algorithm ''sTrustr'', described above. Currently, ''sTrustr ''only solves problems where the feasible region, defined by the constraints, is convex. TOMLAB /MINOS  solves constrained nonlinear programs. The TOMLAB  /SOL extension gives an additional set of general solvers for dense or sparse problems.
''sTrustr'', ''pdco ''and ''pdsco ''in TOMLAB  Base Module handle nonlinear problems with ''linear  ''constraints only.
There are many other options for large-scale nonlinear optimization to consider in TOMLAB: TOMLAB /CONOPT, TOMLAB /BARNLP, TOMLAB /MINLP, TOMLAB /NLPQL and TOMLAB /SPRNLP.
The '''box-bounded global optimization '''('''glb''') problem is defined as
<math>
\label{eq:glb}
\begin{array}{ll}
\min\limits_{x} & f(x) \\
&  \\
s/t & \begin{array}{llcccll}
-\infty < &x_{L} & \leq  & x & \leq & x_{U}& < \infty , \\
\end{array}
\end{array}
</math>
where <math>$x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$f(x) \in \MATHSET{R}$</math>, i.e. problems of the form \ref{eq:uc} that have finite simple bounds on all variables.
Recommended solvers: '''TOMLAB /LGO and TOMLAB /CGO with  TOMLAB /SOL'''.
The TOM solver ''glbSolve ''implements the DIRECT algorithm \[14\], which is a modification of the standard Lipschitzian approach that eliminates the need to specify a Lipschitz constant. In ''glbSolve ''no derivative information is used. For global optimization problems with expensive function evaluations the TOMLAB  /CGO  routine ''ego ''implements the Efficient Global Optimization (EGO) algorithm \[16\]. The idea of the EGO algorithm is to first fit a response surface to data collected by evaluating the objective function at a few points.  Then, EGO balances between finding the minimum of the surface and improving the approximation by sampling where the prediction error may be high.
The '''global mixed-integer nonlinear programming  '''('''glc''') problem is defined as
<math>
\label{eq:glc}
\begin{array}{ll}
\min\limits_{x} & f(x) \\
&  \\
s/t & \begin{array}{llcccll}
-\infty < &x_{L} & \leq  & x & \leq & x_{U}& < \infty  \\
&b_{L} & \leq  & A x  & \leq & b_{U}& \\
&c_{L} & \leq  & c(x) & \leq & c_{U},& x_{j} \in \MATHSET{N}\ ~~\forall j \in I
\end{array}
\end{array}
</math>
where <math>$x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$f(x) \in \MATHSET{R}$</math>, <math>$A \in \MATHSET{R}^{m_1 \times n}$</math>, <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math> and <math>$c_L,c(x),c_U \in \MATHSET{R}^{m_2}$</math>.
The variables <math>$x \in I$</math>, the index subset of <math>$1,...,n$</math>, are restricted to be integers.
Recommended solvers: '''TOMLAB /OQNLP'''.
The TOMLAB  Base Module solver ''glcSolve ''implements an extended version of the DIRECT algorithm \[52\], that handles problems with both nonlinear and integer constraints.
The '''linear programming  '''('''lp''') problem is defined as
<math>
\label{eq:LP}
\begin{array}{ll}
\min\limits_{x} & f(x) = c^T x \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x    & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U} \\
\end{array}
\end{array}
</math>
where <math>$c, x, x_L, x_U \in \MATHSET{R}^n$</math>,
<math>$A \in \MATHSET{R}^{m_1 \times n}$</math>, and <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math>.
Recommended solvers: '''TOMLAB /CPLEX, TOMLAB /Xpress  and TOMLAB /XA'''.
The TOMLAB  Base Module solver ''lpSimplex ''implements a simplex algorithm for '''lp '''problems.
When a dual feasible point is known in (6) it is efficient to use the dual simplex algorithm implemented in the TOMLAB  Base Module solver ''DualSolve''. In TOMLAB  /MINOS  the LP interface to ''MINOS'', called ''LP-MINOS ''is efficient for solving large, sparse LP problems. Dense problems are solved by ''LPOPT''.  The TOMLAB  /SOL extension gives the additional possibility of using ''SQOPT ''to solve large, sparse LP.
The recommended solvers normally outperform all other solvers.
The '''mixed-integer  programming  problem '''('''mip''')  is defined as
<math>
\label{eq:mip}
\begin{array}{ll}
\min\limits_{x} & f(x) = c^T x \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x    & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U}, ~x_{j} \in \MATHSET{N}\ ~~\forall j \in I  \\
\end{array}
\end{array}
</math>
where <math>$c, x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$A \in \MATHSET{R}^{m_1 \times n}$</math>, and <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math>. The variables <math>$x \in I$</math>, the index subset of <math>$1,...,n$</math>, are restricted to be integers. Equality constraints are defined by setting the lower bound equal to the upper bound, i.e. for constraint <math>$i$</math>: <math>$b_{L}(i) = b_{U}(i)$</math>.
Recommended solvers: '''TOMLAB /CPLEX, TOMLAB /Xpress  and TOMLAB /XA'''.
Mixed-integer programs can be solved using the TOMLAB  Base Module routine ''mipSolve ''that  implements a standard branch-and-bound algorithm, see Nemhauser  and Wolsey \[58, chap. 8\].  When dual feasible solutions are available, mipSolve  is using the TOMLAB  dual simplex solver DualSolve  to solve subproblems, which is significantly faster than using an ordinary linear programming solver, like the TOMLAB  lpSimplex. ''mipSolve ''also implements user defined priorities for variable selection, and different tree search strategies. For 0/1 - knapsack problems a round-down primal heuristic is included. Another TOMLAB  Base Module solver is the cutting plane routine ''cutplane'', using Gomory cuts.  It is recommended to use ''mipSolve ''with the LP version of ''MINOS ''with warm starts for the subproblems, giving great speed improvement. The TOMLAB  /Xpress extension  gives access to the state-of-the-art LP, QP, MILP  and MIQP solver Xpress-MP. For many MIP problems, it is necessary to use such a powerful solver, if the solution should be obtained in any reasonable time frame. TOMLAB /CPLEX is equally powerful as TOMLAB  /Xpress and also includes a network solver.
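A '''mip''' problem is set up in the same way as an '''lp''' problem, with an additional index set of integer variables. The sketch below is illustrative; the position of the ''IntVars'' argument in ''mipAssign'' is abbreviated and should be checked against the reference documentation:

<pre>
% Illustrative MILP: same LP data, but both variables integer valued
c   = [-7; -5];
A   = [1 2; 4 1];
b_L = [-inf; -inf];   b_U = [6; 12];
x_L = [0; 0];         x_U = [10; 10];
IntVars = [1 2];      % index set I: x(1) and x(2) are integers

Prob   = mipAssign(c, A, b_L, b_U, x_L, x_U, [], 'mip example', ...
                   [], [], IntVars);
Result = tomRun('mipSolve', Prob, 1);
</pre>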
The '''linear least squares '''('''lls''') problem is defined as
<math>
\label{eq:lls}
\begin{array}{ll}
\min\limits_{x} & f(x) = \frac{1}{2} || C x - d || \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x  & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U} \\
\end{array}
\end{array}
</math>
where <math>$x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$d \in \MATHSET{R}^M$</math>,
<math>$C \in \MATHSET{R}^{M \times n}$</math>,
<math>$A \in \MATHSET{R}^{m_1 \times n}$</math> and <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math>.
Recommended solvers: '''TOMLAB /LSSOL'''.
''Tlsqr'' solves unconstrained sparse '''lls''' problems. ''lsei'' solves the general dense problems. ''Tnnls'' is a fast and robust replacement for the Matlab nnls. The general least squares solver ''clsSolve'' may also be used. In the TOMLAB  /NPSOL or TOMLAB  /SOL extension the ''LSSOL'' solver is suitable for dense linear least squares problems.
The '''constrained nonlinear least squares '''('''cls''') problem is defined as
<math>
\label{eq:cls}
\begin{array}{ll}
\min\limits_{x} & f(x) = \frac{1}{2} r(x)^T r(x) \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x  & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U}, \\
c_{L} & \leq  & c(x) & \leq & c_{U} \\
\end{array}
\end{array}
</math>
where <math>$x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$r(x) \in \MATHSET{R}^M$</math>,
<math>$A \in \MATHSET{R}^{m_1 \times n}$</math>, <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math>
and <math>$c_L,c(x),c_U \in \MATHSET{R}^{m_2}$</math>.
Recommended solvers: '''TOMLAB /NLSSOL'''.
The TOMLAB  Base Module nonlinear least squares solver ''clsSolve'' treats problems with bound constraints in a similar way to the routine ''ucSolve''. This strategy is combined with an active-set strategy to handle linear equality and inequality constraints \[7\].
''clsSolve ''includes seven optimization methods for nonlinear  least squares problems, among them: the Gauss-Newton method, the Al-Baali-Fletcher \[2\] and the Fletcher-Xu \[19\] hybrid method, and the Huschens TSSM method \[50\]. If rank problems occur, the solver uses subspace minimization.  The line search algorithm used is the same as for unconstrained problems.
Another fast and robust solver is ''NLSSOL'', available in the TOMLAB  /NPSOL or the TOMLAB  /SOL extension toolbox.
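The following sketch sets up a small unconstrained nonlinear least squares problem with ''clsAssign'' (see Table 14) and solves it with ''clsSolve''. The residual and Jacobian are illustrative, and the ''clsAssign'' argument order is abbreviated; check it against the reference documentation:

<pre>
% Rosenbrock in least squares form: r(x) = [x(1)-1; 10*(x(2)-x(1)^2)]
r = @(x, Prob) [x(1)-1; 10*(x(2)-x(1)^2)];
J = @(x, Prob) [1 0; -20*x(1) 10];    % Jacobian of r

x_L = [-10; -10];  x_U = [10; 10];  x_0 = [-1.2; 1];
Prob   = clsAssign(r, J, [], x_L, x_U, 'cls example', x_0);
Result = tomRun('clsSolve', Prob, 1);
</pre>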
One important utility routine is the TOMLAB  Base Module line search algorithm ''LineSearch'', used by the solvers ''conSolve'', ''clsSolve'' and ''ucSolve''. It implements a modified version of an algorithm by Fletcher \[20, chap. 2\]. The line search algorithm uses quadratic and cubic interpolation, see Section 12.10. Another TOMLAB  Base Module routine, ''preSolve'', runs a presolve analysis on a system of linear equalities, linear inequalities and simple bounds.  An algorithm by Gondzio \[36\], somewhat modified, is implemented in ''preSolve''.  See \[7\] for a more detailed presentation.
The '''linear semi-definite programming  problem with  linear matrix inequalities '''('''sdp''') is defined as
<math>
\label{eq:sdp}
\begin{array}{rccccl}
\min\limits_{x} & \multicolumn{5}{l}{f(x) = {c^T}x} \\
& \\
s/t & x_{L}  & \leq & x  & \leq & x_{U}  \\
& b_{L}  & \leq & Ax & \leq & b_{U} \\
& \multicolumn{5}{r}{Q^{i}_0 + \Sum{k=1}{n} Q^{i}_{k}x_{k} \preccurlyeq 0,\qquad i=1,\ldots,m.} \\
\end{array}
</math>
where <math>$c, x, x_L, x_U \in \MATHSET{R}^n$</math>, <math>$A \in \MATHSET{R}^{m_1\times n}$</math>, <math>$b_L,b_U \in \MATHSET{R}^{m_1}$</math> and <math>$Q^{i}_{k}$</math> are symmetric matrices of similar dimensions in each constraint <math>$i$</math>. If there are several LMI constraints, each may have its own dimension.
Recommended solvers: '''TOMLAB /PENSDP and TOMLAB /PENBMI'''.
The '''linear semi-definite programming  problem with  bilinear  matrix  inequalities '''('''bmi''')  is defined similarly to the '''sdp''' problem, but with the matrix inequality
<math>
\label{eq:bmi}
Q^{i}_0 + \Sum{k=1}{n} Q^{i}_{k}x_{k} + \Sum{k=1}{n}\Sum{l=1}{n} x_{k}x_{l}K^{i}_{kl}\preccurlyeq 0
</math>
The MEX  solvers ''pensdp ''and ''penbmi ''treat  '''sdp '''and '''bmi  '''problems, respectively.  These are available in the TOMLAB  /PENSDP and TOMLAB  /PENBMI toolboxes.
The MEX-file solver ''pensdp'' available in the TOMLAB  /PENSDP toolbox implements a penalty method aimed at large-scale dense and sparse '''sdp''' problems. The interface to the solver allows for data input in the sparse SDPA input format as well as a TOMLAB  specific format.
The MEX-file solver ''penbmi ''available in the TOMLAB  /PENBMI toolbox is similar to ''pensdp'', with added support for the bilinear matrix inequalities.
===Solver Routines in TOMLAB===
====TOMLAB Base Module====
TOMLAB  includes a large set of optimization solvers. Most of them were originally developed by the Applied Optimization and Modeling group (TOM)  \[42\]. Since then they have been improved, e.g. to handle Matlab sparse arrays, and further developed. Table 5 lists the main set of TOM  optimization  solvers in all versions of TOMLAB.
{|
|+Table 5: The optimization solvers in TOMLAB  Base Module.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Section'''<hr />||'''Page'''<hr />
|- valign="top"
|''clsSolve''||Constrained nonlinear least squares. Handles simple bounds and linear equality and inequality constraints using an active-set strategy. Implements Gauss-Newton, and hybrid quasi-Newton and Gauss-Newton methods.||11.1.1||PAGE
|- valign="top"
|''conSolve''||Constrained nonlinear minimization solver using two different sequential quadratic programming methods.||11.1.2||PAGE
|- valign="top"
|''cutplane''||Mixed-integer programming using a cutting plane algorithm.||11.1.3||PAGE
|- valign="top"
|''DualSolve''||Solves a linear program with a dual feasible starting point.||11.1.4||PAGE
|- valign="top"
|''glbSolve''||Box-bounded global optimization, using only function values.||11.1.7||PAGE
|- valign="top"
|''glcCluster''||Hybrid algorithm for constrained mixed-integer global optimization. Uses a combination of ''glcFast'' (DIRECT) and a clustering algorithm.||11.1.7||PAGE
|- valign="top"
|''glcSolve''||Global mixed-integer nonlinear programming, using no derivatives.||11.1.7||PAGE
|- valign="top"
|''infSolve''||Constrained minimax optimization. Reformulates problem and calls any suitable nonlinear solver.||11.1.8||PAGE
|- valign="top"
|''lpSimplex''||Linear programming using a simplex algorithm.||11.1.14||PAGE
|- valign="top"
|''MILPSOLVE''||Mixed-integer programming using branch and bound.||11.1.16||PAGE
|- valign="top"
|''L1Solve''||Constrained L1 optimization. Reformulates problem and calls any suitable nonlinear solver.||11.1.15||PAGE
|- valign="top"
|''mipSolve''||Mixed-integer programming using a branch-and-bound algorithm.||11.1.18||PAGE
|- valign="top"
|''nlpSolve''||Constrained nonlinear minimization solver using a filter SQP algorithm.||SECTION||PAGE
|- valign="top"
|''pdco''||Linearly constrained nonlinear minimization solver using a primal-dual barrier algorithm.||SECTION||PAGE
|- valign="top"
|''pdsco''||Linearly constrained nonlinear minimization solver using a primal-dual barrier algorithm, for separable functions.||SECTION||PAGE
|- valign="top"
|''qpSolve''||Non-convex quadratic programming.||11.1.24||PAGE
|- valign="top"
|''slsSolve''||Sparse constrained nonlinear least squares. Reformulates problem and calls any suitable sparse nonlinear solver.
|- valign="top"
|''sTrustr''||Constrained convex optimization of partially separable functions, using a structural trust region algorithm.
|- valign="top"
|''ucSolve''||Unconstrained optimization with simple bounds on the parameters. Implements Newton, quasi-Newton and conjugate-gradient methods.||11.1.29||PAGE
|}
Additional  Fortran solvers in TOMLAB  are listed in Table 6. They are called using a set of MEX-file interfaces developed in TOMLAB.
{|
|+Table 6: Additional  solvers in TOMLAB.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Reference'''<hr />||'''Page'''<hr />
|-
|''goalSolve''||Solves sparse multi-objective goal attainment problems||
|-
|''lsei''||Dense constrained  linear least squares||
|-
|''qld''||Convex quadratic programming||
|-
|''Tfzero''||Finding a zero to f(x) in an interval; ''x'' is one-dimensional.||\[70, 15\]
|-
|''Tlsqr''||Sparse linear least squares.||\[60, 59, 68\]
|-
|''Tnnls''||Nonnegative constrained linear least squares||
|}
====TOMLAB /BARNLP====
The add-on toolbox TOMLAB  /BARNLP solves  large-scale  nonlinear programming problems using a sparse primal-dual interior point algorithm, in conjunction with a filter for globalization. The solver package was developed in co-operation with Boeing Phantom Works. The TOMLAB  /BARNLP package is described in a separate manual available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /CGO====
The add-on toolbox TOMLAB  /CGO solves costly global optimization problems. The solvers are listed in Table 7. They are written in a combination of Matlab and Fortran code, where the Fortran code is called using a set of
MEX-file interfaces developed in TOMLAB.
{|
|+Table 7: Additional  solvers in TOMLAB /CGO.
|-
|'''Function'''||'''Description'''||'''Reference'''
|-
|''rbfSolve''||Costly constrained box-bounded optimization using an RBF algorithm.||\[9\]
|-
|''ego''||Costly constrained box-bounded optimization using the Efficient Global Optimization (EGO) algorithm.||\[16\]
|}
====TOMLAB /CONOPT====
The add-on toolbox TOMLAB  /CONOPT  solves large-scale nonlinear programming problems with a feasible path GRG method.  The solver package was developed in co-operation with Arki  Consulting and Development A/S. TOMLAB  /CONOPT  is described in a separate manual available at [http://tomopt.com/ http://tomopt.com]. There is also m-file help available.
====TOMLAB /CPLEX====
The add-on toolbox TOMLAB  /CPLEX solves large-scale mixed-integer linear and quadratic programming problems. The solver package was developed in co-operation with ILOG S.A. The TOMLAB  /CPLEX solver package and interface are described in a manual available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /KNITRO====
The add-on toolbox TOMLAB  /KNITRO solves large-scale nonlinear programming problems with interior (barrier) point trust-region methods. The solver package was developed in co-operation with Ziena Optimization Inc. The TOMLAB  /KNITRO manual is available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /LGO====
The add-on toolbox TOMLAB  /LGO  solves global nonlinear programming problems. The LGO solver package is developed by Pintér Consulting Services, Inc. \[63\]. The TOMLAB  /LGO  manual is available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /MINLP====
The add-on toolbox TOMLAB  /MINLP provides solvers for continuous and mixed-integer quadratic programming
('''qp''','''miqp''')  and continuous and mixed-integer nonlinear constrained optimization ('''con''', '''minlp''').
All  four solvers, written  by Roger Fletcher and Sven Leyffer, University of Dundee, Scotland, are available in sparse and dense versions.  The solvers are listed in Table 8.
The TOMLAB  /MINLP manual is available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /MINOS====
Another set of Fortran solvers was developed by the Stanford Optimization Laboratory (SOL). Table 9 lists the solvers included in TOMLAB  /MINOS.  The solvers are called using a set of MEX-file interfaces developed as part of TOMLAB.  All functionality of the SOL solvers is available and changeable in the TOMLAB  framework in Matlab.
====TOMLAB /OQNLP====
The add-on toolbox TOMLAB  /OQNLP  uses a multistart  heuristic algorithm to find global optima of smooth constrained nonlinear programs (NLPs) and mixed-integer nonlinear programs (MINLPs). The solver package was developed in co-operation with Optimal Methods Inc. The TOMLAB  /OQNLP  manual is available at [http://tomopt.com/ http://tomopt.com].
{|
|+Table 8: Solver routines in TOMLAB  /MINLP.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Reference'''<hr />
|-
|''bqpd''||Quadratic programming using a null-space method.||''bqpdTL.m''
|-
|''miqpBB''||Mixed-integer quadratic programming using ''bqpd'' as subsolver.||''miqpBBTL.m''
|-
|''filterSQP''||Constrained nonlinear minimization using a Filtered Sequential QP method.||''filterSQPTL.m''
|-
|''minlpBB''||Constrained, mixed-integer nonlinear minimization using a branch- and-bound search scheme. ''filterSQP ''is used as NLP solver.||''minlpBBTL.m''
|}
{|
|+Table 9: The SOL optimization solvers in TOMLAB  /MINOS.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Reference'''<hr />||'''Page'''<hr />
|-
|''MINOS 5.51''||Sparse linear and nonlinear programming with linear and nonlinear constraints.||\[57\]
|-
|''LP-MINOS''||A special version of the ''MINOS 5.5'' MEX-file interface for sparse linear programming.||\[57\]
|-
|''QP-MINOS''||A special version of the ''MINOS 5.5 ''MEX-file interface for sparse quadratic programming.||\[57\]
|-
|''LPOPT 1.0-10''||Dense linear programming.||\[30\]
|-
|''QPOPT 1.0-10 ''||Non-convex quadratic programming with dense constraint matrix and sparse or dense quadratic matrix.||\[30\]
|}
====TOMLAB /NLPQL====
The add-on toolbox TOMLAB  /NLPQL solves dense nonlinear programming problems, multi-criteria optimization problems and nonlinear fitting problems.  The solver package was developed in co-operation with Klaus Schittkowski. The TOMLAB  /NLPQL manual is available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /NPSOL====
The add-on toolbox TOMLAB  /NPSOL is a sub package of TOMLAB  /SOL. The package includes the MINOS solvers as well as NPSOL, LSSOL and NLSSOL. The TOMLAB  /NPSOL manual is available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /PENBMI====
The add-on toolbox TOMLAB  /PENBMI solves linear semi-definite programming problems with linear and bilinear matrix inequalities. The solvers are listed in Table 10. They are written in a combination of Matlab and C code. The TOMLAB  /PENBMI manual is available at [http://tomopt.com/ http://tomopt.com].
{|
|+Table 10: Additional  solvers in TOMLAB  /PENBMI.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Reference'''<hr />||'''Page'''<hr />
|-
|''penbmi''||Sparse and dense linear semi-definite programming using a penalty algorithm.
|-
|''penfeas_bmi''||Feasibility check of systems of linear matrix inequalities, using ''penbmi''.
|}
====TOMLAB /PENSDP====
The add-on toolbox TOMLAB  /PENSDP  solves linear semi-definite programming problems with linear matrix inequalities. The solvers are listed in Table 11. They are written  in a combination of Matlab and C code. The TOMLAB  /PENSDP manual is available at [http://tomopt.com/ http://tomopt.com].
{|
|+Table 11: Additional  solvers in TOMLAB  /PENSDP.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Reference'''<hr />||'''Page'''<hr />
|-
|''pensdp''||Sparse and dense linear semi-definite programming using a penalty algorithm.
|-
|''penfeas_sdp''||Feasibility check of systems of linear matrix inequalities, using ''pensdp''.
|}
====TOMLAB /SNOPT====
The add-on toolbox TOMLAB  /SNOPT  is a sub package of TOMLAB  /SOL. The package includes the MINOS solvers  as well as SNOPT and SQOPT. The TOMLAB  /SNOPT  manual is available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /SOL====
The extension toolbox TOMLAB  /SOL  gives  access to the complete set of Fortran  solvers developed by the Stanford Systems Optimization Laboratory (SOL). These solvers are listed in Tables 9 and 12.
====TOMLAB /SPRNLP====
The add-on toolbox TOMLAB  /SPRNLP solves large-scale nonlinear programming problems. SPRNLP is a state-of-the-art sequential quadratic programming (SQP) method, using an augmented Lagrangian merit function and safeguarded line search.  The solver package was developed in co-operation with  Boeing Phantom Works.  The TOMLAB  /SPRNLP package is described in a separate manual available at [http://tomopt.com/ http://tomopt.com].
{|
|+Table 12: The optimization solvers in the TOMLAB  /SOL toolbox.
|-
|'''Function'''<hr />||'''Description'''<hr />||'''Reference'''<hr />||'''Page'''<hr />
|-
|''NPSOL 5.02''||Dense linear and nonlinear programming with linear and nonlinear constraints.||\[34\]
|-
|''SNOPT 6.2-2''||Large, sparse linear and nonlinear programming with linear and nonlinear constraints.||\[33, 31\]
|-
|''SQOPT 6.2-2''||Sparse convex quadratic programming.||\[32\]
|-
|''NLSSOL 5.0-2''||Constrained nonlinear least squares. NLSSOL is based on NPSOL; there is no separate reference beyond the general NPSOL reference.||\[34\]
|-
|''LSSOL 1.05-4''||Dense linear and quadratic programs (convex), and constrained linear least squares problems.||\[29\]
|}
====TOMLAB /XA====
The add-on toolbox TOMLAB  /XA solves large-scale linear, binary, integer and semi-continuous linear programming problems, as well as quadratic programming problems. The solver package was developed in co-operation with  Sunset Software  Technology.  The TOMLAB  /XA package is described in a separate manual available at [http://tomopt.com/ http://tomopt.com].
====TOMLAB /Xpress====
The add-on toolbox TOMLAB  /Xpress solves large-scale mixed-integer linear and quadratic programming problems. The solver package was developed in co-operation with Dash Optimization  Ltd.  The TOMLAB  /Xpress solver package and interface are described in the HTML manual that comes with the installation package. There is also a TOMLAB  /Xpress manual available at [http://tomopt.com/ http://tomopt.com].
====Finding  Available  Solvers====
To get a list of all available solvers, including Fortran, C and Matlab Optimization Toolbox solvers, for a certain ''solvType'', call the routine ''SolverList'' with ''solvType'' as argument. ''solvType'' should either be a string ('uc', 'con' etc.) or the corresponding ''solvType'' number as given in Table 1, page 11. For example, to get a list of all available solvers of ''solvType'' '''con''', the call
<pre>
SolverList('con')
</pre>
gives the output
<pre>
>> SolverList('con');
Tomlab recommended choice for Constrained Nonlinear Programming (NLP)
npsol
Other solvers for NLP
    Licensed:
    nlpSolve
    conSolve
    sTrustr
    constr
    minos
    snopt
    fmincon
    filterSQP
    PDCO
    PDSCO
    Non-licensed:
    NONE
Solvers also handling NLP
    Licensed:
   
    glcSolve
    glcFast
    glcCluster
    rbfSolve
    minlpBB
    Non-licensed:
    NONE
</pre>
''SolverList ''also returns two  output  arguments: all available solvers as a string matrix  and a vector with  the corresponding ''solvType ''for each solver.
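For example, the two output arguments can be captured as follows (the variable names here are arbitrary):

<pre>
[SolvList, SolvTypeV] = SolverList('con');
% SolvList is a string matrix of solver names;
% SolvTypeV(i) is the solvType number for SolvList(i,:)
</pre>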
Note that solvers for a more general problem type may be used to solve the problem. In Table 13 an attempt has been made to classify these relations.
Table 13: The problem classes (''probType'') possible to solve with each type of solver (''solvType'') are marked with an ''x''.  When a solver is in theory possible to use, but from a practical point of view probably not suitable, parentheses are added, (''x'').
{|
|-
| || colspan="10" align="center" border="2" | solvType
|-
|probType||'''uc'''||'''qp'''||'''con'''||'''ls'''||'''lls'''||'''cls'''||'''mip'''||'''lp'''||'''glb'''||'''glc'''
|-
| colspan="11" | <hr />
|-
|'''uc'''||x||||x||||||||||||x||(x)
|-
|'''qp'''||||x||x||||||||||||||(x)
|-
|'''con'''||||||x||||||||||||||(x)
|-
|'''ls'''||||||x||x||||x||||||||(x)
|-
|'''lls'''||||x||x||x||x||x||||||||(x)
|-
|'''cls'''||||||x||||||x||||||||(x)
|-
|'''mip'''||||||||||||||x||||||(x)
|-
|'''lp'''||||x||x||||||||x||x||||(x)
|-
|'''glb'''||||||(x)||||||||||||x||x
|-
|'''glc'''||||||(x)||||||||||||||x
|-
| colspan="11" | <hr />
|-
|'''exp'''||x||||x||(x)||||x||||||||(x)
|-
|}
==Defining  Problems in TOMLAB==
TOMLAB  is based on the  principle of creating a problem structure that  defines the problem and includes all relevant information needed for the solution of the user problem.  One unified format is defined, the TOMLAB format. The TOMLAB  format gives the user a fast way to setup a problem structure and solve the problem from the Matlab command line using any suitable TOMLAB  solver.
TOMLAB  also includes a modeling engine (or advanced Matlab class), TomSym, see Section  4.3. The package uses Matlab objects and operator overloading to capture Matlab procedures, and generates source code for derivatives of any order.
In this section follows a more detailed description of the TOMLAB format.
===The TOMLAB Format===
The TOMLAB format is a quick way to set up a problem and easily solve it using any of the TOMLAB  solvers. The principle is to put all information in a Matlab structure, which is then passed to the solver; the solver extracts the relevant information.  The structure is also passed to the user function routines for nonlinear problems, making it a convenient way to pass other types of information.
The solution process for the TOMLAB  format has three steps:
# Define the problem structure, often called Prob.
# Call the solver or the universal driver routine ''tomRun''.
# Postprocessing, e.g. print the result of the optimization.
Step 1 can be done in several ways in TOMLAB. The recommended way is to call one of the following routines, depending on the type of optimization problem, see Table 14.
Step 2, the solver call, is either a direct call, e.g. conSolve:
<pre>
Prob = ProbCheck(Prob,  'conSolve');
Result  = conSolve(Prob);
</pre>
or a call to the multi-solver driver routine ''tomRun'', e.g. for constrained optimization:
<pre>
Result  = tomRun('conSolve', Prob);
</pre>
Note that ''tomRun ''handles several input formats. Step 3 could be a call to PrintResult.m:
<pre>
PrintResult(Result);
</pre>
The third step can be included in Step 2 by increasing the print level to 1, 2 or 3 in the call to the driver routine:
<pre>
Result  = tomRun('conSolve',Prob, 3);
</pre>
See the different demo files that give examples of how to apply the TOMLAB  format:  ''conDemo.m'', ''ucDemo.m'', ''qpDemo.m'', ''lsDemo.m'', ''lpDemo.m'', ''mipDemo.m'', ''glbDemo.m'' and ''glcDemo.m''.
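Putting the steps together, a minimal constrained problem can be defined and solved in a few lines. The objective is illustrative and the ''conAssign'' argument list is abbreviated; check it against the reference documentation:

<pre>
% Step 1: define the problem structure
f    = @(x, Prob) x(1)^2 + x(2)^2;       % illustrative objective
x_L  = [-5; -5];  x_U = [5; 5];  x_0 = [1; 1];
Prob = conAssign(f, [], [], [], x_L, x_U, 'con example', x_0);

% Step 2: call the driver routine
Result = tomRun('conSolve', Prob);

% Step 3: postprocessing
PrintResult(Result);
</pre>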
{|
|+Table 14: Routines to create a problem structure in the TOMLAB  format.
|-
|'''Matlab call'''||'''probTypes'''||'''Type of optimization problem'''
|-
|Prob = bmiAssign( ... )||14||Semi-definite programming with bilinear matrix inequalities.
|-
|Prob = clsAssign( ... )||4,5,6||Unconstrained and constrained nonlinear least squares.
|-
|Prob = conAssign( ... )||1,3||Unconstrained and constrained nonlinear optimization.
|-
|Prob = expAssign( ... )||17||Exponential fitting  problems.
|-
|Prob = glcAssign( ... )||9,10,15||Box-bounded or mixed-integer constrained global programming.
|-
|Prob = lcpAssign( ... )||22||Linear mixed-complementarity problems.
|-
|Prob = llsAssign( ... )||5||Linear least squares problems.
|-
|Prob = lpAssign( ... )||8||Linear programming.
|-
|Prob = lpconAssign( ... )||3||Linear programming with nonlinear constraints.
|-
|Prob = mcpAssign( ... )||23||Nonlinear mixed-complementarity problems.
|-
|Prob = minlpAssign( ... )||12||Mixed-Integer nonlinear programming.
|-
|Prob = mipAssign( ... )||7||Mixed-Integer programming.
|-
|Prob = miqpAssign( ... )||11||Mixed-Integer quadratic programming.
|-
|Prob = miqqAssign( ... )||18||Mixed-Integer quadratic programming with quadratic constraints.
|-
|Prob = qcpAssign( ... )||23||Quadratic mixed-complementarity problems.
|-
|Prob = qpblockAssign( ... )||2||Quadratic programming (factorized).
|-
|Prob = qpAssign( ... )||2||Quadratic programming.
|-
|Prob = qpconAssign( ... )||3||Quadratic programming with nonlinear constraints.
|-
|Prob = sdpAssign( ... )||13||Semi-definite programming with linear matrix inequalities.
|-
| colspan="3" | <hr />
|-
|Prob = amplAssign( ... )||1-3,7,8,11,12||For AMPL problems defined as ''nl''-files.
|-
|Prob = simAssign( ... )||1,3-6,9-10||General routine, functions and constraints calculated at the same time.
|}
===Modifying existing problems===
It is possible to modify an existing ''Prob'' structure by editing elements directly; however, this is not recommended, since there are dependencies for memory allocation and problem sizes that the user may not be aware of.
There is a set of routines developed specifically for modifying linear constraints (do not modify these directly; ''Prob.mLin'' needs to be set to a proper value if so). All the static information can be set with the following routines.
====add_A====
'''Purpose'''
Adds linear constraints to an existing problem.
'''Calling  Syntax'''
Prob = add_A(Prob, A, b_L, b_U)
'''Description  of Inputs'''
''Prob'' Existing TOMLAB  problem.
''A'' The additional linear constraints.
''b_L'' The lower bounds for the new linear constraints.
''b_U'' The upper bounds for the new linear constraints.
'''Description  of Outputs'''
''Prob'' Modified TOMLAB  problem.
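As a schematic example (the data values are illustrative), ''add_A'' can append one new constraint row to an existing problem:

<pre>
% Append the constraint  0 <= x(1) + x(2) <= 4  to an existing Prob
A_new = [1 1];
Prob  = add_A(Prob, A_new, 0, 4);
</pre>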
====keep_A====
'''Purpose'''
Keeps the linear constraints specified by idx.
'''Calling  Syntax'''
Prob = keep_A(Prob, idx)
'''Description of Inputs'''
''Prob'' Existing TOMLAB  problem.
''idx'' The row indices to keep in the linear constraints.
'''Description of Outputs'''
''Prob'' Modified TOMLAB problem.
====remove_A====
'''Purpose'''
Removes the linear constraints specified by idx.
'''Calling  Syntax'''
Prob = remove_A(Prob, idx)
'''Description  of Inputs'''
''Prob'' Existing TOMLAB  problem.
''idx'' The row indices to remove in the linear constraints.
'''Description  of Outputs'''
''Prob'' Modified TOMLAB  problem.
====replace_A====
'''Purpose'''
Replaces the linear constraints.
'''Calling  Syntax'''
Prob = replace_A(Prob, A, b_L, b_U)
'''Description of Inputs'''
''Prob ''Existing TOMLAB  problem.
''A'' New linear constraints.
''b_L'' Lower bounds for linear constraints.
''b_U'' Upper bounds for linear constraints.
'''Description  of Outputs'''
''Prob'' Modified TOMLAB  problem.
====modify_b_L====
'''Purpose'''
Modify lower bounds for linear constraints. If ''idx'' is not given, ''b_L'' will be replaced.
'''Calling  Syntax'''
Prob = modify_b_L(Prob, b_L, idx)
'''Description  of Inputs'''
''Prob'' Existing TOMLAB  problem.
''b_L'' New lower bounds for the linear constraints.
''idx'' Indices for the modified constraint bounds (optional).
'''Description  of Outputs'''
''Prob'' Modified TOMLAB  problem.
====modify_b_U====
'''Purpose'''
Modify upper bounds for linear constraints. If ''idx'' is not given, ''b_U'' will be replaced.
'''Calling Syntax'''
Prob = modify_b_U(Prob, b_U, idx)
'''Description  of Inputs'''
''Prob'' Existing TOMLAB  problem.
''b_U'' New upper bounds for the linear constraints.
''idx'' Indices for the modified constraint bounds (optional).
'''Description  of Outputs'''
''Prob'' Modified TOMLAB  problem.
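For example, to change only the upper bound of the second linear constraint while leaving the others untouched:

<pre>
% Set b_U(2) = 10; all other bounds are unchanged
Prob = modify_b_U(Prob, 10, 2);
</pre>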
====modify_c====
'''Purpose'''
Modify linear objective (LP/QP  only).
'''Calling  Syntax'''
Prob = modify_c(Prob, c, idx)
'''Description  of Inputs'''
''Prob ''Existing TOMLAB  problem.
''c ''New linear coefficients.
''idx ''Indices for the modified linear coefficients (optional).
'''Description  of Outputs'''
''Prob ''Modified TOMLAB  problem.
====modify_c_L====
'''Purpose'''
Modify lower bounds for nonlinear constraints. If ''idx'' is not given, ''c_L'' will be replaced.
'''Calling  Syntax'''
Prob = modify_c_L(Prob, c_L, idx)
'''Description of Inputs'''
''Prob ''Existing TOMLAB  problem.
''c_L ''New lower bounds for the nonlinear constraints.
''idx ''Indices for the modified constraint bounds (optional).
'''Description  of Outputs'''
''Prob ''Modified TOMLAB  problem.
====modify_c_U====
'''Purpose'''
Modify upper bounds for nonlinear constraints. If ''idx'' is not given, ''c_U'' will be replaced.
'''Calling  Syntax'''
Prob = modify_c_U(Prob, c_U, idx)
'''Description  of Inputs'''
''Prob ''Existing TOMLAB  problem.
''c_U ''New upper bounds for the nonlinear constraints.
''idx ''Indices for the modified constraint bounds (optional).
'''Description  of Outputs'''
''Prob ''Modified TOMLAB  problem.
====modify_x_0====
'''Purpose'''
Modify starting point. If ''x_0'' is outside the bounds, an error will be returned. If ''idx'' is not given, ''x_0'' will be replaced.
'''Calling  Syntax'''
Prob = modify_x_0(Prob, x_0, idx)
'''Description  of Inputs'''
''Prob ''Existing TOMLAB  problem.
''x_0 ''New starting points.
''idx ''Indices for the modified starting points (optional).
'''Description of Outputs'''
''Prob ''Modified TOMLAB  problem.
====modify_x_L====
'''Purpose'''
Modify lower bounds for decision variables. If ''idx'' is not given, ''x_L'' will be replaced. ''x_0'' will be shifted if needed.
'''Calling  Syntax'''
Prob = modify_x_L(Prob, x_L, idx)
'''Description of Inputs'''
''Prob ''Existing TOMLAB  problem.
''x_L ''New lower bounds for the decision variables.
''idx ''Indices for the modified lower bounds (optional).
'''Description  of Outputs'''
''Prob ''Modified TOMLAB  problem.
====modify_x_U====
'''Purpose'''
Modify upper bounds for decision variables. If ''idx'' is not given, ''x_U'' will be replaced. ''x_0'' will be shifted if needed.
'''Calling  Syntax'''
Prob = modify_x_U(Prob, x_U, idx)
'''Description  of Inputs'''
''Prob ''Existing TOMLAB  problem.
''x_U ''New upper bounds for the decision variables.
''idx ''Indices for the modified upper bounds (optional).
'''Description  of Outputs'''
''Prob ''Modified TOMLAB  problem.
===TomSym===
For further information about TomSym, please visit [http://tomsym.com/ http://tomsym.com/] - the pages contain detailed modeling examples and real life applications. All illustrated examples are available in the folder /tomlab/tomsym/examples/ in the TOMLAB  installation.  The modeling engine supports all problem types in TOMLAB  with  some minor exceptions.
A detailed function listing is available in Appendix C.
TomSym combines the best feature of symbolic differentiation, namely that it produces source code with simplifications and optimizations, with the strong point of automatic differentiation, namely that the result is a procedure rather than an expression. Hence the generated code does not grow exponentially in size for complex expressions.
Both forward and reverse modes are supported; however, reverse mode is the default when computing the derivative with respect to more than one variable. The command ''derivative'' results in forward mode, and ''derivatives'' in reverse mode.
TomSym produces very efficient and fully vectorized code and is compatible with TOMLAB  /MAD for situations where automatic differentiation may be the only option for parts of the model.
It should also be noted that TomSym automatically provides first and second order derivatives as well as problem sparsity patterns. With  the use of TomSym the user no longer needs to code cumbersome derivative expressions and Jacobian/Hessian sparsity patterns for most optimization and optimal control problems.
The main features in TomSym can be summarized with the following list:
*A full modeling environment in Matlab with support for most built-in  mathematical operators.
*Automatically  generates derivatives as Matlab code.
*A complete integration with PROPT (optimal control platform).
*Interfaced and compatible with MAD, i.e. MAD can be used when symbolic modeling is not suitable.
*Support for if, then, else statements.
*Automated code simplification for generated models.
*Ability to analyze most p-coded files (if code is vectorized).
====Modeling====
One of the main strengths of TomSym is the ability to automatically and quickly compute symbolic derivatives of matrix expressions. The derivatives can then be converted into efficient Matlab code.
The matrix derivative of a matrix function is a fourth-rank tensor - that is, a matrix each of whose entries is a matrix. Rather than using four-dimensional matrices to represent this, TomSym continues to work in two dimensions. This makes it possible to take advantage of the very efficient handling of sparse matrices in Matlab (not available for higher-dimensional matrices).
In order for the derivative to be two-dimensional, TomSym's derivative reduces its arguments to one-dimensional vectors before the derivative is computed. In the returned ''J '', each row corresponds to an element of ''F '', and each column corresponds to an element of ''X ''. As usual in Matlab, the elements of a matrix are taken in column-first order.
For vectors ''F ''and ''X '', the resulting ''J ''is the well-known Jacobian matrix.
Observe that the tomSym notation is slightly different from commonly used mathematical notation, in which the tensor (''dF'') is flattened into a two-dimensional matrix as it is written (there are actually two variations of this in common use - the indexing of the elements of X may or may not be transposed). The notation used in tomSym was chosen to minimize the amount of element reordering needed to compute gradients for common expressions in optimization problems.
For example, in common mathematical notation, the so-called self-derivative matrix (''dX/dX'') is an mn-by-mn square (or mm-by-nn rectangular in the non-transposed variation) matrix containing mn ones spread out in a seemingly random manner. In tomSym notation, the self-derivative matrix is the mn-by-mn identity matrix.
The difference in notation only involves the ordering of the elements, and reordering the elements to a different notational convention should be trivial  if tomSym is used to generate derivatives for applications other than for TOMLAB  and PROPT.
Example:
<pre>
>> toms y
>> toms 3x1 x
>> toms 2x3 A
>> f = (A*x).^(2*y)
f = tomSym(2x1):
  (A*x).^(2*y)
>> derivative(f,A)
ans = tomSym(2x6):
  (2*y)*setdiag((A*x).^(2*y-1))*kron(x',eye(2))
</pre>
In the above example, the 2x1 symbol ''f'' is differentiated with respect to the 2x3 symbol ''A''. The result is a 2x6 matrix, representing <math>d(vec(f))/d(vec(A))</math>.
The displayed text  is not necessarily identical to the m-code that  will  be generated  from an expression. For example, the identity matrix is generated using speye in m-code, but displayed  as eye (Derivatives tend to involve many sparse matrices, which Matlab handles efficiently). The mcodestr command converts a tomSym object to a Matlab code string.
<pre>
>> mcodestr(ans)
ans =
setdiag((2*y)*(A*x).^(2*y-1))*kron(x',[1 0;0 1])
</pre>
Observe that the command mcode, and not mcodestr, should be used when generating efficient production code.
====Ezsolve====
TomSym provides the function ezsolve, which needs minimal input  to solve an optimization problem: only the objective function and constraints. For example, the miqpQG example from the tomlab quickguide can be reduced to the following:
<pre>
toms integer  x
toms       y
objective = -6*x + 2*x^2 + 2*y^2 - 2*x*y;
constraints = {x+y<=1.9, x>=0, y>=0};
solution = ezsolve(objective,constraints)
</pre>
Ezsolve calls tomDiagnose to determine the problem type, getSolver to find an appropriate solver, and then sym2prob, tomRun and getSolution in sequence to obtain the solution.
Advanced users may instead call sym2prob and tomRun directly, which makes it possible to modify the Prob struct and set flags for the solver.
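This lower-level workflow can be sketched as follows, reusing the objective and constraints from the miqpQG example above. (This is a sketch only; the solver name passed to tomRun is an example, and in practice getSolver can pick one.)
<pre>
% Sketch: same problem as above, but via sym2prob/tomRun instead of ezsolve.
toms integer x
toms y
objective = -6*x + 2*x^2 + 2*y^2 - 2*x*y;
constraints = {x+y<=1.9, x>=0, y>=0};

% Create a standard TOMLAB Prob structure from the symbolic problem.
Prob = sym2prob(objective, constraints);

% The Prob struct may now be modified, e.g. to set solver flags, before:
Result = tomRun('minlpBB', Prob, 1);  % solver choice is an example
solution = getSolution(Result);
</pre>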
====Usage====
TomSym, unlike most other symbolic algebra packages, focuses on '''vectorized''' notation. This should be familiar to Matlab users, but is very different from many other programming languages. When computing the derivative of a vector-valued function with respect to a vector-valued variable, tomSym attempts to give the derivative as vectorized Matlab code. However, this only works if the original expressions use vectorized notation. For example:
<pre>
toms 3x1 x
f = sum(exp(x));
g = derivative(f,x);
</pre>
results in the efficient ''g'' = ''exp''(''x'')'. In contrast, the mathematically equivalent but slower code:
<pre>
toms 3x1 x
f = 0;
for i=1:length(x)
  f = f+exp(x(i));
end
g = derivative(f,x);
</pre>
results in
<math>
g=(\exp(x(1))\,[1\ 0\ 0]+\exp(x(2))\,[0\ 1\ 0])+\exp(x(3))\,[0\ 0\ 1]
</math>
as each term is differentiated individually. Since tomSym has no way of knowing that all the terms have the same format, it has to compute the symbolic derivative for each one. In this example, with only three iterations, that is not really a problem, but large for-loops can easily result in symbolic calculations that require more time than the actual numeric solution of the underlying optimization problem. It is thus recommended to avoid for-loops as far as possible when working with tomSym.
Because tomSym computes derivatives with respect to whole symbols, and not their individual elements, it is also a good idea not to group several variables into one vector, when they are mostly used individually.  For example:
<pre>
toms 2x1 x
f = x(1)*sin(x(2));
g = derivative(f,x);
</pre>
results in
<math>g=\sin(x(2))\,[1\ 0]+x(1)\,(\cos(x(2))\,[0\ 1])</math>
Since x is never used as a 2x1 vector, it is better to use two independent 1x1 variables:
<pre>
toms a b
f = a*sin(b);
g = derivative(f,[a; b]);
</pre>
which results in
<math>g=[\sin(b)\quad a\cos(b)]</math>.
The main benefit here is the increased readability of the auto-generated code, but there is also a slight performance increase. (Should the vector ''x'' be needed later, it can of course easily be created using the code <math>x=[a;b]</math>.)
====Scaling variables====
Because tomSym provides analytic derivatives (including second order derivatives) for the solvers, badly scaled problems will likely be less problematic from the start. To further improve the model, tomSym also makes it very easy to manually scale variables before they are presented to the solver. For example, assuming that an optimization problem involves the variable x, which is of the order of magnitude 1e6, and the variable y, which is of the order of 1e-6, the code:
<pre>
toms xScaled  yScaled
x = 1e+6*xScaled;
y = 1e-6*yScaled;
</pre>
will make it possible to work with the tomSym expressions x and y when coding the optimization problem, while the solver solves for the symbols xScaled and yScaled, which will both be of the order of 1. It is even possible to provide starting guesses for x and y (in equation form), because tomSym will solve the linear equation to obtain starting guesses for the underlying xScaled and yScaled.
The solution structure returned by ezsolve will of course only contain xScaled and yScaled, but numeric values for x and y are easily obtained via, e.g. subs(x,solution).
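The pattern described above can be sketched as follows. (The objective here is an arbitrary placeholder chosen only to illustrate the scaling, and the sketch assumes ezsolve accepts an empty constraint list.)
<pre>
toms xScaled yScaled
x = 1e+6*xScaled;   % x is of the order 1e6
y = 1e-6*yScaled;   % y is of the order 1e-6

% Placeholder objective, written in terms of the unscaled expressions.
objective = (x - 2e6)^2 + (y - 3e-6)^2;

% Starting guesses may be given in equation form for x and y; tomSym
% solves for the underlying xScaled and yScaled.
x0 = {x == 1e6, y == 1e-6};
solution = ezsolve(objective, {}, x0);

% The solution struct contains xScaled and yScaled; recover x and y:
xOpt = subs(x, solution);
yOpt = subs(y, solution);
</pre>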
====SDP/LMI/BMI interface====
An interface for bilinear semidefinite problems is included with tomSym. It is also possible to solve nonlinear problems involving semidefinite constraints using any nonlinear solver (the square root of the semidefinite matrix is then introduced as an extra set of unknowns).
See the examples ''optimalFeedbackBMI  ''and ''example sdp''.
====Interface  to MAD and finite differences====
If a user function is incompatible with tomSym, it can still be used in symbolic computations by giving it a "wrapper". For example, if the cos function were not already overloaded by tomSym, it would still be possible to do the equivalent of cos(3*x) by writing feval('cos',3*x).
MAD  then computes the derivatives  when the Jacobian matrix  of a wrapped function is needed. If  MAD  is unavailable, or unable to do the job, numeric differentiation is used.
Second order derivatives cannot be obtained in the current implementation.
It is also possible to force the use of automatic or numerical differentiation for any function used in the code. The following example shows a few of the options available:
<pre>
toms x1 x2
alpha = 100;

% 1. USE MAD FOR ONE FUNCTION.
% Create a wrapper function. In this case we use sin, but it could be any
% MAD supported function.
y = wrap(struct('fun','sin','n',1,'sz1',1,'sz2',1,'JFuns','MAD'),x1/x2);
f = alpha*(x2-x1^2)^2 + (1-x1)^2 + y;

% Setup and solve the problem
c = -x1^2 - x2;
con = {-1000 <= c <= 0
    -10 <= x1 <= 2
    -10 <= x2 <= 2};
x0 = {x1 == -1.2
    x2 == 1};
solution1 = ezsolve(f,con,x0);

% 2. USE MAD FOR ALL FUNCTIONS.
options = struct;
options.derivatives = 'automatic';
f = alpha*(x2-x1^2)^2 + (1-x1)^2 + sin(x1/x2);
solution2 = ezsolve(f,con,x0,options);

% 3. USE FD (Finite Differences) FOR ONE FUNCTION.
% Create a new wrapper function. In this case we use sin, but it could be
% any function since we use numerical derivatives.
y = wrap(struct('fun','sin','n',1,'sz1',1,'sz2',1,'JFuns','FDJac'),x1/x2);
f = alpha*(x2-x1^2)^2 + (1-x1)^2 + y;
solution3 = ezsolve(f,con,x0);

% 4. USE FD AND MAD FOR ONE FUNCTION EACH.
y1 = 0.5*wrap(struct('fun','sin','n',1,'sz1',1,'sz2',1,'JFuns','MAD'),x1/x2);
y2 = 0.5*wrap(struct('fun','sin','n',1,'sz1',1,'sz2',1,'JFuns','FDJac'),x1/x2);
f = alpha*(x2-x1^2)^2 + (1-x1)^2 + y1 + y2;
solution4 = ezsolve(f,con,x0);

% 5. USE FD FOR ALL FUNCTIONS.
options = struct;
options.derivatives = 'numerical';
f = alpha*(x2-x1^2)^2 + (1-x1)^2 + sin(x1/x2);
solution5 = ezsolve(f,con,x0,options);

% 6. USE MAD FOR OBJECTIVE, FD FOR CONSTRAINTS.
options = struct;
options.derivatives = 'numerical';
options.use_H = 0;
options.use_d2c = 0;
options.type = 'con';
Prob = sym2prob(f,con,x0,options);
madinitglobals; Prob.ADObj = 1; Prob.ConsDiff = 1;
Result = tomRun('snopt', Prob, 1);
solution6 = getSolution(Result);
</pre>
====Simplifications====
The code generation function detects sub-expressions that occur more than once and optimizes by creating temporary variables for them, since it is very common for a function to share expressions with its derivative, or for the derivative to contain repeated expressions.
Note that it is not necessary to complete code generation in order to evaluate a tomSym object numerically. The subs function can be used to replace symbols by their numeric values, resulting in an evaluation.
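For example, a tomSym object can be evaluated numerically by substituting values for its symbols (a small sketch with an arbitrary expression):
<pre>
toms 3x1 x
f = sum(exp(x));

% Substitute numeric values for the symbol x; the result is evaluated
% directly, with no code generation step.
fval = subs(f, x, [0; 1; 2]);
</pre>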
TomSym also automatically implements algebraic simplifications of expressions. Among them are:
*Multiplication by 1 is eliminated: 1 ''* A ''= ''A''
*Addition/subtraction of 0 is eliminated: 0 + ''A ''= ''A''
*All-same matrices are reduced to scalars: [3; 3; 3] + ''x ''= 3 + ''x''
*Scalars are moved to the left in multiplications:  ''A * y ''= ''y * A''
*Scalars are moved to the left in addition/subtraction:  ''A - y ''= ''-y ''+ ''A''
*Expressions involving element-wise operations are moved inside setdiag: ''setdiag''(''A'')+''setdiag''(''A'') = ''setdiag''(''A''+ ''A'')
*Inverse operations cancel: ''sqrt''(''x'')^2  = ''x''
*Multiplication by inverse cancels: ''A * inv''(''A'')  = ''eye''(''size''(''A''))
*Subtraction of self cancels: ''A - A ''= ''zeros''(''size''(''A''))
*Among others...
Except in the case of scalar-matrix operations, tomSym does not reorder multiplications or additions, which means that some expressions, like (A+B)-A, will not be simplified (this might be implemented in a later version, but must be done carefully so that truncation errors are not introduced).
Simplifications are also applied when using subs. This makes it quick and easy to handle parameterized problems. For example, if an intricate optimization problem is to be solved for several values of a parameter a, then one might first create the symbolic functions and gradients using a symbolic a, and then substitute the different values and generate m-code for each substituted function. If some case, such as ''a'' = 0, results in entire sub-expressions being eliminated, then the m-code will be shorter in that case.
It is also possible to generate complete problems with constants as decision variables and then change the bounds for these variables to make them "real constants". The drawback is that the problem will be slightly larger, but the problem only has to be generated once.
The following problem defines the variable alpha as a toms, then the bounds on alpha are adjusted to solve the problem for all values of alpha from 1 to 100.
<pre>
toms x1 x2

% Define alpha as a toms although it is a constant
toms alpha

% Setup and solve the problem
f = alpha*(x2-x1^2)^2 + (1-x1)^2;
c = -x1^2 - x2;
con = {-1000 <= c <= 0
    -10 <= x1 <= 2
    -10 <= x2 <= 2};
x0 = {x1 == -1.2; x2 == 1};
Prob = sym2prob(f,con,x0);
%  Now  solve  for alpha  = 1:100, while reusing x_0
obj  = zeros(100,1);
for  i=1:100
    Prob.x_L(Prob.tomSym.idx.alpha) = i;
    Prob.x_U(Prob.tomSym.idx.alpha) = i;
    Prob.x_0(Prob.tomSym.idx.alpha) = i;
    Result  = tomRun('snopt', Prob,  1);
    Prob.x_0  = Result.x_k;
    obj(i) = Result.f_k;
end
</pre>
====Special functions====
TomSym adds some functions that duplicate the functionality of Matlab, but that are more suitable for symbolic treatment. For example:
*'''setDiag '''and '''getDiag '''- replace some uses of Matlab's diag function, but clarify whether diag(x) means "create a matrix where the diagonal elements are the elements of x" or "extract the main diagonal from the matrix x".
*'''subsVec''' applies an expression to a list of values. The same effect can be achieved  with  a for-loop, but subsVec gives more efficient derivatives.
*'''ifThenElse''' - a replacement for the if ... then ... else constructs (see below).
'''If ...  then ...  else:'''
A common reason that it is difficult to implement a function in tomSym is that it contains code like the following:
<pre>
if x<2
    y = 0;
else
    y = x-2;
end
</pre>
Because x is a symbolic object, the expression ''x < ''2 does not evaluate to true or false, but to another symbolic object.
In tomSym, one should instead write:
<pre>
y = ifThenElse(x<2,0,x-2)
</pre>
This will result in a symbolic object that contains information about both the "true" and the "false" scenario. However, taking the derivative of this function will result in a warning, because the derivative is not well-defined at ''x ''= 2.
The "smoothed" form:
<pre>
y = ifThenElse(x<2,0,x-2,0.1)
</pre>
yields a function that is essentially the same for abs(''x'' - 2) > 3*0.1, but which follows a smooth curve near ''x'' = 2, ensuring that derivatives of all orders exist. However, this introduces a local minimum which did not exist in the original function, and invalidates convexity.
It is recommended that the smooth form of ifThenElse be used for nonlinear problems whenever it replaces a discontinuous function. However, for convex functions (like the one above) it is usually better to use code that helps tomSym know that convexity is preserved. For example, instead of the above ifThenElse(x<2, 0, x-2, 0.1), the equivalent max(0, x-2) is preferred.
====Procedure vs parse-tree====
TomSym works with procedures. This makes it different from many symbolic algebra packages, which mainly work with parse-trees.
In optimization, it is not uncommon for objectives and constraints to be defined using procedures that  involve loops. TomSym is built  to handle these efficiently.  If a function is defined using many intermediate steps, then tomSym will keep track of those steps in an optimized procedure description. For example, consider the code:
<pre>
toms x
y = x*x;
z = sin(y)+cos(y);
</pre>
In the tomSym object z, there is now a procedure, which looks something like:
<pre>
temp = x*x;
result = cos(temp)+sin(temp);
</pre>
Note: It is not necessary to use the intermediate symbol y. TomSym automatically detects duplicated expressions, so the code
<math>sin(x*x)+cos(x*x)</math>
would result in the same optimized procedure for z.
On the other hand, the corresponding code using the Symbolic Toolbox:
<pre>
syms x
y = x*x;
z = sin(y)+cos(y);
</pre>
results in an object z that contains <math>\cos(x^2)+\sin(x^2)</math>, resulting in a double evaluation of <math>x^2</math>.
This may seem like a small difference in this simplified example, but in real-life applications, the difference can be significant.
'''Numeric  stability:'''
For example, consider the following code, which computes the Legendre polynomials up to the 100th order in tomSym (The calculation takes about two seconds on a modern computer).
<pre>
toms x
p{1}=1;
p{2}=x;
for i=1:99
  p{i+2} = ((2*i+1)*x.*p{i+1}-i*p{i})./(i+1);
end
</pre>
Replacing "toms" with "syms" on the first line should cause the same polynomials to be computed using MathWorks' Symbolic Toolbox. But after a few minutes, when only about 30 polynomials have been computed, the program crashes as it fails to allocate more memory. This is because the expression grows exponentially in size. To circumvent the problem, the expression must be algebraically simplified after each step. The following code succeeds in computing the 100 polynomials using the Symbolic Toolbox.
<pre>
syms x
p{1}=1;
p{2}=x;
for i=1:99
  p{i+2} = simplify(((2*i+1)*x.*p{i+1}-i*p{i})./(i+1));
end
</pre>
However, the simplification changes the way in which the polynomial is computed. This is clearly illustrated if we insert ''x'' = 1 into the 100th order polynomial. This is accomplished by using the command subs(p101,x,1) for both the tomSym and the Symbolic Toolbox expressions. TomSym returns the result 1.0000, which is correct. The Symbolic Toolbox, on the other hand, returns 2.6759e+020, which is off by 20 orders of magnitude. The reason is that the "simplified" symbolic expressions involve subtracting very large numbers. Note: It is of course possible to get correct results from the Symbolic Toolbox using exact arithmetic instead of machine-precision floating-point, but at the cost of much slower evaluation.
In tomSym there are also simplifications, for example identifying identical sub-trees or multiplication by zero, but the simplifications are not as extensive, and care is taken to avoid simplifications that can lead to truncation errors. Thus, an expression computed using tomSym should be exactly as stable (or unstable) as the algorithm used to generate it.
'''Another  example:'''
The following code iteratively defines q as a function of the tomSym symbol x, and computes its derivative:
<pre>
toms x
q=x;
for i=1:4
  q = x*cos(q+2)*cos(q);
end
derivative(q,x)
</pre>
This yields the following tomSym procedure:
<pre>
tempC3 = x+2;
tempC4 = cos(tempC3);
tempC5 = x*tempC4;
tempC10 = cos(x);
tempC12 = tempC10*(tempC4-x*sin(tempC3))-tempC5*sin(x);
tempC13 = tempC5*tempC10;
tempC16 = tempC13+2;
tempC17 = cos(tempC16);
tempC18 = x*tempC17;
tempC24 = cos(tempC13);
tempC26 = tempC24*(tempC17-x*(sin(tempC16)*tempC12))-tempC18*(sin(tempC13)*tempC12);
tempC27 = tempC18*tempC24;
tempC30 = tempC27+2;
tempC31 = cos(tempC30);
tempC32 = x*tempC31;
tempC38 = cos(tempC27);
tempC40 = tempC38*(tempC31-x*(sin(tempC30)*tempC26))-tempC32*(sin(tempC27)*tempC26);
tempC41 = tempC32*tempC38;
tempC44 = tempC41+2;
tempC45 = cos(tempC44);
out = cos(tempC41)*(tempC45-x*(sin(tempC44)*tempC40))-(x*tempC45)*(sin(tempC41)*tempC40);
</pre>
Now, complete the same task using the symbolic toolbox:
<pre>
syms x
q=x;
for i=1:4
  q = x*cos(q+2)*cos(q);
end
diff(q,x)
</pre>
This yields the following symbolic expression:
<pre>
cos(x*cos(x*cos(cos(x+2)*x*cos(x)+2)*cos(cos(x+2)*x*cos(x))+2)*cos(x*cos(cos(x+2)*x*cos(x)+2)*...
cos(cos(x+2)*x*cos(x)))+2)*cos(x*cos(x*cos(cos(x+2)*x*cos(x)+2)*cos(cos(x+2)*x*cos(x))+2)*cos(x*...
cos(cos(x+2)*x*cos(x)+2)*cos(cos(x+2)*x*cos(x))))-x*sin(x*cos(x*cos(cos(x+2)*x*cos(x)+2)*cos(...
...
and 23 more lines of  code.
</pre>
And this example only had four iterations of the loop. As the number of iterations increases, the Symbolic Toolbox expression quickly becomes unmanageable, while the tomSym procedure only grows linearly.
====Problems and error messages====
*'''Warning: Directory c:\Temp\tp563415 could not be removed (or similar). '''When tomSym is used to automatically create m-code, it places the code in a temporary directory given by Matlab's tempname function. Sometimes Matlab chooses a name that already exists, which results in this error message (the temporary directory is cleared of old files regularly by most modern operating systems; otherwise the temporary Matlab files can easily be removed manually).
*'''Attempting to call SCRIPT as a function (or similar). '''Due to an ambiguity in the Matlab syntax, the parser cannot know whether ''f''(''x'') is a function call or the x:th element of the vector f. Hence, it has to guess. The Matlab parser does not understand that toms creates variables, so it will get confused if one of the names was previously used by a function or script (for example, "cs" is a script in the systems identification toolbox). Declaring toms cs and then indexing cs(1) will work at the Matlab prompt, but not in a script. The problem can be circumvented by assigning something to each variable before calling toms.
====Example====
A TomSym model is to a great extent independent of the problem type, i.e. a linear, nonlinear or mixed-integer nonlinear model is modeled with about the same commands. The following example illustrates how to construct and solve a MINLP problem using TomSym.
<pre>
Name='minlp1Demo  - Kocis/Grossman.';
toms 2x1 x
toms 3x1 integer  y
objective = [2  3 1.5  2 -0.5]*[x;y];
constraints = { ...
  x(1) >= 0,  ...
  x(2) >= 1e-8,  ...
  x <= 1e8,  ...
  0 <= y <=1,  ...
  [1  0 1 0 0]*[x;y] <= 1.6, ...
  1.333*x(2) + y(2) <= 3,  ...
  [-1 -1  1]*y <= 0,  ...
  x(1)^2+y(1) == 1.25, ...
  sqrt(x(2)^3)+1.5*y(2) == 3,  ...
};
guess = struct('x',ones(size(x)),'y',ones(size(y)));
options = struct;
options.name  = Name;
Prob = sym2prob('minlp',objective,constraints,guess,options);
Prob.DUNDEE.optPar(20) = 1;
Result  = tomRun('minlpBB',Prob,2);
</pre>
The TomSym engine automatically completes the separation of simple bounds, linear and nonlinear constraints.
==Solving Linear, Quadratic  and Integer  Programming Problems==
This section describes how to define and solve linear and quadratic programming problems, and mixed-integer linear programs, using TOMLAB. Several examples are given on how to proceed, depending on whether a quick solution is wanted or more advanced tests are needed. TOMLAB is also compatible with MathWorks Optimization TB. See Appendix E for more information and test examples.
The test examples and output  files are part  of the standard distribution  of TOMLAB, available in directory ''usersguide'', and all tests can be run by the user. There is a file ''RunAllTests ''that goes through and runs all tests for this section.
Also see the files ''lpDemo.m'', ''qpDemo.m'', and ''mipDemo.m'' in the directory ''examples'', where in each file a set of simple examples is defined. The examples may be run by giving the corresponding file name, which displays a menu, or by running the general TOMLAB help routine ''tomHelp.m''.
===Linear Programming  Problems===
The general formulation in TOMLAB  for a linear programming problem is
<math>
\label{eq2:lp}
\begin{array}{ll}
\min\limits_{x} & f(x) = c^T x \\
&  \\
s/t & \begin{array}{lcccl}
x_{L} & \leq  & x    & \leq & x_{U}, \\
b_{L} & \leq  & A x  & \leq & b_{U} \\
\end{array}
\end{array}
</math>
where <math>c, x, x_L, x_U \in \mathbb{R}^n</math>, <math>A \in \mathbb{R}^{m_1 \times n}</math>, and <math>b_L,b_U \in \mathbb{R}^{m_1}</math>. Equality constraints are defined by setting the lower bound equal to the upper bound, i.e. for constraint <math>i</math>: <math>b_{L}(i) = b_{U}(i)</math>.
To illustrate the solution of LPs consider the simple linear programming test problem
<math>
\label{defLP1}
\begin{array}{ccccl}
\min\limits_{x_{1},x_{2}} & f(x_{1},x_{2}) & = & -7x_{1}-5x_{2} & 
\\s/t &  x_{1}  + 2x_{2}  & \leq  &  6  &  \\    & 4x_{1} +  x_{2}  & \leq  &  12  &  \\    &  x_{1},x_{2}      & \geq  &  0  &
\end{array}
</math>
named ''LP Example''.
The following statements define this problem in Matlab
'''File: '''tomlab/usersguide/lpExample.m
<pre>
Name = 'lptest';
c = [-7 -5]'; %  Coefficients in linear objective function
A = [ 1 2
            4 1 ]; %  Matrix defining linear  constraints
b_U = [ 6 12 ]'; %  Upper bounds on the  linear inequalities
x_L = [ 0 0 ]'; %  Lower bounds on x
%  x_min and x_max are  only  needed if doing  plots
x_min = [ 0  0 ]';
x_max = [10  10 ]';
%  b_L,  x_U and x_0 have default values  and need not  be defined.
%  It is possible to call lpAssign with empty [] arguments instead
b_L = [-inf -inf]';
x_U = [];
x_0 = [];
</pre>
====A Quick Linear  Programming  Solution====
The quickest way to solve this problem is to define the following Matlab statements using the TOMLAB  format:
'''File: '''tomlab/usersguide/lpTest1.m
<pre>
lpExample;
Prob = lpAssign(c, A,  b_L,  b_U, x_L,  x_U, x_0,  'lpExample');
Result  = tomRun('lpSimplex', Prob,  1);
</pre>
lpAssign is used to define the standard Prob structure, which TOMLAB always uses to store all information about a problem. The three last parameters could be left out. The upper bounds default to Inf, and the problem name is only used in the printout in ''PrintResult'' to make the output nicer to read. If x_0, the initial value, is left out, an initial point is found by ''lpSimplex'' solving a feasible-point (Phase I) linear programming problem. In this test the given x_0 is empty, so a Phase I problem must be solved. The solution of this problem gives the following output to the screen
'''File: '''tomlab/usersguide/lpTest1.out
<pre>
===== * * * =================================================================== * * *
TOMLAB  /SOL + /CGO  + /Xpress  MEX  + /CPLEX  Parallel 2-CPU  + 21 more - Tomlab Optimizat
=====================================================================================
Problem:  --- 1:  lpExample f_k -26.571428571428569000
Solver: lpSimplex. EXIT=0.
Simplex  method.  Minimum  reduced  cost.
Optimal  solution  found
FuncEv 3 Iter 3
CPU  time: 0.046800 sec.  Elapsed  time: 0.019000 sec.
</pre>
Having defined the ''Prob'' structure, it is easy to call any solver that can handle linear programming problems,
<pre>
Result  = tomRun('qpSolve', Prob,  1);
</pre>
Even a nonlinear solver may be used.
<pre>
Result  = tomRun('nlpSolve',Prob, 3);
</pre>
All TOMLAB  solvers may either be called directly, or by using the driver routine ''tomRun'', as in this case.
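The two calling styles can be sketched side by side. (A sketch only: it assumes TOMLAB solvers accept the Prob structure directly and that ''PrintResult'', mentioned above, takes the Result structure; the driver additionally handles printing and checks.)
<pre>
% Via the driver routine (print level 1 gives a result summary):
Result = tomRun('lpSimplex', Prob, 1);

% Calling the solver directly on the same Prob structure:
Result = lpSimplex(Prob);
PrintResult(Result);
</pre>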
===Quadratic  Programming  Problems===
The general formulation in TOMLAB  for a quadratic programming problem is
<math>
\label{eq2:qp}
\begin{array}{ll}
\min\limits_{x} & f(x) = \frac{1}{2} x^T F x + c^T x \\
&  \\
s/t & \begin{array}{lcccl} x_{L} & \leq  & x    & \leq & x_{U}, \\ b_{L} & \leq  & A x  & \leq & b_{U} \\
\end{array}
\end{array}
</math>
where <math>c, x, x_L, x_U \in \mathbb{R}^n</math>, <math>F \in \mathbb{R}^{n \times n}</math>, <math>A \in \mathbb{R}^{m_1 \times n}</math>, and <math>b_L,b_U \in \mathbb{R}^{m_1}</math>. Equality constraints are defined by setting the lower bound equal to the upper bound, i.e. for constraint <math>i</math>: <math>b_{L}(i) = b_{U}(i)</math>.
Fixed variables are handled the same way.
To illustrate the solution of QPs consider the simple quadratic programming test problem
<math>
\label{defQP1}
\begin{array}{ll}
\min\limits_{x} & f(x)=4x_{1}^2+1x_{1}x_{2}+4x_{2}^2+3x_{1}-4x_{2} \\
s/t            & x_{1}+x_{2} \leq 5 \\& x_{1}-x_{2}  =  0 \\& x_{1} \geq 0 \\& x_{2} \geq 0, \\
\end{array}
</math>
named ''QP Example''. The following statements define this problem in Matlab.
'''File: '''tomlab/usersguide/qpExample.m
<pre>
Name  = 'QP Example';  % File qpExample.m
F    = [ 8  1        % Matrix F in 1/2 * x' * F * x + c' * x
          1  8 ];
c    = [ 3  -4 ]';    % Vector c in 1/2 * x' * F * x + c' * x
A    = [ 1  1        % Constraint matrix
          1  -1 ];
b_L  = [-inf  0  ]';  % Lower bounds on the linear constraints
b_U  = [ 5    0  ]';  % Upper bounds on the linear constraints
x_L  = [ 0    0  ]';  % Lower bounds on the variables
x_U  = [ inf inf ]';  % Upper bounds on the variables
x_0  = [ 0    1  ]';  % Starting point
x_min = [-1 -1 ];      % Plot region lower bound parameters
x_max = [ 6  6 ];      % Plot region upper bound parameters
</pre>
====A Quick Quadratic Programming solution====
The quickest way to solve this problem is to define the following Matlab statements using the TOMLAB  format:
'''File: '''tomlab/usersguide/qpTest1.m
<pre>
qpExample;
Prob = qpAssign(F, c,  A,  b_L,  b_U, x_L,  x_U, x_0,  'qpExample');
Result  = tomRun('qpSolve', Prob,  1);
</pre>
The solution of this problem gives the following output to the screen.
'''File: '''tomlab/usersguide/qpTest1.out
<pre>
===== * * * ===================================================================  * * *
TOMLAB  /SOL + /CGO  + /Xpress  MEX  + /CPLEX  Parallel 2-CPU  + 21 more - Tomlab Optimizat
=====================================================================================
Problem:  --- 1:  qpExample                     f_k -0.027777777777777790
Solver: qpSolve. EXIT=0. INFORM=1.
Active set  strategy
Optimal  point  found
First order  multipliers >= 0
Iter    4
CPU  time: 0.046800 sec.  Elapsed  time: 0.037000 sec.
</pre>
qpAssign is used to define the standard Prob structure, which TOMLAB always uses to store all information about a problem. The three last parameters could be left out. The upper bounds default to Inf, and the problem name is only used in the printout in ''PrintResult'' to make the output nicer to read. If x_0, the initial value, is left out, an initial point is found by ''qpSolve'' solving a feasible-point (Phase I) linear programming problem calling the TOMLAB ''lpSimplex'' solver. In fact, the output shows that the given x_0 = (0, -1)^T was rejected because it was infeasible, and instead a Phase I solution led to the initial point x_0 = (0, 0)^T.
===Mixed-Integer Programming Problems===
This section describes how to solve mixed-integer programming problems efficiently using TOMLAB. To illustrate the solution of MIPs, consider the simple knapsack 0/1 test problem ''Weingartner 1'', which has 28 binary variables and two knapsacks. The problem is defined as
<math>
\label{defKS1}
\begin{array}{ccccl}
\min\limits_{x} & c^T x \\
&  \\
s/t & \begin{array}{lcccl} 0& \leq  & x    & \leq & 1, \\ &      & A x  & = & b, \\
\end{array}
\end{array}
</math>
where <math>b=(600, 600)^T</math>,
<math>
\begin{array}{rl}
c = -( & 1898,\ 440,\ 22507,\ 270,\ 14148,\ 3100,\ 4650,\ 30800,\ 615,\ 4975,\ 1160,\ 4225,\ 510,\ 11880, \\
       & 479,\ 440,\ 490,\ 330,\ 110,\ 560,\ 24355,\ 2885,\ 11748,\ 4550,\ 750,\ 3720,\ 1950,\ 10500\ )^T
\end{array}
</math>
and the A matrix is
<math>
A = \left(
\begin{array}{cccccccccccccc}
45 & 0 & 85 & 150 & 65 & 95 & 30 & 0 & 170 & 0 & 40 & 25 & 20 & 0 \\
0 & 25 & 0 & 0 & 25 & 0 & 165 & 0 & 85 & 0 & 0 & 0 & 0 & 100 \\
30 & 20 & 125 & 5 & 80 & 25 & 35 & 73 & 12 & 15 & 15 & 40 & 5 & 10 \\
10 & 12 & 10 & 9 & 0 & 20 & 60 & 40 & 50 & 36 & 49 & 40 & 19 & 150 \\
\end{array}
\right)
</math>
where each of the two 28-element rows of ''A'' is displayed wrapped over two lines.
The following statements define this problem in Matlab using the TOMLAB  format:
'''File: '''tomlab/usersguide/mipExample.m
<pre>
Name='Weingartner 1 - 2/28  0-1  knapsack';
%  Problem formulated as a minimum  problem
A = [ 45 0 85 150 65 95 30 0 170 0 ...
      40 25 20 0 0 25 0 0 25 0 ...
      165 0 85 0 0 0 0 100 ; ...
      30 20 125 5 80 25 35 73 12 15 ...
      15 40 5 10 10 12 10 9 0 20 ...
      60 40 50 36 49 40 19 150];
b_U = [600;600]; % 2 knapsack capacities
c = [1898 440 22507 270 14148 3100 4650 30800 615 4975 ...
    1160 4225 510 11880 479 440 490 330 110 560 ...
    24355 2885 11748 4550 750 3720 1950 10500]'; % 28 weights
%  Make  problem  on standard  form  for  mipSolve
[m,n] = size(A);
A = [A  eye(m,m)];
c = [-c;zeros(m,1)]; %  Change  sign  to  make  a minimum  problem
[mm nn] = size(A);
x_L = zeros(nn,1);
x_U = [ones(n,1);b_U];
x_0 = [zeros(n,1);b_U];
fprintf('Knapsack problem.  Variables %d. Knapsacks %d\n',n,m);
fprintf('Making standard  form  with %d  variables\n',nn);
%  All  original variables should  be integer, but  also  slacks in this case
IntVars = nn; %  Could also  be set  as:  IntVars=1:nn; or  IntVars=ones(nn,1);
x_min = x_L; 
x_max = x_U;
f_Low = -1E7; %  f_Low <= f_optimal must hold
n = length(c);
b_L = b_U;
f_opt = -141278;
</pre>
The quickest way to solve this problem is to define the following Matlab statements:
'''File: '''tomlab/usersguide/mipTest1.m
<pre>
mipExample;
nProblem  = 7;  %  Use the  same  problem  number as in mip_prob.m
fIP   = []; %  Do  not  use any prior knowledge
xIP   = []; %  Do  not  use any prior knowledge
setupFile = []; %  Just  define the  Prob structure, not  any permanent setup  file
x_opt   = []; %  The optimal integer solution is not  known
VarWeight = []; %  No variable priorities, largest fractional part will  be used
KNAPSACK  = 0; %  First run  without the  knapsack heuristic
Prob = mipAssign(c, A,  b_L,  b_U, x_L,  x_U, x_0,  Name, setupFile, nProblem,...
                IntVars, VarWeight,  KNAPSACK,  fIP, xIP, ...
                f_Low,  x_min,  x_max, f_opt, x_opt);
Prob.Solver.Alg       = 2;    %  Depth First, then  Breadth  (Default Depth First)
Prob.optParam.MaxIter = 5000;  %  Must increase iterations from  default 500
Prob.optParam.IterPrint = 0;
Prob.PriLev = 1;
Result = tomRun('mipSolve', Prob,  0);
%  ------------------------------------------------
%  Add priorities on the  variables
%  ------------------------------------------------
VarWeight = c;
%  NOTE.  Prob is  already defined, could  skip  mipAssign,  directly set:
%  Prob.MIP.VarWeight=c;
Prob = mipAssign(c, A,  b_L,  b_U, x_L,  x_U, x_0,  Name, setupFile, nProblem,...
                IntVars, VarWeight,  KNAPSACK,  fIP, xIP, ...
                f_Low,  x_min,  x_max, f_opt, x_opt);
Prob.Solver.Alg = 2;          % Depth First, then  Breadth  search
Prob.optParam.MaxIter = 5000; % Must increase number of  iterations
Prob.optParam.IterPrint = 0;
Prob.PriLev = 1;
Result = tomRun('mipSolve', Prob,  0);
%  ----------------------------------------------
%  Use the  knapsack heuristic, but  not  priorities
%  ----------------------------------------------
KNAPSACK = 1;  VarWeight = [];
%  NOTE.  Prob is  already defined,  could  skip  mipAssign,  directly set:
%  Prob.MIP.KNAPSACK=1;
%  Prob.MIP.VarWeight=[];
Prob = mipAssign(c, A,  b_L,  b_U, x_L,  x_U, x_0,  Name, setupFile, ...
                    nProblem,  IntVars, VarWeight,  KNAPSACK,  fIP, xIP, ...
                    f_Low,  x_min,  x_max, f_opt, x_opt);
Prob.Solver.Alg = 2; % Depth First, then  Breadth  search
Prob.optParam.IterPrint = 0;
Prob.PriLev = 1;
Result = tomRun('mipSolve', Prob,  0);
%  --------------------------------------------------
%  Now  use both  the  knapsack heuristic and priorities
%  --------------------------------------------------
VarWeight = c; KNAPSACK = 1;
% NOTE.  Prob is  already defined,  could  skip  mipAssign,  directly set:
% Prob.MIP.KNAPSACK=1;
% Prob.MIP.VarWeight=c;
Prob = mipAssign(c, A,  b_L,  b_U, x_L,  x_U, x_0,  Name, setupFile, nProblem,...
                IntVars, VarWeight,  KNAPSACK,  fIP, xIP, ...
                f_Low,  x_min,  x_max, f_opt, x_opt);
Prob.Solver.Alg = 2; % Depth First, then  Breadth  search
Prob.optParam.IterPrint = 0;
Prob.PriLev = 1;
Result = tomRun('mipSolve', Prob,  0);
</pre>
To make it easier to see all variable settings, the first lines define the needed variables. Several of them are just empty arrays, and could be set as empty directly in the call to ''mipAssign''. ''mipAssign'' is used to define the standard ''Prob'' structure, which TOMLAB always uses to store all information about a problem. Once ''mipAssign'' has defined the structure ''Prob'', it is easy to set or change fields in the structure. The solver ''mipSolve'' uses one of three different strategies to search the branch-and-bound tree. The default is the ''Depth first'' strategy, which is also obtained by setting the field ''Solver.Alg'' to zero. Setting the field to one gives the ''Breadth first'' strategy, and setting it to two gives the ''Depth first, then breadth search'' strategy. In the example our choice is the last strategy. The number of iterations may be large, so the maximal number of iterations must be increased via the field ''optParam.MaxIter''.
The tests show two ways to improve the convergence of MIP problems. One is to define a priority order in which the different non-integer variables are selected as variables to branch on. The field ''MIP.VarWeight'' is used to set priority numbers for each variable. Note that the lower the number, the higher the priority. In our test case the coefficients of the objective function are used as priority weights. The other way to improve convergence is to use a heuristic. For binary variables a simple knapsack heuristic is implemented in ''mipSolve''. Setting the field ''MIP.KNAPSACK'' as true instructs ''mipSolve'' to use the heuristic.
Running the four different tests on the knapsack problem gives the following output to the screen:
'''File: '''tomlab/usersguide/mipTest1.out
<pre>
mipTest1
Knapsack problem.  Variables 28.  Knapsacks 2
Branch and bound.  Depth First, then  Breadth.
--- Branch &  Bound converged!  Iterations (nodes  visited) =    714
Total LP Iterations =      713
Optimal  Objective function  = -141278.0000000000000000
x: 0 0 1 -0 1 1 1 1 0 1 0 1 1 1
  0 0 0  0 1 0 1 0 1 1 0 1 0 0
B: -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
Branch & bound. Depth First, then Breadth. Priority weights.
--- Branch & Bound converged!
Iterations (nodes visited) =    470
Total LP Iterations        =    469
Optimal  Objective function  = -141278.0000000000000000
x: 0 0 1 -0 1 1 1 1 0 1 0 1 1 1
  0 0 0  0 1 0 1 0 1 1 0 1 0 0
B: -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
Branch and bound.  Depth First, then  Breadth. Knapsack Heuristic.
Found new BEST  Knapsack. Nodes left 0.  Nodes deleted 0.
Best  IP  function value -139508.0000000000000000
Found new BEST  Knapsack. Nodes left 1.  Nodes deleted 0.
Best  IP  function value -140768.0000000000000000
Found new BEST  Knapsack. Nodes left 4.  Nodes deleted 0.
Best  IP  function value -141278.0000000000000000
--- Branch &  Bound converged!  Iterations (nodes  visited) =      96
Total LP Iterations =        95
Optimal  Objective function  = -141278.0000000000000000
x: 0 0 1 -0 1 1 1 1 0 1 0 1 1 1
  0 0 0  0 1 0 1 0 1 1 0 1 0 0
B:  1 1 -1 1 -1 -1 -1 -1 1 -1 0 -1 -1 -1 1 1 1 1 -1 1 -1 0 -1 -1 1 -1 -1 1
Branch &  bound.  Depth First, then  Breadth. Knapsack heuristic. Priority weights.
Found new BEST  Knapsack. Nodes left 0.  Nodes deleted 0.
Best  IP  function value -139508.0000000000000000
Found new BEST  Knapsack. Nodes left 1.  Nodes deleted 0.
Best  IP  function value -140768.0000000000000000
Found new BEST  Knapsack. Nodes left 4.  Nodes deleted 0.
Best  IP  function value -141278.0000000000000000
--- Branch &  Bound converged!  Iterations (nodes  visited) = 94
Total LP Iterations = 93
Optimal  Objective function  = -141278.0000000000000000
x: 0 0 1 -0 1 1 1 1 0 1 0 1 1 1
  0 0 0  0 1 0 1 0 1 1 0 1 0 0
B:  1 1 -1 1 -1 -1 -1 -1 1 -1 0 -1 -1 -1 1 1 1 1 -1 1 -1 0 -1 -1 1 -1 -1 1
diary off
</pre>
Note that there is a large difference in the number of iterations when the additional heuristic and priorities are used. Similar results are obtained when running the other two tree-search strategies.
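The reported solution is easy to verify independently of the solver. The following base-Matlab sketch (assuming ''c'', ''A'' and ''b_U'' hold the original data from mipExample.m, before the sign change and the slack columns are added) recomputes the objective and checks both knapsack capacities for the binary solution printed above:

<pre>
% x taken from the printout above (the 28 original binary variables)
x = [0 0 1 0 1 1 1 1 0 1 0 1 1 1 0 0 0 0 1 0 1 0 1 1 0 1 0 0]';
f = -c'*x;          % Minimization objective; should equal the reported -141278
slack = b_U - A*x;  % Both knapsack constraints should hold, i.e. slack >= 0
fprintf('f = %d, slacks = %d %d\n', f, slack(1), slack(2));
</pre>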
==Solving Unconstrained and Constrained Optimization Problems==
This section describes how to define and solve unconstrained and constrained optimization problems. Several examples show how to proceed, depending on whether a quick solution is wanted or more advanced runs are needed. See Appendix E for more information on the available external solvers.
All demonstration examples that use the TOMLAB format are collected in the directory ''examples''. It is also possible to run each example separately. The examples relevant to this section are ''ucDemo'' and ''conDemo''. The full path to these files is always given in the text. Throughout this section the test problem ''Rosenbrock's banana'',
<math>
\label{rosen}
\begin{array}{ll}
\min\limits_{x} & f(x)=\alpha\left(x_{2}-x_{1}^{2}\right)^{2}+\left(1-x_{1} \right)^{2} \\
s/t & \begin{array}{lcccl}
-10 & \leq  & x_1  & \leq &2 \\
-10 & \leq  & x_2  & \leq &2 \\
\end{array}
\end{array}
</math>
is used to illustrate the solution of unconstrained problems. The standard value is <math>\alpha = 100</math>. In this formulation simple bounds are added on the variables, and constraints are also added for illustrative purposes. This problem is called ''RB BANANA'' in the following descriptions to avoid confusing it with problems already defined in the problem definition files.
=== Defining the Problem in Matlab m-files===
TOMLAB  demands that the general nonlinear problem is defined in Matlab m-files, and not given as an input text string. A file defining the function to be optimized must always be supplied. For linear constraints the constraint coefficient matrix and the right hand side vector are given directly. Nonlinear constraints are defined in a separate file.  First order derivatives and second order derivatives, if available, are stored in separate files, both function derivatives and constraint derivatives.
TOMLAB is compatible with MathWorks Optimization TB, which in various ways requires functions, derivatives and constraints to be returned by the same function. TOMLAB handles all this by the use of interface routines, hidden from the user.
It is generally recommended to use the TOMLAB format instead, because having defined the files in this format, all MathWorks Optimization TB solvers are accessible through the TOMLAB  multi-solver driver routines.
The rest of this section shows how to make the m-files for the cases of unconstrained and constrained optimization. In Section 6.2 and onwards similar m-files are used to solve unconstrained optimization using the TOMLAB  format.
The simplest way to write the m-file that computes the function value is shown for the example in (17), assuming <math>\alpha = 100</math>:
'''File: '''tomlab/usersguide/rbbs_f.m
<pre>
%  rbbs_f  - function value  for Constrained  Rosenbrocks Banana
%
%  function f = rbbs_f(x)
function f = rbbs_f(x)
f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
</pre>
When running TOMLAB it is recommended to use a more general format for the m-files, adding one extra parameter, the ''Prob'' problem definition structure described in detail in Appendix A. TOMLAB handles the simpler format as well, but then the more advanced features of TOMLAB cannot be used.
If this extra parameter is used, any information needed in the low-level function computation routine may be sent as fields in this structure. For single parameter values, like the <math>\alpha</math> parameter in the example above, the field ''Prob.user'' is recommended.
Using the above convention, the new m-file for the example in (17) is defined as
'''File: '''tomlab/usersguide/rbb_f.m
<pre>
%  rbb_f  - function value  for Rosenbrocks Banana, Problem RB  BANANA
%
%  function f = rbb_f(x, Prob)
function f = rbb_f(x, Prob)
alpha = Prob.user.alpha;
f = alpha*(x(2)-x(1)^2)^2 + (1-x(1))^2;
</pre>
The value in the field ''Prob.user'' is the <math>\alpha</math> value. It is defined before calling the solver by explicitly setting the field in the ''Prob'' structure. In a similar way the gradient routine is defined as
'''File: '''tomlab/usersguide/rbb_g.m
<pre>
%  rbb_g  - gradient vector for Rosenbrocks Banana, Problem RB  BANANA
%
%  function g = rbb_g(x, Prob)
function g = rbb_g(x, Prob)
alpha = Prob.user.alpha;
g = [ -4*alpha*x(1)*(x(2)-x(1)^2)-2*(1-x(1));  2*alpha*(x(2)-x(1)^2) ];
</pre>
If the gradient routine is not supplied, TOMLAB  will use finite differences (or automatic differentiation) if the gradient vector is needed for the particular solver. In this case it is also easy to compute the Hessian matrix, which gives the following code
'''File: '''tomlab/usersguide/rbb_H.m
<pre>
%  rbb_H - Hessian  matrix for Rosenbrocks Banana, Problem RB  BANANA
%
%  function H = rbb_H(x, Prob)
function H = rbb_H(x, Prob)
alpha = Prob.user.alpha;
H  = [ 12*alpha*x(1)^2-4*alpha*x(2)+2 , -4*alpha*x(1); -4*alpha*x(1), 2*alpha ];
</pre>
If the Hessian routine is not supplied, TOMLAB will use finite differences (or automatic differentiation) if the Hessian matrix is needed for the particular solver. Often a positive-definite approximation of the Hessian matrix is estimated during the optimization, and the second derivative routine is then not used.
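Before relying on analytic derivatives, it is good practice to compare them with finite differences. The following base-Matlab sketch (using the routines ''rbb_f'' and ''rbb_g'' defined above; the check itself is not part of the TOMLAB user files) compares the analytic gradient against a central-difference estimate at an arbitrary test point:

<pre>
Prob.user.alpha = 100;        % The parameter used by the routines
x = [-1.2; 1];                % Any test point
g = rbb_g(x, Prob);           % Analytic gradient
h = 1e-6; gfd = zeros(2,1);
for i = 1:2
    e = zeros(2,1); e(i) = h;
    gfd(i) = (rbb_f(x+e, Prob) - rbb_f(x-e, Prob)) / (2*h);  % Central difference
end
fprintf('Max gradient difference: %e\n', max(abs(g - gfd)));  % Should be small
</pre>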
If the constraints defined for the example in (17) are used, then a constraint routine needs to be defined for the single nonlinear constraint, in this case
'''File: '''tomlab/usersguide/rbb_c.m
<pre>
%  rbb_c  - nonlinear constraint vector for  Rosenbrocks Banana, Problem RB  BANANA
%
%  function cx = rbb_c(x, Prob)
function cx = rbb_c(x, Prob)
cx = -x(1)^2 - x(2);
</pre>
The constraint Jacobian matrix is also of interest and is defined as
'''File: '''tomlab/usersguide/rbb_dc.m
<pre>
%  rbb_dc  - nonlinear constraint  gradient matrix
%  for Rosenbrocks Banana, Problem RB  BANANA
%
%  function dc = rbb_dc(x, Prob)
function dc = rbb_dc(x, Prob)
%  One row for each constraint, one column for each variable.
dc = [-2*x(1),-1];
</pre>
If the constraint Jacobian routine is not supplied, TOMLAB  will use finite differences (or automatic differentiation) to estimate the constraint Jacobian matrix if it is needed for the particular solver.
The solver ''nlpSolve'' also uses the second derivatives of the constraint vector. The result is returned as a weighted sum of the second derivative matrices with respect to each constraint vector element, the weights being the Lagrange multipliers supplied as input to the routine. For the example problem the routine is defined as
'''File: '''tomlab/usersguide/rbb_d2c.m
<pre>
%  rbb_d2c  - The second part of  the  Hessian  to  the  Lagrangian  function for the
%  nonlinear constraints for Rosenbrocks Banana, Problem RB  BANANA,  i.e.
%
%  lam' * d2c(x)
%
%  in
%
%  L(x,lam) =    f(x) - lam'  * c(x)
%  d2L(x,lam) = d2f(x) - lam'  * d2c(x)  = H(x)  - lam'  * d2c(x)
%
%  function d2c=rbb_d2c(x, lam,  Prob)
function  d2c=rbb_d2c(x, lam,  Prob)
%  The only  nonzero  element  in the  second derivative matrix for the  single
%  constraint is the  (1,1)  element, which  is a constant -2.
d2c = lam(1)*[-2 0;  0 0];
</pre>
====Communication between user routines====
It is often the case that mathematical expressions that occur in the function computation are also part of the gradient and Hessian computation. If these operations are costly, it is natural to avoid recomputing them and instead reuse the results when computing the gradient and Hessian.
The function routine is always called before the gradient routine in TOMLAB, and the gradient routine is always called before the Hessian routine. The constraint routine is similarly called before the computation of the constraint gradient matrix.  However, the TOM solvers call the function before the constraint routine, but the SOL solvers do the reverse.
Thus it is safe to use global variables to communicate information from the function routine to the gradient and Hessian routines, and similarly from the constraint routine to the constraint gradient routine. Any non-conflicting name could be used as a global variable, see Table 154 in Appendix D to find out which names are in use. However, the recommendation is to always use the predefined global variable ''US_A'' for this communication. TOMLAB is designed to handle recursive calls, and any use of new global variables may cause conflicts. The variable ''US_A'' (and also ''US_B'') is automatically saved in a stack, and any level of recursion may safely be used. The user is free to use ''US_A'' both as a variable and as a structure. If much information is to be communicated, defining ''US_A'' as a structure makes it possible to send any amount of information between the user routines.
In the ''examples'' directory the constrained optimization example in ''conDemo'' uses the defined functions ''con1_f'', ''con1_g'' and ''con1_H''. They include an example of communicating one exponential expression between the routines.
The ''lsDemo'' example file in the ''examples'' directory communicates two exponential expressions between ''ls1_r'' and ''ls1_J'' with the use of ''US_A'' and ''US_B''. In ''ls1_r'' the main part is
<pre>
...
global US_A  US_B
t  = Prob.LS.t(:);
%  Exponential computations  takes  time, and may  be done once,  and
%  reused  when computing  the  Jacobian
US_A  = exp(-x(1)*t);
US_B  = exp(-x(2)*t);
r = K*x(1)*(US_B  - US_A) / (x(3)*(x(1)-x(2))) - Prob.LS.y;
</pre>
In ''ls1_J'' the saved values ''US_A'' and ''US_B'' are then used
<pre>
...
global US_A  US_B
%  Pick  up the  globally saved exponential computations
e1 = US_A;
e2 = US_B;
%  Compute  the  three  columns in the  Jacobian, one for each variable
J = a * [ t.*e1+(e2-e1)*(1-1/b), -t.*e2+(e2-e1)/b, (e1-e2)/x(3) ];
</pre>
For more discussions on global variables and the use of recursive calls in TOMLAB, see Appendix D.
The following sections describe how to set up problems in TOMLAB and use the defined m-files. First comes the simplest way, using the TOMLAB format.
===Unconstrained Optimization Problems===
The use of the TOMLAB format is best illustrated by examples.
The following is the first example in the ''ucDemo ''demonstration file.  It shows an example of making a call to ''conAssign ''to create a structure in the TOMLAB  format, and solve the problem with a call to ''ucSolve''.
<pre>
%  ---------------------------------------------------------------------
function uc1Demo
%  ---------------------------------------------------------------------
format  compact
fprintf('=====================================================\n');
fprintf('Rosenbrocks banana with  explicit f(x),  g(x) and H(x)\n');
fprintf('=====================================================\n');
Name = 'RB Banana';
x_0 = [-1.2 1]';     %  Starting values  for the  optimization.
x_L = [-10;-10];     %  Lower bounds for x.
x_U = [2;2]; %  Upper bounds for x.
fLowBnd = 0;         %  Lower bound on function.
% Generate the problem structure using the TOMLAB format (short call)
Prob = conAssign('uc1_f', 'uc1_g',  'uc1_H', [], x_L,  x_U, Name, ...
                x_0,  [], fLowBnd);
Result    = tomRun('ucSolve', Prob,  1);
</pre>
In its more general form ''conAssign'' is used to define constrained problems. It also takes as input the nonzero pattern of the Hessian matrix, stored in the matrix ''HessPattern''. In this case all elements of the Hessian matrix are nonzero, and ''HessPattern'' is either set as empty or as a matrix with all ones. The parameter ''pSepFunc'' should also be set. It defines whether the objective function is partially separable, see Section 14.5. If this parameter is empty (the default), the property is not used. In the above example the call would be
<pre>
...
HessPattern  = ones(2,2); %  The pattern of  nonzeros
pSepFunc = [];         %  No  partial separability used
%  conAssign  is used to  generate  the  TOMLAB  problem  structure
Prob = conAssign('uc1_f',  'uc1_g',  'uc1_H', HessPattern, ...
                    x_L,  x_U, Name, x_0,  pSepFunc, fLowBnd);
...
</pre>
Also see the other examples in ''ucDemo'' on how to solve the problem when gradient routines are not present and numerical differentiation must be used. An example of how to solve a sequence of problems is also presented.
If it is not possible to define the gradient, one just sets the corresponding gradient function name empty.
The example ''uc3Demo'' in file ''ucDemo'' shows how to solve a sequence of problems in TOMLAB, in this case changing the steepness parameter <math>\alpha</math> in (17). It is important to point out that it is only necessary to define the ''Prob'' structure once and then just change the varying parameters, in this case the <math>\alpha</math> value. The <math>\alpha</math> value is sent to the user routines using the field ''user'' in the ''Prob'' structure. Any field in the ''Prob'' structure that does not conflict with the predefined fields could be used. In this example a vector of ''Result'' structures is saved for later post-processing.
<pre>
%  ---------------------------------------------------------------------
function uc3Demo % Sequence  of  Rosenbrocks banana functions
%  ---------------------------------------------------------------------
%  conAssign  is used to  generate  the  TQ  problem  structure
%  Prob = conAssign(f,g,H,  HessPattern, x_L,  x_U, Name, x_0,  pSepFunc, fLowBnd);
Prob = conAssign('uc3_f',[],[],[],[-10;-10], [2;2], 'RB Banana', [-1.2;1],[],0);
%  The different steepness  parameters  to  be tried
Steep = [100  500 1000 10000];
for i = 1:4
    Prob.user.alpha = Steep(i);
    Result(i) = tomRun('ucSolve', Prob,  1);
end
</pre>
===Direct Call to an Optimization Routine===
When wanting to solve a problem by a direct call to an optimization routine, there are two possible ways of doing it. The difference is in the way the problem dependent parameters are defined. The most natural way is to use an Init File, like the predefined TOMLAB Init Files ''o_prob'' (e.g. ''uc_prob'' if the problem is of the type unconstrained), to define those parameters. The other way is to call the routine ''conAssign''. In this subsection, examples of the two different approaches are given.
First, solve the problem ''RB BANANA'' in (17) as an unconstrained problem. In this case, define the problem in the files ''ucnew_prob'', ''ucnew_f'', ''ucnew_g'' and ''ucnew_H''. Using the problem definition files in the working directory, solve the problem and print the result by the following calls.
'''File: '''tomlab/usersguide/ucnewSolve1.m
<pre>
probFile = 'ucnew_prob';          %  Problem definition file.
P  = 18;                          %  Problem number for the  added RB  Banana.
Prob = probInit(probFile, P);   %  Setup Prob structure.
Result  = tomRun('ucSolve', Prob,  1);
</pre>
Now, solve the same problem as in the example above, but define the parameters ''x_0'', ''x_L'' and ''x_U'' by calling the routine ''conAssign''. Note that in this case the file ''ucnew_prob'' is not used, only the files ''ucnew_f'' and ''ucnew_g''. The file ''ucnew_H'' is not needed because a quasi-Newton BFGS algorithm is used.
'''File: '''tomlab/usersguide/ucnewSolve2.m
<pre>
x_0 = [-1.2;1];           %  Starting values for the optimization.
x_L = [-10;-10];           %  Lower bounds for x.
x_U    = [2;2];                  %  Upper bounds for x.
Prob = conAssign('ucnew_f','ucnew_g', [], [], x_L,  x_U,...
                'ucNew', x_0);
Prob.P = 18;                   %  Problem number.
Prob.Solver.Alg=1;           %  Use quasi-Newton  BFGS
Prob.user.uP = 100;           %  Set  alpha  parameter
Result = tomRun('ucSolve',Prob,1);
</pre>
===Constrained Optimization Problems===
Study the following constrained exponential problem, ''Exponential problem III'',
<math>
\label{ExpIII}
\begin{array}{ll}
\min\limits_{x} & f(x)=
  exp(x_1) (4 x_1^2+ 2 x_2^2+ 4 x_1 x_2+2 x_2+1)
\\
s/t &
\begin{array}{lcccl}
-10 & \leq  & x_1  & \leq &10 \\-10 & \leq  & x_2  & \leq &10 \\  0 & \leq  & x_1 + x_2  & \leq &0 \\  1.5 & \leq  & -x_1 x_2+x_1+x_2 &      &  \\  -10 & \leq  & x_1 x_2 &      &  \\
\end{array}
\end{array}.
</math>
The first two constraints are simple bounds, the third is a linear equality constraint, because lower and upper bounds are the same. The last two constraints are nonlinear inequality constraints.  To solve the problem, define the following statements, available as ''con1Demo  ''in file ''conDemo''.
<pre>
Name    = 'Exponential problem III';
A      = [1  1]; % One    linear constraint
b_L = 0; % Lower bound on linear constraint
b_U = 0; % b_L == b_U implies equality
c_L = [1.5;-10]; % Two nonlinear inequality constraints
c_U = []; % Empty means  Inf (default) for the  two constraints
x_0 = [-5;5]; % Initial value  for  x
x_L = [-10;-10]; % Lower bounds on x
x_U = [10;10]; % Upper bounds on x
fLowBnd = 0; % A lower bound on the optimal function value
x_min = [-2;-2]; % Used for plotting, lower  bounds
x_max    = [4;4];      % Used for plotting, upper  bounds
x_opt = [-3.162278, 3.162278; -1.224745, 1.224745]; % Two stationary points
f_opt = [1.156627; 1.8951];
HessPattern  = [];      % All elements  in Hessian  are  nonzero.
ConsPattern  = [];      % All elements  in the  constraint Jacobian  are  nonzero.
pSepFunc    = [];      % The function f is not  defined as separable
%  Generate the  problem  structure using  the  TOMLAB  format
Prob = conAssign('con1_f',  'con1_g',  'con1_H', HessPattern, x_L,  x_U, ...
      Name, x_0,  pSepFunc, fLowBnd, A,  b_L,  b_U, 'con1_c', 'con1_dc', ...
      [], ConsPattern, c_L,  c_U, x_min,  x_max, f_opt, x_opt);
Result = tomRun('conSolve',Prob); PrintResult(Result);
</pre>
The following example, ''con2Demo'' in file ''conDemo'', illustrates numerical estimates of the gradient and the constraint Jacobian matrix. Only the statements that differ from the previous example are given. Note that the gradient routine is not given at all, but the constraint Jacobian routine is given. Setting ''Prob.ConsDiff'' greater than zero will overrule the use of the constraint Jacobian routine. The solver ''conSolve'' is in this case called directly.
<pre>
%  Generate the  problem  structure using  the  TOMLAB  format
Prob = conAssign('con1_f', [], [], HessPattern, x_L,  x_U, Name, x_0, ...
                pSepFunc, fLowBnd, A,  b_L,  b_U, 'con1_c', 'con1_dc', [], ...
                ConsPattern, c_L,  c_U, x_min,  x_max, f_opt, x_opt);
Prob.NumDiff  = 1;      %  Use standard  numerical differences
Prob.ConsDiff = 5;      %  Use the  complex variable method to estimate derivatives
Prob.Solver.Alg = 0;    %  Use default algorithm in conSolve
Result    = tomRun('conSolve', Prob,  1);
</pre>
The third example, ''con3Demo'' in file ''conDemo'', shows how to solve the same problem for a number of different initial values of x. The initial values are stored in the matrix ''X0'', and in each loop step ''Prob.x_0'' is set to one of the columns of ''X0''. In a similar way any of the values in the ''Prob'' structure may be changed in a loop step, e.g. if the loop is part of a control loop. The ''Prob'' structure only needs to be defined once. The different initial points reveal that this problem is nasty, and that several points fulfill the convergence criteria. Only the statements that differ from the previous example are given. A different solver is called depending on which TOMLAB version is used.
<pre>
X0              = [ -1  -5    1  0  -5 ;
                    1 7  -1  0  5];
%  Generate the  problem  structure using  the  TOMLAB  format
Prob = conAssign('con1_f',  'con1_g',  'con1_H', HessPattern, x_L,  x_U, Name, ...
                X0(:,1), pSepFunc, fLowBnd, A,  b_L,  b_U, 'con1_c', 'con1_dc',...
                [], ConsPattern, c_L,  c_U, x_min,  x_max, f_opt, x_opt);
Prob.Solver.Alg = 0;
TomV            = tomlabVersion;
for i = 1:size(X0,2)
  Prob.x_0  = X0(:,i);
  if TomV(1:1) ~= 'M'
      % Users of v3.0 may instead call MINOS (or SNOPT, NPSOL in v3.0 /SOL)   
      Result = tomRun('minos',Prob, 2);
  else
      Result = tomRun('conSolve',Prob, 2);
  end
end
</pre>
The constrained optimization solvers all have proven global convergence to a local minimum. If the problem is not convex, it is always difficult to assure that a global minimum has been reached. One way to make it more likely that the global minimum is found is to optimize many times with different initial values. The fifth example, ''con5Demo'' in file ''conDemo'', illustrates this approach by solving the exponential problem 50 times with randomly generated initial points.
If the number of variables is not too large, say fifteen, another approach is to use a global optimization solver like ''glcSolve'' to crunch the problem and search for the global minimum. If it is allowed to run long enough, it is very likely to find the global minimum, but maybe not with high precision. To run ''glcSolve'' the problem must be box-bounded, and the advice is to try to squeeze the size of the box down as much as possible. The sixth example, ''con6Demo'' in file ''conDemo'', illustrates a call to ''glcSolve''. It is very simple to do this call if the problem has been defined in the TOMLAB format. The statements needed are the following
<pre>
Prob.optParam.MaxFunc = 5000; % Define  maximal number of  function evaluations
Result                = tomRun('glcSolve',Prob,2);
</pre>
A more clever approach, using warm starts and successive checks on the best function value obtained, is discussed in Section 7. It is also better to use ''glcAssign'' and not ''conAssign'' if the intention is to use global optimization.
===Efficient use of the TOMLAB solvers===
To follow the iterations in the TOMLAB  Base Module solvers, it is useful to set the ''IterPrint ''parameter as true. This gives one line of information for each iteration.  This parameter is part of the ''optParam ''subfield:
<pre>
Prob.optParam.IterPrint = 1;
</pre>
Note that ''ucSolve'' implements a whole set of methods for unconstrained optimization. If the user explicitly wants Newton's method to be used, utilizing second order information, then set
<pre>
Prob.Solver.Alg=1;                    %  Use Newtons Method
</pre>
But ''ucSolve'' will switch to the default BFGS method if no routine has been given to return the Hessian matrix. If the user still wants to run Newton's method, then a Hessian routine must be defined that returns an empty Hessian. That triggers a numerical estimation of the Hessian. Do ''help ucSolve'' to see the different algorithmic options and other comments on how to run the solver.
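For example, a dummy Hessian routine that returns an empty matrix can be written as follows (a sketch; the file name ''rbb_Hempty'' is arbitrary and not part of the TOMLAB distribution), and its name passed as the Hessian argument in the call to ''conAssign'':

<pre>
%  rbb_Hempty - dummy Hessian routine for Rosenbrocks Banana
%
%  function H = rbb_Hempty(x, Prob)
function H = rbb_Hempty(x, Prob)
H = [];   % Empty Hessian triggers numerical estimation in ucSolve
</pre>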
Both ''ucSolve'' and ''conSolve'' use line search based methods. The parameter <math>\sigma</math> influences the accuracy of the line search in each step. The default value is
<pre>
Prob.LineParam.sigma  = 0.9;    %  Inaccurate line  search
</pre>
However, the conjugate gradient methods in ''ucSolve'' benefit from a more accurate line search
<pre>
Prob.LineParam.sigma  = 0.01;    %  Default accurate line  search  for C-G methods
</pre>
as do quasi-Newton DFP methods (default <math>\sigma = 0.2</math>). The tests for the last two cases are made against <math>\sigma = 0.9</math>. If the user really wants these methods to use <math>\sigma = 0.9</math>, the value must be set slightly differently to fool the test:
<pre>
Prob.LineParam.sigma  = 0.9001;  %  Avoid  the  default value  for C-G methods
</pre>
The choice of line search interpolation method, quadratic or cubic, is also important. The default is cubic interpolation.
<pre>
Prob.LineParam.LineAlg = 1;  %  0 = quadratic, 1 = cubic
</pre>
==Solving Global Optimization Problems==
Global optimization deals with optimization problems that may have more than one local minimum. Finding the global minimum among a set of local minima demands other types of methods than those used for finding a local minimum. The TOMLAB routines for global optimization are based on using only function or constraint values, and no derivative information. Two different problem types are defined: box-bounded global optimization '''glb '''and global mixed-integer nonlinear programming '''glc'''. In the second case, the problem must still be box-bounded.
All demonstration examples that use the TOMLAB format are collected in the directory ''examples''. Running the menu program ''tomMenu'', it is possible to run all demonstration examples. It is also possible to run each example separately. The examples relevant to this section are ''glbDemo ''and ''glcDemo''.
===Box-Bounded Global Optimization Problems===
Box-bounded global optimization problems are simple to define; only one function routine is needed, because the global optimization routines in TOMLAB do not utilize information about derivatives. To define the ''Shekel 5 ''test problem in a routine ''glb1_f'', the following statements are needed
<pre>
function f = glb1_f(x,  Prob)
A  = [4 4 4 4;  1 1 1 1;  8 8 8 8;  6 6 6 6;  3 7 3 7]';
f=0;    c = [.1 .2  .2  .4  .4]';
for i = 1:5
  z = x-A(:,i);
  f = f - 1/(z'*z + c(i) ); % Shekel  5
end
</pre>
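As a quick sanity check of the objective, the same Shekel 5 function can be evaluated outside TOMLAB. The sketch below (in Python with NumPy, independent of the TOMLAB code above) reproduces the known optimal value ''f'' ≈ -10.1532 near ''x'' = (4, 4, 4, 4):

```python
import numpy as np

# Shekel 5 data, as in glb1_f: columns of A are the five points a_i
A = np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8],
              [6, 6, 6, 6], [3, 7, 3, 7]], dtype=float).T
c = np.array([.1, .2, .2, .4, .4])

def shekel5(x):
    # f(x) = -sum_i 1 / ((x - a_i)'(x - a_i) + c_i)
    z = x[:, None] - A                       # 4x5: x - a_i in each column
    return -np.sum(1.0 / (np.sum(z * z, axis=0) + c))

print(shekel5(np.array([4.0, 4.0, 4.0, 4.0])))   # close to -10.1532
```

The exact global minimizer lies very slightly off (4, 4, 4, 4), but the value there already agrees with the target used later in this section to three decimals.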
To solve the ''Shekel 5 ''test problem define the following statements, available as ''glb1Demo  ''in ''glbDemo''.
<pre>
function glb1Demo
Name      = 'Shekel 5';
x_L = [ 0  0  0  0]';    % Lower bounds in the  box
x_U = [10 10 10 10]'; % Upper bounds in the  box
% Generate the  problem  structure using  the  TOMLAB  format  (short call)
Prob = glcAssign('glb1_f', x_L,  x_U, Name);
Result  = tomRun('glbSolve', Prob,  1); % Solve  using  the  default of  200 iterations
</pre>
If the user knows the optimal function value, or some good approximation of it, it can be set as a target for the optimization; the solver will then stop if the target value is achieved within a relative tolerance. For the ''Shekel 5 ''problem the optimal function value is known and can be set as target value with the following statements.
<pre>
Prob.optParam.fGoal = -10.1532; %  The optimal value  set  as target
Prob.optParam.eps_f =  0.01;    %  Convergence tolerance one percent
</pre>
Convergence will occur if the function value sampled is within one percent of the optimal function value.
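The idea behind this relative test can be sketched as follows (a Python sketch of the principle only; the exact test inside ''glbSolve'' may differ in details such as the handling of ''fGoal'' = 0):

```python
def reached_goal(f, f_goal, eps_f=0.01):
    """Relative convergence test: f is within eps_f of the target f_goal."""
    if f_goal == 0:
        return abs(f) <= eps_f            # fall back to an absolute test
    return f - f_goal <= eps_f * abs(f_goal)

# Shekel 5: a sampled value within one percent of the optimum -10.1532
print(reached_goal(-10.10, -10.1532))     # True
print(reached_goal(-9.50, -10.1532))      # False
```

Since this is minimization, any sampled ''f'' satisfies ''f'' ≥ ''fGoal'', so the one-sided test above is sufficient.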
Without additional knowledge about the problem, like the function value at the optimum, there is no convergence criterion to be used. The global optimization routines continue to sample points until the maximal number of function evaluations or the maximum number of iteration cycles is reached. In practice, it is therefore important to be able to do warm starts, starting once again without having to recompute the past history of iterations and function evaluations. Before doing a new warm start, the user can evaluate the results and decide whether to continue or not. If the best function value has not improved for a long time, chances are good that there is no better function value to be found.
In TOMLAB warm starts are handled automatically; the only thing the user needs to do is set one flag, ''Prob.WarmStart'', to true. The solver ''glbSolve ''defines a binary ''mat''-file called ''glbSave.mat ''to store the information needed for a warm start. It is important to avoid running other problems with this solver when doing warm starts, since the warm start information would then be overwritten. The example ''glb3Demo ''in ''glbDemo ''shows how to do warm starts. The number of iterations per call is set very low to make the process easy to follow.
<pre>
Name      = 'Shekel 5';
x_L      = [ 0  0  0  0]';
x_U      = [10  10  10  10]';
%  Generate the  problem  structure using  the  TOMLAB  format  (short call)
Prob = glcAssign('glb1_f', x_L,  x_U, Name);
Prob.optParam.MaxIter = 5;  % Do only  five iterations per  call
Result = tomRun('glbSolve',Prob,2); 
pause(1)
Prob.WarmStart = 1;  %  Set  the  flag for warm  start
for i = 1:6  %  Do  6 warm  starts
  Result  = tomRun('glbSolve',Prob,2);  pause(1)
end
</pre>
The example ''glb4Demo ''in ''glbDemo ''illustrates how to send parameter values down to the function routine from the calling routine. Change the ''Shekel 5 ''test problem definition so that ''A ''and ''c ''are given as input to the function routine
<pre>
function f = glb4_f(x,  Prob)
%  A  and c info are  sent  using  Prob structure
f = 0;  A  = Prob.user.A; c = Prob.user.c;
for i = 1:5
  z = x-A(:,i);
  f = f - 1/(z'*z + c(i) ); %  Shekel  5
end
</pre>
Then the following statements solve the ''Shekel 5 ''test problem.
<pre>
Name = 'Shekel 5';
x_L = [0  0  0  0]';
x_U = [10 10 10 10]';
%  Generate the  problem  structure using  the  TOMLAB  format  (short call)
Prob = glcAssign('glb4_f', x_L,  x_U, Name);
%  Add information to  be sent  to  glb4_f. Used in f(x) computation
Prob.user.A = [4  4 4 4;1  1 1 1;8  8 8 8;6  6 6 6;3  7 3 7]';
Prob.user.c = [.1 .2  .2  .4  .4]';
Result  = tomRun('glbSolve',Prob,2);
</pre>
===Global Mixed-Integer Nonlinear  Problems===
To solve global mixed-integer nonlinear programming problems with the TOMLAB format, only two routines need to be defined, one routine that defines the function and one that defines the constraint vector. No derivative information is utilized by the TOMLAB solvers. To define the ''Floudas-Pardalos 3.3 ''test problem, one routine ''glc1_f''
<pre>
function f = glc1_f(x,  Prob)
f = -25*(x(1)-2)^2-(x(2)-2)^2-(x(3)-1)^2-(x(4)-4)^2-(x(5)-1)^2-(x(6)-4)^2;
</pre>
and one routine ''glc1_c''
<pre>
function c = glc1_c(x,  Prob)
c = [(x(3)-3)^2+x(4); (x(5)-3)^2+x(6)]; % Two nonlinear constraints (QP)
</pre>
are needed. Below is the example ''glc1Demo ''in ''glcDemo ''that shows how to solve this problem doing ten warm starts. The warm starts are handled automatically; the only thing the user needs to do is set one flag, ''Prob.WarmStart'', to true. The solver ''glcSolve ''defines a binary ''mat''-file called ''glcSave.mat ''to store the information needed for the warm start. It is important to avoid running other problems with ''glcSolve ''when doing warm starts. Otherwise the warm start information will be overwritten with the new problem. The original ''Floudas-Pardalos 3.3 ''test problem has no upper bounds on ''x''<sub>1</sub> and ''x''<sub>2</sub>, but such bounds are implicit from the third linear constraint, ''x''<sub>1</sub> + ''x''<sub>2</sub> &le; 6. This constraint, together with the simple bounds ''x''<sub>1</sub> &ge; 0 and ''x''<sub>2</sub> &ge; 0, immediately leads to ''x''<sub>1</sub> &le; 6 and ''x''<sub>2</sub> &le; 6. Using these inequalities a finite box-bounded problem can be defined.
<pre>
Name = 'Floudas-Pardalos 3.3'; % This  example is number 16 in  glc_prob.m
x_L = [ 0  0 1 0 1 0]';  % Lower bounds on x
A      = [ 1 -3 0 0 0 0
          -1  1 0 0 0 0
            1  1 0 0 0 0];    % Linear constraint matrix
b_L = [-inf -inf 2 ]'; % Lower bounds for linear constraints
b_U = [  2   2 6 ]'; % Upper bounds for linear constraints
x_U = [6  6 5 6 5 10]';  % Upper bounds after x(1),x(2) values  inserted
c_L = [4  4]';       % Lower bounds on two nonlinear constraints
c_U = [];               % Upper bounds are  infinity for nonlinear constraints
x_opt  = [5  1 5 0 5 10]';  % Optimal x value
f_opt  = -310;       % Optimal f(x) value
x_min = x_L;  x_max = x_U;    % Plotting bounds
% Set the rest of  the  arguments as empty
IntVars = []; VarWeight = [];
fIP = []; xIP = []; fLowBnd = []; x_0 = [];
%IntVars  = [1:5]; % Indices of  the  variables that should  be integer  valued
Prob = glcAssign('glc1_f', x_L,  x_U, Name, A,  b_L,  b_U, 'glc1_c', ...
          c_L,  c_U, x_0,  IntVars, VarWeight,  ...
          fIP, xIP, fLowBnd, x_min,  x_max, f_opt, x_opt);
% Increase the default max number of function evaluations in glcSolve
Prob.optParam.MaxFunc = 500;
Result  = tomRun('glcSolve', Prob,  3);
Prob.WarmStart  = 1;
% Do 10 restarts, call driver tomRun, PriLev  = 2 gives call to PrintResult
for i=1:10
  Result  = tomRun('glcSolve',Prob,2);
end
</pre>
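The reported optimum can be verified directly from the problem data. The small Python check below (independent of TOMLAB) confirms that ''x_opt'' gives ''f'' = -310 and satisfies the linear and nonlinear constraints:

```python
import numpy as np

def fp33_f(x):
    # Objective of Floudas-Pardalos 3.3, same formula as in glc1_f
    return (-25*(x[0]-2)**2 - (x[1]-2)**2 - (x[2]-1)**2
            - (x[3]-4)**2 - (x[4]-1)**2 - (x[5]-4)**2)

def fp33_c(x):
    # The two nonlinear constraints, same formulas as in glc1_c
    return np.array([(x[2]-3)**2 + x[3], (x[4]-3)**2 + x[5]])

x_opt = np.array([5, 1, 5, 0, 5, 10], dtype=float)
A   = np.array([[ 1, -3, 0, 0, 0, 0],
                [-1,  1, 0, 0, 0, 0],
                [ 1,  1, 0, 0, 0, 0]], dtype=float)
b_L = np.array([-np.inf, -np.inf, 2.0])
b_U = np.array([2.0, 2.0, 6.0])
c_L = np.array([4.0, 4.0])

print(fp33_f(x_opt))                                           # -310.0
print(bool(np.all(b_L <= A @ x_opt) and np.all(A @ x_opt <= b_U)))  # True
print(bool(np.all(fp33_c(x_opt) >= c_L)))                      # True
```

Note that the third linear constraint is active (''x''<sub>1</sub> + ''x''<sub>2</sub> = 6) at the optimum, which is why the derived box bounds above do not cut it off.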
==Solving Least Squares and Parameter Estimation Problems==
This section describes how to define and solve different types of linear and nonlinear least squares and parameter estimation problems. Several examples are given on how to proceed, depending on whether a quick solution is wanted or more advanced tests are needed. TOMLAB is also compatible with the MathWorks Optimization Toolbox. See Appendix E for more information and test examples.
All demonstration examples that use the TOMLAB format are collected in the directory ''examples''. The examples relevant to this section are ''lsDemo ''and ''llsDemo''. The full path to these files is always given in the text.
Section 8.5 (page 81) contains information on solving extremely large-scale '''ls '''problems with ''Tlsqr''.
===Linear Least Squares Problems===
This section shows examples of how to define and solve linear least squares problems using the TOMLAB format. As a first illustration, the example ''lls1Demo ''in file ''llsDemo ''shows how to fit a linear least squares model with linear constraints to given data. This test problem is taken from the User's Guide of ''LSSOL ''[29].
<pre>
Name='LSSOL  test  example';
%  In TOMLAB it is best to use Inf and -Inf, not  big  numbers.
n = 9; % Number  of  unknown  parameters
x_L = [-2 -2  -Inf, -2*ones(1,6)]';
x_U = 2*ones(n,1);
A      = [ ones(1,8) 4;  1:4,-2,1 1 1 1;  1 -1  1 -1, ones(1,5)];
b_L = [2    -Inf  -4]';
b_U = [Inf    -2  -2]';
y = ones(10,1);
C  = [ ones(1,n); 1 2 1 1 1 1 2 0 0; 1 1 3 1 1 1 -1  -1  -3; ...
1 1 1 4 1 1 1 1 1;1 1 1 3 1 1 1 1 1;1 1 2 1 1 0 0 0 -1; ...
1 1 1 1 0 1 1 1 1;1 1 1 0 1 1 1 1 1;1 1 0 1 1 1 2 2 3;  ...
1 0 1 1 1 1 0 2 2];
x_0 = 1./[1:n]';
t          = [];  %  No time set for y(t) (used  for plotting)
weightY    = [];  %  No weighting
weightType  = [];  %  No weighting type  set
x_min     = [];  %  No lower bound for plotting
x_max     = [];  %  No upper bound for plotting
Prob = llsAssign(C, y, x_L,  x_U, Name, x_0,  t, weightType,  weightY, ...
                A,  b_L,  b_U, x_min,  x_max);
Result = tomRun('lsei', Prob, 2);
</pre>
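For intuition, the unconstrained core of this problem, min 0.5||''y'' - ''Cx''||², can be solved with any linear least squares routine. The Python sketch below (using the ''C'', ''y'' and ''x_0'' from the example above, but ignoring the bounds and linear constraints, so its solution generally differs from the one ''lsei'' returns) shows that the least squares solution improves on the starting point:

```python
import numpy as np

# The C matrix, data y and starting point x_0 from the LSSOL test example
n = 9
C = np.array([[1]*9,
              [1, 2, 1, 1, 1, 1, 2, 0, 0],
              [1, 1, 3, 1, 1, 1, -1, -1, -3],
              [1, 1, 1, 4, 1, 1, 1, 1, 1],
              [1, 1, 1, 3, 1, 1, 1, 1, 1],
              [1, 1, 2, 1, 1, 0, 0, 0, -1],
              [1, 1, 1, 1, 0, 1, 1, 1, 1],
              [1, 1, 1, 0, 1, 1, 1, 1, 1],
              [1, 1, 0, 1, 1, 1, 2, 2, 3],
              [1, 0, 1, 1, 1, 1, 0, 2, 2]], dtype=float)
y  = np.ones(10)
x0 = 1.0 / np.arange(1, n + 1)

def obj(x):
    r = y - C @ x
    return 0.5 * r @ r         # the least squares objective 0.5*r'r

# Unconstrained minimizer of 0.5*||y - C x||^2
x_ls, *_ = np.linalg.lstsq(C, y, rcond=None)
print(obj(x_ls) <= obj(x0))    # True: lstsq cannot do worse than x_0
```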
It is trivial to change the solver in the call to ''tomRun '' to a nonlinear least squares solver, e.g. ''clsSolve'', or a general nonlinear programming solver.
===Linear Least Squares Problems using the SOL Solver LSSOL===
The example ''lls2Demo ''in file ''llsDemo ''shows how to fit a linear least squares model with linear constraints to given data using a direct call to the SOL solver ''LSSOL''. The test problem is taken from the User's Guide of ''LSSOL ''[29].
<pre>
%  Note that when calling the  LSSOL  MEX  interface directly,  avoid  using
%  Inf and -Inf. Instead use big  numbers that indicate Inf.
%  The standard  for the  MEX  interfaces is 1E20 and -1E20,  respectively.
n = 9; % There are  nine  unknown  parameters, and 10 equations
x_L = [-2 -2  -1E20,  -2*ones(1,6)]';
x_U = 2*ones(n,1);
A  = [ ones(1,8) 4;  1:4,-2,1 1 1 1;  1 -1  1 -1, ones(1,5)];
b_L = [2 -1E20 -4]';
b_U = [1E20   -2 -2]';
% Must put lower and upper bounds on variables and constraints together
bl = [x_L;b_L];
bu = [x_U;b_U];
H  = [ ones(1,n); 1 2 1 1 1 1 2 0 0;  1 1 3 1 1 1 -1  -1  -3; ...
        1 1 1 4 1 1 1 1 1;1  1 1 3 1 1 1 1 1;1  1 2 1 1 0 0 0 -1; ...
        1 1 1 1 0 1 1 1 1;1  1 1 0 1 1 1 1 1;1  1 0 1 1 1 2 2 3;  ...
        1 0 1 1 1 1 0 2 2];
y  = ones(10,1);
x_0 = 1./[1:n]';
% Set  empty indicating default values  for most variables
c        = [];           % No linear coefficients, they are for LP/QP
Warm   = [];           % No  warm  start
iState   = [];           % No  warm  start
Upper   = [];           % C  is not  factorized
kx   = [];           % No  warm  start
SpecsFile = [];           % No  parameter  settings in a SPECS  file
PriLev   = [];           % PriLev  is not  really used in LSSOL
ProbName  = [];          % ProbName  is not  really used in LSSOL
optPar(1) = 50;          % Set  print level at  maximum
PrintFile  = 'lssol.txt'; % Print result on the  file with  name  lssol.txt
z0 = (y-H*x_0);
f0  = 0.5*z0'*z0;
fprintf('Initial function value  %f\n',f0);
[x,  Inform, iState, cLamda, Iter, fObj, r, kx]  = ...
    lssol( A,  bl, bu,  c, x_0,  optPar, H,  y, Warm, ...
    iState, Upper,  kx,  SpecsFile, PrintFile,  PriLev, ProbName );
%  We could equally well call with the following shorter call:
%  [x,  Inform, iState, cLamda, Iter, fObj, r, kx]  = ...
%      lssol( A,  bl, bu,  c, x, optPar, H,  y);
z = (y-H*x);
f = 0.5*z'*z;
fprintf('Optimal  function value  %f\n',f);
</pre>
===Nonlinear  Least Squares Problems===
This section shows examples of how to define and solve nonlinear least squares problems using the TOMLAB format. As a first illustration, the example ''ls1Demo ''in file ''lsDemo ''shows how to fit a nonlinear model of exponential type with three unknown parameters to experimental data. This problem, ''Gisela'', is also defined as problem three in ''ls_prob''. A weighting parameter ''K ''is sent to the residual and Jacobian routines using the ''Prob ''structure. The solver ''clsSolve ''is called directly. Note that the user only defines the routines that compute the residual vector and the Jacobian matrix of derivatives. TOMLAB has special routines ''ls_f'', ''ls_g ''and ''ls_H ''that compute the nonlinear least squares objective function value, given the residuals, as well as the gradient and the approximate Hessian, see Table 39. The residual routine for this problem is defined in file ''ls1_r ''in the directory ''examples ''with the statements
<pre>
function r = ls1_r(x, Prob)
% Compute  residuals to  nonlinear least squares  problem  Gisela
% US_A is the  standard  TOMLAB  global parameter  to  be used in the
% communication  between the  residual and the  Jacobian  routine
global US_A
% The extra weight parameter  K  is sent as part of the structure
K = Prob.user.K;
t = Prob.LS.t(:); % Pick  up the  time  points
% Exponential  expressions to be later used when computing the Jacobian
US_A.e1  = exp(-x(1)*t); US_A.e2  = exp(-x(2)*t);
r = K*x(1)*(US_A.e2  - US_A.e1) / (x(3)*(x(1)-x(2))) - Prob.LS.y;
</pre>
Note that this example also shows how to communicate information between the residual and the Jacobian routines. It is best to use one of the predefined global variables ''US_A ''and ''US_B'', because then there will be no conflicts with respect to global variables if recursive calls are used. In this example the global variable ''US_A ''is used as a structure array storing two vectors with exponential expressions. The Jacobian routine for this problem is defined in the file ''ls1_J ''in the directory ''examples''. The global variable ''US_A ''is accessed to obtain the exponential expressions, see the statements below.
<pre>
function J = ls1_J(x,  Prob)
% Computes  the  Jacobian  to  least squares  problem  Gisela. J(i,j) = dr_i/dx_j
% Parameter K  is input in the  structure
a      = Prob.user.K * x(1)/(x(3)*(x(1)-x(2)));
b      = x(1)-x(2);
t      = Prob.LS.t;
% Pick up the globally saved exponential computations
global US_A
e1 = US_A.e1; e2 = US_A.e2;
% Compute  the  three columns in the Jacobian, one for each variable
J = a * [ t.*e1+(e2-e1)*(1/x(1)-1/b), -t.*e2+(e2-e1)/b, (e1-e2)/x(3) ];
</pre>
The following statements solve the ''Gisela ''problem.
<pre>
%  ---------------------------------------------------------------------
function ls1Demo  %  Nonlinear parameter  estimation with  3 unknowns
%  ---------------------------------------------------------------------
Name='Gisela';
% Time values
t = [0.25; 0.5; 0.75; 1;  1.5; 2;  3;  4;  6;  8;  12;  24;  32;  48;  54;  72;  80;...
    96;  121;  144;  168;  192;  216;  246;  276;  324;  348;  386];
%  Observations
y = [30.5; 44;  43;  41.5; 38.6; 38.6; 39;  41;  37;  37;  24;  32;  29;  23;  21;...
      19;  17;  14;  9.5; 8.5; 7;  6;  6;  4.5; 3.6; 3;  2.2; 1.6];
x_0 = [6.8729, 0.0108, 0.1248]'; %  Initial values  for unknown  x
%  Generate the  problem  structure using  the  TOMLAB  format  (short call)
%  Prob = clsAssign(r, J, JacPattern, x_L,  x_U, Name, x_0,  ...
%             y, t, weightType,  weightY, SepAlg,  fLowBnd, ...
%             A,  b_L,  b_U, c, dc,  ConsPattern, c_L,  c_U, ...
%             x_min,  x_max, f_opt, x_opt);
Prob = clsAssign('ls1_r', 'ls1_J', [], [], [], Name, x_0,  y, t);
% Weighting  parameter  K  in model is sent  to  r and J computation  using  Prob
Prob.user.K = 5;
Result = tomRun('clsSolve', Prob,  2);
</pre>
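The analytic Jacobian of the ''Gisela'' model can be cross-checked against central finite differences. The Python sketch below (independent of TOMLAB) implements the same residual model and Jacobian formulas with the time points and starting values from the example above, and confirms that the three analytic columns match a numerical differentiation:

```python
import numpy as np

t = np.array([0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, 8, 12, 24, 32, 48, 54,
              72, 80, 96, 121, 144, 168, 192, 216, 246, 276, 324, 348, 386],
             dtype=float)
K = 5.0   # the weighting parameter sent via Prob.user.K

def model(x):
    # K*x1*(e^{-x2 t} - e^{-x1 t}) / (x3*(x1 - x2)), as in ls1_r
    e1 = np.exp(-x[0]*t); e2 = np.exp(-x[1]*t)
    return K*x[0]*(e2 - e1) / (x[2]*(x[0] - x[1]))

def jac(x):
    # Analytic Jacobian, one column per variable, as in ls1_J
    a = K*x[0] / (x[2]*(x[0] - x[1])); b = x[0] - x[1]
    e1 = np.exp(-x[0]*t); e2 = np.exp(-x[1]*t)
    return a * np.column_stack([t*e1 + (e2 - e1)*(1/x[0] - 1/b),
                                -t*e2 + (e2 - e1)/b,
                                (e1 - e2)/x[2]])

x = np.array([6.8729, 0.0108, 0.1248])    # the starting point x_0
h = 1e-6
J_num = np.column_stack([
    (model(x + h*np.eye(3)[:, j]) - model(x - h*np.eye(3)[:, j])) / (2*h)
    for j in range(3)])
print(np.max(np.abs(jac(x) - J_num)))     # small: columns agree
```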
The second example ''ls2Demo ''in file ''lsDemo ''solves the same problem as ''ls1Demo'', but uses numerical differences to compute the Jacobian matrix in each iteration. To make TOMLAB avoid using the Jacobian routine, the variable ''Prob.NumDiff ''must be set nonzero. In this example the flag ''Prob.optParam.IterPrint ''is also set, to enable one line of printing for each iteration. The changed statements are
<pre>
...
Prob.NumDiff            = 1; % Use standard  numerical differences
Prob.optParam.IterPrint = 1; % Print one line each iteration
Result    = tomRun('clsSolve',Prob,2);
</pre>
The third example ''ls3Demo ''in file ''lsDemo ''solves the same problem as ''ls1Demo'', but six times for different values of the parameter ''K ''in the range [3.8, 5.0]. It illustrates that it is not necessary to remake the problem structure ''Prob ''for each optimization; instead just change the parameters needed. The ''Result ''structure is saved as a vector of structure arrays, to enable post analysis of the results. The changed statements are
<pre>
for i=1:6
  Prob.user.K = 3.8  + 0.2*i;
  Result(i) = tomRun('clsSolve',Prob,2);
end
fprintf('\nWEIGHT PARAMETER  K  is %9.3f\n\n\n',Prob.user.K);
</pre>
Table 39 describes the low level routines and the initialization  routines needed for the predefined constrained nonlinear least squares ('''cls''') test problems. Similar routines are needed for the nonlinear least squares ('''ls''') test problems (here no constraint routines are needed).
{|
|+Table 39: Constrained nonlinear least squares ('''cls''') test problems.
|-
|'''Function'''||'''Description'''
|-
|''cls_prob''||Initialization  of '''cls '''test problems.
|-
|''cls_r''||Compute the residual vector <math>r_i(x)</math>, <math>i = 1, ..., m</math>, <math>x \in R^n</math> for '''cls ''' test problems.
|-
|''cls_J''||Compute the Jacobian matrix <math>J_{ij}(x) = \partial r_i/\partial x_j</math>, <math>i = 1, ..., m</math>, <math>j = 1, ..., n</math> for '''cls ''' test problems.
|-
|''cls_c''||Compute the vector of constraint functions <math>c(x)</math> for '''cls '''test problems.
|-
|''cls_dc''||Compute the matrix of constraint normals <math>\partial c(x)/\partial x</math> for '''cls '''test problems.
|-
|''cls_d2c''||Compute the second part of the second derivative of the Lagrangian function for '''cls '''test problems.
|-
|''ls_f''||General routine to compute the objective function value <math>f(x) = \frac{1}{2}r(x)^{T}r(x)</math> for nonlinear least squares type of problems.
|-
|''ls_g''||General routine to compute the gradient <math>g(x) = J(x)^{T}r(x)</math> for nonlinear least squares type of problems.
|-
|''ls_H''||General routine to compute the Hessian approximation <math>H(x) = J(x)^{T}J(x)</math> for nonlinear least squares type of problems.
|}
===Fitting  Sums of Exponentials to Empirical  Data===
In TOMLAB the problem of fitting sums of positively weighted exponential functions to empirical data may be formulated either as a nonlinear least squares problem or a separable nonlinear least squares problem [66]. Several empirical data series are predefined and artificial data series may also be generated. There are five different types of exponential models with special treatment in TOMLAB, shown in Table 40. In research in cooperation with Todd Walton, Vicksburg, USA, TOMLAB has been used to estimate parameters using maximum likelihood in simulated Weibull distributions, and in Gumbel and Gamma distributions with real data. TOMLAB has also been useful for parameter estimation in stochastic hydrology using real-life data.
{| cellpadding="5"
|+Table 40: Exponential models treated in TOMLAB.
|-
|<math>$f(t) = \sum\limits_i^p \alpha_i e^{-\beta_i t}$</math>, || <math>$\alpha_i \geq 0$</math>, || <math>$0\leq\beta_1<\beta_2< ... <\beta_p$</math>.
|-
|<math>$f(t) = \sum\limits_i^p \alpha_i(1-e^{-\beta_i t})$</math>, ||  <math>$\alpha_i \geq 0$</math>, ||  <math>$0\leq\beta_1<\beta_2< ... <\beta_p$</math>.
|-
|<math>$f(t) = \sum\limits_i^p t \alpha_i e^{-\beta_i t}$</math>, ||  <math>$\alpha_i \geq 0$</math>, ||  <math>$0\leq\beta_1<\beta_2< ... <\beta_p$</math>.
|-
|<math>$f(t) = \sum\limits_i^p (t \alpha_i-\gamma_i) e^{-\beta_i t}$</math>, || <math>$\alpha_i,\gamma_i \geq 0$</math>, ||  <math>$0\leq\beta_1<\beta_2< ... <\beta_p$</math>.
|-
|<math>$f(t) = \sum\limits_i^p t \alpha_i e^{-\beta_i (t - \gamma_i)}$</math>, ||  <math>$\alpha_i \geq 0$</math>, || <math>$0\leq\beta_1<\beta_2< ... <\beta_p$</math>.
|}
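The first model type is linear in the weights α<sub>''i''</sub> once the exponents β<sub>''i''</sub> are fixed; this is what the separable nonlinear least squares formulation exploits, since the α<sub>''i''</sub> can be eliminated by an inner linear least squares solve. A Python sketch of this idea on synthetic (made-up) data:

```python
import numpy as np

# Synthetic data from f(t) = sum_i alpha_i * exp(-beta_i * t)
t          = np.linspace(0.0, 5.0, 40)
beta       = np.array([0.5, 2.0])      # fixed (assumed known) exponents
alpha_true = np.array([3.0, 1.0])      # positive weights
E = np.exp(-np.outer(t, beta))         # design matrix E[k,i] = exp(-beta_i t_k)
y = E @ alpha_true

# For fixed beta, min ||E a - y|| is an ordinary linear least squares problem
alpha_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
print(alpha_hat)                       # recovers [3.0, 1.0]
```

In the full separable algorithm only the β<sub>''i''</sub> are treated as nonlinear unknowns, with this inner solve repeated for each trial β.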
Algorithms to find starting values for different numbers of exponential terms are implemented. Test results show that these initial value algorithms come very close to the true solution for equidistant problems and are fairly good for non-equidistant problems, see the thesis by Petersson [61]. Good initial values are extremely important when solving real life exponential fitting problems, because they are so ill-conditioned. Table 41 shows the relevant routines.

Revision as of 18:19, 23 June 2011


'''[[User's Guide Section 9|Section 9]]''' shows how to set up and define multi-layer optimization problems in TOMLAB.

'''[[User's Guide Section 10|Section 10]]''' contains detailed descriptions of many of the functions in TOMLAB. The TOM solvers, originally developed by the Applied Optimization and Modeling (TOM) group, are described together with the TOMLAB driver routines and utility functions. Other solvers, like the Stanford Optimization Laboratory (SOL) solvers, are not described here, but documentation is available for each solver.

'''[[User's Guide Section 12|Section 12]]''' describes the utility functions that can be used, for example ''tomRun'' and ''SolverList''.

'''[[User's Guide Section 13|Section 13]]''' introduces the different options for derivatives and automatic differentiation.

'''[[User's Guide Section 14|Section 14]]''' discusses a number of special system features, such as partially separable functions and user-supplied parameter information for the function computations.

'''Appendix A''' contains tables describing all elements defined in the problem structure. Some subfields are either empty, or filled with information if the particular type of optimization problem is defined. To be able to set different parameter options for the optimization solution, and to change problem dependent information, the user should consult the tables in this Appendix.

'''Appendix B''' contains tables describing all elements defined in the output result structure returned from all solvers and driver routines.

'''Appendix D''' is concerned with the global variables used in TOMLAB and the routines for handling important global variables, enabling recursive calls of any depth.

'''Appendix E''' describes the available set of interfaces to other optimization software, such as CUTE, AMPL, and The MathWorks' Optimization Toolbox.

'''Appendix F''' gives some motivation for the development of TOMLAB.