# SNOPT

TOMLAB /SNOPT (hereafter referred to as SNOPT) is a general-purpose system for constrained optimization. It minimizes a linear or nonlinear function subject to bounds on the variables and sparse linear or nonlinear constraints. It is suitable for large-scale linear and quadratic programming and for linearly constrained optimization, as well as for general nonlinear programs of the form (SparseNP)

$$
\min_{x}\; f_0(x) \quad \text{subject to} \quad l \le \begin{pmatrix} x \\ f(x) \\ A_L\, x \end{pmatrix} \le u
$$

where *l* and *u* are constant lower and upper bounds, *f*_{0}(*x*) is a smooth scalar objective function, *A*_{L} is a sparse matrix, and *f*(*x*) is a vector of smooth nonlinear constraint functions {*f*_{i}(*x*)}. An optional parameter maximize may specify that *f*_{0}(*x*) should be maximized instead of minimized.
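As an illustration only, a tiny instance of the SparseNP structure can be posed with SciPy's `trust-constr` method, which accepts the same three blocks (variable bounds, nonlinear constraints, and a linear constraint matrix). SciPy is not part of TOMLAB/SNOPT; the problem data below are invented for the sketch.

```python
# Illustrative sketch, not SNOPT: a small problem in the SparseNP form
# min f0(x) subject to bounds on x, a nonlinear constraint f(x), and a
# linear constraint A_L x, solved with SciPy's trust-constr method.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint

# Objective f0(x) = (x0 - 1)^2 + (x1 - 2)^2
f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Nonlinear constraint block f(x): 1 <= x0^2 + x1^2 <= 4
nonlin = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 1.0, 4.0)

# Linear constraint block A_L x: x0 + x1 <= 2
A_L = np.array([[1.0, 1.0]])
lin = LinearConstraint(A_L, -np.inf, 2.0)

# Bounds on the variables themselves: 0 <= x <= 3
res = minimize(f0, x0=[0.5, 0.5], method="trust-constr",
               constraints=[nonlin, lin], bounds=[(0.0, 3.0)] * 2)
print(res.x)  # optimum lies on the linear constraint, at (0.5, 1.5)
```

The three constraint blocks mirror the stacked vector (x, f(x), A_L x) in the SparseNP form, each with its own slice of *l* and *u*.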

Ideally, the first derivatives (gradients) of *f*_{0}(*x*) and *f*_{i}(*x*) should be known and coded by the user.
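When gradients are hand-coded, it is worth verifying them against a finite-difference estimate before handing them to any solver. A minimal sketch using SciPy's `check_grad` (the function `f0` and its gradient here are made-up examples, not anything from SNOPT):

```python
# Hedged sketch: sanity-check a hand-coded gradient with check_grad,
# which returns the norm of the difference between the analytic gradient
# and a finite-difference estimate (should be near zero).
import numpy as np
from scipy.optimize import check_grad

def f0(x):
    return x[0] ** 2 + np.exp(x[1])

def grad_f0(x):
    # Analytic gradient of f0: [2*x0, exp(x1)]
    return np.array([2.0 * x[0], np.exp(x[1])])

err = check_grad(f0, grad_f0, np.array([1.0, 0.5]))
print(err)  # a small value, on the order of the finite-difference error
```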

Note that upper and lower bounds are specified for all variables and constraints. This form allows full generality in specifying various types of constraint. Special values are used to indicate absent bounds (*l*_{j} = −∞ or *u*_{j} = +∞ for appropriate *j*). Free variables and free constraints ("free rows") are ones that have both bounds infinite. Fixed variables and equality constraints have *l*_{j} = *u*_{j}.
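The same bound conventions can be sketched numerically, assuming the usual encoding of absent bounds as infinities (SciPy is used purely for illustration here):

```python
# Illustration of the bound conventions described above: infinite bounds
# mark absent bounds, both bounds infinite means "free", and equal bounds
# fix a variable (l_j == u_j).
import numpy as np
from scipy.optimize import Bounds

l = np.array([-np.inf, 0.0, 2.0])   # x0 has no lower bound; x2 is fixed
u = np.array([ np.inf, 1.0, 2.0])   # x0 has no upper bound; l2 == u2

bnds = Bounds(l, u)                 # usable directly by SciPy solvers

free  = np.isinf(l) & np.isinf(u)   # free variables: both bounds infinite
fixed = (l == u)                    # fixed variables / equality constraints
print(free, fixed)
```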

### Problem types

If *f*_{0}(*x*) is linear and *f*(*x*) is absent, SparseNP is a *linear program* (LP) and SNOPT applies the primal simplex method. Sparse basis factors are maintained by LUSOL as in MINOS.
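For a feel of the LP special case, here is a small sparse LP solved with SciPy's HiGHS-based `linprog`. This is a stand-in solver chosen for the sketch, not the simplex/LUSOL machinery inside SNOPT, and the data are invented:

```python
# Sketch only: a small LP with a sparse constraint matrix, solved with
# SciPy's linprog (HiGHS), standing in for the LP case described above.
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

c = np.array([-1.0, -2.0])                # minimize c^T x  (i.e. max x0 + 2 x1)
A_ub = csr_matrix([[1.0, 1.0],            # x0 + x1   <= 4
                   [1.0, 3.0]])           # x0 + 3 x1 <= 6
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2, method="highs")
print(res.x)  # optimal vertex at (3, 1)
```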

If only the objective is nonlinear, the problem is *linearly constrained* (LC) and tends to solve more easily than the general case with nonlinear constraints (NC). For both cases, SNOPT applies a sparse sequential quadratic programming (SQP) method, using limited-memory quasi-Newton approximations to the Hessian of the Lagrangian. The merit function for steplength control is an augmented Lagrangian, as in the dense SQP solver NPSOL.
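As a loose analogy (not SNOPT's algorithm), SciPy's SLSQP is itself a dense SQP method, shown here on a linearly constrained problem, i.e. the LC case. Problem data are invented for the sketch:

```python
# Hedged analogy: SLSQP (a dense SQP method in SciPy) on a linearly
# constrained nonlinear objective -- the "LC" problem class above.
import numpy as np
from scipy.optimize import minimize

# Rosenbrock objective, a standard nonlinear test function
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

# One linear equality constraint: x0 + x1 = 1
cons = {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0}

res = minimize(rosen, x0=[0.0, 0.0], method="SLSQP", constraints=[cons])
print(res.x)  # minimizer of Rosenbrock restricted to the line x0 + x1 = 1
```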

In general, SNOPT requires less matrix computation than NPSOL and fewer evaluations of the functions than the nonlinear algorithms in MINOS.

It is suitable for nonlinear problems with thousands of constraints and variables, and is efficient if many constraints and bounds are active at a solution. (Thus, ideally there should not be thousands of degrees of freedom.)