clsSolve

From TomWiki
[[Category:Solvers]]
==Purpose==


Solves dense and sparse nonlinear least squares optimization problems with linear inequality and equality constraints and simple bounds on the variables.


''clsSolve'' solves problems of the form
<math>
\begin{array}{ll}
\min\limits_{x} & f(x)=\frac{1}{2}r(x)^{T}r(x) \\
\mbox{subject to} & x_{L}\leq x\leq x_{U} \\
 & b_{L}\leq Ax\leq b_{U} \\
\end{array}
</math>


where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>, <math>r(x)\in \mathbb{R}^{N}</math>, <math>A\in \mathbb{R}^{m_{1}\times n}</math> and <math>b_{L},b_{U}\in \mathbb{R}^{m_{1}}</math>.


==Calling Syntax==


<syntaxhighlight lang="matlab">
Result = clsSolve(Prob, varargin)
Result = tomRun('clsSolve', Prob);
</syntaxhighlight>
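
A minimal usage sketch. The problem structure is normally built with the TOMLAB ''clsAssign'' routine; the anonymous function handles, the data, and the exact ''clsAssign'' argument order shown here are illustrative assumptions and may differ between TOMLAB versions:

<syntaxhighlight lang="matlab">
% Fit y = x(1)*exp(x(2)*t) to data by nonlinear least squares.
t = [1; 2; 3; 4];
y = [2.7; 7.4; 20.1; 54.6];
r = @(x, Prob) x(1)*exp(x(2)*t) - y;                 % residual r(x), N = 4
J = @(x, Prob) [exp(x(2)*t), x(1)*t.*exp(x(2)*t)];   % Jacobian, N x n

x_L = [0; -10];  x_U = [100; 10];  x_0 = [1; 1];     % simple bounds and start

% Assumed clsAssign argument order: (r, J, JacPattern, x_L, x_U, Name, x_0)
Prob = clsAssign(r, J, [], x_L, x_U, 'expFit', x_0);

Result = tomRun('clsSolve', Prob);
x_opt  = Result.x_k;   % optimal point, see Description of Outputs below
</syntaxhighlight>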


===Description of Inputs===


{| class="wikitable"
!''Prob''
!colspan="2" | Problem description structure. The following fields are used:
|-valign="top"
|||''Solver.Alg''||Solver algorithm to be run:
|-
|||||0: Gives default, the Fletcher-Xu hybrid method.
|-
|||||1: Fletcher-Xu hybrid method; Gauss-Newton/BFGS.
|-
|||||2: Al-Baali-Fletcher hybrid method; Gauss-Newton/BFGS.
|-
|||||3: Huschens method (SIAM J. Optimization, Vol. 4, No. 1, pp. 108-129, Jan 1994).
|-
|||||4: The Gauss-Newton method.
|-
|||||5: Wang, Li, Qi structured MBFGS method.
|-
|||||6: Li-Fukushima MBFGS method.
|-
|||||7: Broyden's method.
|-
|||||Recommendations: Alg=5 is theoretically best, and seems best in practice as well. Alg=1 and Alg=2 behave very similarly and are robust methods. Alg=4 may be good for ill-conditioned problems. Alg=3 and Alg=6 may sometimes fail. Alg=7 tries to minimize Jacobian evaluations, but might need more residual evaluations; it also fails more often than the other algorithms. It is suitable when an analytic Jacobian is missing and Jacobian evaluations are costly. The problem should not be too ill-conditioned.
|-valign="top"
|||''Solver.Method''||Method to solve linear system:
|-
|||||0: QR with pivoting (both sparse and dense).
|-
|||||1: SVD (dense).
|-
|||||2: The inversion routine (inv) in Matlab (uses QR).
|-
|||||3: Explicit computation of the pseudoinverse, using pinv(''J_k'').
|-
|||||Search method technique (if Prob.LargeScale = 1, then Method = 0 always): Prob.Solver.Method = 0 gives sparse iterative QR using Tlsqr.
|-
|||''LargeScale''||If = 1, then sparse iterative QR using Tlsqr is used to find search directions
|-
|||''x_0''||Starting point.
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''b_L''||Lower bounds on the linear constraints.
|-
|||''b_U''||Upper bounds on the linear constraints.
|-
|||''A''||Constraint matrix for linear constraints.
|-
|||''c_L''||Lower bounds on the nonlinear constraints.
|-
|||''c_U''||Upper bounds on the nonlinear constraints.
|-
|||''f_Low''||A lower bound on the optimal function value, see LineParam.fLowBnd below.
|-valign="top"
|||''SolverQP''||Name of the solver used for QP subproblems. If empty, the default solver is used. See GetSolver.m and tomSolve.m.
|-
|||''PriLevOpt''||Print Level.
|-valign="top"
|||''optParam''||Structure with special fields for optimization parameters, see Table 141. Fields used are: bTol, eps_absf, eps_g, eps_Rank, eps_x, IterPrint, MaxIter, PreSolve, size_f, size_x, xTol, wait, and QN_InitMatrix (initial Quasi-Newton matrix; used if not empty, otherwise the identity matrix is used).
|-valign="top"
|||''LineParam''||Structure with line search parameters. Special fields used:
|-valign="top"
|||''LineAlg''||If ''Alg'' = 7:
|-
|||||0 = Fletcher quadratic interpolation line search
|-
|||||3 = Fletcher cubic interpolation line search
|-
|||||otherwise: Armijo-Goldstein line search (LineAlg == 2)
|-
|||||If ''Alg'' != 7:
|-
|||||0 = Fletcher quadratic interpolation line search
|-
|||||1 = Fletcher cubic interpolation line search
|-
|||||2 = Armijo-Goldstein line search
|-
|||||otherwise: Fletcher quadratic interpolation line search (LineAlg == 0)
|-
|||||If Fletcher, see help LineSearch for the LineParam parameters used. Most important is the accuracy in the line search: sigma - line search accuracy tolerance, default 0.9.
|-
||| colspan="2" | If LineAlg == 2, then the following parameters are used:
|-
|||''agFac''||Armijo-Goldstein reduction factor, default 0.1
|-
|||''sigma''||Line search accuracy tolerance, default 0.9
|- valign="top"
|||''fLowBnd''||A lower bound on the global optimum of f(x). NLLS problems always have f(x) values >= 0. The user might also give a lower bound estimate in Prob.f_Low. clsSolve computes LineParam.fLowBnd as LineParam.fLowBnd = max(0, Prob.f_Low, Prob.LineParam.fLowBnd). fLow = LineParam.fLowBnd is used in convergence tests.
|-
|''varargin''|| colspan="2" | Other parameters directly sent to low level routines.
|}
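
The input fields above can be set directly on an existing ''Prob'' structure before calling the solver. A short sketch (field names as documented above; the chosen values are illustrative):

<syntaxhighlight lang="matlab">
% Pick the structured MBFGS method (Alg = 5, the recommended choice),
% QR with pivoting for the linear systems, and a more accurate line search:
Prob.Solver.Alg      = 5;     % Wang-Li-Qi structured MBFGS
Prob.Solver.Method   = 0;     % QR with pivoting
Prob.LargeScale      = 0;     % dense problem
Prob.LineParam.sigma = 0.1;   % tighter line search than the 0.9 default
Prob.PriLevOpt       = 1;     % some printed output

Result = tomRun('clsSolve', Prob);
</syntaxhighlight>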


===Description of Outputs===
{| class="wikitable"
!''Result''
!colspan="2" | Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''v_k''||Lagrange multipliers (not used).
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''r_k''||Residual at optimum.
|-
|||''J_k''||Jacobian matrix at optimum.
|-
|||''xState''||State of each variable, described in Table 150.
|-
|||''bState''||State of each linear constraint, described in Table 151.
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||Flag giving exit status. 0 if convergence, otherwise error. See ''Inform''.
|-valign="top"
|||''Inform''||Binary code telling type of convergence:
|-
|||||1: Iteration points are close.
|-
|||||2: Projected gradient small.
|-
|||||3: Iteration points are close and projected gradient small.
|-
|||||4: Function value close to 0.
|-
|||||5: Iteration points are close and function value close to 0.
|-
|||||6: Projected gradient small and function value close to 0.
|-
|||||7: Iteration points are close, projected gradient small and function value close to 0.
|-
|||||8: Relative function value reduction low for ''LowIts'' = 10 iterations.
|-
|||||11: Relative f(x) reduction low for LowIts iterations. Close iteration points.
|-
|||||16: Small relative f(x) reduction.
|-
|||||17: Close iteration points, small relative f(x) reduction.
|-
|||||18: Small gradient, small relative f(x) reduction.
|-
|||||32: Local minimum with all variables on bounds.
|-
|||||99: The residual is independent of x. The Jacobian is 0.
|-
|||||101: Maximum number of iterations reached.
|-
|||||102: Function value below given estimate.
|-
|||||104: ''x_k'' not feasible, constraint violated.
|-
|||||105: The residual is empty, no NLLS problem.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
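
For the convergence codes 1-7, ''Inform'' is a bit pattern: bit 1 (close iteration points), bit 2 (small projected gradient) and bit 4 (function value close to 0) add up to the combined values in the table. A sketch of decoding a returned code:

<syntaxhighlight lang="matlab">
Inform = Result.Inform;
if Result.ExitFlag == 0 && Inform >= 1 && Inform <= 7
    % Test the individual convergence bits
    if bitand(Inform, 1), disp('Iteration points are close.');   end
    if bitand(Inform, 2), disp('Projected gradient is small.');  end
    if bitand(Inform, 4), disp('Function value is close to 0.'); end
elseif Inform >= 101
    fprintf('Error exit, Inform = %d\n', Inform);
end
</syntaxhighlight>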
==Description==


The solver ''clsSolve'' includes seven optimization methods for nonlinear least squares problems: the Gauss-Newton method, the Al-Baali-Fletcher and Fletcher-Xu hybrid methods, the Huschens TSSM method and three more. If rank problems occur, the solver uses subspace minimization. The line search is performed using the routine ''LineSearch'', which is a modified version of an algorithm by Fletcher. ''clsSolve'' treats linear equality and inequality constraints using an active set strategy and a null space method.
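
Assuming TOMLAB's conventional least squares objective f(x) = 0.5 r(x)<sup>T</sup>r(x), the returned fields satisfy the standard NLLS identities, which gives a quick consistency check on a solved ''Result'':

<syntaxhighlight lang="matlab">
% Standard NLLS identities (assumed convention): f = 1/2 r'r and g = J'r.
f_check = 0.5 * (Result.r_k' * Result.r_k);
g_check = Result.J_k' * Result.r_k;
fprintf('|f_k - 0.5*r''*r| = %g\n', abs(Result.f_k - f_check));
fprintf('||g_k - J''*r||   = %g\n', norm(Result.g_k - g_check));
</syntaxhighlight>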


==M-files Used==

ResultDef.m, preSolve.m, qpSolve.m, tomSolve.m, LineSearch.m, ProbCheck.m, secUpdat.m, iniSolve.m, endSolve.m

==See Also==

conSolve, nlpSolve, sTrustr

==Limitations==

When using the LargeScale option, the number of residuals must not be less than 10, since the sqr2 algorithm may run into problems when used on problems that are not really large-scale.

==Warnings==

Since no second order derivative information is used, clsSolve may not be able to determine the type of stationary point converged to.