ClsSolve
Latest revision as of 08:08, 10 January 2012
Purpose
Solves dense and sparse nonlinear least squares optimization problems with linear inequality and equality constraints and simple bounds on the variables.
clsSolve solves problems of the form

 min   f(x) = (1/2) * r(x)' * r(x)
  x
 s/t   x_L <=   x   <= x_U
       b_L <=  A*x  <= b_U

where x, x_L, x_U in R^n, r(x) in R^N, A in R^{m1 x n} and b_L, b_U in R^{m1}.
Calling Syntax
Result = clsSolve(Prob, varargin)
Result = tomRun('clsSolve', Prob);
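As an illustration, a small curve-fitting problem can be set up with clsAssign and solved by clsSolve through tomRun. This is a minimal sketch: the residual and Jacobian routine names exp_r and exp_J and the data are made up for illustration, and the clsAssign argument order is assumed from the standard TOMLAB assign-routine interface.

```matlab
% Fit y = x(1)*exp(x(2)*t) to data in the least squares sense.
% exp_r(x, Prob) is assumed to return r(x) = x(1)*exp(x(2)*Prob.LS.t) - Prob.LS.y,
% and exp_J(x, Prob) the corresponding N x 2 Jacobian.
t   = (0:0.5:4)';                          % time points
y   = 5*exp(-1.2*t);                       % observations (illustrative)
x_L = [-10; -10];  x_U = [10; 10];         % simple bounds on the variables
x_0 = [1; -1];                             % starting point
Prob = clsAssign('exp_r', 'exp_J', [], x_L, x_U, 'ExpFit', x_0, y, t);
Prob.Solver.Alg = 5;                       % structured MBFGS (see Inputs below)
Result = tomRun('clsSolve', Prob, 1);      % run with print level 1
x_opt  = Result.x_k;                       % optimal point
f_opt  = Result.f_k;                       % function value at optimum
```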
Inputs
Prob | Problem description structure. The following fields are used:
---|---
Solver.Alg | Solver algorithm to be run:
0: Gives the default, the Fletcher-Xu hybrid method.
1: Fletcher-Xu hybrid method; Gauss-Newton/BFGS.
2: Al-Baali-Fletcher hybrid method; Gauss-Newton/BFGS.
3: Huschens method.
4: The Gauss-Newton method.
5: Wang-Li-Qi structured MBFGS method.
6: Li-Fukushima MBFGS method.
7: Broyden's method.
Recommendations: Alg=5 is theoretically best, and seems best in practice as well. Alg=1 and Alg=2 behave very similarly and are robust methods. Alg=4 may be good for ill-conditioned problems. Alg=3 and Alg=6 may sometimes fail. Alg=7 tries to minimize the number of Jacobian evaluations, but might need more residual evaluations; it also fails more often than the other algorithms. It is suitable when an analytic Jacobian is missing and Jacobian evaluations are costly, provided the problem is not too ill-conditioned.
Solver.Method | Method to solve the linear systems:
0: QR with pivoting (both sparse and dense).
1: SVD (dense).
2: The inversion routine (inv) in Matlab (uses QR).
3: Explicit computation of the pseudoinverse, using pinv(J_k).
If Prob.LargeScale = 1, then Method = 0 is always used: sparse iterative QR using Tlsqr.
LargeScale | If = 1, then sparse iterative QR using Tlsqr is used to find search directions.
x_0 | Starting point.
x_L | Lower bounds on the variables.
x_U | Upper bounds on the variables.
b_L | Lower bounds on the linear constraints.
b_U | Upper bounds on the linear constraints.
A | Constraint matrix for the linear constraints.
c_L | Lower bounds on the nonlinear constraints.
c_U | Upper bounds on the nonlinear constraints.
f_Low | A lower bound on the optimal function value, see LineParam.fLowBnd below.
SolverQP | Name of the solver used for QP subproblems. If empty, the default solver is used. See GetSolver.m and tomSolve.m.
PriLevOpt | Print level.
optParam | Structure with special fields for optimization parameters, see TOMLAB Appendix A. Fields used are: bTol, eps_absf, eps_g, eps_Rank, eps_x, IterPrint, MaxIter, PreSolve, size_f, size_x, xTol, wait, and QN_InitMatrix (initial quasi-Newton matrix, used if not empty; otherwise the identity matrix is used).
LineParam | Structure with line search parameters. Special fields used:
LineAlg | Line search algorithm.
If Alg == 7:
0 = Fletcher quadratic interpolation line search
3 = Fletcher cubic interpolation line search
otherwise Armijo-Goldstein line search (LineAlg == 2)
If Alg ~= 7:
0 = Fletcher quadratic interpolation line search
1 = Fletcher cubic interpolation line search
2 = Armijo-Goldstein line search
otherwise Fletcher quadratic interpolation line search (LineAlg == 0)
If Fletcher, see help LineSearch for the LineParam parameters used. Most important is the accuracy in the line search: sigma - line search accuracy tolerance, default 0.9.
If LineAlg == 2, then the following parameters are used:
agFac | Armijo-Goldstein reduction factor, default 0.1.
sigma | Line search accuracy tolerance, default 0.9.
fLowBnd | A lower bound on the global optimum of f(x). NLLS problems always have f(x) >= 0. The user might also give a lower bound estimate in Prob.f_Low. clsSolve computes LineParam.fLowBnd as LineParam.fLowBnd = max(0, Prob.f_Low, Prob.LineParam.fLowBnd). fLow = LineParam.fLowBnd is used in the convergence tests.
varargin | Other parameters sent directly to low-level routines.
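Most of the fields above are plain assignments on the Prob structure before the solver is called. A hedged sketch with illustrative values (not defaults), assuming Prob was created by an assign routine such as clsAssign:

```matlab
Prob.Solver.Alg        = 5;    % Wang-Li-Qi structured MBFGS
Prob.Solver.Method     = 0;    % QR with pivoting for the linear systems
Prob.LargeScale        = 0;    % set 1 for sparse iterative QR (Tlsqr)
Prob.LineParam.LineAlg = 2;    % Armijo-Goldstein line search
Prob.LineParam.agFac   = 0.1;  % Armijo-Goldstein reduction factor
Prob.LineParam.sigma   = 0.9;  % line search accuracy tolerance
Prob.optParam.MaxIter  = 500;  % iteration limit
Prob.f_Low             = 0;    % clsSolve uses
                               % max(0, Prob.f_Low, Prob.LineParam.fLowBnd)
```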
Outputs
Result | Structure with result from optimization. The following fields are changed:
---|---
x_k | Optimal point.
v_k | Lagrange multipliers (not used).
f_k | Function value at optimum.
g_k | Gradient value at optimum.
x_0 | Starting point.
f_0 | Function value at start.
r_k | Residual at optimum.
J_k | Jacobian matrix at optimum.
xState | State of each variable, described in TOMLAB Appendix B.
bState | State of each linear constraint, described in TOMLAB Appendix B.
Iter | Number of iterations.
ExitFlag | Flag giving exit status. 0 if convergence, otherwise an error. See Inform.
Inform | Binary code telling type of convergence:
1: Iteration points are close.
2: Projected gradient small.
3: Iteration points are close and projected gradient small.
4: Function value close to 0.
5: Iteration points are close and function value close to 0.
6: Projected gradient small and function value close to 0.
7: Iteration points are close, projected gradient small and function value close to 0.
8: Relative function value reduction low for LowIts = 10 iterations.
11: Relative f(x) reduction low for LowIts iterations, and iteration points are close.
16: Small relative f(x) reduction.
17: Close iteration points and small relative f(x) reduction.
18: Small gradient and small relative f(x) reduction.
32: Local minimum with all variables on bounds.
99: The residual is independent of x. The Jacobian is 0.
101: Maximum number of iterations reached.
102: Function value below given estimate.
104: x_k not feasible, constraint violated.
105: The residual is empty, no NLLS problem.
Solver | Solver used.
SolverAlgorithm | Solver algorithm used.
Prob | Problem structure used.
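The low Inform codes combine bit flags (1 = close iteration points, 2 = small projected gradient, 4 = function value close to 0), so a result can be decoded as in this sketch; codes 8 and above and the error codes are reported as-is:

```matlab
% Decode a convergence code from Result.Inform (codes 1-7 are bit flags).
Inform = Result.Inform;
if Inform >= 1 && Inform <= 7
    if bitand(Inform, 1), disp('Iteration points are close.'); end
    if bitand(Inform, 2), disp('Projected gradient small.');   end
    if bitand(Inform, 4), disp('Function value close to 0.');  end
elseif Inform >= 101
    fprintf('Error exit, Inform = %d\n', Inform);
end
```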
Description
The solver clsSolve includes seven optimization methods for nonlinear least squares problems: the Gauss-Newton method, the Al-Baali-Fletcher and Fletcher-Xu hybrid methods, the Huschens TSSM method, and three more. If rank problems occur, the solver uses subspace minimization. The line search is performed using the routine LineSearch, which is a modified version of an algorithm by Fletcher. clsSolve treats linear equality and inequality constraints using an active set strategy and a null space method.
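The core Gauss-Newton step underlying these methods (Alg = 4, unconstrained case) can be sketched as follows. The names resid, jac, x_0, MaxIter and eps_g are placeholders, and the simple backtracking loop stands in for the Fletcher-type LineSearch routine that clsSolve actually uses:

```matlab
% Unconstrained Gauss-Newton iteration, a sketch of the core step:
% solve J_k * p = -r_k in the least squares sense, then step with a
% line search. clsSolve adds subspace minimization, an active set
% strategy and Fletcher-type line searches on top of this.
x = x_0;
for k = 1:MaxIter
    r = resid(x);                          % residual vector r(x)
    J = jac(x);                            % Jacobian of r at x
    g = J'*r;                              % gradient of (1/2)*r'*r
    if norm(g) <= eps_g, break, end        % gradient small: converged
    p = -(J \ r);                          % QR-based least squares solve
    alpha = 1;                             % backtracking stand-in for LineSearch
    while 0.5*norm(resid(x + alpha*p))^2 > 0.5*norm(r)^2 + 1e-4*alpha*(g'*p)
        alpha = 0.1*alpha;                 % agFac-style reduction
    end
    x = x + alpha*p;                       % take the step
end
```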
M-files Used
ResultDef.m, preSolve.m, qpSolve.m, tomSolve.m, LineSearch.m, ProbCheck.m, secUpdat.m, iniSolve.m, endSolve.m
See Also
conSolve, nlpSolve, sTrustr
Limitations
When using the LargeScale option, the number of residuals must not be less than 10, since the sqr2 algorithm may run into problems when used on problems that are not really large-scale.
Warnings
Since no second order derivative information is used, clsSolve may not be able to determine the type of stationary point converged to.