UcSolve

From TomWiki
[[Category:Solvers]]


==Purpose==

Solve unconstrained nonlinear optimization problems with simple bounds on the variables.

''ucSolve'' solves problems of the form

<math>\min_{x} f(x), \quad x_{L} \leq x \leq x_{U},</math>

where <math>x,x_{L},x_{U}\in \mathbb{R}^{n}</math>.


==Calling Syntax==


<source lang="matlab">
Result = ucSolve(Prob, varargin)
</source>
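A minimal end-to-end sketch of this call, assuming the standard TOMLAB ''conAssign''/''tomRun'' interface; the m-files ''myf.m'' and ''myg.m'' (computing ''f(x)'' and ''g(x)'') are hypothetical placeholders:

<source lang="matlab">
% Hypothetical m-files myf.m and myg.m compute the objective f(x)
% and the gradient g(x); [] skips the Hessian and its sparsity pattern.
x_L = [-10; -10];                      % lower bounds on the variables
x_U = [ 10;  10];                      % upper bounds on the variables
x_0 = [  1;   1];                      % starting point
Prob = conAssign('myf', 'myg', [], [], x_L, x_U, 'myProblem', x_0);
Result = tomRun('ucSolve', Prob, 1);   % PriLev = 1 prints a result summary
</source>

''tomRun'' calls ''ucSolve'' through the standard TOMLAB driver; ''ucSolve(Prob)'' can also be called directly once the ''Prob'' structure is complete.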


==Inputs==


{| class="wikitable"
|''Prob''||colspan="2"|Problem description structure. The following fields are used:
|-
|||''x_L''||Lower bounds on the variables.
|-
|||''x_U''||Upper bounds on the variables.
|-
|||''x_0''||Starting point.
|-
|||''FUNCS.f''||Name of m-file computing the objective function ''f(x)''.
|-
|||''FUNCS.g''||Name of m-file computing the gradient vector ''g(x)''.
|-
|||''FUNCS.H''||Name of m-file computing the Hessian matrix ''H(x)''.
|-
|||''f_Low''||Lower bound on function value.
|-valign="top"
|||''Solver.Alg''||Solver algorithm to be run:
0: Gives default, either Newton or BFGS.
1: Newton with subspace minimization, using SVD.
2: Safeguarded BFGS with inverse Hessian updates (standard).
3: Safeguarded BFGS with Hessian updates.
4: Safeguarded DFP with inverse Hessian updates.
5: Safeguarded DFP with Hessian updates.
6: Fletcher-Reeves CG.
7: Polak-Ribiere CG.
8: Fletcher conjugate descent CG-method.
|-valign="top"
|||''Solver.Method''||Method used to solve equation system:
0: SVD (default).
1: LU-decomposition.
2: LU-decomposition with pivoting.
3: Matlab built-in QR.
4: Matlab inversion.
5: Explicit inverse.
|-valign="top"
|||''Solver.Method''||Restart or not for C-G method:
0: Use restart in CG-method each n:th step.
1: Use restart in CG-method each n:th step.
|-
|||''LineParam''||Structure with line search parameters, see routine ''LineSearch'' and [[TOMLAB Appendix A]].
|-valign="top"
|||''optParam''||Structure with special fields for optimization parameters, see [[TOMLAB Appendix A]].
Fields used are: ''eps_absf'', ''eps_f'', ''eps_g'', ''eps_x'', ''eps_Rank'', ''MaxIter'', ''wait'', ''size_x'', ''xTol'', ''size_f'', ''LineSearch'', ''LineAlg'', ''IterPrint'' and ''QN_InitMatrix''.
|-
|||''PriLevOpt''||Print level.
|-
|''varargin''||colspan="2"|Other parameters directly sent to low level routines.
|}
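As a hedged illustration of how the fields above are typically set (the field names follow the table; the numeric values are arbitrary examples, not recommendations):

<source lang="matlab">
% Select the algorithm and equation-system method per the tables above.
Prob.Solver.Alg       = 1;       % Newton with subspace minimization (SVD)
Prob.Solver.Method    = 0;       % SVD (default)
Prob.f_Low            = -1E300;  % lower bound on the function value
Prob.optParam.MaxIter = 500;     % maximum number of iterations
Prob.PriLevOpt        = 0;       % no printed output
Result = ucSolve(Prob);
</source>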


==Outputs==


{| class="wikitable"
!''Result''||colspan="2"|Structure with result from optimization. The following fields are changed:
|-
|||''x_k''||Optimal point.
|-
|||''f_k''||Function value at optimum.
|-
|||''g_k''||Gradient value at optimum.
|-
|||''H_k''||Hessian value at optimum.
|-
|||''B_k''||Quasi-Newton approximation of the Hessian at optimum.
|-
|||''v_k''||Lagrange multipliers.
|-
|||''x_0''||Starting point.
|-
|||''f_0''||Function value at start.
|-
|||''xState''||State of each variable, described in [[TOMLAB Appendix B]].
|-
|||''Iter''||Number of iterations.
|-
|||''ExitFlag''||0 if convergence to local min. Otherwise errors.
|-valign="top"
|||''Inform''||Binary code telling type of convergence:
1: Iteration points are close.
2: Projected gradient small.
4: Relative function value reduction low for ''LowIts'' iterations.
101: Maximum number of iterations reached.
102: Function value below given estimate.
104: Convergence to a saddle point.
|-
|||''Solver''||Solver used.
|-
|||''SolverAlgorithm''||Solver algorithm used.
|-
|||''Prob''||Problem structure used.
|}
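A short sketch of inspecting these fields after a run; the interpretation of ''ExitFlag'' and ''Inform'' follows the table above, and the rest is generic MATLAB:

<source lang="matlab">
Result = ucSolve(Prob);
if Result.ExitFlag == 0                % converged to a local minimum
    fprintf('Inform = %d after %d iterations\n', Result.Inform, Result.Iter);
    x_opt = Result.x_k;                % optimal point
    f_opt = Result.f_k;                % function value at optimum
else
    fprintf('ucSolve returned ExitFlag = %d\n', Result.ExitFlag);
end
</source>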
==Description==


The solver ''ucSolve'' includes several of the most popular search step methods for unconstrained optimization: the Newton method, the quasi-Newton BFGS and DFP methods, the Fletcher-Reeves and Polak-Ribiere conjugate-gradient methods, and the Fletcher conjugate descent method. The quasi-Newton methods may update either the inverse Hessian (standard) or the Hessian itself. The Newton method and the quasi-Newton methods updating the Hessian use a subspace minimization technique to handle rank problems. The quasi-Newton algorithms also use safeguarding techniques to avoid rank problems in the updated matrix. The line search algorithm in the routine ''LineSearch'' is a modified version of an algorithm by Fletcher. Bound constraints are treated as described in Gill, Murray and Wright. The accuracy in the line search is critical for the performance of the quasi-Newton BFGS and DFP methods and for the CG methods. If the accuracy parameter ''Prob.LineParam.sigma'' is set to the default value 0.9, ''ucSolve'' changes it automatically according to:


{| class="wikitable"
!''Prob.Solver.Alg''||''Prob.LineParam.sigma''
|-
|4,5 (DFP)||0.2
|-
|6,7,8 (CG)||0.01
|}

==M-files Used==

''ResultDef.m'', ''LineSearch.m'', ''iniSolve.m'', ''tomSolve.m'', ''endSolve.m''
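For example, when a CG method is selected and ''sigma'' is left at the default, ''ucSolve'' applies the tighter value automatically (a sketch; choosing any value other than 0.9 explicitly should keep it in force):

<source lang="matlab">
Prob.Solver.Alg      = 7;      % Polak-Ribiere CG
Prob.LineParam.sigma = 0.9;    % default value; ucSolve changes it to 0.01 for CG
Result = ucSolve(Prob);
</source>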
==See Also==


*[[clsSolve]]

Latest revision as of 08:08, 10 January 2012

