CGO ego

From TomWiki

This page is part of the CGO Manual. See CGO Manual.

Purpose

Solve general constrained mixed-integer global black-box optimization problems with costly objective functions.

The optimization problem is of the following form:

    min   f(x)
     x
    s/t   x_L <=   x  <= x_U
          b_L <=  Ax  <= b_U
          c_L <= c(x) <= c_U,    x_j integer for all j in I

where x, x_L, x_U ∈ R^d; f(x) ∈ R; the linear constraints are defined by A ∈ R^(m1 x d), b_L, b_U ∈ R^m1; and the nonlinear constraints are defined by c_L, c(x), c_U ∈ R^m2. The variables x_I are restricted to be integers, where I is an index subset of {1,...,d}, possibly empty. It is assumed that the function f(x) is continuous with respect to all variables, even if there is a demand that some variables only take integer values. Otherwise it would not make sense to do the surrogate modeling of f(x) used by all CGO solvers.

f(x) is assumed to be a costly function, while c(x) is assumed to be cheaply computed. Any costly constraints can be treated by adding penalty terms to the objective function in the following way:

    min   p(x) = f(x) + sum_j w_j * max( 0, c_j(x) - c_U_j, c_L_j - c_j(x) )
     x

where weighting parameters w_j have been added. The user then returns p(x) instead of f(x) to the CGO solver.
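As an illustration of this penalty transformation, the following is a minimal Python sketch (hypothetical helper names; this is not TOMLAB code, which is MATLAB-based):

```python
import numpy as np

# Hedged sketch: wrap a costly objective f and costly constraints c into the
# penalized objective p(x) = f(x) + sum_j w_j * max(0, c_j(x)-c_U_j, c_L_j-c_j(x)).
def make_penalized_objective(f, c, c_L, c_U, w):
    def p(x):
        cx = np.atleast_1d(c(x))
        # Element-wise violation of the lower/upper constraint bounds.
        violation = np.maximum(0.0, np.maximum(cx - c_U, c_L - cx))
        return f(x) + float(np.dot(w, violation))
    return p

# Example: f(x) = x^2 with an (assumed costly) constraint -1 <= x <= 1, w = 10.
p = make_penalized_objective(lambda x: x**2, lambda x: x,
                             np.array([-1.0]), np.array([1.0]),
                             np.array([10.0]))
# p(2.0) -> 14.0 (f = 4 plus penalty 10*1); p(0.5) -> 0.25 (feasible, no penalty)
```

The solver then only sees p(x); feasible points are unaffected, while infeasible points are pushed up in proportion to the weights w_j.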

Calling Syntax

Result=ego(Prob,varargin) 
Result = tomRun('ego', Prob);


Usage

See CGO solver usage

Description of Inputs

Problem structure

The following fields are used in the problem description structure Prob:

Field Description
Name See Common input for all CGO solvers
FUNCS.f See Common input for all CGO solvers
FUNCS.c See Common input for all CGO solvers
x_L See Common input for all CGO solvers
x_U See Common input for all CGO solvers
b_L See Common input for all CGO solvers
b_U See Common input for all CGO solvers
A See Common input for all CGO solvers
c_L See Common input for all CGO solvers
c_U See Common input for all CGO solvers
WarmStart See Common input for all CGO solvers
user See Common input for all CGO solvers
MaxCPU See Common input for all CGO solvers
PriLevOpt See Common input for all CGO solvers
optParam See Common input for all CGO solvers
CGO See the table below, but also this table for input common to all CGO solvers
GO See Common input for all CGO solvers
MIP See Common input for all CGO solvers
varargin Additional parameters to ego are sent to the costly f(x)


- Special EGO algorithm parameters in Prob.CGO -
EGOAlg Main algorithm in the EGO solver

=1 Run expected improvement steps (modN = 0,1,2,...). If there is no f(x) improvement, use the DACE surface minimum (modN = -1) for one step.

=2 Run expected improvement steps (modN = 0) until ExpI/|yMin| < TolExpI for 3 successive steps (modN = 1,2,3) without f(x) improvement (fRed <= 0).
After 2 such steps (when modN = 2), one step using the DACE surface minimum (modN = -1) is tried. If then fRed > TolExpI, reset to modN = 0 steps.

=3 Compute trial points from both expected improvement and the DACE surface minimum in every step

=4 Compute the DACE surface minimum in every step

Default EGOAlg = 1, but if any variable is integer valued, the default is EGOAlg = 3

pEst Norm parameters, fixed or estimated, also see p0, pLow, pUpp (default pEst = 0).

0 = Fixed constant p-value for all components (default, p0=1.99).

1 = Estimate one p-value valid for all components.

2 = Estimate d p-values, one norm parameter for each component.

p0 Fixed p-value (pEst==0, default = 1.99) or
initial p-value (pEst == 1, default 1.9) or
d-vector of initial p-values (pEst > 1, default 1.9*ones(d,1))
pLow If pEst == 0, not used

if pEst == 1, lower bound on p-value (default 1.98)

if pEst == 2, lower bounds on p (default 1.99*ones(d,1))

pUpp If pEst == 0, not used

if pEst == 1, upper bound on p-value (default 1.99999)

if pEst == 2, upper bounds on p (default 1.99999*ones(d,1))

snPLim Avoid the time-consuming global optimization in DACEFit for iterations > snPLim;
instead, only do a local search from the previous DACE parameter model.
This part is the most time-consuming in ego.
Default 15 if pEst = 0, Default 20 if pEst = 1, Default 25 if pEst > 1
TolExpI Convergence tolerance for expected improvement (default 1E-7).
SAMPLEF Sample criterion function:

0 = Expected improvement (default)

1 = Kushner's criterion (related option: KEPS)

2 = Lower confidence bounding (related option: LCBB)

3 = Generalized expected improvement (related option: GEIG)

4 = Maximum variance

5 = Watson and Barnes 2
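For reference, the default criterion (SAMPLEF = 0) has the standard closed form from Jones et al. for a Gaussian predictor with mean mu and standard deviation sigma: ExpI = (fMin - mu)*Phi(u) + sigma*phi(u), where u = (fMin - mu)/sigma and Phi, phi are the standard normal CDF and PDF. A minimal Python sketch of this formula (an illustration, not TOMLAB code):

```python
import math

def expected_improvement(f_min, mu, sigma):
    """Closed-form expected improvement for a Gaussian predictor N(mu, sigma^2)."""
    if sigma <= 0.0:
        # No predictive uncertainty: improvement is deterministic.
        return max(f_min - mu, 0.0)
    u = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))         # standard normal CDF
    phi = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_min - mu) * Phi + sigma * phi
```

At a point where the surrogate predicts the current best value (mu = f_min) with sigma = 1, ExpI = phi(0) ≈ 0.399; the criterion rewards both low predicted values and high predictive uncertainty.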

KEPS The ε parameter in Kushner's criterion.
If set to a positive value, ε is taken as KEPS. If set to a negative value, ε is taken as |KEPS|*f_min.

Default: -0.01
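The ε selection rule above can be written as a one-liner (illustrative only, not TOMLAB code):

```python
def kushner_epsilon(KEPS, f_min):
    # Positive KEPS is used directly as epsilon; a negative KEPS gives an
    # epsilon relative to the current best function value, |KEPS| * f_min.
    return KEPS if KEPS > 0 else abs(KEPS) * f_min
```

With the default KEPS = -0.01 and a current best value f_min = 10, this gives ε = 0.1.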

GEIG The exponent g in the generalized expected improvement function
Default: 2
LCBB Lower Confidence Bounding parameter b
Default 2
AddSurfMin Add up to AddSurfMin global or local minima on DACE surface as search points, based on estimated Lipschitz constants, number of components on bounds, and distance to sampled set X.
AddSurfMin = 0 implies no additional minima are added.
AddExpIMin Add up to AddExpIMin global or local minima on ExpI surface as search points, based on estimated Lipschitz constants, number of components on bounds, and distance to sampled set X.
AddExpIMin = 0 implies no additional minima are added.
Only possible if globalSolver = 'multiMin' or 'glcCluster'.
Default AddExpIMin = 1

Description of Outputs

Result structure

The output structure Result contains results from the optimization.
The following fields are set:

Field Description
x_k See Common output for all CGO solvers for details.
f_k
Iter
FuncEv
ExitText
ExitFlag Always 0
Inform Information parameter.
Value Signification
0 Normal termination.
1 Function value f(x) is less than fGoal.
2 Error in function value f(x), |f - fGoal| <= fTol, fGoal = 0.
3 Relative error in function value f(x) is less than fTol, i.e. |f - fGoal|/|fGoal| <= fTol.
6 All sample points same as previous point for the last 11 iterations.
7 All feasible integers tried.
9 Max CPU Time reached.
10 Expected improvement low for three iterations
CGO Subfield WarmStartInfo saves warm start information, the same information as in cgoSave.mat, see Common output for all CGO solvers#WSInfo.

Description

ego implements the algorithm EGO by D. R. Jones, Matthias Schonlau and William J. Welch presented in the paper "Efficient Global Optimization of Expensive Black-Box Functions".

Please note that Jones et al. use a slightly different problem formulation. The TOMLAB version of ego treats linear and nonlinear constraints separately.

ego samples points to which a response surface is fitted. The algorithm then balances between sampling new points and minimization on the surface.

ego and rbfSolve use the same format for saving warm start data. This means that it is possible to try one solver for a certain number of iterations/function evaluations and then do a warm start with the other. Example:

>> Prob	= probInit('glc_prob',1);		%   Set up problem structure
>> Result_ego = tomRun('ego',Prob);		%   Solve for a while with  ego
>> Prob.WarmStart = 1; 				%   Indicate a warm start
>> Result_rbf = tomRun('rbfSolve',Prob);	%   Warm start with rbfSolve

M-files Used

iniSolve.m, endSolve.m, conAssign.m, glcAssign.m

See Also

rbfSolve

Warnings

Observe that when a run is cancelled with CTRL+C, some memory allocated by ego will not be deallocated.

To deallocate, do:

>> clear egolib