CGO ego
This page is part of the CGO Manual. See CGO Manual.
Purpose
Solve general constrained mixed-integer global black-box optimization problems with costly objective functions.
The optimization problem is of the following form:

$$
\begin{array}{ll}
\min\limits_{x} & f(x) \\
\text{s.t.} & x_L \le x \le x_U \\
& b_L \le Ax \le b_U \\
& c_L \le c(x) \le c_U \\
& x_j \in \mathbb{N}\ \ \forall j \in I
\end{array}
$$

where $f(x) \in \mathbb{R}$; $x, x_L, x_U \in \mathbb{R}^d$;
the linear constraints are defined by $A \in \mathbb{R}^{m_1 \times d}$, $b_L, b_U \in \mathbb{R}^{m_1}$;
and the nonlinear constraints are defined by $c_L, c(x), c_U \in \mathbb{R}^{m_2}$.
The variables $x \in I$ are restricted to be integers,
where $I$ is an index subset of $\{1, \ldots, d\}$, possibly empty.
It is assumed that the function $f(x)$ is continuous with respect to all
variables, even if there is a demand that some variables only take integer values.
Otherwise it would not make sense to do the surrogate modeling of $f(x)$ used by all CGO solvers.
f(x) is assumed to be a costly function, while c(x) is assumed to be cheaply computed. Any costly constraints can be treated by adding penalty terms to the objective function in the following way:

$$
\min_{x}\ p(x) = f(x) + \sum_{j} w_j \max\bigl(0, c_j(x)\bigr),
$$

where weighting parameters $w_j$ have been added. The user then returns $p(x)$ instead of $f(x)$ to the CGO solver.
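As an illustration of this penalty approach, a costly constraint can be folded into the objective before it is handed to the solver. The following is only a sketch: `costly_f` and `costly_con` are hypothetical user functions, and the weight vector `w` must be chosen by the user.

```matlab
% Sketch: penalized objective for costly constraints costly_con(x) <= 0.
% costly_f and costly_con are hypothetical user functions; Prob.user.w is a
% user-chosen vector of positive penalty weights.
function p = penalized_f(x, Prob)
    w = Prob.user.w;                      % penalty weights, set by the user
    f = costly_f(x);                      % costly objective value
    c = costly_con(x);                    % costly constraint values
    p = f + sum(w(:) .* max(0, c(:)));    % add penalty for violations
end
```

The user would then point `Prob.FUNCS.f` at this function so that the CGO solver only ever sees `p(x)`.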
Calling Syntax
Result = ego(Prob, varargin)
Result = tomRun('ego', Prob);
Usage
See CGO solver usage
Description of Inputs
Problem structure
The following fields are used in the problem description structure Prob:
Field | Description |
---|---|
Name | See Common input for all CGO solvers |
FUNCS.f | See Common input for all CGO solvers |
FUNCS.c | See Common input for all CGO solvers |
x_L | See Common input for all CGO solvers |
x_U | See Common input for all CGO solvers |
b_L | See Common input for all CGO solvers |
b_U | See Common input for all CGO solvers |
A | See Common input for all CGO solvers |
c_L | See Common input for all CGO solvers |
c_U | See Common input for all CGO solvers |
WarmStart | See Common input for all CGO solvers |
user | See Common input for all CGO solvers |
MaxCPU | See Common input for all CGO solvers |
PriLevOpt | See Common input for all CGO solvers |
optParam | See Common input for all CGO solvers |
CGO | See the table below but also this table for input common to all CGO solvers |
GO | See common input for all CGO solvers |
MIP | See common input for all CGO solvers |
varargin | Additional parameters to ego are sent to the costly f(x) |
- Special EGO algorithm parameters in Prob.CGO - | |
---|---|
EGOAlg | Main algorithm in the EGO solver (default EGOAlg = 1).
=1 Run expected improvement steps (modN = 0,1,2,...). If no f(x) improvement, use the DACE surface minimum (modN = -1) in one step. =2 Run expected improvement steps (modN = 0) until ExpI/\|yMin\| < TolExpI for 3 successive steps (modN = 1,2,3) without f(x) improvement (fRed = 0), where yMin is fMin transformed by TRANSFORM. After 2 such steps (when modN = 2), one step using the DACE surface minimum (modN = -1) is tried. If then fRed > 0, reset to modN = 0 steps. |
pEst | Norm parameters, fixed or estimated, also see p0, pLow, pUpp (default pEst = 0).
0 = Fixed constant p-value for all components (default, p0 = 1.99). 1 = Estimate one p-value valid for all components. > 1 = Estimate d parameters, one for each component. |
p0 | Fixed p-value (pEst==0, default = 1.99) or initial p-value (pEst == 1, default 1.9) or d-vector of initial p-values (pEst > 1, default 1.9*ones(d,1)) |
pLow | Lower bound on p.
If pEst == 0, not used. If pEst == 1, lower bound on the p-value (default 1.0). If pEst > 1, lower bounds on p (default ones(d,1)). |
pUpp | Upper bound on p.
If pEst == 0, not used. If pEst == 1, upper bound on the p-value (default 2.0). If pEst > 1, upper bounds on p (default 2*ones(d,1)). |
TRANSFORM | Function value transformation.
0 - No transformation made. 1 - Median value transformation; use REPLACE instead. 2 - log(y) transformation made. 3 - -log(-y) transformation made. 4 - -1/y transformation made. By default, EGO computes the best possible transformation from the initial set of data. Note: no check is made on illegal y if the user gives TRANSFORM. |
EITRANSFORM | Transformation of the expected improvement function (default 1).
= 0 No transformation made. = 1 -log(-f) transformation made. = 2 -1/f transformation made. |
TolExpI | Convergence tolerance for expected improvement (default 1e-6). |
SAMPLEF | Sample criterion function:
0 = Expected improvement (default). 1 = Kushner's criterion (related option: KEPS). 2 = Lower confidence bounding (related option: LCBB). 3 = Generalized expected improvement (related option: GEIG). 4 = Maximum variance. 5 = Watson and Barnes 2. |
KEPS | The ε parameter in Kushner's criterion (default -0.01).
If KEPS > 0, then ε = KEPS. If KEPS < 0, then ε = \|KEPS\| * fMin. |
GEIG | The exponent g in the generalized expected improvement function (default 2.0). |
LCBB | Lower Confidence Bounding parameter b (default 2.0). |
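The parameters in the table above are set in the `Prob.CGO` subfield before calling the solver. A minimal sketch follows; the option values are illustrative choices, not recommendations:

```matlab
% Sketch: selecting EGO options via Prob.CGO (values are illustrative).
Prob.CGO.EGOAlg  = 1;     % main EGO algorithm variant
Prob.CGO.pEst    = 1;     % estimate one p-value for all components
Prob.CGO.p0      = 1.9;   % initial p-value for the estimation
Prob.CGO.TolExpI = 1e-6;  % expected improvement convergence tolerance
Prob.CGO.SAMPLEF = 2;     % lower confidence bounding criterion
Prob.CGO.LCBB    = 2.0;   % LCB parameter b
Result = tomRun('ego', Prob);
```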
Description of Outputs
Structure with result from optimization.
Output | Description |
---|---|
x_k | Matrix with the best points as columns. |
f_k | The best function value found so far. |
Iter | Number of iterations. |
FuncEv | Number of function evaluations. |
ExitText | Text string with information about the run. |
ExitFlag | Always 0. |
CGO | Subfield WarmStartInfo saves warm start information, the same information as in cgoSave.mat, see below. |
Inform | Information parameter.
0 = Normal termination. 1 = Function value f(x) is less than fGoal. 2 = Error in function value f(x), abs(f - fGoal) <= fTol, fGoal = 0. 3 = Relative error in function value f(x) is less than fTol, i.e. abs(f - fGoal)/abs(fGoal) <= fTol. 4 = No new point sampled for N iteration steps. 5 = All sample points same as the best point for N last iterations. 6 = All sample points same as previous point for N last iterations. 7 = All feasible integers tried. 9 = Max CPU time reached. 10 = Expected improvement low for three iterations. |
Result | Structure with result from optimization. |
cgoSave.mat | To make a warm start possible, all CGO solvers save information in the file cgoSave.mat. The file is created independently of the solver, which enables the user to call any CGO solver using the warm start information. cgoSave.mat is a MATLAB mat-file saved to the current directory. If the parameter SAVE is 1, the CGO solver saves the mat-file every iteration, which enables the user to break the run and restart using warm start from the current state. SAVE = 1 is currently always set by the CGO solvers. If the cgoSave.mat file fails to open for writing, the information is also available in the output field Result.CGO.WarmStartInfo, provided the run was concluded without interruption. Through a call to WarmDefGLOBAL, the Prob structure can be set up for warm start. In this case, the CGO solver will not load the data from cgoSave.mat. The file contains the following variables: |
Name | Problem name. Checked against the Prob.Name field if doing a warmstart. |
O | Matrix with sampled points (in original space). |
X | Matrix with sampled points (in unit space if SCALE==1) |
F | Vector with function values (penalty added for costly Cc(x)) |
F_m | Vector with function values (replaced). |
F00 | Vector of pure function values, before penalties. |
Cc | Matrix with costly constraint values, Cc(x). |
nInit | Number of initial points. |
Fpen | Vector with function values + additional penalty if infeasible using the linear constraints and noncostly nonlinear c(x). |
fMinIdx | Index of the best point found. |
rngState | Current state of the random number generator used. |
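The saved variables can be inspected directly between runs. A small sketch, assuming an ego run has already written cgoSave.mat to the current directory:

```matlab
% Sketch: inspecting warm start data saved by a previous ego run.
S = load('cgoSave.mat');   % read all saved variables into a struct
S.Name                     % problem name, checked on warm start
S.nInit                    % number of initial points
S.F(S.fMinIdx)             % best function value found so far
```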
Description
ego implements the algorithm EGO by D. R. Jones, Matthias Schonlau and William J. Welch presented in the paper "Efficient Global Optimization of Expensive Black-Box Functions".
Please note that Jones et al. have a slightly different problem formulation. The TOMLAB version of ego treats linear and nonlinear constraints separately.
ego samples points to which a response surface is fitted. The algorithm then balances between sampling new points and minimization on the surface.
ego and rbfSolve use the same format for saving warm start data. This means that it is possible to try one solver for a certain number of iterations/function evaluations and then do a warm start with the other. Example:
>> Prob = probInit('glc_prob',1);         % Set up problem structure
>> Result_ego = tomRun('ego',Prob);       % Solve for a while with ego
>> Prob.WarmStart = 1;                    % Indicate a warm start
>> Result_rbf = tomRun('rbfSolve',Prob);  % Warm start with rbfSolve
M-files Used
iniSolve.m, endSolve.m, conAssign.m, glcAssign.m
See Also
rbfSolve
Warnings
Observe that when cancelling with CTRL+C during a run, some memory allocated by ego will not be deallocated.
To deallocate, do:
>> clear egolib