Blurb::
Global Surrogate Based Optimization, a.k.a. EGO
Description::
The Efficient Global Optimization (EGO) method was first developed by
Jones, Schonlau, and Welch \cite Jon98. In EGO,
a stochastic response surface approximation for the objective function
is developed based on a set of sample points from the "true" simulation.

Note that two major differences exist between our implementation
and that of \cite Jon98. First, rather than
using a branch-and-bound method to find the point that maximizes the
EIF, we use the DIRECT global optimization method.

Second, we support both global optimization and global nonlinear least
squares, as well as general nonlinear constraints, through abstraction
and subproblem recasting.
17
18The efficient global method is in prototype form. Currently, we do not
19expose any specification controls for the underlying Gaussian process
20model used or for the optimization of the expected improvement
21function (which is currently performed by the NCSU DIRECT algorithm
22using its internal defaults).

By default, EGO uses the Surfpack GP (Kriging) model, but the %Dakota
implementation may be selected instead. If \c use_derivatives is
specified, the GP model is built using available derivative data
(Surfpack GP only).
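
For example, a method fragment selecting the GP implementation might
look like the following (a sketch only; the \c gaussian_process
keyword and its \c surfpack / \c dakota values are assumed to match
the current method specification):

\verbatim
method
  efficient_global
    gaussian_process surfpack   # default; 'dakota' selects the Dakota GP instead
    use_derivatives             # build the GP from derivative data (Surfpack GP only)
\endverbatim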

<b>Expected HDF5 Output</b>

If Dakota was built with HDF5 support and run with the
\ref environment-results_output-hdf5 keyword, this method
writes the following results to HDF5:

- \ref hdf5_results-best_params
- \ref hdf5_results-best_obj_fncs (when \ref responses-objective_functions are specified)
- \ref hdf5_results-best_constraints
- \ref hdf5_results-calibration (when \ref responses-calibration_terms are specified)

Topics::  global_optimization_methods, surrogate_based_optimization_methods
Examples::
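The following input sketch applies EGO to a two-variable problem. The
bounds, seed, and use of the built-in \c text_book driver are
illustrative assumptions, not a tested study:

\verbatim
method
  efficient_global
    seed = 123456

variables
  continuous_design = 2
    lower_bounds   -2.0  -2.0
    upper_bounds    2.0   2.0
    descriptors    'x1'  'x2'

interface
  analysis_drivers = 'text_book'
    direct

responses
  objective_functions = 1
  no_gradients
  no_hessians
\endverbatim
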
Theory::
The particular response surface used is a Gaussian process (GP). The
GP allows one to calculate the prediction at a new input location as
well as the uncertainty associated with that prediction. The key idea
in EGO is to maximize the Expected Improvement Function (EIF), defined
below. The EIF is used to select the location at which a new training
point should be added to the Gaussian process model: it measures the
improvement in the objective function that can be expected from adding
a point at a given location. A point can be expected to produce an
improvement in the objective function if its predicted value is better
than the current best solution, or if the uncertainty in its
prediction is such that the probability of it producing a better
solution is high. Because the uncertainty is higher in regions of the
design space with few observations, this provides a balance between
exploiting areas of the design space that predict good solutions and
exploring areas where more information is needed. EGO trades off this
"exploitation vs. exploration." The general procedure for these
EGO-type methods is:

\li Build an initial Gaussian process model of the objective function.
\li Find the point that maximizes the EIF. If the EIF value at this
point is sufficiently small, stop.
\li Evaluate the objective function at the point where the EIF is
maximized. Update the Gaussian process model using this new point.
\li Return to the previous step.
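
Written out, the EIF maximized in the second step takes the standard
closed form of \cite Jon98 (sketched here in the usual notation, which
may differ in minor details from Dakota's internal definition). Here
\f$\hat{\mu}(x)\f$ and \f$\hat{\sigma}(x)\f$ are the GP mean and
standard deviation at a candidate point \f$x\f$, \f$f^*\f$ is the
current best objective value, and \f$\Phi\f$ and \f$\varphi\f$ are the
standard normal CDF and PDF:

\f[
\mathrm{EIF}(x) = \left(f^* - \hat{\mu}(x)\right)
\Phi\!\left(\frac{f^* - \hat{\mu}(x)}{\hat{\sigma}(x)}\right)
+ \hat{\sigma}(x)\,
\varphi\!\left(\frac{f^* - \hat{\mu}(x)}{\hat{\sigma}(x)}\right)
\f]

The first term rewards candidates whose predicted value improves on
\f$f^*\f$ (exploitation); the second rewards candidates with large
predictive uncertainty (exploration).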

Faq::
See_Also::	method-surrogate_based_local, method-surrogate_based_global