Blurb::Pareto set optimization
Description::
In the Pareto set minimization method (\c pareto_set), a series of
optimization or least squares calibration runs is performed for
different weightings applied to multiple objective functions.  This
set of optimal solutions defines a "Pareto set," which is useful for
investigating design trade-offs between competing objectives.  The
method is similar enough to the \c multi_start technique that both
algorithms are implemented in the same ConcurrentMetaIterator class.

The \c pareto_set specification must identify an optimization or least
squares calibration method using either a \c method_pointer or a \c
method_name plus optional \c model_pointer.  This minimizer is
responsible for computing a set of optimal solutions from a set of
response weightings (multi-objective weights or least squares term
weights).  These weightings can be specified as follows: (1) using \c
random_weight_sets, in which case weightings are selected randomly
within [0,1] bounds; (2) using \c weight_sets, in which the weighting
sets are specified explicitly in a list; or (3) using both \c
random_weight_sets and \c weight_sets, in which case the combined set
of weights is used.  In aggregate, at least one set of weights must be
specified.  The resulting set of optimal solutions forms the Pareto
set; an example specification is given in the Examples section below.

<b>Expected HDF5 Output</b>

If Dakota was built with HDF5 support and run with the
\ref environment-results_output-hdf5 keyword, this method
writes the best parameters and responses returned by each sub-iterator.
The weights are provided as metadata.  See the \ref hdf5_results-ms_pareto
documentation for details.
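For reference, HDF5 output is requested in the \c environment block; a
minimal sketch (keyword names only, with no optional settings) is:

\verbatim
environment
  results_output
    hdf5
\endverbatim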
Topics::
Examples::
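The following sketch shows one way a \c pareto_set study might be set
up.  It assumes a problem with two objective functions; the method ids
('PS', 'NLP'), the particular weight sets, and the choice of \c
conmin_frcg as the sub-method are illustrative only, and the
variables, interface, and responses blocks are omitted.

\verbatim
environment
  top_method_pointer = 'PS'

method
  id_method = 'PS'
  pareto_set
    method_pointer = 'NLP'
    weight_sets =
      1.0   0.0
      0.5   0.5
      0.0   1.0

method
  id_method = 'NLP'
  conmin_frcg
\endverbatim

Each weight set produces one optimization run, so this study returns
three points on the Pareto set.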
Theory::
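As a sketch of the usual weighted-sum interpretation (the exact
aggregation applied by a given sub-method is governed by its own
weighting keywords), each weight set \f$w\f$ defines a single
scalarized subproblem of the form
\f[ \min_{x} \; \sum_{k} w_k f_k(x), \f]
where the \f$f_k(x)\f$ are the objective functions (for least squares
calibration, the weights instead scale the squared residual terms).
Solving one such subproblem per weight set yields the set of solutions
that approximates the Pareto set.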
Faq::
See_Also::