Blurb:: Multifidelity uncertainty quantification using function train expansions

Description::

As described in the \ref method-function_train method and the
\ref model-surrogate-global-function_train model,
the function train (FT) approximation is a polynomial expansion that exploits low-rank
structure within the mapping from input random variables to output quantities of interest
(QoI).  For multilevel and multifidelity function train approximations, we decompose this
expansion into several constituent expansions, one per model form or solution control
level: independent function train approximations are constructed for the
low-fidelity (or coarse-resolution) model and for one or more levels of model discrepancy.

In a three-model case with low-fidelity (L), medium-fidelity (M), and
high-fidelity (H) models and an additive discrepancy approach, we can denote this as:

\f[ Q^H \approx \hat{Q}_{r_L}^L + \hat{\Delta}_{r_{ML}}^{ML} + \hat{\Delta}_{r_{HM}}^{HM} \f]

where \f$\Delta^{ij}\f$ represents a discrepancy expansion computed from
\f$Q^i - Q^j\f$, and reduced-rank representations of these discrepancies may
be targeted (\f$ r_{HM} < r_{ML} < r_L \f$).
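
For illustration only, the following Python sketch (not Dakota input or Dakota source)
composes such an additive multifidelity surrogate for a toy one-variable problem, using
ordinary polynomial fits of decreasing degree as stand-ins for the decreasing-rank FT
expansions; the model functions and sample counts are hypothetical.

\verbatim
# Minimal sketch of the additive decomposition Q^H ~= Q^L + Delta^{ML} + Delta^{HM}.
# Polynomial fits of decreasing degree stand in for decreasing-rank FT expansions.
import numpy as np

# Hypothetical model hierarchy: low-, medium-, and high-fidelity responses.
def q_low(x):  return np.sin(x)
def q_med(x):  return np.sin(x) + 0.10 * x**2
def q_high(x): return np.sin(x) + 0.12 * x**2 + 0.01 * x**3

# Many inexpensive LF samples; progressively fewer samples per discrepancy level.
x_L  = np.linspace(-2.0, 2.0, 200)
x_ML = np.linspace(-2.0, 2.0, 40)
x_HM = np.linspace(-2.0, 2.0, 10)

Poly = np.polynomial.Polynomial
fit_L  = Poly.fit(x_L,  q_low(x_L),                  deg=8)  # expansion of Q^L
fit_ML = Poly.fit(x_ML, q_med(x_ML) - q_low(x_ML),   deg=4)  # expansion of Q^M - Q^L
fit_HM = Poly.fit(x_HM, q_high(x_HM) - q_med(x_HM),  deg=2)  # expansion of Q^H - Q^M

# Composite surrogate for the high-fidelity QoI.
def q_high_hat(x):
    return fit_L(x) + fit_ML(x) + fit_HM(x)

x_test = np.linspace(-2.0, 2.0, 5)
print(np.abs(q_high_hat(x_test) - q_high(x_test)))  # small composite error
\endverbatim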

In multifidelity approaches, sample allocation for the constituent expansions can be
performed with no adaptive refinement or with individual or integrated adaptive
refinement, as described in
\ref method-multifidelity_function_train-allocation_control.
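
As a conceptual sketch of the integrated (greedy) strategy, again in Python rather than
Dakota input, the loop below places each refinement increment on whichever constituent
expansion offers the largest estimated error reduction per unit cost; the error and cost
numbers are hypothetical placeholders, not Dakota's actual allocation logic.

\verbatim
# Conceptual sketch of integrated (greedy) allocation across constituent expansions.
# Individual refinement would instead refine each expansion to its own tolerance.
levels = {
    "Q_L":      {"error": 1.0e-1, "cost": 1.0},   # low-fidelity expansion
    "Delta_ML": {"error": 5.0e-2, "cost": 5.0},   # M-L discrepancy expansion
    "Delta_HM": {"error": 2.0e-2, "cost": 25.0},  # H-M discrepancy expansion
}

budget, spent, tol = 200.0, 0.0, 1.0e-3
while spent < budget and sum(l["error"] for l in levels.values()) > tol:
    # Pick the expansion with the best estimated error reduction per unit cost.
    name, lev = max(levels.items(), key=lambda kv: kv[1]["error"] / kv[1]["cost"])
    spent += lev["cost"]
    lev["error"] *= 0.5  # placeholder: assume each increment halves the error
    print(f"refine {name}: cumulative cost {spent:.0f}, error {lev['error']:.4f}")
\endverbatim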

<b> Expected HDF5 Output </b>

If Dakota was built with HDF5 support and run with the
\ref environment-results_output-hdf5 keyword, this method
writes the following results to HDF5:

- \ref hdf5_results-se_moments (expansion moments only)
- \ref hdf5_results-pdf
- \ref hdf5_results-level_mappings

In addition, the execution group has the attribute \c equiv_hf_evals, which
records the equivalent number of high-fidelity evaluations.
Topics::

Examples::
Theory::
Faq::
See_Also:: model-surrogate-global-function_train, method-function_train
