Blurb::
Response type suitable for calibration or least squares

Description::
Responses for a calibration study are specified using \c
calibration_terms and optional keywords for weighting/scaling, data,
and constraints.  In general, when calibrating, Dakota automatically
tunes parameters \f$ \theta \f$ to minimize discrepancies or residuals
between the model and the data:

\f[ R_{i} = y^{Model}_i(\theta) - y^{Data}_{i}. \f]

Note that the problem specification affects what must be returned to
Dakota in the \ref interface-analysis_drivers-fork-results_file :

\li If calibration data <em>is not specified</em>, then each of the
  calibration terms returned to Dakota through the \ref interface is a
  residual \f$ R_{i} \f$ to be driven toward zero.

\li If calibration data <em>is specified</em>, then each of the
  calibration terms returned to Dakota must be a response \f$
  y^{Model}_i(\theta) \f$, which Dakota will difference with the data
  in the specified data file.
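
For example, a minimal sketch of a responses block for a calibration
study with experimental data (the file name \c mydata.dat and the
number of terms are illustrative, not prescriptive):

\verbatim
responses
  calibration_terms = 3
    calibration_data_file = 'mydata.dat'
      freeform
  no_gradients
  no_hessians
\endverbatim

With the data file present, the interface returns the three model
responses and Dakota forms the residuals internally; omitting \c
calibration_data_file would instead require the interface to return
residuals directly.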

<b> Constraints </b>

(See general problem formulation at \ref
responses-objective_functions.) The keywords \ref
responses-calibration_terms-nonlinear_inequality_constraints and \ref
responses-calibration_terms-nonlinear_equality_constraints specify the
number of nonlinear inequality constraints \em g and nonlinear
equality constraints \em h, respectively.  When interfacing to
external applications, the responses must be returned to %Dakota in
this order in the \ref interface-analysis_drivers-fork-results_file :
<ol> <li>calibration terms</li> <li>nonlinear inequality
constraints</li> <li>nonlinear equality constraints</li> </ol>
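
For instance, with two calibration terms, one nonlinear inequality
constraint, and one nonlinear equality constraint, the results file
might contain the following (values are illustrative; the labels shown
are Dakota's default response descriptors):

\verbatim
 1.2345678901e+00 least_sq_term_1
-2.3456789012e-01 least_sq_term_2
 3.4567890123e-02 nln_ineq_con_1
 4.5678901234e-03 nln_eq_con_1
\endverbatim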

An optimization problem's linear constraints are provided to the
solver at startup only and do not need to be included in the data
returned on every function evaluation. Linear constraints are
therefore specified in the \ref variables block through the \ref
variables-linear_inequality_constraint_matrix \f$A_i\f$ and \ref
variables-linear_equality_constraint_matrix \f$A_e\f$.

Lower and upper bounds on the design variables \em x are also
specified in the \ref variables block.
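
As a sketch, a variables block with two bounded design variables and
one linear inequality constraint \f$ x_1 + 2 x_2 \le 5 \f$ might look
like the following (all values are illustrative):

\verbatim
variables
  continuous_design = 2
    lower_bounds  -2.0  -2.0
    upper_bounds   2.0   2.0
  linear_inequality_constraint_matrix = 1.0  2.0
  linear_inequality_upper_bounds = 5.0
\endverbatim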

<b> Problem Transformations</b>

Weighting or scaling calibration terms is often appropriate to account
for measurement error or to condition the problem for easier solution.
Weighting or scaling transformations are applied in the following
order (a configuration sketch follows the list):

<ol>
<li> When present, observation error variance \f$ \sigma_i \f$ or full
     covariance \f$ \Sigma \f$, optionally specified through \c
     experiment_variance_type, is applied to residuals first:

     \f[  R^{(1)}_i = \frac{R_{i}}{\sigma_{i}} = \frac{y^{Model}_i(\theta) -
     y^{Data}_{i}}{\sigma_{i}}  \textrm{, or} \f]

     \f[
     R^{(1)} = \Sigma^{-1/2} R = \Sigma^{-1/2} \left(y^{Model}(\theta) -
     y^{Data}\right), \f]
     resulting in the typical variance-weighted least squares formulation
     \f[ \textrm{min}_\theta \; R(\theta)^T \Sigma^{-1} R(\theta) \f]
</li>
<li> Any active scaling transformations are applied next, e.g., for
     characteristic value scaling:

     \f[ R^{(2)}_i = \frac{R^{(1)}_i}{s_i} \f]
</li>
<li> Finally, the optional weights are applied in a way that preserves
    backward compatibility:

    \f[ R^{(3)}_i = \sqrt{w_i} \, R^{(2)}_i \f]

    so the ultimate least squares formulation, e.g., in a scaled and
    weighted case, would be

    \f[ f = \sum_{i=1}^{n} w_i \left( \frac{y^{Model}_i -
    y^{Data}_i}{s_i} \right)^2 \f]
</li>
</ol>

<em>Note that observation error variance and weights are mutually
exclusive in a calibration problem.</em>
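
As a concrete sketch, scaling and weighting (which, unlike variance
and weights, may be combined) might be specified as follows; the scale
and weight values are illustrative, and activating scaling typically
also requires the \c scaling keyword in the method block:

\verbatim
responses
  calibration_terms = 2
    primary_scale_types = 'value'
    primary_scales = 10.0  0.1
    weights = 1.0  4.0
  no_gradients
  no_hessians
\endverbatim

With these settings, each residual is first divided by its
characteristic scale \f$ s_i \f$ and then multiplied by
\f$ \sqrt{w_i} \f$, as in the formulation above.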

Topics::
Examples::
Theory::

%Dakota calibration terms are typically used to solve problems of
parameter estimation, system identification, and model
calibration/inversion. Local least squares calibration problems are
most efficiently solved using special-purpose least squares solvers
such as Gauss-Newton or Levenberg-Marquardt; however, they may also be
solved using any general-purpose optimization algorithm in %Dakota.
While Dakota can solve these problems with either least squares or
optimization algorithms, the response data sets to be returned from
the simulator are different when using \ref
responses-objective_functions versus \ref responses-calibration_terms.

Least squares calibration involves a set of residual functions,
whereas optimization involves a single objective function (the sum of
the squares of the residuals), i.e.,
\f[ f = \sum_{i=1}^{n} R_i^2 = \sum_{i=1}^{n} \left(y^{Model}_i(\theta) -
y^{Data}_{i} \right)^2 \f]
where \e f is the objective function and the set of \f$R_i\f$ are the
residual functions, most commonly defined as the difference between a
model response and the corresponding data value. Function values and
derivative data in the least squares case therefore involve the values
and derivatives of the residual functions, whereas the optimization
case involves values and derivatives of the sum-of-squares objective
function. This means that in the least squares calibration case, the
user must return each of the \c n residuals as a separate calibration
term. Switching between the two approaches sometimes requires
different simulation interfaces capable of returning the different
granularity of response data required, although %Dakota supports
automatic recasting of residuals into a sum of squares for
presentation to an optimization method. Typically, the user must
compute the difference between the model results and the observations
when computing the residuals. However, the user has the option of
specifying the observational data (e.g., from physical experiments or
other sources) in a file.
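
For instance, a freeform \c calibration_data_file for a single
experiment with three observed values might contain one row
(illustrative values; additional experiments would add rows):

\verbatim
1.10  2.05  2.95
\endverbatim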

Faq::
See_Also::	responses-objective_functions, responses-response_functions
