Blurb::
Performs an incremental Latin Hypercube Sampling (LHS) study

Description::
Use of \c refinement_samples replaces
the deprecated \c sample_type \c incremental_lhs and
\c sample_type \c incremental_random.

An incremental random sampling approach will successively add
samples to an initial or existing random sampling study
according to the sequence of \c refinement_samples. Dakota reports
statistics (mean, variance, percentiles, etc.) separately for the
initial \c samples and for each \c refinement_samples increment at
the end of the study. For an LHS design, each \c refinement_samples
increment must equal the total number of samples generated so far,
so that each refinement doubles the sample size (e.g., an initial
50 samples may be followed by increments of 50, 100, 200). For
\c sample_type \c random, there is no constraint on the number of
samples that can be added.

Often, this approach is used when you have an initial study with
sample size N1 and you want to perform an additional N1 samples.
The initial N1 samples may be contained in a Dakota restart file, so
only N1 (instead of 2 x N1) potentially expensive function
evaluations will be performed.


This process can be extended by repeatedly increasing (for LHS: doubling)
the \c refinement_samples:
\verbatim
method
  sampling
    seed = 1337
    samples = 50
    refinement_samples = 50 100 200 400 800
\endverbatim

<b> Usage Tips </b>

The incremental approach is useful if it is uncertain how many
simulations can be completed within available time.

See the examples below and
the \ref running_dakota-usage and \ref dakota_restart pages.

Topics::
Examples::

Suppose an initial study is conducted using \c sample_type \c lhs
with \c samples = 50.  A follow-on study uses a new input file where
\c samples = 50, and \c refinement_samples = 50, resulting in 50
reused samples (from restart) and 50 new LHS samples.  The 50 new
samples will be combined with the 50 previous samples to generate a
combined sample of size 100 for the analysis.

One way to ensure the restart file is saved is to specify a non-default name
via a command line option:
\verbatim
dakota -input LHS_50.in -write_restart LHS_50.rst
\endverbatim

which uses the input file:

\verbatim
# LHS_50.in

environment
  tabular_data
    tabular_data_file = 'LHS_50.dat'

method
  sampling
    seed = 1337
    sample_type lhs
    samples = 50

model
  single

variables
  uniform_uncertain = 2
    descriptors  =   'input1'     'input2'
    lower_bounds =  -2.0     -2.0
    upper_bounds =   2.0      2.0

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians
\endverbatim
and the restart file is written to \c LHS_50.rst.
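
Before launching the follow-on study, the completed evaluations in the
restart file can be inspected with Dakota's \c dakota_restart_util
utility (a sketch; the exact report format depends on the Dakota
version installed):
\verbatim
# Print the evaluations stored in the restart file
dakota_restart_util print LHS_50.rst

# Or export them to a tabular file for review
dakota_restart_util to_tabular LHS_50.rst LHS_50_evals.dat
\endverbatim
Confirming that all 50 evaluations are present helps verify that the
incremental study will reuse them rather than recompute them.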


Then an incremental LHS study can be run with:
\verbatim
dakota -input LHS_100.in -read_restart LHS_50.rst -write_restart LHS_100.rst
\endverbatim
where \c LHS_100.in is shown below, and \c LHS_50.rst is the restart
file containing the results of the previous LHS study. In the example input
files for the initial and incremental studies, the values for \c seed match.
This ensures that the initial 50 samples generated in both runs are the same.
\verbatim
# LHS_100.in

environment
  tabular_data
    tabular_data_file = 'LHS_100.dat'

method
  sampling
    seed = 1337
    sample_type lhs
    samples = 50
      refinement_samples = 50

model
  single

variables
  uniform_uncertain = 2
    descriptors  =   'input1'     'input2'
    lower_bounds =  -2.0     -2.0
    upper_bounds =   2.0      2.0

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians
\endverbatim

The user will get 50 new LHS samples which
maintain both the correlation and stratification of the original LHS
sample. The new samples will be combined with the original
samples to generate a combined sample of size 100.
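
The doubling pattern can continue in further studies. As a sketch
following the same conventions (the file names here are hypothetical),
a third study could reuse the 100 completed evaluations and add 100
more:
\verbatim
dakota -input LHS_200.in -read_restart LHS_100.rst -write_restart LHS_200.rst
\endverbatim
where \c LHS_200.in keeps \c seed \c = \c 1337 and \c samples \c = \c 50,
and specifies
\verbatim
      refinement_samples = 50 100
\endverbatim
so the first refinement level reproduces the earlier increment from
restart and the second adds 100 new samples, for a combined sample of
size 200.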

Theory::
Faq::
See_Also::