Blurb::
(Deprecated keyword) Augments an existing Latin Hypercube Sampling (LHS) study

Description::
This keyword is deprecated.  Instead specify \c sample_type \c lhs
with \c refinement_samples.

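For comparison, a minimal sketch of the recommended replacement
specification, assuming the same seed and sample counts used in the
examples below:
\verbatim
method
  sampling
    seed = 1337
    sample_type lhs
    samples = 50
    refinement_samples = 50
\endverbatim
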
An incremental random sampling approach will augment an existing
random sampling study with refinement_samples to get better estimates
of mean, variance, and percentiles. The number of refinement_samples
at each refinement level must equal the total number of previous
samples, so that the total sample size doubles at each level.

Typically, this approach is used when you have an initial study with
sample size N1 and you want to perform an additional N1 samples.
Ideally, a Dakota restart file containing the initial N1 samples is
used, so that only N1 (instead of 2 x N1) potentially expensive
function evaluations will be performed.

This process can be extended by repeatedly doubling the \c refinement_samples:
\verbatim
method
  sampling
    seed = 1337
    samples = 50
    refinement_samples = 50 100 200 400 800
\endverbatim
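With this specification the total sample size grows from the initial
50 to 100, 200, 400, 800, and finally 1600 after the last refinement level.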

<b> Usage Tips </b>

The incremental approach is useful if it is uncertain how many
simulations can be completed within available time.

See the examples below and
the \ref running_dakota-usage and \ref dakota_restart pages.

Topics::
Examples::

Suppose an initial study is conducted using \c sample_type \c lhs
with \c samples = 50.  A follow-on study uses a new input file where
\c samples = 50, and \c refinement_samples = 50, resulting in 50
reused samples (from restart) and 50 new random samples.  The 50 new
samples will be combined with the 50 previous samples to generate a
combined sample of size 100 for the analysis.

One way to ensure the restart file is saved is to specify a non-default name,
via a command line option:
\verbatim
dakota -input LHS_50.in -write_restart LHS_50.rst
\endverbatim

which uses the input file:

\verbatim
# LHS_50.in

environment
  tabular_data
    tabular_data_file = 'LHS_50.dat'

method
  sampling
    seed = 1337
    sample_type lhs
    samples = 50

model
  single

variables
  uniform_uncertain = 2
    descriptors  =   'input1'     'input2'
    lower_bounds =  -2.0     -2.0
    upper_bounds =   2.0      2.0

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians
\endverbatim
and the restart file is written to \c LHS_50.rst.

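If desired, the contents of the restart file can be inspected with the
\c dakota_restart_util utility; for example (a sketch, assuming a
standard Dakota installation that provides this utility):
\verbatim
dakota_restart_util print LHS_50.rst
\endverbatim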

Then an incremental LHS study can be run with:
\verbatim
dakota -input LHS_100.in -read_restart LHS_50.rst -write_restart LHS_100.rst
\endverbatim
where \c LHS_100.in is shown below, and \c LHS_50.rst is the restart
file containing the results of the previous LHS study. In the example input
files for the initial and incremental studies, the values for \c seed match.
This ensures that the initial 50 samples generated in both runs are the same.
\verbatim
# LHS_100.in

environment
  tabular_data
    tabular_data_file = 'LHS_100.dat'

method
  sampling
    seed = 1337
    sample_type incremental_lhs
    samples = 50
      refinement_samples = 50

model
  single

variables
  uniform_uncertain = 2
    descriptors  =   'input1'     'input2'
    lower_bounds =  -2.0     -2.0
    upper_bounds =   2.0      2.0

interface
  analysis_drivers 'text_book'
    fork

responses
  response_functions = 1
  no_gradients
  no_hessians
\endverbatim

The user will get 50 new LHS samples which
maintain both the correlation and stratification of the original LHS
sample. The new samples will be combined with the original
samples to generate a combined sample of size 100.

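Following the doubling pattern described above, the study could be
extended again by reusing the combined results; a sketch, where the
input file name \c LHS_200.in is hypothetical:
\verbatim
dakota -input LHS_200.in -read_restart LHS_100.rst -write_restart LHS_200.rst
\endverbatim
Here \c LHS_200.in would repeat the method specification with
\c samples = 50 and \c refinement_samples = 50 100 (again with
\c seed = 1337), producing a combined sample of size 200 while reusing
the 100 previous evaluations from the restart file.
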
Theory::
Faq::
See_Also::
