Blurb:: Global surrogate model based on functional tensor train decomposition

Description::
Tensor train decompositions are approximations that exploit low-rank structure
in an input-output mapping. The form of the approximation can be written as a
set of matrix-valued products:
\f[ f_r(x) = F_1(x_1) F_2(x_2) \dots F_d(x_d) \f]
where the "cores" expand to
\f[ F_k(x_k) =
\begin{bmatrix}
f_k^{11}(x_k) & \cdots & f_k^{1r_k}(x_k)\\
\vdots & \ddots & \vdots\\
f_k^{r_{k-1}1}(x_k) & \cdots & f_k^{r_{k-1}r_k}(x_k)
\end{bmatrix}
\f]

An example expansion over four random variables with rank vector (1,7,5,3,1) is
\f[ f_r(x) =
\begin{bmatrix}
f_1^{11}(x_1) & \cdots & f_1^{17}(x_1)
\end{bmatrix}
\begin{bmatrix}
f_2^{11}(x_2) & \cdots & f_2^{15}(x_2)\\
\vdots & \ddots & \vdots\\
f_2^{71}(x_2) & \cdots & f_2^{75}(x_2)
\end{bmatrix}
\begin{bmatrix}
f_3^{11}(x_3) & \cdots & f_3^{13}(x_3)\\
\vdots & \ddots & \vdots\\
f_3^{51}(x_3) & \cdots & f_3^{53}(x_3)
\end{bmatrix}
\begin{bmatrix}
f_4^{11}(x_4) \\
\vdots \\
f_4^{31}(x_4)
\end{bmatrix}
\f]

In the current implementation, orthogonal polynomials (Hermite and Legendre)
are employed as the basis functions \f$f_i^{jk}(x_i)\f$, although the C3
library will enable additional options in the future.

The number of coefficients that must be computed by the regression solver
can be inferred from the construction above.
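Dakota evaluates this expansion through the C3 library; purely for illustration, the following is a minimal NumPy sketch (not Dakota or C3 code) of how a functional tensor train with the rank vector (1,7,5,3,1) from the example above is evaluated, using Legendre bases with arbitrary random coefficients:

```python
import numpy as np
from numpy.polynomial import legendre

# Rank vector (1, 7, 5, 3, 1) and a uniform basis order of 2 (illustrative choices)
ranks = [1, 7, 5, 3, 1]
order = 2
rng = np.random.default_rng(0)

# One coefficient tensor per core: shape (r_{k-1}, r_k, order+1),
# i.e. each matrix entry f_k^{jl}(x_k) is a degree-`order` Legendre expansion
cores = [rng.standard_normal((ranks[k], ranks[k + 1], order + 1))
         for k in range(len(ranks) - 1)]

def ft_eval(x, cores):
    """Evaluate f_r(x) = F_1(x_1) F_2(x_2) ... F_d(x_d)."""
    result = np.eye(1)
    for xk, core in zip(x, cores):
        # F_k(x_k): legval expects the coefficient axis first
        Fk = legendre.legval(xk, core.transpose(2, 0, 1))
        result = result @ Fk
    # The chain of products collapses to a 1x1 matrix, i.e. a scalar
    return result.item()

value = ft_eval([0.1, -0.4, 0.7, 0.2], cores)
```

The running matrix product starts as the 1x1 identity, so the shapes contract as (1,7), (7,5), (5,3), (3,1) exactly as in the displayed expansion.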
For each QoI, the regression
size can be determined as follows:

- For \a v variables, orders \a o is a v-vector and ranks \a r is a (v+1)-vector
- The first core is a \f$ 1 \times r_1 \f$ row vector and contributes \f$ (o_0 + 1) r_1 \f$ terms
- The last core is a \f$ r_{v-1} \times 1 \f$ column vector and contributes \f$ (o_{v-1}+1) r_{v-1} \f$ terms
- The middle v-2 cores are \f$ r_i \times r_{i+1} \f$ matrices that contribute \f$ r_i r_{i+1} (o_i + 1) \f$ terms, \f$ i = 1, \dots, v-2 \f$
- Neighboring vector/matrix dimensions must match, so there are v-1 unique ranks


<b> Usage Tips </b>

This new capability is stabilizing and beginning to be embedded in
higher-level strategies such as multilevel-multifidelity algorithms.
It is not included in the Dakota build by default, since some C3 library
dependencies (CBLAS) can induce small differences in our regression test suite.

This capability is also being used as a prototype to explore
model-based versus method-based specification of stochastic
expansions. While the model specification is stand-alone, it
currently requires a corresponding method specification to exercise
the model; this can be a generic UQ strategy such as the \c
surrogate_based_uq method or a \c sampling method. The intent is to
migrate function train, polynomial chaos, and stochastic collocation
toward model-only specifications that can then be employed in any
surrogate/emulator context.

Topics::

Examples::
\verbatim
model,
  id_model = 'FT'
  surrogate global function_train
    start_order = 2
    start_rank = 2  kick_rank = 2  max_rank = 10
    adapt_rank
    dace_method_pointer = 'SAMPLING'
\endverbatim

See_Also:: method-function_train, method-multilevel_function_train, method-multifidelity_function_train
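The per-QoI regression size described in the Description can be collapsed into a single sum, \f$ \sum_k r_{k-1} r_k (o_k + 1) \f$ with \f$ r_0 = r_v = 1 \f$, since the first- and last-core terms are the boundary cases of the middle-core formula. A short sketch (a hypothetical helper, not part of Dakota or C3) checks the count for the example rank vector:

```python
def ft_num_coeffs(orders, ranks):
    """Regression size for one QoI: sum_k r_{k-1} * r_k * (o_k + 1),
    where ranks = (r_0, ..., r_v) with r_0 = r_v = 1."""
    assert len(ranks) == len(orders) + 1
    return sum(ranks[k] * ranks[k + 1] * (o + 1)
               for k, o in enumerate(orders))

# Example from the Description: rank vector (1,7,5,3,1), all orders 2
# contributes 21 + 105 + 45 + 9 = 180 coefficients
n = ft_num_coeffs([2, 2, 2, 2], [1, 7, 5, 3, 1])
```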