\documentstyle[12pt,art12cox,epsf]{article}
\newcommand{\afni}{{\em AFNI\,}}
\newcommand{\afnit}{{\em AFNI\/}\ }

\newcommand{\MCW}{{\sf MCW}}

\newcommand{\mcwafni}{\MCW$\!$ \afnit}

\setlength{\topmargin}{0.0in}
\setlength{\textheight}{8.7in}
\setlength{\oddsidemargin}{0.25in}
\setlength{\evensidemargin}{0.25in}
\setlength{\textwidth}{6.5in}
\setlength{\footskip}{.7in}

\hyphenpenalty=200

\def\mypleft{\footnotesize \MCW$\!$ \afnit Buckets}
\def\mypright{\scriptsize\today}
\dashpage

\raggedbottom

\newcommand{\seeme}[1]%
{\marginpar{\raggedright%
$\star\star\star$\hspace*{0pt plus 1fill}$\longrightarrow$\\{}%
\scriptsize\bf#1}}

\newcommand{\blob}{\hspace*{1em}}
\newcommand{\vset}{\vspace{0.5in}\goodbreak\vspace{-0.5in}}

\newcommand{\mysec}[1]{%
\vspace{2in}\goodbreak\vspace{-1.99in}\section{#1}}

\newcommand{\mysubsec}[1]{%
\vspace{1.1in}\goodbreak\vspace{-1.09in}\subsection{#1}}

\newcommand{\subbreak}[1]{\vspace{#1}\penalty-5000\vspace{-#1}}

\newcommand{\dline}[1]%
{\subbreak{0.6in}\noindent%
\underline{\bf$\vphantom{y}$#1}\\*[.1ex]\nopagebreak}

%\setcounter{tocdepth}{2}

%---------------------------------------------------------------------
\begin{document}
%%%\thispagestyle{empty}

\vspace*{0.4in}
\centerline{\Large\bf\boldmath \MCW$\!$ \afnit --- Buckets}\vspace{1ex}
\centerline{\large\bf Robert W. Cox, Ph.D.}\vspace{0.4ex}
\centerline{\tt rwcox@mcw.edu}\vspace{0.2ex}
\centerline{\copyright\ 1997 Medical College of Wisconsin}

\vspace{5ex}
\centerline{\fbox{\fbox{\LARGE\bf The New `Bucket' Dataset Type}}}
\vspace{6ex}

\noindent
A {\bf bucket} dataset is a 3D dataset that can contain an
arbitrary number of sub-bricks.  These sub-bricks are not
considered to be time-ordered; rather, the bucket dataset type
is a place where the user can toss 3D bricks of data.

This documentation is provided to update the \afnit plugins
manual and to explain the programmatic interface for creating
bucket datasets.  It is current as of \afnit 2.20, and
is at present a work-in-progress ({\it i.e.},~subject to change
at a moment's whim).

Note that the entire \mcwafni package---including plugins---must be recompiled
to use these features.  This is because the internal
storage scheme used for datasets has been modified slightly.

\tableofcontents
\newpage
%---------------------------------------------------------------------
It has always been possible to create a 3D (no time) dataset with
multiple sub-bricks.  Until now, there has not been any program
that would do this, nor would any but the first sub-brick be
visible from within \afni.
The new program {\tt 3dbucket} allows the creation of
datasets with an arbitrary number of sub-bricks drawn from
the bricks of existing datasets.  \afnit has been modified
to allow the user to switch viewing between sub-bricks.
The dataset structure has been extended to allow extra
information to be attached to each sub-brick to make
the sub-bricks easy to distinguish.  A~programming interface has
been implemented that allows external programs (and plugins)
to create 3D bucket datasets.

%---------------------------------------------------------------------
\mysec{Sub-brick Auxiliary Data}
Three new types of data can be associated with each sub-brick in any
\afnit dataset (bucket, 3D+time,~\ldots). They are
\begin{description}

  \item[Label] This is a character string that is displayed on
               the \afnit bucket chooser menu that lets the user
               decide which sub-brick should be displayed (see~\S5).

  \item[Keywords] This is a character string that contains a
                  list of keywords that are to be associated
                  with a given sub-brick.  Each keyword is
                  separated from the next by the substring \hbox{\tt " ; "}.
                  At present, the keywords have no function
                  within any \mcwafni program, but that
                  is likely to change shortly.

  \item[Statistical Parameters]
         Each sub-brick can have a statistical type attached,
         exactly as some of the earlier functional dataset types do.
         If a sub-brick with a valid statistical type is chosen to be the threshold
         sub-brick from within \afni, then the nominal $p$-value per voxel will be
         displayed beneath the threshold slider.

         Most of these sub-brick statistical types require
         auxiliary parameters.  The list of statistical types is:\vspace{1ex}

   \centerline{\begin{tabular}{|l|l|l|l|}\hline
    Name & Type Code     & Distribution       & Auxiliary parameters \\\hline\hline
   %
    {\tt fico} & {\tt FUNC\_COR\_TYPE} & Correlation Coeff & \# Samples, \# Fit Param, \# Ort Param \\\hline
    {\tt fitt} & {\tt FUNC\_TT\_TYPE}  & Student t         &  Degrees-of-Freedom (DOF) \\\hline
    {\tt fift} & {\tt FUNC\_FT\_TYPE}  & F ratio           &  Numerator DOF, Denominator DOF \\\hline
    {\tt fizt} & {\tt FUNC\_ZT\_TYPE}  & Standard Normal   &  --- none --- \\\hline
    {\tt fict} & {\tt FUNC\_CT\_TYPE}  & Chi-Squared       &  DOF \\\hline
    {\tt fibt} & {\tt FUNC\_BT\_TYPE}  & Incomplete Beta   &  Parameters $a$ and $b$  \\\hline
    {\tt fibn} & {\tt FUNC\_BN\_TYPE}  & Binomial          &  \# Trials, Probability per trial \\\hline
    {\tt figt} & {\tt FUNC\_GT\_TYPE}  & Gamma             &  Shape, Scale \\\hline
    {\tt fipt} & {\tt FUNC\_PT\_TYPE}  & Poisson           &  Mean \\\hline
   \end{tabular}}\vspace{1ex}

         The `Name' is used on the command line when modifying the auxiliary data
         inside a dataset using the program {\tt 3drefit} (a brief example
         follows this list).  The `Type Code' is a C macro
         for a constant that is used from within a program when modifying the
         auxiliary data.

\end{description}

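\noindent
For example, to declare (hypothetically) that sub-brick \#1 of a dataset
named {\tt elvis+orig} contains a Student t statistic with 98 degrees of
freedom, the `Name' would be used on the {\tt 3drefit} command line
(see~\S3 for the full set of options):
\begin{verbatim}
   3drefit -substatpar 1 fitt 98 elvis+orig
\end{verbatim}
The corresponding `Type Code' ({\tt FUNC\_TT\_TYPE}) is used when making the
same change from inside a C program or plugin, as described in~\S4.
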
In addition to the sub-brick-specific keywords list, I have also added a global
keywords list that pertains to the entire dataset.  One ultimate purpose of
the keywords lists is to allow the selection of datasets and sub-bricks based
on keywords.

It is possible to attach a label and a statistical type to the sub-bricks
of non-bucket datasets, but they will have no effect.

%---------------------------------------------------------------------
\mysec{Program {\tt 3dbucket}}
At this moment, the only program that can create a bucket dataset is
{\tt 3dbucket}.  (In particular, {\tt to3d} {\it cannot\/} create
a bucket dataset!)  {\tt 3dbucket} concatenates 3D sub-bricks from
multiple input datasets and produces one output bucket dataset.
The main purpose of this program is to experiment with bucket datasets.
Its help file follows:
\begin{verbatim}
Usage: 3dbucket options
where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='buck']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -dry          = Execute a 'dry run'; that is, only print out
                       what would be done.  This is useful when
                       combining sub-bricks from multiple inputs.
     -verb         = Print out some verbose output as the program
                       proceeds (-dry implies -verb).
     -fbuc         = Create a functional bucket.
     -abuc         = Create an anatomical bucket.  If neither of
                       these options is given, the output type is
                       determined from the first input type.

Other arguments are taken as input datasets.  A dataset is specified
using one of the forms
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
You can also add a sub-brick selection list after the end of the
dataset name.  This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output).  A sub-brick selection list looks like
one of the following forms:
  fred+orig[5]                     ==> use only sub-brick #5
  fred+orig[5,9,17]                ==> use #5, #9, and #17
  fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
  fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  fred+orig[0..$(3)]

N.B.: The sub-bricks are output in the order specified, which may
 not be the order in the original datasets.  For example, using
  fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be output into the
 new dataset in an interleaved fashion.  Using
  fred+orig[$..0]
 will reverse the order of the sub-bricks in the output.

N.B.: Bucket datasets have multiple sub-bricks, but do NOT have
 a time dimension.  You can input sub-bricks from a 3D+time dataset
 into a bucket dataset.  You can use the '3dinfo' program to see
 how many sub-bricks a 3D+time or a bucket dataset contains.

N.B.: The '$', '(', ')', '[', and ']' characters are special to
 the shell, so you will have to escape them.  This is most easily
 done by putting the entire dataset plus selection list inside
 single quotes, as in 'fred+orig[5..7,9]'.
\end{verbatim}

\noindent
Some additional points:
\begin{itemize}
  \item If an input sub-brick has a statistical type, then its type
        and auxiliary parameters are copied to the output sub-brick.
        This happens if the input dataset is one of the functional
        types with a statistics threshold attached ({\it e.g.},~the
        second sub-brick from a {\tt fico} dataset).  It can also
        happen if the input dataset is itself a bucket dataset.

  \item The sub-brick labels for the output dataset are of the form
        {\tt prefix[index]}, where `{\tt prefix}' is the prefix of the
        input dataset and `{\tt index}' is the integer index of the
        sub-brick in the input dataset.

  \item The output sub-brick keywords are copied from the input sub-bricks,
        if any.  The additional keyword {\tt prefix+view[index]} is also
        attached to the sub-brick keyword list.

  \item I intend to extend the input sub-brick selection scheme
        for {\tt 3dbucket} to allow selection from keyword lists.
        Eventually, it will also be possible to construct datasets
        `on-the-fly' on the command line for any program.

  \item Anatomical bucket datasets are not particularly useful,
        at least at present. (Got any ideas for applications?)
\end{itemize}

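\noindent
As an illustration of the selection syntax, a (hypothetical) command such as
\begin{verbatim}
   3dbucket -fbuc -prefix both_runs      \
            'run1+orig[1,3..$(2)]'       \
            'run2+orig[1,3..$(2)]'
\end{verbatim}
would gather sub-brick \#1 and every second sub-brick from \#3 onwards out of
the datasets {\tt run1+orig} and {\tt run2+orig}, and store them all in a new
functional bucket dataset with prefix {\tt both\_runs}.  (The dataset names
here are made up; note the single quotes protecting the selection lists from
the shell.)
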
%---------------------------------------------------------------------
\mysec{Program {\tt 3drefit}}
This program lets the user change the contents of a dataset header.
It has been extended to let the sub-brick auxiliary data be modified.
Its help file follows:
\begin{verbatim}
Usage: 3drefit [options] dataset ...
where the options are
  -orient code    Sets the orientation of the 3D volume(s) in the .BRIK.
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                     R = right-to-left         L = left-to-right
                     A = anterior-to-posterior P = posterior-to-anterior
                     I = inferior-to-superior  S = superior-to-inferior
               ** WARNING: when changing the orientation, you must be sure
                  to check the origins as well, to make sure that the volume
                  is positioned correctly in space.

  -xorigin distx  Puts the center of the edge voxel off at the given
  -yorigin disty  distance, for the given axis (x,y,z); distances in mm.
  -zorigin distz  (x=first axis, y=second axis, z=third axis).
                  Usually, only -zorigin makes sense.  Note that this
                  distance is in the direction given by the corresponding
                  letter in the -orient code.  For example, '-orient RAI'
                  would mean that '-zorigin 30' sets the center of the
                  first slice at 30 mm Inferior.  See the to3d manual
                  for more explanations of axes origins.
               ** SPECIAL CASE: you can use the string 'cen' in place of
                  a distance to force that axis to be re-centered.

  -xdel dimx      Makes the size of the voxel the given dimension,
  -ydel dimy      for the given axis (x,y,z); dimensions in mm.
  -zdel dimz   ** WARNING: if you change a voxel dimension, you will
                  probably have to change the origin as well.

  -TR time        Changes the TR time to a new value (see 'to3d -help').
               ** WARNING: this only applies to 3D+time datasets.

  -newid          Changes the ID code of this dataset as well.

  -statpar v ...  Changes the statistical parameters stored in this
                  dataset.  See 'to3d -help' for more details.

  -markers        Adds an empty set of AC-PC markers to the dataset,
                  if it can handle them (is anatomical, doesn't already
                  have markers, is in the +orig view, and isn't 3D+time).

  -appkey ll      Appends the string 'll' to the keyword list for the
                  whole dataset.
  -repkey ll      Replaces the keyword list for the dataset with the
                  string 'll'.
  -empkey         Destroys the keyword list for the dataset.

  -type           Changes the type of data that is declared for this
                  dataset, where 'type' is chosen from the following:
       ANATOMICAL TYPES
         spgr == Spoiled GRASS             fse == Fast Spin Echo
         epan == Echo Planar              anat == MRI Anatomy
           ct == CT Scan                  spct == SPECT Anatomy
          pet == PET Anatomy               mra == MR Angiography
         bmap == B-field Map              diff == Diffusion Map
         omri == Other MRI                abuc == Anat Bucket
       FUNCTIONAL TYPES
          fim == Intensity                fith == Inten+Thr
         fico == Inten+Cor                fitt == Inten+Ttest
         fift == Inten+Ftest              fizt == Inten+Ztest
         fict == Inten+ChiSq              fibt == Inten+Beta
         fibn == Inten+Binom              figt == Inten+Gamma
         fipt == Inten+Poisson            fbuc == Func-Bucket

The options below allow you to attach auxiliary data to sub-bricks
in the dataset.  Each option may be used more than once so that
multiple sub-bricks can be modified in a single run of 3drefit.

  -sublabel  n ll  Attach to sub-brick #n the label string 'll'.
  -subappkey n ll  Add to sub-brick #n the keyword string 'll'.
  -subrepkey n ll  Replace sub-brick #n's keyword string with 'll'.
  -subempkey n     Empty out sub-brick #n's keyword string.

  -substatpar n type v ...
                  Attach to sub-brick #n the statistical type and
                  the auxiliary parameters given by values 'v ...',
                  where 'type' is one of the following:

         type  Description  PARAMETERS
         ----  -----------  ----------------------------------------
         fico  Cor          SAMPLES  FIT-PARAMETERS  ORT-PARAMETERS
         fitt  Ttest        DEGREES-of-FREEDOM
         fift  Ftest        NUMERATOR and DENOMINATOR DEGREES-of-FREEDOM
         fizt  Ztest        N/A
         fict  ChiSq        DEGREES-of-FREEDOM
         fibt  Beta         A (numerator) and B (denominator)
         fibn  Binom        NUMBER-of-TRIALS and PROBABILITY-per-TRIAL
         figt  Gamma        SHAPE and SCALE
         fipt  Poisson      MEAN
\end{verbatim}

\noindent
{\tt 3drefit} is the only program that lets the user change the
sub-brick label, keywords, and statistical parameters.  In particular,
if you don't like the default labels provided by {\tt 3dbucket},
you must use {\tt 3drefit} to patch things up.
Program {\tt 3dinfo} has been modified so that it will print
out the sub-brick auxiliary data (including keywords).  This will help guide
the use of {\tt 3drefit}.

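\noindent
For example, the following (hypothetical) command relabels the first two
sub-bricks of a bucket dataset created by {\tt 3dbucket}, tags sub-brick \#1
with a keyword, and declares it to be a $t$-statistic with 98 degrees of
freedom:
\begin{verbatim}
   3drefit -sublabel 0 'Fit Coef'            \
           -sublabel 1 'T-Stat'              \
           -subappkey 1 'linear regression'  \
           -substatpar 1 fitt 98             \
           buck+orig
\end{verbatim}
(The dataset prefix {\tt buck+orig}, the labels, and the parameter values are
made up for illustration.)
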
%---------------------------------------------------------------------
\mysec{Creating Buckets in a Program}
\dline{Modifying Sub-Brick Parameters}
A number of new {\tt ADN\_} commands have been added to
{\tt EDIT\_dset\_items} in order to make creating
bucket datasets moderately painless.  In combination
with a couple of other utility routines, it is possible
to create an empty dataset with $n$ sub-bricks, attach
data arrays and auxiliary data to them, and even
later to expand the number of sub-bricks.

The new {\tt ADN\_} commands are described below.
(This section should be read in conjunction with \S2.7
of the \afnit plugins manual.)
The inputs to {\tt EDIT\_dset\_items} are copied into the
internal data structure of the dataset being modified,
and so can be freed or otherwise recycled after
this function returns.
Bucket construction examples will be given later.

\vset
\newcommand{\tb}[1]{\parbox[t]{3.6in}{\sloppy #1}}
\begin{tabbing}
  ADN CONTROL NAme \= ADN type na \= \kill
%
\underline{\bf Control Code} \> \underline{\bf Data Type} \> \underline{\bf Meaning} \\*[.5ex]
%
{\tt ADN\_brick\_label\_one}            \> {\tt char *} \>
         \tb{Unlike the earlier {\tt ADN\_} codes, this one, and the
             others below that end in `{\tt \_one}', are used to
             set auxiliary parameters for individual sub-bricks in
             a dataset.  This is done by adding the sub-brick index
             to the {\tt ADN\_} code. Note that only one version
             of any particular `{\tt \_one}' code can be used per
             call to {\tt EDIT\_dset\_items}---to set multiple sub-bricks,
             a~loop is required.  This particular code sets
             the label that is displayed in the menu used
             to select which sub-brick is displayed.}
\\[.9ex]

{\tt ADN\_brick\_fac\_one}              \> {\tt float}  \>
          \tb{This code is used to set the scale factor for an
              individual sub-brick.  The alternative code,
              {\tt ADN\_brick\_fac} (described in the plugins manual),
              is used to set {\it all\/} the sub-brick factors at once.}
\\[.9ex]

{\tt ADN\_brick\_stataux\_one}          \> \tb{\blob\\[.5ex]{\tt float *}} \>
          \tb{This code is used to set the auxiliary statistical
              parameters for an individual sub-brick.  The {\tt float}
              array that is passed as the paired argument must have
              the following contents:\\
              \blob\blob {\tt statcode npar v1 v2 ... vn}\\
              where {\tt statcode} is the type of statistic
              stored in the sub-brick, {\tt npar} is the number
              of parameters that follow in the array,
              and {\tt v1}~\ldots~{\tt vn} are the parameters
              for that type of statistic.  (Note that {\tt npar} may
              be~0.)  See \S1 for details on the
              different statistical types supported by \afni.}
\\[.9ex]

{\tt ADN\_brick\_keywords\_replace\_one} \> \tb{\blob\\[.5ex]{\tt char *}} \>
          \tb{This code is used to delete the keywords associated
              with a sub-brick and replace them with a new set.
              The list of keywords is stored as a single string,
              with distinct entries separated by the substring
              \hbox{\tt " ; "}.  If you want to enter multiple distinct
              keywords with this operation, you must supply the
              \hbox{\tt " ; "} yourself within the paired argument.}
\\[.9ex]

{\tt ADN\_brick\_keywords\_append\_one} \> \tb{\blob\\[.5ex]{\tt char *}} \>
          \tb{This code is used to add keywords to the list
              associated with a sub-brick.  The input character string
              will be appended to the existing keyword string, with
              \hbox{\tt " ; "} separating them.  (This function will
              supply the \hbox{\tt " ; "} separator.)  If there are
              no keywords, this operation is equivalent to the {\tt replace} function
              above.}
\\[.9ex]

{\tt ADN\_keywords\_replace}            \> {\tt char *} \>
          \tb{This is used to replace the keywords list associated
              with the entire dataset.}
\\*[.9ex]

{\tt ADN\_keywords\_append}             \> {\tt char *} \>
          \tb{This is used to append to the keyword list for the
              entire dataset.}
\end{tabbing}

\noindent
To make some of these steps easier, the following {\tt C} macros
have been defined:
\begin{itemize}
  \item {\tt EDIT\_BRICK\_LABEL(ds,iv,str)} \\*
  will change the label in the
  {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds} to the string {\tt str}.

 \item {\tt EDIT\_BRICK\_FACTOR(ds,iv,fac)} \\*
  will change the scaling factor in the
  {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds} to the {\tt float} value {\tt fac}.

 \item {\tt EDIT\_BRICK\_ADDKEY(ds,iv,str)} \\*
  will add the keyword string {\tt str} to the
  {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}.

 \item {\tt EDIT\_BRICK\_TO\_FICO(ds,iv,nsam,nfit,nort)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fico} type (correlation coefficient) with statistical parameters {\tt nsam}, {\tt nfit}, and {\tt nort}.

 \item {\tt EDIT\_BRICK\_TO\_FITT(ds,iv,ndof)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fitt} ($t$-test) type with statistical parameter {\tt ndof}.

 \item {\tt EDIT\_BRICK\_TO\_FIFT(ds,iv,ndof,ddof)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fift} ($F$-test) type with statistical parameters {\tt ndof} and {\tt ddof}.

 \item {\tt EDIT\_BRICK\_TO\_FIZT(ds,iv)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fizt} type ($z$-score, or normally distributed).

 \item {\tt EDIT\_BRICK\_TO\_FICT(ds,iv,ndof)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fict} type ($\chi^2$ distributed) with statistical parameter {\tt ndof}.

 \item {\tt EDIT\_BRICK\_TO\_FIBT(ds,iv,a,b)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fibt} type (beta distributed) with statistical parameters {\tt a} and~{\tt b}.

 \item {\tt EDIT\_BRICK\_TO\_FIBN(ds,iv,ntrial,prob)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fibn} type (binomial distributed) with statistical parameters {\tt ntrial} and {\tt prob}.

 \item {\tt EDIT\_BRICK\_TO\_FIGT(ds,iv,shape,scale)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt figt} type (gamma distributed) with statistical parameters {\tt shape} and {\tt scale}.

 \item {\tt EDIT\_BRICK\_TO\_FIPT(ds,iv,mean)} \\*
  changes the {\tt iv}$^{\rm th}$ sub-brick of dataset {\tt ds}
  to be {\tt fipt} type (Poisson distributed) with statistical parameter {\tt mean}.

\end{itemize}

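\noindent
As an illustration (with made-up label, keyword, and parameter values), the
following fragment marks sub-brick \#1 of a dataset pointer {\tt dset} as an
$F$-statistic with 3 numerator and 94 denominator degrees of freedom:
\begin{verbatim}
   EDIT_BRICK_LABEL  ( dset , 1 , "Full F-stat" ) ;  /* menu label      */
   EDIT_BRICK_ADDKEY ( dset , 1 , "regression"  ) ;  /* add a keyword   */
   EDIT_BRICK_TO_FIFT( dset , 1 , 3 , 94 ) ;         /* fift statistics */
\end{verbatim}
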
\dline{Example: Creating a Bucket Dataset All at Once}
In this example, an empty copy of an input dataset is made
(to get the geometry correct), then the new dataset is turned
into a function bucket, then the sub-bricks are attached.
The following code is adapted from {\tt 3dbucket.c}.
\begin{verbatim}
THD_3dim_dataset * old_dset , * new_dset ;
char * output_prefix , * output_session ;
int new_nvals , iv ;
short ** bar ;          /* bar[iv] points to data for sub-brick #iv */
char ** new_label ;     /* new_label[iv] points to label for #iv    */
char ** new_keyw ;      /* new_keyw[iv] points to keywords for #iv  */
float * new_fac ;       /* new_fac[iv] is new scale factor for #iv  */
float   sax[32] ;       /* statistical auxiliary parameters         */

/*-- Copy the input dataset structure, but not data --*/

new_dset = EDIT_empty_copy( old_dset ) ;

/*-- Modify some structural properties.
     Note that the new_nvals just makes empty
     sub-bricks, without any data attached.   --*/

EDIT_dset_items( new_dset ,
                   ADN_prefix        , output_prefix ,
                   ADN_directory_name, output_session ,
                   ADN_type          , HEAD_FUNC_TYPE ,
                   ADN_func_type     , FUNC_BUCK_TYPE ,
                   ADN_ntt           , 0 ,              /* no time! */
                   ADN_nvals         , new_nvals ,
                   ADN_malloc_type   , DATABLOCK_MEM_MALLOC ,
                 ADN_none ) ;

if( THD_is_file(DSET_HEADNAME(new_dset)) ){
   fprintf(stderr,"*** Fatal error: file %s already exists!\n",
           DSET_HEADNAME(new_dset) ) ;
   exit(1) ;
}

/*-- Loop over new sub-brick index,
     attach data array with EDIT_substitute_brick
     (this just attaches the pointer, it DOES NOT copy the array),
     then put some strings into the labels and keywords,
     and modify the sub-brick scaling factor
     (a zero scaling factor means don't scale the data array). --*/

for( iv=0 ; iv < new_nvals ; iv++ ){
   EDIT_substitute_brick( new_dset , iv ,         /* attach bar[iv] to   */
                          MRI_short , bar[iv] ) ; /* be sub-brick #iv.   */
                                                  /* don't free bar[iv]! */

   EDIT_dset_items( new_dset ,
                       ADN_brick_label_one           +iv, new_label[iv] ,
                       ADN_brick_keywords_replace_one+iv, new_keyw[iv]  ,
                       ADN_brick_fac_one             +iv, new_fac[iv]   ,
                    ADN_none ) ;
}

/*-- Make sub-brick #2 be a t-test --*/

sax[0] = FUNC_TT_TYPE ;
sax[1] = 1.0 ;
sax[2] = degrees_of_freedom ;
EDIT_dset_items( new_dset ,
                   ADN_brick_stataux_one + 2 , sax ,
                 ADN_none ) ;

/*-- write new dataset to disk --*/

DSET_write( new_dset ) ;
\end{verbatim}

\dline{Adding Sub-Bricks to a Bucket Dataset}
In the above example, all the sub-bricks were created at once.
They were initially empty, after the first call to
{\tt EDIT\_dset\_items}, but otherwise had all the structure
needed.  After a sub-brick has an actual data array attached
to it, the {\tt ADN\_nvals} code can no longer be used to
change the number of sub-bricks in the dataset.

If a dataset already has actual data attached to any of its
sub-bricks, another method must be used to add a new sub-brick:
\begin{verbatim}
  short * qbar ;   /* pointer to the new data array (already allocated) */
  float   qfac ;   /* scale factor for the new sub-brick (0 = no scale) */
  EDIT_add_brick( new_dset , MRI_short , qfac , qbar ) ;
\end{verbatim}
will create a new sub-brick in the dataset, with data
type {\tt short}, scale factor {\tt qfac}, and data
array {\tt qbar}.  (The pointer {\tt qbar} is just
copied into the sub-brick---the data it points to
now `belongs' to the dataset and should not be freed
by you!)  If you wish to attach a label, keywords, or
statistical parameters to this new brick, you would
do this using {\tt EDIT\_dset\_items} (using the
correct index for the new sub-brick), as sketched below.

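\noindent
For instance, a minimal (hypothetical) fragment to label the just-added
sub-brick might look like this, using the {\tt DSET\_NVALS} macro
(described below) to find its index:
\begin{verbatim}
  int newiv = DSET_NVALS(new_dset) - 1 ;   /* index of the new sub-brick */

  EDIT_dset_items( new_dset ,
                     ADN_brick_label_one + newiv , "extra label" ,
                   ADN_none ) ;
\end{verbatim}
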
Note that if you are doing this to a 3D+time dataset,
as opposed to a bucket dataset, then a little more
needs to be done.  See {\tt plug\_realtime.c} for
an example of how the \afnit real-time system uses
{\tt EDIT\_add\_brick} to grow a 3D+time dataset
during image acquisition.

\dline{Accessing Sub-Brick Data}
The following {\tt C} macros can be used to access the contents
of sub-bricks and their associated data.
The argument {\tt ds} is a pointer to a dataset {\tt struct},
and the argument {\tt iv} is a sub-brick index.
\vset
\renewcommand{\tb}[1]{\parbox[t]{3.99in}{\sloppy #1}}
\begin{tabbing}
  XXXXXXXXXXXXXXXXXXXx \= \kill
%
\underline{\bf Macro} \> \underline{\bf Meaning} \\*[.5ex]

{\tt ISVALID\_DSET(ds)} \> \tb{Returns 1 if {\tt ds} is a valid pointer
                              to a dataset, or 0 if it is not.}
\\[.9ex]
{\tt ISANATBUCKET(ds)} \> \tb{Returns 1 if {\tt ds} is an anatomy
                              bucket dataset, 0~if it is not.}
\\[.9ex]
{\tt ISFUNCBUCKET(ds)} \> \tb{Returns 1 if {\tt ds} is a function
                              bucket dataset, 0~if it is not.}
\\[.9ex]
{\tt ISBUCKET(ds)}     \> \tb{Returns 1 if {\tt ds} is a
                              bucket dataset (function or anatomy), 0~if it is not.}
\\[.9ex]
{\tt DSET\_BRICK\_TYPE(ds,iv)} \> \tb{Returns an integer describing what type
                                    of data is stored in the sub-brick array.}
\\[.9ex]
{\tt DSET\_BRICK\_ARRAY(ds,iv)} \> \tb{Returns a pointer to the sub-brick array.}
\\[.9ex]
{\tt DSET\_BRICK\_FACTOR(ds,iv)} \> \tb{Returns the sub-brick floating point
                                      scale factor.}
\\[.9ex]
{\tt DSET\_NVALS(ds)} \> \tb{Returns the number of sub-bricks in a dataset.}
\\[.9ex]
{\tt DSET\_BRICK\_LABEL(ds,iv)} \> \tb{Returns a pointer to the sub-brick label.
                                     This pointer will not be {\tt NULL}.  Do {\it not\/}
                                     write into this string!}
\\[.9ex]
{\tt DSET\_BRICK\_STATCODE(ds,iv)} \> \tb{Returns an integer with the statistical
                                        type of a sub-brick.  A~positive value
                                        means that this sub-brick can be interpreted
                                        as a statistic.  Note that if {\tt ds} is
                                        one of the older 2-brick datasets such
                                        as {\tt fico}, then calling this with
                                        {\tt iv=1} will return the correct code,
                                        even though that code is actually associated
                                        with the dataset as a whole, not the sub-brick.}
\\[.9ex]
{\tt DSET\_BRICK\_STATAUX(ds,iv)} \> \tb{Returns a pointer to a {\tt float} array
                                       with the statistical parameters for this
                                       sub-brick.  This may be {\tt NULL}, which means
                                       that you did something wrong.  Do {\it not\/}
                                     write into this array!  The number of parameters
                                     in this array can be determined from the table
                                     in~\S1, or from
                                     {\tt FUNC\_need\_stat\_aux[kv]} where
                                     {\tt kv = DSET\_BRICK\_STATCODE(ds,iv)}.}
\\[.9ex]
{\tt DSET\_BRICK\_STATPAR(ds,iv,jj)} \> \tb{Returns the {\tt jj}$^{\rm th}$ statistical
                                          parameter for this sub-brick.  This will
                                          be a float.}
\\[.9ex]
{\tt DSET\_BRICK\_KEYWORDS(ds,iv)} \> \tb{Returns a pointer to the keywords string for
                                        this sub-brick (\hbox{\tt char *}).  Do {\it not\/}
                                     write into this string!  This pointer may be {\tt NULL}.}
\\[.9ex]
{\tt DSET\_KEYWORDS(ds)} \> \tb{Returns a pointer to the keywords string for the
                              entire dataset.  Do {\it not\/}
                                     write into this string!  This pointer may be {\tt NULL}.}
\end{tabbing}

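\noindent
As a small sketch of how these macros fit together (assuming a {\tt short}
sub-brick, and ignoring the other possible datum types), the scaled mean of
sub-brick {\tt iv} could be computed like this:
\begin{verbatim}
   if( ISVALID_DSET(ds) && DSET_BRICK_TYPE(ds,iv) == MRI_short ){
      short * br   = (short *) DSET_BRICK_ARRAY(ds,iv) ;
      float   fac  = DSET_BRICK_FACTOR(ds,iv) ;
      int     nvox = DSET_NVOX(ds) , ii ;
      double  sum  = 0.0 ;

      if( fac == 0.0 ) fac = 1.0 ;       /* zero factor means unscaled data */
      for( ii=0 ; ii < nvox ; ii++ ) sum += fac * br[ii] ;
      printf("mean of sub-brick %d = %g\n" , iv , sum/nvox ) ;
   }
\end{verbatim}
({\tt DSET\_NVOX} is not in the table above; it is assumed here to return the
number of voxels in one sub-brick.)
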
\dline{Creating a Bucket from a 3D+time Dataset}
I have written a utility routine to create a function bucket dataset from an input
3D+time dataset.  This routine takes as input a user-supplied function
that returns the bucket values at each voxel.  The routine
resides in {\tt 3dmaker.c} (a~new file), and can be called from a
plugin or from a command-line program.
Its calling sequence is:
\begin{verbatim}
  new_dset = MAKER_4D_to_typed_fbuc( THD_3dim_dataset * old_dset ,
                                     char * new_prefix , int new_datum ,
                                     int ignore , int detrend ,
                                     int nbrik , generic_func * user_func ,
                                     void * user_data ) ;
\end{verbatim}
The inputs to this function are:
\begin{tabbing}
 XXX \= XXXXXXXX \= \kill
    \> {\tt old\_dset}   \> Pointer to old dataset; \\*
    \>                  \> \blob note that this dataset must not be warp-on-demand. \\[.5ex]
    \> {\tt new\_prefix} \> String to use as filename prefix. \\[.5ex]
    \> {\tt new\_datum}  \> Type of data to store in output brick; \\*
    \>                  \> \blob if negative, will use datum from {\tt old\_dset}. \\[.5ex]
    \> {\tt ignore}     \> Number of data points to ignore at the start. \\[.5ex]
    \> {\tt detrend}    \> If nonzero, this routine will detrend ({\tt a+b*t}) \\*
    \>                  \> \blob each time series before passing it to {\tt user\_func}. \\[.5ex]
    \> {\tt nbrik}      \> Number of values (and sub-bricks) to create at each voxel location. \\[.5ex]
    \> {\tt user\_func}  \> Function to compute outputs; discussed below. \\[.5ex]
    \> {\tt user\_data}  \> Discussed below.
\end{tabbing}
The output is a pointer to a new dataset.  If {\tt NULL} is returned,
some error occurred.

The {\tt user\_func} function should be declared like so:
\begin{verbatim}
   void user_func( double tzero , double tdelta ,
                   int npts , float ts[] , double ts_mean , double ts_slope ,
                   void * ud , int nbrik , float * val ) ;
\end{verbatim}
The arguments to {\tt user\_func} are:
\begin{tabbing}
 XXX \= XXXXXXXX \= \kill
   \> {\tt tzero}  \>  Time at {\tt ts[0]}. \\*[.5ex]
   \> {\tt tdelta} \>  Time step between successive points ({\it i.e.}, {\tt ts[k]} is at {\tt tzero+k*tdelta}); \\*
   \>              \>  \blob {\tt tzero} and {\tt tdelta} will be in sec if this is truly `time'. \\[.5ex]
   \> {\tt npts}   \>  Number of points in the {\tt ts} array.\\[.5ex]
   \> {\tt ts}     \>  One voxel time series array, {\tt ts[0]} \ldots {\tt ts[npts-1]}; \\*
   \>              \>  \blob note that this will always be a float array, and that {\tt ts} will \\*
   \>              \>  \blob start with the {\tt ignore}$^{\rm th}$ point of the actual voxel time series. \\[.5ex]
   \> {\tt ts\_mean} \>  Mean value of {\tt ts} array. \\[.5ex]
   \> {\tt ts\_slope} \>  Slope of {\tt ts} array; this will be inversely proportional to {\tt tdelta} \\*
   \>                \> \blob (units of 1/sec); if {\tt detrend} is nonzero, then the mean and slope \\*
   \>                \> \blob will have been removed from the {\tt ts} array. \\[.5ex]
   \> {\tt ud}     \>  The {\tt user\_data} pointer passed in here; this can contain whatever \\*
   \>              \>  \blob control information the user wants. \\[.5ex]
   \> {\tt nbrik}  \>  Number of output values that this function should return for \\*
   \>              \>   \blob the voxel corresponding to input data {\tt ts}. \\[.5ex]
   \> {\tt val}    \>  Pointer to return values for this voxel; \\*
   \>              \> \blob  note that this is a {\tt float} array of length {\tt nbrik}, and that values \\*
   \>              \> \blob  you don't fill in will be set to zero.
\end{tabbing}
Before the first timeseries is passed, {\tt user\_func} will be called with arguments
\begin{verbatim}
     ( 0.0 , 0.0 , nvox , NULL , 0.0 , 0.0 , user_data , nbrik , NULL )
\end{verbatim}
where {\tt nvox} = number of voxels that will be processed.
This is to allow for some setup ({\it e.g.},~{\tt malloc}).
After the last timeseries is passed, {\tt user\_func} will be called again with
arguments
\begin{verbatim}
     ( 0.0 , 0.0 , 0 , NULL , 0.0 , 0.0 , user_data , nbrik , NULL )
\end{verbatim}
This is to allow for cleanup ({\it e.g.},~{\tt free}).  Note that the
only difference between these `notification' calls is the third argument.
After the new dataset is created, you will likely want to modify
some of the auxiliary data associated with its sub-bricks ({\it e.g.},~set
statistical parameters and labels).

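\noindent
As a minimal sketch (the function name, output layout, and {\tt MRI\_float}
datum choice are merely illustrative), a {\tt user\_func} that stores the
mean and slope of each voxel time series into a 2-brick bucket might look
like this:
\begin{verbatim}
   void mean_slope_func( double tzero , double tdelta ,
                         int npts , float ts[] , double ts_mean ,
                         double ts_slope , void * ud , int nbrik ,
                         float * val )
   {
      if( val == NULL ) return ;   /* the setup/cleanup notification calls */

      val[0] = ts_mean ;           /* sub-brick #0 gets the mean  */
      val[1] = ts_slope ;          /* sub-brick #1 gets the slope */
   }

   /* ... and then, elsewhere ... */

   new_dset = MAKER_4D_to_typed_fbuc( old_dset , "meansl" , MRI_float ,
                                      0 , 1 , 2 , mean_slope_func , NULL ) ;
\end{verbatim}
The sub-brick labels and any statistical parameters would then be attached
with {\tt EDIT\_dset\_items} (or the {\tt EDIT\_BRICK} macros), as discussed
earlier.
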
%---------------------------------------------------------------------
\mysec{Changes in \afni}
If the active datasets are buckets, then the set of choosers
in the {\tt Define Function} control panel changes.  The
figure below shows the new bucket sub-brick choosers on the
left; the old-style sub-brick choosers are shown on the right
for comparison (these are used with the non-bucket dataset types).\vspace{1ex}

\centerline{\epsfxsize=2.5in\epsffile{bucket_bb.eps}
            \blob\blob\blob
            \epsfxsize=2.5in\epsffile{bucket_nb.eps}}

\centerline{\makebox[2.5in]{\sf With bucket sub-brick choosers}
            \blob\blob\blob
            \makebox[2.5in]{\sf With old-style choosers}}\vspace{1ex}

Note that any sub-brick of a bucket dataset can be used as a
threshold.  This is true even if it does not have statistical
parameters attached.  (In that case, no $p$-value can be
computed, of course.)

Two new buttons have been added to the \fbox{\tt Misc} menu under the {\tt Define Datamode} control panel:
\fbox{\tt Anat Info} and \fbox{\tt Func Info}.
These buttons will pop up message windows with the output of
program {\tt 3dinfo} about the current anatomical and functional datasets.
This is to help look up keywords and statistical types of sub-bricks.
In addition, the \fbox{FIM} button has been removed from the
{\tt Define Function} control panel.

%---------------------------------------------------------------------
\mysec{Still to Come}
Things that are needed:
\begin{itemize}
  \item Routines to deal with keyword lists, and a mechanism to
        allow the user to select sub-bricks (and datasets) based on keywords.
  \item A mechanism to allow the user to assemble datasets `on-the-fly'
        using keyword and/or sub-brick index criteria.  (A~generalization
        of the syntax of {\tt 3dbucket} is a possibility.)
  \item Applications that create buckets ({\it e.g.}, multiple regression, \ldots).
\end{itemize}

\end{document}