static char * PvalueStuff[] = {
   "\n" ,
   "---------------------\n" ,
   "A NOTE ABOUT p-VALUES\n" ,
   "---------------------\n" ,
   "The 2-sided p-value of a t-statistic value T is the likelihood (probability)\n" ,
   "that the absolute value of the t-statistic computation would be bigger than\n" ,
   "the absolute value of T, IF the null hypothesis of no difference in the means\n" ,
   "(2-sample test) were true.  For example, with 30 degrees of freedom, a T-value\n" ,
   "of 2.1 has a p-value of 0.0442 -- that is, if the null hypothesis is true\n" ,
   "and you repeated the experiment a lot of times, only 4.42% of the time would\n" ,
   "the T-value get to be 2.1 or bigger (or -2.1 or more negative).\n" ,
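   "For the curious, that number can be checked outside AFNI; here is a purely\n" ,
   "illustrative C sketch that numerically integrates the Student t density:\n" ,
   "     #include <math.h>                    /* compile with -lm */\n" ,
   "     double tpdf( double t , double nu ){ /* t density, nu = dof */\n" ,
   "       return tgamma(0.5*(nu+1.0)) / ( sqrt(nu*M_PI) * tgamma(0.5*nu) )\n" ,
   "              * pow( 1.0+t*t/nu , -0.5*(nu+1.0) ) ;\n" ,
   "     }\n" ,
   "     /* summing tpdf(t,30.0)*dt over t from 2.1 up to (say) 60 with a small\n" ,
   "        step dt, then doubling the sum, gives approximately 0.044 */\n" ,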
   "\n" ,
   "You can NOT interpret this to mean that the alternative hypothesis (that the\n" ,
   "means are different) is 95.58% likely to be true.  (After all, this T-value\n" ,
   "shows a pretty weak effect size -- difference in the means for a 2-sample\n" ,
   "t-test, magnitude of the mean for a 1-sample t-test, scaled by the standard\n" ,
   "deviation of the noise in the samples.)  A better way to think about it is\n" ,
   "to pose the following question:\n" ,
   "     Assuming that the alternative hypothesis is true, how likely\n" ,
   "     is it that you would get the p-value of 0.0442, versus how\n" ,
   "     likely is p=0.0442 when the null hypothesis is true?\n" ,
   "This is the question addressed in the paper:\n" ,
   "     Calibration of p Values for Testing Precise Null Hypotheses.\n" ,
   "     T Sellke, MJ Bayarri, and JO Berger.\n" ,
   "     The American Statistician v.55:62-71, 2001.\n" ,
   "     http://www.stat.duke.edu/courses/Spring10/sta122/Labs/Lab6.pdf\n" ,
   "The exact interpretation of what the above question means is somewhat\n" ,
   "tricky, depending on whether you are a Bayesian heretic or a Frequentist\n" ,
   "true believer.  But in either case, one reasonable answer is given by\n" ,
   "the function\n" ,
   "     alpha(p) = 1 / [ 1 - 1/( e * p * log(p) ) ]\n" ,
   "(where 'e' is 2.71828... and 'log' is to the base 'e').  Here,\n" ,
   "alpha(p) can be interpreted as the likelihood that the given p-value\n" ,
   "was generated by the null hypothesis, versus being from the alternative\n" ,
   "hypothesis.  For p=0.0442, alpha=0.2726; in non-quantitative words, this\n" ,
   "p-value is NOT very strong evidence that the alternative hypothesis is true.\n" ,
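   "Purely as an illustrative sketch (this is not an AFNI function), alpha(p)\n" ,
   "is the C one-liner\n" ,
   "     double alpha_of_p( double p ){            /* needs math.h */\n" ,
   "       return 1.0 / ( 1.0 - 1.0/( exp(1.0)*p*log(p) ) ) ;\n" ,
   "     }\n" ,
   "     /* alpha_of_p(0.0442) returns about 0.2726, as quoted above */\n" ,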
   "\n" ,
   "Why is this so -- why isn't saying 'the null hypothesis would only give\n" ,
   "a result this big 4.42% of the time' similar to saying 'the alternative\n" ,
   "hypothesis is 95.58% likely to be true'?  The answer is that it is only\n" ,
   "somewhat more likely that the t-statistic would be this big when the\n" ,
   "alternative hypothesis is true.  In this example, the difference in means\n" ,
   "cannot be very large, or the t-statistic would almost certainly be larger.\n" ,
   "But with a small difference in means (relative to the standard deviation),\n" ,
   "the alternative hypothesis (noncentral) t-value distribution isn't that\n" ,
   "different from the null hypothesis (central) t-value distribution.  It is\n" ,
   "true that the alternative hypothesis is more likely to be true than the\n" ,
   "null hypothesis (when p < 1/e = 0.36788, which is where alpha(p) = 1/2), but\n" ,
   "it isn't AS much more likely to be true than the p-value itself seems to say.\n" ,
   "\n" ,
   "In short, a small p-value says that if the null hypothesis is true, the\n" ,
   "experimental results that you have aren't very likely -- but it does NOT\n" ,
   "say that the alternative hypothesis is vastly more likely to be correct,\n" ,
   "or that the data you have are vastly more likely to have come from the\n" ,
   "alternative hypothesis case.\n" ,
   "\n" ,
   "Some values of alpha(p) for those too lazy to calculate just now:\n" ,
   "     p = 0.0005 alpha = 0.010225\n" ,
   "     p = 0.001  alpha = 0.018431\n" ,
   "     p = 0.005  alpha = 0.067174\n" ,
   "     p = 0.010  alpha = 0.111254\n" ,
   "     p = 0.015  alpha = 0.146204\n" ,
   "     p = 0.020  alpha = 0.175380\n" ,
   "     p = 0.030  alpha = 0.222367\n" ,
   "     p = 0.040  alpha = 0.259255\n" ,
   "     p = 0.050  alpha = 0.289350\n" ,
   "You can also try this fun AFNI package command to plot alpha(p) vs. p:\n" ,
   "     1deval -dx 0.001 -xzero 0.001 -num 99 -expr '1/(1-1/(exp(1)*p*log(p)))' |\n" ,
   "       1dplot -stdin -dx 0.001 -xzero 0.001 -xlabel 'p' -ylabel '\\alpha(p)'\n" ,
   "Another example: to reduce the likelihood of the null hypothesis being the\n" ,
   "source of your t-statistic to 10%, you have to have p = 0.008593 -- a value\n" ,
   "more stringent than usually seen in scientific publications.  To get the null\n" ,
   "hypothesis likelihood below 5%, you have to get p below 0.003408.\n" ,
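   "Those cutoff values can be found by inverting alpha(p) numerically -- for\n" ,
   "instance, with a simple (again purely illustrative) bisection that calls the\n" ,
   "alpha_of_p() sketch shown earlier:\n" ,
   "     double p_for_alpha( double target ){  /* target alpha in (0,0.5) */\n" ,
   "       double lo=1.e-8 , hi=1.0/exp(1.0) , mid=0.0 ;\n" ,
   "       for( int ii=0 ; ii < 99 ; ii++ ){   /* alpha(p) rises on (0,1/e) */\n" ,
   "         mid = 0.5*(lo+hi) ;\n" ,
   "         if( alpha_of_p(mid) < target ) lo = mid ; else hi = mid ;\n" ,
   "       }\n" ,
   "       return mid ;\n" ,
   "     }\n" ,
   "     /* p_for_alpha(0.10) is about 0.008593 and p_for_alpha(0.05) is\n" ,
   "        about 0.003408, matching the values just quoted */\n" ,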
   "\n" ,
   "Finally, none of the discussion above is limited to the case of p-values that\n" ,
   "come from 2-sided t-tests.  The function alpha(p) applies (approximately) to\n" ,
   "many other situations.  However, it does NOT apply to 1-sided tests (which are\n" ,
   "not testing 'Precise Null Hypotheses', such as 'effect size == 0').  See the\n" ,
   "paper by Sellke et al. for a lengthier and more precise discussion.  Another\n" ,
   "article on the same topic is:\n" ,
   "     Revised standards for statistical evidence.\n" ,
   "     VE Johnson.  PNAS v110:19313-19317, 2013.\n" ,
   "     http://www.pnas.org/content/110/48/19313.long\n" ,
   "And also see the very readable summary:\n" ,
   "     An investigation of the false discovery rate and the misinterpretation\n" ,
   "     of p-values.  D Colquhoun.  Royal Society Open Science, Nov 2014.\n" ,
   "     http://rsos.royalsocietypublishing.org/content/1/3/140216\n" ,
   "In this latter article, a threshold of p < 0.001 is recommended!\n" ,
   "\n" ,
   "For the case of 1-sided t-tests, the issue is more complex; the paper below\n" ,
   "may be of interest:\n" ,
   "     Default Bayes Factors for Nonnested Hypothesis Testing.\n" ,
   "     JO Berger and J Mortera.  J Am Stat Assoc v.94:542-554, 1999.\n" ,
   "     http://www.jstor.org/stable/2670175 [PDF]\n" ,
   "     http://ftp.isds.duke.edu/WorkingPapers/97-44.ps [PS preprint]\n" ,
   "What I have tried to do herein is outline the p-value interpretation issue\n" ,
   "using (mostly) non-technical words.\n" ,
   NULL } ;