\name{raftery.diag}
\alias{raftery.diag}
%\alias{print.raftery.diag}
\title{Raftery and Lewis's diagnostic}

\usage{
raftery.diag(data, q=0.025, r=0.005, s=0.95, converge.eps=0.001)
}

\arguments{
   \item{data}{an \code{mcmc} object.}
   \item{q}{the quantile to be estimated.}
   \item{r}{the desired margin of error of the estimate.}
   \item{s}{the probability of obtaining an estimate in the interval \eqn{(q-r, q+r)}.}
   \item{converge.eps}{the precision required for the estimate of time to convergence.}
}

\description{
   \code{raftery.diag} is a run length control diagnostic based on a
   criterion of accuracy of estimation of the quantile \code{q}.  It is
   intended for use on a short pilot run of a Markov chain.  The number
   of iterations required to estimate the quantile \eqn{q} to within an
   accuracy of +/- \eqn{r} with probability \eqn{s} is calculated.
   Separate calculations are performed for each variable within each
   chain.

   If the number of iterations in \code{data} is too small, an error
   message is printed indicating the minimum length of the pilot run.
   The minimum length is the required sample size for a chain with no
   correlation between consecutive samples. Positive autocorrelation
   will increase the required sample size above this minimum value. An
   estimate \code{I} (the `dependence factor') of the extent to which
   autocorrelation inflates the required sample size is also provided.
   Values of \code{I} larger than 5 indicate strong autocorrelation,
   which may be due to a poor choice of starting value, high posterior
   correlations or `stickiness' of the MCMC algorithm.

   The number of `burn in' iterations to be discarded at the beginning
   of the chain is also calculated.
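
   A minimal sketch of a typical pilot-run call is shown below.  The
   chain here is an artificial sample of independent Gaussian draws,
   used purely for illustration:
   \preformatted{
library(coda)
x <- mcmc(rnorm(5000))  # artificial pilot run: 5000 independent draws
raftery.diag(x, q = 0.025, r = 0.005, s = 0.95)
   }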
}

\value{
   A list with class \code{raftery.diag}.  A print method is available
   for objects of this class.  The contents of the list are:
      \item{tspar}{The time series parameters of \code{data}.}
      \item{params}{A vector containing the parameters \code{r}, \code{s}
      and \code{q}.}
      \item{Niters}{The number of iterations in \code{data}.}
      \item{resmatrix}{A 3-dimensional array containing the results:
      \eqn{M}, the length of `burn in'; \eqn{N}, the required sample
      size; \eqn{Nmin}, the minimum sample size based on zero
      autocorrelation; and \eqn{I = (M+N)/Nmin}, the `dependence
      factor'.}
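
   The components can be inspected by name; a short sketch, assuming
   the pilot chain \code{x} from the example above:
   \preformatted{
rd <- raftery.diag(x, q = 0.025, r = 0.005, s = 0.95)
rd$params     # the r, s and q used for this run
rd$resmatrix  # M, N, Nmin and I for each variable
   }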
}

\section{Theory}{
   The estimated sample size for variable \eqn{U} is based on the
   process \eqn{Z_t = d(U_t \le u)}{Z_t = d(U_t <= u)}, where \eqn{d}
   is the indicator function and \eqn{u} is the \eqn{q}th quantile of
   \eqn{U}. The process \eqn{Z_t} is derived from the Markov chain
   \code{data} by marginalization and truncation, but is not itself
   a Markov chain.  However, \eqn{Z_t} may behave as a Markov chain if
   it is sufficiently thinned out.  \code{raftery.diag} calculates the
   smallest value of the thinning interval \eqn{k} which makes the
   thinned chain \eqn{Z^k_t} behave as a Markov chain. The required
   sample size is calculated from this thinned sequence.  Since some
   data are `thrown away', the sample size estimates are conservative.

   The criterion for the number of `burn in' iterations \eqn{m} to be
   discarded is that the conditional distribution of \eqn{Z^k_m}
   given \eqn{Z_0} should be within \code{converge.eps} of the
   equilibrium distribution of the chain \eqn{Z^k_t}.
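
   For illustration only, the marginalization and truncation step can
   be written directly in R (the thinning and burn-in calculations are
   internal to \code{raftery.diag}); this sketch assumes the pilot
   chain \code{x} from the earlier example:
   \preformatted{
u <- quantile(as.vector(x), probs = 0.025)  # u: the qth quantile of U
z <- as.numeric(as.vector(x) <= u)          # Z_t = d(U_t <= u)
mean(z)                                     # close to q by construction
   }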
}

\note{
   \code{raftery.diag} is based on the FORTRAN program `gibbsit'
   written by Steven Lewis and available from the StatLib archive.
}

\references{
   Raftery, A.E. and Lewis, S.M. (1992).  One long run with diagnostics:
   Implementation strategies for Markov chain Monte Carlo.
   \emph{Statistical Science}, \bold{7}, 493-497.

   Raftery, A.E. and Lewis, S.M. (1995).  The number of iterations,
   convergence diagnostics and generic Metropolis algorithms.  In
   \emph{Practical Markov Chain Monte Carlo} (W.R. Gilks,
   D.J. Spiegelhalter and S. Richardson, eds.). London, U.K.:
   Chapman and Hall.
}

\keyword{htest}