
TSpace at The University of Toronto Libraries
Please use this identifier to cite or link to this item:
http://hdl.handle.net/1807/24771

Title:  Invariant Procedures for Model Checking, Checking for Prior-Data Conflict and Bayesian Inference 
Authors:  Jang, Gun Ho 
Advisor:  Evans, Michael 
Department:  Statistics 
Keywords:  invariance; P-value; model checking; checking for prior-data conflict; Bayesian inference; relative surprise 
Issue Date:  13-Aug-2010 
Abstract:  We consider a statistical theory as being invariant when the results of two statisticians' independent data analyses, based upon the same statistical theory and using effectively the same statistical ingredients, are the same.
We discuss three aspects of invariant statistical theories.
Both model checking and checking for prior-data conflict are assessments of a single null hypothesis without any specific alternative hypothesis.
Hence, we conduct these assessments using a measure of surprise based on a discrepancy statistic.
For the discrete case, it is natural to use the probability of obtaining a data point that is less probable than the observed data.
For the continuous case, the natural analog of this is not invariant under equivalent choices of discrepancies.
A new method is developed to obtain an invariant assessment. This approach also allows several discrepancies to be combined into one discrepancy via a single P-value.
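The discrete measure of surprise described above can be sketched directly: the P-value is the total probability of all outcomes no more probable than the one observed. The binomial model and the observed value below are hypothetical illustrations, not examples from the thesis.

```python
from math import comb

def surprise_pvalue(pmf, support, x_obs):
    """Discrete-case P-value: the probability of obtaining a data point
    that is no more probable than the observed data."""
    p_obs = pmf(x_obs)
    return sum(pmf(y) for y in support if pmf(y) <= p_obs)

# Hypothetical check of x_obs = 9 against a Binomial(10, 0.5) model.
def binom_pmf(k, n=10, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

pval = surprise_pvalue(binom_pmf, range(11), 9)  # 22/1024, about 0.0215
```

As the abstract notes, it is this construction whose continuous-case analog fails to be invariant under equivalent choices of discrepancy, motivating the new method.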
Second, Bayesians have developed many noninformative priors that are supposed to contain no information concerning the true parameter value.
Many of these are data dependent or improper, which can lead to a variety of difficulties.
Gelman (2006) introduced the notion of weak informativity as a compromise between informative and noninformative priors, but without a precise definition.
We give a precise definition of weak informativity using a measure of prior-data conflict that assesses whether or not a prior places its mass around the parameter values having relatively high likelihood.
In particular, we say a prior Pi_2 is weakly informative relative to another prior Pi_1 whenever Pi_2 leads to fewer prior-data conflicts a priori than Pi_1.
This leads to a precise quantitative measure of how much less informative a weakly informative prior is.
In Bayesian data analysis, highest posterior density inference is a commonly used method.
This approach is not invariant to the choice of dominating measure or reparametrizations.
We explore properties of relative surprise inferences suggested by Evans (1997).
Relative surprise inferences, which compare belief changes from a priori to a posteriori, are invariant under reparametrizations.
We mainly focus on the connection of relative surprise inferences to classical Bayesian decision theory, as well as to important optimality properties. 
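The invariant comparison of prior and posterior beliefs can be sketched as a relative belief ratio: the posterior density divided by the prior density, whose maximizer is unaffected by reparametrization. The Beta-Binomial setup and numbers below are assumed for illustration only and are not taken from the thesis.

```python
from math import gamma

def beta_pdf(x, a, b):
    """Density of a Beta(a, b) distribution at x in (0, 1)."""
    B = gamma(a) * gamma(b) / gamma(a + b)
    return x**(a - 1) * (1 - x)**(b - 1) / B

# Hypothetical setup: Beta(2, 2) prior, 7 successes and 3 failures observed,
# so the posterior is Beta(2 + 7, 2 + 3).
a, b, s, f = 2, 2, 7, 3
grid = [i / 1000 for i in range(1, 1000)]

# Relative belief ratio: change in belief from a priori to a posteriori.
rb = [beta_pdf(t, a + s, b + f) / beta_pdf(t, a, b) for t in grid]

# The estimate maximizing the ratio; here the ratio is proportional to
# theta^7 * (1 - theta)^3, which peaks at theta = 0.7.
estimate = grid[max(range(len(grid)), key=lambda i: rb[i])]
```

Because the ratio of densities transforms with the same Jacobian in numerator and denominator, the resulting inferences do not depend on the parametrization, unlike highest posterior density regions.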
URI:  http://hdl.handle.net/1807/24771 
Appears in Collections:  Doctoral; Department of Statistics - Doctoral theses

Items in TSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
