T-Space Collection:
http://hdl.handle.net/1807/25379
Fri, 18 Apr 2014 08:13:15 GMT
Invariant Procedures for Model Checking, Checking for Prior-Data Conflict and Bayesian Inference
http://hdl.handle.net/1807/24771
Title: Invariant Procedures for Model Checking, Checking for Prior-Data Conflict and Bayesian Inference
Authors: Jang, Gun Ho
Abstract: We consider a statistical theory as being invariant when the results of two statisticians' independent data analyses, based upon the same statistical theory and using effectively the same statistical ingredients, are the same.
We discuss three aspects of invariant statistical theories.
Both model checking and checking for prior-data conflict are assessments of a single null hypothesis without any specific alternative hypothesis.
Hence, we conduct these assessments using a measure of surprise based on a discrepancy statistic.
For the discrete case, it is natural to use the probability of obtaining a data point that is less probable than the observed data.
For the continuous case, the natural analog of this is not invariant under equivalent choices of discrepancies.
A new method is developed to obtain an invariant assessment. This approach also allows several discrepancies to be combined into one discrepancy via a single P-value.
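As a toy illustration (not taken from the thesis), the discrete P-value described above, namely the probability of obtaining an outcome no more probable than the observed one, can be sketched in a few lines; the Binomial model check at the end is a hypothetical example:

```python
from math import comb

def surprise_pvalue(pmf, observed):
    """Discrete 'measure of surprise' P-value: the probability of
    obtaining an outcome no more probable than the observed one."""
    p_obs = pmf[observed]
    return sum(p for p in pmf.values() if p <= p_obs)

# Hypothetical model check: Binomial(4, 1/2) after observing x = 0.
# Outcomes 0 and 4 each have probability 1/16, so the P-value is 2/16.
pmf = {k: comb(4, k) / 16 for k in range(5)}
pval = surprise_pvalue(pmf, 0)
```

In the continuous case the analogous construction compares density values, and those depend on the choice of dominating measure and parametrization, which is exactly the invariance problem the thesis addresses.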
Second, Bayesians have developed many noninformative priors that are supposed to contain no information concerning the true parameter value.
Many of these are data dependent or improper, which can lead to a variety of difficulties.
Gelman (2006) introduced the notion of weak informativity as a compromise between informative and noninformative priors, but without a precise definition.
We give a precise definition of weak informativity using a measure of prior-data conflict that assesses whether or not a prior places its mass around the parameter values having relatively high likelihood.
In particular, we say a prior Pi_2 is weakly informative relative to another prior Pi_1 whenever Pi_2 leads to fewer prior-data conflicts a priori than Pi_1.
This leads to a precise quantitative measure of how much less informative a weakly informative prior is.
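As a hypothetical illustration of this comparison (the model, priors, and conflict check below are assumptions for the sketch, not the thesis's construction), one can estimate a-priori conflict rates by simulation for x ~ N(theta, 1) with theta ~ N(0, tau^2), flagging a conflict when the prior-predictive tail probability of the observation is small:

```python
import math
import random

random.seed(0)

def normal_tail_2sided(x, sd):
    # Two-sided tail probability of x under N(0, sd^2).
    return math.erfc(abs(x) / (sd * math.sqrt(2.0)))

def conflict_rate(tau_check, tau_gen, cutoff=0.05, n_sim=20000):
    """A-priori conflict rate for x ~ N(theta, 1), theta ~ N(0, tau^2):
    draw x from the prior predictive of the base prior (sd^2 = tau_gen^2 + 1)
    and count how often the prior with scale tau_check flags a conflict,
    i.e. how often its prior-predictive tail probability falls below cutoff."""
    sd_gen = math.sqrt(tau_gen**2 + 1.0)
    sd_check = math.sqrt(tau_check**2 + 1.0)
    n_conflict = 0
    for _ in range(n_sim):
        x = random.gauss(0.0, sd_gen)
        if normal_tail_2sided(x, sd_check) < cutoff:
            n_conflict += 1
    return n_conflict / n_sim

# Pi_1 = N(0, 1), Pi_2 = N(0, 5^2): the more diffuse Pi_2 flags far fewer
# conflicts a priori, so it is weakly informative relative to Pi_1.
r1 = conflict_rate(tau_check=1.0, tau_gen=1.0)   # roughly the cutoff
r2 = conflict_rate(tau_check=5.0, tau_gen=1.0)   # near zero
```

The gap between the two rates gives a quantitative sense of how much less informative the diffuse prior is, in the spirit of the definition above.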
In Bayesian data analysis, highest posterior density inference is a commonly used method.
This approach is not invariant to the choice of dominating measure or reparametrizations.
We explore properties of relative surprise inferences suggested by Evans (1997).
Relative surprise inferences which compare the belief changes from a priori to a posteriori are invariant under reparametrizations.
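A minimal sketch (a toy construction, not code from the thesis) of a relative surprise credible set for a discretized parameter: rank parameter values by the ratio of posterior to prior probability and accumulate posterior mass until a target level is reached.

```python
def relative_surprise_region(prior, posterior, gamma=0.95):
    """Rank parameter values by the relative belief ratio
    posterior/prior and accumulate posterior mass until at least
    gamma is reached.  prior and posterior are dicts of probabilities
    over the same finite support."""
    ranked = sorted(prior, key=lambda t: posterior[t] / prior[t], reverse=True)
    region, mass = [], 0.0
    for t in ranked:
        if mass >= gamma:
            break
        region.append(t)
        mass += posterior[t]
    return set(region)

# Toy example on a 3-point parameter space (hypothetical numbers).
prior = {"a": 0.5, "b": 0.3, "c": 0.2}
posterior = {"a": 0.2, "b": 0.1, "c": 0.7}
region = relative_surprise_region(prior, posterior, gamma=0.8)
```

In the continuous case the ratio of posterior to prior densities is unchanged under a smooth reparametrization because the Jacobian factors cancel, which is the source of the invariance.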
We mainly focus on the connection of relative surprise inferences to classical Bayesian decision theory, as well as their important optimality properties.
Fri, 13 Aug 2010 14:38:30 GMT
A Bayesian Approach to Factor Analysis via Comparing Prior and Posterior Concentration
http://hdl.handle.net/1807/24698
Title: A Bayesian Approach to Factor Analysis via Comparing Prior and Posterior Concentration
Authors: Cao, Yun
Abstract: We consider a factor analysis model in which the distributional form is known only up
to its first and second moments. We propose a new Bayesian approach to determine whether any latent factors exist and, if so, how many. As opposed to current Bayesian
methodology for factor analysis, our approach only requires the specification of a
prior for the mean vector and the variance matrix for the manifest variables. We
compare the concentration of the prior and posterior about the various subsets of
the parameter space specified by the hypothesized factor structures. We consider two priors here: one is of conjugate type, and the other is based on the correlation factorization of the covariance matrix. A computational problem associated with the second prior is solved by using importance sampling for the posterior analysis.
If the data do not lead to a substantial increase in the concentration of the posterior
about the relevant subset, compared to the prior, then we have evidence
against the hypothesized factor structure. The hypothesis is assessed by computing
the observed relative surprise. This results in a considerable simplification of the
problem, especially with respect to the elicitation of the prior.
Thu, 05 Aug 2010 19:35:39 GMT
Convergence of Adaptive Markov Chain Monte Carlo Algorithms
http://hdl.handle.net/1807/24673
Title: Convergence of Adaptive Markov Chain Monte Carlo Algorithms
Authors: Bai, Yan
Abstract: In this thesis, we study the ergodicity of adaptive Markov chain Monte Carlo (MCMC) methods based on two conditions (Diminishing Adaptation and Containment, which together imply ergodicity), explain the advantages of adaptive MCMC, and apply the theoretical results to several applications.
First we show several facts: (1) Diminishing Adaptation alone may not guarantee ergodicity; (2) Containment is not necessary for ergodicity; (3) under some additional condition, Containment is
necessary for ergodicity. Since Diminishing Adaptation is relatively easy to check and Containment is abstract, we focus on the
sufficient conditions for Containment. In order to study Containment, we consider quantitative bounds on the distance between samplers and targets in total variation norm. From earlier results, these quantitative bounds are connected with nested drift conditions for polynomial rates of convergence. For ergodicity of adaptive MCMC,
assuming that all samplers simultaneously satisfy nested polynomial drift conditions, we find that either when the number of nested
drift conditions is greater than or equal to two, or when the number of drift conditions with some specific form is one, the adaptive
MCMC algorithm is ergodic. For adaptive MCMC algorithms with Markovian adaptation, simultaneous polynomial ergodicity alone implies ergodicity, without those restrictions. We also discuss some recent results related to this topic.
Second, we consider the ergodicity of certain adaptive MCMC algorithms for multidimensional target
distributions, in particular adaptive Metropolis and adaptive Metropolis-within-Gibbs algorithms. We derive various sufficient conditions to ensure Containment, and connect the convergence rates of the algorithms with the tail properties of the corresponding target distributions. We also present a Summable Adaptive Condition which,
when satisfied, makes ergodicity easier to prove.
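As a simplified illustration of Diminishing Adaptation (a generic textbook-style sketch, not one of the thesis's algorithms), consider a one-dimensional random-walk Metropolis sampler whose log proposal scale is nudged toward a 0.44 acceptance rate with a step size that shrinks to zero:

```python
import math
import random

random.seed(1)

def log_target(x):
    # Standard normal target density, up to an additive constant.
    return -0.5 * x * x

def adaptive_rwm(n_iter=50000):
    """Random-walk Metropolis whose log proposal scale is nudged toward a
    0.44 acceptance rate.  The adaptation step 1/sqrt(n) -> 0 enforces
    Diminishing Adaptation; bounding the scale keeps the proposal family
    compact, a simple sufficient (not necessary) route to Containment."""
    x, log_scale = 0.0, 0.0
    samples = []
    for n in range(1, n_iter + 1):
        y = x + random.gauss(0.0, math.exp(log_scale))
        accept = math.log(random.random()) < log_target(y) - log_target(x)
        if accept:
            x = y
        # Diminishing adaptation: the adjustment shrinks to zero.
        step = 1.0 / math.sqrt(n)
        log_scale += step * ((1.0 if accept else 0.0) - 0.44)
        log_scale = max(-5.0, min(5.0, log_scale))  # bounded scales
        samples.append(x)
    return samples

samples = adaptive_rwm()
mean = sum(samples) / len(samples)
```

For a standard normal target the long-run sample mean should be near 0 and the sample variance near 1, which is a quick empirical check that the adapted chain still targets the right distribution.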
Finally, we propose a simple adaptive Metropolis-within-Gibbs algorithm that attempts to identify directions along which the Metropolis algorithm can be run flexibly. The algorithm avoids wasted moves in wrong directions by using proposals from the full-dimensional adaptive Metropolis algorithm. We also prove its ergodicity, and test it on a Gaussian Needle example and a real-life case-cohort study with competing risks. For the cohort study, we describe an extended version of the competing risks regression model, define censoring variables for the competing risks, and then apply the algorithm to estimate coefficients based on the posterior distribution.
Wed, 04 Aug 2010 14:50:15 GMT
First Passage Times: Integral Equations, Randomization and Analytical Approximations
http://hdl.handle.net/1807/19240
Title: First Passage Times: Integral Equations, Randomization and Analytical Approximations
Authors: Valov, Angel
Abstract: The first passage time (FPT) problem for Brownian motion has been extensively studied
in the literature. In particular, many incarnations of integral equations which link the density of the hitting time to the equation for the boundary itself have appeared. Most interestingly, Peskir (2002b) demonstrates that a master integral equation can be used to generate a countable number of new integrals via its differentiation or integration. In this thesis, we generalize Peskir's results and provide a more powerful unifying framework for generating integral equations through a new class of martingales. We obtain a continuum of new Volterra type equations and prove uniqueness for a subclass. The uniqueness result is
then employed to demonstrate how certain functional transforms of the boundary affect the density function. Furthermore, we generalize a class of Fredholm integral equations and show its fundamental
connection to the new class of Volterra equations. The Fredholm equations are then
shown to provide a unified approach for computing the FPT distribution for linear, square root and quadratic boundaries. In addition, through the Fredholm equations, we analyze a polynomial expansion of the FPT density and employ a regularization method to solve for the coefficients. Moreover, the Volterra and Fredholm equations help us to examine a modification of the classical FPT under which we randomize, independently, the starting point of the Brownian motion. This randomized problem seeks the distribution of the starting point and takes the boundary and the (unconditional) FPT distribution as inputs. We show the existence
and uniqueness of this random variable and solve the problem analytically for the linear
boundary. The randomization technique is then drawn on to provide a structural framework
for modeling mortality. We motivate the model and the 'risk-neutral'
measures it naturally induces for pricing mortality-linked financial products.
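For the linear boundary mentioned above, the FPT distribution of Brownian motion has a well-known closed form (the Bachelier-Levy formula). The sketch below (an independent illustration, not the thesis's method) checks it against a crude Monte Carlo estimate based on discretized paths:

```python
import math
import random

random.seed(2)

def phi(z):
    # Standard normal CDF via the complementary error function.
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def fpt_cdf_linear(a, b, t):
    """Closed-form P(T <= t) for Brownian motion started at 0 hitting the
    linear boundary a + b*s with a > 0 (the Bachelier-Levy formula)."""
    return (phi((-a - b * t) / math.sqrt(t))
            + math.exp(-2.0 * a * b) * phi((-a + b * t) / math.sqrt(t)))

def fpt_cdf_mc(a, b, t=1.0, n_paths=5000, n_steps=200):
    """Crude Monte Carlo estimate: simulate discretized Brownian paths and
    count those that cross the boundary by time t.  Monitoring only at grid
    points slightly underestimates the true crossing probability."""
    dt = t / n_steps
    sd = math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        w = 0.0
        for k in range(1, n_steps + 1):
            w += random.gauss(0.0, sd)
            if w >= a + b * k * dt:
                hits += 1
                break
    return hits / n_paths

exact = fpt_cdf_linear(1.0, 0.5, 1.0)   # about 0.180
approx = fpt_cdf_mc(1.0, 0.5)
```

The square root and quadratic boundaries treated in the thesis have no such elementary closed form, which is where the Fredholm-equation approach comes in.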
Finally, we address the inverse FPT problem and show that, in the case of the scale family
of distributions, it is reducible to finding a single base boundary. This result was applied
to the exponential and uniform distributions to obtain analytical approximations of their
corresponding base boundaries and, through the scaling property, of a general boundary.
Wed, 03 Mar 2010 15:33:32 GMT