Alexander Terenin and David Draper

09/25/2015 03:38 PM

Applied Mathematics & Statistics

Gibbs sampling is a Markov chain Monte Carlo (MCMC) method for numerically approximating integrals of interest in Bayesian statistics and other mathematical sciences. Since MCMC methods typically suffer from poor scaling when the integral in question is high-dimensional (for example, in problems in Bayesian statistics involving large data sets), researchers have attempted to find ways to speed up computation. We present a novel scheme that allows us to approximate any integral (for which a Gibbs sampler exists) in a parallel fashion with no synchronization or locking, avoiding the typical performance bottlenecks of parallel algorithms. We provide three examples that offer numerical evidence of the scheme's convergence and illustrate some of the algorithm's properties with respect to scaling. Within the bounds of our hardware resources, we have not yet found a limit to the algorithm's scaling, so its true capabilities remain unknown. The convergence proof for our scheme is a work in progress, and we defer it to a future publication.
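As a minimal illustration of the kind of computation described above, the following sketch shows an ordinary, sequential Gibbs sampler (not the asynchronous scheme this paper proposes) that approximates an integral for a standard bivariate normal distribution; the target distribution, correlation value, and function names here are illustrative choices, not taken from the paper.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=1000, seed=0):
    """Sequential Gibbs sampler for a standard bivariate normal
    with correlation rho, alternately drawing from the two full
    conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically
    for y | x."""
    rng = random.Random(seed)
    cond_sd = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    x, y = 0.0, 0.0
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, cond_sd)
        y = rng.gauss(rho * x, cond_sd)
        if i >= burn_in:
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
# Monte Carlo approximation of the integral E[XY], whose true value is rho.
estimate = sum(x * y for x, y in samples) / len(samples)
print(estimate)
```

In the sequential version above, each conditional draw must wait for the previous one, which is precisely the dependency that makes naive parallelization of Gibbs sampling require synchronization or locking.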