Minibatch Metropolis-Hastings MCMC with Reversible SGLD Proposal
Seminar/Forum
Traditional MCMC algorithms are computationally intensive and do not scale well to large datasets. In particular, the Metropolis-Hastings (MH) algorithm requires a pass over the entire dataset to evaluate the likelihood ratio at each iteration. We propose a general framework for performing MH-MCMC using minibatches of the whole dataset and show that this gives rise to an approximately tempered stationary distribution. We prove that the algorithm preserves the modes of the original target distribution and derive an error bound on the approximation under mild assumptions on the likelihood, while allowing the dimension to grow at a suitable rate. To further enhance the utility of the algorithm in high-dimensional settings, we construct a proposal with forward and reverse moves using stochastic gradients and show that this construction significantly increases acceptance probabilities. We demonstrate the performance of our algorithm on both low-dimensional models and high-dimensional deep learning applications. In the latter case in particular, compared with popular optimization methods, our method is more robust to the choice of learning rate and improves test accuracy.
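
For intuition, below is a minimal sketch of the minibatch MH idea with an SGLD-style (Langevin) proposal, illustrated on one-dimensional Gaussian mean estimation. This is not the presenter's exact algorithm: the tempering analysis, error bounds, and reversibility construction in the paper are more refined, and all names and values here (batch size m, step size step, the toy model, etc.) are illustrative assumptions.

# Sketch: minibatch MH with an SGLD-style proposal on 1-D Gaussian
# mean estimation. Illustrative only; not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: N observations from N(true_mu, 1).
N, true_mu = 100_000, 2.0
data = rng.normal(true_mu, 1.0, size=N)

def minibatch_grad_loglik(theta, batch):
    # Stochastic gradient of the log-likelihood, rescaled to full data.
    return N * np.mean(batch - theta)

def minibatch_loglik(theta, batch):
    # Minibatch estimate of the full-data log-likelihood (flat prior).
    return N * np.mean(-0.5 * (batch - theta) ** 2)

def sgld_mean(theta, batch, step):
    # Mean of the Langevin proposal centered at theta.
    return theta + 0.5 * step * minibatch_grad_loglik(theta, batch)

m, step, n_iter = 500, 1e-5, 2000   # assumed tuning values
theta, samples = 0.0, []
for _ in range(n_iter):
    batch = rng.choice(data, size=m, replace=False)
    # Forward move: stochastic-gradient drift plus Gaussian noise.
    fwd_mean = sgld_mean(theta, batch, step)
    prop = rng.normal(fwd_mean, np.sqrt(step))
    # Reverse move on the SAME minibatch gives the Hastings
    # correction q(theta | prop) / q(prop | theta).
    rev_mean = sgld_mean(prop, batch, step)
    log_q_fwd = -0.5 * (prop - fwd_mean) ** 2 / step
    log_q_rev = -0.5 * (theta - rev_mean) ** 2 / step
    # Minibatch log-likelihood ratio replaces the full-data ratio;
    # this noise is why the stationary distribution is only
    # approximately (a tempered version of) the target.
    log_alpha = (minibatch_loglik(prop, batch)
                 - minibatch_loglik(theta, batch)
                 + log_q_rev - log_q_fwd)
    if np.log(rng.random()) < log_alpha:
        theta = prop
    samples.append(theta)

print(f"posterior mean estimate: {np.mean(samples[500:]):.3f}")

Evaluating the reverse move on the same minibatch as the forward move keeps the proposal ratio internally consistent, which is the rough analogue of the forward/reverse construction the abstract credits with raising acceptance probabilities.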
Presenter

Dr Rachel Wang, University of Sydney