Sequentially adaptive Bayesian learning algorithms for inference and optimization
Publication date: Available online 12 November 2018
Source: Journal of Econometrics
Author(s): John Geweke, Garland Durham
The sequentially adaptive Bayesian learning algorithm (SABL) builds on and ties together ideas from sequential Monte Carlo and simulated annealing. The algorithm can be used to simulate from Bayesian posterior distributions, using either data tempering or power tempering, or for optimization. A key feature of SABL is that the introduction of information is adaptive and controlled, ensuring that the algorithm performs reliably and efficiently across a wide variety of applications with off-the-shelf settings, minimizing the need for tedious tuning, tinkering, and trial and error by users. The algorithm is pleasingly parallel, and a Matlab toolbox implementing it makes efficient use of massively parallel computing environments such as graphics processing units (GPUs) with minimal user effort. This paper describes the algorithm, provides theoretical foundations, applies the algorithm to Bayesian inference and optimization problems that illustrate key properties of its operation, and briefly describes the open-source software implementation.
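To make the idea concrete, the following is a minimal sketch (not the SABL toolbox itself) of sequential Monte Carlo with adaptive power tempering: particles start at the prior, the likelihood is introduced through a power that rises from 0 to 1, and each increment is chosen adaptively so the effective sample size (ESS) stays above a threshold, followed by resampling and a Metropolis rejuvenation step. The toy model, threshold, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumption for illustration): prior N(0, 10^2),
# likelihood from data y_i ~ N(theta, 1).
y = rng.normal(2.0, 1.0, size=50)

def log_prior(theta):
    return -0.5 * theta**2 / 100.0

def log_lik(theta):
    # Vectorized log-likelihood over an array of particles.
    return -0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)

N = 2000
theta = rng.normal(0.0, 10.0, size=N)  # initial particles drawn from the prior
t = 0.0                                # tempering power, raised from 0 to 1

while t < 1.0:
    ll = log_lik(theta)

    def ess(t_new):
        # ESS of incremental weights likelihood^(t_new - t).
        w = np.exp((t_new - t) * (ll - ll.max()))
        return w.sum() ** 2 / (w**2).sum()

    # Adaptive step: largest power increment keeping ESS above N/2.
    if ess(1.0) > N / 2:
        t_new = 1.0
    else:
        lo, hi = t, 1.0
        for _ in range(50):            # bisect on the new power
            mid = 0.5 * (lo + hi)
            if ess(mid) > N / 2:
                lo = mid
            else:
                hi = mid
        t_new = lo

    # Reweight and resample.
    w = np.exp((t_new - t) * (ll - ll.max()))
    w /= w.sum()
    theta = theta[rng.choice(N, size=N, p=w)]

    # Rejuvenate with one random-walk Metropolis step at power t_new.
    def log_post(th):
        return log_prior(th) + t_new * log_lik(th)

    prop = theta + 2.38 * theta.std() * rng.normal(size=N)
    accept = np.log(rng.uniform(size=N)) < log_post(prop) - log_post(theta)
    theta = np.where(accept, prop, theta)
    t = t_new

print(theta.mean())  # close to the posterior mean, roughly the sample mean of y
```

The ESS-based choice of the next power is one common way to realize the "adaptive and controlled introduction of information" described above; every operation is a particle-wise array computation, which is what makes this family of algorithms naturally parallel.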