
Asymptotically exact data augmentation (AXDA)

Data augmentation, that is, the introduction of auxiliary variables, has become a ubiquitous technique to improve the mixing and convergence properties, simplify the implementation, or reduce the computational cost of inference methods such as Markov chain Monte Carlo. Nonetheless, introducing appropriate auxiliary variables while preserving the initial target probability distribution cannot be done systematically: it depends strongly on the problem at hand. To address this issue, we introduce a unified framework, namely asymptotically exact data augmentation (AXDA), which encompasses both well-established and more recent approximate augmented models. Benefiting from this more general perspective, we deliver additional qualitative and quantitative insights into these schemes. In particular, we state general properties of AXDA along with non-asymptotic theoretical results on the approximation it induces. Close connections to existing Bayesian methods (e.g., mixture modeling, robust Bayesian models and approximate Bayesian computation) are also drawn. All the results are illustrated with examples and applied to standard statistical learning problems.
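To fix ideas, here is a minimal sketch of the AXDA construction in its simplest form (the splitting of the potential into f_1, f_2 and the Gaussian coupling kernel shown here are one common instance; the paper covers more general kernels). Starting from a target \pi(\theta) \propto \exp\{-f_1(\theta) - f_2(\theta)\}, AXDA introduces an auxiliary variable z and a tolerance parameter \rho > 0:

    \pi_\rho(\theta, z) \propto \exp\Big\{ -f_1(z) - f_2(\theta) - \frac{1}{2\rho^2}\,\lVert \theta - z \rVert_2^2 \Big\},

whose \theta-marginal converges to \pi as \rho \to 0, hence the name asymptotically exact. The benefit is that f_1 and f_2 are decoupled: each conditional of \pi_\rho involves only one of them plus a Gaussian term.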

The AXDA framework, its properties, and its connections to existing models and algorithms (optimization, expectation-maximization, Monte Carlo sampling and variational Bayes) are detailed in the paper published in the Journal of Computational and Graphical Statistics, together with a Jupyter notebook, available on Maxime Vono's GitHub, that reproduces the results of the paper.

Split (and augmented) Gibbs sampler

In particular, Monte Carlo sampling from an AXDA model leads to the split-and-augmented Gibbs sampler (SPA), detailed in the paper published in IEEE Trans. Signal Processing, along with the associated Matlab code.
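To illustrate the mechanics, below is a minimal Python sketch of such a split Gibbs sampler on a toy Gaussian denoising problem (the model, parameter values and variable names are illustrative, not taken from the paper). Here both conditionals are Gaussian, so each Gibbs step is exact:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy target: pi(theta | y) prop. to exp(-||y - theta||^2 / (2 sigma^2) - ||theta||^2 / (2 tau^2)).
    # AXDA splits likelihood and prior via an auxiliary z coupled to theta
    # by a Gaussian kernel of width rho (illustrative values below).
    d, sigma, tau, rho = 10, 1.0, 2.0, 0.1
    y = rng.normal(size=d)

    n_iter = 5000
    theta = np.zeros(d)
    samples = np.empty((n_iter, d))

    s2_z = 1.0 / (1.0 / sigma**2 + 1.0 / rho**2)  # variance of z | theta, y
    s2_t = 1.0 / (1.0 / tau**2 + 1.0 / rho**2)    # variance of theta | z

    for t in range(n_iter):
        # Sample the splitting variable given theta (carries the likelihood).
        m_z = s2_z * (y / sigma**2 + theta / rho**2)
        z = m_z + np.sqrt(s2_z) * rng.normal(size=d)
        # Sample theta given z (carries the prior).
        m_t = s2_t * (z / rho**2)
        theta = m_t + np.sqrt(s2_t) * rng.normal(size=d)
        samples[t] = theta

    # As rho -> 0, the theta-marginal approaches the exact posterior,
    # whose mean is y * tau^2 / (sigma^2 + tau^2).
    print(samples[1000:].mean(axis=0))
    print(y * tau**2 / (sigma**2 + tau**2))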

Application to sparse logistic regression

As an illustration, an AXDA-based model and the corresponding split-and-augmented Gibbs sampler have been used to perform sparse Bayesian logistic regression efficiently. The results are reported in the paper presented at the IEEE Workshop on Machine Learning for Signal Processing (MLSP 2018).
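For this problem, a plausible instantiation of the split model (the per-observation splitting and Gaussian coupling shown here follow the general AXDA recipe; the paper's exact formulation may differ) couples one auxiliary variable per observation to the linear predictor, so that the logistic likelihood and the sparsity-inducing Laplace prior are handled by separate, simpler conditionals:

    \pi_\rho(\theta, z \mid y) \propto \exp\Big\{ -\sum_{i=1}^n \log\big(1 + e^{-y_i z_i}\big) - \lambda \lVert \theta \rVert_1 - \frac{1}{2\rho^2} \sum_{i=1}^n \big(z_i - x_i^\top \theta\big)^2 \Big\}.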

More results are also available here.

Application to image restoration under Poisson noise and log-concave prior

Another instance of the proposed AXDA model and split-and-augmented Gibbs sampler has been implemented to perform Bayesian image restoration under Poisson noise with a log-concave prior (e.g., TV regularization or sparse frame-based synthesis regularization). The results are reported in the paper presented at the IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP 2019).
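In the same spirit, a plausible form of the split model for this setting (the blurring operator H, the per-pixel splitting and the TV prior below are assumptions for illustration; see the paper for the exact formulation) reads

    \pi_\rho(x, z \mid y) \propto \exp\Big\{ -\sum_i \big(z_i - y_i \log z_i\big) - \lambda\,\mathrm{TV}(x) - \frac{1}{2\rho^2}\,\lVert z - Hx \rVert_2^2 \Big\}, \qquad z_i > 0,

so that the Poisson potential and the log-concave prior appear in separate conditionals, each easier to sample than the original posterior.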
