A Computational Framework and Implementation of Implicit Priors in Bayesian Inverse Problems
Abstract
Solving Bayesian inverse problems typically involves deriving a posterior distribution using Bayes' rule and then sampling from this posterior for analysis. Sampling methods, such as general-purpose Markov chain Monte Carlo (MCMC), are commonly used, but they require the prior and likelihood densities to be explicitly provided. In cases where expressing the prior explicitly is challenging, implicit priors offer an alternative by encoding prior information indirectly. These priors have attracted increasing interest in recent years, with methods such as Plug-and-Play (PnP) priors and Regularized Linear Randomize-then-Optimize (RLRTO) providing computationally efficient alternatives to standard MCMC algorithms. However, the general concept of implicit priors for Bayesian inverse problems has yet to be systematically explored, and little effort has been made to unify different kinds of implicit priors. This paper presents a computational framework for implicit priors, clarifying how they differ from explicit priors. We also introduce an implementation of various implicit priors within the CUQIpy Python package for Computational Uncertainty Quantification in Inverse Problems. Using this implementation, we showcase several implicit prior techniques by applying them to a variety of inverse problems, ranging from image processing to parameter estimation in partial differential equations.