A global Lipschitz stability perspective for understanding approximate approaches in Bayesian sequential learning
Abstract
We establish a general, non-asymptotic error analysis framework for understanding how the incremental approximations made by practical Bayesian sequential learning (BSL) approaches affect their long-term inference performance. Our setting covers inverse problems, state estimation, and parameter-state estimation. In each of these settings, we bound the difference, termed the learning error, between the unknown true posterior and the approximate posterior computed by such approaches, using three widely used metrics on probability distributions: the total variation, Hellinger, and Wasserstein distances. The framework builds on the global Lipschitz stability of the posterior with respect to the prior, which we establish across these settings. To the best of our knowledge, this is the first work to establish such global Lipschitz stability under the Hellinger and Wasserstein distances, and the first general error analysis framework for approximate BSL methods. Our framework offers two sets of upper bounds on the learning error. The first set demonstrates the stability of general approximate BSL methods with respect to the incremental approximation process, while the second set is estimable in many practical scenarios. Furthermore, as an initial step toward understanding the sometimes-observed phenomenon of learning error decay, we identify sufficient conditions under which data assimilation reduces the learning error.
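As an illustration of the type of non-asymptotic bound such a stability perspective can yield, the display below gives a generic sketch; the notation (metric $d$, true posterior $\mu_n$, approximate posterior $\hat{\mu}_n$, per-step Lipschitz constants $L_j$, and incremental approximation errors $\varepsilon_k$) is illustrative shorthand introduced here and is not taken from the paper's theorems. Assuming both recursions start from the same prior, each exact Bayesian update is globally Lipschitz in its prior with constant $L_j$ under $d$ (e.g., total variation, Hellinger, or Wasserstein), and step $k$ introduces an incremental approximation error of at most $\varepsilon_k$, a telescoping argument gives a bound of the following form.

% Illustrative sketch only: per-step approximation errors are propagated
% through globally Lipschitz update maps and accumulated over n steps,
% bounding the learning error d(mu_n, hat{mu}_n) non-asymptotically.
\[
  d\bigl(\mu_n, \hat{\mu}_n\bigr)
  \;\le\; \sum_{k=1}^{n} \Bigl(\,\prod_{j=k+1}^{n} L_j\Bigr)\, \varepsilon_k .
\]

Under these assumptions, the learning error after $n$ assimilation steps remains controlled whenever the constants $L_j$ and the one-step errors $\varepsilon_k$ can be estimated, which is the role played by the estimable set of upper bounds described in the abstract.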