De Finetti + Sanov = Bayes
Abstract
We develop a framework that operationalizes models and parameters by combining de Finetti's representation theorem with a conditional form of Sanov's theorem. This synthesis, the tilted de Finetti theorem, shows that conditioning exchangeable sequences on empirical moment constraints yields predictive laws in exponential families via the I-projection of a baseline measure. Parameters emerge as limits of empirical functionals, providing a probabilistic foundation for maximum entropy (MaxEnt) principles. The result explains why exponential tilting governs likelihood methods and Bayesian updating, and it connects naturally to finite-sample concentration rates that anticipate PAC-Bayes bounds. Examples include Gaussian scale mixtures, where symmetry uniquely selects location-scale families, and Jaynes' Brandeis dice problem, where partial information tilts the uniform law. Broadly, the theorem unifies exchangeability, large deviations, and entropy concentration, clarifying the ubiquity of exponential families and MaxEnt's role as the inevitable predictive limit under partial information.
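To fix ideas, the tilting mechanism can be sketched as follows (the symbols $Q_0$, $T$, $\lambda$, $t$ are illustrative and introduced only here): conditioning on the empirical moment constraint $\tfrac{1}{n}\sum_{i=1}^{n} T(X_i) \approx t$ drives the predictive law toward the I-projection of the baseline $Q_0$ onto $\{P : \mathbb{E}_P[T] = t\}$, which is the exponential tilt

\[
  \mathrm{d}P^{*}(x) \;=\; \frac{e^{\langle \lambda,\, T(x)\rangle}}{Z(\lambda)}\,\mathrm{d}Q_0(x),
  \qquad
  Z(\lambda) \;=\; \int e^{\langle \lambda,\, T(x)\rangle}\,\mathrm{d}Q_0(x),
\]

with $\lambda$ chosen so that $\mathbb{E}_{P^{*}}[T] = t$. In the Brandeis dice example, $Q_0$ is uniform on $\{1,\dots,6\}$, $T(x) = x$, and $t$ is the prescribed mean (4.5 in Jaynes' formulation).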