The Principles of Probability: From Formal Logic to Measure Theory to the Principle of Indifference
Abstract
In this work, we develop a formal system of inductive logic. The system uses an infinitary language that allows countable conjunctions and disjunctions, is built on nine syntactic rules of inductive inference, and contains classical first-order logic as a special case. We also provide a natural probabilistic semantics and prove both $\sigma$-compactness and completeness. We show that the whole of measure-theoretic probability theory is embedded in this system of inductive logic. The semantic models of inductive logic are probability measures on sets of structures, where structures are the semantic models of finitary, deductive logic. Moreover, any probability space, together with a set of its random variables, can be mapped to such a model in a way that gives each outcome, event, and random variable a logical interpretation. The embedding, however, is proper: there are scenarios expressible in this system of logic that cannot be formulated in a measure-theoretic probability model. The principle of indifference, an idea originating with Laplace, says, roughly, that if we are "equally ignorant" about two possibilities, then we should assign them the same probability. The principle has no rigorous formulation in probability theory; it exists only as a heuristic, its use has a problematic history, and it is prone to apparent paradoxes. Within inductive logic, however, we formulate it rigorously and illustrate its use through a number of examples. Many of the ideas in inductive logic have counterparts in measure theory; the principle of indifference does not. Its formulation requires the structure of inductive logic, both its syntax and the semantic structures embedded in its models. As such, it exemplifies the fact that inductive logic is a strictly broader theory of probability than any based on measure theory alone.
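As a rough illustration of the heuristic (our sketch, not one of the paper's formal examples), consider Laplace's die: having no reason to distinguish the six faces, indifference assigns
\[
P(\text{face } i) = \tfrac{1}{6}, \qquad i = 1, \dots, 6,
\]
since equally ignorant possibilities must receive equal probability and the six probabilities sum to one.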