Testing (Conditional) Mutual Information
Abstract
We investigate the sample complexity of mutual information and conditional mutual information testing. For conditional mutual information testing, given access to independent samples of a triple of random variables $(A, B, C)$ with an unknown joint distribution $P_{ABC}$, we want to distinguish between two cases: (i) $A$ and $C$ are conditionally independent given $B$, i.e., $I(A\!:\!C|B) = 0$, and (ii) $A$ and $C$ are conditionally dependent, i.e., $I(A\!:\!C|B) \geq \varepsilon$ for some threshold $\varepsilon > 0$. We establish an upper bound on the number of samples required to distinguish between the two cases with high confidence, as a function of $\varepsilon$ and the three alphabet sizes. We conjecture that our bound is tight and show that this is indeed the case in several parameter regimes. For the special case of mutual information testing (when $B$ is trivial), we determine the number of samples that is both necessary and sufficient, up to polylogarithmic factors. Our technical contributions include a novel method for efficiently simulating weakly correlated samples from the conditionally independent distribution $P_{A|B} P_{C|B} P_B$ given access to samples from the unknown distribution $P_{ABC}$, and a new estimator for equivalence testing that can handle such correlated samples, which might be of independent interest.
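For concreteness, the two hypotheses above can be read as a statement about the distance between $P_{ABC}$ and the conditionally independent distribution $P_{A|B} P_{C|B} P_B$, via the standard identity for the conditional mutual information of discrete random variables (not restated in the abstract, recorded here only as a reference point):
\[
I(A\!:\!C|B) \;=\; \sum_{a,b,c} P_{ABC}(a,b,c)\,\log\frac{P_B(b)\,P_{ABC}(a,b,c)}{P_{AB}(a,b)\,P_{BC}(b,c)} \;=\; D\!\left(P_{ABC}\,\middle\|\,P_{A|B} P_{C|B} P_B\right),
\]
so case (i) asserts that $P_{ABC}$ coincides with $P_{A|B} P_{C|B} P_B$, while case (ii) asserts that the two distributions are at least $\varepsilon$-far in Kullback-Leibler divergence.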