Frequentist Persuasion
Abstract
A sender persuades a strategically naive decision-maker (DM) by committing privately to an experiment. The sender's choice of experiment is unknown to the DM, who must form her posterior beliefs nonparametrically by applying a learning rule to an IID sample of (state, message) realizations. We show that, under mild regularity conditions, the empirical payoff functions hypo-converge to their full-information counterpart, which suffices to ensure that payoffs and optimal signals converge to the Bayesian benchmark. For finite sample sizes, the force of this "sampling friction" is nonmonotonic: it can induce experiments more informative than the Bayesian benchmark in settings like the classic Prosecutor-Judge game, and less revelation even when preferences are perfectly aligned. For many problems with state-independent preferences, we show that there is an optimal finite sample size for the DM. Although the DM would always prefer a larger sample for a fixed experiment, this result holds because the sample size affects the sender's choice of experiment. Our results are robust to imperfectly informative feedback and to the choice of learning rule.
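The learning mechanism described above can be sketched in a minimal simulation. This is purely illustrative and not the paper's model: the binary state, the prior of 0.3, the leakage rate q, and all function names are our own assumptions, chosen in the spirit of the Prosecutor-Judge example. The frequentist DM estimates the posterior probability of guilt given a "guilty" message from empirical frequencies in the sample, and the estimate approaches the Bayesian posterior as the sample grows.

```python
import random

random.seed(0)

# Hypothetical binary-state persuasion setting (our assumption, not the
# paper's exact model). State is guilty (1) with prob. 0.3, else innocent (0).
PRIOR_GUILTY = 0.3
# Sender's experiment: send message "g" whenever guilty; if innocent,
# send "g" with probability q (leakage) and "i" otherwise.
Q = 3 / 7  # chosen so the Bayesian posterior P(guilty | "g") equals 1/2

def draw(q):
    """One IID (state, message) realization from the sender's experiment."""
    state = 1 if random.random() < PRIOR_GUILTY else 0
    msg = "g" if state == 1 or random.random() < q else "i"
    return state, msg

def empirical_posterior(n, q):
    """Frequentist DM: estimate P(state = 1 | msg = 'g') by the empirical
    frequency of guilt among sampled draws carrying message 'g'."""
    sample = [draw(q) for _ in range(n)]
    g_draws = [s for s, m in sample if m == "g"]
    if not g_draws:
        return PRIOR_GUILTY  # no evidence: fall back on the prior
    return sum(g_draws) / len(g_draws)

# Bayesian benchmark via Bayes' rule:
bayes = PRIOR_GUILTY / (PRIOR_GUILTY + (1 - PRIOR_GUILTY) * Q)  # = 0.5
for n in (20, 200, 20000):
    print(n, round(empirical_posterior(n, Q), 3))
```

The sketch only exhibits the convergence for a fixed experiment; the paper's finite-sample nonmonotonicity arises because the sender re-optimizes the experiment against the DM's sample size.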