Strategic Cyber Defense via Reinforcement Learning-Guided Combinatorial Auctions
Abstract
Cyber defense operations increasingly require long-term strategic planning under uncertainty and resource constraints. We propose a new use of combinatorial auctions for allocating defensive action bundles in a realistic cyber environment, using host-specific valuations derived from reinforcement learning (RL) Q-values. Because these Q-values encode long-term expected utility, they support planning upstream of individual action execution. We train CAFormer, a differentiable Transformer-based auction mechanism, to produce allocations that are approximately incentive-compatible, i.e., robust to strategic misreporting of valuations. Rather than benchmarking against existing agents, we explore the qualitative and strategic properties of the learned mechanisms. Compared to oracle and heuristic allocations, our method achieves competitive revenue while remaining robust to misreporting. In addition, we find that allocation patterns correlate with adversarial and defensive activity, suggesting implicit alignment with operational priorities. Our results demonstrate the viability of auction-based planning in cyber defense and highlight the interpretability benefits of RL-derived value structures.
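To make the valuation-to-allocation pipeline concrete, the minimal sketch below illustrates the idea under simplifying assumptions: host-specific Q-values for candidate defensive actions are aggregated into bundle bids, and a simple budget-constrained greedy allocator resolves them. All names (q_values, bundle_value, allocate_greedy) and the toy numbers are hypothetical; the paper's CAFormer mechanism replaces the greedy allocator with a learned, differentiable Transformer-based auction.

```python
from itertools import combinations
from typing import Dict, FrozenSet, List, Tuple

# Hypothetical Q-values per (host, defensive action), e.g. from a trained RL policy.
# Each value stands in for the long-term expected utility of that action on that host.
q_values: Dict[Tuple[str, str], float] = {
    ("host_a", "patch"): 4.2, ("host_a", "isolate"): 1.1,
    ("host_b", "patch"): 0.8, ("host_b", "isolate"): 3.5,
    ("host_c", "restore"): 2.7,
}

def bundle_value(host: str, actions: FrozenSet[str]) -> float:
    """Value a bundle of actions on a host as the sum of its Q-values.
    (A learned mechanism could capture complementarities instead.)"""
    return sum(q_values.get((host, a), 0.0) for a in actions)

def enumerate_bids(candidates: Dict[str, List[str]]) -> List[Tuple[str, FrozenSet[str], float]]:
    """Each host 'bids' on every non-empty subset of its candidate actions."""
    bids = []
    for host, actions in candidates.items():
        for r in range(1, len(actions) + 1):
            for combo in combinations(actions, r):
                bundle = frozenset(combo)
                bids.append((host, bundle, bundle_value(host, bundle)))
    return bids

def allocate_greedy(bids, budget: int):
    """Toy stand-in for the auction: take the highest-value bundles, at most one
    per host, while the total number of allocated actions stays within budget."""
    allocation, used, allocated_hosts = [], 0, set()
    for host, bundle, value in sorted(bids, key=lambda b: b[2], reverse=True):
        if host in allocated_hosts or used + len(bundle) > budget:
            continue
        allocation.append((host, bundle, value))
        allocated_hosts.add(host)
        used += len(bundle)
    return allocation

if __name__ == "__main__":
    candidates = {"host_a": ["patch", "isolate"],
                  "host_b": ["patch", "isolate"],
                  "host_c": ["restore"]}
    for host, bundle, value in allocate_greedy(enumerate_bids(candidates), budget=3):
        print(f"{host}: {sorted(bundle)} (value {value:.1f})")
```

In this sketch the allocator is a fixed heuristic, so it offers no incentive guarantees; the learned mechanism described above is instead trained so that misreporting the Q-value-derived valuations yields little benefit.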