Dual Reinforcement Learning Synergy in Resource Allocation: Emergence of Self-Organized Momentum Strategy
Abstract
In natural ecosystems and human societies, self-organized resource allocation and policy synergy are ubiquitous and consequential. This work studies the synergy between Dual Reinforcement Learning Policies in the Minority Game (DRLP-MG) as a means of optimizing resource allocation. We examine a mixed-structured population composed of two subpopulations: a Q-subpopulation that follows a Q-learning policy and a C-subpopulation that adopts the classical policy. We first identify a synergy effect between these subpopulations and show that a first-order phase transition occurs as their mixing ratio varies. Further analysis reveals that the Q-subpopulation contains two internal-synergy clusters (IS-clusters) and a single external-synergy cluster (ES-cluster). The former contribute to the internal synergy within the Q-subpopulation through synchronization and anti-synchronization, whereas the latter drives the inter-subpopulation synergy. Within the ES-cluster, the momentum strategy familiar from financial markets emerges and plays a crucial role in the inter-subpopulation synergy: it prevents long-term under-utilization of resources, but it also triggers trend reversals and reduces the rewards of those who adopt it. Our results show that the frozen effect, in either the C- or the Q-subpopulation, is a crucial prerequisite for synergy, consistent with previous studies. We also provide mathematical analyses of the subpopulation synergy effect and of the synchronized and anti-synchronized behavior of the IS-clusters within the Q-subpopulation. Overall, our work comprehensively explores the complex resource-allocation dynamics in DRLP-MG, uncovering multiple synergy mechanisms and the conditions under which they arise, thereby enriching the theoretical understanding of reinforcement-learning-based resource allocation and offering practical insights.
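To make the setup concrete, the following is a minimal Python sketch of a mixed population of the kind described above: classical agents score strategy tables by virtual points, while Q-learning agents act epsilon-greedily on a Q-table. The state encoding (last winning side), the ±1 reward, and all parameter values (N, rho, m, S, alpha, gamma, eps) are illustrative assumptions, not the paper's exact specification.

    import numpy as np

    rng = np.random.default_rng(0)
    N, rho, m, S = 201, 0.5, 3, 2              # population size, Q-fraction, memory, strategies per agent
    alpha, gamma, eps, T = 0.1, 0.9, 0.02, 5000

    n_q = int(rho * N)                          # size of the Q-subpopulation
    n_c = N - n_q                               # size of the C-subpopulation

    # C-subpopulation: each agent holds S random strategy tables over 2**m histories.
    strategies = rng.integers(0, 2, size=(n_c, S, 2 ** m))
    scores = np.zeros((n_c, S))                 # virtual points per strategy
    history = int(rng.integers(0, 2 ** m))      # encoded string of the last m winners

    # Q-subpopulation: state = last winning side, actions = {0, 1}.
    Q = np.zeros((n_q, 2, 2))
    state = int(rng.integers(0, 2))

    for t in range(T):
        # Classical agents play their currently best-scoring strategy.
        best = scores.argmax(axis=1)
        a_c = strategies[np.arange(n_c), best, history]

        # Q-learning agents act epsilon-greedily on their Q-tables.
        greedy = Q[:, state, :].argmax(axis=1)
        explore = rng.random(n_q) < eps
        a_q = np.where(explore, rng.integers(0, 2, n_q), greedy)

        attendance = a_c.sum() + a_q.sum()      # number of agents choosing side 1
        winner = int(attendance < N / 2)        # the minority side wins

        # Award a virtual point to every classical strategy that predicted the winner.
        scores += (strategies[:, :, history] == winner)
        history = ((history << 1) | winner) % (2 ** m)

        # Q-learning update: reward +1 for joining the minority, -1 otherwise.
        r = np.where(a_q == winner, 1.0, -1.0)
        idx = np.arange(n_q)
        Q[idx, state, a_q] += alpha * (
            r + gamma * Q[:, winner, :].max(axis=1) - Q[idx, state, a_q]
        )
        state = winner

Tracking the attendance series of this sketch over time would expose the quantities discussed in the paper, such as resource utilization and the synchronization patterns among Q-learning agents, though the clustering and phase-transition analyses themselves require the paper's full methodology.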