Deep Reinforcement Learning-Assisted Component Auto-Configuration of Differential Evolution Algorithm for Constrained Optimization: A Foundation Model
Abstract
Despite significant efforts to manually design high-performance evolutionary algorithms, their adaptability remains limited due to the dynamic and ever-evolving nature of real-world problems. The "no free lunch" theorem highlights that no single algorithm performs optimally across all problems. While online adaptation methods have been proposed, they often suffer from inefficiency, weak convergence, and limited generalization on constrained optimization problems (COPs). To address these challenges, we introduce a novel framework, powered by Deep Reinforcement Learning (DRL), for automated component configuration of the Differential Evolution (DE) algorithm on COPs. Specifically, we propose SuperDE, a foundation model that dynamically configures DE's evolutionary components based on real-time evolutionary feedback. Trained offline through meta-learning across a wide variety of COPs, SuperDE is capable of recommending optimal per-generation configurations for unseen problems in a zero-shot manner. Utilizing a Double Deep Q-Network (DDQN), SuperDE adapts its configuration strategies in response to the evolving population states during optimization. Experimental results demonstrate that SuperDE significantly outperforms existing state-of-the-art algorithms on benchmark test suites, achieving superior generalization and optimization performance.
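To make the per-generation configuration idea concrete, the sketch below shows a DE loop on a toy constrained problem in which an agent selects the mutation strategy each generation and is rewarded by the fraction of successful replacements. This is a minimal illustration, not the paper's method: a simple epsilon-greedy value estimate stands in for the DDQN, the penalty-based constraint handling and the `sphere`/`violation` problem are invented for the example, and all names (`run_de`, `STRATEGIES`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x ** 2))

def violation(x):
    # toy constraint: sum(x) >= -1; violation is the amount by which it is missed
    return max(0.0, -1.0 - float(np.sum(x)))

def fitness(x):
    # simple penalty-based constraint handling (a stand-in, not the paper's scheme)
    return sphere(x) + 1e3 * violation(x)

STRATEGIES = ["rand/1", "best/1"]  # two candidate mutation components

def mutate(pop, best, i, strategy, F=0.5):
    idx = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    a, b, c = pop[idx]
    if strategy == "rand/1":
        return a + F * (b - c)
    return best + F * (a - b)

def run_de(generations=50, np_size=20, dim=5, eps=0.2):
    pop = rng.uniform(-5, 5, (np_size, dim))
    fit = np.array([fitness(x) for x in pop])
    q = np.zeros(len(STRATEGIES))       # per-strategy value estimates
    counts = np.zeros(len(STRATEGIES))
    for _ in range(generations):
        best = pop[np.argmin(fit)]
        # epsilon-greedy choice of this generation's mutation strategy
        # (a DDQN would instead map population-state features to Q-values)
        a = int(rng.integers(len(STRATEGIES))) if rng.random() < eps else int(np.argmax(q))
        improved = 0
        for i in range(np_size):
            trial = mutate(pop, best, i, STRATEGIES[a])
            mask = rng.random(dim) < 0.9          # binomial crossover, CR = 0.9
            trial = np.where(mask, trial, pop[i])
            tf = fitness(trial)
            if tf <= fit[i]:                       # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
                improved += 1
        # reward: fraction of successful replacements this generation
        counts[a] += 1
        q[a] += (improved / np_size - q[a]) / counts[a]
    return float(fit.min())

result = run_de()
```

In the full framework described by the abstract, the epsilon-greedy table would be replaced by a DDQN whose input encodes the current population state and whose output ranks the available component configurations.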