This paper presents a multi-agent deep reinforcement learning (MARL) framework for distributed energy management in a DC microgrid (DC MG) comprising photovoltaic, wind turbine, and energy storage systems, with the primary objective of maintaining DC-link voltage stability. The decentralized control architecture employs local voltage measurements as agent state inputs and uses Deep Q-Networks to estimate individual action-value functions. Three algorithmic approaches are investigated: Independent DQN (IDQN), Value Decomposition Networks (VDN), and QMIX, each evaluated with Multilayer Perceptron (MLP) and Recurrent Neural Network (RNN) architectures. The custom reward function integrates voltage-deviation penalties, power-balance constraints, and battery cycling costs to achieve high renewable penetration and efficient storage dispatch. Case studies validate framework performance under diverse conditions, including variable generation and demand, network delays, false data injection attacks, ground faults, and plug-and-play topology changes. Results reveal scenario-dependent performance: RNN-based VDN achieves superior voltage regulation under normal operation, IDQN demonstrates robust reward optimization during cyber-attacks, and RNN-based QMIX performs best under false data injection attacks while delivering the fastest transient response during plug-and-play events. Computational analysis identifies architecture-dependent scaling trade-offs: QMIX demands greater computational resources and centralized coordination overhead, whereas IDQN's distributed architecture and lower resource consumption suggest better scalability for multi-agent expansion. The framework demonstrates the practical viability of MARL-based distributed control for resilient energy management in DC MGs, with algorithm selection tailored to the operating scenario.
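The reward structure described above (voltage-deviation penalties, power-balance constraints, and battery cycling costs) can be sketched as a weighted penalty sum. This is an illustrative reconstruction, not the paper's exact formulation: the weights `w_v`, `w_p`, `w_b`, the nominal voltage, and the use of battery throughput as a cycling proxy are all assumptions.

```python
def reward(v_dc, v_nom, p_gen, p_load, p_batt, w_v=1.0, w_p=0.5, w_b=0.1):
    """Hypothetical per-agent reward: negative weighted sum of penalties.

    v_dc, v_nom -- measured and nominal DC-link voltage (V)
    p_gen, p_load, p_batt -- generation, demand, and battery power (kW);
                             positive p_batt means the battery is discharging
    w_v, w_p, w_b -- assumed weights (not taken from the paper)
    """
    voltage_penalty = abs(v_dc - v_nom) / v_nom       # normalized voltage deviation
    balance_penalty = abs(p_gen + p_batt - p_load)    # power imbalance at the bus
    cycling_cost = abs(p_batt)                        # throughput proxy for battery cycling
    return -(w_v * voltage_penalty + w_p * balance_penalty + w_b * cycling_cost)

# Example: slight undervoltage, load partially covered by the battery
r = reward(v_dc=398.0, v_nom=400.0, p_gen=50.0, p_load=52.0, p_batt=2.0)
```

A higher (less negative) reward corresponds to tighter voltage regulation, better power balance, and less battery wear; in the DQN setting each agent would maximize the discounted sum of such rewards.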
