Transfer Learning Enhanced Deep Reinforcement Learning for Volt-Var Control in Smart Grids

Reinforcement Learning
Transfer Learning
Author

Kundan Kumar

Published

January 21, 2025

DOI

10.1109/GridEdge61154.2025.10887439

Poster (PDF)

Citation

K. Kumar and G. Ravikumar, “Transfer Learning Enhanced Deep Reinforcement Learning for Volt-Var Control in Smart Grids,” 2025 IEEE PES Grid Edge Technologies Conference & Exposition (Grid Edge), San Diego, CA, USA, 2025, pp. 1-5, doi: 10.1109/GridEdge61154.2025.10887439.

Abstract

The integration of renewable energy resources has made power system management increasingly complex. Deep reinforcement learning (DRL) is a promising approach to optimizing power system operations, but it requires significant time and resources during training. Moreover, control policies developed with DRL are specific to a single grid and must be retrained from scratch for other grids, which is computationally expensive. This paper proposes a novel transfer learning (TL) with DRL framework to optimize volt-var control (VVC) across different grids. The framework significantly reduces training time and improves VVC performance by fine-tuning pre-trained DRL models for new grids. We developed a policy reuse classifier that transfers knowledge from the IEEE-123 bus system to the IEEE-13 bus system, and we performed an impact analysis to quantify the effectiveness of TL. Our results show that TL improves the VVC policy by 69.51%, achieves faster convergence, and reduces the training time by 98.14%.
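The core transfer step described above — reusing knowledge from a policy trained on the IEEE-123 bus system when initializing a policy for the IEEE-13 bus system — can be sketched as copying the shared hidden layers of a pre-trained network while reinitializing the grid-specific input and output layers, which then get fine-tuned on the target grid. This is a minimal illustrative sketch, not the paper's implementation: the network shapes, observation/action sizes, and the decision to transfer only the hidden layer are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_policy(n_obs, n_act, hidden=64):
    """Tiny two-hidden-layer policy network: obs -> hidden -> hidden -> action logits.
    (Hypothetical architecture for illustration only.)"""
    return {
        "W1": rng.normal(0.0, 0.1, (n_obs, hidden)),   # grid-specific input layer
        "W2": rng.normal(0.0, 0.1, (hidden, hidden)),  # shared hidden layer
        "W3": rng.normal(0.0, 0.1, (hidden, n_act)),   # grid-specific output layer
    }

def transfer(source, n_obs_new, n_act_new):
    """Build a target-grid policy that reuses the source policy's hidden-layer
    weights; input/output layers are reinitialized for the new grid's
    observation and action dimensions, then fine-tuned (not shown)."""
    hidden = source["W2"].shape[0]
    target = init_policy(n_obs_new, n_act_new, hidden)
    target["W2"] = source["W2"].copy()  # transferred knowledge
    return target

# Hypothetical dimensions: large source grid (IEEE-123) -> small target grid (IEEE-13)
src = init_policy(n_obs=246, n_act=12)
tgt = transfer(src, n_obs_new=26, n_act_new=3)

assert np.array_equal(tgt["W2"], src["W2"])      # hidden layer carried over
assert tgt["W1"].shape == (26, 64)               # input layer resized for target grid
assert tgt["W3"].shape == (64, 3)                # output layer resized for target grid
```

In practice the transferred layers would be fine-tuned (rather than frozen) on the target grid, which is what yields the reported reduction in training time relative to training from scratch.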