Enhancing reinforcement learning controllers with GAN-generated data and transfer learning

This study addresses the challenge of data scarcity in training reinforcement learning (RL) controllers for the power system economic dispatch problem (EDP) by integrating Generative Adversarial Network (GAN)-generated synthetic data and transfer learning (TL). Traditional data collection for power systems may face limitations such as privacy concerns that hinder the performance of deep neural network-based controllers. To overcome this, a GAN-based framework is proposed to generate synthetic load demand data that preserves the characteristics of real datasets. A TL technique is then employed to fine-tune a Twin Delayed Deep Deterministic Policy Gradient (TD3) agent, pretrained in a synthetic environment, for a target environment with real-world data. Experiments evaluate three GAN-generated datasets, including scenarios with mode collapse, and compare the results against regression-based data generation methods. The key finding is that even low-quality synthetic data, when combined with TL, significantly enhances RL performance: a mode-collapsed GAN model reduced test operation cost by 54.7% and power unbalance by 89.9% compared with a baseline TD3 agent. This work highlights the potential of synthetic data augmentation and TL in data-scarce power system applications, offering a viable pathway to improving controller performance without additional real-world data collection.

Bibliographic Details
Main Authors: Chang Xu, Naoki Hayashi, Masahiro Inuiguchi
Format: Article
Language:English
Published: Taylor & Francis Group 2025-12-01
Series:SICE Journal of Control, Measurement, and System Integration
Subjects: economic dispatch; GAN; transfer learning; reinforcement learning; twin delayed DDPG
Online Access:http://dx.doi.org/10.1080/18824889.2025.2527471
author Chang Xu
Naoki Hayashi
Masahiro Inuiguchi
collection DOAJ
description This study addresses the challenge of data scarcity in training reinforcement learning (RL) controllers for the power system economic dispatch problem (EDP) by integrating Generative Adversarial Network (GAN)-generated synthetic data and transfer learning (TL). Traditional data collection for power systems may face limitations such as privacy concerns that hinder the performance of deep neural network-based controllers. To overcome this, a GAN-based framework is proposed to generate synthetic load demand data that preserves the characteristics of real datasets. A TL technique is then employed to fine-tune a Twin Delayed Deep Deterministic Policy Gradient (TD3) agent, pretrained in a synthetic environment, for a target environment with real-world data. Experiments evaluate three GAN-generated datasets, including scenarios with mode collapse, and compare the results against regression-based data generation methods. The key finding is that even low-quality synthetic data, when combined with TL, significantly enhances RL performance: a mode-collapsed GAN model reduced test operation cost by 54.7% and power unbalance by 89.9% compared with a baseline TD3 agent. This work highlights the potential of synthetic data augmentation and TL in data-scarce power system applications, offering a viable pathway to improving controller performance without additional real-world data collection.
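The pretrain-then-fine-tune idea summarized in the description can be sketched with a toy example: a one-parameter dispatch policy is pretrained on plentiful synthetic load data (a stand-in for GAN output) and then fine-tuned on a handful of real samples under the same short training budget as a from-scratch baseline. The linear policy, the quadratic balance cost, and all names below are illustrative assumptions, not the paper's actual TD3/GAN implementation.

```python
# Minimal sketch of synthetic-data pretraining + transfer learning (TL).
# A 1-D "controller" sets action = w * load; perfect power balance at w = 1.
# Assumed setup for illustration only -- not the paper's TD3/GAN code.
import random

random.seed(0)

def train(policy_w, loads, lr=0.01, epochs=200):
    """Gradient descent on the squared balance error (action - load)^2."""
    for _ in range(epochs):
        for d in loads:
            action = policy_w * d
            grad = 2 * (action - d) * d  # d/dw of (w*d - d)^2
            policy_w -= lr * grad
    return policy_w

# "Synthetic" loads (stand-in for GAN output): plentiful but imperfect.
synthetic = [random.uniform(0.5, 1.5) for _ in range(100)]
# "Real" loads: scarce, as in the data-scarcity setting of the study.
real = [random.uniform(0.8, 1.2) for _ in range(5)]

# Baseline: train from scratch with only the short real-data budget.
w_scratch = train(0.0, real, epochs=5)
# TL: pretrain on synthetic data, then fine-tune on real data.
w_pre = train(0.0, synthetic, epochs=200)
w_tl = train(w_pre, real, epochs=5)
```

Under the same small real-data budget, the pretrained policy starts near the optimum and fine-tunes to a lower balance error than the from-scratch baseline, mirroring the study's finding that even imperfect synthetic data improves the fine-tuned controller.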
format Article
id doaj-art-0108b0bdebf9473c8e61ce6ac38ae06c
institution Matheson Library
issn 1884-9970
language English
publishDate 2025-12-01
publisher Taylor & Francis Group
record_format Article
series SICE Journal of Control, Measurement, and System Integration
affiliations Chang Xu (Universiti Malaya); Naoki Hayashi (The University of Osaka); Masahiro Inuiguchi (The University of Osaka)
volume/issue 18(1)
title Enhancing reinforcement learning controllers with GAN-generated data and transfer learning
topic economic dispatch
gan
transfer learning
reinforcement learning
twin delayed ddpg
url http://dx.doi.org/10.1080/18824889.2025.2527471