A Deep Reinforcement Learning-based Reserve Optimization in Active Distribution Systems for Tertiary Frequency Regulation
Federal Energy Regulatory Commission (FERC) Orders 841 and 2222 have recommended that distributed energy resources (DERs) participate in energy and reserve markets; therefore, a mechanism needs to be developed to facilitate DERs’ participation at the distribution level. Although the available reserve from a single distribution system may not be sufficient for tertiary frequency regulation, stacked and coordinated contributions from several distribution systems can enable them to participate in tertiary frequency regulation at scale. This paper proposes a deep reinforcement learning (DRL)-based approach for optimizing the aggregated reserves requested by system operators among clusters of DERs. The co-optimization of reserve cost, distribution network loss, and feeder voltage regulation is considered while allocating reserves among participating DERs. The proposed framework adopts the deep deterministic policy gradient (DDPG) algorithm, which is based on an actor-critic method. The effectiveness of the proposed method for allocating reserves among DERs is demonstrated through case studies on a modified IEEE 34-node distribution system.
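To make the actor-critic structure behind DDPG concrete, the following is a minimal NumPy sketch of its building blocks: a deterministic actor that maps a grid state to a reserve-allocation action, a critic that scores state-action pairs, and a one-step TD target as used in the critic update. All dimensions, network sizes, and the toy reward are illustrative assumptions, not the paper's actual formulation; target networks and the replay buffer are omitted for brevity.

```python
import numpy as np

# Illustrative toy dimensions (assumed, not from the paper): the state could
# encode the requested reserve and feeder voltages; the action is a reserve
# share per DER cluster.
rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HIDDEN = 4, 2, 16

def init_net(n_in, n_out):
    """Small two-layer network with tanh hidden activation."""
    return {
        "W1": rng.normal(0, 0.1, (n_in, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0, 0.1, (HIDDEN, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(net, x):
    h = np.tanh(x @ net["W1"] + net["b1"])
    return h @ net["W2"] + net["b2"]

actor = init_net(STATE_DIM, ACTION_DIM)          # mu(s): reserve allocation
critic = init_net(STATE_DIM + ACTION_DIM, 1)     # Q(s, a): action value

def act(state, noise_std=0.0):
    # Deterministic policy plus exploration noise, squashed to (0, 1)
    # so each cluster's reserve share stays in a valid range.
    a = forward(actor, state) + noise_std * rng.normal(size=ACTION_DIM)
    return 1.0 / (1.0 + np.exp(-a))

state = rng.normal(size=STATE_DIM)
action = act(state, noise_std=0.1)
q_value = forward(critic, np.concatenate([state, action]))

# One TD target, as in the DDPG critic update (target networks omitted).
# The reward here is a placeholder for the negative of the combined
# reserve-cost / network-loss / voltage-deviation objective.
reward = -1.0
gamma = 0.99
next_state = rng.normal(size=STATE_DIM)
next_action = act(next_state)
next_q = forward(critic, np.concatenate([next_state, next_action]))
td_target = reward + gamma * next_q
```

In full DDPG, the critic is regressed toward `td_target` and the actor is updated by ascending the critic's gradient with respect to the action; the sigmoid squashing here is one simple way to keep allocations bounded.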
US Department of Energy Grant Number: DE-EE0009022
Email Address of Submitting Author: mukesh.firstname.lastname@example.org
ORCID of Submitting Author: https://orcid.org/0000-0003-0571-5825
Submitting Author's Institution: University of Nevada, Reno
Submitting Author's Country: United States of America