Using Graph Neural Networks in Reinforcement Learning with application to Monte Carlo simulations in Power System Reliability Analysis
  • Øystein Rognes Solheim,
  • Boye Annfelt Høverstad,
  • Magnus Korpås
Corresponding author: Øystein Rognes Solheim ([email protected])
This paper presents a novel method for combining Graph Neural Networks with reinforcement learning in power system reliability studies. Monte Carlo methods are the backbone of such probabilistic reliability analyses. Recent work by the authors has shown that the policies of deep reinforcement learning agents can replace Optimal Power Flow solvers, yielding significant speedups of Monte Carlo simulations while retaining close to optimal accuracy. A limitation of this reinforcement learning approach, however, is that the training of the agent is tightly coupled to the specific case being analyzed, so the agent cannot be used as is on new, unseen cases. In this paper, we seek to overcome this limitation. First, we represent the states and actions of the power reliability environment as features on a graph, whose adjacency matrix can vary from time step to time step. Second, we train the agent by applying a message-passing graph neural network architecture within an integrated variant of an actor-critic algorithm, so that the agent can solve the problem independently of the power system grid structure. Third, we show that the agent can solve small extensions of a test case without having seen the new parts of the power system during training. In all of our reliability Monte Carlo simulations using this graph neural network agent, the simulation time is competitive with that of the optimal-power-flow-based approach, while still retaining close to optimal accuracy.
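To illustrate the structural-independence property mentioned above, the following is a minimal sketch, not the authors' implementation, of one mean-aggregation message-passing step over a toy grid graph. The node names (`bus1`, `bus2`, `bus3`), the single scalar feature per bus, and the aggregation rule are all illustrative assumptions; the point is that the same update function applies unchanged to any adjacency structure, including one that varies between time steps.

```python
def message_passing_step(features, adjacency):
    """One round of mean-aggregation message passing.

    features: dict mapping node -> list of floats (node features)
    adjacency: dict mapping node -> list of neighbor nodes; this can
               differ from time step to time step, e.g. when lines fail.
    """
    updated = {}
    for node, feat in features.items():
        neighbors = adjacency.get(node, [])
        if neighbors:
            # Mean of each feature dimension over the neighbors.
            agg = [
                sum(features[n][i] for n in neighbors) / len(neighbors)
                for i in range(len(feat))
            ]
        else:
            agg = [0.0] * len(feat)
        # Combine own features with the aggregated neighbor messages.
        updated[node] = [x + m for x, m in zip(feat, agg)]
    return updated


# Hypothetical 3-bus system: bus2 is connected to both bus1 and bus3.
features = {"bus1": [1.0], "bus2": [2.0], "bus3": [3.0]}
adjacency = {"bus1": ["bus2"], "bus2": ["bus1", "bus3"], "bus3": ["bus2"]}
out = message_passing_step(features, adjacency)
# out["bus2"] == [4.0]: its own 2.0 plus the neighbor mean (1.0 + 3.0) / 2
```

Because the function iterates over whatever adjacency it is given, removing or adding a bus requires no retraining of the update rule itself, which is the property the paper exploits when extending a test case beyond the grid seen during training.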