# reward-shaping

Here are 26 public repositories matching this topic...

A gymnasium-compatible framework for creating reinforcement learning (RL) environments that solve the optimal power flow (OPF) problem. Contains five OPF benchmark environments to enable comparable research.

  • Updated Dec 5, 2024
  • Python
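To illustrate what the reward-shaping topic covers, here is a minimal sketch of a shaping wrapper built on Gymnasium's `RewardWrapper`. The fixed per-step bonus and the `CartPole-v1` environment are placeholder assumptions for the example, not part of the OPF framework listed above; a potential-based scheme would instead add `gamma * phi(s') - phi(s)`.

```python
import gymnasium as gym


class BonusShapingWrapper(gym.RewardWrapper):
    """Add a small shaping term to the environment's base reward."""

    def __init__(self, env, bonus=0.01):
        super().__init__(env)
        self.bonus = bonus  # hypothetical constant shaping bonus

    def reward(self, reward):
        # Shaped reward = base reward + shaping term
        return reward + self.bonus


# Usage: wrap any Gymnasium environment and step it as usual.
env = BonusShapingWrapper(gym.make("CartPole-v1"))
obs, info = env.reset(seed=0)
obs, r, terminated, truncated, info = env.step(env.action_space.sample())
```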
