Feature Idea: Default Workflows #54

Open
bburns632 opened this issue Nov 13, 2019 · 3 comments

@bburns632
Hi all,

I love the toolkits available within OpenOA. To make the package more accessible to the wind industry and help with adoption, I think a simple, low-touch, basic workflow/report would help. Think of it as a logical collection of some of the tools already available within this package. My thoughts on where to start and where this could go are below, and I very much welcome comments.

Target User:
A wind project manager, engineer, or performance analyst who would like to evaluate a few hundred wind turbines for operational performance issues. The purpose is to identify specific turbines with performance or operational issues for further inspection by their maintenance crew, a specialist, or their OEM (if under warranty).

Assumptions:

  • This user has some access to turbine data via SCADA, OSI PI, Eta Pro, etc., but not unrestricted access.
  • A CSV of 10-minute averages for a subset of signals across all turbines over 2 years is something this user can access.
  • Turbine operating states are not reported in the 10-minute summary table.
  • This user has a basic knowledge of Python (e.g. pip install, start a notebook, enter some data, click a button).

Short Term:
Create a Jupyter notebook that:

  • is designed to analyze a CSV with two years of 10-minute averages from a handful of signals on 100 wind turbines.
  • requires minimal configuration: just the CSV file path and column names (see the sketch after this list).
  • requires minimal coding outside of the notebook (i.e. prepare() method).
  • utilizes existing toolkits.
  • utilizes standard xxx-25 for categorization, but only touches the highest categories.
  • highlights “low hanging fruit”.
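
To make "minimal configuration" concrete, a rough sketch of the single configuration cell might look like the following. The file path, column names, and the flagging check are placeholders for illustration, not a real schema or an existing OpenOA workflow.

```python
# All user configuration in one cell: a CSV path and the column names used
# in that file. Everything else in the notebook would stay untouched.
# (Path, column names, and the 80% check below are illustrative only.)
import pandas as pd

CSV_PATH = "scada_10min.csv"
COLUMNS = {
    "timestamp": "time_stamp",
    "turbine_id": "turbine",
    "power_kw": "active_power",
    "wind_speed_ms": "wind_speed",
}

scada = pd.read_csv(CSV_PATH, parse_dates=[COLUMNS["timestamp"]])
scada = scada.rename(columns={v: k for k, v in COLUMNS.items()})

# Crude example check: flag turbines whose mean power is well below the fleet mean.
fleet_mean = scada["power_kw"].mean()
per_turbine = scada.groupby("turbine_id")["power_kw"].mean()
flagged = per_turbine[per_turbine < 0.8 * fleet_mean]
print(flagged)
```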

Long Term:

  • Multiple report workflows based on the level of detail available.
  • Reports are polished HTML rather than a Jupyter notebook as the result.
  • Full categorization via standard xxx-25 in one of the workflows.
  • Report workflows might each be their own class, each with a default “runReport” method or similar.
@jordanperr
Collaborator

jordanperr commented Nov 18, 2019

Thanks for this issue report and the related pull request. You have noted that the pull request is "in progress" and should not yet be merged, so I won't perform a full review. Here is some initial feedback:

requires minimal coding outside of the notebook (i.e. prepare() method)

The PlantData class is meant to be abstract, with subclasses able to load any type of data by overriding the prepare() method. I'd imagined that every organizational user with a different database / schema / input file convention and user base would implement their own org-wide PlantData class, which can load internal data in a streamlined way. I agree it would be useful to provide some more concrete PlantData classes within OpenOA that make more assumptions about the input data and are easier to use.
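
For illustration, a subclass following that pattern might look roughly like the sketch below. The import path, constructor handling, and attribute names are assumptions made for the example, not the exact OpenOA API.

```python
# Sketch of an org-specific PlantData subclass that loads 10-minute SCADA
# averages from a single CSV. Import path, base-class construction, and the
# attribute the frame is attached to are illustrative assumptions.
import pandas as pd
from operational_analysis.types import PlantData  # assumed import path


class MyOrgPlantData(PlantData):
    """Hypothetical org-wide loader for an internal CSV export format."""

    def __init__(self, csv_path, *args, **kwargs):
        self._csv_path = csv_path
        super().__init__(*args, **kwargs)

    def prepare(self):
        # Map internal column names onto the fields the analysis methods
        # expect, then attach the frame (attribute name is illustrative).
        raw = pd.read_csv(self._csv_path, parse_dates=["time_stamp"])
        raw = raw.rename(columns={"active_power": "power_kw"})
        self.scada_df = raw
```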

Multiple report workflows based on the level of detail available.

Our "report workflows" are called "methods" - overloaded term I know - and located here: https://github.com/NREL/OpenOA/tree/master/operational_analysis/methods
These are concrete classes that each implement some analysis. The abstract API for these classes is:
-- init() with all report-level parameters.
-- run() to execute the analysis (by calling several internal methods in a given order) and set self.result.

I'd suggest implementing these reports as a method, at least for now. It might make more sense to call them "Workflows" or "Reports" down the line, as you suggest.
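
As a rough illustration of that __init__/run convention, a default-workflow report written as a method-style class could look like this. The class name and internals are invented for the example and are not an existing OpenOA method.

```python
# Illustrative only: a "default workflow" report written as a method-style
# class following the __init__/run convention described above.
class TurbineHealthReport:
    def __init__(self, plant, underperformance_threshold=0.8):
        # All report-level parameters are fixed at construction time.
        self._plant = plant
        self._threshold = underperformance_threshold
        self.result = None

    def run(self):
        # Call the internal analysis steps in a fixed order and store the
        # output on self.result.
        per_turbine = self._mean_power_by_turbine()
        fleet_mean = sum(per_turbine.values()) / len(per_turbine)
        self.result = {
            tid: mean
            for tid, mean in per_turbine.items()
            if mean < self._threshold * fleet_mean
        }
        return self.result

    def _mean_power_by_turbine(self):
        # Placeholder for real toolkit calls against self._plant.
        return {"T01": 950.0, "T02": 1010.0, "T03": 610.0}
```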

Long term: Reports are polished HTML, less jupyter notebook as result.

We do not yet have a mechanism to specify an output format for results. You run an analysis method and get some Python object back; it is up to the user to interpret the result object. I could see a new toolkit like "report_templates" or "views" to generate polished HTML or PDF reports from the results. Maybe add a function like "render()" to each method that relies on this toolkit. Just thinking out loud.
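
Continuing that thought, a minimal sketch of such a render() helper might look like this; the function name, template, and the assumed dict-shaped result are all hypothetical, not existing OpenOA API.

```python
# Hypothetical "views"/"report_templates" helper: turn a method's result
# object into a small standalone HTML report.
from string import Template

_PAGE = Template("<html><body><h1>$title</h1><table>$rows</table></body></html>")


def render(result, title="OpenOA report"):
    """Render a dict-like result as a minimal HTML table."""
    rows = "".join(
        f"<tr><td>{key}</td><td>{value}</td></tr>" for key, value in result.items()
    )
    return _PAGE.substitute(title=title, rows=rows)


# Usage sketch: html = render(report.result); open("report.html", "w").write(html)
```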

Thanks for your work so far. I'll keep an eye on this issue.

@jordanperr removed their assignment Jun 11, 2020
@RHammond2
Collaborator

@bburns632 I think our v3 implementation has addressed most of the short-term issues. If this is still relevant to you (recognizing it's been 4.5 years), please feel free to update the request. Otherwise, I'll plan to close this out.

We've also been mulling over the idea of allowing a YAML/JSON analysis setup to power a lower-code way to run OpenOA and get outputs.
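
A very rough sketch of what that lower-code entry point could look like is below. The config keys and the dispatch are purely hypothetical, not an existing OpenOA feature.

```python
# Hypothetical YAML-driven entry point; the keys and the echo-only dispatch
# are invented to illustrate the idea above.
import yaml

CONFIG = """
analysis: turbine_health
data:
  scada_csv: scada_10min.csv
columns:
  power: active_power
  wind_speed: wind_speed
"""


def run_from_config(config_text):
    cfg = yaml.safe_load(config_text)
    # A real implementation would look up and run the requested analysis;
    # here we just echo the parsed configuration.
    print(f"analysis={cfg['analysis']} data={cfg['data']['scada_csv']}")


run_from_config(CONFIG)
```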

@bburns632
Author

@RHammond2 Thanks for the update. You can close this out at your leisure.
