-
FWIW I have a script that I was using to get an idea of the performance changes over time. I haven't updated it in a while, but the plots throughout the development of v3 are in the docs: https://nrel.github.io/floris/code_quality.html. I'll work on adding at least all of the releases to that chart. Long term, it would be nice to have some automation to display the performance in a pull request on command. For example, I'd like to be able to add a comment to a pull request that runs these scripts and posts the results in a follow-up comment.
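As a minimal sketch of what such a script could record per commit or release (this is not the actual script linked above; `run_case` is a hypothetical stand-in for a fixed FLORIS workload, e.g. one wake calculation on a reference layout):

```python
import json
import statistics
import time


def run_case():
    # Hypothetical stand-in for the benchmarked workload; a real
    # version would run a fixed FLORIS wake calculation here.
    total = 0.0
    for i in range(1, 50_000):
        total += 1.0 / (i * i)
    return total


def benchmark(func, repeats=5):
    """Time func several times and report wall-clock statistics."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(samples),
        "min_s": min(samples),
        "repeats": repeats,
    }


if __name__ == "__main__":
    # One JSON record per run; appending these per commit/release
    # produces the time series behind a performance-over-time chart.
    print(json.dumps(benchmark(run_case)))
```

Emitting one machine-readable record per run keeps the history easy to append to from CI, whether triggered on release or by a PR comment command.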
-
As another point of consideration, I think we talk about performance at two scales: high-level parallelization over many data points (i.e. atmospheric conditions) and low-level vectorization or parallelization of a single data point. If we develop performance tests, it would be good to test at both resolutions.
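One way to make the two resolutions concrete in a test (a sketch only; `model` is a hypothetical stand-in for evaluating atmospheric conditions, not a FLORIS API):

```python
import time


def model(conditions):
    # Hypothetical stand-in: evaluate a list of atmospheric
    # conditions; a real test would call the FLORIS solver here.
    return [sum(j * 0.001 for j in range(2_000)) * c for c in conditions]


def time_once(func, *args):
    """Return the wall-clock seconds of a single call."""
    start = time.perf_counter()
    func(*args)
    return time.perf_counter() - start


if __name__ == "__main__":
    # Low-level resolution: cost of a single data point, where
    # per-point vectorization improvements show up.
    single = time_once(model, [1.0])

    # High-level resolution: cost of a large batch of conditions,
    # where parallelization across points should show up.
    batch = time_once(model, [1.0] * 500)

    print(f"single point: {single:.6f} s")
    print(f"500 points:   {batch:.6f} s")
```

Comparing the batch time against 500x the single-point time would give a rough measure of how much the high-level scaling actually helps.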
-
Hi @rafmudaf, just following up on our discussion this morning. I think we should test a few things; here is my idea:
-
I was hoping to start a quick discussion on performance testing; this applies to both FLORIS and FLASC. Some of the improvements we consider are aimed at performance, but I'm not always sure of the best way to assess that performance. I know some NREL codes have automated performance testing, but I wasn't sure how much lift it would take to bring that to FLORIS/FLASC, or whether we even want it. So I wanted to take the pulse here: what do people think of having some standard performance tests in these codes?
@rafmudaf @bayc @ejsimley @Bartdoekemeijer @misi9170 @RHammond2