Under #4/#13 I did an initial setup for running benchmark scenarios, based on an initial use case definition.
Some things to discuss/(re)consider/finetune:
current structure of the JSON files is a top-level array of scenario objects ([ {"id": "...", ...}, {"id": ...} ]). This makes it impossible to add top-level properties in the future. Things we might want to add at the top level: references to owner/author, default values for parameters, run schedule constraints, credit constraints, ....
Instead, change the top level to an object, e.g.
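As a minimal sketch of what that object-based top level could look like (all property names except "scenarios" are hypothetical examples of the kind of metadata mentioned above):

```json
{
  "owner": "example-org",
  "default_parameters": {"output_format": "GTiff"},
  "scenarios": [
    {"id": "scenario-1", "type": "openeo", "description": "..."}
  ]
}
```

This keeps the existing scenario objects unchanged, just nested one level deeper, so future top-level properties can be added without breaking consumers.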
scenarios currently have "type": "openeo", but that is quite generic if you compare it with how the "type" field is used in GeoJSON, STAC, OGC, ... I guess "type": "openeo" is intended to say that this is an openeo-based benchmark. I would make the type a bit more verbose, e.g. "openeo-benchmark" or "openeo-process". Or use a different field name, e.g. "processing_mode": "openeo"
related to previous: is it intended to also have non-openeo benchmarks here?
Should benchmarks always run batch jobs, or should there be an option for sync processing too?
current examples don't have a save_result node, e.g. to set the output format. We should probably set this explicitly to avoid depending on implicit behavior
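For illustration, an explicit save_result node in an openEO process graph could look like the sketch below (the "loadcollection1" node id and the GTiff format are hypothetical; the actual node names and supported formats depend on the scenario and backend):

```json
{
  "saveresult1": {
    "process_id": "save_result",
    "arguments": {
      "data": {"from_node": "loadcollection1"},
      "format": "GTiff"
    },
    "result": true
  }
}
```

Making the format explicit this way avoids relying on whatever default output format a given backend happens to use.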