Is your feature request related to a problem?
This is a more specific plan to implement #795.
Flow Framework templates are essentially a configuration-as-code implementation, conceptually not much different than sequencing method calls with the appropriate method/function names (workflow steps) and arguments (required and optional parameters).
Unit and integration tests cover the foundational elements of Flow Framework, but do not test the templates themselves at all. Other than "sanity testing" templates when they are first written and spot-checking each release, there is no automation to ensure that future changes to the APIs they implement don't break them (such as the addition of a new required parameter, which happened just before the 2.14.0 release).
While #651 can help mitigate some issues with specific APIs, it's the equivalent of "unit testing". In addition to that, we need the equivalent of end-to-end "integration testing" of templates: validating that they execute all their workflow steps and pass the appropriate data between them.
Further, we need to validate that the actual return values (including error status codes) match what users expect, so that front-end developers can rely on a documented API when building applications.
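To make the #651-style check concrete, here is a minimal sketch of comparing a workflow step's declared inputs against the parameters an OpenAPI spec marks as required. The spec fragment, step inputs, and function names below are all hypothetical illustrations, not the actual Flow Framework or opensearch-api-specification structures:

```python
# Hypothetical sketch: flag workflow steps whose declared user inputs are
# missing parameters the OpenAPI spec marks as required. The spec fragment
# and step definition below are illustrative, not real project structures.

def required_body_params(spec: dict, path: str, method: str) -> set[str]:
    """Required JSON body parameter names for one operation in an OpenAPI spec."""
    op = spec["paths"][path][method]
    schema = (op.get("requestBody", {})
                .get("content", {})
                .get("application/json", {})
                .get("schema", {}))
    return set(schema.get("required", []))

def missing_step_params(step_inputs: set[str], spec: dict,
                        path: str, method: str) -> set[str]:
    """Required API parameters that the workflow step does not declare."""
    return required_body_params(spec, path, method) - step_inputs

# Minimal inline spec fragment for a hypothetical register-model operation.
spec = {
    "paths": {
        "/_plugins/_ml/models/_register": {
            "post": {
                "requestBody": {
                    "content": {
                        "application/json": {
                            "schema": {"required": ["name", "function_name"]}
                        }
                    }
                }
            }
        }
    }
}

# A step declaring only "name" is missing the newly required "function_name".
print(missing_step_params({"name"}, spec,
                          "/_plugins/_ml/models/_register", "post"))
# → {'function_name'}
```

A check like this, run in CI against the published spec, would have caught the new-required-parameter breakage described above before release.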
What solution would you like?
OpenSearch has recently migrated its API specification to the OpenAPI format, published at https://github.com/opensearch-project/opensearch-api-specification. As part of this specification, there is a robust testing framework and an open issue to document and test plugin specifications.
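As an illustration only, a template test story in that framework might look roughly like the sketch below. The story fields, endpoint, use case name, payload keys, and status codes here are assumptions; the real schema lives in the opensearch-api-specification repository:

```yaml
# Hypothetical sketch of a test story for a Flow Framework use case template.
description: Provision a sample use case template with only required params.
chapters:
  - synopsis: Create a workflow from a substitution template.
    path: /_plugins/_flow_framework/workflow
    method: POST
    parameters:
      use_case: semantic_search      # hypothetical use case name
    request:
      payload:
        create_connector.credential.key: dummy-key
    response:
      status: 201
  - synopsis: Omitting a required substitution parameter should fail.
    path: /_plugins/_flow_framework/workflow
    method: POST
    parameters:
      use_case: semantic_search
    response:
      status: 400
```

A "minimal" and "maximal" story pair per template, as proposed below, would then exercise both the required-only and all-parameters paths.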
This framework is an ideal way to test our use case templates.
It's also a significant effort that needs to start small and build up to eventually reach the goal of fully testing all use case templates. Here are the steps that are needed to reach this goal:
1. Write basic API tests for the sample templates based on the documentation, covering all expected (4xx) status values.
2. Document a subset of the ML Commons APIs which Flow Framework consumes via the client (subset of [FEATURE] Add specs for ml namespace opensearch-api-specification#228; note: the client bypasses the REST path, but the required/optional parameters should be the same) and write basic API tests for these APIs.
3. Write code to consume the ML Commons OpenAPI specification and validate that the corresponding workflow steps match the required and optional parameters ([FEATURE] Add test(s) to compare workflow steps configuration to API specification #651). This code does not exist yet and should include a proposed design for review on this repo before implementation.
4. For each of the existing use case substitution templates, write a "minimal" (only required params) and "maximal" (all params) API test story.
5. Expand the functionality of the above framework to do fine-grained "feature capability" checks, since certain features require minimum versions and potentially settings changes. This may include a new API.
What alternatives have you considered?
Continuing to manually sanity-test templates when they are first written, and laboriously fixing bugs when we inevitably miss changes because we're human.
Do you have any additional context?
While this issue is focused on testing existing workflow steps and templates, the components built to enable this will also be extremely useful in expanding Flow Framework functionality in the future by allowing more generic execution of any documented OpenSearch API.