This repository has been archived by the owner on Sep 2, 2024. It is now read-only.

Concept on how to deliver test cases for contributed Camara APIs #61

Closed
shilpa-padgaonkar opened this issue Jul 29, 2022 · 34 comments

@shilpa-padgaonkar
Contributor

In what format are the test cases expected?

@Kevsy
Contributor

Kevsy commented Sep 21, 2022

I can think of:

  • conformance tests: these validate the request and response syntax of an API, including response codes and exceptions.

  • documentation conformance tests: could be done by parsing the .md to ensure the markdown template is followed.

  • penetration tests: probably out of scope for CAMARA, but essential for implementers.

  • performance tests: out of scope.

  • privacy by design and security by design: to be discussed; could be a manual review?

  • user acceptance testing: in this case the user is the developer. Ideally API developers (and not the API authors) will review.

Others?

@shilpa-padgaonkar
Contributor Author

Use the suggestions and feedback provided here. Contribute the first set of test cases for one of the subprojects, covering one or more of the categories listed above, and then derive a (guidelines) doc from this contribution.

@jordonezlucena
Contributor

jordonezlucena commented Oct 31, 2022

We propose to use Gherkin as the DSL for the ATP. This DSL allows (via other tools, up to each operator's criteria) automation of test campaign execution. An example of a baseline ATP using Gherkin is provided in PR #102.
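For readers unfamiliar with the DSL, a minimal sketch of what such a Gherkin scenario could look like (the feature name, endpoint and values are purely illustrative, not taken from any CAMARA spec):

```gherkin
Feature: QoD session management

  Scenario: Create a QoS session with valid parameters
    Given the API consumer is authenticated
    When a POST request is sent to "/sessions" with a valid device and QoS profile
    Then the response status code is 201
    And the response body contains a session identifier
```

Because the steps are plain text, each operator can bind them to its own automation tooling while the scenarios themselves stay tool-neutral.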

@Kevsy: in relation to the tests you have listed above, here's our feedback.

  • conformance tests: IN SCOPE. We propose using Gherkin.
  • documentation conformance tests: IN SCOPE. We should agree on guidelines for Gherkin file format.
  • penetration tests: OUT OF SCOPE.
  • performance tests: OUT OF SCOPE.
  • privacy by design and security by design: N/A (still under internal discussion how to do it).
  • user acceptance testing: not clear what it means. OUT OF SCOPE. For us, acceptance testing == conformance testing, and shall be performed on the producer/implementer side. We do not see the need for individual API developers to define their own tests; otherwise, the testing campaign might become endless.

@Kevsy
Contributor

Kevsy commented Nov 2, 2022

Thanks @jordonezlucena

For 'user acceptance testing': the 'user' here is the API consumer. So UAT is to make sure the API is useful and usable - otherwise we have no feedback and end up publishing the API and hoping it will be used.

An example could be that the network operators define an API using certain operations, datatypes, error messages etc. that cause problems for the API consumer. In which case the UAT feedback would be used to reconsider the API definition.

I see UAT as an iterative process (before and after the conformance test) with the result that we have a useful/conformant spec.

@shilpa-padgaonkar
Contributor Author

@jordonezlucena : Proposal to use Gherkin as DSL is fine from DT side.

@jordonezlucena
Contributor

Thanks both for the feedback.
@shilpa-padgaonkar: since QoD is the most advanced API as of today, do you think a first example on ATP can be provided by DT in the near future? Would it be using Gherkin?

In the meantime, my take is to draft a (live) table where we specify: 1) DSL for testing, and 2) tests in scope and out of scope in CAMARA. Does it make sense?

@patrice-conil
Contributor

Hi @ALL,
As the Karate framework is based on Gherkin, and is independent of the programming language, I can share a first sample of BDD tests if you think it can help progress on this subject.
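To illustrate the point about language independence, here is a hedged sketch of what a Karate-flavoured Gherkin test might look like (the URL, payload fields and values are hypothetical, not taken from the QoD spec). Karate keywords such as `url`, `path`, `method`, `status` and `match` are interpreted directly by the framework, so no step-definition code is needed:

```gherkin
Feature: QoD session conformance (Karate)

  Background:
    # baseUrl would normally come from karate-config.js; hypothetical here
    * url baseUrl

  Scenario: Creating a session returns 201 and a session id
    Given path 'sessions'
    # Request body fields are illustrative only
    And request { duration: 600, qosProfile: 'QOS_E' }
    When method post
    Then status 201
    And match response.id == '#notnull'
```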

@jordonezlucena
Contributor

Hi @ALL,
As the Karate framework is based on Gherkin, and is independent of the programming language, I can share a first sample of BDD tests if you think it can help progress on this subject.

Looking forward to it.

@patrice-conil
Contributor

The pull-request is here: #148

You can look at it.

@mdomale

mdomale commented Feb 23, 2023

The Cucumber framework is based on Gherkin; we are sharing samples of QoD Cucumber tests.
We are also sharing a comparison between Cucumber and Karate.
cucumber.zip
Comparing+Cucumber+and+Karate.docx

@patrice-conil
Contributor

@mdomale,
I agree with you that Cucumber is more popular than Karate, it can be used to define and test just about anything and
we use it in many projects.

For me the question is not: "what is the best framework to test my implementation by myself?" but "which framework makes it easy for someone else to test their implementation for conformance using the suite I provide?"

I think that Cucumber has a few downsides that you didn't mention in your comparison:

  • Since Layer 2 is programming-language dependent, you need skills in that language to run the test suite.
  • If you want to mock the NEF, you need to add another tool like WireMock.

A Karate collection can be run in VS Code without any knowledge of the programming language, and it can also mock the NEF.
As a developer of a new API implementation, if I want to test its conformance using an external test suite, I expect it to be as easy as possible.

Maybe I'm trying to answer the wrong question, let me know if that's the case.

@sfnuser
Contributor

sfnuser commented Feb 27, 2023

@patrice-conil - Thanks for your PR and @mdomale - Thanks for your references. Both are a useful read.

There are a few comments from my side.

  1. Is the framework we choose binding on all CAMARA sub-projects?
  2. Karate - it seems language agnostic, but it still depends on Java; the standalone tool is still a jar file. Which brings me to the question below - "which framework makes it easy for someone else to test their implementation for conformance using the suite I provide?" - Is this the question this GitHub issue is addressing?

@shilpa-padgaonkar
Contributor Author

@sfnuser: As test artifacts are expected to be in the main subproject repos, there is at least a desire to align on the framework/tools to be used for the same (and yes, across subprojects).
We can discuss whether it makes sense to allow provider implementation repos to supplement this with their individual test artifacts, in which case they may also have the choice to use the tools of their choice.

But for the main subproject repos, it would be useful to have a common alignment. Let us know your view.

@patrice-conil
Contributor

@sfnuser,
Karate needs a Java runtime to run the tests, but you don't need any Java coding skills to use it. That's why I called it "language agnostic"... despite the fact that the Karate config uses JavaScript.

Are we targeting validation for API consumers or API providers or both?

@sfnuser
Contributor

sfnuser commented Mar 6, 2023

@shilpa-padgaonkar
Thanks. If there is a common framework/toolset, part of the main repo, that targets API consumers or API providers to validate their implementation, I am OK with any of the above-mentioned tools.

I believe Provider Implementations can add their own set of tests that suit their need, within their repo. I don't think we need commonalities across PIs on this aspect.

@patrice-conil
OK. The Karate framework does seem to recommend Java or JS when the logic gets complex. I suppose it does not matter if it is part of a top-level suite as mentioned above. If I am an API provider or consumer, I can always spin up a Docker instance running the suite and test against it.

Are we targeting validation for API consumers or API providers or both?

Good question. I think it is nice to have both. However, if we have to choose one at this point, API providers would be my pick. It would put the onus on all of us to consistently upgrade and maintain :-) and API consumer devs are free to choose PIs.

@mdomale

mdomale commented Mar 8, 2023

@patrice-conil
The Cucumber framework makes it easy for everyone to test their implementation for conformance, because the feature file is more readable and easily understandable, and step definitions can be re-used when the need arises to extend scenarios. As Cucumber is a well-known and widely used framework, it becomes more effective for everyone to contribute to and utilize.

  • Although layer 2 is language dependent (Java & other languages) for Cucumber, for running the test scenarios we only need awareness of the applicable tags to be used in the runner file, which is configuration
    (e.g. @ConfigurationParameter(key = FILTER_TAGS_PROPERTY_NAME, value = "@AllocateDevice")). Moreover, there is no complex logic to implement, and with basic knowledge applicable scenarios can be written with their definitions.
    Also, I cannot completely agree that Karate is language independent, as knowledge of JavaScript is required even for a basic implementation of a feature file.

  • We do not see the usage of external services like WireMock, Mockito or MockServer as a disadvantage; they can be beneficial for the complex simulations required. @akoshunyadi @shilpa-padgaonkar
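As a concrete illustration of the tag-based selection mentioned above, here is a feature-file sketch (the tag, steps and device id are hypothetical) that a runner configured with FILTER_TAGS_PROPERTY_NAME would pick up:

```gherkin
@AllocateDevice
Feature: QoD device allocation

  Scenario: Allocate a QoD session for a registered device
    Given a registered device with id "hypothetical-device-id"
    When a QoD session is requested for that device
    Then the session is created successfully
```

With value = "@AllocateDevice" in the runner configuration, only scenarios under this tag are executed; each Given/When/Then line is matched to a Java step definition by the Cucumber runtime.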

@patrice-conil
Contributor

@mdomale,
If the idea is to deliver a full project structure with features and steps, and probably a WireMock to simulate the NEF, I think we need to agree on which technical stack to work with:

  • gradle, maven, ...
  • junit4, junit5, ...
  • which language for step definitions Java / javaScript/ Kotlin / Scala / Ruby /Golang/ Erlang ... ?
  • WireMock or another NEF simulator?
    PS: On the Orange side we can work with Cucumber... as I said before, we have used it on many projects for many years. We just found that Karate is more API friendly than Cucumber, so we use it on many API projects.

@mdomale

mdomale commented Mar 9, 2023

Although we don't want to restrict the usage of different tools, in order to ensure maximum flexibility, we would prefer the stack below

@mdomale

mdomale commented Mar 14, 2023

@patrice-conil @sfnuser Can you please confirm that you are fine with the above stack, or share any concerns?

@patrice-conil
Contributor

@mdomale,
I prefer Gradle and Kotlin, but I can live with Maven and Java. :)
Since this is a full Java stack project... I think we will need to provide a Docker/k8s image for non-Java users to run the test suite.

@sfnuser
Contributor

sfnuser commented Mar 14, 2023

@mdomale

As @patrice-conil mentioned, if the instructions to dockerize the test suite are also provided, I have no further concerns. Cheers.

@mdomale

mdomale commented Mar 21, 2023

@sfnuser @patrice-conil Thanks a lot for your responses. Yes, we can provide a Docker/k8s image. @shilpa-padgaonkar @akoshunyadi

@mdomale

mdomale commented Mar 24, 2023

@patrice-conil @sfnuser We have created a draft pull request with our initial contribution for the QoD APIs; integration of WireMock will be done shortly.
camaraproject/QualityOnDemand#134 @akoshunyadi @shilpa-padgaonkar

@mdomale

mdomale commented May 5, 2023

@jpengar
Contributor

jpengar commented May 29, 2023

Providing API test cases is included as one of the mandatory items in the API Readiness minimum criteria checklist defined in Commonalities.

These checklist steps are part of the requirements to generate a stable release v1.0.0 in CAMARA, as is being discussed in Commonalities #139.

As @jordonezlucena mentioned above, we propose to use Gherkin as the DSL for the ATP. This DSL allows (via other tools, up to each operator's criteria) automating the execution of test campaigns. And perhaps specific test implementations using Cucumber (or other frameworks) could be part of the provider implementation repos, if applicable.

@rartych @shilpa-padgaonkar @jordonezlucena @jlurien @patrice-conil @sfnuser @mdomale @hdamker To satisfy step 5 ("API test cases and documentation") of the API Readiness checklist, could we conclude that the mandatory requirement is to provide the `.feature` Gherkin file describing the test scenarios, and leave implementation specifics to the implementation repositories?
Is this correct? Or does this need further discussion/clarification? Do you expect the implementation of the test plan to also be a mandatory deliverable in the main API subproject?

@jordonezlucena
Contributor

@jpengar : thanks for picking the issue back up.
Your proposal LGTM.

@shilpa-padgaonkar
Contributor Author

shilpa-padgaonkar commented May 31, 2023

@jpengar and @jordonezlucena : This issue was already resolved with agreement from multiple participants (see comments #61 (comment) and #61 (comment)).

Based on this agreement, a PR was created in QoD subproject which was reviewed, approved and merged.

I see now that @jpengar has requested 2 new things:

  • to add the Gherkin feature file describing the test scenarios.

I would rather open a new issue where we extend the already made contribution with the Gherkin feature file. DT will then create a PR in the QoD subproject to add the feature file.

  • to move the test case implementation into provider implementation repos.

At least for QoD, all 3 current providers DT, Orange and SpryFoxNetworks are fine with having a common implementation as currently provided. If you are still keen to move this out, I would recommend you start this as a separate discussion and try to get a new consensus there.

@jpengar @jordonezlucena : Please provide feedback if you are ok with the recommendations above.

@jpengar
Contributor

jpengar commented Jun 1, 2023

@shilpa-padgaonkar In my personal defense (kidding), I will say that the issue was still open when I got to it, and it was referenced only a few days ago... :)

Anyway, what I wanted to know is what exactly the requirement is to fulfill step 5 ("API test cases and documentation") of the API Readiness checklist, and whether it is good enough to provide the .feature file or not. As mentioned above, the .feature file is actually the Gherkin file that describes the test scenarios. And this is not a new request: if you check the PR camaraproject/QualityOnDemand#134, the .feature file is provided as part of the pull request as the test case definition. But apart from that, the test plan implementation is also provided using the Cucumber framework (test runner, test step implementation in Java, etc.).

As for moving the test case implementation into provider implementation repos: for me it makes sense, but if it was agreed to include it in the main project, I would like to know whether it is a mandatory requirement to satisfy the corresponding step in the API Readiness checklist or not. Or whether a .feature file describing the test cases is good enough as a prerequisite to generate a first stable API version v1.0.0.

@shilpa-padgaonkar
Contributor Author

@jpengar : :) no worries.
You are right, the Gherkin feature file is already in, my bad.

I would say that the feature file would fulfill the requirements from the minimum readiness checklist. But this is just my personal opinion, and we could check in the group for feedback from others.

@rafpas-tim

Since a Gherkin .feature file is a high-level description of a specific set of features/behaviours that the API implementation provides as user experience, @FabrizioMoggio and I are fine with adding it into the related subproject (e.g. TrafficInfluence).

In this way it will represent a useful document of the API's "internal" behaviour, which is hidden if you rely only on OpenAPI specs, and it will be easier and faster to assess the minimum criteria without being an expert in BDD or Gherkin.

Consequently, the test case implementation (Cucumber), which is tightly coupled with the reference implementation (preconditions, states, mocks, etc.), will be hosted in the implementation repo.

If we are understanding BDD correctly: first write the .feature file, then approve it, then write the implementation code.

@shilpa-padgaonkar
Contributor Author

shilpa-padgaonkar commented Jun 9, 2023

Thank you @jpengar @rafpas-tim @FabrizioMoggio for your feedback.

Would like to propose the following:

  1. Everyone seems to agree that the .feature file in the main subproject will help with compliance with the minimum checklist criteria document.
  2. If there is a test case implementation that is aligned and accepted in the subproject, we could host it in the main subproject repo, as we do in QoD.
  3. Provider implementors are still free to provide additional test implementations within their individual repos.

Would this work? If it is still important to you that we move the QoD test implementation to the provider implementation repo, we can also do that; both ways would be fine for me.
But I see more chances of people wanting to contribute just the test implementation and not a full reference implementation (currently we have that only for QoD), and in this case it would mean that we need to add a new provider implementation repo in CAMARA just to host the test implementation.

@jpengar
Contributor

jpengar commented Jun 9, 2023

Thank you @jpengar @rafpas-tim @FabrizioMoggio for your feedback.

Would like to propose the following:

  1. Everyone seems to agree that the .feature file in the main subproject will help with compliance with the minimum checklist criteria document.
  2. If there is a test case implementation that is aligned and accepted in the subproject, we could host it in the main subproject repo, as we do in QoD.
  3. Provider implementors are still free to provide additional test implementations within their individual repos.

Would this work? If it is still important to you that we move the QoD test implementation to the provider implementation repo, we can also do that; both ways would be fine for me. But I see more chances of people wanting to contribute just the test implementation and not a full reference implementation (currently we have that only for QoD), and in this case it would mean that we need to add a new provider implementation repo in CAMARA just to host the test implementation.

It makes sense to me. Thank you @shilpa-padgaonkar

@shilpa-padgaonkar
Contributor Author

@jpengar : Thanks for your feedback.

@rafpas-tim @FabrizioMoggio : Could you kindly provide your feedback? If you are ok with the proposal, we can go ahead and close this issue.

@FabrizioMoggio
Contributor

This is fine with me.

9 participants