
[Key Vault] Test keys library against a shared vault #16474

Merged
merged 24 commits into from
Feb 5, 2021

Conversation

mccoyp
Member

@mccoyp mccoyp commented Feb 2, 2021

Resolves #15435.

This drops KV's existing preparers and instead uses a PowerShellPreparer to fetch an existing vault URL from an environment variable.
test-resources.json and test-resources-post.ps1 are added to make resource group, vault, and managed HSM creation as easy as running a script.
test-resources-cleanup removes test resources from a vault; these are identified by a "livekvtest" prefix, which get_resource_name() adds automatically.
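As a rough sketch of the naming scheme described above (the helper bodies here are illustrative, not the actual implementations in the shared test utilities), the "livekvtest" prefix is what lets the cleanup script distinguish test resources from anything else in the vault:

```python
import uuid

RESOURCE_PREFIX = "livekvtest"  # the prefix the cleanup script looks for

def get_resource_name(base):
    """Return a unique, identifiable name for a live-test resource."""
    return f"{RESOURCE_PREFIX}{base}{uuid.uuid4().hex[:8]}"

def is_test_resource(name):
    """Cleanup deletes only resources carrying the test prefix."""
    return name.startswith(RESOURCE_PREFIX)
```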

@mccoyp mccoyp added KeyVault Client This issue points to a problem in the data-plane of the library. labels Feb 2, 2021
@mccoyp mccoyp added this to the [2021] February milestone Feb 2, 2021
@check-enforcer

check-enforcer bot commented Feb 2, 2021

This pull request is protected by Check Enforcer.

What is Check Enforcer?

Check Enforcer helps ensure all pull requests are covered by at least one check-run (typically an Azure Pipeline). When all check-runs associated with this pull request pass, Check Enforcer itself will pass.

Why am I getting this message?

You are getting this message because Check Enforcer did not detect any check-runs associated with this pull request within five minutes. This may indicate that your pull request is not covered by any pipelines, so Check Enforcer is correctly blocking it from being merged.

What should I do now?

If the check-enforcer check-run is not passing and all other check-runs associated with this PR are passing (excluding license-cla), you can tell Check Enforcer to evaluate your pull request again by adding a comment to this pull request as follows:
/check-enforcer evaluate
Evaluation typically takes only a few seconds. If you know your pull request is not covered by a pipeline and this is expected, you can override Check Enforcer with the following command:
/check-enforcer override
Note that using the override command triggers alerts so that follow-up investigations can occur (PRs still need to be approved as normal).

What if I am onboarding a new service?

Often, new services do not have validation pipelines associated with them. To bootstrap pipelines for a new service, you can issue the following command as a pull request comment:
/azp run prepare-pipelines
This runs a pipeline that analyzes the source tree and creates the pipelines necessary to build and validate your pull request. Once the pipeline has been created, you can trigger it with the following comment:
/azp run python - [service] - ci

@mccoyp
Member Author

mccoyp commented Feb 2, 2021

/azp run python - keyvault - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@mccoyp
Member Author

mccoyp commented Feb 2, 2021

/check-enforcer evaluate

@mccoyp
Member Author

mccoyp commented Feb 2, 2021

/check-enforcer override

@mccoyp
Member Author

mccoyp commented Feb 2, 2021

/azp run python - keyvault - tests

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@mccoyp mccoyp marked this pull request as ready for review February 3, 2021 03:02
@mccoyp mccoyp requested a review from schaabs as a code owner February 3, 2021 03:02
@mccoyp mccoyp requested a review from chlowell February 3, 2021 03:03
Member

@chlowell chlowell left a comment

Were the recordings made with the code as it is in the latest revision of this PR (it looks like they were recorded by test cases sharing a single client instance)?

@mccoyp
Member Author

mccoyp commented Feb 4, 2021

These recordings were all made after the other changes, so I'm not sure why that would be the case.

@mccoyp mccoyp requested a review from chlowell February 4, 2021 22:05
@chlowell
Member

chlowell commented Feb 4, 2021

It's because the ChallengeAuthenticationPolicy uses a process-wide cache of challenge information. In a live run, the first test case caches challenge information, and subsequent test cases--all using the same vault, whose URL is the cache key--use that cached information rather than sending a request to get it from Key Vault. Therefore only one recording contains the initial empty request, and which recording that is depends on the order of execution. This implies that when you play the tests back in a different order than they were recorded, you can expect failures, because the challenge cache will be empty until the test case with the recorded challenge runs. Each test that runs before that challenge is played back will send a request with no recorded response: vcr should raise.

But in fact this doesn't happen; the tests pass because:

  1. they don't configure vcr to match request bodies (because the bodies contain random data)
  2. the initial, empty request the client sends to elicit a challenge from Key Vault therefore matches the subsequent, authorized request so far as vcr is concerned
  3. when it gets the empty request, vcr replays the response to the authorized request
  4. that response doesn't have a WWW-Authenticate header, so the auth policy fails to parse a challenge from it
  5. the policy's failure mode in this case is to return the response (because it doesn't know what else to do with it)
  6. the returned response isn't an error, so the pipeline doesn't complain
  7. actually the response is exactly what the test asked for when it called client.do_whatever()
  8. success!

So these recordings create an error condition that probably should, but coincidentally does not, cause test failures 😆

The easiest solution is to clear the cache between tests:
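A minimal sketch of the idea, assuming a hypothetical `ChallengeCache` class standing in for the policy's process-wide cache and a class-level `clear()`:

```python
import unittest

class ChallengeCache:
    """Hypothetical stand-in for the auth policy's process-wide challenge cache."""
    _cache = {}

    @classmethod
    def clear(cls):
        cls._cache.clear()

class KeyVaultTestCase(unittest.TestCase):
    def tearDown(self):
        # Clear cached challenges so each played-back test starts empty,
        # matching the state in which its recording was made.
        ChallengeCache.clear()
        super().tearDown()
```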


...something the preparer could handle?

@mccoyp
Member Author

mccoyp commented Feb 4, 2021

I can ask Sean about adding it to the PowerShellPreparer if this is something that can be generalized to other libraries (if it's not, we could subclass the preparer and do this from there), but I've verified locally that this could also be done in a tearDown method of each test class. The former would be cleaner though 🤔

@chlowell
Member

chlowell commented Feb 4, 2021

It's a Key Vault-only thing. I thought tearDown was called after all the test cases have run?

@mccoyp
Member Author

mccoyp commented Feb 4, 2021

One would think that would be the case, but it's called after each test method finishes up -- kind of convenient, kind of confusing

Successfully merging this pull request may close these issues.

[Key Vault] Update tests to work on a shared vault (keys)