
Rep001 1 (rez-test improvements) #807

Merged
merged 7 commits into from
Jan 18, 2020

Conversation

nerdvegas
Contributor

@nerdvegas nerdvegas commented Nov 21, 2019

This updates rez-test behaviour wrt #665.

Not all of the REP is addressed here. This PR includes:

  • Addition of the run_on tests field (not fully acted on yet; pre_release does nothing at the moment)
  • Addition of the --inplace option, to run a test in the current env (if the necessary requirements are present)
  • Better handling of variant iteration (there was none previously), and a new on_variants tests field to give granular control over this
  • Updated wiki docs for the tests package attribute

Note: It would also make sense to have tests for the PackageTestRunner class. However, I'd like to hold off until we port to pytest; I think it will be significantly simpler to do something like a fixture that installs a set of packages into a fake filesystem. The way packages are currently built and tested against is haphazard, and I'd rather not increase our technical debt for the sake of this fairly non-critical part of the code. #808.

ajohns added 7 commits November 19, 2019 13:23
-fix bug where test was expanding root to first variant, even though first variant may not be the actual variant the test is running in
-better output (summary added)
-added handling of on_variants.requires case
Member

@JeanChristopheMorinPerso JeanChristopheMorinPerso left a comment


So I had time to look at the PR. In general I like your changes. I left some questions.

I'll also leave two general questions here:

  1. Should we add support for doing rez-test . or rez-test /path/to/package (can be useful for CI) in this PR?
  2. Are you planning to add a variant selection flag to the CLI?


if returncode:
    sys.exit(returncode)


We no longer exit with non-zero exit codes? Is that intentional?

Contributor Author


It's intentional, as it isn't clear what this would mean anymore.

Previously, rez-test raised as soon as a test failed. It doesn't do that anymore (unless you specify --stop-on-fail), and it also iterates over variants. So now you get a summary of all the tests, showing whether each passed, failed, or was skipped. In that context it isn't really clear what the exit code should mean, or whether it's useful to have at all.


I have the feeling it becomes unclear because the tests are skipped instead of being marked as failed. I've been thinking about it more today, and I really think tests that fail to resolve should be marked as failed. That would avoid hiding possible issues and would probably make it easier to use a meaningful exit code.

A lot of CI systems rely on the exit code of processes to know whether a step failed. If we don't return a proper exit code, it's going to be quite difficult to detect a test failure and mark a build as failed.

Contributor Author


OK, I'll mark those as failed, and set the exit code to match that of the first failed test.
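That exit-code policy can be sketched as follows. This is not the actual PackageTestRunner code; the results structure and names are hypothetical.

```python
# Hypothetical results collected after all tests have run:
# (test_name, returncode) pairs, where 0 means the test passed.
results = [("unit", 0), ("lint", 2), ("integration", 1)]

# Exit code of the first failed test, or 0 if everything passed.
exit_code = next((rc for _, rc in results if rc != 0), 0)

# The CLI would then finish with: sys.exit(exit_code)
```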

Optional("on_variants"): Or(
    bool,
    {
        "requires": [package_request_schema]


I'm curious here, how would you use this? If I understand correctly, this is a selection mechanism. So you can select the variants that the test should run on. So you can only have one filter?

Also, why not call the attribute `selection` or `filter`? `requires` sounds too much like the package `requires` and the tests' `requires`. Or maybe `matches`?

Contributor Author

@nerdvegas nerdvegas Dec 3, 2019


"on_variants" is the scope that you would consider the filter, whereas the fields within ("requires") can be thought of as the type of filter. So this reads like "run the test on variants that require X". For example, I haven't added it here (and probably won't), but in theory it would also make sense to have an "index" filter option within the on_variants dict. And "on_variants" can't be named "filter" because that's too generic. We could go for "variants_filter", but is that any better than "on_variants"?

Example of use: on_variants.requires=["maya"] will run only on variants that (directly) require maya; on_variants.requires=["!nuke"] would run on all non-nuke variants.

I suppose you could have multiple 'requires' filters that would OR together, but I'm not sure that would ever be useful, and it would get quite confusing with conflict ops present. I.e., what would "!maya OR !nuke" mean? That would be the same as no requires filter at all. ANDed options aren't necessary, because a single 'requires' field can already describe that (e.g. ["maya", "!python-3"] selects any maya variant not using python-3).
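The AND semantics described above can be sketched roughly like this. This is a simplification for illustration: it uses plain string matching on direct variant requires, not real rez requirement/version-range resolution, and the function name is invented.

```python
def variant_matches(variant_requires, filter_requires):
    """Return True if a variant's direct requires satisfy every filter entry.

    Entries starting with '!' exclude variants that require that package;
    all entries must hold (AND semantics). Simplified string matching only.
    """
    for entry in filter_requires:
        if entry.startswith("!"):
            if entry[1:] in variant_requires:
                return False  # variant requires an excluded package
        elif entry not in variant_requires:
            return False  # variant lacks a required package
    return True

# ["maya", "!python-3"] selects maya variants not using python-3
assert variant_matches(["maya", "python-2.7"], ["maya", "!python-3"])
assert not variant_matches(["maya", "python-3"], ["maya", "!python-3"])
assert not variant_matches(["nuke"], ["maya", "!python-3"])
```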


Note that my comment was not on the on_variants key but on the requires key (from your comment I have the feeling you thought I was talking about the on_variants name...)

What about:

"on_variant": {
    "filter_type": "requirements",  # or just "type"; eventually "index" could be a type too
    "filter": ["maya"]
}

I feel this would make the intention clearer here than having yet another property called requires.

Contributor Author


I think this is ok, and if different filter types are added, this will avoid some confusion.

I would go with type/value though rather than filter_type/filter (the on_variant dict is the filter)


Great. I'm fine with type and value.
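Under the agreed type/value naming, the earlier example would read something like this. A sketch only, assuming the schema lands as discussed in this thread; the test name and command are invented.

```python
# Hypothetical package.py "tests" attribute with the agreed filter naming.
tests = {
    "maya_test": {
        "command": "python -m pytest {root}/tests",
        "on_variants": {
            "type": "requires",  # the kind of filter applied to variants
            "value": ["maya"],   # run only on variants that directly require maya
        },
    }
}
```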

else:
    tests_dict = package.tests

return sorted((tests_dict or {}).keys())


Technically you can just do `sorted(tests_dict or {})`.

else:
    command = map(variant.format, command)

if reqlist.conflict:
    continue


As a user I would probably like to know that a variant was filtered out. In your second PR you are adding multiple verbosity levels; that could probably help, I guess. At least a small log would help debugging. Same for the next if.

Contributor Author

@nerdvegas nerdvegas Dec 3, 2019


Yeah, I wrestled with this a bit. It didn't feel right to keep telling the developer that their test has been skipped on a variant when they explicitly specified that anyway. However, I agree it probably makes sense to add it when verbosity is enabled. I'll take a look in the second PR.


The way I see it, this is like skipped tests in a test suite: your test framework tells you which tests were skipped, even if they were skipped deliberately. But maybe others have a different view. I guess we can just add the log and adjust if it's too verbose.

@nerdvegas
Contributor Author

"""Should we add support for doing rez-test . or rez-test /path/to/package (can be useful for CI) in this PR?"""

I wanted to keep that as a separate PR because there are some outstanding questions that need to be answered first, so I'd rather wait until we reach consensus. See #759

"""Are you planning to add a variant selection flag to the CLI?"""

Yes. I didn't want this PR to be too large, so was leaving this for next. Will also add the --interactive flag in this PR.

nerdvegas pushed a commit that referenced this pull request Dec 4, 2019
@nerdvegas
Contributor Author

Issues addressed in #811 (fixing them here will just cause conflicts).

If we're reasonably happy with what's here, I'd like to merge and roll further reviewing into #811.

@JeanChristopheMorinPerso
Member

I'll reply to your comments when I get home (in about 2 hours).


    "-s", "--stop-on-fail", action="store_true",
    help="stop on first test failure")
parser.add_argument(
    "--inplace", action="store_true",


Should --extra-packages and --inplace be mutually exclusive? In theory it doesn't make much sense to let a user pass --extra-packages if it's going to be ignored... Or is it ignored right now?

Contributor Author


Not sure, but agreed, they should be mutually exclusive to avoid confusion.
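For reference, stdlib argparse can enforce that directly with a mutually exclusive group. A sketch only: the real rez CLI wiring and the exact option signatures may differ.

```python
import argparse

# Minimal sketch of mutually exclusive rez-test flags (options are illustrative).
parser = argparse.ArgumentParser(prog="rez-test")
group = parser.add_mutually_exclusive_group()
group.add_argument(
    "--inplace", action="store_true",
    help="run the test in the current environment")
group.add_argument(
    "--extra-packages", nargs="+", metavar="PKG",
    help="extra packages to add to the test environment")

args = parser.parse_args(["--inplace"])  # fine on its own

# Passing both flags makes argparse exit with an error:
#   rez-test: error: argument --extra-packages: not allowed with argument --inplace
```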


@nerdvegas
Contributor Author

OK, I think after I make these changes we're on the same page. It's going to be a lot simpler to do this over in #811. Let's take the conversation over there, and when that's ready, I'll do both merges in the same release.

@JeanChristopheMorinPerso
Member

I also think we are now on the same page. So if you feel like it's going to be easier to do the changes in #811, I'm fine with that.

@nerdvegas
Contributor Author

nerdvegas commented Dec 4, 2019 via email

nerdvegas pushed a commit that referenced this pull request Dec 5, 2019
@nerdvegas nerdvegas merged commit 431d4e5 into master Jan 18, 2020
@bpabel bpabel deleted the rep001-1 branch January 19, 2023 20:35