
V2 ./pants test.pytest selects interpreter based off of compatibility constraints #7679

Merged: 15 commits, May 13, 2019

Conversation

@Eric-Arellano (Contributor) commented May 8, 2019

Currently the V2 task will always use the current interpreter to run Pytest, even if the targets have different requirements. Instead, we must support choosing the interpreter for the subprocess based on the targets' compatibility (which falls back to global compatibility if missing).

Note that we let Pex do the interpreter resolution for us to simplify the Pants code. All we have to do is pass the interpreter constraints to Pex, along with exposing --python-setup-interpreter-search-paths, and then Pex will handle the resolution for us.
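
As a rough illustration of the approach (not the PR's actual code; the helper name below is invented for the sketch), the rule only needs to derive constraints per target, fall back to the global setting, and hand them to Pex as command-line arguments:

    # Hypothetical sketch only: read each target's compatibility, fall back to
    # the global PythonSetup defaults, and emit Pex --interpreter-constraint args.
    def pex_interpreter_constraint_args(python_setup, targets):
      args = []
      for target in targets:
        constraints = getattr(target, 'compatibility', None) or python_setup.interpreter_constraints
        for constraint in constraints:
          args.append('--interpreter-constraint={}'.format(constraint))
      return args

Pex then performs the actual interpreter discovery and selection among the interpreters found on the exposed search paths.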

Remaining tasks:
* Convert HydratedTarget to PythonTarget to read .compatibility
* Figure out why Pex can't resolve interpreters other than current one
@jsirois (Contributor) commented May 8, 2019

Note that Pants does not choose the interpreter itself for the subprocess, as it relies on Pex to do the interpreter resolution. We simply must identify the constraints and then pass those constraints and the interpreter search paths to Pex.

NB: You can run a pex with an interpreter of your choice: /my/python /my/pex [args]. With current pex bugs you must add a PEX_PYTHON prefix like so PEX_PYTHON=/my/python /my/python /my/pex [args] (pex-tool/pex#709 & pex-tool/pex#710). So, iff you know where the right python is, you can control that it gets used. This I think is a ways off though since it ultimately requires some rules to compile or resolve the right python and get it in the CAS so you can mix it into the snapshot for the execution request.
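
For illustration, the workaround described above could be driven from Python roughly like this (the interpreter and pex paths are placeholders):

    import os
    import subprocess

    # Run the pex with an explicit interpreter and set PEX_PYTHON to the same
    # path so the pex does not re-select a different interpreter (the
    # pex-tool/pex#709 / #710 workaround described above).
    env = dict(os.environ, PEX_PYTHON='/my/python')
    subprocess.check_call(['/my/python', '/my/pex'], env=env)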

@Eric-Arellano (Contributor, Author) commented:

The source code is complete, modulo requested changes.

This is still missing test code. Stu and I talked a little over Slack about this—not quite sure what is meaningful to test here, e.g. integration test vs unit test. I have to get going for the day but any ideas on what to test would be much appreciated, as I've never done testing with V2 before.

@stuhood (Member) left a comment:

Thanks, looks good.

As to testing this, a few tools for the toolbelt:

  1. It's possible that we can add an @ensure_v2 decorator that operates on integration tests (similar to @ensure_pantsd), and add it to tests like:

     def test_cli_option_wins_compatibility_conflict(self):
       # Tests that targets with compatibility conflicts collide.
       binary_target = '{}:deliberately_conficting_compatibility'.format(self.testproject)
       pants_run = self._build_pex(binary_target)
       self.assert_success(pants_run, 'Failed to build {binary}.'.format(binary=binary_target))
  2. It's possible to unit test any @(console_)rule using run_rule:

     res = run_rule(fast_test, console, (target1, target2), {
       (TestResult, Address): make_result,
     })

     ... particularly if you decompose the rule into multiple rules.
  3. It's possible to more-unit-than-integration-test any @(console_)rule by directly making a product_request in a TestBase test:

     def test_integration_concat_with_snapshots_stdout(self):
       self.create_file('f1', 'one\n')
       self.create_file('f2', 'two\n')
       cat_exe_req = CatExecutionRequest(
         ShellCat(BinaryLocation('/bin/cat')),
         PathGlobs(include=['f*']),
       )
       concatted = self.scheduler.product_request(Concatted, [cat_exe_req])[0]
       self.assertEqual(Concatted('one\ntwo\n'), concatted)

     ... to take advantage of this method, you'd generally add some shim @rules around the @rules that you're trying to test, and/or add things that you'd like to directly inject (like PythonSetup) as RootRules that you pass in.
  4. It's possible to unit-but-mostly-integration-test a @console_rule via ConsoleRuleTestBase:

     class ListTargetsTest(ConsoleRuleTestBase):
       goal_cls = list_targets.List

       @classmethod
       def alias_groups(cls):
         return BuildFileAliases(
           targets={
             'target': Target,
             'java_library': JavaLibrary,
             'python_library': PythonLibrary,
           },
           objects={
             'pants': lambda x: x,
             'artifact': Artifact,
             'scala_artifact': ScalaArtifact,
             'public': Repository(name='public',
                                  url='http://maven.example.com',
                                  push_db_basedir='/tmp'),
           }
         )

       @classmethod
       def rules(cls):
         return super(ListTargetsTest, cls).rules() + list_targets.rules()

John suggested that we start using helper functions in this file.

This particular new helper function will make it easier to test that we're doing the right thing for interpreter constraints. It also might inspire other improvements to the file!
…than HydratedTarget

More direct API which also makes testing much easier.
commit f7acde5
Author: Eric Arellano <ericarellano@me.com>
Date:   Thu May 9 11:46:14 2019 -0700

    Update cache dependencies to what is actually used

commit 785ee9f
Author: Eric Arellano <ericarellano@me.com>
Date:   Thu May 9 11:40:37 2019 -0700

    Revert "Convert cache BUILD into an entry for each file"

    This reverts commit ec9077d.

    Actually this API becomes really annoying to import.

commit 1be577e
Author: Eric Arellano <ericarellano@me.com>
Date:   Thu May 9 11:35:12 2019 -0700

    Add cache to engine, which was missing

commit ec9077d
Author: Eric Arellano <ericarellano@me.com>
Date:   Thu May 9 11:34:03 2019 -0700

    Convert cache BUILD into an entry for each file

    This pattern allows us to only import the code we actually care about and makes it easier to ensure requirements actually match what's used.
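
For reference, the "entry per file" pattern that commit describes (and that the later commit reverts) would look roughly like the following; the target and file names here are invented for the sketch:

    # Hypothetical BUILD sketch: one target per source file, so that consumers
    # depend only on the files they actually import.
    python_library(
      name='interpreter_cache',
      sources=['interpreter_cache.py'],
      dependencies=[':pex_build_util'],
    )

    python_library(
      name='pex_build_util',
      sources=['pex_build_util.py'],
    )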

@stuhood (Member) left a comment:

Thanks, this looks good to me.

We don't need to test things like compatibility flag winning over global constraints - that is not what this file is focused on. Instead, the main thing we need to test is that we can convert PythonTargetAdaptors to the proper format PEX is expecting.

This also works around issues with mocking the PythonSetup subsystem.
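
A minimal sketch of such a unit test follows; the fake stand-ins, the assumed import of parse_interpreter_constraints from the rules module under test, and the expected flag format are illustrative assumptions, not necessarily the PR's exact API:

    # Hypothetical unit-test sketch: verify that target adaptors are converted
    # into the Pex-style constraint arguments described above.
    class FakePythonSetup:
      interpreter_constraints = ['CPython>=3.6']

    class FakeTargetAdaptor:
      compatibility = ['CPython>=2.7,<3']

    def test_converts_adaptor_compatibility_to_pex_args():
      args = parse_interpreter_constraints(FakePythonSetup(), [FakeTargetAdaptor()])
      # The exact return format is an assumption of this sketch.
      assert args == ['--interpreter-constraint', 'CPython>=2.7,<3']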
@Eric-Arellano Eric-Arellano changed the title WIP: Pytest v2 task selects interpreter based off of compatibility constraints Pytest V2 task selects interpreter based off of compatibility constraints May 9, 2019
@Eric-Arellano Eric-Arellano marked this pull request as ready for review May 9, 2019 21:03

@Eric-Arellano (Contributor, Author) left a comment:

Ready for review now. Thanks both for your help getting it to this point!

@@ -26,10 +28,24 @@ class PyTestResult(TestResult):
pass


def parse_interpreter_constraints(python_setup, python_target_adaptors):

@Eric-Arellano (Contributor, Author) commented:

Bikeshed time! In rules files, should we have helper functions like this appear above or below the main @rule and def rules()?

I have no preference and really don't know what's best.

@stuhood (Member) replied:

> Bikeshed time! In rules files, should we have helper functions like this appear above or below the main @rule and def rules()?

IMO, it should appear nearest to the @rule that it is being used by. If there were more rules in this file, they might appear above/below a static function for that reason.

But it doesn't really matter.

@Eric-Arellano Eric-Arellano changed the title Pytest V2 task selects interpreter based off of compatibility constraints V2 ./pants test.pytest selects interpreter based off of compatibility constraints May 9, 2019