
Break build packagewise #1370

Merged: 1 commit merged into MDAnalysis:develop on Jun 19, 2017

Conversation

utkbansal
Member

Fixes #1369

Changes made in this Pull Request:

PR Checklist

  • Tests?
  • Docs?
  • CHANGELOG updated?
  • Issue raised/referenced?

@utkbansal
Member Author

@jbarnoud Can you please have a look at this?

MDAnalysisTests.auxiliary.test_xvg.TestXVGFileReader.test_iterate_as_auxiliary_from_trajectory fails with the error IOError: Failed to interpret file '/home/travis/build/MDAnalysis/mdanalysis/testsuite/MDAnalysisTests/data/coordinates/.test.xtc_offsets.npz' as a pickle

I've seen such errors on Travis before, but restarting the build would always solve them. I have restarted 4-5 times, but I keep getting the same error. Also, I'm unable to reproduce this on my system.

@jbarnoud
Contributor

It seems that there is an issue with the offset file for an XTC trajectory. I do not see what causes the issue; maybe @mnmelo or @kain88-de would.

On the PR itself, it would be better to have all the tests run with mda_nose in a single invocation. Instead of mda_nose analysis && mda_nose auxiliary I would prefer mda_nose analysis auxiliary. It makes for a very long and ugly command, but there are at least three benefits:

  • All the tests will run; with what you have now, if an mda_nose command fails, the next ones do not run because of the &&.
  • You can keep the timer; having the timer for each individual run of mda_nose, as you have now, does not make much sense, and the report on the slowest tests in particular would be broken.
  • It should be faster, since some operations would be done only once. Deleting the offset files requires recursively going through the data directory, and we do not want to do that too often.

I hope it is not the case, but the last point could be the cause of your offset problem: the offset file may not yet be fully deleted by mda_nose analysis when mda_nose auxiliary starts, causing the latter to read a corrupted offset file.
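For illustration, a rough sketch of the two command forms discussed above (paths as in the current .travis.yml; the other flags are omitted for brevity):

# chained: if the first run fails, the second one never starts
python ./testsuite/MDAnalysisTests/mda_nosetests analysis && python ./testsuite/MDAnalysisTests/mda_nosetests auxiliary

# single invocation: one nose run collects both packages, the offset files
# are cleaned up once, and the timer report covers all tests
python ./testsuite/MDAnalysisTests/mda_nosetests analysis auxiliary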

@utkbansal
Member Author

utkbansal commented May 27, 2017

Okay, I'll make it into a single command. Also, can you think of a better way to deal with the files in the root directory?

@richardjgowers
Member

@utkbansal you could just scoop them up into a new folder. Ideally there shouldn't be anything in the root dir really, that's just messy.

@utkbansal
Member Author

@richardjgowers What should I call it? utils? All the tests are testing the supporting things for MD and not the core features, I think.

@utkbansal
Member Author

It looks like I messed up something with the knownfailure plugin, but I'm not sure what.

@kain88-de
Member

@utkbansal if the knownfailure plugin fails we might just deactivate those tests for now. This will likely cause a drop in coverage.

@mnmelo
Member

mnmelo commented May 29, 2017

The knownfailure plugin failing probably has to do with you including the plugins directory in the testing command. Just leave that out.

@utkbansal
Member Author

@mnmelo Thanks! But I don't understand how adding the plugins directory affects the result. When we had nose run the complete testsuite, it would also have looked into the plugins directory and found nothing, no tests. So how does asking it explicitly to look into the plugins directory change anything? It should still find nothing to test.

@mnmelo
Member

mnmelo commented May 29, 2017

That's a good point, and perhaps the test finder behaves differently if a directory is passed explicitly. It might also be that importing things from directories that are going to be explicitly tested is somehow prevented or breaks something.
These are all hypotheses. I don't really feel like digging into nose's docs/code to find out now that we're moving over to pytest, but do feel free to if you're curious!

@utkbansal closed this May 29, 2017
@utkbansal reopened this May 29, 2017
.travis.yml Outdated
@@ -21,7 +21,8 @@ env:
# Set default python version to avoid repetition later
- BUILD_DOCS=false
- PYTHON_VERSION=2.7
- MAIN_CMD="python ./testsuite/MDAnalysisTests/mda_nosetests --processes=2 --process-timeout=400 --no-open-files --with-timer --timer-top-n 50"
- MAIN_CMD="python ./testsuite/MDAnalysisTests/mda_nosetests analysis auxiliary coordinates core formats topology utils
Contributor

One way of making this cleaner would be to use ./testsuite/MDAnalysisTests/mda_nosetests $(cat tests_for_nose.txt) --options. This assumes that tests_for_nose.txt is a file where each line is a module to test. One advantage is that you can have a tests_for_pytest.txt, so moving tests from one test runner to the other can be done by moving the corresponding lines from one file to the other. It should look clearer in the diffs, and it will make .travis.yml a bit nicer to read.
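A minimal sketch of this suggestion, assuming a hypothetical tests_for_nose.txt at the repository root with one test module per line (flags taken from the existing command):

# tests_for_nose.txt
analysis
auxiliary
coordinates

# the nose invocation then becomes
python ./testsuite/MDAnalysisTests/mda_nosetests $(cat tests_for_nose.txt) --processes=2 --process-timeout=400

Moving a module to pytest would then only mean moving its line from tests_for_nose.txt to a tests_for_pytest.txt.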

Member Author

I think this is a temporary thing and, within a month from now, we will be back to something like python ./mda_pytest --flags.
I can do as you suggested, but it feels like overkill to me.

Contributor

As you prefer. It is not blocking for me, just neater.

@mnmelo
Member

mnmelo commented May 29, 2017

Travis failed again with one of those timeouts, but all seems ok now.

@richardjgowers
Member

So this works as intended, but another thing we have to get working as proof of concept is combining .coverage files from different test invocations. Maybe merge this and then try to split out one of the directories and combine coverage as a proof of concept?

@utkbansal
Member Author

@richardjgowers Sure, we can merge this.

Also, can you try python ./mda_nosetests test_log.py? A test fails when I do this, but when I run the complete testsuite, nothing fails.

@jbarnoud
Contributor

jbarnoud commented May 29, 2017 via email

@kain88-de
Member

@utkbansal how is it going? I'm looking forward to using pytest for the new DCD reader already.

@utkbansal
Member Author

@kain88-de Hi, the first part where I have to split the build is done, but the second part where I have to merge the coverage files is on hold until I get something actually running on pytest. If we want a proof of concept, I can do it in another PR once this is merged.

@kain88-de
Member

You can already do the split now, using the nose tests, with the first file you plan to convert. I would actually prefer that, since with the same test runner the coverage shouldn't change at all. It is the test of the coverage combination with the least number of free variables.

@utkbansal
Member Author

utkbansal commented Jun 2, 2017

@kain88-de Before I can run anything with pytest, I have to port all the existing plugins. If we can skip the plugins for now, I can do this for one file as a POC.

@kain88-de
Member

kain88-de commented Jun 2, 2017

You don't need pytest for the coverage proof of concept. Just call mda_nosetests twice with different files to test.

@utkbansal
Member Author

@kain88-de Okay, got it.

@kain88-de
Member

@utkbansal which module did you think about splitting off first? The lib util tests might be nice. @jbarnoud @richardjgowers any other ideas which one to split off?

@richardjgowers
Member

@kain88-de it's not really important, I just want to see that we can combine .coverage files.

@kain88-de
Member

My thought was that we can do a split already in this PR. This has to be done anyway, and then we can directly see whether it is picked up nicely by coveralls. I like doing it now, before we enable pytest, because it means we don't have a lot of variables that could influence whether the combination works.

From the coverage docs I think we need to do something like this to get the combination working.

env:
  global:
    - NOSE_COV_FILE=coverage_nose
    - PYTEST_COV_FILE=pytest_coverage
    - NOSE_TEST_LIST=...
    - PYTEST_TEST_LIST=...
    - MAIN_PYTEST_CMD=...
script:
  - COVERAGE_FILE=$NOSE_COV_FILE $MAIN_CMD $SETUP_CMD $NOSE_TEST_LIST
  - COVERAGE_FILE=$PYTEST_COV_FILE $MAIN_PYTEST_CMD $SETUP_CMD $PYTEST_TEST_LIST
after_success:
  - |
    if [[ $COVERALLS == 'true' ]]; then
      coverage combine $NOSE_COV_FILE $PYTEST_COV_FILE
      coveralls
    fi

In the end the combination should work in a normal PR as well so we can be somewhat sure of our coverage.
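As a side note, the combination can be rehearsed locally with the coverage command-line tool before wiring it into Travis; a minimal sketch, with arbitrary data-file names:

COVERAGE_FILE=.coverage.run1 python ./testsuite/MDAnalysisTests/mda_nosetests analysis --with-coverage --cover-package MDAnalysis
COVERAGE_FILE=.coverage.run2 python ./testsuite/MDAnalysisTests/mda_nosetests utils --with-coverage --cover-package MDAnalysis
coverage combine .coverage.run1 .coverage.run2   # writes the merged data to .coverage
coverage report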

@jbarnoud
Contributor

jbarnoud commented Jun 5, 2017

Yes, and until we have tests run with pytest, the second batch of tests can be a second batch of nose tests.

@utkbansal what is your progress on that matter? We need to have this working to decide how we deal with the test freeze.

.travis.yml Outdated

after_success:
- if [[ $COVERALLS == 'true' ]]; then coveralls; fi
- if [[ $COVERALLS == 'true' ]]; then \
coverage combine $NOSE_COVERAGE1 $NOSE_COVERAGE2;
Member

There is a missing line break here. You should also use the multiline YAML marker |. It is already used in the install section, so you can look there to see how to use it.
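A sketch of how that section could look with the literal block marker (same variables as in the diff above):

after_success:
  - |
    if [[ $COVERALLS == 'true' ]]; then
      coverage combine $NOSE_COVERAGE1 $NOSE_COVERAGE2
      coveralls
    fi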

@utkbansal
Member Author

@kain88-de For some odd reason test__set_verbose in utils/test_log.py fails when run individually.

@utkbansal
Member Author

@richardjgowers @jbarnoud @kain88-de Got the minimal build to pass too. Apparently, it doesn't like anyone touching its flags.

Other than the full build (which gets terminated), all other builds pass now. So I guess this should be good for now. A review would be helpful.

.travis.yml Outdated
- echo $MAIN_CMD $SETUP_CMD
- eval $MAIN_CMD $SETUP_CMD
- |
if [[ $NAME == 'minimal' ]] || [[ $NAME == 'full' ]] || [[ $NAME == 'Lint' ]] || [[ $NAME == 'osx' ]] || [[ $NAME == 'old numpy' ]] || [[ $NAME == 'numpy dev' ]]; then \
Contributor

The "lint" build should not be in this list. It has a completely different MAIN_CMD. Also, I am not a huge fan of having ifs like that, as it breaks the very generic way we had before; I would prefer all these gymnastics to be in the MAIN_CMD variable. But if it has to be this way (having everything in MAIN_CMD has its issues), I would rather test for the exceptions than for the general case: I would test for [[ $NAME != 'Doc' ]] && [[ $NAME != 'Lint' ]] for the first batch and [[ $NAME == 'Doc' ]] || [[ $NAME == 'Lint' ]] for the second batch.
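A hedged sketch of that exception-based variant in the script section (the commands inside the branches are placeholders for whatever each batch of builds actually runs):

script:
  - |
    if [[ $NAME != 'Doc' ]] && [[ $NAME != 'Lint' ]]; then
      eval $MAIN_CMD $SETUP_CMD    # first batch: all test builds
    fi
  - |
    if [[ $NAME == 'Doc' ]] || [[ $NAME == 'Lint' ]]; then
      eval $MAIN_CMD               # second batch: doc and lint builds
    fi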

Member

In the longer term we could look at using build stages for things like this

https://docs.travis-ci.com/user/build-stages
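For reference, a minimal sketch of the build-stages syntax from that page (stage names and commands here are illustrative placeholders, not a concrete proposal for this repository):

jobs:
  include:
    - stage: lint
      script: eval $LINT_CMD          # placeholder
    - stage: test
      script: eval $MAIN_CMD          # jobs within a stage run in parallel
    - stage: test
      script: eval $SECOND_TEST_CMD   # stages themselves run one after the other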

# assert_raises(ValueError, _set_verbose, verbose=True, quiet=True)
# assert_raises(ValueError, _set_verbose, verbose=False, quiet=False)
# # A deprecation warning is issued when quiet is set
# assert_warns(DeprecationWarning, _set_verbose, verbose=None, quiet=True)
Contributor

Only the two assertions on warnings are failing; could you comment out only these ones and add a comment that explains why they are commented out?

@kain88-de mentioned this pull request Jun 15, 2017
@utkbansal
Member Author

@jbarnoud @richardjgowers Better now?

.travis.yml Outdated
matrix:
fast_finish: true
include:
- os : linux
env: NAME='minimal'
PYTHON_VERSION=2.7
SETUP_CMD='--with-memleak'
MAIN_CMD='python ./testsuite/MDAnalysisTests/mda_nosetests analysis auxiliary coordinates core formats topology --processes=2 --process-timeout=400 --no-open-files --with-timer --timer-top-n 50 --with-memleak && python ./testsuite/MDAnalysisTests/mda_nosetests utils --processes=2 --process-timeout=400 --no-open-files --with-timer --timer-top-n 50 --with-memleak'
Contributor

I prefer the logic of that!

I think the environment variables are set sequentially, therefore you can still use SETUP_CMD. Also, you could have the list of tests for each call to nose in a couple of variables, to avoid mistakes when a test moves from one run to the other.

Finally, it should not be && between the two test runs, but only ;. With &&, the second run only happens if the first one succeeds.

Member Author
@utkbansal Jun 16, 2017

I don't understand how I can use SETUP_CMD. Do you want me to reference SETUP_CMD in MAIN_CMD? Do we want something like -
MAIN_CMD = python ./testsuite/MDAnalysisTests/mda_nosetests $NOSE_TEST_LIST $TEST_FLAGS $SETUP_CMD

Okay, will split the tests list into variables as before.

I thought we wanted &&?

It makes a very long and ugly command, but there are at least three benefits there:

Member

Yes, we should use &&. Otherwise we are starting two test processes at the same time, where each one uses 2 processes internally, giving a total of 4 running processes. But Travis only gives us 2 CPU cores. This means using anything but && might hurt the runtime of the tests.

Contributor

Looking closer, it cannot be SETUP_CMD indeed. But it can be a variable with another name, like NOSE_SPECIAL_OPT. You can assign NOSE_SPECIAL_OPT in the individual env sections, just before you assign MAIN_CMD, and refer to NOSE_SPECIAL_OPT while assigning MAIN_CMD. This way you show the difference between the builds; right now, the difference is hidden in a very long command line.

Also, the idea you had to have all the nose options in a global variable makes much more sense now. If you repeat the nose command for all the builds, you want to avoid duplication as much as possible to avoid mistakes (e.g. an option is changed in some builds but not all of them).

About &&, I may not have been clear enough, but I am saying the same thing in both comments. We want all tests to run, regardless of the passing status of the previous ones. && means "and"; cmd1 && cmd2 means "run cmd1, and if it succeeded then run cmd2". What we want is "run cmd1, run cmd2"; this is written cmd1; cmd2.

All in all, I would see something along the lines of:

MAIN_CMD="python ./testsuite/MDAnalysisTests/mda_nosetests ${NOSE_OPT} ${NOSE_SPECIAL_OPT} ${RUN1_TESTS}; python ./testsuite/MDAnalysisTests/mda_nosetests ${NOSE_OPT} ${NOSE_SPECIAL_OPT} ${RUN2_TESTS}"

Contributor
@jbarnoud Jun 16, 2017

@kain88-de I am pretty sure that with ; the commands run sequentially and not in parallel. And as mentioned in my previous comment, you take the risk of running only half of the tests if you use &&.

# is run individually. Initially seen in #1370

# assert_warns(DeprecationWarning, _set_verbose, verbose=None, quiet=True)
# assert_warns(DeprecationWarning, _set_verbose, verbose=False, quiet=True)
Contributor

For the record: I think this is an error due to how assert_warns is implemented (or, most likely, how it is used). Hopefully pytest, which has its own way of handling warnings, will solve the issue and we should have the test back online.

.travis.yml Outdated
@@ -62,7 +66,8 @@ matrix:

- os: linux
env: NAME='full'
MAIN_CMD='export COVERAGE_FILE=$NOSE_COVERAGE1 && python ./testsuite/MDAnalysisTests/mda_nosetests analysis auxiliary coordinates core formats topology --processes=2 --process-timeout=400 --no-open-files --with-timer --timer-top-n 50 --with-coverage --cover-package MDAnalysis && export COVERAGE_FILE=$NOSE_COVERAGE2 && python ./testsuite/MDAnalysisTests/mda_nosetests utils --processes=2 --process-timeout=400 --no-open-files --with-timer --timer-top-n 50 --with-coverage --cover-package MDAnalysis'
COVERAGE='--with-coverage --cover-package MDAnalysis'
MAIN_CMD='export COVERAGE_FILE=$NOSE_COVERAGE1; python ./testsuite/MDAnalysisTests/mda_nosetests ${NOSE_TEST_LIST1} ${NOSE_FLAGS} ${COVERAGE}'
Contributor

I think the second run is missing here.

Member Author

Fixed it. All good other than this?

@kain88-de
Member

@utkbansal your last change broke the build

@utkbansal
Member Author

Interesting discovery -

When I run ./mda_nosetests lib/test_util.py, the tests are executed just fine, but when I run ./testsuite/MDAnalysisTests/mda_nosetests lib, it fails with ImportError: No module named lib

The error makes sense though: the lib folder is not a Python package because it doesn't have an __init__.py file.

I wonder if these tests were run before.

@utkbansal
Member Author

@kain88-de Yep, saw that. I'm adding an __init__.py file to the lib folder. That should fix it.
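Roughly, the change amounts to the following layout (paths abbreviated; only the __init__.py file is new and can stay empty):

testsuite/MDAnalysisTests/lib/
    __init__.py     # new, empty; makes lib a package so nose can import and collect it
    test_util.py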

@jbarnoud
Contributor

It appears that adding the __init__.py made some more files visible to pylint. Could you add from __future__ import absolute_import, division in lib/test_utils.py, please?

Except for that it looks fine to me.

@kain88-de
Member

OK, so now there is the linter that needs to be satisfied. Otherwise this is good to go and increases our test coverage dramatically. Is there anything else that needs to be done to start the pytest transition?

@kain88-de
Member

@utkbansal you also need to do a rebase against develop. It will show the copyright year as conflicting; please choose the newer date.

@utkbansal
Member Author

utkbansal commented Jun 19, 2017

@kain88-de I rebased against upstream/develop and the rebase went fine - I have Copyright (c) 2006-2017 The MDAnalysis Development Team and contributors in all files in testsuite/MDAnalysisTests/utils - but oddly GitHub still shows me conflicts!?

The conflicting files don't exist on my branch; I had moved them to the utils folder.

@kain88-de
Member

@utkbansal no idea what's up with GitHub. Can you squash your commits and open a new PR?

@utkbansal
Member Author

Sure.

Merge into single command

Fix for root dir files

Move files in root dir to utils folder

Drop plugins directory

Merge coverage files

Fix coverage run command

Fix test list env variable

Fix output coverage file

Fix coverage file

Fix case

Comment out test__set_verbose

Fix minimal build

Fix merge command

Undo minimal build change

Fix multiline command

Fix lint build

Remove plugins from the minimal build

Try to fix the doc build

Adds the test flags to the minimal build

Reactivate tests that pass in test_log.py

Remove if-else logic from "script" phase

Adds missing call to eval in "script" phase

Flip test flags and test directories

Adds cover package argument

Split command into constituent variables

Fix MAIN_CMD in full build

Have the same order of execution for all builds

Add lib folder to the testing command

Make the lib folder a python package

* Adds __init__.py file to the lib folder

Adds __future__ import

@kain88-de
Member

OK, the squash itself solved the hiccup that GitHub had.

@utkbansal
Member Author

Makes it look like a minute amount of work!

@mnmelo
Member

mnmelo commented Jun 19, 2017

It actually makes for concise history to squash before merging, although, as you said, it might misrepresent the work that goes into each commit. Don't worry, the number of changed files/lines will speak for itself.

@utkbansal
Member Author

@kain88-de How soon can we merge this? I have some work with lib/test_util.py for which I can send a meaningful PR once this is merged.

Contributor
@jbarnoud left a comment

Looks good. Since the full build has issues, I cannot check that the coverage works as intended. I would be more comfortable merging if somebody else greenlights the PR. Everything looks OK to me.

@richardjgowers merged commit 49496c3 into MDAnalysis:develop Jun 19, 2017
@richardjgowers added this to the 0.16.x milestone Jun 19, 2017
@orbeckst mentioned this pull request Jun 21, 2017