Framework: Improve Travis build performance #14289
"npm run lint" \ | ||
"npm run check-local-changes" \ | ||
"npm run check-licenses" \ | ||
"npm run test-unit -- --ci --maxWorkers=2 --cacheDirectory=\"$HOME/.jest-cache\"" |
I'd be interested in seeing what putting `npm run lint`, `npm run check-local-changes`, and `npm run check-licenses` into their own job adds to the build times. I don't like the idea of running these concurrently here, because it means someone relying on Travis has no insight into whether the subsequent tasks would pass or not.
> I'd be interested in seeing what putting `npm run lint`, `npm run check-local-changes`, and `npm run check-licenses` into their own job adds to the build times.
Yeah, this is what prompted my most recent thread comments in Slack. I agree that ideally they'd be defined as separate jobs, but it's not clear how to do that without increasing build times (kinda contrary to the point of the pull request).

On the general topic of "making sense of the build output", there might be some other options to explore for organizing the results in a way which is easier to read. Right now, the concurrent output is a bit of a mess, since it just spews everything to `stdout` in the order it's received (i.e. parallel scripts each with their own outputs are intermixed). I think it comes down to: can we leverage, find, or develop tooling to organize the output in a more readable fashion?
> I don't like the idea of running these concurrently here, because it means someone relying on Travis has no insight into whether the subsequent tasks would pass or not.
I guess it depends on whether it's more in the interest of the developer to have faster build times, or a more thorough report of every issue in the build when there are multiple issues. I don't have a strong leaning one way or the other, but I'd be inclined to think the former benefits more people in the vast majority of cases.
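For the output-organization point, one option worth trying (a sketch, not something verified in this build) is `concurrently`'s labeling flags, which prefix each interleaved line with the script that produced it:

```yaml
# Hedged sketch for .travis.yml, assuming the parallel step keeps using
# the concurrently package: --names labels each output line with the
# originating script, making failures easier to attribute.
script:
  - >
    npx concurrently --names "lint,changes,licenses,unit"
    "npm run lint"
    "npm run check-local-changes"
    "npm run check-licenses"
    "npm run test-unit -- --ci --maxWorkers=2 --cacheDirectory=\"$HOME/.jest-cache\""
```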
Have you tested with `--maxWorkers=4`? Travis VMs have two CPUs available, so it's usually a good idea to have more workers than that so the CPUs are fully utilised: the workers will likely have I/O wait time to deal with.
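Concretely, the suggestion amounts to a one-flag change (a sketch only; the ideal worker count would still need benchmarking on Travis):

```yaml
# Oversubscribing the two Travis CPUs means a worker blocked on I/O
# doesn't leave a core idle.
script:
  - npm run test-unit -- --ci --maxWorkers=4 --cacheDirectory="$HOME/.jest-cache"
```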
Assumed previously relied on `postinstall`.

Slated for removal as of #13569.
I added […]. If it's more trouble than it's worth, I'm fine with moving it somewhere that isn't run as often. It really only needs to run once, in a single Travis job.
Yeah, I thought it might be the case. Ideally there would be a separate script hook specifically for when installing with arguments (i.e. […]).

There are quite a few instances in the setup which optimize for developer experience, which is great, but I think they ought to be isolated from what's run in Travis. Another one I've been seeing is that we still run […].
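To make the "run it once" idea concrete, a minimal sketch (the job layout is illustrative, not this pull request's actual config) would hoist the license check into a single dedicated job rather than repeating it in every job that runs `npm install`:

```yaml
# One job owns the license check; all other jobs drop it entirely.
jobs:
  include:
    - name: License and static checks
      script:
        - npm run check-licenses
        - npm run lint
```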
Analyzing the current results using another build as baseline:

Before: https://travis-ci.com/WordPress/gutenberg/builds/103420242

[…]

† Expanding on this last point:

I'm also a bit curious if these need to be run as separate containers, or if we could instead parallelize the end-to-end tests within a single container.
> I'm also a bit curious if these need to be run as separate containers, or if we could instead parallelize the end-to-end tests within a single container.
I don't know the architecture of Travis containers, but I would expect that each container has at most two cores available. I assume this based on the unit tests setup, which uses two workers at the moment. I also know that Circle CI had a similar limitation in the past.
```diff
@@ -32,8 +29,9 @@ fi

 # Run PHPUnit tests
 if [[ $DOCKER = "true" ]]; then
-	npm run test-php || exit 1
-	npm run test-unit-php-multisite || exit 1
+	docker-compose $DOCKER_COMPOSE_FILE_OPTIONS run --rm composer run-script lint
```
What's the reason for calling those commands directly rather than continuing to use `npm run *`?
> What's the reason for calling those commands directly rather than continuing to use `npm run *`?
I guess I assumed that if we weren't installing dependencies, we shouldn't run npm scripts. In retrospect, it's probably not strictly a problem. Overall, it's part of a move toward "we don't need npm (or even Node) at all here", maybe even opening up future possibilities to avoid having it installed in the environment at all.
Sounds good. Should we keep those npm scripts moving forward? There might be value in it, but I'm afraid that at some point they will diverge from what is added to the Travis config.
> Sounds good. Should we keep those npm scripts moving forward? There might be value in it, but I'm afraid that at some point they will diverge from what is added to the Travis config.
Agreed on the concern. I know I've used the npm script variants, likely out of convenience / familiarity over what would be the `docker-compose` equivalent, but I'm not overly compelled that they should stay. There's some nice uniformity on the developer-experience front in having all test commands available in a single location; the Travis environment is the exception here.
You're right, it is two cores: https://docs.travis-ci.com/user/reference/overview/#virtualisation-environment-vs-operating-system

To be embarrassingly candid, I'm not sure I understand how the upper bound of cores translates to what types and extent of parallelization we can implement; for example, whether it's possible to run multiple Puppeteer tabs in parallel in a single Node process (single core?). Regardless, there may be some benefit to parallelizing inside the container, albeit maybe not as much as we might have hoped. It may not be worth the effort, though that could also depend on how easy or difficult it is to implement. For example, I found some projects in the ecosystem which exist to simplify it: https://github.com/thomasdondorf/puppeteer-cluster

In any case, would you think a more immediate solution might be to further fork the end-to-end tests across 3 or 4 tasks, vs. the current two?
Apart from testing a higher number of workers, it's quite reasonable to split testing over multiple jobs. We can have up to 15 concurrent jobs at one time across the entire WordPress org, and we rarely hit that limit. It's a pity build stages don't let us do all of the environment setup first, then copy that across multiple VMs, so we could avoid re-running […].

Something we could explore is setting up a custom Docker image that does all of that setup work. Then it'd just be rebuilt whenever packages change.
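On the "more jobs" option, a rough sketch of splitting the end-to-end suite across four containers follows. The slicing mechanism, variable names, and config path are assumptions for illustration, not part of this pull request:

```yaml
# Each job runs only its own slice of the e2e test files. E2E_PART and
# E2E_TOTAL are hypothetical variable names; the Jest config path is an
# assumption.
jobs:
  include:
    - name: E2E tests (1/4)
      env: E2E_PART=0 E2E_TOTAL=4
      script:
        # List test files, keep every fourth one, run just those paths.
        - >
          npx jest --config packages/e2e-tests/jest.config.js --listTests
          | awk -v p="$E2E_PART" -v t="$E2E_TOTAL" 'NR % t == p'
          | xargs npx jest --config packages/e2e-tests/jest.config.js --runTestsByPath
    # ...plus three analogous jobs with E2E_PART=1, 2, 3
```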
Yeah, there was some related conversation about this on Slack. It's certainly not as straightforward as I wish it were, but Travis does promote an option of publishing files to S3 in an early stage, which are then pulled into later stages for re-use. It's not entirely clear if, or how much of, a positive benefit this would have, particularly considering the overhead involved in archiving and transmitting a folder containing […].
Yesterday I also checked out the stages docs once again, and yeah, it's a shame we can't use them to host our environment. That said, after a quick look at […], I'm wondering if we can use Travis CI's cache functionality here in some way: maybe archive that folder and add it to the Travis CI cache.
There is also previous work started by @noisysocks, where he tries to add caching for building packages in #13124. With the changes proposed in this PR, it seems like caching packages might be less important for the initial Travis run. However, it might still be beneficial for follow-up runs where new commits are added to PRs.
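For reference, a minimal sketch of the built-in cache feature being discussed; the directories listed are guesses at what would be worth keeping, and the cache restore/upload time would need to be weighed against the install time it saves:

```yaml
# Travis persists these directories between builds of the same branch
# or pull request.
cache:
  directories:
    - node_modules
    - $HOME/.jest-cache
```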
FYI: I'm considering this more a directional / experimental pull request, though perhaps it can land after a few iterations and review.

Following up on E2E parallelization: I think it'd be enough for the short term to split it into more containers. Unsure if I want to explore it here, but I think we should also: […]
I'm working on a side-project which, while not directly related to this effort, could provide a plugin ZIP distributable for a given pull request at its latest HEAD SHA, to be used for this purpose. Or, alternatively: I think we lack some coverage in verifying that our "package plugin" step works correctly, and this could be a good opportunity to implement it as a startup job for the build, serving the dual purpose of packaging coverage and making the build available to each of the subsequent jobs. This does require an S3 (or equivalent) integration, however.

I also want to tinker with splitting "JS unit tests" into individual jobs for what currently comprises […].
This would also be good because e2e tests will be aligning more closely to a "traditional" plugin installation; having them all start from the same plugin ZIP would be good.
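A hedged sketch of that "package once, reuse everywhere" shape using build stages is below. The bucket name is a placeholder, credentials are omitted, and this illustrates the idea rather than anything in this pull request:

```yaml
# Stage one builds the plugin ZIP a single time; later test jobs would
# fetch it from S3 instead of rebuilding.
stages:
  - package
  - test
jobs:
  include:
    - stage: package
      script: ./bin/build-plugin-zip.sh
      deploy:
        provider: s3
        bucket: example-gutenberg-artifacts  # placeholder
        skip_cleanup: true
```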
Oh, and more: this and my side-project have made me consider what we can be doing to reduce the time it takes for […]. A few high-level thoughts occur to me: […]
I should also note that these notes are a brain dump because, with other priorities, I've not yet been able to dedicate time to revisit this in more detail.
Yes, definitely. It's a bit scary to think that we have all this testing in place, but what's distributed to users is whatever is produced by this packaging shell script, which is not necessarily the same as what runs in the testing environments (notably because of the last step: choosing which files to include in the ZIP).
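On the earlier point about reducing install time, one commonly used combination (an assumption here, not something proposed in this thread) is to cache npm's content-addressable store rather than `node_modules`, and to install with `npm ci`:

```yaml
# The ~/.npm store is cheap to restore, and npm ci skips resolving
# dependencies against an existing node_modules tree.
cache:
  directories:
    - $HOME/.npm
install:
  - npm ci --prefer-offline
```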
```yaml
- npm run lint
- npm run check-local-changes
- npm run test-unit -- --ci --maxWorkers=2 --cacheDirectory="$HOME/.jest-cache"
- npm run build
```
In #14432 I'm proposing changes which would ensure that we always use the source code to run unit tests. If that change lands, we could do one of the following:

1. remove `npm run build` altogether from this job
2. add a flag which will allow the Jest config to remove the override which ensures that `build` folders aren't used

As far as I remember, if we do (2) we would have quite good coverage for all of our codebase, both transpiled with Babel and as original source code. In development we would always use the source code, and on Travis we would use a setup closer to what you have when using the code from npm.
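To illustrate option (2), here is a sketch of how the toggle might surface in the Travis job; the flag name is invented, and the real mechanism would live in the Jest configuration proposed in #14432:

```yaml
# Run unit tests against built output on CI only; local runs keep
# resolving to source. TEST_AGAINST_BUILD is a hypothetical flag.
script:
  - npm run build
  - TEST_AGAINST_BUILD=true npm run test-unit -- --ci --maxWorkers=2
```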
While I do plan to continue the effort here, I don't plan to use this pull request itself, and discussion would be better served and tracked in an issue anyway. With that in mind, I'll close this; I've opened #15159 to track ongoing work.
This pull request seeks to explore a few ideas for improved Travis build performance:

- Remove the `postinstall` script from `package.json`
  - Previously, builds effectively ran `npm run build` twice, wastefully, because of the combination of the `postinstall` triggered by an initial `npm install`, and a subsequent explicit build
- Run the license check once: `npm install` is run in many separate jobs, but `npm run check-licenses` only really needs to be run at most one time, not in each job
- Avoid `npm run build` in the PHPUnit job. Previously it may have been needed to use the npm scripts for `npm run test-php` and `npm run test-unit-php-multisite`, but these resolve to `docker-compose` commands anyways, so it seems reasonable enough to call them directly

Testing instructions:
Verify a passing build.
In review, ensure there is no lack of coverage for what had previously been tested.