Sped up tests by relying on watchify and splitting up tests #2254
Conversation
'grunt browserify:watch',
'grunt karma:watch'
].join('&')
}
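For context, a hedged reconstruction of the pattern the diff fragment above comes from: several grunt watch tasks joined into one shell command so they run in parallel. The surrounding grunt-shell task name and structure are assumptions, not the actual Gruntfile:

```js
// Hypothetical grunt-shell task -- only the joined commands appear in
// the diff above; the rest of this structure is assumed.
shell: {
  watchAll: {
    command: [
      'grunt watch',
      'grunt browserify:watch',
      'grunt karma:watch'
    ].join('&') // '&' runs the commands concurrently by backgrounding them
  }
}
```

Backgrounding processes this way is what makes the zombie-process concern raised below possible: nothing cleans up the backgrounded commands if the parent shell dies.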
Is there a benefit of doing it this way over something like grunt-concurrent?
Cooool. Didn't know about that. I'll try it. Originally had a non-grunt wait
command in there too, but with this it's not needed.
Heh, my bad, should have mentioned that yesterday. What we've got here makes sense, but while we're still in a Grunt world we should probably use grunt-concurrent. Also helps avoid any weirdness around zombie processes that this could potentially introduce.
It's been switched to grunt-concurrent now. I was able to back out the keep-alive change for connect as well.
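For illustration, a minimal sketch of the grunt-concurrent equivalent, assuming the three watch tasks named in this thread (the options block is an assumption, not the actual Gruntfile):

```js
// Gruntfile.js (sketch) -- grunt-concurrent spawns each task in its
// own child process and shuts them all down together.
concurrent: {
  watchAll: {
    tasks: ['watch', 'browserify:watch', 'karma:watch'],
    options: {
      logConcurrentOutput: true // interleave output from all three tasks
    }
  }
}
```

Because grunt-concurrent manages the child processes itself, it avoids the orphaned-process risk of backgrounding shell commands with '&'.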
lgtm
I'm all for speeding up development builds. My main goal in the build work I did last month was to ensure it remained easy to open up the automated tests in your regular browser so you can debug issues properly. I'd be happy tracing back concatenated+babelified files to their sources as long as I don't have to attempt to fix a test issue solely through a stack trace on the command line. I love running automated tests through the command line but it's a terrible way to fix them when they break.
Force-pushed from 9938d9b to 37bb1da
Ok, I think I have a solution for tests here. Not perfect but better. Could use a review, please!
I should also note, I tried combining ALL the watch tasks into one concurrent command, but it was a mess of log output. I think it probably makes more sense just to open a new terminal for other tasks. I welcome any ideas for improving from here. Also there's something wrong with our tests right now that causes them to run infinitely when there's an error in a commonly used file, like component. I can't figure it out, but it made the super-concurrent setup all the more complicated.
@@ -22,3 +23,7 @@ env:
- secure: GIbhjUJapvC70nIZVlhVyK+3KAD2TVKpiY/q412OO7V2izbvcM1tvU3LBoMZbROzrt5TT84tCoJDvHnrpL0OvxPwrzL5CUU7h4UTxhTOyQkEinbYAnWlW9wdrvtdczsEvANkFPqBZ53B3hVHZHMLOG8QRWaTBicF68vSHEJFqb4=
This isn't your fault, but while you're editing this Travis yml file, please fix this indentation.
fixed
lgtm
@@ -8,6 +8,7 @@ before_install:
- export CHROME_BIN=chromium-browser
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
script: grunt test && grunt coveralls
Why not just keep the default and have `npm test` either get passed whether to run coveralls as an option, or figure it out itself?
Can you show me what you mean? Previously we had a conditional in the grunt file, and it created a long chain to follow to figure out how tests were run. This at least seemed like an improvement.
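One hypothetical reading of the suggestion, letting the npm script decide for itself by checking Travis's TRAVIS environment variable (an illustration only, not what was actually committed):

```json
{
  "scripts": {
    "test": "grunt test && if [ \"$TRAVIS\" = \"true\" ]; then grunt coveralls; fi"
  }
}
```

With this, .travis.yml can keep the default `script: npm test`, and local runs skip the coveralls step automatically.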
For the most part, looking good. I'd like to actually try it out locally before we pull it in, though.
@gkatsev I've added in your requests. Let me know when you've had a chance to try it out. I'd like to get this pulled in soon.
I'll get to it really soon.
// Needed so watchify can cache intelligently
watchAll: [
  'watch',
  'browserify:watch',
We should move `browserify:watch` and `browserify:tests` to the first two items in this list. As it currently stands, `browserify:tests` ends up running after `karma:watch`, so it takes some extra time before you can refresh your browser window.
The order doesn't seem to have a significant impact on when they're run (or rather, when they finish), from what I've seen. But I'll try it. I think we still want `watch` first though, because if you're developing the skin you don't want to wait for the rest, and the tests also depend on the skin being available.
Also, this is why I separated them out into `test-ui` and `test-cli`, so you can choose which you're using. I like to let the CLI version run and beep at me when there are test errors, so I'd personally prioritize `karma:watch` over the others.
- Temporarily killed API tests. Combining them with other tests.
- Using browserify:watch for build/temp/video.js (sandbox testing)
- Using karma:watch for automated tests

Using individual watch tasks allows watchify to use smart caching in both instances.

- Switched to grunt-concurrent for watch tasks
- Switched to travis containers, sudo: false
- Added caching of npm modules in Travis
- Consolidated travis testing
- Cleaned up grunt file
- Fixed travis.yml spacing
- Added the watchAll task for trying it out
- Moved travis test script logic to package.json
- Moved coverage reporting to Travis only

closes videojs#2254
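A minimal sketch of the Travis container and caching changes the commit message describes (the exact file contents are assumptions):

```yaml
# .travis.yml (sketch)
sudo: false           # opt in to Travis's container-based infrastructure
cache:
  directories:
    - node_modules    # reuse installed npm modules between builds
```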
@gkatsev does this mean anything to you? https://travis-ci.org/videojs/video.js/builds/71163633#L618
Not sure. I can take a look tomorrow.
The main thing that's happening here is setting up parallel watch tasks so that we can use watchify's smart caching of browserify source files. We end up with 3 watch tasks that run at the same time. We can't use grunt-contrib-watch for all of them, because it restarts browserify from scratch on every change instead of using watchify to cache the files, which is why our suite is so slow right now.
The first two are separate tasks because the automated tests only pull in the source files they need, not the full video.js. If we wanted to share the two, we would need to rewrite the test suite to work off the fully compiled video.js, but I don't think that's the way we want to go.
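A minimal sketch of what those two watchify-backed bundles could look like with grunt-browserify; the target names match this thread, but the source paths and the test bundle's output file are assumptions:

```js
// Gruntfile.js (sketch) -- grunt-browserify's `watch: true` option
// swaps in watchify, which caches unchanged modules between rebuilds.
browserify: {
  options: {
    watch: true // use watchify instead of rebuilding from scratch
  },
  watch: {
    // full library bundle for sandbox testing in a real browser
    files: { 'build/temp/video.js': ['src/js/video.js'] }
  },
  tests: {
    // test bundle pulls in only the source files the tests require
    files: { 'build/temp/tests.js': ['test/unit/**/*.js'] }
  }
}
```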
Also temporarily killed API tests. I'm combining them with other tests so we only have one suite.
I'd be interested to get @gkatsev's feedback, since the parallel watch tasks idea came from him. And also @dmlap's, since this reverts some of the work he did to compile the library first before running tests. Let me know if any of this isn't clear.