
Sped up tests by relying on watchify and splitting up tests #2254

Merged: 1 commit merged into videojs:master from the speedup-tests branch on Jul 15, 2015

Conversation

heff (Member) commented Jun 10, 2015

The main thing that's happening here is setting up parallel watch tasks so that we can use watchify's smart caching of browserify source files. We end up with 3 watch tasks that run at the same time. We can't use grunt-contrib-watch for all of them, because it restarts browserify from scratch on each change instead of letting watchify cache the files, which is why our suite is so slow right now.

  • grunt browserify:watch - rebuild build/temp/video.js with each change for sandbox manual tests
  • grunt karma:watch - rerun tests with relevant changes for automated tests
  • grunt watch - everything else that doesn't need to be smart: Sass and jshint.

The first two are different tasks because the automated tests only pull in the source files they need, not the full video.js build. If we wanted to share one task between the two, we would need to rewrite the test suite to work off the fully compiled video.js, but I don't think that's the direction we want to go.
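For anyone less familiar with the watchify side of this: the caching is normally switched on through grunt-browserify's watch option, which swaps in watchify under the hood. A minimal sketch of that shape (the file paths and target names here are illustrative, not the exact config in this PR):

    // Gruntfile.js excerpt (illustrative sketch, not the PR's exact config)
    browserify: {
      // one-off build of the full library
      build: {
        files: { 'build/temp/video.js': ['src/js/video.js'] }
      },
      // long-running watch build; watch: true tells grunt-browserify to use
      // watchify, so only changed modules are re-bundled on each rebuild
      watch: {
        files: { 'build/temp/video.js': ['src/js/video.js'] },
        options: {
          watch: true,
          keepAlive: true
        }
      }
    }

Because keepAlive holds the task open, a target like this has to run alongside the other watch tasks rather than ahead of them, which is what drives the parallel setup described above.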

Also temporarily killed API tests. I'm combining them with other tests so we only have one suite.

Would be interested to get @gkatsev's feedback since the parallel watch tasks idea came from him.
And also @dmlap's since this is reverting some of the work he did to compile the library first before running tests. Let me know if any of this isn't clear.

'grunt browserify:watch',
'grunt karma:watch'
].join('&')
}
Member

Is there a benefit of doing it this way over something like grunt-concurrent?

Member Author

Cooool. Didn't know about that. I'll try it. Originally had a non-grunt wait command in there too, but with this it's not needed.

Member

Heh, my bad, should have mentioned that yesterday. What we've got here makes sense, but while we're still in a Grunt world we should probably use grunt-concurrent. Also helps avoid any weirdness around zombie processes that this could potentially introduce.

Member Author

It's been switched to grunt-concurrent now. I was able to back out the keep-alive change for connect as well.
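For reference, the grunt-concurrent shape being discussed here looks roughly like the following (logConcurrentOutput is a real grunt-concurrent option; the target name and task list are a sketch rather than the PR's exact config):

    // Gruntfile.js excerpt (illustrative sketch)
    concurrent: {
      options: {
        // interleave output from the parallel tasks in one terminal
        logConcurrentOutput: true
      },
      watchAll: [
        'watch',             // sass, jshint, and other simple rebuilds
        'browserify:watch',  // rebuild build/temp/video.js via watchify
        'karma:watch'        // rerun the automated tests on change
      ]
    }

Running grunt concurrent:watchAll then takes the place of the hand-rolled join('&') shell command from the earlier diff.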

mmcc (Member) commented Jun 10, 2015

lgtm

dmlap (Member) commented Jun 12, 2015

I'm all for speeding up development builds. My main goal in the build work I did last month was to ensure it remained easy to open up the automated tests in your regular browser so you can debug issues properly. I'd be happy tracing back concatenated+babelified files to their sources as long as I don't have to attempt to fix a test issue solely through a stack trace on the command line. I love running automated tests through the command line but it's a terrible way to fix them when they break.

heff force-pushed the speedup-tests branch 3 times, most recently from 9938d9b to 37bb1da on July 13, 2015 at 23:48
heff (Member Author) commented Jul 14, 2015

Ok, I think I have a solution for tests here. Not perfect but better. Could use a review, please!

Run grunt dev and it will:

  • Build
  • Start a server
  • Concurrently watch multiple file sets
    • jsbin and sass watching
    • browserify:watch, building the lib for sandbox and dist while using watchify caching for speed

Run grunt test-ui and it will watch and build the test files for the QUnit UI, located at localhost:9999/test/. It requires grunt dev to already be running the server.

Run grunt test-cli and it will run karma:watch, watching the test files and automatically rerunning them on the command line when they change.
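For a sense of what the karma:watch flow involves, a browserify-based Karma setup is commonly wired up along these lines (a generic karma-browserify sketch with assumed paths, not the exact karma.conf.js in this repo):

    // karma.conf.js excerpt (generic karma-browserify sketch)
    module.exports = function(config) {
      config.set({
        frameworks: ['qunit', 'browserify'],
        files: ['test/unit/**/*.js'],
        preprocessors: {
          // each test file is bundled with only the sources it requires
          'test/unit/**/*.js': ['browserify']
        },
        browserify: {
          debug: true // sourcemaps for more readable stack traces
        },
        autoWatch: true,  // keep Karma running and rerun tests on change
        singleRun: false
      });
    };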

Also...

  • Fixed line-number reporting (coverage now only runs on Travis). In the QUnit UI you have to use the try/catch option to see source line numbers in the console, because QUnit doesn't handle sourcemaps.
  • Consolidated Travis test differences into karma.conf.js
  • Removed iPad and Android from Sauce testing for now
  • Updated Travis to use their new containers for better speed, and also added npm caching
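The container and caching changes in the last item typically amount to something like this in .travis.yml (a sketch of the usual settings rather than this PR's exact diff):

    # .travis.yml excerpt (illustrative sketch)
    sudo: false          # opt in to Travis's container-based infrastructure
    cache:
      directories:
        - node_modules   # reuse installed npm modules between builds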

@heff
Copy link
Member Author

heff commented Jul 14, 2015

I should also note, I tried combining ALL the watch tasks into one concurrent command, but it was a mess of log output. I think it probably makes more sense just to open a new terminal for other tasks. I welcome any ideas for improving from here.

Also there's something wrong with our tests right now that causes them to run infinitely when there's an error in a commonly used file, like component. I can't figure it out, but it made the super-concurrent setup all the more complicated.

@@ -22,3 +23,7 @@ env:
- secure: GIbhjUJapvC70nIZVlhVyK+3KAD2TVKpiY/q412OO7V2izbvcM1tvU3LBoMZbROzrt5TT84tCoJDvHnrpL0OvxPwrzL5CUU7h4UTxhTOyQkEinbYAnWlW9wdrvtdczsEvANkFPqBZ53B3hVHZHMLOG8QRWaTBicF68vSHEJFqb4=
Member

This isn't your fault, but while you're editing this Travis yml file, please fix this indentation.

Member Author

fixed

mmcc (Member) commented Jul 14, 2015

lgtm

mmcc mentioned this pull request on Jul 14, 2015
@@ -8,6 +8,7 @@ before_install:
- export CHROME_BIN=chromium-browser
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
script: grunt test && grunt coveralls
Member

Why not just keep the default and have npm test either get passed whether to run coveralls as an option or figure it out itself?

Member Author

Can you show me what you mean? Previously we had a conditional in the Gruntfile, and it created a long chain to follow to figure out how tests ran. This at least seemed like an improvement.
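For what it's worth, the final commit message further down notes that the Travis test script logic was moved into package.json, which is the direction the question above points at. That commonly ends up looking something like the following package.json excerpt (the exact script contents here are an assumption, not necessarily the PR's final version):

    "scripts": {
      "test": "grunt test"
    }

With that, Travis can fall back to its default npm test entry point, and, per the notes above, the remaining Travis-specific test differences were consolidated into karma.conf.js.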

gkatsev (Member) commented Jul 14, 2015

For the most part, looking good. I'd like to actually try it out locally before we pull it in, though.

heff (Member Author) commented Jul 14, 2015

@gkatsev I've added in your requests. Let me know when you've had a chance to try it out. I'd like to get this pulled in soon.

gkatsev (Member) commented Jul 14, 2015

I'll get to it really soon.

// Needed so watchify can cache intelligently
watchAll: [
  'watch',
  'browserify:watch',
Member

We should move browserify:watch and browserify:tests to be the first two items in this list.
It seems like, as it currently is, browserify:tests ends up running after karma:watch, so it takes some extra time before you can refresh your browser window.

Member Author

The order doesn't seem to have a significant impact on when they're run (or rather when they finish), from what I've seen. But I'll try it. I think we still want watch first though, because if you're developing the skin you don't want to wait for the rest, and also the tests depend on the skin being available.

Member Author

Also, this is why I separated them out into test-ui and test-cli, so you can choose which you're using. I like to let the CLI version run and beep at me when there are test errors, so I'd personally prioritize karma:watch over the others.

heff merged commit 3e35935 into videojs:master on Jul 15, 2015. The commit message:
- Temporarily killed API tests. Combining them with other tests.
- Using browserify:watch for build/temp/video.js (sandbox testing)
- Using karma:watch for automated tests

Using individual watch tasks allows watchify to use smart caching in both
instances.

- Switched to grunt-concurrent for watch tasks
- Switched to travis containers, sudo: false
- Added caching of npm modules in Travis
- Consolidated travis testing
- Cleaned up grunt file
- Fixed travis.yml spacing
- Added the watchAll task for trying it out
- Moved travis test script logic to package.json
- Moved coverage reporting to Travis only

closes videojs#2254
heff (Member Author) commented Jul 15, 2015

@gkatsev does this mean anything to you? https://travis-ci.org/videojs/video.js/builds/71163633#L618

gkatsev (Member) commented Jul 15, 2015

Not sure. I can take a look tomorrow.
