Install Mocha's global error handler in each test's iframe #140

Closed · wants to merge 6 commits

Conversation

Mr0grog (Collaborator) commented May 27, 2013:

This patch fixes #136 by copying Mocha's global error handler onto each iframe after the runner starts (Mocha sets up the handler as the last thing Runner#run does, so you can't even do it on the start event). When the runner ends, it cleans everything up.
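The approach described above might be sketched roughly like this (a hedged illustration, not the actual patch: the function name `installIframeErrorHandlers`, its parameters, and the way the iframes are passed in are all assumptions; only the copy-after-start and clean-up-on-end behavior comes from the description above):

```javascript
// Illustrative sketch of the approach: after the runner starts, copy the
// global error handler that Mocha's Runner#run installed on the parent
// window onto each test iframe, and detach it all when the runner ends.
// All names here are assumptions for illustration.
function installIframeErrorHandlers(parentWindow, runner, iframes) {
  // Runner#run sets up this handler as the last thing it does, so it can
  // only be read after the runner has started.
  var mochaGlobalError = parentWindow.onerror;

  iframes.forEach(function (frame) {
    frame.contentWindow.onerror = function () {
      mochaGlobalError.apply(this, arguments);
      // Return false so Poltergeist doesn't try to capture the error directly.
      return false;
    };
  });

  // When the runner ends, clean everything up.
  runner.on('end', function () {
    iframes.forEach(function (frame) {
      frame.contentWindow.onerror = null;
    });
  });
}
```

The sketch deliberately copies the handler rather than overwriting Mocha's `process.on()` shim, matching the reasoning in the comment below.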

Alternatively, you could overwrite Mocha's process.on() shim, but that seems like relying on an implementation detail and prone to breakage.

This also adds a test for async errors.

Also worth noting: while writing this patch, I discovered Mocha's async error handling is a little buggy, which I had missed in my earlier testing. Specifically, if Mocha catches an async error and done() is still called later, things can go wrong: Mocha doesn't fully clean up the test when the async error is caught, but it does fire test-failure events :\

I think #136 is still worth fixing anyway, but I should probably also file a bug on Mocha.

window.onerror = function() {
  _mochaGlobalError.apply(this, arguments);
  // return false so Poltergeist doesn't try to capture the error directly
  return false;
};
jfirebaugh (Owner) commented:

According to the spec, returning true indicates the error is handled. WebKit had it backward for some time, including the version PhantomJS uses. I'll get a `return true` fix upstream in Mocha -- meanwhile, see bfa5516.
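The return-value semantics under discussion can be modeled in a few lines (a simplified illustration of the spec behavior, not browser internals; the function name is an assumption): only an explicit `true` return from `window.onerror` marks the error as handled and suppresses the browser's default error reporting, which is why the older inverted WebKit behavior mattered.

```javascript
// Simplified model of the spec semantics: window.onerror cancels the
// browser's default error reporting only when the handler returns exactly
// `true`. Any other return value (false, undefined, etc.) lets the
// default reporting proceed. Function name is illustrative.
function suppressesDefaultReporting(onerrorReturnValue) {
  return onerrorReturnValue === true;
}
```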

Mr0grog (Collaborator, author) commented:

Is bfa5516 really desirable? It seems like it would be useful to catch errors that happen before the tests themselves start. In any case, I'll remove it.

jfirebaugh (Owner) commented:

I wouldn't want it in a production test suite, but I'm not worried about it for the two poltergeist examples in Konacha's own suite.

Mr0grog (Collaborator, author) commented:

Oh, duh. For some reason I was thinking that would apply to apps using Konacha, which is obviously nonsense.

jfirebaugh (Owner) commented:

Thanks, committed as f7fc286, and added you as a repo collaborator.

jfirebaugh closed this May 28, 2013
Successfully merging this pull request may close these issues.

Asynchronous errors aren't causing tests to fail