Using a Redis server that requires an SSL connection with a self-signed certificate exits 0 with 0 tests run #292
This behavior is actually by design. As such, the reporter step isn't optional; it's 100% required, as it's the only thing that ensures all queued tests were run. This is definitely not documented enough, and you are not the first one to get bitten by this. What we could probably do is make this opt-in with some sort of
That's a different feature request.
@ChrisBr any opinion?
Thanks for the extra context. In my case, such a flag would meet my needs for increasing confidence. I'm running a small cluster (16 nodes), and I don't expect any of them to fail or be unavailable.
CAP strikes again I guess. It makes sense that you would want your distributed job to allow for failure of nodes.
Short of adding a new feature, lower-hanging fruit could be to emit a warning when the worker goes into a soft-failure mode, ideally along with why, to differentiate failure modes, and possibly to link to some docs explaining why it's not exiting.
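The warning suggested above could look something like this. This is a minimal sketch; the method name, message wording, and call site are assumptions, not ci-queue's actual API:

```ruby
# Hypothetical helper: announce that the worker is entering soft-failure
# mode, including the reason, so a silent exit-0 run is at least visible
# in the CI logs.
def warn_soft_failure(reason)
  message = "ci-queue worker entering soft-failure mode: #{reason}. " \
            "This run may exit 0 even though tests did not execute; " \
            "the reporter step is required to verify all queued tests ran."
  warn message # prints to stderr
  message
end

warn_soft_failure("could not establish SSL connection to Redis")
```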
👍 I'm going to poke at it a bit. I'll be at RubyConf this week, so I probably won't have a PR until after that. Thanks for taking a look at the issue.
Hosted Redis servers use self-signed certificates (because they do not own the domain they run on, such as `compute-1.amazonaws.com`), so full SSL certificate verification will fail. The fix is to disable peer verification so a self-signed certificate can be used ([docs from a hosted Redis/key-value store provider](https://devcenter.heroku.com/articles/connecting-heroku-redis#connecting-in-ruby)). This PR makes allowing connections to self-signed Redis servers the default behavior. The failing SSL connection was originally reported in Shopify#292; however, that issue focuses on raising the visibility of such failures. This is an alternative to Shopify#293 that does not introduce a new configuration flag.
FYI, this is exactly what minitest-distributed does: it does not have a summary step, and all workers report. It doesn't have all the bells and whistles of ci-queue (e.g. no bisect), but it is a lot simpler and hence integrates more easily. Let me know what you think.
The issue is described in my debugging notes here heroku/heroku-buildpack-ruby#1505 (comment)
Context
Heroku rolled out SSL connections by default (https://devcenter.heroku.com/changelog-items/2992). However, all certificates are self-signed (as the servers do not "own" the domain `compute-1.amazonaws.com`), so we recommend people disable peer verification (https://devcenter.heroku.com/articles/connecting-heroku-redis#connecting-in-ruby). When this change happened, my CI was reporting that my runs were successful; however, I never noticed that they were executing 0 tests.
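Following the Heroku docs linked above, disabling peer verification for a `rediss://` URL can be sketched like this. The helper name is an assumption; `ssl_params` with `OpenSSL::SSL::VERIFY_NONE` is the option the redis-rb client accepts, as shown in the linked docs:

```ruby
require "openssl"

# Hypothetical helper: build redis-rb connection options for a hosted
# Redis instance that presents a self-signed certificate.
def redis_options(url)
  opts = { url: url }
  # rediss:// means TLS; hosted providers serve self-signed certs on
  # domains they do not own, so peer verification must be disabled.
  if url.start_with?("rediss://")
    opts[:ssl_params] = { verify_mode: OpenSSL::SSL::VERIFY_NONE }
  end
  opts
end

# These options would then be passed to Redis.new(**opts).
opts = redis_options("rediss://user:secret@example.compute-1.amazonaws.com:6380")
```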
While debugging, I tried connecting to a legacy non-SSL URL (`REDIS_TEMPORARY_URL`), and when I made that change, everything worked fine.
Expected
When I try to connect to a Redis instance and the connection fails for any reason, the runner exits non-zero.
Actual
It reports that 0 tests were executed and then does not fail.
Suggestion
Ideally, if the runner cannot connect to Redis (for any reason), it would hard-fail and give more connection information. I understand that rspec-queue is marked deprecated in the README; however, I'm guessing that this behavior might also persist in other shared connection code paths.
Beyond improving the connection failure message, if a flag or env var could be added to set the verification mode to none for use with hosted/self-signed certs, that would be helpful, as I have no other way of working around this problem at this time.
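The hard-fail behavior suggested here could be sketched as a startup check. This is a minimal illustration under assumptions; the method name and messages are hypothetical, not ci-queue's actual API, and the client only needs to respond to `ping`:

```ruby
# Hypothetical startup check: ping Redis before queueing any tests, and
# exit non-zero with diagnostics if the connection fails, instead of
# silently reporting 0 tests executed.
def assert_redis_reachable!(client)
  client.ping
rescue StandardError => e
  warn "Could not connect to Redis (#{e.class}): #{e.message}"
  warn "Check the URL scheme (redis:// vs rediss://) and SSL verification settings."
  exit 1
end
```

On success the check is a no-op; on an SSL handshake or connection error the run fails loudly instead of exiting 0.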