Bug: Digest::Base cannot be directly inherited in Ruby #979
Comments
Looks like aerospike/aerospike-client-ruby#45 had the same problem when accessing a digest. Going to open up a PR for this.
r10k's content_synchronizer uses the Forge v3 API when installing a module:

- https://github.com/puppetlabs/r10k/blob/4.0.0/lib/r10k/module/forge.rb#L95
- https://github.com/puppetlabs/r10k/blob/4.0.0/lib/r10k/module/forge.rb#L128
- https://github.com/puppetlabs/r10k/blob/4.0.0/lib/r10k/module/forge.rb#L176

r10k uses a thread pool to install the modules, therefore the Forge v3 API calls have to be thread-safe, and so does the LruCache class used to cache API responses. If LruCache is left non-thread-safe, the same error as reported in r10k issue #979 happens. puppetlabs/r10k#979
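A common way to make a small LRU cache thread-safe in Ruby is to guard every read and write with one lock. Below is only an illustrative sketch; the class name, the `fetch` API, and the eviction details are my assumptions, not r10k's actual LruCache:

```ruby
# Sketch of a thread-safe LRU cache; illustrative only, not r10k's LruCache.
require 'monitor'

class ThreadSafeLruCache
  def initialize(max_size)
    @max_size = max_size
    @data = {}            # Ruby hashes preserve insertion order
    @lock = Monitor.new   # reentrant lock guarding all cache access
  end

  def fetch(key)
    @lock.synchronize do
      if @data.key?(key)
        @data[key] = @data.delete(key)         # move to most-recently-used slot
      else
        @data[key] = yield                     # compute the value under the lock
        @data.shift if @data.size > @max_size  # evict the least-recently-used entry
      end
      @data[key]
    end
  end
end

cache = ThreadSafeLruCache.new(2)
cache.fetch(:a) { 1 }
cache.fetch(:b) { 2 }
cache.fetch(:c) { 3 }        # evicts :a
puts cache.fetch(:a) { 99 }  # => 99 (recomputed after eviction)
```

Using `Monitor` instead of `Mutex` makes the lock reentrant, so a cache-miss block that ends up calling back into the cache won't self-deadlock.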
I'm seeing this consistently now, and it's causing r10k runs to fail. Fully up-to-date r10k on RHEL 8, puppetserver-7.17.1-1.el8.noarch.
Same deal when I try it on puppetserver-8.6.1-1. I should note that it's not happening every run, but if I run it against a few dozen branches, I see the error 5-6 times. Note that I'm not seeing this error on my puppetserver-7.9.1-1.el8.noarch hosts.
Describe the Bug
r10k sometimes throws a `Digest::Base cannot be directly inherited in Ruby` error. This leads to modules not being deployed and therefore to other errors (failing CI pipelines, in our case). This showed up after we enabled the `pool_size` feature. (Currently using `pool_size: 10`.)
Expected Behavior
r10k doesn't fail and successfully deploys all modules.
Steps to Reproduce
Currently I can't reliably reproduce this. We had 6 (out of probably thousands) jobs failing with this error within ~24h.
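For anyone else trying to reproduce: since the error message points at Ruby's Digest machinery under concurrency, one way to probe it is to hammer `Digest` from many threads at once. This is a speculative sketch, not a confirmed reproducer; the thread and iteration counts are arbitrary, and on an unaffected Ruby it will simply finish cleanly:

```ruby
# Speculative repro attempt: exercise Digest from many threads concurrently,
# the kind of access under which "Digest::Base cannot be directly inherited
# in Ruby" has reportedly been raised. Not guaranteed to trigger the bug.
require 'digest'

threads = Array.new(20) do
  Thread.new do
    100.times { Digest::SHA256.hexdigest('some module content') }
  end
end
threads.each(&:join)
puts 'no error raised on this run'
```

A clean run here doesn't prove the bug is absent; it just means this particular interleaving didn't hit the race.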
Environment
Additional Context