Timeouts with Keyserver | Debating on providing key in build context #3312
@tianon just commented about a similar issue here: #3306 (comment). The SKS keyservers have been having a few issues these past few days, but we'd rather not see GPG key files committed to the repo.
Granted, but I'd still like to know the set of reasons why that would be a poor decision, so I can justify to myself hardcoding a list of individual keyserver URLs and fail/retry logic into the Dockerfiles as the alternative. Here are the pros and cons of committing the GPG key into the build context that I've thought of so far:

Pros

Cons

So, if that assumption on
The main reason I'm against committing the key files to Git is the combination of large diffs during review and, more importantly, that COPY still has irritating cache behavior: the keys would need to be copied into the image fairly early during the build, and thus end up causing unnecessary rebuilds.
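The cache interaction described above can be sketched with a minimal, hypothetical Dockerfile (the base image, key file name, and package are placeholders, not from any actual official image):

```dockerfile
FROM debian:jessie

# Any change to the checked-in key file invalidates the build cache at
# this COPY step...
COPY KEYS /usr/local/share/keys/KEYS
RUN gpg --import /usr/local/share/keys/KEYS

# ...so every later (and typically more expensive) layer, like this one,
# is rebuilt as well, even though it did not change.
RUN apt-get update && apt-get install -y some-package
```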
@tianon, could you reference the issue about this? I think I might just revert back to the high-availability pool and suggest that users behind firewalls can easily find/replace-all in our repo to switch to the p80 pool or their own key-delivery logic.
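For what it's worth, that find/replace-all could be a one-liner run from the repository root; this is just a sketch, and the exact pool hostnames (`ha.pool` vs. `p80.pool` on port 80) are assumptions based on the sks-keyservers.net pools discussed above:

```shell
# Rewrite every Dockerfile in the repo to use the port-80 pool instead
# of the default high-availability pool (hostnames are assumptions).
find . -name Dockerfile -exec \
  sed -i 's|ha\.pool\.sks-keyservers\.net|p80.pool.sks-keyservers.net:80|g' {} +
```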
Not sure there's a specific issue to reference, to be honest -- it's not necessarily something that's reproducible 100% of the time (sometimes it works properly, sometimes it doesn't).
Yes, but that's why we ask that maintainers keep the
IMO this is a sane course of action -- users with special requirements who insist on building the images themselves should be responsible for handling their special requirements (or should simply use the pre-built images as-is).
So I have been adjusting the methods that the official ROS and Gazebo images use to interact with the remote keyserver (see #3163 and #3162) and am not satisfied with their reliability. We recently migrated to the p80 pool for sks-keyservers.net to help users who may be behind corporate firewalls that block the default port 11371. But now, with continuous integration in place for our Docker images, I'm getting a lot of flaky tests solely because the keyserver is intermittently unreachable and times out: e.g.
I've seen it unofficially alluded to that the p80 pool is not load balanced, so this may be the root issue in this case; reverting to the traditional load-balanced pool may be the simplest solution. I'm not sure whether adding an exhaustive retry loop over p80 keyserver URLs would suffice. However, I'd like to ask whether it would be appropriate to simply provide the public key as an .asc file inside the build context. This has been similarly discussed in the Docker community here:
moby/moby#13555 and resolved here: moby/moby#29967
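For reference, the "exhaustive retry loop" idea mentioned above might look something like the following sketch; the function name, key ID, and server list are all placeholders, not part of any existing image:

```shell
#!/bin/sh
# Hedged sketch: try each keyserver in turn, a bounded number of times,
# before giving up. Key ID and hostnames below are hypothetical.
fetch_key() {
    key="$1"; shift
    for server in "$@"; do
        attempt=0
        while [ "$attempt" -lt 3 ]; do
            # Succeed as soon as any server/attempt combination works.
            if gpg --keyserver "$server" --recv-keys "$key"; then
                return 0
            fi
            attempt=$((attempt + 1))
            sleep 1
        done
    done
    echo "failed to fetch $key from all keyservers" >&2
    return 1
}

# Hypothetical usage:
# fetch_key 0x1234ABCD ha.pool.sks-keyservers.net \
#     hkp://p80.pool.sks-keyservers.net:80
```

This still trades build-time network flakiness for longer worst-case build times, rather than eliminating the keyserver dependency the way a committed .asc file would.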
I could mimic moby/moby#29967 by having my image templates auto-generate the included key file along with the entrypoint. I don't think it's as elegant as a one-liner keyserver command, and I'm not sure whether the same arguments about revocation apply to a static image layer as opposed to a live system, but it would be a reliable compromise between our corporate users who reuse our Dockerfiles and our image-build CI tests.
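A minimal sketch of that key-in-build-context approach, assuming a hypothetical gazebo.asc generated by the templates alongside the entrypoint (all file names here are placeholders):

```dockerfile
# The key ships with the build context, so no keyserver round-trip is
# needed at build time.
COPY gazebo.asc /tmp/gazebo.asc
RUN gpg --import /tmp/gazebo.asc \
 && gpg --batch --verify /tmp/pkg.tar.gz.asc /tmp/pkg.tar.gz \
 && rm /tmp/gazebo.asc
```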
pinging @tfoote @yosifkit @tianon