
Timeouts with Keyserver | Debating on providing key in build context #3312

Closed
ruffsl opened this issue Aug 10, 2017 · 5 comments

@ruffsl
Contributor

ruffsl commented Aug 10, 2017

So I have been adjusting the methods by which the official ROS and Gazebo images interact with the remote keyserver (see #3163 and #3162), and am not satisfied with their reliability. We recently migrated to the p80 pool for sks-keyservers.net to help users who may be behind corporate firewalls that block the default port 11371. But now, with continuous integration in place for our Docker images, I'm getting a lot of flaky tests solely because the keyserver is intermittently unreachable and times out: e.g.

I've seen it unofficially alluded to that the p80 pool is not load balanced, which may be the root issue here; reverting to the traditional load-balanced pool may be the simplest solution. I'm not sure whether adding an exhaustive retry loop over p80 keyserver URLs would suffice. However, I'd like to ask whether it would be appropriate to simply provide the public key as a .asc file inside the build context. This has been similarly discussed in the Docker community here:
moby/moby#13555 and resolved here: moby/moby#29967
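For illustration, the exhaustive retry loop mentioned above might look like the sketch below. This is not the official images' code: the server list, the example fingerprint, and the `FETCH_CMD` indirection (which lets the loop be exercised without network access) are all assumptions for the sake of the sketch.

```shell
#!/bin/sh
# Hypothetical sketch: cycle through a list of keyserver URLs, retrying
# each a few times, before failing the build. Server list is illustrative.
KEYSERVERS="hkp://ha.pool.sks-keyservers.net:80 hkp://p80.pool.sks-keyservers.net:80 hkp://pgp.mit.edu:80"

# Overridable fetch command, so the loop itself can be tested offline.
: "${FETCH_CMD:=apt-key adv --keyserver}"

fetch_key() {
    key="$1"
    for server in $KEYSERVERS; do
        for attempt in 1 2 3; do
            if $FETCH_CMD "$server" --recv-keys "$key"; then
                return 0
            fi
            echo "fetch from $server failed (attempt $attempt)" >&2
            sleep 1
        done
    done
    echo "all keyservers exhausted" >&2
    return 1
}

# Possible usage in a Dockerfile RUN step (fingerprint is an example):
#   fetch_key 421C365BD9FF1F717815A3895523BAEEB01FA116 || exit 1
```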

I could mimic moby/moby#29967 by having my image templates auto-generate the included key file along with the entrypoint. I don't think it's as elegant as a one-liner keyserver command, and I'm not sure whether the same arguments about revocation apply to a static image layer as opposed to a live system, but it would be a reliable compromise between our corporate users who reuse our Dockerfiles and our image-build CI tests.
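A minimal sketch of that approach, in the spirit of moby/moby#29967; the file name and path here are hypothetical:

```dockerfile
# Hypothetical sketch: ship the public key as an .asc file in the build
# context instead of fetching it from a keyserver at build time.
COPY ros.asc /tmp/ros.asc
RUN apt-key add /tmp/ros.asc \
    && rm /tmp/ros.asc
```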

pinging @tfoote @yosifkit @tianon

@yosifkit
Member

@tianon just commented about a similar issue here: #3306 (comment). The sks keyservers have been having a few issues these past few days, but we'd rather not see gpg key files committed to the repo.

@ruffsl
Contributor Author

ruffsl commented Aug 11, 2017

but we'd rather not see gpg key files committed to the repo.

Granted, but I'd still like to know the set of reasons why that would be a poor decision, so I can justify to myself hardcoding a list of individual keyserver URLs and fail/retry logic into the Dockerfiles as the alternative.

Here are the pros and cons of committing the gpg key into the build context that I've thought of so far:

Pros

  • Reliability
    • Key delivery is equivalently secure to keyserver delivery, as the fingerprint is just as vulnerable to alteration (i.e. a compromised GitHub TLS certificate + DNS hijacking + a SHA-1 collision in git)
    • One remote connection, and thus one point of failure, is removed from the build steps
  • Connectivity
    • No need to accommodate local firewalls or failing keyserver load balancers
    • Users could still build the images without internet given a proxy cache of the apt packages

Cons

  • Size
    • Given that some PGP keys can be large, swapping them in and out under revision control could bloat the repository
    • However, we change keys infrequently; some keys in use date from 2009
  • Revocation
    • If the key is later revoked, the keyserver can serve the revocation notice
    • In that instance, would the apt-key adv --keyserver command fail given the fingerprint?
    • This could halt compromised builds as long as the revocation is circulated sufficiently

So, if that assumption about apt-key's behavior were enforced, the revocation point would have some teeth. However, from reading the man page on apt-key adv, it doesn't sound like it bothers to check for revocation with the keyserver:

adv
Pass advanced options to gpg. With adv --recv-key you can e.g. download key from keyservers directly into the trusted set of keys. Note that there are no checks performed, so it is easy to completely undermine the apt-secure(8) infrastructure if used without care.
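Since apt-key adv performs no revocation check on import, a build could in principle refresh the key afterwards and inspect gpg's listing itself. The sketch below is a hedged illustration of that idea, not apt-key's own behavior; the `GPG_LIST` indirection is an assumption so the check can be exercised without a live keyring, and the "[revoked" marker is how recent gpg versions annotate revoked keys in their listing output.

```shell
#!/bin/sh
# Hedged sketch: detect a revoked key by grepping gpg's listing for the
# "[revoked" marker. GPG_LIST is overridable for offline testing.
: "${GPG_LIST:=gpg --list-keys}"

key_revoked() {
    $GPG_LIST "$1" | grep -q '\[revoked'
}

# Possible usage after importing a key $KEY:
#   gpg --keyserver hkp://ha.pool.sks-keyservers.net:80 --refresh-keys "$KEY"
#   if key_revoked "$KEY"; then
#       echo "key $KEY has been revoked; aborting build" >&2
#       exit 1
#   fi
```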

@tianon
Member

tianon commented Aug 11, 2017 via email

@ruffsl
Contributor Author

ruffsl commented Aug 11, 2017

but more importantly that "COPY" still has irritating cache behavior

@tianon, could you reference the issue about this?
Is this it: moby/moby#32816?
Wouldn't the entrypoint script be just as susceptible to breaking the cache?

I think I might just revert back to the high-availability pool and suggest that users behind firewalls can easily find/replace-all in our repo to switch to the p80 pool or custom key-delivery logic.

@tianon
Member

tianon commented Sep 13, 2017

could you reference the issue about this?

Not sure there's a specific issue to reference, to be honest -- it's not necessarily something that's reproducible 100% (sometimes it works properly, sometimes it doesn't).

Wouldn't the entrypoint script be just a susceptible to breaking the cache?

Yes, but that's why we ask that maintainers keep the COPY for that as low in the Dockerfile as possible, which keeps the cache damage as small as possible (usually just a few metadata instructions like CMD, EXPOSE, etc.). For something like KEYS, it has to come before the actual software download/build/compile steps, so it's going to be quite high in the Dockerfile.
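An illustrative sketch of the ordering constraint described above; the file names and package are hypothetical:

```dockerfile
# A KEYS file must be copied before the download step it verifies, so any
# change to it invalidates nearly every later cached layer:
COPY KEYS /tmp/KEYS
RUN apt-key add /tmp/KEYS \
    && apt-get update \
    && apt-get install -y some-package

# An entrypoint script, by contrast, can be copied last, so a change to it
# busts only a few cheap metadata instructions:
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]
```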

I think I might just revert back to the high-availability pool and suggest that users behind firewalls can easily find/replace-all in our repo to switch to the p80 pool or custom key-delivery logic.

IMO this is a sane course of action -- users with special requirements who insist on building the images themselves should be responsible for handling their special requirements (or should simply use the pre-built images as-is).
