Adding caching to Dockerfiles #5570
```diff
@@ -9,15 +9,16 @@ MAINTAINER Lars Gierth <lgierth@ipfs.io>
 ENV GX_IPFS ""
 ENV SRC_DIR /go/src/github.com/ipfs/go-ipfs

-COPY . $SRC_DIR
+COPY ./package.json $SRC_DIR/package.json

-# Build the thing.
-# Also: fix getting HEAD commit hash via git rev-parse.
-RUN cd $SRC_DIR \
-  && mkdir .git/objects \
-  && make build
+# Fetch dependencies.
+# Also: allow using a custom IPFS API endpoint.
+RUN set -x \
+  && go get github.com/whyrusleeping/gx \
+  && go get github.com/whyrusleeping/gx-go \
+  && ([ -z "$GX_IPFS" ] || echo $GX_IPFS > /root/.ipfs/api) \
+  && cd $SRC_DIR \
+  && gx install

 # Get su-exec, a very minimal tool for dropping privileges,
 # and tini, a very minimal init daemon for containers
```

Review comments on the `go get github.com/whyrusleeping/gx` line:

> I would prefer if

> Actually, I'm pretty sure we neither want nor need gx in the docker build. We only need to do this here because we haven't run

> I see. Yes, we should be copying bin (I guess? I haven't read the docker documentation)
|
```diff
@@ -33,6 +34,15 @@ RUN set -x \
   && wget -q -O tini https://github.com/krallin/tini/releases/download/$TINI_VERSION/tini \
   && chmod +x tini

+COPY . $SRC_DIR
+
+# Build the thing.
+# Also: fix getting HEAD commit hash via git rev-parse.
+RUN set -x \
+  && cd $SRC_DIR \
+  && mkdir .git/objects \
+  && make build
+
 # Get the TLS CA certificates, they're not provided by busybox.
 RUN apt-get update && apt-get install -y ca-certificates
```
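The restructuring in the hunks above is the standard Docker layer-caching pattern: copy only the dependency manifest first, fetch dependencies, and only then copy the rest of the source, so that a source edit does not invalidate the expensive dependency layer. A minimal generic sketch of the pattern (base image, paths, and the `fetch-deps` step are illustrative placeholders, not from this PR):

```dockerfile
FROM golang:1.11
ENV SRC_DIR /go/src/example.com/myapp

# 1. Copy only the dependency manifest. This layer's cache key depends
#    solely on package.json, so ordinary source edits leave it cached.
COPY ./package.json $SRC_DIR/package.json

# 2. Fetch dependencies. Re-runs only when package.json changes.
#    ("fetch-deps" stands in for gx install, go mod download, etc.)
RUN cd $SRC_DIR && fetch-deps

# 3. Copy the full source and build. Only these layers are rebuilt
#    when the source changes.
COPY . $SRC_DIR
RUN cd $SRC_DIR && make build
```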
I took a similar but different approach to making the ipfs-cluster Docker build faster. Instead of caching the gx dependencies in the Docker cache (which still requires fetching them over the Docker network stack, hitting ipfs.io/ipfs/... for each dependency even though 99% of the time they are already in your local GOPATH, at least once and then again each time they are updated), I run `gx install --local` before building the Docker container. This brings all the deps into the `vendor/` directory, and I added a Docker-specific Make install target which uses `gx install --local`. This way, when Docker copies the SRC_DIR into the build context it brings the gx packages with it, and when building inside the container gx can use the already 'local' dependencies.

The advantage of this is that you get the benefit of the Docker cache and never have to use the Docker network stack to go out and retrieve dependencies again.

You can see the Makefile here and the Dockerfile here.
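The vendoring approach described above can be sketched roughly as follows (the target name and image tag are hypothetical; see the linked Makefile and Dockerfile for the real versions):

```makefile
# Hypothetical Make target: vendor gx deps on the host, then build the
# image. COPY . inside the Dockerfile then includes vendor/, so the
# build needs no network fetches for dependencies.
docker-build:
	gx install --local
	docker build -t my-image .
```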
Hi @lanzafame, thanks for the info.
Unfortunately that is not a great solution for me, because I use a remote Docker host and I don't want to upload hundreds of megabytes worth of build context.
There are many reasons to use a remote build host; I use one because I'm on macOS.
@rob-deutsch Would that maybe be a case for a third Dockerfile, i.e. Dockerfile.remote? I have had enough fights with Docker daemon networking issues that I would rather not see any more network-reliant requests added to the Dockerfile.
To start with, I think this PR should be merged, because (I believe) it's a strict improvement over what's currently there.
Longer term, I don't mind whether there's a separate Dockerfile that sends all the dependencies in the build context.
@rob-deutsch Have you considered "Docker for Mac" (free as in beer), which is actually just Docker running in a headless Linux VM on your local machine? It would still copy hundreds of megabytes of build context, but it would do so very quickly since nothing is transmitted over the network. Just a thought, anyway.