Push the server images from the crossbuild CI #1400
Comments
This is a good idea, but I'm not sure it will happen in the near future. One challenge: how do we garbage collect old images? We'll build a lot of these images very quickly.
you can use
in fact, images are already built and published as tarballs to gs://kubernetes-release-dev/ci/, including ones for pull requests. Example: What is missing is actually publishing those images into some registry (something like gcr.io/kubernetes-dev-ci/?)
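For context, a hedged sketch of how those published CI tarballs can be fetched and their bundled image tarfiles loaded locally; the bucket path is from the comment above, while the `latest.txt` marker and the exact file names inside the server tarball are assumptions about the standard layout:

```sh
# Assumption: a latest.txt version marker is published alongside the CI builds.
version=$(gsutil cat gs://kubernetes-release-dev/ci/latest.txt)

# Download and unpack the server tarball for that build.
gsutil cp "gs://kubernetes-release-dev/ci/${version}/kubernetes-server-linux-amd64.tar.gz" .
tar -xzf kubernetes-server-linux-amd64.tar.gz

# The server tarball bundles docker image tarfiles; load them into the local engine.
for image in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  docker load -i "kubernetes/server/bin/${image}.tar"
done
```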
How are you planning to use the (hypothetical) images in the registry?
one example could be validation of PRs locally. kubeadm can be instructed to use images from a different registry with a specific version tag.
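For illustration, a hedged sketch of pointing kubeadm at a different registry via its config file; the field names follow kubeadm's MasterConfiguration of that era and should be double-checked against the kubeadm version in use, and the registry and version values are placeholders:

```sh
# Placeholder registry and CI version; substitute real values for the build under test.
cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
imageRepository: gcr.io/kubernetes-ci-images
kubernetesVersion: v1.8.0-alpha.1.602+231c0783ed97bc
EOF

kubeadm init --config kubeadm.yaml
```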
yes, in order to test k8s images against master. We could recycle old ones if we have to.
ping @ixdy Could we push the images generated by the crossbuild CI to WDYT? I could probably implement it; I think it's quite straightforward code-wise.
Sounds good to me. I don't have time to work on this right now, but I can review PRs.
@ixdy I'm not sure I can write the cleanup part, but it's not high priority because it would only start doing things in v1.8 anyway... or it could be a Job instead. Anyway, I can make it so the cross-build pushes the images, sure :)
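On the cleanup question, a hedged sketch of what a periodic cleanup Job might run, deleting digests older than some cutoff; the repository name is a placeholder and the gcloud filter expression is an assumption worth verifying against the gcloud version in use:

```sh
#!/usr/bin/env bash
set -euo pipefail

repo="gcr.io/kubernetes-ci-images/kube-apiserver-amd64"   # placeholder repository
cutoff="2017-06-01"                                       # delete anything older than this date

# List digests older than the cutoff, then delete each one along with its tags.
gcloud container images list-tags "${repo}" \
    --filter="timestamp.datetime < ${cutoff}" \
    --format="get(digest)" |
while read -r digest; do
  gcloud container images delete --quiet --force-delete-tags "${repo}@${digest}"
done
```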
not yet, but we'd like to start doing this.
https://github.com/bazelbuild/rules_docker#docker_push would do this for our bazel builds.
The federation projects push the hyperkube image from CI builds, but it seems like only the 10 or so most recent tags are there. I'll check with some of those folks to see how they're doing image lifecycle management.
oh, no,
Where do the hyperkube images get pushed to? We should consolidate on a single place for pushing all ci images.
the federation hyperkube images are pushed to gcr.io/k8s-jkns-e2e-gce-federation, gcr.io/k8s-jkns-e2e-gce-f8n-1-6, and gcr.io/k8s-jkns-e2e-gce-f8n-1-7.
I don't have bandwidth right now to plumb through all of the changes needed. Things I imagine are necessary:
Other thoughts:
@ixdy we are ready to move to whatever is the "normal" way. So yeah, I am fine moving to the approach you described above, if it is built.
@madhusudancs Would you have time to implement what @ixdy described above?
Alternatively, maybe @fejta can suggest another assignee from the engprod team to help with this task?
@luxas No. But happy to delegate :)
Thinking about this a bit more, I'm concerned about cleanup of docker images on development machines. In the normal (non-release) build workflow, we build the images, save them out to tarfiles, and then delete them. With my proposal in #1400 (comment), we would eventually end up with 10s-100s of images with no clear cleanup mechanism. (I'm not sure.) Which leads me to a different proposal, similar to what Bazel does, and hearkening back to @luxas' suggestion in kubernetes/kubeadm#309:
I think this is overall less work than my earlier suggestion. This would also probably help prevent something like kubernetes/kubernetes#47307 from happening again, if we add the push-build functionality to anago, too.
I have a basic POC of my last proposal in ixdy/kubernetes@93fcdc1 and ixdy/kubernetes-release@f251200. It doesn't handle hyperkube, though, since hyperkube has a different and inconsistent workflow - we don't save it as a tarfile anywhere, and I'm not sure where we would save it. We certainly don't want to bundle it in the server tarball.
kubernetes/kubernetes#47939 and kubernetes/release#355 seem to work in local testing. If those get merged, next steps would be to update the various build jobs to set
Automatic merge from submit-queue (batch tested with PRs 47650, 47936, 47939, 47986, 48006)

Save docker image tarfiles in _output/release-images/$arch/

Additionally, add option `KUBE_BUILD_HYPERKUBE` to build hyperkube outside of the release flow.

**What this PR does / why we need it**: Saves all of the docker tarfiles in a separate directory that the release scripts can use to push to a docker registry. This is easier than trying to guess which images should be pushed from the local docker engine, and supports work in kubernetes/test-infra#1400. If we eventually use this for the official release workflow (`anago`), this may prevent something like #47307 from happening again.

**Release note**:

```release-note
NONE
```

/release-note-none
/assign @luxas @david-mcmahon
cc @madhusudancs @roberthbailey
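A hedged sketch of how the tarfiles saved under `_output/release-images/$arch/` could then be loaded and pushed by a release script; it assumes each tarfile loads under its original name:tag (e.g. a gcr.io/google_containers prefix), which is then retagged into a placeholder CI registry:

```sh
registry="gcr.io/kubernetes-ci-images"   # placeholder target registry
arch="amd64"

for tarfile in _output/release-images/"${arch}"/*.tar; do
  # `docker load` prints e.g. "Loaded image: gcr.io/google_containers/kube-apiserver-amd64:v1.8.0-...".
  loaded=$(docker load -i "${tarfile}" | awk '/Loaded image:/ {print $3}')
  name="${loaded##*/}"                   # strip the original registry prefix, keep name:tag
  docker tag "${loaded}" "${registry}/${name}"
  gcloud docker -- push "${registry}/${name}"
done
```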
we probably need to shave the #3207 yak before this will work for the CI cross build.
another thing I'm trying to figure out: which builds should produce a
I thought we were set up so that for each PR we did a single "build" step and then let e2e tests run against that build. In that case, then only the build step would need to build & push images (including hyperkube) and all per-PR tests could be run assuming that the images exist in gcr.io already.
For CI testing we have a single build job which produces the binaries that all of the e2e test jobs use.
Perfect!
So basically the bazel or kubernetes-build will be the only "real" presubmits and then we'll hook up a lot of other tests in
Indeed. But I think the federation job depends on hyperkube... With those improvements we would still be much better off than we are now. And then the ci-cross job would actually push some builds. Due to its periodic nature, we wouldn't get it run for every single commit, but I don't think that's a problem. It would be cool to have tags like
That's the idea, yes.

Re: hyperkube: I'm considering creating a
It'll solve the flakiness and slowness concerns about building the hyperkube image every time, and it'll make building the hyperkube image in bazel (something I haven't done yet) much easier. (You can manage a bunch of deb dependencies in bazel, but it's a pain.)

I'm planning to get the ci-cross job building everything (including hyperkube) and pushing to gcr.io very soon. That should at least enable downstream testing and integration of the artifacts. I'll tackle the hyperkube image and other jobs (e.g. pull jobs, eliminating federation redundancy, etc.) after that.

Re: tags - I worry about that enabling an anti-pattern we want to discourage. I think we don't want to support loading a cluster from arbitrary moving tags, since that may result in different components having different versions, especially if nodes are added later. If the tags are fully resolved at cluster start time that might be OK, but I worry that's not how they'd be used. In any case, that's a whole different discussion. :)
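On the tag concern, one way to stay consistent is to resolve a moving tag to an immutable digest once, at cluster start, and have every node pull by digest; a hedged sketch, where the repository and the `latest` tag are placeholders and the gcloud filter expression is an assumption to verify:

```sh
repo="gcr.io/kubernetes-ci-images/hyperkube-amd64"
tag="latest"   # hypothetical moving tag

# Resolve the tag to a digest once, at cluster creation time.
digest=$(gcloud container images list-tags "${repo}" \
    --filter="tags=${tag}" --format="get(digest)")

# Every node then pulls the exact same bits, even if the tag moves later.
docker pull "${repo}@${digest}"
```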
ARGH:
this is a double-fail:
First set of images have appeared:
$ gcloud container images list --repository gcr.io/kubernetes-ci-images
NAME
gcr.io/kubernetes-ci-images/cloud-controller-manager-amd64
gcr.io/kubernetes-ci-images/cloud-controller-manager-arm
gcr.io/kubernetes-ci-images/cloud-controller-manager-arm64
gcr.io/kubernetes-ci-images/cloud-controller-manager-ppc64le
gcr.io/kubernetes-ci-images/cloud-controller-manager-s390x
gcr.io/kubernetes-ci-images/cloud-controller-manager
gcr.io/kubernetes-ci-images/hyperkube-amd64
gcr.io/kubernetes-ci-images/hyperkube-arm
gcr.io/kubernetes-ci-images/hyperkube-arm64
gcr.io/kubernetes-ci-images/hyperkube-ppc64le
gcr.io/kubernetes-ci-images/hyperkube-s390x
gcr.io/kubernetes-ci-images/hyperkube
gcr.io/kubernetes-ci-images/kube-aggregator-amd64
gcr.io/kubernetes-ci-images/kube-aggregator-arm
gcr.io/kubernetes-ci-images/kube-aggregator-arm64
gcr.io/kubernetes-ci-images/kube-aggregator-ppc64le
gcr.io/kubernetes-ci-images/kube-aggregator-s390x
gcr.io/kubernetes-ci-images/kube-aggregator
gcr.io/kubernetes-ci-images/kube-apiserver-amd64
gcr.io/kubernetes-ci-images/kube-apiserver-arm
gcr.io/kubernetes-ci-images/kube-apiserver-arm64
gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le
gcr.io/kubernetes-ci-images/kube-apiserver-s390x
gcr.io/kubernetes-ci-images/kube-apiserver
gcr.io/kubernetes-ci-images/kube-controller-manager-amd64
gcr.io/kubernetes-ci-images/kube-controller-manager-arm
gcr.io/kubernetes-ci-images/kube-controller-manager-arm64
gcr.io/kubernetes-ci-images/kube-controller-manager-ppc64le
gcr.io/kubernetes-ci-images/kube-controller-manager-s390x
gcr.io/kubernetes-ci-images/kube-controller-manager
gcr.io/kubernetes-ci-images/kube-proxy-amd64
gcr.io/kubernetes-ci-images/kube-proxy-arm
gcr.io/kubernetes-ci-images/kube-proxy-arm64
gcr.io/kubernetes-ci-images/kube-proxy-ppc64le
gcr.io/kubernetes-ci-images/kube-proxy-s390x
gcr.io/kubernetes-ci-images/kube-proxy
gcr.io/kubernetes-ci-images/kube-scheduler-amd64
gcr.io/kubernetes-ci-images/kube-scheduler-arm
gcr.io/kubernetes-ci-images/kube-scheduler-arm64
gcr.io/kubernetes-ci-images/kube-scheduler-ppc64le
gcr.io/kubernetes-ci-images/kube-scheduler-s390x
gcr.io/kubernetes-ci-images/kube-scheduler
$ gcloud container images list-tags gcr.io/kubernetes-ci-images/hyperkube-amd64
DIGEST        TAGS                               TIMESTAMP
1c31b4837f61  v1.8.0-alpha.1.602_231c0783ed97bc  2017-06-29T21:46:20
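For anyone wanting to try these out, a hedged example of pulling one of the images above by the tag shown in the listing:

```sh
# The "+" in the build version becomes "_" in the tag, since "+" is not a valid docker tag character.
docker pull gcr.io/kubernetes-ci-images/hyperkube-amd64:v1.8.0-alpha.1.602_231c0783ed97bc
```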
@ixdy WOOOOT 🎉!!! I'll go ahead and make kubeadm able to pick those up 👍
FYI
@ixdy I think this can be closed now, right?
I'd like to get federation builds using these images, and we might want to have some PR jobs (e.g. the kubeadm one) uploading their images, but those can probably be separate efforts.
You choose ;)
Hi,
Can we make the cross-build push the kube-apiserver, controller-manager, scheduler, proxy and hyperkube images on every run? I think it should be pretty straightforward code-wise.
They could and should be in another repository than `gcr.io/google_containers`, for instance `gcr.io/kubernetes-ci`. We should soon start the migration from `gcr.io/google_containers` to something more kubernetes-specific and where we can set different ACLs anyway.
Many, many users have asked for this, and it would indeed make it much more convenient to test k8s against HEAD.
Can we make this happen soon? Thoughts?
cc @ixdy @spxtr @rmmh @jessfraz
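To make the ask concrete, a hedged sketch of the push step the cross build could run after it has built and loaded the server images, looping over the components named above and the architectures the cross build targets; the registry, the build version, and the gcr.io/google_containers source prefix are placeholders/assumptions about how the build tags its images:

```sh
registry="gcr.io/kubernetes-ci"            # suggested target; placeholder until a registry is decided
version="v1.8.0-alpha.0.1234+abcdef0123"   # placeholder CI build version
tag="${version//+/_}"                      # "+" is not a valid docker tag character

for component in kube-apiserver kube-controller-manager kube-scheduler kube-proxy hyperkube; do
  for arch in amd64 arm arm64 ppc64le s390x; do
    image="${component}-${arch}:${tag}"
    # Assumption: the cross build tags images under gcr.io/google_containers locally.
    docker tag "gcr.io/google_containers/${image}" "${registry}/${image}"
    gcloud docker -- push "${registry}/${image}"
  done
done
```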