
Multiarch support for registry addon #10780

Open
medyagh opened this issue Mar 10, 2021 · 13 comments
Labels

  • area/addons
  • area/registry: registry related issues
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@medyagh
Member

medyagh commented Mar 10, 2021

If yes, add an integration test for multi-arch.

@medyagh medyagh added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/feature Categorizes issue or PR as related to a new feature. labels Mar 10, 2021
@ilya-zuyev
Contributor

/assign

@afbjorklund
Collaborator

afbjorklund commented Mar 11, 2021

Hi @ilya-zuyev

You will find that the registry itself is already multi-arch (well, amd64/arm/arm64), but the registry-proxy needs updating...
That was probably needed anyway, and I don't think it will be a major problem, since nginx is multi-arch (being Debian-based).

https://hub.docker.com/_/registry?tab=tags
https://hub.docker.com/_/nginx?tab=tags
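For background, a multi-arch tag is just a manifest list that points at one image manifest per platform. A quick sketch of reading the supported platforms out of such a document (the JSON below is abbreviated and made up, not a real manifest):

```python
import json

# Abbreviated, illustrative manifest list for a multi-arch image
# (media type: application/vnd.docker.distribution.manifest.list.v2+json).
# The digests are placeholders, not real values.
manifest_list = json.loads("""
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "arm64", "os": "linux"}},
    {"digest": "sha256:ccc", "platform": {"architecture": "arm", "os": "linux"}}
  ]
}
""")

def platforms(ml):
    """Return 'os/arch' for every entry in a manifest list."""
    return [f"{m['platform']['os']}/{m['platform']['architecture']}"
            for m in ml.get("manifests", [])]

print(platforms(manifest_list))  # → ['linux/amd64', 'linux/arm64', 'linux/arm']
```

A single-arch tag, by contrast, points straight at one image manifest with no "manifests" array, which is why per-platform tooling has to special-case it.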

The registry-proxy hasn't seen any updates since it was abandoned (in 2017):

kubernetes/kubernetes@6f48d86
kubernetes/kubernetes@d6918bb

FROM nginx:1.11

RUN apt-get update \
	&& apt-get install -y \
		curl \
		--no-install-recommends \
	&& apt-get clean \
	&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /usr/share/man /usr/share/doc

COPY rootfs /

CMD ["/bin/boot"]
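If the proxy image were rebuilt on a current nginx base, publishing it for several architectures should then mostly be a matter of a buildx invocation against this Dockerfile. A sketch only (the registry and tag are placeholders, and this requires a Docker daemon with a buildx builder, so it is untested here):

```
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm \
  -t <registry>/kube-registry-proxy:<tag> \
  --push .
```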

The registry version is the latest available (from 2019):

https://github.com/docker/distribution

I think it is on the same kind of "life support" as machine?

https://www.docker.com/blog/donating-docker-distribution-to-the-cncf/


It would be great if we could have a proper registry deployment one day, with storage and with certificates.
The current hack with the localhost:5000 proxy to get around the "insecure" daemon settings isn't great...

See the old README

https://docs.docker.com/registry/deploying/

But for now, we will continue to promote just using the container runtime on the control plane directly.
This is similar to using hostpath as the default PV storage: it is simpler for a single-node deployment...

@afbjorklund afbjorklund added area/registry registry related issues area/addons labels Mar 11, 2021
@ilya-zuyev
Contributor

Hi @afbjorklund! Thanks for the info. In this issue we also want to test how our registry addon handles multi-arch images, including whether it is possible to use it with docker buildx --push ... and docker manifest push

@medyagh
Member Author

medyagh commented Mar 22, 2021

@ilya-zuyev let's update the findings with logs and current blockers

@ilya-zuyev
Contributor

It looks like we have work to do here:

Tested on Ubuntu 20.10 x86_64:

ilyaz@skeletron --- g/minikube ‹master› » m version                                                                                                                                                                 130 ↵
minikube version: v1.18.1
commit: a05f887651bd65102c6559f3c30439af3e792427
ilyaz@skeletron --- g/minikube ‹master› » 
ilyaz@skeletron --- g/minikube ‹master› » docker version
Client: Docker Engine - Community
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:17:52 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:15:47 2021
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
ilyaz@skeletron --- g/minikube ‹master› » m start --driver=docker --addons=registry
* minikube v1.18.1 on Ubuntu 20.10
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v9-v1....: 491.22 MiB / 491.22 MiB  100.00% 6.39 MiB
* Creating docker container (CPUs=2, Memory=8000MB) ...

X Docker is nearly out of disk space, which may cause deployments to fail! (88% of capacity)
* Suggestion: 

    Try one or more of the following to free up space on the device:
    
    1. Run "docker system prune" to remove unused Docker data (optionally with "-a")
    2. Increase the storage allocated to Docker for Desktop by clicking on:
    Docker icon > Preferences > Resources > Disk Image Size
    3. Run "minikube ssh -- docker system prune" if using the Docker container runtime
* Related issue: https://github.com/kubernetes/minikube/issues/9024

* Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image registry:2.7.1
  - Using image gcr.io/google_containers/kube-registry-proxy:0.4
  - Using image gcr.io/k8s-minikube/storage-provisioner:v4
* Verifying registry addon...
* Enabled addons: storage-provisioner, default-storageclass, registry

! /home/ilyaz/google-cloud-sdk/bin/kubectl is version 1.17.17-dispatcher, which may have incompatibilites with Kubernetes 1.20.2.
  - Want kubectl v1.20.2? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default



ilyaz@skeletron --- g/minikube ‹master› » m addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| ambassador                  | minikube | disabled     |
| auto-pause                  | minikube | disabled     |
| csi-hostpath-driver         | minikube | disabled     |
| dashboard                   | minikube | disabled     |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gcp-auth                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | disabled     |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| kubevirt                    | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metallb                     | minikube | disabled     |
| metrics-server              | minikube | disabled     |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| olm                         | minikube | disabled     |
| pod-security-policy         | minikube | disabled     |
| registry                    | minikube | enabled ✅   |
| registry-aliases            | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
| volumesnapshots             | minikube | disabled     |
|-----------------------------|----------|--------------|


ilyaz@skeletron --- g/minikube ‹master› » kck port-forward svc/registry 5000:80                                                                                                                                                
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000

Then:

ilyaz@skeletron --- tmp/img » curl -Li localhost:5000/v2/_catalog
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Tue, 23 Mar 2021 21:52:07 GMT
Content-Length: 20

{"repositories":[]}

OK, the registry has started and serves its API on local port 5000.
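Since the catalog endpoint returns plain JSON, it is easy to check programmatically too. A small sketch (the helper name is made up; the two payloads are the exact responses seen in this session):

```python
import json

def repositories(catalog_body):
    """Parse the body of GET /v2/_catalog into a list of repository names."""
    return json.loads(catalog_body).get("repositories", [])

# The empty catalog before pushing, and the catalog after pushing foo:
print(repositories('{"repositories":[]}'))       # → []
print(repositories('{"repositories":["foo"]}'))  # → ['foo']
```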

Let's build some images:

ilyaz@skeletron --- tmp/img » cat Dockerfile 
FROM alpine

CMD "echo boom"

ilyaz@skeletron --- tmp/img » docker build -t localhost:5000/foo:bar .                                                                      
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM alpine
 ---> a24bb4013296
Step 2/2 : CMD "echo boom"
 ---> Using cache
 ---> 007fcd1efad4
Successfully built 007fcd1efad4
Successfully tagged localhost:5000/foo:bar
ilyaz@skeletron --- tmp/img » docker -D push localhost:5000/foo:bar                                                                         
The push refers to repository [localhost:5000/foo]
50644c29ef5a: Pushed 
bar: digest: sha256:0f6e5d9bac509123c0d6e6179ca068747dfd5d6f324c2bb3b2276efda8a0abe9 size: 528
ilyaz@skeletron --- tmp/img » curl -Li localhost:5000/v2/_catalog  
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Tue, 23 Mar 2021 21:52:50 GMT
Content-Length: 25

{"repositories":["foo"]}

Single arch works.
But:

ilyaz@skeletron --- tmp/img » docker -D  manifest create localhost:5000/march-foo localhost:5000/foo:bar   
DEBU[0000] endpoints for localhost:5000/foo:bar: [{false https://localhost:5000 v2 false false true 0xc000502d80} {false http://localhost:5000 v2 false false true 0xc000502d80}] 
DEBU[0000] skipping non-tls registry endpoint: http://localhost:5000 
DEBU[0000] skipping non-tls registry endpoint: http://localhost:5000 
no such manifest: localhost:5000/foo:bar

docker manifest create doesn't work, though :(
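The debug lines show the client skipping the plain-HTTP endpoint. For what it's worth, the docker manifest subcommands have an --insecure flag for exactly this situation; whether it is enough here is untested, so treat this as a sketch:

```
# Untested sketch: allow plain HTTP when assembling and pushing the manifest list.
docker manifest create --insecure localhost:5000/march-foo localhost:5000/foo:bar
docker manifest push --insecure localhost:5000/march-foo
```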

Let's try buildx:

ilyaz@skeletron --- tmp/img » docker buildx create --name zbuilder --use
zbuilder

ilyaz@skeletron --- tmp/img » docker -D buildx build --push --builder zbuilder --platform linux/amd64,linux/arm64 -t localhost:5000/foo-m .                                                                                    

DEBU[0000] using default config store "/home/ilyaz/.docker/buildx" 
DEBU[0000] serving grpc connection                      
[+] Building 0.0s (0/1)                                                                                                                                                                                                              
[+] Building 0.1s (4/4) FINISHED                                                                                                                                                                                                     
 => [internal] load build definition from Dockerfile                                                                                                                                                                            0.0s
 => => transferring dockerfile: 31B                                                                                                                                                                                             0.0s
 => [internal] load .dockerignore                                                                                                                                                                                               0.0s
 => => transferring context: 2B                                                                                                                                                                                                 0.0s

error: failed to solve: rpc error: code = Unknown desc = failed to do request: Head http://localhost:5000/v2/foo-m/blobs/sha256:069a56d6d07f6b186fbb82e4486616b9be9a37ce32a63013af6cddcb65898182: dial tcp 127.0.0.1:5000: connect: connection refused
1 v0.8.2 buildkitd
github.com/containerd/containerd/remotes/docker.(*request).do
        /src/vendor/github.com/containerd/containerd/remotes/docker/resolver.go:544
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries
        /src/vendor/github.com/containerd/containerd/remotes/docker/resolver.go:551
github.com/containerd/containerd/remotes/docker.dockerPusher.Push
        /src/vendor/github.com/containerd/containerd/remotes/docker/pusher.go:88
github.com/containerd/containerd/remotes.push
        /src/vendor/github.com/containerd/containerd/remotes/handlers.go:154
github.com/containerd/containerd/remotes.PushHandler.func1
        /src/vendor/github.com/containerd/containerd/remotes/handlers.go:146
github.com/moby/buildkit/util/resolver/retryhandler.New.func1
        /src/util/resolver/retryhandler/retry.go:20
github.com/moby/buildkit/util/push.updateDistributionSourceHandler.func1
        /src/util/push/push.go:266
github.com/moby/buildkit/util/push.dedupeHandler.func1.1
        /src/util/push/push.go:295
github.com/moby/buildkit/util/flightcontrol.(*call).run
        /src/util/flightcontrol/flightcontrol.go:121
sync.(*Once).doSlow
        /usr/local/go/src/sync/once.go:66
sync.(*Once).Do
        /usr/local/go/src/sync/once.go:57
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

116816 v0.5.1-docker /usr/libexec/docker/cli-plugins/docker-buildx -D buildx build --push --builder zbuilder --platform linux/amd64,linux/arm64 -t localhost:5000/foo-m .
github.com/docker/buildx/vendor/google.golang.org/grpc.(*ClientConn).Invoke
        /go/src/github.com/docker/buildx/vendor/google.golang.org/grpc/call.go:35
github.com/docker/buildx/vendor/github.com/moby/buildkit/api/services/control.(*controlClient).Solve
        /go/src/github.com/docker/buildx/vendor/github.com/moby/buildkit/api/services/control/control.pb.go:1321
github.com/docker/buildx/vendor/github.com/moby/buildkit/client.(*Client).solve.func2
        /go/src/github.com/docker/buildx/vendor/github.com/moby/buildkit/client/solve.go:201
github.com/docker/buildx/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
        /go/src/github.com/docker/buildx/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

116816 v0.5.1-docker /usr/libexec/docker/cli-plugins/docker-buildx -D buildx build --push --builder zbuilder --platform linux/amd64,linux/arm64 -t localhost:5000/foo-m .
github.com/docker/buildx/vendor/github.com/moby/buildkit/client.(*Client).solve.func2
        /go/src/github.com/docker/buildx/vendor/github.com/moby/buildkit/client/solve.go:214
github.com/docker/buildx/vendor/golang.org/x/sync/errgroup.(*Group).Go.func1
        /go/src/github.com/docker/buildx/vendor/golang.org/x/sync/errgroup/errgroup.go:57
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1357

although:

ilyaz@skeletron --- tmp/img » curl --head  -Li  http://localhost:5000/v2/foo-m/blobs/sha256:ba3557a56b150f9b813f9d02274d62914fd8fce120dd374d9ee17b87cf1d277d                                                                     1 ↵
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Tue, 23 Mar 2021 21:56:12 GMT
Content-Length: 157
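One likely contributor to the failure: with the docker-container driver, buildkitd runs inside its own container, so localhost:5000 inside the builder does not reach the host's port-forward. A possible workaround sketch (both the host-networking option and the plain-HTTP opt-in are assumptions to verify, not a tested fix): recreate the builder with host networking and mark the registry as plain HTTP in a buildkitd config:

```
# buildkitd.toml (config fragment):
#   [registry."localhost:5000"]
#     http = true

docker buildx rm zbuilder
docker buildx create --name zbuilder --use \
  --driver-opt network=host \
  --config buildkitd.toml
```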

@ilya-zuyev
Contributor

We probably need to serve an HTTPS registry endpoint to make buildx happy. Currently, the addon supports only HTTP.

@medyagh medyagh added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Apr 16, 2021
@medyagh medyagh changed the title Investigate if minikube registry addon supports multiarch Add support for Registry addon on Arm64 Apr 16, 2021
@medyagh
Member Author

medyagh commented Apr 16, 2021

This issue is available to pick up for anyone interested; I would accept a PR.

@medyagh medyagh modified the milestones: v1.20.0, v1.21.0-candidate May 3, 2021
@ilya-zuyev ilya-zuyev self-assigned this May 3, 2021
@medyagh medyagh changed the title Add support for Registry addon on Arm64 Registry addon support multiarch (arm64) May 27, 2021
@spowelljr spowelljr modified the milestones: v1.21.0, 1.22.0-candidate May 27, 2021
@sharifelgamal sharifelgamal changed the title Registry addon support multiarch (arm64) Multiarch support for registry addon Jun 7, 2021
@ilya-zuyev ilya-zuyev self-assigned this Jun 8, 2021
@sharifelgamal sharifelgamal removed this from the 1.22.0-candidate milestone Jun 14, 2021
@medyagh
Member Author

medyagh commented Jul 26, 2021

This issue is available to pick up.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 26, 2021
@sharifelgamal sharifelgamal removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 7, 2022
@spowelljr spowelljr self-assigned this Jan 7, 2022
@spowelljr spowelljr removed their assignment Jan 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 26, 2022
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 28, 2022
@pilhuhn

pilhuhn commented May 11, 2022

Does the 1.26 milestone assignment mean this may be fixed in 1.26?

@spowelljr spowelljr removed this from the 1.26.0 milestone May 11, 2022
@spowelljr
Member

That would have been correct: it was something we planned on doing for this milestone, but other things took priority, so I've removed the milestone from this issue.

@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Aug 3, 2022
@zjx20
Contributor

zjx20 commented Nov 24, 2023

Probably, we need to serve HTTPS registry endpoint to make buildx happy. Currently, addon supports only HTTP

There seems to be another problem. I've tried adding an HTTPS reverse proxy in front of the registry addon using stunnel, but buildx still reports errors (while docker push works fine through the same proxy).

#19 ERROR: failed to push 192.168.44.28:5001/testimage:v0.0.1: failed to do request: Head "https://192.168.44.28:5001/v2/open-local/blobs/sha256:c3c0e0e9df293d62b09b768b9179a4d876c39faabd6cdd40c0a4d26cb6881742": dial tcp 192.168.44.28:5001: i/o timeout

The stunnel command is:

docker run --network=host -itd --name minikube-registry-proxy \
    -e STUNNEL_SERVICE=registry \
    -e STUNNEL_ACCEPT=5001 \
    -e STUNNEL_CONNECT=$(minikube ip):5000 \
    -p 5001:5001 \
    dweomer/stunnel

Projects
None yet
Development

No branches or pull requests

9 participants