
Different echoserver images used in different places #15650

Closed
afbjorklund opened this issue Jan 15, 2023 · 19 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@afbjorklund
Collaborator

afbjorklund commented Jan 15, 2023

The old k8s.gcr.io/echoserver:1.4 was replaced with different images:

https://kubernetes.io/docs/tutorials/hello-minikube/

kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.39 -- /agnhost netexec --http-port=8080

https://minikube.sigs.k8s.io/docs/start/

kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
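
(For context, both commands create an HTTP echo deployment that can be smoke-tested the same way; a minimal sketch using the hello-minikube variant, assuming minikube's service tunnel is available:)

# expose the deployment on a NodePort and hit it once
kubectl expose deployment hello-minikube --type=NodePort --port=8080
curl "$(minikube service hello-minikube --url)"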


It would probably be better to have this image hosted elsewhere?

i.e. not "registry.k8s.io/e2e-test-images", not "docker.io/kicbase"

The kubernetes.io image is also much larger than the minikube one:

REPOSITORY                                TAG       IMAGE ID       CREATED        SIZE
registry.k8s.io/e2e-test-images/agnhost   2.39      a05bd3a9140b   7 months ago   127MB
alpine                                    3.12      24c8ece58a1a   9 months ago   5.58MB
kicbase/echo-server                       1.0       9056ab77afb8   6 months ago   4.94MB
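
(For reference, the size comparison above was presumably produced with docker images; something like this reproduces it:)

docker pull registry.k8s.io/e2e-test-images/agnhost:2.39
docker pull alpine:3.12
docker pull kicbase/echo-server:1.0
docker images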

The minikube image is based on https://github.com/jmalloc/echo-server

@afbjorklund afbjorklund added the kind/documentation Categorizes issue or PR as related to documentation. label Jan 15, 2023
@afbjorklund
Collaborator Author

cc @spowelljr

@nitishfy
Member

> It would probably be better to have this image hosted elsewhere?
>
> i.e. not "registry.k8s.io/e2e-test-images", not "docker.io/kicbase"
>
> The kubernetes.io image is also much larger than the minikube one:
>
> REPOSITORY                                TAG       IMAGE ID       CREATED        SIZE
> registry.k8s.io/e2e-test-images/agnhost   2.39      a05bd3a9140b   7 months ago   127MB
> alpine                                    3.12      24c8ece58a1a   9 months ago   5.58MB
> kicbase/echo-server                       1.0       9056ab77afb8   6 months ago   4.94MB
>
> The minikube image is based on https://github.com/jmalloc/echo-server

After all, we have to come up with one image that is common to both. Is registry.k8s.io/e2e-test-images/agnhost:2.39 the image you're suggesting should be hosted elsewhere because of its size?

The image mentioned here is much smaller than the one mentioned in the Kubernetes docs. However, the minikube image turns out to have more vulnerabilities than the one mentioned in the Kubernetes documentation.

[screenshot: vulnerability scan comparison]

Before updating the docs image (if we want to), I'd like to know what the other maintainers have to say about this.
@spowelljr Could you also describe the capabilities of the kicbase/echo-server:1.0 image, apart from its size? A good description of the registry.k8s.io/e2e-test-images/agnhost:2.39 image is given by @mtardy below:

> The previous version of the image did not support arm64. This one supports amd64, arm, arm64, ppc64le, s390x on Linux, and amd64 on multiple Windows versions.

@afbjorklund
Collaborator Author

afbjorklund commented Jan 15, 2023

The size isn't all that important, but "e2e-test-images" sounds like it is meant for a different purpose (as does "kicbase").

The command is somewhat hard to remember as well, referring to /agnhost netexec --http-port=8080

So it would probably be better to have a dedicated place for "getting started" images, as I'm sure there are more?

And they should probably use registry.k8s.io rather than docker.io (since that is the trend elsewhere*)

* the main exception being the Kubernetes dashboard

@afbjorklund
Collaborator Author

afbjorklund commented Jan 15, 2023

If the minikube documentation is going to stay at the current location, we could make a PR for the documentation site.

I see it as possible that "Hello, Minikube" will disappear with the Katacoda removal and will just link to the minikube site?

[screenshot: Learn Kubernetes Basics topics]

https://kubernetes.io/docs/tutorials/hello-minikube/ -> https://minikube.sigs.k8s.io/docs/start/

The "Create Cluster" also needs modifications, but I opened up a separate issue for that topic: #15651

@nitishfy
Member

> If the minikube documentation is going to stay at the current location, we could make a PR for the documentation site.
>
> I see it as possible that "Hello, Minikube" will disappear with the Katacoda removal and will just link to the minikube site?
>
> [screenshot: Learn Kubernetes Basics topics]
>
> https://kubernetes.io/docs/tutorials/hello-minikube/ -> https://minikube.sigs.k8s.io/docs/start/
>
> The "Create Cluster" also needs modifications, but I opened up a separate issue for that topic: #15651

That's a good approach you've suggested. However, what if we modify the Hello Minikube documentation and replace Katacoda with Killercoda?

@nitishfy
Member

> If the minikube documentation is going to stay at the current location, we could make a PR for the documentation site.
>
> I see it as possible that "Hello, Minikube" will disappear with the Katacoda removal and will just link to the minikube site?
>
> [screenshot: Learn Kubernetes Basics topics]
>
> https://kubernetes.io/docs/tutorials/hello-minikube/ -> https://minikube.sigs.k8s.io/docs/start/
>
> The "Create Cluster" also needs modifications, but I opened up a separate issue for that topic: #15651

@kubernetes/minikube-maintainers Kindly take a look at this!

@afbjorklund

This comment was marked as off-topic.

@mtardy
Member

mtardy commented Jan 15, 2023

Thanks Nitish for the ping! He's right, we changed the image because we got many complaints about the previous command not working on arm64. kubernetes/website#37383

We decided to pick this image because it is a toolbox image actually maintained by the infra SIG! I agree that the command line is slightly uglier, but at least this one comes at no additional maintenance cost and works on all platforms.

@afbjorklund
Collaborator Author

afbjorklund commented Jan 15, 2023

Well, it did spend too much time in the backlog. And it was a bit surprising that the amd64-only image was left in place for 5 years.

There could be some better communication, though. We have a similar image for the registry, abandoned 5 years ago...

@spowelljr
Member

I agree that the name registry.k8s.io/e2e-test-images/agnhost doesn't sound like a hello-world example and the command args aren't ideal, but neither of those is a huge issue.

For some examples, such as ingress, if we use the agnhost image we'll probably want to tell the user to use the /hostname endpoint so they can see which pod they're hitting and confirm their routing is working.

Current ingress example:

$ curl 192.168.49.2/foo
Request served by foo-app
...

$ curl 192.168.49.2/bar
Request served by bar-app
...
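
(If the examples switched to agnhost, the equivalent check would presumably go through netexec's /hostname endpoint; the responses below are illustrative, not captured from a real cluster:)

$ curl 192.168.49.2/foo/hostname
foo-app-...

$ curl 192.168.49.2/bar/hostname
bar-app-...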

With regard to the security comment, when I scan the agnhost image I get back a 9.8 critical CVE:

Testing registry.k8s.io/e2e-test-images/agnhost:2.39...

✗ Critical severity vulnerability found in zlib/zlib
  Description: Out-of-bounds Write
  Info: https://security.snyk.io/vuln/SNYK-ALPINE312-ZLIB-2977082
  Introduced through: zlib/zlib@1.2.12-r0, apk-tools/apk-tools@2.10.8-r1, libxml2/libxml2@2.9.14-r0, bind/bind-libs@9.16.27-r1, curl/libcurl@7.79.1-r1, curl/curl@7.79.1-r1, elfutils/libelf@0.179-r0, protobuf/libprotobuf@3.12.2-r0
  From: zlib/zlib@1.2.12-r0
  From: apk-tools/apk-tools@2.10.8-r1 > zlib/zlib@1.2.12-r0
  From: libxml2/libxml2@2.9.14-r0 > zlib/zlib@1.2.12-r0
  and 5 more...
  Image layer: 'apk --update add bind-tools curl netcat-openbsd iproute2 iperf bash'
  Fixed in: 1.2.12-r2

https://nvd.nist.gov/vuln/detail/CVE-2022-37434

And a warning about the Alpine version: Alpine 3.12.12 is no longer supported by the Alpine maintainers.
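
(The scan output and the vulnerability link above look like Snyk's; assuming the Snyk CLI is installed and authenticated, a scan like this can be reproduced with:)

snyk container test registry.k8s.io/e2e-test-images/agnhost:2.39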

We don't want to be managing the images, we just wanted to update the docs so arm64 users weren't failing to run the examples and getting turned off.

@BenTheElder
Member

> With regard to the security comment, when I scan the agnhost image I get back a 9.8 critical CVE

Please consider reporting this sort of thing back to the original source repo.
https://github.com/kubernetes/kubernetes/blob/master/test/images/agnhost/README.md

This may already be fixed, as 2.43 is the current tag.

@mtardy
Member

mtardy commented Mar 29, 2023

> With regard to the security comment, when I scan the agnhost image I get back a 9.8 critical CVE
>
> Please consider reporting this sort of thing back to the original source repo. https://github.com/kubernetes/kubernetes/blob/master/test/images/agnhost/README.md
>
> This may already be fixed, as 2.43 is the current tag.

Maybe it would actually be more reasonable to use the latest tag in this specific tutorial, wdyt?

@afbjorklund
Collaborator Author

afbjorklund commented Mar 29, 2023

I think you can open an issue with SIG Docs if you want to change the legacy tutorial before it is deleted:

https://kubernetes.io/blog/2023/02/14/kubernetes-katacoda-tutorials-stop-from-2023-03-31/

@BenTheElder
Member

> Maybe it would actually be more reasonable to use the latest tag in this specific tutorial, wdyt?

Kubernetes does not publish mutable tags to the production registry, by policy.
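
(So pinning a newer version would mean an explicit tag bump in the docs instead, e.g., assuming 2.43 is still the newest agnhost tag:)

kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost netexec --http-port=8080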

@afbjorklund
Collaborator Author

afbjorklund commented Mar 29, 2023

The rest of the tutorial is 5+ years old, so this image is comparatively new anyway (but it is still a good idea to report and bump it).

Kubernetes 1.20, Ubuntu 18.04, and so on

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 27, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Feb 18, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
>   • After 90d of inactivity, lifecycle/stale is applied
>   • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
>   • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
>
> You can:
>
>   • Reopen this issue with /reopen
>   • Mark this issue as fresh with /remove-lifecycle rotten
>   • Offer to help out with Issue Triage
>
> Please send feedback to sig-contributor-experience at kubernetes/community.
>
> /close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
