Outdated description of minikube in upstream documentation #15651
Related issues:
And also: |
So the new section is something like: A Kubernetes cluster can be deployed on either physical or virtual machines. To get started with Kubernetes development, you can use Minikube, Kind, or Kubeadm. minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.
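For orientation, a minimal sketch of how each of the three tools named above brings up a cluster (exact flags and required privileges depend on the environment):
$ minikube start          # local single-node cluster, driver chosen automatically
$ kind create cluster     # cluster nodes run as Docker containers
$ sudo kubeadm init       # turns the current machine into a control-plane node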
For this tutorial, however, you'll use a provided online terminal with Kubernetes pre-installed. Now that you know what Kubernetes is, let's go to the online tutorial and use our first cluster!
2023:
root@controlplane:~# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:58:30Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.0", GitCommit:"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d", GitTreeState:"clean", BuildDate:"2022-12-08T19:51:45Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
root@controlplane:~# kubectl cluster-info
Kubernetes control plane is running at https://172.30.1.2:6443
CoreDNS is running at https://172.30.1.2:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@controlplane:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
controlplane Ready control-plane 23d v1.26.0
root@controlplane:~#
Congratulations on completing the module
Where the old section currently reads like: https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
2021:
root@minikube:~# minikube version
minikube version: v1.18.0
commit: ec61815d60f66a6e4f6353030a40b12362557caa-dirty
root@minikube:~# minikube start
* minikube v1.18.0 on Ubuntu 18.04 (amd64)
* Using the none driver based on existing profile
X The requested memory allocation of 2200MiB does not leave room for system overhead (total system memory: 2460MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=2200mb'
* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=2, Memory=2460MB, Disk=194868MB) ...
* OS release is Ubuntu 18.04.5 LTS
* Preparing Kubernetes v1.20.2 on Docker 19.03.13 ...
- kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring local host environment ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v4
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
root@minikube:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
root@minikube:~# kubectl cluster-info
Kubernetes control plane is running at https://10.0.0.18:8443
KubeDNS is running at https://10.0.0.18:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@minikube:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 24s v1.20.2
root@minikube:~#
Congratulations on completing the module
Note: the first two steps take around 15-30 seconds. Not too bad, but then again not too fun? By separating cluster installation from cluster usage, the user can dive right into using it, especially now when the start forces the user to make lots of deployment decisions up front. |
The main problem with the kubeadm method (both minikube and kind use kubeadm for the actual installation) is that there are a lot of hidden requirements beforehand:
https://kubernetes.io/docs/setup/production-environment/
Any addons would also have to be deployed manually, meaning that there is no dashboard/metrics-server and no default-storageclass/storage-provisioner. At least not until they have been deployed with the usual upstream method, that is (a rough sketch follows after the addons table below).
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/
https://kubernetes.io/docs/concepts/storage/storage-classes/
$ minikube addons list
|-----------------------------|----------|--------------|
| ADDON NAME | PROFILE | STATUS |
|-----------------------------|----------|--------------|
| dashboard | minikube | enabled ✅ |
| default-storageclass | minikube | enabled ✅ |
| metrics-server | minikube | enabled ✅ |
| storage-provisioner | minikube | enabled ✅ |
|-----------------------------|----------|--------------|
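As an aside, the "usual upstream method" referred to above is roughly the following (a sketch; the manifest URLs and versions are examples and should be checked against the current upstream docs):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
With minikube the same result is one command per addon, e.g. 'minikube addons enable metrics-server'. There is no comparable one-liner upstream for a default StorageClass / storage provisioner; that usually comes from a cloud provider or an external provisioner. |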
I agree with @afbjorklund |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned |
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
The "Learn Kubernetes Basic" section of the kubernetes.io website is getting an overhaul, in light of the Katacoda shutdown.
https://kubernetes.io/docs/tutorials/kubernetes-basics/
There is one "Hello, Minikube" tutorial that is fairly identical to "Getting Started" and might end up just getting moved...
https://kubernetes.io/docs/tutorials/hello-minikube/
Theoretically it would be possible to do three different tutorials:
Hello, (Cloud) Minikube
This would require that minikube had some way of starting new cloud instances (VMs). Currently, it does not.
Hello, (Docker) Minikube
This would require that either docker or podman is installed, but it also means a more complex environment.
Hello, (Native) Minikube
This is the current scenario. It uses a pre-allocated virtual machine, and then installs everything locally on it.
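In terms of commands, the opening step of each variant might look something like this (a sketch; as noted above, only the last two are possible today):
$ minikube start --driver=docker   # "Hello, (Docker) Minikube" - needs docker or podman on the host
$ minikube start --driver=none     # "Hello, (Native) Minikube" - installs directly on the current machine
# "Hello, (Cloud) Minikube" would need a driver that creates cloud instances, which minikube does not currently have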
But most likely, they will just refer to each project's documentation:
minikube: https://minikube.sigs.k8s.io/docs/
kind: https://kind.sigs.k8s.io/docs/
kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm/
So that the user can pick their preferred method and tool of deployment.
And the Kubernetes tutorial will start when the cluster is already available?
But then there is another section, "Create Cluster", that will need revising. It is based on an earlier definition of minikube.
https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/
I will break it down here, and then we can work with SIG Docs about updating it for next time (without Katacoda).
https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/
Objectives
Here is the section on what minikube is:
So far, so good.
This currently breaks down into three different paths to set up a "Learning Environment":
"Getting Started" -> redirecting to https://kubernetes.io/docs/setup/ ?
"Learning Environment" -> redirecting to https://kubernetes.io/docs/tasks/tools/ ?
kubectl
The order of the tools here is a bit confusing, but anyway. Ignoring kubectl, they all need that.
Since people always had problems with it, we integrated minikube kubectl for them. But anyway.
Uh oh.
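For reference, a minimal sketch of that integrated kubectl (minikube downloads a matching kubectl binary on first use; the alias is just a convenience):
$ minikube kubectl -- version --client
$ minikube kubectl -- get pods -A
$ alias kubectl='minikube kubectl --'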
"lightweight Kubernetes implementation"
It was localkube that was the lightweight implementation, and it is now gone. There is now k3s for that, elsewhere.
"creates a VM on your local machine"
That is one option. You can also create a container, or you can just run everything on the current machine (localhost).
"deploys a simple cluster containing only one node"
Again, optional. The recommendation is to only use one node and deploy on the control-plane. Just to get started.
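To make the "optional" part concrete (a sketch; the --nodes flag is available in recent minikube releases):
$ minikube start              # default: a single node acting as both control plane and worker
$ minikube start --nodes 2    # one control-plane node plus one worker node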
The user is now faced with an overwhelming choice of how to run their cluster, which they find a little confusing.
Minikube can still go the "VirtualBox" route, now using a bunch of different hypervisors, to create the VM (like before)
This path is the best, if you don't have an already existing VM/Linux setup. It even comes with a customized OS image.
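For example (a sketch; which hypervisor drivers work depends on the host OS and on what is installed):
$ minikube start --driver=virtualbox
$ minikube start --driver=kvm2        # Linux, via libvirt
$ minikube start --driver=hyperkit    # macOS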
Minikube can use an existing Linux environment, and run the nodes as system containers (multi-node)
This option is similar to using kind.
minikube start --driver=docker
Minikube can use an existing Linux environment, and run the pods directly on the machine (single-node)
This option is similar to using kubeadm.
minikube start --driver=none
And that's only the main three; then there are alternative setups like "podman" or "ssh" ("generic").
They will probably not be covered on any intro page, since they are advanced (and also somewhat buggy).
But all of them are still using minikube.
Still true.
Also same.
This will have to get revised to something like:
For this tutorial, however, you'll use a provided online terminal with Kubernetes pre-installed.
At least with Killercoda, k8s is already running...
The alternative is to get an Ubuntu terminal, and install all of minikube / kind / kubeadm yourself.
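Roughly, that alternative might look like this on Ubuntu (a sketch, assuming amd64 and that the download locations are still current; check each project's install docs):
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
$ go install sigs.k8s.io/kind@latest                # kind, assuming a Go toolchain; prebuilt binaries also exist
$ sudo apt-get install -y kubelet kubeadm kubectl   # kubeadm, after adding the Kubernetes apt repository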