
Support for versions not listed by get-k8s-versions? #2156

Closed
samlevin opened this issue Nov 4, 2017 · 11 comments

Labels
lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

@samlevin commented Nov 4, 2017

Feature request, Mac OS X

I'm using GKE for hosting in production and minikube for running locally. Looking to upgrade to a newer version on our master node but none of our target versions (v1.7.8/9, v1.8.1) are included in minikube get-k8s-versions (and supplying a nonexistent version to minikube start throws an error). Is there any way to specify a custom version? Is there a specific reason why these particular versions are unavailable in minikube?

Best,
Sam

@r2d4 (Contributor) commented Nov 4, 2017

tl;dr: you can run minikube config set bootstrapper kubeadm and then minikube start with any version. You'll have to run minikube delete first if you have a running cluster.
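Spelled out as a shell session (v1.8.1 is just one of the versions asked about above):

```shell
# remove any existing (localkube-based) cluster first
minikube delete

# switch the bootstrapper, then start with the desired version
minikube config set bootstrapper kubeadm
minikube start --kubernetes-version v1.8.1
```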

Historically, minikube has used localkube, a custom Kubernetes distribution that we maintain, to run the cluster. However, we're in the process of deprecating localkube in favor of a more traditional cluster setup (running the control plane in Kubernetes itself and the kubelet standalone). In 1.9 we plan to make the kubeadm bootstrapper the default, and in 1.10 we plan to remove localkube entirely. For end users there shouldn't be any major differences. One difference will be the way you configure the components, which you can read about here: https://github.com/kubernetes/minikube/blob/master/docs/configuring_kubernetes.md
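The linked doc covers the --extra-config flag for tuning individual components at start time; a rough sketch (the exact key names differ between the localkube and kubeadm bootstrappers, so treat these particular keys and values as illustrative):

```shell
# keys take the form component.flag=value; repeat --extra-config for each setting
minikube start \
  --extra-config=apiserver.authorization-mode=RBAC \
  --extra-config=kubelet.max-pods=50
```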

The tracking issue for deprecating localkube is #2134. After 1.10, the get-k8s-versions command will go away.

I'll leave this issue open as a tracking issue for removing the get-k8s-versions command.

@uromahn commented Nov 6, 2017

Great idea, in theory. In practice, the kubeadm bootstrapper barely works!
In my case, I want to use k8s v1.5.7, since that is the version we are running here. Unfortunately, I get an error when I run minikube config set bootstrapper kubeadm first and then start the cluster (exact sequence below).
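The exact commands:

```shell
minikube config set bootstrapper kubeadm
minikube start --cpus 2 --disk-size 50g --memory 4096 --kubernetes-version v1.5.7
```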

I am using the latest minikube 0.23 on Mac OS X and am using the virtualbox driver (the default) with the latest VirtualBox installed.

I decided to report this here instead of creating a new issue, since I believe there are already too many issues reporting the same problem.

@samlevin (Author) commented Nov 7, 2017

@r2d4 Thanks for the response. kubeadm worked like a charm. Sorry you're not having any luck @uromahn.

@r2d4 (Contributor) commented Nov 7, 2017

@uromahn I don't think the kubeadm bootstrapper will work with more than one minor version of skew (i.e. if 1.8 is the current version, then 1.7 and 1.9 alphas are guaranteed to work). localkube is a bit better in that regard, since it has a lot fewer moving parts, but I don't know the exact compatibility guarantees for localkube.

@uromahn commented Nov 7, 2017

@r2d4 Thanks for your response and the clarification. This is the first time I have heard about this version-skew limit for kubeadm.
I understand that maintaining older versions can be a pain, but the reality is that orgs using k8s often don't move that fast and run older versions (e.g. 1.5.7 in our case) in production. It would be nice to have the option to run the exact same version locally for testing. localkube supports 1.5.3, which is a couple of patch versions behind the one we use in prod. Also, since the plan seems to be to discontinue support for localkube and switch entirely to kubeadm, support for older versions may then go away completely, which is not really ideal IMHO.
I tried to build a localkube with 1.5.7, but even the k8s build is "broken" for those old versions. I am sure there is a "hack" to make this work, but it does not seem to be documented anywhere. :(

Finally, I am all for simplicity, but if localkube "has a lot fewer moving parts", then using kubeadm exclusively in minikube does not seem like a great idea, since more moving parts mean more complexity.
Unfortunately, I have a busy day job and kids at home, so my time is very limited; otherwise I would roll up my sleeves and start helping to "fix" the current situation.

@anguslees (Member) commented Nov 14, 2017

@uromahn: It sounds like you're saying that we're taking something away from you by continuing to create new versions of minikube, which shouldn't be the case. Are you saying your old version of minikube (of the k8s 1.5 era), which used to work with k8s 1.5, no longer works with k8s 1.5?

Note that if you're trying to recompile that old version of minikube/localkube, then you'll need to ensure you're also using older versions of all the dependencies, etc.

@uromahn commented Nov 16, 2017

@anguslees I am not sure if I made myself clear enough or if you misunderstood my comment above, so let me try to clarify: minikube is supposed to be a tool for running (and managing) a local single-node k8s cluster. So, what I am saying is that if I want to use a k8s version other than those supported by minikube (based on the JSON read by minikube get-k8s-versions), I have only two options:

  1. using the kubeadm bootstrapper, or

  2. building my own localkube with the k8s version I want

However, there are two issues with these approaches:

With 1), since the kubeadm bootstrapper has only been introduced recently (and is still considered beta), I am limited to fairly recent k8s versions. I could successfully create a minikube cluster with k8s 1.7, but it already failed with 1.6.
With 2), the issue is that building my own localkube is practically impossible, since a) the build process is rather complex and b) as you mentioned, I would have to pin older versions of all the dependencies, which is quite difficult (if not impossible, given the huge number of dependencies). Furthermore, even if I go back to old versions of minikube, there must have been changes elsewhere in the source tree that are not rolled back when checking out older revisions of the various GitHub repos, since a few important scripts seem to be missing, causing errors during the build.

With that in mind, if I need to run an older version of k8s, my only option seems to be to stick with an old version of minikube that still supports localkube; but even then I am limited to the k8s versions available via localkube, which may not match the version I actually want or need to use.

I guess the lesson learned here is: do not get "stuck" on an older version, since you may be left out in the dark. :)

@patrikerdes (Contributor) commented

@r2d4 I think it would make sense to add to the help text for the get-k8s-versions command that the list only applies when the localkube bootstrapper is used. That seems especially relevant once the default bootstrapper is switched to kubeadm.
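One possible wording for that note (hypothetical suggestion, not the actual help text):

```shell
# Proposed addition to the `minikube get-k8s-versions` help output:
#   Note: this list only applies to the localkube bootstrapper. With the
#   kubeadm bootstrapper, any version can be passed to
#   `minikube start --kubernetes-version`.
```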

@fejta-bot commented
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 14, 2018
@fejta-bot commented
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 16, 2018
@fejta-bot commented
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
