chore: Automatically set kube context in development container #246
Conversation
Signed-off-by: Christian Kadner <ckadner@us.ibm.com>
scripts/develop.sh (outdated)
```shell
# create local copy of a kube-config file
mkdir -p .kube/
kubectl config view --minify --flatten 2> /dev/null > .kube/config

declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${PWD}/.bash_history:/root/.bash_history"
  -v "${PWD}/.kube/config:/root/.kube/config"
```
Thanks @ckadner, this looks good in principle. But why does the kube context need to be saved separately? Why not just mount the user's `.kube` directory to the right place so that the existing context is used directly:
```diff
-# create local copy of a kube-config file
-mkdir -p .kube/
-kubectl config view --minify --flatten 2> /dev/null > .kube/config
 declare -a docker_run_args=(
   -v "${PWD}:/workspace"
-  -v "${PWD}/.bash_history:/root/.bash_history"
-  -v "${PWD}/.kube/config:/root/.kube/config"
+  -v "${HOME}/.bash_history:/root/.bash_history"
+  -v "${HOME}/.kube:/root/.kube"
```
(I think the `.bash_history` mount should be fixed too.)
Hi @njhill -- thanks for your review!
> Why not just mount the user's `.kube` directory ... directly
Because different cloud providers use various CLI tools to connect to K8s and OCP clusters, some of which do not create `~/.kube/config` files (e.g. IKS, OpenShift on IBM Cloud, OCP on Fyre). In each of those cases, though, you can create that minified kubeconfig file and it works well.
> I think the `.bash_history` mount should be fixed too
My intention for the `.bash_history` mount was to keep only those commands that are relevant to ModelMesh development, as opposed to all other user command history (mine spans a wide range of completely unrelated commands that I do not want to click through).
> Because different cloud providers use various CLI tools to connect to K8s and OCP clusters, some of which do not create `~/.kube/config` files (e.g. IKS, OpenShift on IBM Cloud, OCP on Fyre). In each of those cases, though, you can create that minified kubeconfig file and it works well.
@ckadner I could very well be wrong, but are you sure about this? The non-`kubectl` CLI tool you're referring to here is `oc`, right? That shares the same kubectl context/config (see here). This is only a client-side thing; it shouldn't matter what kind of Kubernetes cluster you're connecting to...
> My intention for the `.bash_history` mount was to keep only those commands that are relevant to ModelMesh development.
Ah ok, now I understand the `${PWD}` rationale for that; sounds reasonable, thanks!
Hi @njhill -- yes, `oc` does not use/create a `~/.kube/config` file (see below), and for IKS I am using the IKS CLI and a tool to switch between clusters by pointing to different kube-config files, which the `ibmcloud` CLI stores under `${HOME}/.bluemix/plugins/container-service/clusters/...`, i.e.

```
KUBECONFIG=/Users/ckadner/.bluemix/plugins/container-service/clusters/ckadner-modelmesh-dev-cbu3tugd0r5j02te049g/kube-config-aaa00-ckadner-modelmesh-dev.yml
```
This is very useful when I am working on different clusters in different terminal shells at the same time, where I need different kube contexts for each of the terminals.
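A minimal sketch of that per-terminal workflow (the path below is illustrative, not one of my actual cluster files):

```shell
# Each terminal shell exports its own KUBECONFIG, so every shell can point
# at a different cluster at the same time. Path is an example only.
export KUBECONFIG="${HOME}/.bluemix/plugins/container-service/clusters/dev-cluster/kube-config-dev.yml"

# All kubectl invocations in this shell now resolve their context from that
# file, including the flattening step used for the dev container:
#   kubectl config view --minify --flatten > .kube/config
echo "${KUBECONFIG}"
```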
For OpenShift on IBM Cloud or OCP on Fyre, I see this behavior:
```shell
# login to cluster
[modelmesh-serving]$ oc login --token=sha256~u_o6N****zznde_3rw --server=https://c104-e.us-south.containers.cloud.ibm.com:30276
Logged into "https://c104-e.us-south.containers.cloud.ibm.com:30276" as "IAM#ckadner@us.ibm.com" using the token provided.
You have access to 67 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".

# list some pods
[modelmesh-serving]$ kubectl get pods -n kube-system | head
NAME                                         READY   STATUS    RESTARTS   AGE
ibm-file-plugin-684495896f-x6vsp             1/1     Running   0          14d
ibm-keepalived-watcher-6zg84                 1/1     Running   0          14d
ibm-keepalived-watcher-9zqw9                 1/1     Running   0          14d
ibm-master-proxy-static-10.87.76.105         2/2     Running   2          28d
ibm-master-proxy-static-10.87.76.117         2/2     Running   2          28d
ibm-storage-metrics-agent-5c89ffc69f-hqd4h   1/1     Running   0          4d20h
ibm-storage-watcher-77847bfb4c-xp5jn         1/1     Running   0          14d
ibmcloud-block-storage-driver-n99lg          1/1     Running   0          14d
ibmcloud-block-storage-driver-rjtrg          1/1     Running   0          14d

# see there is no kube config file
[modelmesh-serving]$ cat ~/.kube/config
cat: /Users/ckadner/.kube/config: No such file or directory

# use my .kube folder mount in develop.sh script
[modelmesh-serving]$ cat scripts/develop.sh | grep kube
  -v "${HOME}/.kube:/root/.kube"

# start the container, no kube context
[modelmesh-serving]$ make develop
./scripts/build_devimage.sh
Pulling dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Building dev image kserve/modelmesh-controller-develop:6be58b09c25833c1
[+] Building 1.0s (10/10) FINISHED
...
Image kserve/modelmesh-controller-develop:6be58b09c25833c1 has 14 layers
Tagging dev image kserve/modelmesh-controller-develop:6be58b09c25833c1 as latest
./scripts/develop.sh
[root@72d9bf620910 workspace]# kubectl get pods
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
[root@72d9bf620910 workspace]#
```
@ckadner how about …
Hi @njhill -- I think you meant the user's …
Unfortunately that also does not work since, for IKS at least, the kubeconfig files under …
Does the minified kubeconfig approach in this PR not work for you? Or do you just not like creating another local file? We could put that into the user's …
Yes, sorry, that's what I meant!
Ah ok, fair enough
No, it's fine, it just seemed cleaner to me to link to the outside config if possible. But I understand the complications now, so this seems fine. I guess my only concern is that this would overwrite an existing …
Right, but it should not break anything since it would rewrite it with the same kubeconfig file contents -- just "minified" :-) We could instead save all local development artifacts in a …
What if the saved config comes from a different location (due to env var setting)?
That sounds like a good idea to me!
Signed-off-by: Christian Kadner <ckadner@us.ibm.com>
@njhill -- I moved the … and added a little bit of text to the developer doc. Could you do one more review of this PR? Thank you!
Thanks @ckadner!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ckadner, njhill

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing …
/lgtm
Motivation
When using the containerized development environment `make develop` to run FVT tests, one needs to configure access to a Kubernetes or OpenShift cluster from inside the container, which has to be done for every `make develop` session. This can be tricky when cloud-provider-specific CLI tools are needed to connect and authenticate to a cluster.

Currently there is a short paragraph in the FVT README about how to export a minified kubeconfig file and create that inside the container. It is tedious to repeat those steps for each `make develop` session, and depending on OS, shell environment, editors, and possible text encoding issues it is also error prone.

Modifications
This PR proposes to automatically create the kubeconfig file in a local, git-ignored directory inside the project and automatically mount it into the develop container. All the user then has to do is connect and authenticate to the cluster in the shell that will be running `make develop`.
Kubernetes context is ready inside the development container.
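For reference, a condensed sketch of the approach (the directory name and variable names here are illustrative, not the exact develop.sh contents):

```shell
# Sketch: write a minified kubeconfig into a git-ignored local directory
# and mount it into the development container.

KUBE_DIR="${PWD}/.dev/.kube"   # assumed git-ignored path; name is illustrative
mkdir -p "${KUBE_DIR}"

# Flatten the *current* context (this honors $KUBECONFIG, so it works even
# when the active config lives outside ~/.kube/config). Guarded so the
# sketch still runs in shells without kubectl on the PATH.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config view --minify --flatten 2>/dev/null > "${KUBE_DIR}/config"
fi

# Mount the flattened copy where kubectl inside the container expects it.
declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${KUBE_DIR}/config:/root/.kube/config"
)

echo "${docker_run_args[@]}"
```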
/cc @njhill