
chore: Automatically set kube context in development container #246

Merged (3 commits) on Sep 30, 2022

Conversation


@ckadner ckadner commented Sep 15, 2022

Motivation

When using the containerized development environment (make develop) to run FVT tests, one needs to configure access to a Kubernetes or OpenShift cluster from inside the container, and that has to be done for every make develop session. This can be tricky when cloud-provider-specific CLI tools are needed to connect and authenticate to a cluster.

Currently there is a short paragraph in the FVT README about how to export a minified kubeconfig file and recreate it inside the container. It is tedious to repeat those steps for each make develop session, and, depending on OS, shell environment, editors, and possible text encoding issues, it is also error-prone.
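For reference, the manual steps are roughly these (a sketch; the exact commands in the FVT README may differ):

# on the host: export a minified, self-contained kubeconfig
kubectl config view --minify --flatten > /tmp/kubeconfig.yaml

# inside the develop container: recreate the file by hand,
# e.g. paste the exported contents into an editor
mkdir -p ~/.kube
vi ~/.kube/config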

Modifications

This PR proposes to automatically create the kubeconfig file in a local, git-ignored directory inside the project and to automatically mount it into the develop container. All the user then has to do is connect and authenticate to the cluster in the shell that will be running make develop.
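In essence, scripts/develop.sh gains something along these lines (condensed from the snippet quoted in the review discussion below):

# create local copy of a kube-config file
mkdir -p .kube/
kubectl config view --minify --flatten 2> /dev/null > .kube/config

# ... and mount it into the container
declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${PWD}/.kube/config:/root/.kube/config"
)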

Result

The Kubernetes context is then ready inside the development container:

# shell environment outside the develop container has access to the K8s cluster
[modelmesh-serving_ckadner]$ kubectl get pods

NAME                                        READY   STATUS    RESTARTS   AGE
pod/etcd                                    1/1     Running   0          17m
pod/minio                                   1/1     Running   0          17m
pod/modelmesh-controller-387aef25be-ftyqu   1/1     Running   0          17m

[modelmesh-serving_ckadner]$ make develop

./scripts/build_devimage.sh
Pulling dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Building dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Image kserve/modelmesh-controller-develop:6be58b09c25833c1 has 14 layers
Tagging dev image kserve/modelmesh-controller-develop:6be58b09c25833c1 as latest
./scripts/develop.sh
[root@17c121286549 workspace]# kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
pod/etcd                                    1/1     Running   0          18m
pod/minio                                   1/1     Running   0          18m
pod/modelmesh-controller-387aef25be-ftyqu   1/1     Running   0          18m
[root@17c121286549 workspace]# 

/cc @njhill

Signed-off-by: Christian Kadner <ckadner@us.ibm.com>
@ckadner ckadner added the test label (testing related bugs and fixes) Sep 15, 2022
@ckadner ckadner changed the title from "chore: Automatically set kube context in make develop" to "chore: Automatically set kube context in development container" Sep 15, 2022
Comment on lines 60 to 67
# create local copy of a kube-config file
mkdir -p .kube/
kubectl config view --minify --flatten 2> /dev/null > .kube/config

declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${PWD}/.bash_history:/root/.bash_history"
  -v "${PWD}/.kube/config:/root/.kube/config"

njhill (Member)

Thanks @ckadner, this looks good in principle. But why does the kube context need to be saved separately? Why not just mount the user's .kube directory to the right place so that the existing context is used directly:

Suggested change (before):

# create local copy of a kube-config file
mkdir -p .kube/
kubectl config view --minify --flatten 2> /dev/null > .kube/config

declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${PWD}/.bash_history:/root/.bash_history"
  -v "${PWD}/.kube/config:/root/.kube/config"

Suggested change (after):

declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${HOME}/.bash_history:/root/.bash_history"
  -v "${HOME}/.kube:/root/.kube"

(I think the .bash_history mount should be fixed too)


@ckadner ckadner Sep 16, 2022


Hi @njhill -- thanks for your review!

Why not just mount the user's .kube directory ... directly

Because different cloud providers use various CLI tools to connect to K8s and OCP clusters, some of which do not create ~/.kube/config files (e.g. IKS, OpenShift on IBM Cloud, OCP on Fyre). In each of those cases, though, you can create that minified kubeconfig file and it works well.

I think the .bash_history mount should be fixed too

My intention for the .bash_history was to keep only those commands that are relevant to ModelMesh development, as opposed to all other user command history (mine spans a wide range of completely unrelated commands that I do not want to click through).

njhill (Member)

Because different cloud providers use various CLI tools to connect to K8s and OCP clusters, some of which do not create ~/.kube/config files (e.g. IKS, OpenShift on IBM Cloud, OCP on Fyre). In each of those cases, though, you can create that minified kubeconfig file and it works well.

@ckadner I could very well be wrong but are you sure about this? The non-kubectl CLI tool you're referring to here is oc, right? That shares the same kubectl context/config (see here). This is only a client-side thing; it shouldn't matter what kind of Kubernetes cluster you're connecting to...

My intention for the .bash_history was to keep only those commands that are relevant to ModelMesh development.

Ah ok, now I understand the ${PWD} rationale for that, sounds reasonable, thanks!


@ckadner ckadner Sep 17, 2022


Hi @njhill -- yes, oc does not use/create a ~/.kube/config file (see below). For IKS, I am using the IKS CLI and a tool to switch between clusters by pointing to different kube-config files, which the ibmcloud CLI stores under ${HOME}/.bluemix/plugins/container-service/clusters/..., i.e.:

KUBECONFIG=/Users/ckadner/.bluemix/plugins/container-service/clusters/ckadner-modelmesh-dev-cbu3tugd0r5j02te049g/kube-config-aaa00-ckadner-modelmesh-dev.yml

This is very useful when I am working on different clusters in different terminal shells at the same time, where I need a different kube context in each terminal.
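For example, each shell can simply export a different KUBECONFIG (a sketch; the cluster directory names are hypothetical):

# terminal 1
export KUBECONFIG=${HOME}/.bluemix/plugins/container-service/clusters/<cluster-1>/kube-config-<cluster-1>.yml
kubectl config current-context

# terminal 2
export KUBECONFIG=${HOME}/.bluemix/plugins/container-service/clusters/<cluster-2>/kube-config-<cluster-2>.yml
kubectl config current-context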

For OpenShift on IBM Cloud or OCP on Fyre, I see this behavior:

# login to cluster
[modelmesh-serving]$ oc login --token=sha256~u_o6N****zznde_3rw --server=https://c104-e.us-south.containers.cloud.ibm.com:30276

Logged into "https://c104-e.us-south.containers.cloud.ibm.com:30276" as "IAM#ckadner@us.ibm.com" using the token provided.

You have access to 67 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".


# list some pods
[modelmesh-serving]$ kubectl get pods -n kube-system | head

NAME                                             READY   STATUS    RESTARTS   AGE
ibm-file-plugin-684495896f-x6vsp                 1/1     Running   0          14d
ibm-keepalived-watcher-6zg84                     1/1     Running   0          14d
ibm-keepalived-watcher-9zqw9                     1/1     Running   0          14d
ibm-master-proxy-static-10.87.76.105             2/2     Running   2          28d
ibm-master-proxy-static-10.87.76.117             2/2     Running   2          28d
ibm-storage-metrics-agent-5c89ffc69f-hqd4h       1/1     Running   0          4d20h
ibm-storage-watcher-77847bfb4c-xp5jn             1/1     Running   0          14d
ibmcloud-block-storage-driver-n99lg              1/1     Running   0          14d
ibmcloud-block-storage-driver-rjtrg              1/1     Running   0          14d


# see there is no kube config file
[modelmesh-serving]$ cat ~/.kube/config

cat: /Users/ckadner/.kube/config: No such file or directory


# use my .kube folder mount in develop.sh script
[modelmesh-serving]$ cat scripts/develop.sh | grep kube

  -v "${HOME}/.kube:/root/.kube"


# start the container, no kube context
[modelmesh-serving]$ make develop

./scripts/build_devimage.sh
Pulling dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Building dev image kserve/modelmesh-controller-develop:6be58b09c25833c1
[+] Building 1.0s (10/10) FINISHED                                                                                                                                       
...
Image kserve/modelmesh-controller-develop:6be58b09c25833c1 has 14 layers
Tagging dev image kserve/modelmesh-controller-develop:6be58b09c25833c1 as latest
./scripts/develop.sh
[root@72d9bf620910 workspace]# kubectl get pods
error: Missing or incomplete configuration info.  Please point to an existing, complete config file:


  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
[root@72d9bf620910 workspace]# 


njhill commented Sep 20, 2022

@ckadner how about

-v "${KUBECONFIG:-${PWD}/.kube/config}:/root/.kube/config"
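For reference, ${KUBECONFIG:-...} is standard shell default-value expansion: it resolves to $KUBECONFIG if that variable is set and non-empty, and to the fallback path otherwise, e.g.:

unset KUBECONFIG
echo "${KUBECONFIG:-${PWD}/.kube/config}"   # prints ${PWD}/.kube/config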


ckadner commented Sep 20, 2022

@ckadner how about

-v "${KUBECONFIG:-${PWD}/.kube/config}:/root/.kube/config"

Hi @njhill -- I think you meant the user's ~/.kube/config like this?

  -v "${KUBECONFIG:-${HOME}/.kube/config}:/root/.kube/config"

Unfortunately that also does not work since, for IKS at least, the kubeconfig files under ${HOME}/.bluemix/plugins/container-service/clusters/ have local absolute file references, which will not resolve when only the one kubeconfig file is mounted without the whole ${HOME}/.bluemix/ folder:

[root@5f681a6f418d workspace]# kubectl get pods
error: unable to read certificate-authority /Users/ckadner/.bluemix/plugins/container-service/clusters/ckadner-modelmesh-dev-cbu3tugd0r5j02te049g/ca-aaa00-ckadner-modelmesh-dev.pem for ckadner-modelmesh-dev/cbu3tugd0r5j02te049g due to open /Users/ckadner/.bluemix/plugins/container-service/clusters/ckadner-modelmesh-dev-cbu3tugd0r5j02te049g/ca-aaa00-ckadner-modelmesh-dev.pem: no such file or directory
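For illustration, such a kubeconfig embeds absolute host paths rather than inline certificate data, roughly like this (a shortened sketch, not the literal file; the server endpoint is hypothetical):

clusters:
- cluster:
    certificate-authority: /Users/ckadner/.bluemix/plugins/container-service/clusters/ckadner-modelmesh-dev-cbu3tugd0r5j02te049g/ca-aaa00-ckadner-modelmesh-dev.pem
    server: https://<cluster-api-endpoint>
  name: ckadner-modelmesh-dev/cbu3tugd0r5j02te049g

A kubectl config view --minify --flatten export avoids this by inlining the certificate data into the file itself.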

Does the minified kubeconfig approach in this PR not work for you? Or do you just not like creating another local file? We could put it into the user's $TMP dir instead, or tuck all local developer-related artifacts away into a .dev folder, like .bash_history and develop_image_name ...


njhill commented Sep 20, 2022

I think you meant the user's ~/.kube/config like this?

Yes, sorry, that's what I meant!

the kubeconfig files under ${HOME}/.bluemix/plugins/container-service/clusters/ have local absolute file references,

Ah ok, fair enough

Does the minified kubeconfig approach in this PR not work for you?

No, it's fine, it just seemed cleaner to me to link to the outside config if possible. But I understand the complications now so this seems fine. I guess my only concern is that this would overwrite an existing .kube/config file if it was run in the user's home dir.


ckadner commented Sep 20, 2022

I guess my only concern is that this would overwrite an existing .kube/config file if it was run in the user's home dir.

Right, but it should not break anything since it would rewrite it with the same kubeconfig file contents -- just "minified" :-)

We could instead save all local development artifacts in a .dev folder inside the user's current folder, which we assume to be the local clone of modelmesh-serving -- artifacts like .bash_history, .develop_image_name, and the now new .kube/config file. WDYT?
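For illustration, develop.sh might then look something like this (a sketch of the proposal, not the actual diff):

# keep all local development artifacts in a git-ignored .dev/ folder
mkdir -p .dev/.kube
kubectl config view --minify --flatten 2> /dev/null > .dev/.kube/config

declare -a docker_run_args=(
  -v "${PWD}:/workspace"
  -v "${PWD}/.dev/.bash_history:/root/.bash_history"
  -v "${PWD}/.dev/.kube/config:/root/.kube/config"
)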


njhill commented Sep 21, 2022

Right, but it should not break anything since it would rewrite it with the same kubeconfig file contents -- just "minified" :-)

What if the saved config comes from a different location (due to env var setting)?

We could instead save all local development artifacts in a .dev folder inside the user's current folder, which we assume to be the local clone of modelmesh-serving -- artifacts like .bash_history, .develop_image_name, and the now new .kube/config file.

That sounds like a good idea to me!

Signed-off-by: Christian Kadner <ckadner@us.ibm.com>
Signed-off-by: Christian Kadner <ckadner@us.ibm.com>

ckadner commented Sep 30, 2022

Right, but it should not break anything since it would rewrite it with the same kubeconfig file contents -- just "minified" :-)

What if the saved config comes from a different location (due to env var setting)?

We could instead save all local development artifacts in a .dev folder inside the user's current folder, which we assume to be the local clone of modelmesh-serving -- artifacts like .bash_history, .develop_image_name, and the now new .kube/config file.

That sounds like a good idea to me!

@njhill -- I moved the .bash_history and .kube_config files inside a .dev folder, and added a little bit of text to the developer doc.

Could you do one more review of this PR? Thank you!


@njhill njhill left a comment


Thanks @ckadner!

@kserve-oss-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ckadner, njhill

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


njhill commented Sep 30, 2022

/lgtm

Labels: approved, development, lgtm, test (testing related bugs and fixes)