chore: Automatically set kube context in development container #246

Merged (3 commits), Sep 30, 2022. Changes shown from 1 commit.
1 change: 1 addition & 0 deletions .gitignore
@@ -33,3 +33,4 @@ bin
devbuild
.develop_image_name
.bash_history
.kube
45 changes: 32 additions & 13 deletions fvt/README.md
@@ -29,49 +29,68 @@ The FVTs rely on a set of models existing in a configured `localMinIO` storage.

If starting with a fresh namespace, install ModelMesh Serving configured for the FVTs with:

```
```Shell
./scripts/install.sh --namespace modelmesh-serving --fvt --dev-mode-logging
```

To re-configure an existing quick-start instance for FVTs, run:

```
```Shell
kubectl apply -f config/dependencies/fvt.yaml
```

### Development Environment

The FVTs run using the `ginkgo` CLI tool and need `kubectl` configuration to the cluster. It is recommended to use the containerized development environment to run the FVTs. First build the environment with:
The FVTs run using the `ginkgo` CLI tool and need `kubectl` configured to communicate
to a Kubernetes cluster. It is recommended to use the containerized development environment
to run the FVTs.

```
make develop
First, verify that you have access to your Kubernetes cluster:

```Shell
kubectl config current-context
```

This will drop you in a shell in the container. The next step is to configure this environment to communicate to the Kubernetes cluster. If using an OpenShift cluster, you can run:
If you are using an OpenShift cluster, you can run:

```
```Shell
oc login --token=${YOUR_TOKEN} --server=https://${YOUR_ADDRESS}
```

Another method is to export a functioning `kubectl` configuration and copy it into the container at `/root/.kube/config`.
Then build and start the development environment with:

```Shell
make develop
```

This will drop you into a shell in the development container where the local `.kube/config` is mounted to `/root/.kube/config`, so you should be able to communicate with your Kubernetes cluster from inside the container.
If not, you can manually export a functioning `kubectl` configuration and copy it into the container
at `/root/.kube/config` or, for OpenShift, run the `oc login ...` command inside the development
container.

```Shell
# in a shell that has `kubectl` configured with the desired cluster as the
# current context, the following command will print a portable kubeconfig file
kubectl config view --minify --flatten
```
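For illustration, the flattened output is a single self-contained kubeconfig file that can be dropped into `/root/.kube/config`. A minimal sketch of its shape (the names, server address, and inlined credential data below are all placeholders, not real values):

```yaml
apiVersion: v1
kind: Config
current-context: my-cluster-context        # placeholder context name
clusters:
- name: my-cluster                         # placeholder cluster name
  cluster:
    server: https://example.cluster:6443   # placeholder API server address
    certificate-authority-data: LS0t...    # CA cert inlined by --flatten
contexts:
- name: my-cluster-context
  context:
    cluster: my-cluster
    user: my-user
users:
- name: my-user
  user:
    token: sha256~abc123...                # placeholder credential
```

Because `--flatten` inlines certificate and credential data, the resulting file is portable and does not reference any paths on the host machine.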

### Run the FVTs

With a suitable development environment and ModelMesh Serving installation as described above, FVTs can be executed with a `make` target:
With a suitable development environment and ModelMesh Serving installation as described above,
the FVTs can be executed with a `make` target:

```
```Shell
make fvt
# use the command below if you installed to a namespace other than modelmesh-serving
# NAMESPACE=your-namespace make fvt
```

## Running or not running specific tests
Set the `NAMESPACE` environment variable if you installed to a namespace other than `modelmesh-serving`:

```Shell
NAMESPACE="<your-namespace>" make fvt
```

## Enabling or disabling specific tests

Thanks to the Ginkgo framework, we have the ability to run or not run specific tests. See [this doc](https://onsi.github.io/ginkgo/#filtering-specs) for details.
This is useful when you'd like to skip failing tests or want to debug specific test(s).
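As a sketch of that filtering (the flag names come from the Ginkgo CLI; the `TLS` and `Inference` patterns here are hypothetical examples, not actual spec names in this repository):

```Shell
# run only specs whose descriptions match a regular expression
ginkgo --focus "TLS" ./fvt/...

# run all specs except those matching a pattern
ginkgo --skip "Inference" ./fvt/...
```

Focus and skip filters can also be set in code (e.g. `FDescribe`/`XDescribe`); see the linked Ginkgo documentation for the full set of options.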
5 changes: 5 additions & 0 deletions scripts/develop.sh
@@ -57,9 +57,14 @@ cd "${DIR}/.."
# Make sure .bash_history exists and is a file
touch .bash_history

# create local copy of a kube-config file
mkdir -p .kube/
kubectl config view --minify --flatten 2> /dev/null > .kube/config

declare -a docker_run_args=(
-v "${PWD}:/workspace"
-v "${PWD}/.bash_history:/root/.bash_history"
-v "${PWD}/.kube/config:/root/.kube/config"
Member:

Thanks @ckadner this looks good in principle. But why does the kube context need to be saved separately? Why not just mount the user's .kube directory to the right place so that the existing context is used directly:

Suggested change
# create local copy of a kube-config file
mkdir -p .kube/
kubectl config view --minify --flatten 2> /dev/null > .kube/config
declare -a docker_run_args=(
-v "${PWD}:/workspace"
-v "${PWD}/.bash_history:/root/.bash_history"
-v "${PWD}/.kube/config:/root/.kube/config"
declare -a docker_run_args=(
-v "${PWD}:/workspace"
-v "${HOME}/.bash_history:/root/.bash_history"
-v "${HOME}/.kube:/root/.kube"

(I think the .bash_history mount should be fixed too)

Member Author @ckadner, Sep 16, 2022:

Hi @njhill -- thanks for your review!

> Why not just mount the user's .kube directory ... directly

Because different cloud providers use various CLI tools to connect to K8s and OCP clusters, some of which do not create `~/.kube/config` files (e.g. IKS, OpenShift on IBM Cloud, OCP on Fyre). In each of those cases, though, you can create that minified kubeconfig file and it works well.

> I think the .bash_history mount should be fixed too

My intention for the `.bash_history` was to keep only those commands that are relevant to ModelMesh development, as opposed to all other user command history (mine spans a wide range of completely unrelated commands that I do not want to click through).

Member:


> Because different cloud providers use various CLI tools to connect to K8s and OCP clusters, some of which do not create ~/.kube/config files (i.e. IKS, OpenShift on IBM Cloud, OCP on Fyre). In each of those cases, though, you can create that minified kubeconfig file and it works well.

@ckadner I could very well be wrong but are you sure about this? The non-kubectl CLI tool you're referring to here is oc right? That shares the same kubectl context/config (see here). This is only a client-side thing, it shouldn't matter what kind of Kubernetes cluster you're connecting to...

> My intention for the .bashhistory was to keep only those commands that are relevant to ModelMesh development.

Ah ok now I understand the ${PWD} rationale for that, sounds reasonable, thanks!

Member Author @ckadner, Sep 17, 2022:

Hi @njhill -- yes, `oc` does not use/create a `~/.kube/config` file (see below), and for IKS I am using the IKS CLI and a tool to switch between clusters by pointing to different kube-config files, which the `ibmcloud` CLI stores under `${HOME}/.bluemix/plugins/container-service/clusters/...`, i.e.

`KUBECONFIG=/Users/ckadner/.bluemix/plugins/container-service/clusters/ckadner-modelmesh-dev-cbu3tugd0r5j02te049g/kube-config-aaa00-ckadner-modelmesh-dev.yml`

This is very useful when I am working on different clusters in different terminal shells at the same time, where I need different kube contexts for each of the terminals.

For OpenShift on IBM Cloud or OCP on Fyre, I see this behavior:

# login to cluster
[modelmesh-serving]$ oc login --token=sha256~u_o6N****zznde_3rw --server=https://c104-e.us-south.containers.cloud.ibm.com:30276

Logged into "https://c104-e.us-south.containers.cloud.ibm.com:30276" as "IAM#ckadner@us.ibm.com" using the token provided.

You have access to 67 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".


# list some pods
[modelmesh-serving]$ kubectl get pods -n kube-system | head

NAME                                             READY   STATUS    RESTARTS   AGE
ibm-file-plugin-684495896f-x6vsp                 1/1     Running   0          14d
ibm-keepalived-watcher-6zg84                     1/1     Running   0          14d
ibm-keepalived-watcher-9zqw9                     1/1     Running   0          14d
ibm-master-proxy-static-10.87.76.105             2/2     Running   2          28d
ibm-master-proxy-static-10.87.76.117             2/2     Running   2          28d
ibm-storage-metrics-agent-5c89ffc69f-hqd4h       1/1     Running   0          4d20h
ibm-storage-watcher-77847bfb4c-xp5jn             1/1     Running   0          14d
ibmcloud-block-storage-driver-n99lg              1/1     Running   0          14d
ibmcloud-block-storage-driver-rjtrg              1/1     Running   0          14d


# see there is no kube config file
[modelmesh-serving]$ cat ~/.kube/config

cat: /Users/ckadner/.kube/config: No such file or directory


# use my .kube folder mount in develop.sh script
[modelmesh-serving]$ cat scripts/develop.sh | grep kube

  -v "${HOME}/.kube:/root/.kube"


# start the container, no kube context
[modelmesh-serving]$ make develop

./scripts/build_devimage.sh
Pulling dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Building dev image kserve/modelmesh-controller-develop:6be58b09c25833c1
[+] Building 1.0s (10/10) FINISHED                                                                                                                                       
...
Image kserve/modelmesh-controller-develop:6be58b09c25833c1 has 14 layers
Tagging dev image kserve/modelmesh-controller-develop:6be58b09c25833c1 as latest
./scripts/develop.sh
[root@72d9bf620910 workspace]# kubectl get pods
error: Missing or incomplete configuration info.  Please point to an existing, complete config file:


  1. Via the command-line flag --kubeconfig
  2. Via the KUBECONFIG environment variable
  3. In your home directory as ~/.kube/config

To view or setup config directly use the 'config' command.
[root@72d9bf620910 workspace]# 

-v "/var/run/docker.sock:/var/run/docker.sock"
)
