chore: Automatically set kube context in development container (#246)
#### Motivation

When using the containerized development environment (`make develop`) to run FVT tests, one needs to configure access to a Kubernetes or OpenShift cluster from inside the container, which has to be done for every `make develop` session. This can be tricky when cloud-provider-specific CLI tools are needed to connect and authenticate to a cluster.

Currently, there is a short paragraph in the FVT README about how to export a minified kubeconfig file and recreate it inside the container. It is tedious to repeat those steps for each `make develop` session, and depending on the OS, shell environment, editors, and possible text encoding issues, it is also error-prone.

#### Modifications

This PR proposes to automatically create the kubeconfig file in a local, git-ignored directory inside the project and to automatically mount it into the develop container. All the user then has to do is connect and authenticate to the cluster in the shell that will be running `make develop`.
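
In essence, the change amounts to the following (a simplified sketch of what `scripts/develop.sh` now does; `<dev-image>` is a placeholder and the other `docker run` arguments are omitted here):

```Shell
# store local development files in the git-ignored .dev directory
mkdir -p .dev/

# create a minified, flattened copy of the current kube config
kubectl config view --minify --flatten 2> /dev/null > .dev/.kube_config

# mount it into the development container as the default kubeconfig
# (<dev-image> stands in for the actual development image tag)
docker run -it \
  -v "${PWD}:/workspace" \
  -v "${PWD}/.dev/.kube_config:/root/.kube/config" \
  <dev-image> /bin/bash
```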
 
#### Result

Kubernetes context is ready inside the development container.


```Shell
# shell environment outside the develop container, with access to the K8s cluster
[modelmesh-serving_ckadner]$ kubectl get pods

NAME                                        READY   STATUS    RESTARTS   AGE
pod/etcd                                    1/1     Running   0          17m
pod/minio                                   1/1     Running   0          17m
pod/modelmesh-controller-387aef25be-ftyqu   1/1     Running   0          17m

[modelmesh-serving_ckadner]$ make develop

./scripts/build_devimage.sh
Pulling dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Building dev image kserve/modelmesh-controller-develop:6be58b09c25833c1...
Image kserve/modelmesh-controller-develop:6be58b09c25833c1 has 14 layers
Tagging dev image kserve/modelmesh-controller-develop:6be58b09c25833c1 as latest
./scripts/develop.sh
[root@17c121286549 workspace]# kubectl get pods
NAME                                        READY   STATUS    RESTARTS   AGE
pod/etcd                                    1/1     Running   0          18m
pod/minio                                   1/1     Running   0          18m
pod/modelmesh-controller-387aef25be-ftyqu   1/1     Running   0          18m
[root@17c121286549 workspace]# 
```

/cc @njhill 


Signed-off-by: Christian Kadner <ckadner@us.ibm.com>
ckadner authored Sep 30, 2022
1 parent 33d34cf commit 3cfb2db
Showing 4 changed files with 67 additions and 22 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -32,4 +32,4 @@ bin
# Modelmesh development related artifacts
devbuild
.develop_image_name
.bash_history
.dev/
31 changes: 25 additions & 6 deletions docs/developer.md
@@ -22,12 +22,11 @@ kubectl create ns modelmesh-serving
This installs the `modelmesh-controller` and dependencies in the `modelmesh-serving` namespace. The `minio` pod that this deploys
contains special test images that are used in the functional tests.

## Building development image

A dockerized development environment is provided to help set up dependencies for testing, linting, and code generating. Using this environment is suggested as this is what the GitHub Actions workflows use. To create the development image, perform the following:
If you already deployed ModelMesh Serving on a Kubernetes or OpenShift cluster before and are reconnecting to it now,
make sure to set the default namespace to `modelmesh-serving`.

```shell
make build.develop
kubectl config set-context --current --namespace=modelmesh-serving
```

## Building and updating controller image
@@ -59,6 +58,26 @@ you will need to restart the controller pod. This can be done through the follow
```shell
kubectl rollout restart deploy modelmesh-controller
```

## Building the developer image

A dockerized development environment is provided to help set up dependencies for testing, linting, and code generating.
Using this environment is suggested as this is what the GitHub Actions workflows use.
To create the development image, perform the following:

```shell
make build.develop
```

## Using the developer image for linting and testing

To use the dockerized development environment run:

```shell
make develop
```

Then, from inside the developer container, proceed to run the linting, code generation, and testing as described below.

## Formatting and linting code

After building the development image, you can lint and format the code with:
@@ -97,5 +116,5 @@ To run them, do the following:
```shell
make fvt
```

**Note**: sometimes the tests can fail on the first run because pulling the serving runtime images can take a while, causing a timeout.
Just try again after the pulling is done.
**Note**: sometimes the tests can fail on the first run because pulling the serving runtime images can take a while,
causing a timeout. Just try again after the pulling is done.
45 changes: 32 additions & 13 deletions fvt/README.md
@@ -29,49 +29,68 @@ The FVTs rely on a set of models existing in a configured `localMinIO` storage.

If starting with a fresh namespace, install ModelMesh Serving configured for the FVTs with:

```
```Shell
./scripts/install.sh --namespace modelmesh-serving --fvt --dev-mode-logging
```

To re-configure an existing quick-start instance for FVTs, run:

```
```Shell
kubectl apply -f config/dependencies/fvt.yaml
```

### Development Environment

The FVTs run using the `ginkgo` CLI tool and need `kubectl` configuration to the cluster. It is recommended to use the containerized development environment to run the FVTs. First build the environment with:
The FVTs run using the `ginkgo` CLI tool and need `kubectl` configured to communicate
with a Kubernetes cluster. It is recommended to use the containerized development environment
to run the FVTs.

```
make develop
```
First, verify that you have access to your Kubernetes cluster:

```Shell
kubectl config current-context
```

This will drop you in a shell in the container. The next step is to configure this environment to communicate to the Kubernetes cluster. If using an OpenShift cluster, you can run:
If you are using an OpenShift cluster, you can run:

```
```Shell
oc login --token=${YOUR_TOKEN} --server=https://${YOUR_ADDRESS}
```

Another method is to export a functioning `kubectl` configuration and copy it into the container at `/root/.kube/config`.
Then build and start the development environment with:

```Shell
make develop
```

This will drop you into a shell in the development container, where a minified copy of your local
kube config (`.dev/.kube_config`) is mounted to `/root/.kube/config`, so you should be able to
communicate with your Kubernetes cluster from inside the container.
If not, you can manually export a functioning `kubectl` configuration and copy it into the container
at `/root/.kube/config` or, for OpenShift, run the `oc login ...` command inside the development
container.

```Shell
# In a shell that has `kubectl` configured with the desired cluster as the
# current context, the following command prints a portable kubeconfig file
kubectl config view --minify --flatten
```
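
If you need to create that file by hand, one possible way (a sketch, not part of the scripts in this PR) is to paste the minified output into the kubeconfig file inside the running development container:

```Shell
# inside the development container: create the kubeconfig directory, then
# paste the output of `kubectl config view --minify --flatten` into the file
mkdir -p /root/.kube
cat > /root/.kube/config <<'EOF'
# ...paste the minified kubeconfig contents here...
EOF
```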

### Run the FVTs

With a suitable development environment and ModelMesh Serving installation as described above, FVTs can be executed with a `make` target:
With a suitable development environment and ModelMesh Serving installation as described above,
the FVTs can be executed with a `make` target:

```
```Shell
make fvt
# use the command below if you installed to a namespace other than modelmesh-serving
# NAMESPACE=your-namespace make fvt
```

## Running or not running specific tests
Set the `NAMESPACE` environment variable if you installed to a **namespace** other than `modelmesh-serving`:

```Shell
NAMESPACE="<your-namespace>" make fvt
```

## Enabling or disabling specific tests

Thanks to the Ginkgo framework, we have the ability to run or not run specific tests. See [this doc](https://onsi.github.io/ginkgo/#filtering-specs) for details.
This is useful when you'd like to skip failing tests or want to debug specific test(s).
11 changes: 9 additions & 2 deletions scripts/develop.sh
@@ -54,12 +54,19 @@ eval set -- "$PARAMS"
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
cd "${DIR}/.."

# store local development files in .dev directory
mkdir -p .dev/

# Make sure .bash_history exists and is a file
touch .bash_history
touch .dev/.bash_history

# create a minified flattened local copy of the kube config
kubectl config view --minify --flatten 2> /dev/null > .dev/.kube_config

declare -a docker_run_args=(
-v "${PWD}:/workspace"
-v "${PWD}/.bash_history:/root/.bash_history"
-v "${PWD}/.dev/.bash_history:/root/.bash_history"
-v "${PWD}/.dev/.kube_config:/root/.kube/config"
-v "/var/run/docker.sock:/var/run/docker.sock"
)

