This repository has been archived by the owner on Jan 31, 2024. It is now read-only.

Use test framework to use opendatahub-io/peak repo for testing #741

Merged
2 changes: 1 addition & 1 deletion tests/Dockerfile
@@ -12,7 +12,7 @@ WORKDIR /root
RUN dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm &&\
dnf install -y bc git go-toolset python3-pip python3-devel unzip chromium chromedriver && \
dnf clean all && \
-git clone https://github.com/crobby/peak $HOME/peak && \
+git clone https://github.com/opendatahub-io/peak $HOME/peak && \
cd $HOME/peak && \
git submodule update --init

4 changes: 2 additions & 2 deletions tests/README.md
@@ -56,7 +56,7 @@ a test from overwriting an artifact generated by another test.
# Running tests manually

Manual running of the tests relies on the test
-runner [located here](https://github.com/AICoE/peak).
+runner [located here](https://github.com/opendatahub-io/peak).
See the README.md there for more detailed information on how it works.

Note when running on a **mac** you may need to do the following:
@@ -69,7 +69,7 @@ ln -s /usr/local/bin/greadlink /usr/local/bin/readlink
Make sure you have an OpenShift login, then do the following:

```sh
-git clone https://github.com/AICoE/peak
+git clone https://github.com/opendatahub-io/peak
cd peak
git submodule update --init
echo opendatahub-kubeflow nil https://github.com/opendatahub-io/odh-manifests > my-list
4 changes: 2 additions & 2 deletions tests/TESTING.md
@@ -2,7 +2,7 @@

The aim was to set ourselves up with a test system that would give us an automated way to run tests against our PRs. At the outset of this, our repo had no tests of any sort, so the first task became getting some basic tests running against the bits in our repo.

-Our tests are based on the utilities found in https://github.com/openshift/origin/tree/master/hack/lib, which are a set of bash functions and scripts that facilitate a reasonably fast way to develop and run a set of tests against either OpenShift itself or, in our case, a set of applications running on OpenShift. Those tests were adapted for use in radanalytics and then re-adapted for testing operators running in OpenShift. We have borrowed their test runner (our fork is [here](https://github.com/crobby/peak)), which searches a subdirectory tree for scripts matching a given regular expression (e.g., ‘tests’ finds all scripts that have ‘tests’ anywhere in their full path or name), so it is easy to run a single test or a large group of tests.
+Our tests are based on the utilities found in https://github.com/openshift/origin/tree/master/hack/lib, which are a set of bash functions and scripts that facilitate a reasonably fast way to develop and run a set of tests against either OpenShift itself or, in our case, a set of applications running on OpenShift. Those tests were adapted for use in radanalytics and then re-adapted for testing operators running in OpenShift. We have borrowed their test runner (our fork is [here](https://github.com/opendatahub-io/peak)), which searches a subdirectory tree for scripts matching a given regular expression (e.g., ‘tests’ finds all scripts that have ‘tests’ anywhere in their full path or name), so it is easy to run a single test or a large group of tests.
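
To make that discovery behaviour concrete, here is a minimal shell sketch of regex-driven test selection. It is illustrative only; the directory layout and invocation are assumptions, not the peak runner's actual implementation.

```sh
#!/bin/bash
# Illustrative sketch of regex-based test discovery, not the peak runner itself.
# Usage: ./find-tests.sh tests   (any regex the caller wants)
regex="${1:-tests}"

# Pick up every shell script whose full path matches the regex and run it.
find . -type f -name '*.sh' | grep -E "$regex" | sort | while read -r script; do
    echo "Running ${script}"
    bash "$script"
done
```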

Each test script has a small amount of boilerplate code followed by a series of bash tests. Each test can call out to another utility/language/whatever. The utilities available in the testing library can check for specific results in the text/exit code/etc. of each call. Any test lines that produce a failed result are tabulated and reported at the end of the test runs. Of course, the stdout/stderr of each failed call is also available in addition to whatever other logging your test call might produce. Here’s what I would call the main building block of the tests: https://github.com/openshift/origin/blob/master/hack/lib/cmd.sh, which defines what amount to wrappers around whatever calls you want to make in your tests and handles the parsing of the result text/exit codes.
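
As a rough illustration, a test script written against those helpers might look like the sketch below. The os::cmd::expect_success and os::cmd::expect_success_and_text wrappers come from the hack/lib utilities linked above; the sourcing path, suite name, namespace, and resources are assumptions made up for the example.

```sh
#!/bin/bash
# Sketch only: the sourcing path, namespace, and resources below are assumptions.
source "${HACK_LIB_DIR:-hack/lib}/init.sh"   # assumed location of the hack/lib utilities

os::test::junit::declare_suite_start "odh/basic-checks"

# Each wrapper runs the command, captures stdout/stderr, and records a failure
# if the exit code (or the expected text match) is not what was asked for.
os::cmd::expect_success "oc get project opendatahub"
os::cmd::expect_success_and_text "oc get pods -n opendatahub" "Running"

os::test::junit::declare_suite_end
```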

@@ -17,4 +17,4 @@ Lastly, and perhaps the most important is defining the configuration that will r
2) Instructions on how to build that test image (a hypothetical build sketch follows this list), and
3) A workflow that has your test or tests in the “tests” portion of the workflow definition. In our case, we are using the ipi-aws workflow, which will spin up a fresh OpenShift cluster in AWS where our tests will run (our test container will start with an admin KUBECONFIG for that cluster).
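
For item 2, a hypothetical local build of the test image from tests/Dockerfile might look like this; the registry and image name are placeholders, and the image actually used by CI is whatever the openshift/release configuration references.

```sh
# Placeholder image name; substitute your own registry/organization.
IMG="quay.io/${YOUR_ORG:-example}/odh-manifests-tests:latest"

# tests/Dockerfile is picked up automatically from the build context.
podman build -t "${IMG}" tests/
podman push "${IMG}"
```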

-For greater detail on any of the steps, you can refer to the [OpenShift release README](https://github.com/openshift/release/blob/master/README.md)
+For greater detail on any of the steps, you can refer to the [OpenShift release README](https://github.com/openshift/release/blob/master/README.md)