Make the CI builds install script able to detect and work against clusters with hosted control planes (#369)

* Make the CI builds install script able to detect and work against clusters with hosted control planes [1]

In such a situation, we cannot rely on the ImageContentSourcePolicy/ImageDigestMirrorSet resources,
which will be created but not propagated to the cluster nodes.
As a workaround, this script rebuilds a new IIB
by replacing all references to the internal registries with quay.io.
It likewise rebuilds new operator bundles,
because the manifests inside them might also contain references to
internal registries.
It pushes those new images into the internal cluster image registry.

This should make this work on HyperShift, ROSA or IBM Cloud clusters.
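The reference rewrite described above can be sketched as follows (the function name and sample image references are illustrative, not taken from the script or a real IIB; the substitution patterns mirror the ones the script applies):

```shell
#!/usr/bin/env bash
# Sketch: map internal Red Hat registry references to their public quay.io
# equivalents, as the install script does when rebuilding bundles and the IIB.
rewrite_ref() {
  local ref="$1"
  # Order matters: handle the stage registry before the non-stage one.
  ref="${ref/registry.stage.redhat.io/quay.io}"
  ref="${ref/registry.redhat.io/quay.io}"
  ref="${ref/registry-proxy.engineering.redhat.com\/rh-osbs\/rhdh-/quay.io\/rhdh\/}"
  printf '%s\n' "$ref"
}

rewrite_ref "registry.redhat.io/rhdh/rhdh-hub-rhel9@sha256:abc"
# -> quay.io/rhdh/rhdh-hub-rhel9@sha256:abc
```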

NOTE: We might be able to reuse that logic to make the airgap script work on such clusters. But this can be done in a follow-up step. [2]

[1] https://issues.redhat.com/browse/RHIDP-3205
[2] https://issues.redhat.com/browse/RHIDP-4415

* Remove limitation note on clusters with hosted control planes in the docs

* Fix IDMS / ICSP creation on regular OCP clusters

* Update prerequisites

* Ignore TLS cert checks when pushing images to the internal cluster registry

It might be exposed over an insecure route,
e.g., on HyperShift clusters launched with `launch 4.x`

* Fix `podman create` not working with certain versions of Podman

`--entrypoint` is needed on Podman v4 if the container image
does not define a CMD or ENTRYPOINT.
Podman v5 does not require it.

* Make it possible to install the Operator even when the internal registry is exposed over an insecure route

The solution is to use the internal registry service and port,
which are trusted inside the cluster.

This fixes the issue with HyperShift clusters provisioned on Cluster Bot (`launch 4.x`).
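That last fix can be illustrated with a small sketch (hostnames are hypothetical): the same pushed image gets two references, an external one used when pushing through the default route, and an in-cluster one, built from the registry Service address, used in the manifests, mirroring the script's `newBundleImage` / `newBundleImageAsInt` pair:

```shell
#!/usr/bin/env bash
# Illustrative: derive the in-cluster reference for an image that was pushed
# through the (possibly insecure) external default route of the registry.
external_ref="default-route-openshift-image-registry.apps.example.com/rhdh/rhdh-operator-bundle:abc123"
internal_registry_url="image-registry.openshift-image-registry.svc:5000"
# Strip the external host and prepend the trusted in-cluster Service address.
in_cluster_ref="${internal_registry_url}/${external_ref#*/}"
echo "$in_cluster_ref"
# -> image-registry.openshift-image-registry.svc:5000/rhdh/rhdh-operator-bundle:abc123
```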
rm3l authored Oct 28, 2024
1 parent 8a7e0fe commit 5ad9fa2
Showing 2 changed files with 162 additions and 37 deletions.
16 changes: 9 additions & 7 deletions .rhdh/docs/installing-ci-builds.adoc
@@ -1,26 +1,28 @@
== Installing CI builds of Red Hat Developer Hub


*Prerequisites*

* You are logged in as an administrator on the OpenShift web console.
* You have configured the appropriate roles and permissions within your project to create an application. See the link:https://docs.openshift.com/container-platform/4.14/applications/index.html[Red Hat OpenShift documentation on Building applications] for more details.
* `oc`. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-installing-cli_cli-developer-commands[Installing the OpenShift CLI].
* You are logged in as an administrator using `oc login`. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-logging-in_cli-developer-commands[Logging in to the OpenShift CLI] or link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-logging-in-web_cli-developer-commands[Logging in to the OpenShift CLI using a web browser].
* `skopeo`. See link:https://github.com/containers/skopeo/blob/main/install.md[Installing Skopeo].
* `opm`. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/opm-cli[opm CLI].
* `podman`. See link:https://podman.io/docs/installation[Podman Installation Instructions].
* `sed`. See link:https://www.gnu.org/software/sed/[GNU sed].

*Procedure*

. Run the link:../scripts/install-rhdh-catalog-source.sh[installation script] to create the RHDH Operator CatalogSource in your cluster. By default, it installs the Release Candidate or GA version (from the `release-1.yy` branch), but the `--next` option allows to install the current development build (from the `main` branch). For example:
+
[source,console]
----
cd /tmp
curl -sSLO https://raw.githubusercontent.com/redhat-developer/rhdh-operator/main/.rhdh/scripts/install-rhdh-catalog-source.sh
chmod +x install-rhdh-catalog-source.sh

# install catalog source and operator subscription, for the latest downstream stable, RC or GA build from the release-1.yy branch
./install-rhdh-catalog-source.sh --latest --install-operator rhdh

# OR, install catalog source and operator subscription, for the next downstream CI build from the main branch
./install-rhdh-catalog-source.sh --next --install-operator rhdh
----

183 changes: 153 additions & 30 deletions .rhdh/scripts/install-rhdh-catalog-source.sh
@@ -106,14 +106,30 @@ TMPDIR=$(mktemp -d)
# shellcheck disable=SC2064
trap "rm -fr $TMPDIR" EXIT

CATALOGSOURCE_NAME="${TO_INSTALL}-${OLM_CHANNEL}"
DISPLAY_NAME_SUFFIX="${TO_INSTALL}"

# Add CatalogSource for the IIB
if [ -z "$TO_INSTALL" ]; then
IIB_NAME="${UPSTREAM_IIB##*:}"
IIB_NAME="${IIB_NAME//_/-}"
IIB_NAME="${IIB_NAME//./-}"
IIB_NAME="$(echo "$IIB_NAME" | tr '[:upper:]' '[:lower:]')"
CATALOGSOURCE_NAME="rhdh-iib-${IIB_NAME}-${OLM_CHANNEL}"
DISPLAY_NAME_SUFFIX="${IIB_NAME}"
fi

function install_regular_cluster() {
# A regular cluster should support ImageContentSourcePolicy/ImageDigestMirrorSet resources
ICSP_URL="quay.io/rhdh/"
ICSP_URL_PRE=${ICSP_URL%%/*}

# for 1.4+, use IDMS instead of ICSP
# TODO https://issues.redhat.com/browse/RHIDP-4188 if we onboard 1.3 to Konflux, use IDMS for latest too
if [[ "$IIB_IMAGE" == *"next"* ]]; then
echo "[INFO] Adding ImageDigestMirrorSet to resolve unreleased images on registry.redhat.io from quay.io" >&2
echo "---
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
name: ${ICSP_URL_PRE//./-}
@@ -123,13 +139,17 @@ spec:
mirrors:
- ${ICSP_URL}rhdh-hub-rhel9
- source: registry.redhat.io/rhdh/rhdh-rhel9-operator
mirrors:
- ${ICSP_URL}rhdh-rhel9-operator
" > "$TMPDIR/ImageDigestMirrorSet_${ICSP_URL_PRE}.yml" && oc apply -f "$TMPDIR/ImageDigestMirrorSet_${ICSP_URL_PRE}.yml"
else
echo "[INFO] Adding ImageContentSourcePolicy to resolve references to images not on quay.io as if from quay.io"
# echo "[DEBUG] ${ICSP_URL_PRE}, ${ICSP_URL_PRE//./-}, ${ICSP_URL}"
echo "apiVersion: operator.openshift.io/v1alpha1
- source: registry-proxy.engineering.redhat.com/rh-osbs/rhdh-rhdh-operator-bundle
mirrors:
- ${ICSP_URL}rhdh-operator-bundle
" > "$TMPDIR/ImageDigestMirrorSet_${ICSP_URL_PRE}.yml" && oc apply -f "$TMPDIR/ImageDigestMirrorSet_${ICSP_URL_PRE}.yml" >&2
else
echo "[INFO] Adding ImageContentSourcePolicy to resolve references to images not on quay.io as if from quay.io" >&2
# echo "[DEBUG] ${ICSP_URL_PRE}, ${ICSP_URL_PRE//./-}, ${ICSP_URL}"
echo "---
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
name: ${ICSP_URL_PRE//./-}
@@ -187,29 +207,131 @@ spec:
- mirrors:
- registry.redhat.io
source: registry-proxy.engineering.redhat.com
" > "$TMPDIR/ImageContentSourcePolicy_${ICSP_URL_PRE}.yml" && oc apply -f "$TMPDIR/ImageContentSourcePolicy_${ICSP_URL_PRE}.yml"
fi
" > "$TMPDIR/ImageContentSourcePolicy_${ICSP_URL_PRE}.yml" && oc apply -f "$TMPDIR/ImageContentSourcePolicy_${ICSP_URL_PRE}.yml" >&2
fi

printf "%s" "${IIB_IMAGE}"
}

function install_hosted_control_plane_cluster() {
# Clusters with a hosted control plane do not propagate ImageContentSourcePolicy/ImageDigestMirrorSet resources
# to the underlying nodes, so mirrors for the internal registries are never applied there.
internal_registry_url="image-registry.openshift-image-registry.svc:5000"
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge >&2
my_registry=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
podman login -u kubeadmin -p "$(oc whoami -t)" --tls-verify=false "$my_registry" >&2
if oc -n openshift-marketplace get secret internal-reg-auth-for-rhdh &> /dev/null; then
oc -n openshift-marketplace delete secret internal-reg-auth-for-rhdh >&2
fi
if oc -n openshift-marketplace get secret internal-reg-ext-auth-for-rhdh &> /dev/null; then
oc -n openshift-marketplace delete secret internal-reg-ext-auth-for-rhdh >&2
fi
oc -n openshift-marketplace create secret docker-registry internal-reg-ext-auth-for-rhdh \
--docker-server=${my_registry} \
--docker-username=kubeadmin \
--docker-password=$(oc whoami -t) \
--docker-email="admin@internal-registry-ext.example.com" >&2
oc -n openshift-marketplace create secret docker-registry internal-reg-auth-for-rhdh \
--docker-server=${internal_registry_url} \
--docker-username=kubeadmin \
--docker-password=$(oc whoami -t) \
--docker-email="admin@internal-registry.example.com" >&2
oc registry login --registry="$my_registry" --auth-basic="kubeadmin:$(oc whoami -t)" >&2
for ns in rhdh-operator rhdh; do
# To be able to push images under this scope in the internal image registry
if ! oc get namespace "$ns" > /dev/null; then
oc create namespace "$ns" >&2
fi
oc adm policy add-cluster-role-to-user system:image-signer system:serviceaccount:${ns}:default >&2 || true
done
oc policy add-role-to-user system:image-puller system:serviceaccount:openshift-marketplace:default -n openshift-marketplace >&2 || true
oc policy add-role-to-user system:image-puller system:serviceaccount:rhdh-operator:default -n rhdh-operator >&2 || true

echo ">>> WORKING DIR: $TMPDIR <<<" >&2
mkdir -p "${TMPDIR}/rhdh/rhdh" >&2
opm render "$UPSTREAM_IIB" --output=yaml > "${TMPDIR}/rhdh/rhdh/render.yaml"
pushd "${TMPDIR}" >&2
for bundleImg in $(cat "${TMPDIR}/rhdh/rhdh/render.yaml" | grep -E '^image: .*operator-bundle' | awk '{print $2}' | uniq); do
originalBundleImg="$bundleImg"
digest="${originalBundleImg##*@sha256:}"
bundleImg="${bundleImg/registry.stage.redhat.io/quay.io}"
bundleImg="${bundleImg/registry.redhat.io/quay.io}"
bundleImg="${bundleImg/registry-proxy.engineering.redhat.com\/rh-osbs\/rhdh-/quay.io\/rhdh\/}"
echo "[DEBUG] $originalBundleImg => $bundleImg" >&2
if podman pull "$bundleImg" >&2; then
mkdir -p "bundles/$digest" >&2
# --entrypoint is needed on Podman v4 if the image defines neither CMD nor ENTRYPOINT; Podman v5 does not require it
containerId=$(podman create --entrypoint='/bin/sh' "$bundleImg" || exit 1)
podman cp $containerId:/metadata "./bundles/${digest}/metadata" >&2
podman cp $containerId:/manifests "./bundles/${digest}/manifests" >&2
podman rm -f $containerId >&2

# Replace the occurrences in the .csv.yaml or .clusterserviceversion.yaml files
for file in "./bundles/${digest}/manifests"/*; do
if [ -f "$file" ]; then
sed -i 's#registry.redhat.io/rhdh#quay.io/rhdh#g' "$file" >&2
sed -i 's#registry.stage.redhat.io/rhdh#quay.io/rhdh#g' "$file" >&2
sed -i 's#registry-proxy.engineering.redhat.com/rh-osbs/rhdh-#quay.io/rhdh/#g' "$file" >&2
fi
done

cat <<EOF > "./bundles/${digest}/bundle.Dockerfile"
FROM scratch
COPY ./manifests /manifests/
COPY ./metadata /metadata/
EOF
pushd "./bundles/${digest}" >&2
newBundleImage="${my_registry}/rhdh/rhdh-operator-bundle:${digest}"
newBundleImageAsInt="${internal_registry_url}/rhdh/rhdh-operator-bundle:${digest}"
podman image build -f bundle.Dockerfile -t "${newBundleImage}" . >&2
podman image push "${newBundleImage}" --tls-verify=false >&2
popd >&2

sed -i "s#${originalBundleImg}#${newBundleImageAsInt}#g" "${TMPDIR}/rhdh/rhdh/render.yaml" >&2
fi
done

local newIndex="${UPSTREAM_IIB/quay.io/"${my_registry}"}"
local newIndexAsInt="${UPSTREAM_IIB/quay.io/"${internal_registry_url}"}"

opm generate dockerfile rhdh/rhdh >&2
podman image build -t "${newIndex}" -f "./rhdh/rhdh.Dockerfile" --no-cache rhdh >&2
podman image push "${newIndex}" --tls-verify=false >&2

printf "%s" "${newIndexAsInt}"
}

# Default to the hosted control plane behavior, which is more likely to work
CONTROL_PLANE_TECH=$(oc get infrastructure cluster -o jsonpath='{.status.controlPlaneTopology}' || \
(echo '[WARN] Could not determine the cluster type => defaulting to the hosted control plane behavior' >&2 && echo 'External'))
IS_HOSTED_CONTROL_PLANE="false"
if [[ "${CONTROL_PLANE_TECH}" == "External" ]]; then
# 'External' indicates that the control plane is hosted externally to the cluster
# and that its components are not visible within the cluster.
IS_HOSTED_CONTROL_PLANE="true"
fi

newIIBImage=${IIB_IMAGE}
if [[ "${IS_HOSTED_CONTROL_PLANE}" = "true" ]]; then
echo "[INFO] Detected a cluster with a hosted control plane"
newIIBImage=$(install_hosted_control_plane_cluster)
else
newIIBImage=$(install_regular_cluster)
fi

echo "[DEBUG] newIIBImage=${newIIBImage}"

echo "apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
name: ${CATALOGSOURCE_NAME}
namespace: ${NAMESPACE_CATALOGSOURCE}
spec:
sourceType: grpc
image: ${newIIBImage}
secrets:
- internal-reg-auth-for-rhdh
- internal-reg-ext-auth-for-rhdh
publisher: IIB testing ${DISPLAY_NAME_SUFFIX}
displayName: IIB testing catalog ${DISPLAY_NAME_SUFFIX}
" > "$TMPDIR"/CatalogSource.yml && oc apply -f "$TMPDIR"/CatalogSource.yml
@@ -242,11 +364,12 @@ spec:
sourceNamespace: ${NAMESPACE_CATALOGSOURCE}
" > "$TMPDIR"/Subscription.yml && oc apply -f "$TMPDIR"/Subscription.yml

OCP_CONSOLE_ROUTE_HOST=$(oc get route console -n openshift-console -o=jsonpath='{.spec.host}')
CLUSTER_ROUTER_BASE=$(oc get ingress.config.openshift.io/cluster '-o=jsonpath={.spec.domain}')
echo "
To install, go to:
https://${OCP_CONSOLE_ROUTE_HOST}/catalog/ns/${NAMESPACE_SUBSCRIPTION}?catalogType=OperatorBackedService
Or run this:
@@ -270,4 +393,4 @@ spec:
Once deployed, Developer Hub will be available at
https://backstage-developer-hub-${NAMESPACE_SUBSCRIPTION}.${CLUSTER_ROUTER_BASE}
"
"
