update: enable operator management
codekow committed Oct 30, 2023
1 parent 9070f9d commit 12c1feb
Showing 22 changed files with 516 additions and 16 deletions.
@@ -41,11 +41,11 @@
  values:
    name: operator-local-storage
    path: components/operators/local-storage/operator/overlays/stable
# - cluster: local
#   url: https://kubernetes.default.svc
#   values:
#     name: operator-openshift-virtualization
#     path: components/operators/kubevirt-hyperconverged/operator/overlays/stable
- cluster: local
  url: https://kubernetes.default.svc
  values:
    name: operator-openshift-virtualization
    path: components/operators/kubevirt-hyperconverged/operator/overlays/stable
- cluster: local
  url: https://kubernetes.default.svc
  values:
@@ -61,20 +61,25 @@
#   values:
#     name: operator-keda
#     path: components/operators/openshift-keda/operator/overlays/stable
# - cluster: local
#   url: https://kubernetes.default.svc
#   values:
#     name: operator-logging
#     path: components/operators/openshift-logging/aggregate/overlays/default
# - cluster: local
#   url: https://kubernetes.default.svc
#   values:
#     name: operator-tekton
#     path: components/operators/openshift-pipelines-operator/overlays/latest
- cluster: local
  url: https://kubernetes.default.svc
  values:
    name: openshift-serverless-operator
    name: operator-logging
    path: components/operators/openshift-logging/aggregate/overlays/default
- cluster: local
  url: https://kubernetes.default.svc
  values:
    name: operator-tekton
    path: components/operators/openshift-pipelines-operator/overlays/latest
- cluster: local
  url: https://kubernetes.default.svc
  values:
    name: operator-velero-oadp
    path: components/operators/redhat-oadp-operator/operator/overlays/stable
- cluster: local
  url: https://kubernetes.default.svc
  values:
    name: operator-openshift-serverless
    path: components/operators/openshift-serverless/aggregate/knative-kafka
- cluster: local
  url: https://kubernetes.default.svc
3 changes: 3 additions & 0 deletions components/operators/redhat-oadp-operator/INFO.md
@@ -0,0 +1,3 @@
# redhat-oadp-operator

The OADP (OpenShift API for Data Protection) operator sets up and installs Data Protection Applications on the OpenShift platform.
34 changes: 34 additions & 0 deletions components/operators/redhat-oadp-operator/README.md
@@ -0,0 +1,34 @@
# OADP Operator

Install OADP Operator.

Do not use the `base` directory directly; you will need to patch the `channel` to match the version of OpenShift you are using, or the version of the operator you want to use.

The current *overlays* available are for the following channels:

* [stable](operator/overlays/stable)
* [stable-1.0](operator/overlays/stable-1.0)
* [stable-1.1](operator/overlays/stable-1.1)

## Usage

If you have cloned the `gitops-catalog` repository, you can install the OADP Operator with the overlay of your choice by running the following from the root (`gitops-catalog`) directory:

```sh
oc apply -k redhat-oadp-operator/operator/overlays/<channel>
```

Or, without cloning:

```sh
oc apply -k https://github.com/redhat-cop/gitops-catalog/redhat-oadp-operator/operator/overlays/<channel>
```

As part of a different overlay in your own GitOps repo:

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/redhat-cop/gitops-catalog/redhat-oadp-operator/operator/overlays/<channel>?ref=main
```
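
The `channel` patch mentioned above can live in the same kustomization. A minimal sketch, assuming the base Subscription is named `redhat-oadp-operator` (check the base before relying on that name):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/redhat-cop/gitops-catalog/redhat-oadp-operator/operator/overlays/stable?ref=main
patches:
  - target:
      kind: Subscription
      name: redhat-oadp-operator  # assumed name; verify against the base manifest
    patch: |-
      - op: replace
        path: /spec/channel
        value: stable-1.1
```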
33 changes: 33 additions & 0 deletions components/operators/redhat-oadp-operator/instance/base/dpa.yaml
@@ -0,0 +1,33 @@
---
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: default
spec:
  configuration:
    velero:
      defaultPlugins:
        - openshift
        - aws
    restic:
      enable: true
  backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: ocp-cluster
          prefix: patch-see-overlay
        config:
          insecureSkipTLSVerify: "false"
          profile: "backupStorage"
          region: us-east-1
        credential:
          key: cloud
          name: cloud-credentials
  snapshotLocations:
    - velero:
        provider: aws
        config:
          region: us-west-2
          profile: "volumeSnapshot"
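
The `prefix: patch-see-overlay` value in the base DPA is a reminder, not a real prefix; an overlay is expected to patch `objectStorage` for the target cluster. A minimal sketch of such an overlay kustomization (the prefix value is illustrative):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: openshift-adp
resources:
- ../../base
patches:
  - target:
      kind: DataProtectionApplication
      name: default
    patch: |-
      - op: replace
        path: /spec/backupLocations/0/velero/objectStorage/prefix
        value: my-cluster  # illustrative per-cluster prefix
```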
@@ -0,0 +1,5 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- dpa.yaml
@@ -0,0 +1,19 @@
---
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
type: Opaque
stringData:
  cloud: |
    [default]
    aws_access_key_id=${AWS_ACCESS_KEY_ID}
    aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
    [backupStorage]
    aws_access_key_id=${AWS_ACCESS_KEY_ID}
    aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
    [volumeSnapshot]
    aws_access_key_id=${AWS_ACCESS_KEY_ID}
    aws_secret_access_key=${AWS_SECRET_ACCESS_KEY}
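
The `cloud` key holds an AWS shared-credentials file, and the profile names must match the `profile` values the DPA references (`backupStorage` for the backup location, `volumeSnapshot` for the snapshot location). A quick sanity check of a rendered file, sketched with the standard library (the key values here are placeholders):

```python
# Sketch: verify a rendered cloud-credentials file contains the
# profiles the DataProtectionApplication refers to.
import configparser

rendered = """\
[default]
aws_access_key_id=AKIAEXAMPLE
aws_secret_access_key=examplesecret
[backupStorage]
aws_access_key_id=AKIAEXAMPLE
aws_secret_access_key=examplesecret
[volumeSnapshot]
aws_access_key_id=AKIAEXAMPLE
aws_secret_access_key=examplesecret
"""

config = configparser.ConfigParser()
config.read_string(rendered)

# The DPA in this commit expects these two named profiles.
for profile in ("backupStorage", "volumeSnapshot"):
    assert config.has_section(profile), f"missing profile: {profile}"
    assert config[profile].get("aws_access_key_id"), f"empty key in {profile}"
print("profiles ok")
```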
@@ -0,0 +1,7 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: openshift-adp

resources:
- ../../base
@@ -0,0 +1,8 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: openshift-adp

resources:
- ../minio
- schedule.yaml
@@ -0,0 +1,93 @@
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-full-backup
spec:
  schedule: 0 1 * * *
  template:
    defaultVolumesToRestic: true
    excludedResources:
      - imagetags.image.openshift.io
      - images.image.openshift.io
      - oauthaccesstokens.oauth.openshift.io
      - oauthauthorizetokens.oauth.openshift.io
      - templateinstances.template.openshift.io
      - clusterserviceversions.operators.coreos.com
      - packagemanifests.packages.operators.coreos.com
      - operatorgroups.operators.coreos.com
      - subscriptions.operators.coreos.com
      - servicebrokers.servicecatalog.k8s.io
      - servicebindings.servicecatalog.k8s.io
      - serviceclasses.servicecatalog.k8s.io
      - serviceinstances.servicecatalog.k8s.io
      - serviceplans.servicecatalog.k8s.io
      - events.events.k8s.io
      - events
    includedNamespaces:
      - '*'
    excludedNamespaces:
      - 'minio'
    snapshotVolumes: false
    ttl: 168h0m0s
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: hourly-object-backup
spec:
  schedule: 17 * * * *
  template:
    excludedResources:
      - imagetags.image.openshift.io
      - images.image.openshift.io
      - oauthaccesstokens.oauth.openshift.io
      - oauthauthorizetokens.oauth.openshift.io
      - templateinstances.template.openshift.io
      - clusterserviceversions.operators.coreos.com
      - packagemanifests.packages.operators.coreos.com
      - operatorgroups.operators.coreos.com
      - subscriptions.operators.coreos.com
      - servicebrokers.servicecatalog.k8s.io
      - servicebindings.servicecatalog.k8s.io
      - serviceclasses.servicecatalog.k8s.io
      - serviceinstances.servicecatalog.k8s.io
      - serviceplans.servicecatalog.k8s.io
      - events.events.k8s.io
      - events
    includedNamespaces:
      - '*'
    snapshotVolumes: false
    ttl: 24h0m0s
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: weekly-full-backup
spec:
  schedule: 0 2 * * 1
  template:
    defaultVolumesToRestic: true
    excludedResources:
      - imagetags.image.openshift.io
      - images.image.openshift.io
      - oauthaccesstokens.oauth.openshift.io
      - oauthauthorizetokens.oauth.openshift.io
      - templateinstances.template.openshift.io
      - clusterserviceversions.operators.coreos.com
      - packagemanifests.packages.operators.coreos.com
      - operatorgroups.operators.coreos.com
      - subscriptions.operators.coreos.com
      - servicebrokers.servicecatalog.k8s.io
      - servicebindings.servicecatalog.k8s.io
      - serviceclasses.servicecatalog.k8s.io
      - serviceinstances.servicecatalog.k8s.io
      - serviceplans.servicecatalog.k8s.io
      - events.events.k8s.io
      - events
    includedNamespaces:
      - '*'
    excludedNamespaces:
      - 'minio'
    snapshotVolumes: false
    ttl: 720h0m0s
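
The three schedules are staggered: `0 1 * * *` fires daily at 01:00, `17 * * * *` every hour at minute 17, and `0 2 * * 1` on Mondays at 02:00. The `ttl` values are Go-style durations; a small sketch converting them into retention periods (hours/minutes/seconds components only, which covers the values used here):

```python
# Sketch: convert the Velero ttl strings above into retention days.
import re
from datetime import timedelta

def parse_ttl(ttl: str) -> timedelta:
    """Parse a Go-style duration such as '168h0m0s' (h/m/s components only)."""
    m = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", ttl)
    if not m or not any(m.groups()):
        raise ValueError(f"unrecognized ttl: {ttl}")
    hours, minutes, seconds = (int(g or 0) for g in m.groups())
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

# The three schedules in this file:
for name, ttl in [("daily-full-backup", "168h0m0s"),
                  ("hourly-object-backup", "24h0m0s"),
                  ("weekly-full-backup", "720h0m0s")]:
    print(f"{name}: backups kept for {parse_ttl(ttl).days} days")
```

So the daily backups are retained for 7 days, the hourly object backups for 1 day, and the weekly full backups for 30 days.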
@@ -0,0 +1,78 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: create-minio-bucket
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: create-minio-bucket
  annotations:
    argocd.argoproj.io/sync-wave: "1"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: create-minio-bucket
---
apiVersion: batch/v1
kind: Job
metadata:
  name: create-minio-bucket
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  backoffLimit: 4
  template:
    spec:
      serviceAccount: create-minio-bucket
      serviceAccountName: create-minio-bucket
      initContainers:
        - name: wait-for-minio
          image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest
          imagePullPolicy: IfNotPresent
          envFrom:
            - secretRef:
                name: data-connection-minio
          command: ["/bin/bash"]
          args:
            - -ec
            - |-
              echo -n "Waiting for ${AWS_S3_ENDPOINT}"
              while ! curl -I "${AWS_S3_ENDPOINT}/minio/health/live" 2>/dev/null; do
                echo -n .
                sleep 5
              done; echo
      containers:
        - name: create-bucket
          image: image-registry.openshift-image-registry.svc:5000/openshift/python:latest
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash"]
          args:
            - -ec
            - |-
              pip install boto3 -q
              cat << 'EOF' | python3
              import os, boto3
              bucket = os.getenv("AWS_S3_BUCKET", "ocp-cluster")
              s3 = boto3.client("s3",
                  endpoint_url=os.getenv("AWS_S3_ENDPOINT", "http://minio.minio.svc:9000"),
                  aws_access_key_id=os.getenv("AWS_ACCESS_KEY_ID", "minioadmin"),
                  aws_secret_access_key=os.getenv("AWS_SECRET_ACCESS_KEY", "minioadmin"))
              if bucket not in [bu["Name"] for bu in s3.list_buckets()["Buckets"]]:
                  s3.create_bucket(Bucket=bucket)
                  print(f'created: {bucket}')
              EOF
          envFrom:
            - secretRef:
                name: data-connection-minio
      restartPolicy: Never