- Info
- Assumptions / Requirements
- Deployed Resource URLs
- Running the Playbook
- Additional Resources
- Backlog for enhancements
## Info

Ansible playbook for provisioning a Debezium demo using my Summit Lab Spring Music application as the "monolith". The Debezium connector is configured to use the Outbox Event Router.
The application is a simple Spring Boot application connected to a MySQL database. We'll install a 3-replica Kafka cluster with Kafka Connect and then install the Debezium MySQL connector.
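To make the moving pieces concrete, here is a rough, hypothetical sketch of a Strimzi `KafkaConnector` resource wiring Debezium's MySQL connector to the Outbox Event Router. All names, hostnames, and table names below are placeholders, not the playbook's actual values:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: spring-music-connector          # hypothetical name
  labels:
    strimzi.io/cluster: my-connect      # must match the KafkaConnect cluster name
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: spring-music-db  # placeholder MySQL service name
    database.port: 3306
    # assumes the file config provider is enabled on the Connect cluster
    database.user: "${file:/opt/kafka/external-configuration/db-credentials/db.properties:username}"
    database.password: "${file:/opt/kafka/external-configuration/db-credentials/db.properties:password}"
    database.server.name: spring-music
    table.include.list: music.outbox_events   # placeholder outbox table
    # Route rows written to the outbox table using Debezium's Outbox Event Router
    transforms: outbox
    transforms.outbox.type: io.debezium.transforms.outbox.EventRouter
```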
Once the events get into Kafka, a Camel K application runs and updates a Red Hat Data Grid cache according to the contents of the event (`ALBUM_CREATED`/`ALBUM_UPDATED`/`ALBUM_DELETED`).
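As an illustration of the cache-updating piece, a minimal Camel K integration in Camel's YAML DSL might look roughly like the sketch below. The topic name, cache name, and the `eventType` header are assumptions; the actual integration in this repo may route the three event types differently:

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: album-cache-sync                # hypothetical name
spec:
  flows:
    - from:
        # placeholder topic produced by the outbox event router; brokers etc. omitted
        uri: kafka:outbox.event.albums
        steps:
          - choice:
              when:
                # assumes the event type is carried in a header named eventType
                - simple: "${header.eventType} == 'ALBUM_DELETED'"
                  steps:
                    - to: infinispan:albums?operation=REMOVE   # placeholder cache name
              otherwise:
                steps:
                  # ALBUM_CREATED / ALBUM_UPDATED both upsert into the cache
                  - to: infinispan:albums?operation=PUT
```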
The database credentials are stored in a `Secret` and then mounted into the Kafka Connect cluster.
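A sketch of how that can look with Strimzi's `externalConfiguration` (names and values here are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials                 # hypothetical name
type: Opaque
stringData:
  db.properties: |
    username=spring-music
    password=changeme
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ... replicas, bootstrapServers, etc. ...
  externalConfiguration:
    volumes:
      - name: db-credentials
        secret:
          secretName: db-credentials   # mounted at /opt/kafka/external-configuration/db-credentials
```

Connect then sees the credentials under `/opt/kafka/external-configuration/db-credentials`, which is what the `${file:...}` references in the connector sketch above assume.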
The Kafka Broker, Kafka Connect, and Kafka Bridge are all authenticated via OAuth 2.0. Red Hat Single Sign-on is installed and used as the authorization server. A new realm is automatically created and provisioned.
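For illustration, a Strimzi OAuth listener pointed at an RH-SSO realm looks roughly like this (the realm name and URLs are placeholders, not the values the playbook provisions):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    replicas: 3
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: oauth
          # placeholder realm/URLs; the playbook provisions its own realm in RH-SSO
          validIssuerUri: https://sso.example.com/auth/realms/demo
          jwksEndpointUri: https://sso.example.com/auth/realms/demo/protocol/openid-connect/certs
          userNameClaim: preferred_username
  # ... zookeeper, entityOperator, etc. ...
```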
All metrics are captured by Prometheus, and there are Grafana dashboards for Kafka, ZooKeeper, and the caches.
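Strimzi surfaces these metrics via the JMX Prometheus exporter. A trimmed sketch (the ConfigMap name and keys are assumptions):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka
spec:
  kafka:
    # Expose Kafka's JMX metrics in Prometheus format
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics          # hypothetical ConfigMap holding the exporter rules
          key: kafka-metrics-config.yml
  zookeeper:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: zookeeper-metrics-config.yml
```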
Once completed, the resulting output of everything in the OpenShift Topology view should look something like
## Assumptions / Requirements

- Ansible >= 2.9 is required for running the playbook
- The OpenShift `sso74-postgresql-persistent` template is installed in the `openshift` namespace
- OperatorHub is available with the required operators
- The `openssl` utility is installed
- The `keytool` utility is installed
## Deployed Resource URLs

All the below resource URLs are suffixed with the apps URL of the cluster (e.g. for an RHPDS environment, `apps.cluster-##GUID##.##GUID##.example.opentlc.com`).
- OpenShift Console
- Kafdrop
- Demo App
- Red Hat Single Sign-on
- Prometheus
- Grafana
- Red Hat Data Grid Console
  - https://albums-rhdg-external-demo.##CLUSTER_SUFFIX##
  - Username: `developer`
  - Password: `developer`
## Running the Playbook

To run this, you would do something like:

    $ ansible-playbook -v main.yml -e ocp_api_url=<OCP_API_URL> -e ocp_admin_pwd=<OCP_ADMIN_USER_PASSWORD>
You'll need to replace the following variables with appropriate values:
| Variable | Description |
|---|---|
| `<OCP_API_URL>` | API URL of your cluster |
| `<OCP_ADMIN_USER_PASSWORD>` | Password for the OCP admin account |
This playbook also makes some assumptions about the cluster. The biggest assumption is that the playbook is installing everything into an empty cluster. The following variables can be overridden with the `-e` switch when running the playbook to customize some of the installation locations and configuration.
| Description | Variable | Default Value |
|---|---|---|
| OpenShift admin user name | `ocp_admin` | `opentlc-mgr` |
| OCP user to install demo into | `ocp_proj_user` | `user1` |
| OCP user password for above user | `ocp_proj_user_pwd` | `openshift` |
| Project name to install demo into | `proj_nm_demo` | `demo` |
| Project name to install ALL global operators into | `proj_nm_rh_operators` | `openshift-operators` |
| Project name to install AMQ Streams operator into | `proj_nm_amq_streams_operator` | `{{ proj_nm_rh_operators }}` |
| Project name to install Container Security operator into | `proj_nm_container_security_operator` | `{{ proj_nm_rh_operators }}` |
| Project name to install Serverless operator into | `proj_nm_serverless_operator` | `{{ proj_nm_rh_operators }}` |
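If you are overriding several of these at once, one convenient option (standard Ansible behavior, shown here as a sketch; the file name is arbitrary) is to collect the overrides in a vars file and pass it with `-e @my-vars.yml`:

```yaml
# my-vars.yml -- example overrides (values are illustrative)
ocp_proj_user: user5
ocp_proj_user_pwd: s3cret
proj_nm_demo: debezium-demo
```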
## Additional Resources

- MySQL Database Template
- AMQ Streams Template
  - Includes `Kafka`, `KafkaConnect`, `KafkaConnector`, and `KafkaBridge` custom resources
- Kafdrop Template
- Red Hat SSO Realm Config
- Red Hat Data Grid Config
- Camel K Config
- Guide used for help in setting this all up
  - Thanks @sigreen!
## Backlog for enhancements

PRs welcome!
- Enabling schema registry and using Avro serialization/deserialization
- Add authorization on the topics to the different clients
- Getting Kafdrop to authenticate with the broker
  - This will allow removal of the `plain` listener on the broker
- Integrate the `KafkaBridge` with something (3scale?)
- Build some kind of consumer(s) to read the messages & do something with them
  - Currently there is a Red Hat Data Grid cache that gets updated by a Camel K application. I'd like to build another application that uses the cache data in some way.