A prototype built with Fuse (on Spring Boot) that integrates JPA (with an external DB), JMS (AMQ Broker), Drools (remote invocation), REST APIs and Swagger.
- install OKD 3.11 following these instructions
oc cluster up
oc login -u system:admin
oc project openshift
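Once the cluster is up (oc cluster up prints the web console URL), the state of the local cluster can be checked with:
oc cluster status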
- Set up and download an Openshift registry service account following the instructions from https://access.redhat.com/RegistryAuthentication#registry-service-accounts-for-shared-environments-4
(<secret> is the name of the downloaded OpenShift registry service account)
oc create -f <secret>.yaml
oc secrets link default <secret> --for=pull
oc secrets link builder <secret> --for=pull
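To double-check that the pull secret exists and is linked, standard oc commands can be used; the Image pull secrets section in the service account output should list the secret:
oc get secret <secret>
oc describe sa default
oc describe sa builder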
- Download the RHDM 7.3 OpenShift templates from https://access.redhat.com/jbossnetwork/restricted/listSoftware.html?downloadType=distributions&product=rhdm&productChanged=yes and unzip the archive, then run:
oc create -f ./Downloads/rhdm-7.3-openshift-templates/rhdm73-image-streams.yaml
oc import-image rhdm73-decisioncentral-openshift:1.0
oc import-image rhdm73-kieserver-openshift:1.0
oc replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-amq-7-broker-openshift-image/73-7.3.0.GA/amq-broker-7-image-streams.yaml
oc import-image amq-broker-7/amq-broker-73-openshift --from=registry.redhat.io/amq-broker-7/amq-broker-73-openshift --confirm
for template in amq-broker-73-basic.yaml amq-broker-73-ssl.yaml amq-broker-73-custom.yaml amq-broker-73-persistence.yaml amq-broker-73-persistence-ssl.yaml amq-broker-73-persistence-clustered.yaml amq-broker-73-persistence-clustered-ssl.yaml; do oc replace --force -f https://raw.githubusercontent.com/jboss-container-images/jboss-amq-7-broker-openshift-image/73-7.3.0.GA/templates/${template}; done
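To confirm that the RHDM and AMQ image streams were imported (assuming they all ended up in the openshift namespace selected above):
oc get imagestreams -n openshift | grep -E 'rhdm73|amq-broker'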
Instructions from https://access.redhat.com/documentation/en-us/red_hat_fuse/7.3/html-single/fuse_on_openshift_guide/index#get-started-admin-install
BASEURL=https://raw.githubusercontent.com/jboss-fuse/application-templates/application-templates-2.1.fuse-730065-redhat-00002
oc create -n openshift -f ${BASEURL}/fis-image-streams.json
for template in eap-camel-amq-template.json eap-camel-cdi-template.json eap-camel-cxf-jaxrs-template.json eap-camel-cxf-jaxws-template.json eap-camel-jpa-template.json karaf-camel-amq-template.json karaf-camel-log-template.json karaf-camel-rest-sql-template.json karaf-cxf-rest-template.json spring-boot-camel-amq-template.json spring-boot-camel-config-template.json spring-boot-camel-drools-template.json spring-boot-camel-infinispan-template.json spring-boot-camel-rest-sql-template.json spring-boot-camel-teiid-template.json spring-boot-camel-template.json spring-boot-camel-xa-template.json spring-boot-camel-xml-template.json spring-boot-cxf-jaxrs-template.json spring-boot-cxf-jaxws-template.json ; do oc create -n openshift -f ${BASEURL}/quickstarts/${template}; done
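To confirm that the Fuse quickstart templates are now available in the openshift namespace:
oc get templates -n openshift | grep -E 'camel|cxf'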
- Click the Create Project button to create a new project
- Use migration-analytics as the value for the name
- Once the project has been created and added to the list of available projects, click on migration-analytics
- From the migration-analytics Overview page, click the Import YAML / JSON button
- Copy and paste the content of analytics_template.json (the Raw button gives you the plain-text version of the file)
- Click the Create button
- Check that Process the template is selected (there is no need to select Save template)
- Click the Continue button
- Click the Create button (there is no need to change the form values unless you want to customize them)
The same setup can also be done entirely from the command line:
oc login -u developer
oc new-project migration-analytics
oc create -f <secret>.yaml
oc secrets link default <secret> --for=pull
oc secrets link builder <secret> --for=pull
oc process -f https://raw.githubusercontent.com/project-xavier/xavier-integration/master/src/main/resources/okd/analytics_template.json | oc create -f -
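One way to follow the deployment of the objects created by the template is to watch the pods in the project:
oc get pods -n migration-analytics -w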
- Go to the Applications -> Routes page and click the URL in the Hostname column beside the myapp-rhdmcentr service
- Log in with username adminUser and password SxNhwF2!
- Click the Design link
- Click the Import Project button
- In the Repository URL field paste https://github.com/project-xavier/xavier-analytics.git
- Select the sample-analytics box and click the OK button on the upper right side
- Once the import has finished, click the Build & Install button from the Build menu on the upper right
- Once the build has completed successfully, click the Deploy button
- Go to the Resources -> Secrets page and select the postgresql secret from the list of secrets
- Select the Reveal Secret link to get the database-name value
- Go to the PostgreSQL pod's Terminal tab and log in with psql <database-name> (a CLI-only alternative is sketched below)
- To get the persisted report entries, execute
select * from report_data_model;
- To DELETE ALL the report entries, execute
truncate table report_data_model;
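A rough CLI-only equivalent of the steps above, assuming the PostgreSQL deployment config created by the template is named postgresql (check the actual name with oc get dc -n migration-analytics):
oc get secret postgresql -n migration-analytics -o jsonpath='{.data.database-name}' | base64 --decode
oc rsh dc/postgresql psql <database-name> -c 'select * from report_data_model;'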
AMQ Web Console
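The broker created by the template exposes its web console through an OpenShift route; the exact route name depends on the template used, so one way to find it is to list the routes in the project and open the one pointing at the broker console:
oc get routes -n migration-analytics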
To enable the DEBUG level for logging, add the environment variable logging.level.org (or whatever package you want) with the value DEBUG to the analytics-integration deployment configuration.
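From the CLI this can be done with oc set env, assuming the deployment configuration is indeed named analytics-integration (if the platform rejects the dotted variable name, the Spring Boot relaxed-binding form LOGGING_LEVEL_ORG=DEBUG is equivalent):
oc set env dc/analytics-integration logging.level.org=DEBUG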
oc delete all,pvc,secrets -l application=migration-analytics -n migration-analytics
- SonarCloud dashboard: https://sonarcloud.io/dashboard?id=project-xavier_xavier-integration
- To run the analysis locally: mvn clean verify -Psonar -Dsonar.login={{token generated for the user on SonarCloud}}
In order to test AWS S3, we can download and install the aws-cli command.
To interact as admin with the S3 bucket we can use the S3Api; the documentation for interacting as a regular user is S3.
From the Camel perspective we have the test class MainRouteBuilder_S3Test, which can be used to test locally against the AWS S3 servers, replacing the credentials headers.
A few notes (a sketch for running the test follows this list):
- we need to add the HTTP headers in order to download the file from the API call
- we need to stub (disable) the aws-s3 component: it starts eagerly and would crash because the tests do not have the credentials. This is fixed in Camel 3 in the DefaultEndpoint class with its lazyStartProducer option (but that will only be available in Fuse 8)
- we need to specify deleteAfterRead=false in the call, otherwise the resource will be deleted after being read
- same concept but different names, for Camel consistency: KEY (upload) / FILENAME (download)
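To run just that test class locally, assuming the project uses the standard Maven Surefire setup, something like the following should work once the credentials headers in the test have been replaced with real ones:
mvn test -Dtest=MainRouteBuilder_S3Test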
Snippets of calls:
- Configure aws with your credentials (region=us-east-1, bucket=xavier-dev):
aws configure
- List files in a bucket:
aws s3 ls s3://xavier-dev --human
- List buckets:
aws s3 ls
- Upload a file:
aws s3 cp cfme_inventory_0.json s3://xavier-dev
- Download a file:
aws s3 cp s3://xavier-dev/cfme_inventory_0.json cfme_inventory_0.json
- Remove files in a bucket:
aws s3 rm s3://xavier-dev/ --recursive
- See the encryption status of a bucket:
aws s3api get-bucket-encryption --bucket xavier-dev
- Enable encryption on a bucket:
aws s3api put-bucket-encryption --bucket xavier-dev --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
For the end-to-end tests, a Docker container test framework is used: Testcontainers.
If you are using the Fedora docker package, you need to set the ryuk.container.privileged=true property in the local ~/.testcontainers.properties file in order to allow Ryuk to run as a privileged container.
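For reference, the resulting file would simply contain that property:
# ~/.testcontainers.properties
ryuk.container.privileged=true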