The ejb-txn-remote-call quickstart demonstrates remote transactional EJB calls over two WildFly Application Servers. The remote side forms an HA cluster.
This quickstart demonstrates how EJB remote calls propagate a JTA transaction across WildFly Application Servers. It also demonstrates transaction recovery, which is run on both servers when a failure occurs.
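To make the idea concrete, the following sketch shows what such a transactional remote call looks like in code. The package, class, and interface names (org.example, RemoteWork, ClientBean, WorkBean) are illustrative assumptions, not the quickstart's actual sources:

```java
package org.example; // assumed package; not part of the quickstart

import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;
import javax.naming.InitialContext;
import javax.naming.NamingException;

// Assumed remote business interface implemented by the bean on server2/server3.
// In a real project this would be a public interface shared by both deployments.
interface RemoteWork {
    String doWork();
}

// Hypothetical bean deployed on server1. The method runs in a JTA transaction, and
// the transaction context is propagated over the EJB remoting call, so the receiving
// server becomes a subordinate participant of the same transaction.
@Stateless
public class ClientBean {

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public String callRemoteServer() throws NamingException {
        RemoteWork remote = (RemoteWork) new InitialContext()
                .lookup("ejb:/server//WorkBean!org.example.RemoteWork");
        // Both invocations join the transaction started for this method; at commit
        // time server1 drives a two-phase commit across all enlisted participants.
        return remote.doWork() + ", " + remote.doWork();
    }
}
```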
This quickstart contains two Maven projects.
The first Maven project represents the sender side and is intended to be deployed to the first WildFly server (server1).
The second project represents the receiver side and is intended to be deployed to the other two WildFly servers (server2 and server3). The two projects must not be deployed to the same server.
Project | Description |
---|---|
client | The application deployed to the first WildFly server. Users can interact with this application through REST endpoints, which start remote EJB calls toward the server application. |
server | The application deployed to the second and third WildFly servers. This application receives the remote EJB calls from the client application. |
This quickstart demonstrates its functionality on bare metal, using the WildFly Maven plugin, and on OpenShift.
The application this project produces is designed to be run on WildFly Application Server 32 or later.
All you need to build this project is Java 11.0 (Java SDK 11) or later and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.
This quickstart requires that you clone your WILDFLY_HOME installation directory and run three servers. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.
In the following instructions, replace WILDFLY_HOME_1 with the path to your first WildFly server, and replace WILDFLY_HOME_2 and WILDFLY_HOME_3 with the paths to your cloned WildFly servers.
When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
The EJB remote call propagates the transaction from the client application to the server application. The remote call hits one of the two servers where the server application is deployed.
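On the receiving side, this behaviour can be sketched as a stateless bean that joins the propagated transaction and reports which cluster node served the call (the endpoints described later compare exactly this kind of hostname information). Again, the names are illustrative assumptions, not the quickstart's actual code:

```java
package org.example; // assumed package, matching the previous sketch

import java.net.InetAddress;
import java.net.UnknownHostException;
import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;

// Hypothetical receiving bean, deployed on server2 and server3 (the HA cluster).
// REQUIRED joins the JTA transaction propagated by the caller, or starts a new one
// when the bean is invoked without a transaction context.
@Stateless
public class WorkBean implements RemoteWork {

    @Override
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public String doWork() {
        // Transactional work (for example an update through the PostgreSQL datasource
        // configured later in this guide) would happen here and would be committed or
        // rolled back together with the caller's transaction.
        String nodeName = System.getProperty("jboss.node.name", "unknown");
        try {
            return InetAddress.getLocalHost().getHostName() + " / " + nodeName;
        } catch (UnknownHostException e) {
            return nodeName;
        }
    }
}
```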
First of all, three WildFly servers need to be configured. Then, the client application gets deployed to the first server (server1), while the server application gets deployed to the other two WildFly servers (server2 and server3, which are configured as a cluster).
The easiest way to start multiple instances of WildFly on a local computer is to copy the WildFly installation directory to three separate directories.
The installation directories are named:
-
WILDFLY_HOME_1 for server1
-
WILDFLY_HOME_2 for server2
-
WILDFLY_HOME_3 for server3
Given that the WildFly installation directory is identified by $WILDFLY_HOME:
cp -r $WILDFLY_HOME server1; \
WILDFLY_HOME_1="$PWD/server1"
cp -r $WILDFLY_HOME server2; \
WILDFLY_HOME_2="$PWD/server2"
cp -r $WILDFLY_HOME server3; \
WILDFLY_HOME_3="$PWD/server3"
To successfully process EJB remote calls from server1 to either server2 or server3, a user that authenticates the EJB remote calls must be created on the receiving servers.
Run the following procedure in the WILDFLY_HOME_2 and WILDFLY_HOME_3 directories to create the user for server2 and server3.
This quickstart uses secured application interfaces and requires that you create the following application user to access the running application.
UserName | Realm | Password | Roles |
---|---|---|---|
quickstartUser | ApplicationRealm | quickstartPwd1! | |
To add the application user, open a terminal and type the following command:
$ WILDFLY_HOME/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!'
Note
|
For Windows, use the WILDFLY_HOME\bin\add-user.bat script.
|
Note
|
When you add the user, the output of the add-user command includes the line: To represent the user add the following to the server-identities definition <secret value="cXVpY2tzdGFydFB3ZDEh" />. This secret value is the Base64-encoded password of the quickstartUser.
|
As this quickstart performs transactional work against a database, a new database must be created. For the purpose of this quickstart, a simple PostgreSQL container will be used:
podman run -p 5432:5432 --rm -ePOSTGRES_DB=test -ePOSTGRES_USER=test -ePOSTGRES_PASSWORD=test postgres:9.4 -c max-prepared-transactions=110 -c log-statement=all
The WildFly servers need to be configured to connect to the database. First of all, a JDBC driver needs to be installed as a JBoss module.
The following command (along with packaging the client and the server applications) downloads the PostgreSQL driver automatically through Maven:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call;
mvn clean package
Then, the PostgreSQL driver needs to be loaded as a JBoss module in all WildFly servers:
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server,\
module add --name=org.postgresql.jdbc \
--resources=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/target/postgresql/postgresql.jar"
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh "embed-server,\
module add --name=org.postgresql.jdbc \
--resources=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/target/postgresql/postgresql.jar"
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh "embed-server,\
module add --name=org.postgresql.jdbc \
--resources=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/target/postgresql/postgresql.jar"
Moreover, the PostgreSQL JDBC driver needs to be registered in the datasources subsystem of all WildFly servers.
For server1, the configuration file standalone.xml will be used.
For server2 and server3, the configuration file standalone-ha.xml will be used.
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server --server-config=standalone.xml,\
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh "embed-server --server-config=standalone-ha.xml,\
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh "embed-server --server-config=standalone-ha.xml,\
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
Finally, it is time to run the scripts for adding the PostgreSQL datasource to the WildFly servers:
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
--file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/postgresql-datasource.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/cli.local.properties
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
--file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/postgresql-datasource.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/cli.local.properties
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
--file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/postgresql-datasource.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/cli.local.properties
EJB remote calls from server1 to either server2 or server3 need to be authenticated. To achieve this configuration, the script ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/remoting-configuration.cli will be executed on server1.
cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh -DremoteServerUsername="quickstartUser" -DremoteServerPassword="quickstartPwd1!" \
--file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/remoting-configuration.cli \
--properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/cli.local.properties
Note
|
For Windows, use the bin\jboss-cli.bat script.
|
Running remoting-configuration.cli results in the creation of:
-
A remote outbound socket that points to the port on server2/server3 where EJB remoting endpoints can be reached.
-
A remote outbound connection that can be referenced in the war deployment with the jboss-ejb-client.xml descriptor (see ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/src/main/webapp/WEB-INF/jboss-ejb-client.xml).
-
An authentication context auth_context that is used by the newly created remoting connection remote-ejb-connection; the authentication context uses the same username and password created for server2 and server3 (an illustrative sketch of what such a context represents follows this list).
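The quickstart configures the authentication context purely through the CLI script above. As an illustration of what that context represents, the Elytron client API can express the same credentials programmatically; the class below is a hypothetical example, not part of the quickstart:

```java
import org.wildfly.security.auth.client.AuthenticationConfiguration;
import org.wildfly.security.auth.client.AuthenticationContext;
import org.wildfly.security.auth.client.MatchRule;

// Illustrative only: an authentication context that supplies the quickstartUser
// credentials for outbound calls, similar to what auth_context provides once
// remoting-configuration.cli has been executed on server1.
public class AuthContextExample {

    public static void main(String[] args) {
        AuthenticationConfiguration config = AuthenticationConfiguration.empty()
                .useName("quickstartUser")
                .usePassword("quickstartPwd1!");
        AuthenticationContext context = AuthenticationContext.empty()
                .with(MatchRule.ALL, config);

        // Code executed inside run() performs its remote calls with these credentials.
        context.run(() -> {
            // e.g. perform the JNDI lookup and the remote EJB invocation here
        });
    }
}
```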
At this point, the configuration of the WildFly servers is complete.
server1 must be started with the standalone.xml configuration, while server2 and server3 must be started with the standalone-ha.xml configuration to form a cluster.
As all WildFly servers will run in the same bare metal environment, a port offset is applied to server2 and server3. Moreover, each server has to define a unique transaction node identifier and JBoss node name.
Start each server in a separate terminal.
cd $WILDFLY_HOME_1; \
./bin/standalone.sh -c standalone.xml -Djboss.tx.node.id=server1 -Djboss.node.name=server1 -Dwildfly.config.url=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/configuration/custom-config.xml
cd $WILDFLY_HOME_2; \
./bin/standalone.sh -c standalone-ha.xml -Djboss.tx.node.id=server2 -Djboss.node.name=server2 -Djboss.socket.binding.port-offset=100
cd $WILDFLY_HOME_3; \
./bin/standalone.sh -c standalone-ha.xml -Djboss.tx.node.id=server3 -Djboss.node.name=server3 -Djboss.socket.binding.port-offset=200
Note
|
To enable the recovery of remote transaction failures, the configuration file custom-config.xml should be loaded into server1; this configuration authenticates server1 against server2/server3.
|
Note
|
For Windows, use the bin\standalone.bat script.
|
-
With all WildFly servers configured and running, the client and server applications can be deployed.
-
The whole project can be built using the following commands:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/
mvn clean package
-
Then, the client application can be deployed using the following commands:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client
mvn wildfly:deploy
-
Lastly, the server application can be deployed using the following commands:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server
mvn wildfly:deploy -Dwildfly.port=10090
mvn wildfly:deploy -Dwildfly.port=10190
These commands use the WildFly Maven plugin to connect to the running instances of WildFly and deploy the war archives to the servers.
-
If errors occur, verify that the WildFly servers are running and that they are configured properly.
-
Verify that all deployments are published to all three servers.
-
On server1, check the log to confirm that the client/target/client.war archive is deployed.
... INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 76) WFLYUT0021: Registered web context: '/client' for server 'default-server'
INFO [org.jboss.as.server] (management-handler-thread - 2) WFLYSRV0010: Deployed "client.war" (runtime-name : "client.war")
-
On server2 and server3, check the log to confirm that the server/target/server.war archive is deployed.
... INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 86) WFLYUT0021: Registered web context: '/server' for server 'default-server'
INFO [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "server.war" (runtime-name : "server.war")
-
Verify that server2 and server3 formed an HA cluster by checking the server log of either server2 or server3, or both.
[org.infinispan.CLUSTER] () ISPN000094: Received new cluster view for channel ejb: [server2|1] (2) [server2, server3]
[org.infinispan.CLUSTER] () ISPN100000: Node server3 joined the cluster
...
INFO [org.infinispan.CLUSTER] () [Context=server.war/infinispan] ISPN100010: Finished rebalance with members [server2, server3], topology id 5
Once the WildFly servers are configured and started, and the quickstart artifacts are deployed, it is possible to invoke the endpoints of server1, which generate EJB remote invocations against the HA cluster formed by server2 and server3.
The following table defines the available endpoints, and their expected behaviour.
Note
|
The endpoints return data in JSON format. You can use the jq utility (as in the examples below) to pretty-print the output.
|
Note
|
On Windows, |
The HTTP invocations return the hostnames of the contacted servers.
URL | Behaviour | Expectation |
---|---|---|
 | Two invocations under the transaction context started on server1. | The two returned hostnames must be the same. |
 | Several remote invocations to a stateless EJB without a transaction context. The EJB remote call is configured from the remote outbound connection. | The list of the returned hostnames should contain occurrences of both server2 and server3. |
 | Two invocations under the transaction context started on server1. | The returned hostnames must be the same. |
 | Two invocations under the transaction context started on server1. | The returned hostnames must be the same. |
 | Two invocations under the transaction context started on server1. | The returned hostnames must be the same. |
client/remote-outbound-fail-stateless | An invocation under the transaction context started on server1. | When the recovery manager finishes the work, all the transaction resources are committed. |
The EJB call to the endpoint client/remote-outbound-fail-stateless simulates the presence of an intermittent network error happening at the commit phase of the two-phase commit protocol (2PC).
The transaction recovery manager periodically tries to recover the unfinished work, and only when this attempt is successful is the transaction completed (which makes the update in the database visible). It is possible to confirm the completion of the transaction by invoking the REST endpoint server/commits at both servers server2 and server3.
curl -s http://localhost:8180/server/commits
curl -s http://localhost:8280/server/commits
The response of server/commits is a tuple composed of the host’s info and the number of commits.
For example, the output could be ["host: mydev.narayana.io/192.168.0.1, jboss node name: server2","3"], which says that the hostname is mydev.narayana.io, the jboss node name is server2, and the number of commits is 3.
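The intermittent failure is simulated by an XAResource whose commit initially fails with a transient error. The sketch below illustrates the idea; the quickstart's actual org.jboss.as.quickstarts.ejb.mock.MockXAResource (visible in the server logs further down) may be implemented differently:

```java
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Sketch of an XAResource that votes to commit during prepare but fails the first
// commit attempt with XA_RETRY, leaving the transaction to the recovery manager,
// which later retries and completes the commit.
public class FlakyXAResource implements XAResource {

    private static volatile boolean failNextCommit = true;

    @Override
    public int prepare(Xid xid) {
        return XAResource.XA_OK; // vote "yes" in phase one of 2PC
    }

    @Override
    public void commit(Xid xid, boolean onePhase) throws XAException {
        if (failNextCommit) {
            failNextCommit = false; // only the first attempt fails
            throw new XAException(XAException.XA_RETRY); // transient failure
        }
        // a real resource would make its work durable here
    }

    @Override
    public void rollback(Xid xid) { }

    @Override
    public void start(Xid xid, int flags) { }

    @Override
    public void end(Xid xid, int flags) { }

    @Override
    public void forget(Xid xid) { }

    @Override
    public Xid[] recover(int flag) {
        return new Xid[0]; // a real resource would return its in-doubt branches here
    }

    @Override
    public boolean isSameRM(XAResource other) {
        return other == this;
    }

    @Override
    public int getTransactionTimeout() {
        return 0;
    }

    @Override
    public boolean setTransactionTimeout(int seconds) {
        return true;
    }
}
```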
The transaction recovery manager runs periodically (by default, every 2 minutes) on all servers. Nevertheless, as the transaction is initiated on server1, the recovery manager on this server is responsible for initiating the recovery process.
Note
|
The recovery process can also be started manually.
|
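As an illustration of a manual recovery trigger, code running in the same JVM as the transaction manager (for example inside server1) could ask Narayana for an immediate recovery pass. This is a sketch of the Narayana API, not necessarily the mechanism intended by the note above:

```java
import com.arjuna.ats.arjuna.recovery.RecoveryManager;

// Illustrative only: trigger a synchronous recovery scan from code running in the
// same JVM as the transaction manager, instead of waiting for the next periodic cycle.
public class TriggerRecoveryScan {

    public static void runScan() {
        // RecoveryManager.manager() returns the recovery manager singleton;
        // scan() runs one full pass of all registered recovery modules.
        RecoveryManager.manager().scan();
    }
}
```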
-
Before invoking the remote-outbound-fail-stateless endpoint, double-check the number of commits on server2 and server3 by invoking the server/commits endpoints.
curl http://localhost:8180/server/commits; echo
# output example:
# ["host: mydev.narayana.io/192.168.0.1, jboss node name: server2","1"]
curl http://localhost:8280/server/commits; echo
# output example:
# ["host: mydev.narayana.io/192.168.0.1, jboss node name: server3","2"]
-
Invoke the REST endpoint
client/remote-outbound-fail-stateless
curl http://localhost:8080/client/remote-outbound-fail-stateless | jq .
The JSON output from the previous command reports the name of the server the request was sent to.
-
At the server reported by the previous command, verify the number of commits by invoking the server/commits endpoint.
-
Check the log of server1 for the following warning message:
ARJUNA016036: commit on < formatId=131077, gtrid_length=35, bqual_length=36, tx_uid=..., node_name=server1, branch_uid=..., subordinatenodename=null, eis_name=unknown eis name > (Subordinate XAResource at remote+http://localhost:8180) failed with exception $XAException.XA_RETRY: javax.transaction.xa.XAException: WFTXN0029: The peer threw an XA exception
This message means that the transaction manager was not able to commit the transaction because an error occurred while committing the transaction on the remote server. The XAException.XA_RETRY exception, meaning an intermittent failure, was reported in the logs.
-
The logs on server2 or server3 contain a warning about the XAResource failure as well.
ARJUNA016036: commit on < formatId=131077, gtrid_length=35, bqual_length=43, tx_uid=..., node_name=server1, branch_uid=..., subordinatenodename=server2, eis_name=unknown eis name > (org.jboss.as.quickstarts.ejb.mock.MockXAResource@731ae22) failed with exception $XAException.XAER_RMFAIL: javax.transaction.xa.XAException
-
Wait for the recovery process at server1 to recover the unfinished transaction (or force a recovery cycle manually).
-
The number of commits on the targeted server should be incremented by one.
When you are finished testing the quickstart, follow these steps to undeploy the archive.
-
Make sure the WildFly servers are started.
-
Open a terminal and navigate to the root directory of this quickstart.
-
Type this command to undeploy the archive:
$ mvn wildfly:undeploy
Repeat the last step for server2 and server3:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
mvn wildfly:undeploy -Dwildfly.port=10090;
mvn wildfly:undeploy -Dwildfly.port=10190
This quickstart is not production grade. The server logs include the following warnings during the startup. It is safe to ignore these warnings.
WFLYDM0111: Keystore standalone/configuration/application.keystore not found, it will be auto generated on first use with a self signed certificate for host localhost
WFLYELY01084: KeyStore .../standalone/configuration/application.keystore not found, it will be auto generated on first use with a self-signed certificate for host localhost
WFLYSRV0018: Deployment "deployment.server.war" is using a private module ("org.jboss.jts") which may be changed or removed in future versions without notice.
Instead of using a standard WildFly server distribution, the three WildFly servers needed to deploy and run the quickstart can alternatively be provisioned by activating the Maven profile named provisioned-server when building the quickstart:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client;
mvn clean package -Pprovisioned-server \
-DremoteServerUsername="quickstartUser" -DremoteServerPassword="quickstartPwd1!" \
-DpostgresqlUsername="test" -DpostgresqlPassword="test"
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
mvn clean package -Pprovisioned-server \
-Dwildfly.provisioning.dir=server2 -Djboss-as.home=target/server2 \
-DpostgresqlUsername="test" -DpostgresqlPassword="test";
mvn package -Pprovisioned-server \
-Dwildfly.provisioning.dir=server3 -Djboss-as.home=target/server3 \
-DpostgresqlUsername="test" -DpostgresqlPassword="test"
The provisioned WildFly servers, with the quickstart deployed, can then be found in the target/server directory of the client project (and in target/server2 and target/server3 for the server project). Their usage is similar to a standard server distribution, with the simplification that there is never the need to specify the server configuration to be started.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the pom.xml files of the quickstart.
The quickstart user should be added before running the provisioned server:
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
./target/server2/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!';
./target/server3/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!'
Note
|
For Windows, use the add-user.bat script. |
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with a provisioned server.
Follow these steps to run the integration tests.
-
As this quickstart performs transactional work against a database, a new database must be created. For the purpose of this quickstart, a simple PostgreSQL container will be used:
podman run -p 5432:5432 --rm -ePOSTGRES_DB=test -ePOSTGRES_USER=test -ePOSTGRES_PASSWORD=test postgres:9.4 -c max-prepared-transactions=110 -c log-statement=all
-
Make sure the servers are provisioned by running the commands reported in Building and running the quickstart application with provisioned WildFly server
-
Add the quickstart user to the provisioned server2 and server3 by running the commands reported in Building and running the quickstart application with provisioned WildFly server.
-
Start the WildFly provisioned servers in three distinct terminals, this time using the WildFly Maven Plugin, which is recommended for testing due to simpler automation.
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client;
mvn wildfly:start -Djboss-as.home=target/server \
  -Dwildfly.javaOpts="-Djboss.tx.node.id=server1 -Djboss.node.name=server1"
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
mvn wildfly:start -Djboss-as.home=target/server2 \
  -Dwildfly.port=10090 \
  -Dwildfly.serverConfig=standalone-ha.xml \
  -Dwildfly.javaOpts="-Djboss.socket.binding.port-offset=100 -Djboss.tx.node.id=server2 -Djboss.node.name=server2"
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
mvn wildfly:start -Djboss-as.home=target/server3 \
  -Dwildfly.port=10190 \
  -Dwildfly.serverConfig=standalone-ha.xml \
  -Dwildfly.javaOpts="-Djboss.socket.binding.port-offset=200 -Djboss.tx.node.id=server3 -Djboss.node.name=server3"
-
Type the following command to run the verify goal with the integration-testing profile activated, specifying the quickstart’s URL using the server.host system property.
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client; mvn verify -Pintegration-testing
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server; mvn verify -Pintegration-testing -Dserver.host="http://localhost:8180/server"
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server; mvn verify -Pintegration-testing -Dserver.host="http://localhost:8280/server"
-
To shut down the WildFly provisioned servers using the WildFly Maven Plugin:
mvn wildfly:shutdown
mvn wildfly:shutdown -Dwildfly.port=10090
mvn wildfly:shutdown -Dwildfly.port=10190
The ephemeral nature of OpenShift does not work smoothly with WildFly’s ability to handle transactions. In fact, WildFly’s transaction management saves logs to keep a record of the transactions' history in case of extreme scenarios, like crashes or network issues. Moreover, EJB remoting requires a stable remote endpoint to guarantee:
-
The transaction affinity of stateful beans, and
-
The recovery of transactions.
To fulfil the aforementioned requirements, applications that require ACID transactions must be deployed to WildFly using the WildFly Operator, which can employ OpenShift’s StatefulSet. Failing to do so might result in non-ACID transactions.
To install the WildFly Operator, follow the official documentation (the instructions are also reported here for convenience):
cd /tmp
git clone https://github.com/wildfly/wildfly-operator.git
cd wildfly-operator
oc adm policy add-cluster-role-to-user cluster-admin developer
make install
make deploy
To verify that the WildFly Operator is running, execute the following command:
oc get po -n $(oc project -q)
NAME READY STATUS RESTARTS AGE
wildfly-operator-5d4b7cc868-zfxcv 1/1 Running 1 22h
This quickstart requires a PostgreSQL database to run correctly. In the scope of this quickstart, a PostgreSQL database will be deployed on the OpenShift instance using the Helm chart provided by bitnami:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgresql bitnami/postgresql -f charts/postgresql.yaml --wait --timeout="5m"
To build the client and the server applications, this quickstart employs WildFly’s Helm charts. For more information about WildFly’s Helm chart, please refer to the official documentation.
helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
helm install client -f charts/client.yaml wildfly/wildfly
helm install server -f charts/server.yaml wildfly/wildfly
Wait for the builds to finish. Their status can be verified by executing the oc get pod
command.
To deploy the client and the server applications, this quickstart uses the WildFlyServer custom resource, through which the WildFly Operator creates the WildFly pods and deploys the applications.
Note
|
Make sure that view permissions are granted to the default system account.
The KUBE_PING protocol, which is used for forming the HA WildFly cluster
on OpenShift, requires view permissions to read the labels of the pods:
oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
|
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call;
oc create -f client/client-cr.yaml;
oc create -f server/server-cr.yaml
If the above commands are successful, the oc get pod command shows all the pods required for the quickstart, i.e. the client pod and the two server pods (and the PostgreSQL database).
NAME READY STATUS RESTARTS AGE
client-0 1/1 Running 0 29m
postgresql-f9f475f87-l944r 1/1 Running 1 22h
server-0 1/1 Running 0 11m
server-1 1/1 Running 0 11m
The WildFly Operator creates routes that make the client and the server applications accessible outside the OpenShift environment. The oc get route command shows the addresses of the HTTP endpoints. An example of the output is:
oc get route
NAME HOST/PORT PATH SERVICES PORT
client-route client-route-ejb-txn-remote-call-client-artifacts.apps-crc.testing client-loadbalancer http
server-route server-route-ejb-txn-remote-call-client-artifacts.apps-crc.testing server-loadbalancer http
With the following commands, it is possible to verify some of the functionality of this quickstart:
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-notx-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/direct-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-notx-stateful | jq .
For other HTTP endpoints, refer to the table above.
If you want to observe the recovery process, you can follow these shell commands.
# To check failure resolution
# verify the number of commits that come from the first and second node of the `server` deployments.
# Two calls are needed, as each reports the commit count of different node.
# Remember the reported number of commits to be compared with the results after crash later.
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
# Run the remote call that causes the JVM of the server to crash.
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-fail-stateless
# The platform restarts the crashed server.
# The following loop waits, printing the number of commits that happened at the servers.
while true; do
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
I=$((I+1))
echo " <<< Round: $I >>>"
sleep 2
done
To delete the client and the server applications, the WildFlyServer definitions need to be deleted. This can be achieved by running:
oc delete WildFlyServer client;
oc delete WildFlyServer server
The client and the server applications will be stopped, and the related pods will be removed.
To remove the Helm charts installed previously:
helm uninstall client;
helm uninstall server;
helm uninstall postgresql
Finally, to undeploy and uninstall the WildFly Operator:
cd /tmp/wildfly-operator;
make undeploy;
make uninstall
The above commands completely clean the OpenShift namespace.