ejb-txn-remote-call: Demonstrates remote EJB calls and transaction propagation

The ejb-txn-remote-call quickstart demonstrates remote transactional EJB calls between two WildFly application servers.

What is it?

The ejb-txn-remote-call quickstart demonstrates remote transactional EJB calls between two WildFly application servers. The remote side forms an HA cluster.

Description

This quickstart demonstrates how EJB remote calls propagate a JTA transaction across WildFly application servers. It also demonstrates transaction recovery, which runs on both servers when a failure occurs.

This quickstart contains two Maven projects. The first project represents the sender side and is intended to be deployed on the first WildFly server (server1). The second project represents the receiver side and is intended to be deployed on the other two WildFly servers (server2 and server3). The two projects must not be deployed to the same server.

Table 1. Maven projects in this quickstart
Project Description

client

The application deployed to the first WildFly server. Users interact with this application through REST endpoints, which start remote EJB calls toward the server application deployed on the other two WildFly servers. In more detail, the transaction initiated on the client side enlists two participants: a database, and the remote EJB server. The transaction manager then uses the two-phase commit protocol to commit the transaction across the two servers. Moreover, this quickstart shows how transactional failures are dealt with.

server

The application deployed to the second and third WildFly servers. This application receives the remote EJB calls from the client application and, depending on the scenario, processes the propagated transaction. In more detail, the transaction initiated on the server side enlists two participants: a database, and a mock XAResource.

Running the Quickstart

This quickstart demonstrates its functionality on bare metal, with the WildFly Maven plugin, and on OpenShift.

System Requirements

The application this project produces is designed to be run on WildFly Application Server 32 or later.

All you need to build this project is Java 11.0 (Java SDK 11) or later and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.

Use of the WILDFLY_HOME_1, WILDFLY_HOME_2, and QUICKSTART_HOME Variables

This quickstart requires that you clone your WILDFLY_HOME installation directory and run three servers. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.

In the following instructions, replace WILDFLY_HOME_1 with the path to your first WildFly server and replace WILDFLY_HOME_2 with the path to your second cloned WildFly server.

When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
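For convenience, you can export these variables in your shell before following the instructions below. The paths in this sketch are placeholders only; adjust them to your environment:

export WILDFLY_HOME_1=/path/to/server1
export WILDFLY_HOME_2=/path/to/server2
export QUICKSTART_HOME=/path/to/quickstarts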

The Goal

The EJB remote call propagates the transaction from the client application to the server application. The remote call hits one of the two servers where the server application is deployed.

Running in a bare metal environment

First of all, three WildFly servers need to be configured. Then the client application is deployed to the first server (server1), while the server application is deployed to the other two WildFly servers (server2 and server3, which are configured as a cluster).

Setup WildFly servers

The easiest way to start multiple instances of WildFly on a local computer is to copy the WildFly installation directory to three separate directories.

The installation directories are named:

  • WILDFLY_HOME_1 for server1

  • WILDFLY_HOME_2 for server2

  • WILDFLY_HOME_3 for server3

Given that the WildFly installation directory is identified by $WILDFLY_HOME:

cp -r $WILDFLY_HOME server1; \
WILDFLY_HOME_1="$PWD/server1"
cp -r $WILDFLY_HOME server2; \
WILDFLY_HOME_2="$PWD/server2"
cp -r $WILDFLY_HOME server3; \
WILDFLY_HOME_3="$PWD/server3"

Creating a user for server2 and server3

To successfully process EJB remote calls from server1 to either server2 or server3, a user that authenticates the EJB remote calls must be created on the receiving servers.

Run the following procedure in the directories WILDFLY_HOME_2 and WILDFLY_HOME_3 to create the user for server2 and server3.

Add the Authorized Application User

This quickstart uses secured application interfaces and requires that you create the following application user to access the running application.

UserName: quickstartUser
Realm: ApplicationRealm
Password: quickstartPwd1!
Roles: (none)

To add the application user, open a terminal and type the following command:

$ WILDFLY_HOME/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!' 
Note
For Windows, use the WILDFLY_HOME\bin\add-user.bat script.
Note

For the add-user.sh (or .bat) command, you can add the parameter -ds. When you include this parameter, after the user is added, the system outputs a secret value that you can use to set up the remote outbound connection on server1.

The output of the command when the -ds parameter is used:

To represent the user add the following to the server-identities definition <secret value="cXVpY2tzdGFydFB3ZDEh" />
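For example, the user can be created and the secret value printed in a single step (run from the WILDFLY_HOME_2 and WILDFLY_HOME_3 directories, as above):

$ WILDFLY_HOME/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!' -ds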

Configure datasources

As this quickstart performs transactional work against a database, a new database needs to be created. For the purpose of this quickstart, a simple PostgreSQL container will be used:

podman run -p 5432:5432 --rm  -ePOSTGRES_DB=test -ePOSTGRES_USER=test -ePOSTGRES_PASSWORD=test postgres:9.4 -c max-prepared-transactions=110 -c log-statement=all
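Optionally, you can check that the database accepts connections before configuring the servers. A minimal sketch, assuming the psql client is installed on your machine:

PGPASSWORD=test psql -h localhost -p 5432 -U test -d test -c 'SELECT 1;'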

The WildFly servers need to be configured to be able to connect to the database. First of all, a JDBC driver needs to be installed as a JBoss module.

The following command (along with packaging the client and server applications) downloads the PostgreSQL driver automatically through Maven:

cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call;
mvn clean package

Then, the PostgreSQL driver needs to be loaded as a JBoss module on all WildFly servers:

cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server,\
  module add --name=org.postgresql.jdbc \
  --resources=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/target/postgresql/postgresql.jar"
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh "embed-server,\
  module add --name=org.postgresql.jdbc \
  --resources=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/target/postgresql/postgresql.jar"
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh "embed-server,\
  module add --name=org.postgresql.jdbc \
  --resources=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/target/postgresql/postgresql.jar"

Moreover, the PostgreSQL JDBC driver needs to be registered in the datasources subsystem of all WildFly servers. For server1, the configuration file standalone.xml will be used. For server2 and server3, the configuration file standalone-ha.xml will be used.

cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server --server-config=standalone.xml,\
  /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh "embed-server --server-config=standalone-ha.xml,\
  /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh "embed-server --server-config=standalone-ha.xml,\
  /subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql.jdbc,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)"

Finally, run the scripts that add the PostgreSQL datasource to the WildFly servers:

cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
  --file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/postgresql-datasource.cli \
  --properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/cli.local.properties
cd $WILDFLY_HOME_2; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
  --file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/postgresql-datasource.cli \
  --properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/cli.local.properties
cd $WILDFLY_HOME_3; \
./bin/jboss-cli.sh -DpostgresqlUsername="test" -DpostgresqlPassword="test" \
  --file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/postgresql-datasource.cli \
  --properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server/scripts/cli.local.properties

Configuring EJB remoting on server1

EJB remote calls from server1 to either server2 or server3 need to be authenticated. To achieve this, the script ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/remoting-configuration.cli will be executed on server1.

Note

The remoting-configuration.cli script is configured with properties from cli.local.properties and runs against standalone.xml.

cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh -DremoteServerUsername="quickstartUser" -DremoteServerPassword="quickstartPwd1!" \
  --file=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/remoting-configuration.cli \
  --properties=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/scripts/cli.local.properties
Note
For Windows, use the bin\jboss-cli.bat script.

Running remoting-configuration.cli results in the creation of:

  • A remote outbound socket that points to the port on server2/server3 where EJB remoting endpoints can be reached

  • A remote outbound connection that can be referenced in the war deployment with jboss-ejb-client.xml descriptor (see ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/src/main/webapp/WEB-INF/jboss-ejb-client.xml).

  • An authentication context auth_context that is used by the newly created remote outbound connection remote-ejb-connection; the authentication context uses the same username and password created for server2 and server3
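You can optionally verify the new remoting configuration with the management CLI; a minimal check, reusing the embedded-server approach from the previous steps and assuming the connection is named remote-ejb-connection as described above:

cd $WILDFLY_HOME_1; \
./bin/jboss-cli.sh "embed-server --server-config=standalone.xml,\
  /subsystem=remoting/remote-outbound-connection=remote-ejb-connection:read-resource"

The command should print the attributes of the remote-ejb-connection, including the referenced outbound socket binding and the auth_context authentication context.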

Start WildFly servers

At this point, the configuration of the WildFly servers is complete. server1 must be started with the standalone.xml configuration, while server2 and server3 must be started with the standalone-ha.xml configuration to create a cluster. As all WildFly servers will run in the same bare metal environment, a port offset will be applied to server2 and server3. Moreover, each server has to define a unique transaction node identifier and jboss node name.

Start each server in a separate terminal.

cd $WILDFLY_HOME_1; \
./bin/standalone.sh -c standalone.xml -Djboss.tx.node.id=server1 -Djboss.node.name=server1 -Dwildfly.config.url=${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client/configuration/custom-config.xml
cd $WILDFLY_HOME_2; \
./bin/standalone.sh -c standalone-ha.xml -Djboss.tx.node.id=server2 -Djboss.node.name=server2 -Djboss.socket.binding.port-offset=100
cd $WILDFLY_HOME_3; \
./bin/standalone.sh -c standalone-ha.xml -Djboss.tx.node.id=server3 -Djboss.node.name=server3 -Djboss.socket.binding.port-offset=200
Note
To enable the recovery of remote transaction failures, the configuration file custom-config.xml should be loaded into server1; this operation will authenticate server1 against server2/server3.
Note
For Windows, use the bin\standalone.bat script.

Deploying the Quickstart applications

  1. With all WildFly servers configured and running, the client and server applications can be deployed.

  2. The whole project can be built using the following commands:

    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/
    mvn clean package
  3. Then, the client application can be deployed using the following commands:

    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client
    mvn wildfly:deploy
  4. Lastly, the server application can be deployed using the following commands:

    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server
    mvn wildfly:deploy -Dwildfly.port=10090
    mvn wildfly:deploy -Dwildfly.port=10190

These commands use the WildFly Maven plugin to connect to the running WildFly instances and deploy the war archives to the servers.
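As an alternative to checking the server logs below, the deployments can also be listed through the management CLI of each running server; a sketch, assuming the management ports implied by the port offsets (9990, 10090, and 10190):

$WILDFLY_HOME_1/bin/jboss-cli.sh -c --command="deployment-info"
$WILDFLY_HOME_2/bin/jboss-cli.sh -c --controller=localhost:10090 --command="deployment-info"
$WILDFLY_HOME_3/bin/jboss-cli.sh -c --controller=localhost:10190 --command="deployment-info"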

Checkpoints

  1. If errors occur, verify that the WildFly servers are running and that they are configured properly

  2. Verify that all deployments are published to all three servers

    1. On server1 check the log to confirm that the client/target/client.war archive is deployed

      ...
      INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 76) WFLYUT0021: Registered web context: '/client' for server 'default-server'
      INFO  [org.jboss.as.server] (management-handler-thread - 2) WFLYSRV0010: Deployed "client.war" (runtime-name : "client.war")
    2. On server2 and server3, check the log to confirm that the server/target/server.war archive is deployed.

      ...
      INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 86) WFLYUT0021: Registered web context: '/server' for server 'default-server'
      INFO  [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "server.war" (runtime-name : "server.war")
  3. Verify that server2 and server3 formed an HA cluster.

    1. Check the server log of either server2 or server3, or both.

      [org.infinispan.CLUSTER] () ISPN000094: Received new cluster view for channel ejb: [server2|1] (2) [server2, server3]
      [org.infinispan.CLUSTER] () ISPN100000: Node server3 joined the cluster
      ...
      INFO  [org.infinispan.CLUSTER] () [Context=server.war/infinispan] ISPN100010: Finished rebalance with members [server2, server3], topology id 5

Examining the Quickstart

Once the WildFly servers are configured and started, and the quickstart artifacts are deployed, it is possible to invoke the endpoints of server1, which generate EJB remote invocations against the HA cluster formed by server2 and server3.

The following table defines the available endpoints and their expected behaviour.

Note

The endpoints return data in JSON format. You can use curl for the invocation and jq for formatting the results. For example:
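curl -s http://localhost:8080/client/remote-outbound-stateless | jq .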

Note

On Windows, curl and jq might not be available. If so, enter the endpoints directly in a browser of your choice. The behaviour and the returned JSON are the same as for the curl command.

The HTTP invocations return the hostnames of the contacted servers.

Table 2. HTTP endpoints of the test invocation
URL Behaviour Expectation

http://localhost:8080/client/remote-outbound-stateless

Two invocations under the transaction context started on server1 (client application). The EJB remote call is configured from the remote-outbound-connection. Both calls are directed to the same remote server instance (server application) due to transaction affinity.

The two returned hostnames must be the same.

http://localhost:8080/client/remote-outbound-notx-stateless

Several remote invocations to stateless EJB without a transaction context. The EJB remote call is configured from the remote-outbound-connection. The EJB client is expected to load balance the calls on various servers.

The list of the returned hostnames should contain occurrences of both server2 and server3.

http://localhost:8080/client/direct-stateless

Two invocations under the transaction context started on server1 (client application). The stateless bean is invoked on the remote side. The EJB remote call is configured from data in the client application source code. The remote invocation is run via the EJB remoting protocol.

The returned hostnames must be the same.

http://localhost:8080/client/direct-stateless-http

Two invocations under the transaction context started on server1 (client application). The stateless bean is invoked on the remote side. The EJB remote call is configured from data in the client application source code. The remote invocation is run, unlike the other calls of this quickstart, via EJB over HTTP.

The returned hostnames must be the same.

http://localhost:8080/client/remote-outbound-notx-stateful

Two invocations under the transaction context started on server1 (client application). The EJB remote call is configured from the remote-outbound-connection. Both calls are directed to the same stateful bean on the remote server because the stateful bean invocations are sticky, ensuring affinity to the same server instance.

The returned hostnames must be the same.

http://localhost:8080/client/remote-outbound-fail-stateless

An invocation under the transaction context started on server1 (client application). The call goes to one of the remote servers, where an error occurs during transaction processing. The failure is simulated at the time of the two-phase commit. This HTTP call finishes with success; only the server log shows some warnings. This is expected behaviour: an intermittent failure during the commit phase of the two-phase protocol obliges the transaction manager to finish the work eventually. The finalization of the work is done in the background (by the Narayana recovery manager, see details below), so the HTTP call can still report success to the client.

When the recovery manager finishes the work, all the transaction resources are committed.

Observing the recovery processing after client/remote-outbound-fail-stateless call

The EJB call to the endpoint client/remote-outbound-fail-stateless simulates the presence of an intermittent network error happening at the commit phase of the two-phase commit protocol (2PC).

The transaction recovery manager periodically tries to recover the unfinished work; only when this attempt succeeds is the transaction completed (which makes the update in the database visible). It is possible to confirm the completion of the transaction by invoking the REST endpoint server/commits on both server2 and server3.

curl -s http://localhost:8180/server/commits
curl -s http://localhost:8280/server/commits

The response of server/commits is a tuple composed of the host info and the number of commits. For example, the output could be ["host: mydev.narayana.io/192.168.0.1, jboss node name: server2","3"], which says that the hostname is mydev.narayana.io, the jboss node name is server2, and the number of commits is 3.

The transaction recovery manager runs periodically (by default, every 2 minutes) on all servers. Nevertheless, as the transaction is initiated on server1, the recovery manager on this server is responsible for initiating the recovery process.

Note

The recovery process can be started manually. Using telnet and connecting to localhost:4712 (i.e. the port where the recovery listener is listening), it is possible to send the SCAN command to force a recovery cycle:

telnet localhost 4712
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SCAN
DONE
Connection closed by foreign host.
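If telnet is not available, the same SCAN command can be sent with any tool that writes to a plain TCP socket; for example, a sketch using netcat (depending on your netcat variant, the DONE reply may not be printed before the connection closes):

echo "SCAN" | nc localhost 4712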
Steps to observe that the recovery processing was done
  1. Before invoking the remote-outbound-fail-stateless endpoint, double check the number of commits on server2 and server3 by invoking the server/commits endpoints.

    curl http://localhost:8180/server/commits; echo
    # output example:
    # ["host: mydev.narayana.io/192.168.0.1, jboss node name: server2","1"]
    curl http://localhost:8280/server/commits; echo
    # output example:
    # ["host: mydev.narayana.io/192.168.0.1, jboss node name: server3","2"]
  2. Invoke the REST endpoint client/remote-outbound-fail-stateless

    curl http://localhost:8080/client/remote-outbound-fail-stateless | jq .

    The JSON output from the previous command reports the name of the server the request was sent to.

  3. At the server reported by the previous command, verify the number of commits by invoking the server/commits endpoint.

  4. Check the log of server1 for the following warning message

    ARJUNA016036: commit on < formatId=131077, gtrid_length=35, bqual_length=36, tx_uid=..., node_name=server1, branch_uid=..., subordinatenodename=null, eis_name=unknown eis name > (Subordinate XAResource at remote+http://localhost:8180) failed with exception $XAException.XA_RETRY: javax.transaction.xa.XAException: WFTXN0029: The peer threw an XA exception

    This message means that the transaction manager was not able to commit the transaction as an error occurred while committing the transaction on the remote server. The XAException.XA_RETRY exception, meaning an intermittent failure, was reported in the logs.

  5. The logs on server2 or server3 contain a warning about the XAResource failure as well.

    ARJUNA016036: commit on < formatId=131077, gtrid_length=35, bqual_length=43, tx_uid=..., node_name=server1, branch_uid=..., subordinatenodename=server2, eis_name=unknown eis name > (org.jboss.as.quickstarts.ejb.mock.MockXAResource@731ae22) failed with exception $XAException.XAER_RMFAIL: javax.transaction.xa.XAException
  6. Wait for the recovery process at server1 to recover the unfinished transaction (or force a recovery cycle manually)

  7. The number of commits on the targeted server should be incremented by one.
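While waiting for the recovery cycle in step 6, you can poll the commits endpoint of the affected server periodically; a simple sketch for server2 (use port 8280 for server3):

watch -n 10 'curl -s http://localhost:8180/server/commits'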

Undeploy the Quickstart

When you are finished testing the quickstart, follow these steps to undeploy the archive.

  1. Make sure the WildFly servers are started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type this command to undeploy the archive:

    $ mvn wildfly:undeploy

Repeat the last step for server2 and server3:

cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
mvn wildfly:undeploy -Dwildfly.port=10090;
mvn wildfly:undeploy -Dwildfly.port=10190

Server Log: Expected Warnings and Errors

This quickstart is not production grade. The server logs include the following warnings during startup. It is safe to ignore them.

WFLYDM0111: Keystore standalone/configuration/application.keystore not found, it will be auto generated on first use with a self signed certificate for host localhost

WFLYELY01084: KeyStore .../standalone/configuration/application.keystore not found, it will be auto generated on first use with a self-signed certificate for host localhost

WFLYSRV0018: Deployment "deployment.server.war" is using a private module ("org.jboss.jts") which may be changed or removed in future versions without notice.

Building and running the quickstart application with provisioned WildFly server

Instead of using a standard WildFly server distribution, the three WildFly servers used to deploy and run the quickstart can alternatively be provisioned by activating the Maven profile named provisioned-server when building the quickstart:

cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client;
mvn clean package -Pprovisioned-server \
  -DremoteServerUsername="quickstartUser" -DremoteServerPassword="quickstartPwd1!" \
  -DpostgresqlUsername="test" -DpostgresqlPassword="test"
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
mvn clean package -Pprovisioned-server \
  -Dwildfly.provisioning.dir=server2 -Djboss-as.home=target/server2 \
  -DpostgresqlUsername="test" -DpostgresqlPassword="test";
mvn package -Pprovisioned-server \
  -Dwildfly.provisioning.dir=server3 -Djboss-as.home=target/server3 \
  -DpostgresqlUsername="test" -DpostgresqlPassword="test"

The provisioned WildFly servers, with the quickstart deployed, can then be found in the target directories (target/server for the client, target/server2 and target/server3 for the server application). Their usage is similar to a standard server distribution, with the simplification that there is no need to specify the server configuration to be started.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the pom.xml files of the quickstart.

The quickstart user should be added before running the provisioned servers:

cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
./target/server2/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!';
./target/server3/bin/add-user.sh -a -u 'quickstartUser' -p 'quickstartPwd1!'
Note

For Windows, use the WILDFLY_HOME\bin\add-user.bat script.
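The provisioned servers can then be started directly from their target directories, each one in a separate terminal; a sketch that mirrors the options passed to the WildFly Maven Plugin in the next section:

cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client; \
./target/server/bin/standalone.sh -Djboss.tx.node.id=server1 -Djboss.node.name=server1
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server; \
./target/server2/bin/standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=100 -Djboss.tx.node.id=server2 -Djboss.node.name=server2
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server; \
./target/server3/bin/standalone.sh -c standalone-ha.xml -Djboss.socket.binding.port-offset=200 -Djboss.tx.node.id=server3 -Djboss.node.name=server3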

Run the Integration Tests with a provisioned server

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run against the provisioned servers.

Follow these steps to run the integration tests.

  1. As this quickstart performs transactional work against a database, a new database needs to be created. For the purpose of this quickstart, a simple PostgreSQL container will be used:

    podman run -p 5432:5432 --rm  -ePOSTGRES_DB=test -ePOSTGRES_USER=test -ePOSTGRES_PASSWORD=test postgres:9.4 -c max-prepared-transactions=110 -c log-statement=all
  2. Make sure the servers are provisioned by running the commands reported in Building and running the quickstart application with provisioned WildFly server

  3. Add the quickstart user to the provisioned server2 and server3 by running the commands reported in Building and running the quickstart application with provisioned WildFly server

  4. Start the provisioned WildFly servers in three distinct terminals, this time using the WildFly Maven Plugin, which is recommended for testing because it is easier to automate.

    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client;
    mvn wildfly:start -Djboss-as.home=target/server \
      -Dwildfly.javaOpts="-Djboss.tx.node.id=server1 -Djboss.node.name=server1"
    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
    mvn wildfly:start -Djboss-as.home=target/server2 \
      -Dwildfly.port=10090 \
      -Dwildfly.serverConfig=standalone-ha.xml \
      -Dwildfly.javaOpts="-Djboss.socket.binding.port-offset=100 -Djboss.tx.node.id=server2 -Djboss.node.name=server2"
    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
    mvn wildfly:start -Djboss-as.home=target/server3 \
      -Dwildfly.port=10190 \
      -Dwildfly.serverConfig=standalone-ha.xml \
      -Dwildfly.javaOpts="-Djboss.socket.binding.port-offset=200 -Djboss.tx.node.id=server3 -Djboss.node.name=server3"
  5. Type the following commands to run the verify goal with the integration-testing profile activated, specifying the quickstart’s URL with the server.host system property.

    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/client;
    mvn verify -Pintegration-testing
    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
    mvn verify -Pintegration-testing -Dserver.host="http://localhost:8180/server"
    cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call/server;
    mvn verify -Pintegration-testing -Dserver.host="http://localhost:8280/server"
  6. To shut down the WildFly provisioned servers using the WildFly Maven Plugin:

    mvn wildfly:shutdown
    mvn wildfly:shutdown -Dwildfly.port=10090
    mvn wildfly:shutdown -Dwildfly.port=10190

Running on OpenShift

The ephemeral nature of OpenShift does not work smoothly with WildFly’s ability to handle transactions. In fact, WildFly’s transaction management saves logs to keep a record of transaction history in case of extreme scenarios, such as crashes or network issues. Moreover, EJB remoting requires a stable remote endpoint to guarantee:

  • The transaction affinity of stateful beans and

  • The recovery of transactions.

To fulfil the aforementioned requirements, applications that require ACID transactions must be deployed to WildFly using the WildFly Operator, which can employ OpenShift’s StatefulSet. Failing to do so might result in non-ACID transactions.

Prerequisites

To run this quickstart on OpenShift, you need access to an OpenShift cluster and a project (namespace) in which to deploy the applications; see the shared quickstart documentation on creating a project on OpenShift.

Install WildFly’s Operator

To install the WildFly Operator, follow the official documentation (the instructions are also reported here for convenience):

cd /tmp
git clone https://github.com/wildfly/wildfly-operator.git

cd wildfly-operator

oc adm policy add-cluster-role-to-user cluster-admin developer
make install
make deploy

To verify that the WildFly Operator is running, execute the following command:

oc get po -n $(oc project -q)

NAME                                READY   STATUS      RESTARTS   AGE
wildfly-operator-5d4b7cc868-zfxcv   1/1     Running     1          22h

Start a PostgreSQL database

This quickstart requires a PostgreSQL database to run correctly. In the scope of this quickstart, a PostgreSQL database will be deployed on the OpenShift instance using the Helm chart provided by Bitnami:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgresql bitnami/postgresql -f charts/postgresql.yaml --wait --timeout="5m"

Build the applications

To build the client and the server applications, this quickstart employs WildFly’s Helm charts. For more information about WildFly’s Helm chart, please refer to the official documentation.

helm repo add wildfly https://docs.wildfly.org/wildfly-charts/

helm install client -f charts/client.yaml wildfly/wildfly
helm install server -f charts/server.yaml wildfly/wildfly

Wait for the builds to finish. Their status can be verified by executing the oc get pod command.
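For example, you can watch the pods until the build pods reach the Completed status:

oc get pod -w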

Deploy the Quickstart

To deploy the client and the server applications, this quickstart uses the WildFlyServer custom resource, through which the WildFly Operator creates the WildFly pods and deploys the applications.

Note
Make sure that view permissions are granted to the default system account. The KUBE_PING protocol, which is used for forming the HA WildFly cluster on OpenShift, requires view permissions to read the labels of the pods:

oc policy add-role-to-user view system:serviceaccount:$(oc project -q):default -n $(oc project -q)
cd ${PATH_TO_QUICKSTART_DIR}/ejb-txn-remote-call;
oc create -f client/client-cr.yaml;
oc create -f server/server-cr.yaml

If the above commands are successful, the oc get pod command shows all the pods required for the quickstart, i.e. the client pod and two server pods (and the PostgreSQL database).

NAME                                READY   STATUS      RESTARTS   AGE
client-0                            1/1     Running     0          29m
postgresql-f9f475f87-l944r          1/1     Running     1          22h
server-0                            1/1     Running     0          11m
server-1                            1/1     Running     0          11m

Verify the Quickstarts

The WildFly Operator creates routes that make the client and the server applications accessible outside the OpenShift environment. The oc get route command shows the addresses of the HTTP endpoints. An example of the output is:

oc get route

NAME           HOST/PORT                                                            PATH   SERVICES              PORT
client-route   client-route-ejb-txn-remote-call-client-artifacts.apps-crc.testing          client-loadbalancer   http
server-route   server-route-ejb-txn-remote-call-client-artifacts.apps-crc.testing          server-loadbalancer   http

With the following commands, it is possible to verify some of the functionality of this quickstart:

curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-notx-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/direct-stateless | jq .
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-notx-stateful | jq .

For other HTTP endpoints, refer to the table above.

If you would like to observe the recovery process, you can follow these shell commands.

# To check the failure resolution, verify the number of commits reported
# by the first and second node of the `server` deployment.
# Two calls are needed, as each call reports the commit count of a different node.
# Remember the reported numbers of commits to compare them with the results after the crash later.
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits

# Run the remote call that causes the JVM of the server to crash.
curl -s $(oc get route client-route --template='{{ .spec.host }}')/client/remote-outbound-fail-stateless
# The platform restarts the crashed server automatically.
# The following loop waits while printing the number of commits reported by the servers.
while true; do
  curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
  curl -s $(oc get route server-route --template='{{ .spec.host }}')/server/commits
  I=$((I+1))
  echo " <<< Round: $I >>>"
  sleep 2
done

Running on OpenShift: Quickstart application removal

To delete the client and the server applications, the WildFlyServer definitions need to be deleted. This can be done by running:

oc delete WildFlyServer client;
oc delete WildFlyServer server

The client and the server applications will be stopped, and the corresponding pods will be removed.

To remove the Helm charts installed previously:

helm uninstall client;
helm uninstall server;
helm uninstall postgresql

Finally, to undeploy and uninstall the WildFly Operator:

cd /tmp/wildfly-operator;
make undeploy;
make uninstall

The above commands completely clean the OpenShift namespace.