The ejb-timer quickstart demonstrates how to use the Jakarta Enterprise Bean timer service in WildFly Application Server. This example creates a timer service that uses the @Schedule and @Timeout annotations.
The following Jakarta Enterprise Bean Timer services are demonstrated:
- @Schedule: Marks a method to be executed according to the calendar schedule specified in the attributes of the annotation. This example schedules a message to be printed to the server console every 6 seconds.
- @Timeout: Marks a method to be executed when a programmatic timer goes off. This example sets the timer to go off every 3 seconds, at which point the method prints a message to the server console.
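For reference, here is a minimal sketch of what these two beans can look like. The class names, intervals, and persistence behavior are taken from this document; the package name, method names, and info strings are hypothetical, so consult the quickstart's source under src/main/java for the actual implementation.

// Hypothetical sketch, not the quickstart's exact source. Both classes are
// shown in one listing for brevity; a real project keeps them in separate files.
package org.jboss.as.quickstarts.ejbtimer; // hypothetical package name

import java.time.Instant;

import jakarta.annotation.PostConstruct;
import jakarta.annotation.Resource;
import jakarta.ejb.Schedule;
import jakarta.ejb.Singleton;
import jakarta.ejb.Startup;
import jakarta.ejb.Timeout;
import jakarta.ejb.Timer;
import jakarta.ejb.TimerConfig;
import jakarta.ejb.TimerService;

// Calendar-based timer: @Schedule timers are persistent by default.
// second="*/6" fires on every second divisible by 6.
@Singleton
@Startup
class ScheduleExample {

    @Schedule(hour = "*", minute = "*", second = "*/6", info = "schedule-example")
    public void scheduledTimeout(Timer timer) {
        // The identifier printed here is an illustrative stand-in for the one
        // shown in the log output later in this document.
        System.out.println("Timeout received for " + getClass().getSimpleName()
                + "[" + timer.hashCode() + "] at " + Instant.now());
    }
}

// Programmatic timer: created via the injected TimerService. Passing 'false'
// to TimerConfig makes it non-persistent (transient).
@Singleton
@Startup
class TimeoutExample {

    @Resource
    private TimerService timerService;

    @PostConstruct
    void createTimer() {
        // Fire after 3 seconds, then every 3 seconds.
        timerService.createIntervalTimer(3_000, 3_000, new TimerConfig("timeout-example", false));
    }

    @Timeout
    public void programmaticTimeout(Timer timer) {
        System.out.println("Timeout received for " + getClass().getSimpleName()
                + "[" + timer.hashCode() + "] at " + Instant.now());
    }
}

The detail that matters for the rest of this quickstart is the persistence difference: @Schedule timers are persistent unless declared otherwise, while the programmatic timer here is explicitly created as non-persistent.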
The application this project produces is designed to be run on WildFly Application Server 35 or later.
All you need to build this project is Java SE 17.0 or later, and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.
In the following instructions, replace WILDFLY_HOME
with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.
When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
-
Open a terminal and navigate to the root of the WildFly directory.
-
Start the WildFly server with the default profile by typing the following command.
$ WILDFLY_HOME/bin/standalone.sh
Note: For Windows, use the WILDFLY_HOME\bin\standalone.bat script.
-
Make sure the WildFly server is started.
-
Open a terminal and navigate to the root directory of this quickstart.
-
Type the following command to build the quickstart.
$ mvn clean package
-
Type the following command to deploy the quickstart.
$ mvn wildfly:deploy
This deploys the ejb-timer/target/ejb-timer.war
to the running instance of the server.
You should see a message in the server log indicating that the archive deployed successfully.
This application only prints messages to stdout. Each timeout callback logs the class name of the @Singleton bean that created the timer, an identifier of the timer, and the timestamp of the callback. In our example application, the ScheduleExample bean creates a persistent timer, while the TimeoutExample bean creates a non-persistent (that is, transient) timer. To see it working, check the server log. You should see output similar to the following:
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:24.896811Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:27.002334Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:30.004340Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:30.014526Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:33.001997Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:36.001444Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:36.004266Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:39.001746Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:42.002048Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:42.010535Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:45.000920Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:48.001840Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:04:48.010532Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:51.002591Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-331711014] at 2022-05-04T21:04:54.001734Z
Existing threads in the thread pool handle the invocations. The threads are rotated, and the name of the thread that handles each invocation is printed in parentheses, for example (EJB default - 1).
To demonstrate the behavioral difference between persistent and non-persistent timers, stop the server with CTRL-C and restart it. Upon restart, you will see similar periodic timeout events, but while the persistent timer's identifier remains the same, because persistent timers are restored upon restart, the non-persistent timer now has a different identifier, because transient timers are lost when the server shuts down and are recreated on startup. (A programmatic way to check this is sketched after the log output below.)
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:36.013024Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:39.001383Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:42.002232Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:42.011380Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:45.001951Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:48.002369Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:48.008104Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:51.002364Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:54.002230Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:18:54.009333Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:18:57.001874Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:19:00.002287Z
INFO [stdout] (EJB default - 1) Timeout received for ScheduleExample[-2138907176] at 2022-05-04T21:19:00.010617Z
INFO [stdout] (EJB default - 1) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:19:03.002128Z
INFO [stdout] (EJB default - 2) Timeout received for TimeoutExample[-1065503193] at 2022-05-04T21:19:06.002358Z
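Besides reading the log, you can observe the same difference from code by enumerating the module's timers via TimerService.getAllTimers() and printing each timer's persistence flag. The following is a hypothetical diagnostic bean, not part of the quickstart:

import jakarta.annotation.Resource;
import jakarta.ejb.Singleton;
import jakarta.ejb.Timer;
import jakarta.ejb.TimerService;

// Hypothetical helper bean, not part of the quickstart.
@Singleton
public class TimerInspector {

    @Resource
    private TimerService timerService;

    public void dumpTimers() {
        // getAllTimers() returns every active timer in the module. After a
        // restart, persistent timers are restored with their original state,
        // while non-persistent timers only exist if they were recreated.
        for (Timer timer : timerService.getAllTimers()) {
            System.out.println(timer.getInfo() + " persistent=" + timer.isPersistent()
                    + " next=" + timer.getNextTimeout());
        }
    }
}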
This quickstart includes integration tests, which are located under the src/test/
directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
Follow these steps to run the integration tests.
-
Make sure the WildFly server is started.
-
Make sure the quickstart is deployed.
-
Type the following command to run the verify goal with the integration-testing profile activated.
$ mvn verify -Pintegration-testing
When you are finished testing the quickstart, follow these steps to undeploy the archive.
-
Make sure the WildFly server is started.
-
Open a terminal and navigate to the root directory of this quickstart.
-
Type this command to undeploy the archive:
$ mvn wildfly:undeploy
Instead of using a standard WildFly server distribution, you can alternatively provision a WildFly server to deploy and run the quickstart. This functionality is provided by the WildFly Maven Plugin, and you can find its configuration in the quickstart pom.xml:
<profile>
    <id>provisioned-server</id>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
When built, the provisioned WildFly server can be found in the target/server directory. Its usage is similar to a standard server distribution, with the simplification that you never need to specify the server configuration to be started.
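For example, because the provisioned server keeps the standard layout, it can also be started directly from the target/server directory mentioned above, just like a standard distribution (shown here for illustration; the steps below use the WildFly Maven Plugin instead):
$ ./target/server/bin/standalone.sh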
Follow these steps to run the quickstart using the provisioned server.
-
Make sure the server is provisioned.
$ mvn clean package
-
Start the provisioned WildFly server, using the WildFly Maven Plugin start goal.
$ mvn wildfly:start
-
Type the following command to run the integration tests.
$ mvn verify -Pintegration-testing
-
Shut down the WildFly provisioned server.
$ mvn wildfly:shutdown
On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server and to deploy and run the quickstart in an OpenShift environment.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you can find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                        <context>cloud</context>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
Note that, unlike the provisioned-server profile, this profile uses the cloud context, which enables a configuration tuned for an OpenShift environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.
- You must be logged in to OpenShift and have an oc client to connect to OpenShift.
- Helm must be installed to deploy the backend on OpenShift.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
Log in to your OpenShift instance using the oc login
command.
The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following command:
$ helm install ejb-timer -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s
NAME: ejb-timer
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
oc get deployment ejb-timer
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: ejb-timer
deploy:
  replicas: 1
This will create a new deployment on OpenShift and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme wildfly/wildfly
Get the URL of the route to the deployment.
$ oc get route ejb-timer -o jsonpath="{.spec.host}"
Access the application in your web browser using the displayed URL.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.
Run the integration tests using the following command to run the verify
goal with the integration-testing
profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route ejb-timer --template='{{ .spec.host }}')
Note: The tests use SSL to connect to the quickstart running on OpenShift, so the certificates must be trusted by the machine from which the tests are run.
For Kubernetes, the build with Apache Maven uses an openshift Maven profile to provision a WildFly server suitable for running on Kubernetes.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you can find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                        <context>cloud</context>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
Note that, unlike the provisioned-server profile, this profile uses the cloud context, which enables a configuration tuned for a Kubernetes environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.
In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.
minikube start --memory='4gb'
The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.
Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.
minikube addons enable registry
In order to be able to push images to the registry, we need to make it accessible from outside Kubernetes. How we do this depends on your operating system. All of the examples below expose it at localhost:5000.
# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"
# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &
# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"
- Helm must be installed to deploy the backend on Kubernetes.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following commands:
mvn -Popenshift package wildfly:image
This will use the openshift Maven profile we saw earlier to build the application and create a Docker image containing the WildFly server with the application deployed. The name of the image will be ejb-timer.
Next we need to tag the image and make it available to Kubernetes. You can push it to a registry like quay.io. In this case we tag it as localhost:5000/ejb-timer:latest and push it to the internal registry in our Kubernetes instance:
# Tag the image
docker tag ejb-timer localhost:5000/ejb-timer:latest
# Push the image to the registry
docker push localhost:5000/ejb-timer:latest
In the call to helm install below, which deploys our application to Kubernetes, we pass in some extra arguments to tweak the Helm build:
- --set build.enabled=false - This turns off the s2i build for the Helm chart since Kubernetes, unlike OpenShift, does not have s2i. Instead, we are providing the image to use.
- --set deploy.route.enabled=false - This disables the route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift-specific concept and thus not available on Kubernetes.
- --set image.name="localhost:5000/ejb-timer" - This tells the Helm chart to use the image we built, tagged, and pushed to Kubernetes' internal registry above.
$ helm install ejb-timer -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/ejb-timer"
NAME: ejb-timer
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
kubectl get deployment ejb-timer
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: ejb-timer
deploy:
  replicas: 1
This will create a new deployment on Kubernetes and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme wildfly/wildfly
To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the ejb-timer service created for us by the Helm chart. This service runs on port 8080, and we set up the port-forward to also run on port 8080:
kubectl port-forward service/ejb-timer 8080:8080
The server can now be accessed via http://localhost:8080
from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.
Run the integration tests using the following command to run the verify
goal with the integration-testing
profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080
To demonstrate distributed TimerService behavior, a cluster of at least two application server instances must be started. Begin by making a copy of the entire WildFly directory to be used as the second cluster member. Note that the example can also be run on a single node, but without observing the singleton properties.
The default configuration of the HA profiles is pre-configured for fully distributed persistent timers, as well as passivation support for non-persistent timers.
Start the two WildFly servers with the same HA profile using the following commands. Note that a socket binding port offset and a unique node name must be passed to the second server if the servers are binding to the same host.
$ WILDFLY_HOME_1/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node1
$ WILDFLY_HOME_2/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=100
Note: For Windows, use the WILDFLY_HOME_1\bin\standalone.bat and WILDFLY_HOME_2\bin\standalone.bat scripts.
This example is not limited to two servers. Additional servers can be started by specifying a unique port offset for each one.
Next, use the following commands to deploy the already built demo application archive to each server.
Note that since the default management socket binding port is 9990 and the second server's ports are offset by 100, the sum, 10090, must be passed as an argument to the deploy Maven goal.
mvn wildfly:deploy
mvn wildfly:deploy -Dwildfly.port=10090
Once deployed, you should begin to see log messages for our timer events. However, while timeout events for the non-persistent timer created by the TimeoutExample bean are triggered on both nodes, timeout events for the persistent timer created by the ScheduleExample bean are only triggered on one node.
node1:
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:36.003154Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:39.003098Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:42.002884Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:45.003209Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:48.001284Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:51.001656Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:54.001396Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:57:57.001848Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:58:00.001673Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[390846799] at 2022-05-05T20:58:03.001794Z
node2:
INFO [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:36.003800Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:36.003799Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:39.003279Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:42.003483Z
INFO [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:42.003699Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:45.003339Z
INFO [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:48.001545Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:48.001544Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:51.001657Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:54.001710Z
INFO [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:57:54.001710Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:57:57.001717Z
INFO [stdout] (TimerScheduler - 1) Timeout received for ScheduleExample[-392733837] at 2022-05-05T20:58:00.001091Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:58:00.001547Z
INFO [stdout] (TimerScheduler - 1) Timeout received for TimeoutExample[1048239741] at 2022-05-05T20:58:03.001514Z
If you then shut down the node on which the ScheduleExample timeouts appear (in our case, node2), the other node (in our case, node1) will promptly begin receiving timeouts for that same persistent timer (as indicated by the same identifier).
After restarting the node that was previously shut down (in our case, node2), using the same command as listed above, you should observe that timeouts for the ScheduleExample timer resume on the original node (in our case, node2), and the other node (in our case, node1) no longer receives those timeout events. In fact, if you carefully collate the timestamps for the ScheduleExample bean across the server logs, you should find that no events were skipped and no duplicate events were received.