Since version 12.2.1.4, Coherence has been able to expose an HTTP endpoint that can be used to scrape metrics. This would typically be used to expose metrics to something like Prometheus.
Note: The metrics endpoint is disabled by default in Coherence clusters, but it can be enabled and configured by setting the relevant fields in the Coherence CRD. If your Coherence version is earlier than CE 21.12.1, this example assumes that your application has included the coherence-metrics module as a dependency. See the Coherence product documentation for more details on enabling metrics in your application.
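For those pre-CE 21.12.1 versions, the coherence-metrics module is added as an ordinary dependency. As a sketch, a Maven dependency might look like the following (the com.oracle.coherence.ce group ID applies to Coherence CE; the version property is a placeholder that should match your main coherence dependency):

```xml
<dependency>
    <groupId>com.oracle.coherence.ce</groupId>
    <artifactId>coherence-metrics</artifactId>
    <!-- placeholder: use the same version as your coherence dependency -->
    <version>${coherence.version}</version>
</dependency>
```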
The example below shows how to enable and access Coherence metrics.
Once the metrics port has been exposed, for example via a load balancer or port-forward command, the metrics endpoint is available at http://host:port/metrics.
See the Using Coherence Metrics documentation for full details on the available metrics.
From version 3.4.1 of the Coherence Operator, the packaged Grafana dashboards no longer use the vendor: prefix when querying Prometheus metrics. This prefix was deprecated a number of releases ago, although legacy metric names remained the default in Coherence; they will be removed in Coherence releases after this Operator release.
If the Coherence cluster version you are using has not yet changed this default, you may see no metrics in the Grafana dashboards.
To change your cluster to not use legacy names, set the environment variable COHERENCE_METRICS_LEGACY_NAMES to false in your yaml:
```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: metrics-cluster
spec:
  env:
    - name: "COHERENCE_METRICS_LEGACY_NAMES"
      value: "false"
  coherence:
    ...
```
The same effect can be achieved by setting the system property coherence.metrics.legacy.names=false.
To deploy a Coherence
resource with metrics enabled and exposed on a port, the simplest yaml
would look like this:
```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: metrics-cluster
spec:
  coherence:
    metrics:
      enabled: true  # (1)
  ports:
    - name: metrics  # (2)
```
1. Setting the coherence.metrics.enabled field to true will enable metrics.
2. To expose metrics via a Service, the metrics port is added to the ports list. The metrics port is a special case where the port number is optional, so in this case metrics will bind to the default port 9612 (see Exposing Ports for details).
To expose metrics on a different port, an alternative port value can be set in the coherence.metrics section, for example:
```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: metrics-cluster
spec:
  coherence:
    metrics:
      enabled: true
      port: 8080  # (1)
  ports:
    - name: metrics
```
1. Metrics will now be exposed on port 8080.
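If Prometheus is configured by hand rather than via the Prometheus Operator, a static scrape job for the exposed port might look like the following sketch. The target address is purely illustrative; in Kubernetes you would normally use service discovery rather than static targets:

```yaml
scrape_configs:
  - job_name: 'coherence-metrics'
    metrics_path: /metrics
    static_configs:
      # Hypothetical Pod address on the alternative port 8080; in a real
      # cluster, use kubernetes_sd_configs or a ServiceMonitor instead.
      - targets: ['metrics-cluster-0.coherence-test:8080']
```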
After installing the basic metrics-cluster.yaml from the first example above, there will be a three-member Coherence cluster installed into Kubernetes.
For example, the cluster can be installed with kubectl:

```shell
kubectl -n coherence-test create -f metrics-cluster.yaml

coherence.coherence.oracle.com/metrics-cluster created
```
The kubectl CLI can be used to list the Pods for the cluster:

```shell
kubectl -n coherence-test get pod -l coherenceCluster=metrics-cluster

NAME                READY   STATUS    RESTARTS   AGE
metrics-cluster-0   1/1     Running   0          36s
metrics-cluster-1   1/1     Running   0          36s
metrics-cluster-2   1/1     Running   0          36s
```
In a test or development environment the simplest way to reach an exposed port is to use the kubectl port-forward command.
For example, to connect to the first Pod in the deployment:

```shell
kubectl -n coherence-test port-forward metrics-cluster-0 9612:9612

Forwarding from [::1]:9612 -> 9612
Forwarding from 127.0.0.1:9612 -> 9612
```
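With the port forwarded, http://127.0.0.1:9612/metrics serves metrics in the standard Prometheus text exposition format. As a quick sanity check, a small script can fetch and parse a scrape; this is a rough sketch using only the standard library, and the metric name in the sample payload is illustrative rather than an exact Coherence metric name:

```python
import re
import urllib.request

def parse_prometheus_text(payload: str) -> dict:
    """Parse Prometheus text exposition format into {metric{labels}: value}."""
    metrics = {}
    for line in payload.splitlines():
        line = line.strip()
        # Skip blank lines and the # HELP / # TYPE comment lines
        if not line or line.startswith("#"):
            continue
        # A sample line is "<name>{labels} <value>" or "<name> <value>"
        match = re.match(r"^(\S+?(?:\{[^}]*\})?)\s+(\S+)$", line)
        if match:
            metrics[match.group(1)] = float(match.group(2))
    return metrics

def scrape(url: str = "http://127.0.0.1:9612/metrics") -> dict:
    # Assumes the kubectl port-forward from the example above is running
    with urllib.request.urlopen(url) as resp:
        return parse_prometheus_text(resp.read().decode("utf-8"))

# Example with an illustrative payload (not a live scrape):
sample = """\
# HELP coherence_cluster_size The cluster size
# TYPE coherence_cluster_size gauge
coherence_cluster_size{cluster="metrics-cluster"} 3
"""
print(parse_prometheus_text(sample))
```

Calling scrape() while the port-forward is active returns the full set of metric samples as a dictionary keyed by metric name and labels.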
The operator can create a Prometheus ServiceMonitor
for the metrics port so that Prometheus will automatically
scrape metrics from the Pods
in a Coherence
deployment.
```yaml
apiVersion: coherence.oracle.com/v1
kind: Coherence
metadata:
  name: metrics-cluster
spec:
  coherence:
    metrics:
      enabled: true
  ports:
    - name: metrics
      serviceMonitor:
        enabled: true  # (1)
```
1. The serviceMonitor.enabled field is set to true for the metrics port.
See Exposing ports and Services - Service Monitors documentation for more details.
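Assuming the Prometheus Operator CRDs are installed in the cluster, the created ServiceMonitor resources can be listed with kubectl to confirm the monitor exists in the deployment's namespace:

```shell
kubectl -n coherence-test get servicemonitor
```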