The Open Liberty Operator can be used to deploy and manage applications running on Open Liberty or WebSphere Liberty into Kubernetes clusters. You can also perform Day-2 operations such as gathering traces and dumps using the Operator.
Important: This user guide only applies to Operator versions 1.2.0 and above.
Your environment must meet cluster, sizing, persistent storage, and network requirements for Open Liberty operator.
If you are installing the Open Liberty operator on a Red Hat OpenShift cluster, your environment must meet Red Hat OpenShift Container Platform (OCP) cluster requirements.
The Open Liberty operator requires an OCP version 4.16, 4.15, 4.14, or 4.12 cluster on the Linux x86_64 (amd64), Linux on Power (ppc64le), or Linux on IBM Z (s390x) platform, with cluster-admin permissions. To manage OCP projects with OCP CLI (oc) commands, the installation also requires the OCP CLI.
By default, certificates are generated by using the OpenShift certificate manager. If you want to use the manageTLS capability and use a different certificate manager (such as cert-manager) to generate and manage the certificates, you must install it.
If you are installing an Open Liberty operator on a Kubernetes cluster, your environment must meet the Kubernetes cluster requirements.
Open Liberty operator requires a Kubernetes version 1.29, 1.28, 1.27, 1.26, or 1.25 cluster on Linux x86_64 (amd64), Linux on Power (ppc64le), or Linux on IBM Z (s390x) platform, with cluster-admin permissions.
If you plan to use Operator Lifecycle Manager (OLM), it must be installed on your cluster.
If you want to use the manageTLS capability, you must have a certificate manager (such as cert-manager) installed.
Before you can use the Ingress resource to expose your application, you must install an ingress controller such as Nginx or Traefik.
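If your cluster does not already have an ingress controller, one common choice is the community NGINX ingress controller installed with Helm. The chart name, repository URL, and namespace in the following sketch are assumptions about a typical installation; any conformant ingress controller works.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace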
Your environment must meet sizing requirements for Open Liberty operator.
| Project | CPU request (cores) | Memory request (Mi) | Disk space (Gi) | Notes |
|---|---|---|---|---|
| Open Liberty operator | 0.2 (limit: 0.4) | 128 (limit: 1024) | N/A | Applications that are deployed and managed by the operator have their own resource requests and limits as specified in the Open Liberty operator custom resources. |

Note: The values in the table do not include any requirements inherent in the storage provider. The storage infrastructure might require more resources (for example, CPU or memory) from the worker nodes.
Your environment might need to meet certain storage requirements when you use Open Liberty operator.
No storage requirements exist for Open Liberty operator. However, if you are using the Open Liberty operator serviceability feature, and you have applications with multiple replicas, storage must support ReadWriteMany
access mode. For more information, see Storage for serviceability.
You are responsible for configuring and managing storage for any applications that you deploy with Open Liberty operator.
Your environment must meet network requirements for Open Liberty operator.
| Hostnames | Ports and Protocols | Purpose |
|---|---|---|
| icr.io, cp.icr.io | 443 (HTTP over TLS) | The listed domain is the container image registry that is used as part of the Open Liberty operator installation. This registry is also used when Open Liberty operator and dependency software levels are updated. |
Important: If you are upgrading from version 0.8.x or earlier, review the documentation on behavioural changes that could impact your applications before you upgrade.
Use the instructions for one of the releases to install the operator into a Kubernetes cluster.
The Open Liberty Operator is available for the following CPU architectures:
- Linux® x86_64 (amd64)
- Linux® on IBM® Z (s390x)
- Linux® on Power® (ppc64le)
The Open Liberty Operator can be installed to:
- watch own namespace
- watch another namespace
- watch all namespaces in the cluster
Appropriate cluster roles and bindings are required to watch another namespace or to watch all namespaces in the cluster.
Note: The Open Liberty Operator can interact only with resources that it is given permission to interact with through role-based access control (RBAC). Some operator features require interacting with resources in other namespaces. In that case, the operator must be installed with the correct ClusterRole definitions.
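When the operator is installed through Operator Lifecycle Manager (OLM), the watch scope is typically controlled by the OperatorGroup in the installation namespace. The following sketch, with placeholder names, limits the operator to watching its own namespace; omitting targetNamespaces selects the all-namespaces install mode.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: open-liberty-operator-group
  namespace: open-liberty
spec:
  targetNamespaces:
    - open-liberty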
You can continue to use existing custom resources (CRs) with apiVersion: apps.openliberty.io/v1beta2
, but there are some out-of-the-box behavioural changes in versions 1.2.0+ that could impact your applications. Primary changes are listed below. Prior to upgrading the Operator in a production environment, try the upgrade in a test or staging environment and validate that your applications function as you expect with the new version of the Open Liberty Operator. If not, make the necessary changes to your custom resources (CRs).
- A certificate for the service is automatically generated for each application to secure traffic. If .spec.expose is true, the Route is configured automatically to enable TLS by using reencrypt termination. You must enable TLS within the application image, and a secure TLS port (for example, 9443) must be specified in the .spec.service.port field. See Configuring transport layer security (TLS) certificates (.spec.manageTLS).
- Network policies are created for each application to block incoming traffic. See Allowing or limiting incoming traffic (.spec.networkPolicy).
- Security context is set to the most secure policy. See Set privileges and permissions for a pod or container (.spec.securityContext).
The architecture of the Open Liberty Operator follows the basic controller pattern: the Operator container with the controller is deployed into a Pod and listens for incoming resources with Kind: OpenLibertyApplication
. Creating an OpenLibertyApplication
custom resource (CR) triggers the Open Liberty Operator to create, update or delete Kubernetes resources needed by the application to run on your cluster.
In addition, Open Liberty Operator makes it easy to perform Day-2 operations on an Open Liberty or WebSphere Liberty server running inside a Pod as part of an OpenLibertyApplication
instance:
* Gather server traces using resource Kind: OpenLibertyTrace
* Generate server dumps using resource Kind: OpenLibertyDump
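The following minimal sketches show what such Day-2 requests might look like; the pod name is a placeholder and the field names should be verified against the Day-2 operations documentation for your operator version.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyTrace
metadata:
  name: example-trace
spec:
  podName: my-liberty-app-0
  traceSpecification: "*=info:com.myapp.*=all"
---
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyDump
metadata:
  name: example-dump
spec:
  podName: my-liberty-app-0
  include:
    - thread
    - heap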
Each instance of OpenLibertyApplication
CR represents the application to be deployed on the cluster:
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
name: my-liberty-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
service:
type: ClusterIP
port: 9080
expose: true
statefulSet:
storage:
size: 2Gi
mountPath: "/logs"
The following table lists configurable fields of the OpenLibertyApplication
CRD. For complete OpenAPI v3 representation of these values, view the files under /deploy/releases/<operator-version>/kubectl/openliberty-app-crd.yaml
. For example, the OpenLibertyApplication
CRD for release 0.8.2
.
Each OpenLibertyApplication
CR must specify .spec.applicationImage
field. Specifying other fields is optional.
Field |
Description |
|
Configures pods to run on specific nodes. For examples, see Limit a pod to run on specified nodes. |
|
An array of architectures to be considered for deployment. Their position in the array indicates preference. |
|
A YAML object that represents a NodeAffinity. |
|
A YAML object that contains set of required labels and their values. |
|
A YAML object that represents a PodAffinity. |
|
A YAML object that represents a PodAntiAffinity. |
|
The absolute name of the image to be deployed, containing the registry and the tag. On OpenShift, it can also be set to |
|
The name of the application this resource is part of. If not specified, it defaults to the name of the CR. |
|
The current version of the application. Label |
|
Configures the wanted resource consumption of pods. For examples, see Configure multiple application instances for high availability. |
|
Required field for autoscaling. Upper limit for the number of pods that can be set by the autoscaler. It cannot be lower than the minimum number of replicas. |
|
Lower limit for the number of pods that can be set by the autoscaler. |
|
Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. |
|
A Boolean to toggle the creation of Knative resources and use of Knative serving. To create a Knative service, set the parameter to true. For examples, see Deploy serverless applications with Knative and Expose applications externally. |
|
The wanted state and cycle of the deployment and resources owned by the deployment. |
|
Annotations to be added only to the deployment and resources owned by the deployment. |
|
A field to specify the update strategy of the deployment. For examples, see updateStrategy |
|
The type of update strategy of the deployment. The type can be set to |
|
DNS settings for the application pods. For more information, see #configure-dns-specdnspolicy-and-specdnsconfig |
|
The DNS Config for the application pods. |
|
The DNS Policy for the application pod. Defaults to ClusterFirst. |
|
Disable information about services being injected into the application pod as environment variables. The default value for this field is |
|
An array of environment variables following the format of |
|
An array of references to |
|
A boolean that toggles the external exposure of this deployment via a Route or a Knative Route resource. |
|
The list of Init Container definitions. |
|
A Boolean that enables management of Lightweight Third-Party Authentication (LTPA) key sharing among Liberty containers. The default is |
|
Enable management of password encryption key sharing amongst Liberty containers. Defaults to false. For more information, see Managing Password Encryption. |
|
A boolean to toggle automatic certificate generation and mounting TLS secret into the pod. The default value for this field is |
|
Specifies parameters for |
|
A YAML snippet representing an array of Endpoint component from ServiceMonitor. |
|
Labels to set on ServiceMonitor. |
|
Defines the network policy. For examples, see Allowing or limiting incoming traffic. |
|
A Boolean to disable the creation of the network policy. The default value is |
|
The labels of one or more pods from which incoming traffic is allowed. |
|
The labels of namespaces from which incoming traffic is allowed. |
|
Defines health checks on an application container to determine whether it is alive or ready to receive traffic. For examples, see Configure probes. |
|
A YAML object configuring the Kubernetes liveness probe that controls when Kubernetes needs to restart the pod. |
|
A YAML object configuring the Kubernetes readiness probe that controls when the pod is ready to receive traffic. |
|
A YAML object configuring the Kubernetes startup probe that controls when Kubernetes needs to startup the pod on its first initialization. |
|
The policy used when pulling the image. One of: |
|
If using a registry that requires authentication, the name of the secret containing credentials. |
|
The static number of desired replica pods that run simultaneously. |
|
The upper limit of CPU core. Specify integers, fractions (e.g. |
|
The memory upper limit in bytes. Specify integers with suffixes: |
|
The minimum required CPU core. Specify integers, fractions (e.g. |
|
The minimum memory in bytes. Specify integers with one of these suffixes: |
|
Annotations to be added to the |
|
A name of a secret that already contains TLS key, certificate and CA to be used in the |
|
Hostname to be used for the |
|
HTTP traffic policy with TLS enabled. Can be one of |
|
Path to be used for the |
|
Path type to be used. Required field for Ingress. See Ingress path types. |
|
TLS termination policy. Can be one of |
|
A security context to control privilege and permission settings for the application container. For examples, see Set privileges and permissions for a pod or container. If set, the fields of |
|
A Boolean that controls whether a process can gain more privileges than its parent process. This Boolean controls whether the |
|
The capabilities to add or drop when containers are run. Defaults to the default set of capabilities that the container runtime grants. |
|
An array of added capabilities of POSIX capabilities type. |
|
An array of removed capabilities of POSIX capabilities type. |
|
A Boolean to specify whether to run a container in privileged mode. Processes in privileged containers are equivalent to root on the host. The default is |
|
The type of proc mount to use for the containers. The default is |
|
A Boolean to specify whether this container has a read-only root file system. The default is |
|
The GID to run the entrypoint of the container process. If unset, |
|
A Boolean that specifies whether the container must run as a nonroot user. If |
|
The UID to run the entrypoint of the container process. If unset, the default is the user that is specified in image metadata. The value can be set in |
|
The SELinux context to be applied to the container. Its properties include |
|
The |
|
A profile that is defined in a file on the node. The profile must be preconfigured on the node to work. Specify a descending path, relative to the kubelet configured |
|
(Required) The kind of |
|
The Windows specific settings to apply to all containers. If unset, the options from the |
|
Configures the Semeru Cloud Compiler to handle Just-In-Time (JIT) compilation requests from the application. |
|
Enables the Semeru Cloud Compiler. Defaults to |
|
Number of desired pods for the Semeru Cloud Compiler. Defaults to |
|
Resource requests and limits for the Semeru Cloud Compiler. The CPU defaults to |
|
Configures parameters for the network service of pods. For an example, see Specify multiple service ports. |
|
Annotations to be added to the service. |
|
A boolean to toggle whether the operator expose the application as a bindable service. Defaults to |
|
Configure the TLS certificates for the service. The |
|
A name of a secret that already contains TLS key, certificate and CA to be mounted in the pod. The following keys are valid in the secret: |
|
Node proxies this port into your service. Please note once this port is set to a non-zero value it cannot be reset to zero. |
|
The port exposed by the container. |
|
An array consisting of service ports. |
|
The name for the port exposed by the container. |
|
The port that the operator assigns to containers inside pods. Defaults to the value of |
|
The Kubernetes Service Type. |
|
Specifies serviceability-related operations, such as gathering server memory dumps and server traces. For examples, see Storage for serviceability. |
|
A convenient field to request the size of the persisted storage to use for serviceability. Can be overridden by the |
|
A convenient field to request the StorageClassName of the persisted storage to use for serviceability. Can be overridden by the |
|
The name of the PersistentVolumeClaim resource you created to be used for serviceability. Must be in the same namespace. |
|
Deprecated. Use |
|
The service account to use for application deployment. If a service account name is not specified, a service account is automatically created. For examples, see Configure a service account. |
|
A Boolean to toggle whether the service account’s token should be mounted in the application pods. If unset or |
|
Name of the service account to use for deploying the application. |
|
The list of |
|
Specifies the configuration for single sign-on providers to authenticate with. Specify sensitive fields, such as clientId and clientSecret, for the selected providers by using the |
|
Specifies the host name of your enterprise GitHub, such as |
|
Specifies whether to map a user identifier to a registry user. This field applies to all providers. |
|
The list of OAuth 2.0 providers to authenticate with. Required fields: authorizationEndpoint and tokenEndpoint fields. Specify sensitive fields, clientId and clientSecret by using the |
|
Name of the header to use when an OAuth access token is forwarded. |
|
Determines whether the access token that is provided in the request is used for authentication. If the field is set to true, the client must provide a valid access token. |
|
Determines whether to support access token authentication if an access token is provided in the request. If the field is set to true and an access token is provided in the request, then the access token is used as an authentication token. |
|
Specifies an authorization endpoint URL for the OAuth 2.0 provider. Required field. |
|
The name of the social login configuration for display. |
|
Specifies the name of the claim. Use its value as the user group membership. |
|
Specifies the unique ID for the provider. The default value is oauth2. |
|
Specifies the realm name for this social media. |
|
Specifies the name of the claim. Use its value as the subject realm. |
|
Specifies one or more scopes to request. |
|
Specifies a token endpoint URL for the OAuth 2.0 provider. Required field. |
|
Specifies the required authentication method. |
|
The URL for retrieving the user information. |
|
Indicates which specification to use for the user API. |
|
Specifies the name of the claim. Use its value as the authenticated user principal. |
|
The list of OpenID Connect (OIDC) providers with which to authenticate. Each list item provides an OIDC client configuration. List items must include the |
|
Specifies a discovery endpoint URL for the OpenID Connect provider. Required field. |
|
The name of the social login configuration for display. |
|
Specifies the name of the claim. Use its value as the user group membership. |
|
Specifies whether to enable host name verification when the client contacts the provider. |
|
The unique ID for the provider. Default value is oidc. |
|
Specifies the name of the claim. Use its value as the subject realm. |
|
Specifies one or more scopes to request. |
|
Specifies the required authentication method. |
|
Specifies whether the UserInfo endpoint is contacted. |
|
Specifies the name of the claim. Use its value as the authenticated user principal. |
|
Specifies a callback protocol, host and port number, such as https://myfrontend.mycompany.com. This field applies to all providers. |
|
The wanted state and cycle of stateful applications. For examples, see Persist resources. |
|
Annotations to be added only to the StatefulSet and resources owned by the StatefulSet. |
|
The directory inside the container where this persisted storage will be bound to. |
|
A convenient field to set the size of the persisted storage. Can be overridden by the |
|
A YAML object representing a volumeClaimTemplate component of a |
|
A field to specify the update strategy of the StatefulSet. For examples, see updateStrategy |
|
The type of update strategy of the StatefulSet. The type can be set to |
|
Tolerations to be added to application pods. Tolerations allow the scheduler to schedule pods on nodes with matching taints. For more information, see Configure tolerations. |
|
Configures topology spread constraints for the application instance and if applicable, the Semeru Cloud Compiler instance. For examples, see Constrain how pods are spread between nodes and zones. |
|
A YAML array that represents a list of TopologySpreadConstraints. |
|
Disables the default TopologySpreadConstraints set by the operator. Defaults to |
|
A YAML object representing a pod volumeMount. For examples, see Persist Resources. |
|
A YAML object representing a pod volume. |
Use official Open Liberty images and guidelines to create your application image.
Use the following CR to deploy your application image to a Kubernetes environment:
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
name: my-liberty-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
The applicationImage
value must be defined in OpenLibertyApplication
CR. On OpenShift, the operator tries to find an image stream name with the applicationImage
value. The operator falls back to the registry lookup if it is not able to find any image stream that matches the value. If you want to distinguish an image stream called my-company/my-app
(project: my-company
, image stream name: my-app
) from the Docker Hub my-company/my-app
image, you can use the full image reference as docker.io/my-company/my-app
.
To get information on the deployed CR, use any of the following commands:
oc get olapp my-liberty-app
oc get olapps my-liberty-app
oc get openlibertyapplication my-liberty-app
An application administrator can view the status of an application that is deployed in a container. To get information about the deployed custom resource (CR), use a CLI or the Red Hat OpenShift console.
The status types for the .status.condition parameter in the OpenLibertyApplication CR are Ready, ResourcesReady, and Reconciled.
Reconciled: Indicates whether the current version of the operator successfully processed the configurations in the CR.
ResourcesReady: Indicates whether the application resources that are created and managed by the operator are ready.
Ready: Indicates the overall status of the application. If true, the application configuration was reconciled and its resources are in a ready state.
To use the CLI to get information about a deployed CR, run a kubectl get
or oc get
command.
To run kubectl commands, you need the Kubernetes command line tool or the Red Hat OpenShift command-line interface (CLI). To run oc commands, you need the Red Hat OpenShift CLI.
In the following get commands, replace my-liberty-app
with your CR name. Run any one of the commands. olapp
and olapps
are short names for openlibertyapplication
and openlibertyapplications
.
- Run any of the following kubectl get commands.

kubectl get olapp my-liberty-app
kubectl get olapps my-liberty-app
kubectl get openlibertyapplication my-liberty-app

- Run any of the following oc get commands.

oc get olapp my-liberty-app
oc get olapps my-liberty-app
oc get openlibertyapplication my-liberty-app
The results of the command resemble the following.
NAME IMAGE EXPOSED RECONCILED RESOURCESREADY READY AGE
my-liberty-app quay.io/my-repo/my-app:1.0 True True True 18m
The value in the READY
column is True
when the application is successfully installed. If the value in the READY
column is not True
, see Troubleshooting Open Liberty operators.
To use the Red Hat OpenShift console to get information about a deployed CR, view the deployed OpenLibertyApplication
instance and inspect the .status
section.
status:
conditions:
- lastTransitionTime: '2022-05-10T15:59:04Z'
status: 'True'
type: Reconciled
- lastTransitionTime: '2022-05-10T15:59:16Z'
message: 'Deployment replicas ready: 3/3'
reason: MinimumReplicasAvailable
status: 'True'
type: ResourcesReady
- lastTransitionTime: '2022-05-10T15:59:16Z'
message: Application is reconciled and resources are ready.
status: 'True'
type: Ready
imageReference: 'quay.io/my-repo/my-app:1.0'
references:
svcCertSecretName: my-liberty-app-svc-tls-ocp
versions:
reconciled: 1.0.0
If the Ready condition in .status.conditions does not have a status of True, see Troubleshooting Open Liberty operators.
The value of the .status.versions.reconciled
parameter is the version of the operand that is deployed into the cluster after the reconcile loop completes.
The operator controller periodically runs reconciliation to match the current state to the wanted state so that the managed resources remain functional. Open Liberty operator allows for increasing the reconciliation interval to reduce the controller’s workload when status remains unchanged. The reconciliation frequency can be configured with the Operator ConfigMap settings.
The value of the .status.conditions.unchangedConditionCount
parameter represents the number of reconciliation cycles during which the condition status type remains unchanged. Each time this value becomes an even number, the reconciliation interval increases according to the configurations in the ConfigMap
. The reconciliation interval increase feature is enabled by default but can be disabled if needed.
The .status.reconcileInterval
parameter represents the current reconciliation interval of the instance. The parameter increases by the increase percentage, which is specified in the ConfigMap
, based on the current interval. The calculation uses the base reconciliation interval, the increase percentage, and the count of unchanged status conditions, with the increases compounding over time. The maximum reconciliation interval is 240 seconds for repeated failures and 120 seconds for repeated successful status conditions.
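As an illustration (the exact compounding is determined by the operator), assume the default base interval of 5 seconds and a 100 percent increase percentage. Each time the unchanged-condition count reaches another even number, the interval roughly doubles:
5s -> 10s -> 20s -> 40s -> 80s -> 120s (capped at 120 seconds for repeated successes; repeated failures cap at 240 seconds)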
The ConfigMap
named open-liberty-operator
is used for configuring Liberty operator managed resources. It is created once when the operator starts and is located in the operator’s installed namespace.
Note: For OCP users, the AllNamespaces install mode designates openshift-operators as the operator’s installed namespace.
This is a sample operator ConfigMap
that would get generated when the operator is installed and running in the test-namespace
namespace.
kind: ConfigMap
apiVersion: v1
metadata:
name: open-liberty-operator
namespace: test-namespace
data:
certManagerCACertDuration: 8766h
certManagerCertDuration: 2160h
defaultHostname: ""
operatorLogLevel: info
reconcileIntervalIncreasePercentage: "100"
reconcileIntervalMinimum: "5"
The following table describes each configurable field.
| Field | Description |
|---|---|
| certManagerCACertDuration | The cert-manager issued CA certificate’s duration before expiry in Go time.Duration string format. The default value is 8766h (1 year). To learn more about this field, see Generating certificates with certificate manager. |
| certManagerCertDuration | The cert-manager issued service certificate’s duration before expiry in Go time.Duration string format. The default value is 2160h (90 days). To learn more about this field, see Generating certificates with certificate manager. |
| defaultHostname | The default hostname for the OpenLibertyApplication Route or Ingress URL when .spec.expose is set to true. To learn more about this field, see Expose applications externally. |
| operatorLogLevel | The log level for the Liberty operator. The default value is info. |
| reconcileIntervalMinimum | The default value of the base reconciliation interval in seconds is 5. The operator runs the reconciliation loop every reconciliation interval seconds for each instance. If an instance’s status conditions remain unchanged for 2 consecutive reconciliation loops, the reconciliation interval increases to reduce the reconciliation frequency. The interval increases based on the base reconciliation interval and specified increase percentage. For more information on the operator’s reconciliation frequency, see Viewing reconciliation frequency in the status. |
| reconcileIntervalIncreasePercentage | When the reconciliation interval increases, the increase is calculated as a specified percentage of the current interval. To disable the reconciliation interval increase, set the value to 0. |
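For example, to raise the base reconciliation interval to 10 seconds on the sample installation above, you might patch the ConfigMap (adjust the namespace to the operator's installed namespace):
kubectl patch configmap open-liberty-operator -n test-namespace --type merge -p '{"data":{"reconcileIntervalMinimum":"10"}}'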
Open Liberty Operator builds upon components from the generic Runtime Component Operator and provides additional features to customize your Open Liberty applications.
-
Override console logging environment variable default values (
.spec.env
) -
Configuring single sign-on (SSO) (
.spec.sso
) -
Storage for serviceability (
.spec.serviceability
) -
Configuring Lightweight Third-Party Authentication (LTPA) (
.spec.manageLTPA
) -
Managing password encryption (
.spec.managePasswordEncryption
)
-
Reference image streams (
.spec.applicationImage
) -
Configure service account (
.spec.serviceAccount
) -
Add or change labels (
.metadata.labels
) -
Add annotations (
.metadata.annotations
) -
Set environment variables for an application container (
.spec.env
or.spec.envFrom
) -
Setting up basic authentication credentials by using environment variables (
.spec.envFrom[].secretRef
) -
Configure multiple application instances for high availability (
.spec.replicas
or.spec.autoscaling
) -
Set privileges and permissions for a pod or container (
.spec.securityContext
) -
Persist resources (
.spec.statefulSet
and.spec.volumeMounts
) -
Monitor resources (
.spec.monitoring
) -
Specify multiple service ports (
.spec.service.port*
and.spec.monitoring.endpoints
) -
Configure probes (
.spec.probes
) -
Deploy serverless applications with Knative (
.spec.createKnativeService
) -
Expose applications externally (
.spec.expose
,.spec.createKnativeService
,.spec.route
) -
Allowing or limiting incoming traffic (
.spec.networkPolicy
) -
Bind applications with operator-managed backing services (
.status.binding.name
and.spec.service.bindable
) -
Limit a pod to run on specified nodes (
.spec.affinity
) -
Constrain how pods are spread between nodes and zones (
.spec.topologySpreadConstraints
) -
Configuring transport layer security (TLS) certificates
-
Generating certificates with Red Hat OpenShift service CA (
.spec.service.annotations
) -
Specifying certificates for a secret Route and Service (
.spec.service.certificateSecretRef
and.spec.route.certificateSecretRef
)
The Open Liberty operator sets environment variables that are related to console logging by default. You can override the console logging default values with your own values in your CR .spec.env
list.
| Name | Value |
|---|---|
| WLP_LOGGING_CONSOLE_LOGLEVEL | info |
| WLP_LOGGING_CONSOLE_SOURCE | message,accessLog,ffdc,audit |
| WLP_LOGGING_CONSOLE_FORMAT | json |
To override default values for the console logging environment variables, set your preferred values manually in your CR .spec.env
list. For information about values that you can set, see the Open Liberty logging documentation.
The following example shows a CR .spec.env
list that sets nondefault values for the console logging environment variables.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
env:
- name: WLP_LOGGING_CONSOLE_FORMAT
value: "DEV"
- name: WLP_LOGGING_CONSOLE_SOURCE
value: "messages,trace,accessLog"
- name: WLP_LOGGING_CONSOLE_LOGLEVEL
value: "error"
For more information about overriding variable default values, see Set environment variables for an application container (.spec.env
or .spec.envFrom
).
An administrator can configure single sign-on (SSO) for Open Liberty operators to authenticate and manage users. Authentication can be delegated to external providers, such as Google, Facebook, LinkedIn, Twitter, GitHub, or any OpenID Connect (OIDC) or OAuth 2.0 clients.
-
Configure and build the application image with single sign-on following the instructions in Open Liberty images and guidelines and then Configuring Security: Single Sign-On configuration.
-
Complete one of these choices to configure SSO in your operator.
The operator can specify a client ID and secret in advance. A disadvantage to this configuration is that the client ID and secret must be supplied for registration repetitively, rather than automatically with the provider administrator supplying the information needed for registration one time.
-
Create a secret that specifies sensitive information such as client IDs, client secrets, and tokens for the login providers you selected in application image. Create the
Secret
namedOpenLibertyApplication_name-olapp-sso
in the same namespace as theOpenLibertyApplication
instance. In the following sample snippets,OpenLibertyApplication
is namedmy-app
, so the secret must be namedmy-app-olapp-sso
. Both are in the same namespace calleddemo
.-
The keys within the
Secret
must follow theprovider_name-sensitive_field_name
naming pattern. For example,google-clientSecret
. Instead of a-
character in between, you can also use.
or_
. For example,oauth2_userApiToken
.apiVersion: v1 kind: Secret metadata: # Name of the secret should be in this format: <OpenLibertyApplication_name>-olapp-sso name: my-app-olapp-sso # Secret must be created in the same namespace as the OpenLibertyApplication instance namespace: demo type: Opaque data: # The keys must be in this format: <provider_name>-<sensitive_field_name> github-clientId: bW9vb29vb28= github-clientSecret: dGhlbGF1Z2hpbmdjb3c= twitter-consumerKey: bW9vb29vb28= twitter-consumerSecret: dGhlbGF1Z2hpbmdjb3c= oidc-clientId: bW9vb29vb28= oidc-clientSecret: dGhlbGF1Z2hpbmdjb3c= oauth2-clientId: bW9vb29vb28= oauth2-clientSecret: dGhlbGF1Z2hpbmdjb3c= oauth2-userApiToken: dGhlbGF1Z2hpbmdjb3c=
-
The operator watches for the creation and deletion of the SSO secret and any updates to it. Adding, updating, or removing keys from the secret are passed down to the application automatically.
-
-
Configure single sign-on in the OpenLibertyApplication custom resource (CR). At minimum, set the
.spec.sso: {}
field so that the operator can pass the values from the secret to your application. Refer to the OpenLibertyApplication CR for more SSO configurations. -
Configure secured
Service
and securedRoute
with necessary certificates. Refer to Certificates for more information. -
To automatically trust certificates from popular identity providers, including social login providers such as Google and Facebook, set the
SEC_TLS_TRUSTDEFAULTCERTS
environment variable totrue
. To automatically trust certificates issued by the Kubernetes cluster, set environment variableSEC_IMPORT_K8S_CERTS
totrue
. Alternatively, you can include the necessary certificates manually when building application image or mounting them using a volume when you deploy your application.spec: applicationImage: quay.io/my-repo/my-app:1.0 env: - name: SEC_TLS_TRUSTDEFAULTCERTS value: "true" - name: SEC_IMPORT_K8S_CERTS value: "true" sso: redirectToRPHostAndPort: https://redirect-url.mycompany.com github: hostname: github.mycompany.com oauth2: - authorizationEndpoint: specify-required-value tokenEndpoint: specify-required-value oidc: - discoveryEndpoint: specify-required-value service: certificateSecretRef: mycompany-service-cert port: 9443 type: ClusterIP expose: true route: certificateSecretRef: mycompany-route-cert termination: reencrypt
The operator can request a client ID and client secret from providers, rather than requiring them in advance. This ability can simplify deployment because the provider administrator can supply the information that is needed for registration one time, instead of supplying client IDs and secrets repetitively. The callback URL from the provider to the client is supplied by the operator, so it doesn’t need to be known in advance.
-
Add attributes that are named
provider_name-autoreg-field_name
to the Kubernetes secret. First, the operator makes an https request to the.spec.sso.oidc[].discoveryEndpoint
field to obtain URLs for subsequent REST calls. Next, it makes other REST calls to the provider and obtains a client ID and client secret. The Kubernetes secret is updated with the obtained values. -
For Red Hat® Single Sign-on (RH-SSO), you can set the
.spec.sso.oidc[].userNameAttribute
field topreferred_username
to obtain the user ID that was used to log in. For IBM Security Verify, set the field togiven_name
. The following example secret is tested on Red Hat OpenShift® with RH-SSO and IBM® Security Verify.apiVersion: v1 kind: Secret metadata: # Name of the secret should be in this format: <OpenLibertyApplication_name>-olapp-sso name: my-app-olapp-sso # Secret must be created in the same namespace as the OpenLibertyApplication instance namespace: demo type: Opaque data: # base64 encode the data before entering it here. # # Leave the clientId and secret out, registration will obtain them and update their values. # oidc-clientId # oidc-clientSecret # # Reserved: <provider>-autoreg-RegisteredClientId and RegisteredClientSecret # are used by the operator to store a copy of the clientId and clientSecret values. # # Automatic registration attributes have -autoreg- after the provider name. # # Red Hat Single Sign On requires an initial access token for registration. oidc-autoreg-initialAccessToken: xxxxxyyyyy # # IBM Security Verify requires a special clientId and clientSecret for registration. # oidc-autoreg-initialClientId: bW9vb29vb28= # oidc-autoreg-initialClientSecret: dGhlbGF1Z2hpbmdjb3c= # # Optional: Grant types are the types of OAuth flows the resulting clients will allow # Default is authorization_code,refresh_token. Specify a comma separated list. # oidc-autoreg-grantTypes: base64 data goes here # # Optional: Scopes limit the types of information about the user that the provider will return. # Default is openid,profile. Specify a comma-separated list. # oidc-autoreg-scopes: base64 data goes here # # Optional: To skip TLS certificate checking with the provider during registration, specify insecureTLS as true. # Default is false. # oidc-autoreg-insecureTLS: dHJ1ZQ==
You can authenticate with multiple OIDC and OAuth 2.0 providers.
-
Configure and build application image with multiple OIDC or OAuth 2.0 providers. For example, set the provider name in your Dockerfile. The provider name must be unique and must contain only alphanumeric characters.
ARG SEC_SSO_PROVIDERS="google oidc:provider1,provider2 oauth2:provider3,provider4"
-
Use the provider name in an SSO
Secret
to specify its client ID and secret. For example, the followingSecret
setsprovider1-clientSecret: dGhlbGF1Z2hpbmdjb3c=
for a client ID and secret.apiVersion: v1 kind: Secret metadata: # Name of the secret should be in this format: <OpenLibertyApplication_name>-olapp-sso name: my-app-olapp-sso # Secret must be created in the same namespace as the OpenLibertyApplication instance namespace: demo type: Opaque data: # The keys must be in this format: <provider_name>-<sensitive_field_name> google-clientId: xxxxxxxxxxxxx google-clientSecret: yyyyyyyyyyyyyy provider1-clientId: bW9vb29vb28= provider1-clientSecret: dGhlbGF1Z2hpbmdjb3c= provider2-autoreg-initialClientId: bW9vb29vb28= provider2-autoreg-initialClientSecret: dGhlbGF1Z2hpbmdjb3c= provider3-clientId: bW9vb29vb28= provider3-clientSecret: dGhlbGF1Z2hpbmdjb3c= provider4-clientId: bW9vb29vb28= provider4-clientSecret: dGhlbGF1Z2hpbmdjb3c=
-
Configure a field for each corresponding provider in the
OpenLibertyApplication
CR. Use one or both of the.spec.sso.oidc[].id
and.spec.sso.oauth2[].id
fields.sso: oidc: - id: provider1 discoveryEndpoint: specify-required-value - id: provider2 discoveryEndpoint: specify-required-value oauth2: - id: provider3 authorizationEndpoint: specify-required-value tokenEndpoint: specify-required-value - id: provider4 authorizationEndpoint: specify-required-value tokenEndpoint: specify-required-value
The operator provides single storage for serviceability.
The operator makes it easy to use a single storage for Day-2 Operations that are related to serviceability, such as gathering server traces or server dumps. The single storage is shared by all pods of a OpenLibertyApplication
instance. You don’t need to mount a separate storage for each pod.
Your cluster must be configured to automatically bind the PersistentVolumeClaim (PVC) to a PersistentVolume or you must bind it manually.
You can specify the size of the persisted storage to request with the .spec.serviceability.size
parameter.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
serviceability:
size: 1Gi
You can specify which storage class to request with the .spec.serviceability.storageClassName
parameter if you don’t want to use the default storage class. The operator automatically creates a PersistentVolumeClaim
with the specified size and access mode ReadWriteMany
. It is mounted at /serviceability
inside all pods of the OpenLibertyApplication
instance.
The following example shows a complete CR that requests serviceability storage.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
name: my-liberty-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
serviceability:
size: 1Gi
You can also create the PersistentVolumeClaim
yourself and specify its name using .spec.serviceability.volumeClaimName
field. You must create it in the same namespace as the OpenLibertyApplication
instance.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
name: my-liberty-app
spec:
applicationImage: quay.io/my-repo/my-app:1.0
serviceability:
volumeClaimName: my-pvc
Once a PersistentVolumeClaim is created by the operator, its size cannot be updated. The PersistentVolumeClaim is not deleted when serviceability is disabled or when the OpenLibertyApplication is deleted.
To deploy an image from an image stream, you must specify a .spec.applicationImage
field in your CR.
spec:
applicationImage: my-namespace/my-image-stream:1.0
The previous example looks up the 1.0
tag from the my-image-stream
image stream in the my-namespace
project and populates the CR .status.imageReference
field with the exact referenced image similar to the following one: image-registry.openshift-image-registry.svc:5000/my-namespace/my-image-stream@sha256:*
. The operator watches the specified image stream and deploys new images as new ones are available for the specified tag.
To reference an image stream, the .spec.applicationImage
field must follow the <project name>/<image stream name>[:<tag>]
format. If <project name>
or <tag>
is not specified, the operator defaults the values to the namespace of the CR and the value of latest
, respectively. For example, the applicationImage: my-image-stream
configuration is the same as the applicationImage: my-namespace/my-image-stream:latest
configuration.
The Operator tries to find an image stream name first with the <project name>/<image stream name>[:<tag>]
format and falls back to the registry lookup if it is not able to find any image stream that matches the value.
Note: This feature is only available if you are running on Red Hat OpenShift. The operator requires ClusterRole permissions if the image stream resource is in another namespace.
Lightweight Third-Party Authentication (LTPA) provides SSO configuration to authenticate users to access applications. With LTPA, cryptographic keys enable and disable user details that pass between servers for authentication. To complete authentication, an LTPA token is generated. The LTPA token is signed with cryptographic keys, stores the user details, and has an expiration time. When authentication is complete, the LTPA token passes to other servers through cookies for web sources when SSO is enabled.
Open Liberty operator can generate and manage an LTPA key for applications. By default, this functionality is disabled. Set the .spec.manageLTPA
parameter to true
in each OpenLibertyApplication custom resource to enable this functionality.
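For example, the relevant portion of a CR with LTPA management enabled looks like the following snippet.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  manageLTPA: true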
A single LTPA key is used per namespace and is shared with microservices and applications in that namespace. A password is generated and encrypted to secure the LTPA key. The LTPA key and the password are stored in a Kubernetes Secret resource with the app.kubernetes.io/name=olo-managed-ltpa label.
To revoke the LTPA key, delete the Kubernetes Secret resource with the app.kubernetes.io/name=olo-managed-ltpa
label in the namespace. A new LTPA key and password is then generated and used with applications in the namespace. When .spec.manageLTPA
is enabled with .spec.managePasswordEncryption
, the Liberty Operator encrypts the password of the LTPA key with the specified password encryption key. For more information on LTPA, see Single sign-on (SSO).
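For example, assuming the label shown above and an application namespace of demo (a placeholder), the managed LTPA secret can be deleted with a label selector:
kubectl delete secret -l app.kubernetes.io/name=olo-managed-ltpa -n demo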
Note: LTPA support from Liberty Operator version 1.3 continues to work as before, and the LTPA key that was generated with Liberty Operator version 1.3 remains in use. When any OpenLibertyApplication CR in the namespace enables the .spec.managePasswordEncryption parameter, the LTPA key is regenerated. The new LTPA key is shared between OpenLibertyApplication CR instances with and without .spec.managePasswordEncryption.
The Liberty server must allow configuration drop-ins. The following configuration must not be set on the server. Otherwise, the manageLTPA functionality does not work.
<config updateTrigger="disabled"/>
Enable the Application Security feature in the Liberty server configuration for the application.
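As a sketch of that server configuration (the feature version depends on your Liberty release and is an assumption here), the Application Security feature is enabled through a featureManager entry in a configuration drop-in:
<featureManager>
    <feature>appSecurity-5.0</feature>
</featureManager>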
Note: Only available for operator version v1.4.0+.
The managePasswordEncryption
function allows management of password encryption key sharing among Liberty containers. Encrypting a password makes it difficult for someone to recover a password without the password encryption key.
The Liberty Operator can manage password encryption key sharing among Liberty containers. To enable password encryption support, create a Secret named wlp-password-encryption-key
in the same namespace as the OpenLibertyApplication CR instance. Within the secret, the encryption key must be specified by using passwordEncryptionKey
. All CR instances that enable password encryption share the encryption key in the namespace.
apiVersion: v1
kind: Secret
metadata:
name: wlp-password-encryption-key
type: Opaque
stringData:
passwordEncryptionKey: randomkey
Set .spec.managePasswordEncryption
to true in the CR.
spec:
managePasswordEncryption: true
The Liberty Operator handles mounting the password encryption key into the application pod and enables the necessary Liberty server configuration to use it.
When .spec.manageLTPA
is enabled with .spec.managePasswordEncryption
, the Liberty Operator encrypts the password of the LTPA key with the password encryption key you specified.
Note: Encrypt all other AES-encrypted passwords in the Liberty server configuration by using the password encryption key that you specify in the Secret named wlp-password-encryption-key. Liberty servers cannot decrypt passwords that are encrypted with a different key. For more information about how to obfuscate passwords for Liberty, see the securityUtility encode command.
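For example, a password for the server configuration can be AES-encrypted with the shared key. The key value here matches the sample randomkey secret shown earlier in this section, and the password is a placeholder.
securityUtility encode --encoding=aes --key=randomkey myPassword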
The Liberty server must allow configuration drop-ins. The following configuration must not be set on the server. Otherwise, the managePasswordEncryption
function does not work.
<config updateTrigger="disabled"/>
The operator can create a ServiceAccount
resource when deploying an OpenLibertyApplication
custom resource (CR). If .spec.serviceAccount.name
is not specified in a CR, the operator creates a service account with the same name as the CR (e.g. my-app
).
Note: .spec.serviceAccountName is now deprecated. The operator still looks up the value of .spec.serviceAccountName, but you must switch to using .spec.serviceAccount.name.
You can set .spec.serviceAccount.mountToken
to disable mounting the service account token into the application pods. By default, the service account token is mounted. This configuration applies to either the default service account that the operator creates or to the custom service account that you provide.
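For example, the following snippet (the service account name is a placeholder) uses an existing service account and disables token mounting:
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  serviceAccount:
    name: my-service-account
    mountToken: false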
If your application requires specific permissions but you still want the operator to create a ServiceAccount, you can manually create a role binding to bind a role to the operator-created service account, as shown in the sketch after this paragraph. To learn more about role-based access control (RBAC), see the Kubernetes documentation.
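As a hedged sketch with placeholder role and namespace names, such a role binding for the operator-created service account my-app might look like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-rolebinding
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: my-app
    namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: my-app-role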
By default, the operator adds the following labels into all resources created
for an OpenLibertyApplication
CR:
| Label | Default value | Description |
|---|---|---|
| app.kubernetes.io/instance | | A unique name or identifier for this component. You cannot change the default. |
| app.kubernetes.io/name | | A name that represents this component. |
| app.kubernetes.io/managed-by | | The tool that manages this component. |
| app.kubernetes.io/component | | The type of component that is created. For a full list, see the Red Hat OpenShift documentation. |
| app.kubernetes.io/part-of | | The name of the higher-level application that this component is a part of. If the component is not a stand-alone application, configure this label. |
| app.kubernetes.io/version | | The version of the component. |
You can add new labels or overwrite existing labels, excluding the app.kubernetes.io/instance
label. To set labels, specify them in your CR as key-value pairs in the .metadata.labels
field.
metadata:
name: my-app
labels:
my-label-key: my-label-value
spec:
applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of the CR, any changes to its labels are applied only if a spec
field is updated.
When running in Red Hat OpenShift, there are additional labels and annotations that are standard on the platform. Overwrite defaults where applicable and add any labels from the Red Hat OpenShift list that are not set by default using the previous instructions.
To add new annotations into all resources created for a OpenLibertyApplication
, specify them in your CR as key-value pairs in the .metadata.annotations
field. Annotations in a CR override any annotations specified on a resource, except for the annotations set on Service
with .spec.service.annotations
.
metadata:
name: my-app
annotations:
my-annotation-key: my-annotation-value
spec:
applicationImage: quay.io/my-repo/my-app:1.0
After the initial deployment of OpenLibertyApplication
, any changes to its annotations are applied only when one of the fields from spec
is updated.
When running in Red Hat OpenShift, there are additional annotations that are standard on the platform. Overwrite defaults where applicable and add any annotations from the Red Hat OpenShift list that are not set by default by using the previous instructions.
To set environment variables for your application container, specify .spec.env
or .spec.envFrom
fields in a CR. The environment variables can come directly from key-value pairs, ConfigMap
, or Secret
. The environment variables set by the .spec.env
or .spec.envFrom
fields override any environment variables that are specified in the container image.
Use .spec.envFrom
to define all data in a ConfigMap
or a Secret
as environment variables in a container. Keys from ConfigMap
or Secret
resources become environment variable names in your container. The following CR sets key-value pairs in .spec.env
and .spec.envFrom
fields.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
env:
- name: DB_NAME
value: "database"
- name: DB_PORT
valueFrom:
configMapKeyRef:
name: db-config
key: db-port
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credential
key: adminUsername
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credential
key: adminPassword
envFrom:
- configMapRef:
name: env-configmap
- secretRef:
name: env-secrets
For another example that uses .spec.envFrom[].secretRef
, see Setting up basic authentication credentials by using environment variables.
Setting up basic authentication credentials by using environment variables (.spec.envFrom[].secretRef)
An administrator can use the username
and password
container environment variables for basic authentication credentials.
-
Create a secret with your wanted
username
andpassword
values in your Kubernetes cluster. -
Modify your
OpenLibertyApplication
CR to add a.spec.envFrom
parameter definition that references yourSecret
. For example, add the following.spec.envFrom[].secretRef
parameter to your CR and replacebasic-auth
with your secret.spec: envFrom: - secretRef: name: basic-auth
-
Ensure that your application container can access the
Secret
.
The .spec.envFrom
configuration sets two environment variables for your application container, username
and password
, and uses the username
and password
values in your secret.
Configure multiple application instances for high availability (.spec.replicas or .spec.autoscaling)
To run multiple instances of your application for high availability, use the .spec.replicas field for multiple static instances or the .spec.autoscaling field for auto-scaling, which autonomically creates or deletes instances based on resource consumption. The .spec.autoscaling.maxReplicas and .spec.resources.requests.cpu fields are required for auto-scaling.
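For example, the following snippet scales between 2 and 5 replicas and sets the CPU request that auto-scaling requires; the target-utilization field name and value are taken from the operator CRD and are illustrative, so verify them against your operator version.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  resources:
    requests:
      cpu: 500m
  autoscaling:
    minReplicas: 2
    maxReplicas: 5
    targetCPUUtilizationPercentage: 70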
A security context controls privilege and permission settings for a pod or application container. By default, the operator sets several .spec.securityContext
parameters for an application container as shown in the following example.
spec:
containers:
- name: app
securityContext:
capabilities:
drop:
- ALL
privileged: false
runAsNonRoot: true
readOnlyRootFilesystem: false
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
To override the default values or set more parameters, change the .spec.securityContext
parameters, for example:
spec:
applicationImage: quay.io/my-repo/my-app:1.0
securityContext:
readOnlyRootFilesystem: true
runAsUser: 1001
seLinuxOptions:
level: "s0:c123,c456"
For more information, see Set the security context for a Container. For more information about security context parameters, see SecurityContext v1 core.
Note: If your Kubernetes cluster does not generate a user ID and .spec.securityContext.runAsUser is not specified, the user ID defaults to the value in the image metadata. If the image does not specify a user ID either, you must assign one through .spec.securityContext.runAsUser to meet the .spec.securityContext.runAsNonRoot requirement.
If storage is specified in the OpenLibertyApplication
CR, the operator can create a StatefulSet
and PersistentVolumeClaim
for each pod. If storage is not specified, StatefulSet
resource is created without persistent storage.
The following CR definition uses .spec.statefulSet.storage
to provide basic storage. The operator creates a StatefulSet
with the size of 1Gi
that mounts to the /data
folder.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
statefulSet:
storage:
size: 1Gi
mountPath: "/data"
An Open Liberty operator CR definition can provide more advanced storage. With the following CR definition, the operator creates a PersistentVolumeClaim
called pvc
with the size of 1Gi
and ReadWriteOnce
access mode. The operator enables users to provide an entire .spec.statefulSet.storage.volumeClaimTemplate
for full control over the automatically created PersistentVolumeClaim
. To persist to more than one folder, the CR definition uses the .spec.volumeMounts
field instead of .spec.statefulSet.storage.mountPath
.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
volumeMounts:
- name: pvc
mountPath: /data_1
subPath: data_1
- name: pvc
mountPath: /data_2
subPath: data_2
statefulSet:
storage:
volumeClaimTemplate:
metadata:
name: pvc
spec:
accessModes:
- "ReadWriteMany"
storageClassName: 'glusterfs'
resources:
requests:
storage: 1Gi
Note: After the StatefulSet is created, the persistent storage and PersistentVolumeClaim cannot be added or changed.
The following CR definition does not specify storage and creates StatefulSet
resources without persistent storage. You can create StatefulSet
resources without storage if you require only ordering and uniqueness of a set of pods.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
statefulSet: {}
An Open Liberty operator can create a ServiceMonitor
resource to integrate with Prometheus Operator.
Note: The operator monitoring does not support integration with Knative Service. Prometheus Operator is required to use ServiceMonitor.
At minimum, provide a label for Prometheus set on ServiceMonitor objects. In the following example, the .spec.monitoring label is app-prometheus.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  monitoring:
    labels:
      app-prometheus: ''
For more advanced monitoring, set many ServiceMonitor
parameters such as authentication secret with Prometheus Endpoint.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
monitoring:
labels:
app-prometheus: ''
endpoints:
- interval: '30s'
basicAuth:
username:
key: username
name: metrics-secret
password:
key: password
name: metrics-secret
tlsConfig:
insecureSkipVerify: true
To provide multiple service ports in addition to the primary service port, configure the primary service port with the .spec.service.port
, .spec.service.targetPort
, .spec.service.portName
, and .spec.service.nodePort
fields. The primary port is exposed from the container that runs the application and the port values are used to configure the Route (or Ingress), Service binding and Knative service.
To specify an alternative port for the Service Monitor, use the .spec.monitoring.endpoints field and specify either the port or targetPort field; otherwise, it defaults to the primary port.
Specify the primary port with the .spec.service.port
field and additional ports with the .spec.service.ports
field as shown in the following example.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
service:
type: NodePort
port: 9080
portName: http
targetPort: 9080
nodePort: 30008
ports:
- port: 9443
name: https
monitoring:
endpoints:
- basicAuth:
password:
key: password
name: metrics-secret
username:
key: username
name: metrics-secret
interval: 5s
port: https
scheme: HTTPS
tlsConfig:
insecureSkipVerify: true
labels:
app-monitoring: 'true'
Probes are health checks on an application container to determine whether it is alive or ready to receive traffic. The Open Liberty operator has startup, liveness, and readiness probes.
Probes are not enabled in applications by default. To enable a probe with the default values, set the probe parameters to {}
. The following example enables all three probes with their default values.
spec:
probes:
startup: {}
liveness: {}
readiness: {}
The following code snippet shows the default values for the startup probe (.spec.probes.startup
).
httpGet:
path: /health/started
port: 9443
scheme: HTTPS
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 20
The following code snippet shows the default values for the liveness probe (.spec.probes.liveness
).
httpGet:
path: /health/live
port: 9443
scheme: HTTPS
initialDelaySeconds: 60
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 3
The following code snippet shows the default values for the readiness probe (.spec.probes.readiness
).
httpGet:
path: /health/ready
port: 9443
scheme: HTTPS
initialDelaySeconds: 10
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 10
To override a default value, specify a different value. The following example overrides a liveness probe initial delay default of 60
seconds and sets the initial delay to 90
seconds.
spec:
probes:
liveness:
initialDelaySeconds: 90
When a probe initialDelaySeconds parameter is set to 0
, the default value is used. To set a probe initial delay to 0
, define the probe instead of using the default probe. The following example overrides the default value and sets the initial delay to 0
.
spec:
probes:
liveness:
httpGet:
path: "/health/live"
port: 9443
initialDelaySeconds: 0
If Knative is installed on the cluster, you can deploy serverless applications with Knative. The operator creates a Knative Service resource, which manages the entire life cycle of the workload. To create a Knative service, set .spec.createKnativeService
to true
.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
createKnativeService: true
The operator creates a Knative service in the cluster and populates the resource with applicable OpenLibertyApplication
fields. Also, it ensures non-Knative resources such as Kubernetes Service
, Route
, and Deployment
are deleted.
The CRD fields that can populate the Knative service resource include .spec.applicationImage
, .spec.serviceAccountName
, .spec.probes.liveness
, .spec.probes.readiness
, .spec.service.port
, .spec.volumes
, .spec.volumeMounts
, .spec.env
, .spec.envFrom
, .spec.pullSecret
and .spec.pullPolicy
. The startup probe is not fully supported by Knative, so .spec.probes.startup
does not apply when the Knative service is enabled.
When you use private registries with Knative or OpenShift Serverless, .spec.pullSecret
must be specified. The OpenShift global
pull secret cannot be used to provide registry credentials to Knative services.
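For example, a minimal sketch of a Knative deployment that pulls from a private registry might look like the following; the registry host and the secret name are placeholders.
spec:
  applicationImage: registry.example.com/my-repo/my-app:1.0
  createKnativeService: true
  pullSecret: my-registry-credentials  # placeholder secret with credentials for registry.example.com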
For details on how to configure Knative for tasks such as enabling HTTPS connections and setting up a custom domain, see the Knative documentation.
Autoscaling fields in OpenLibertyApplication
are not used to configure Knative Pod Autoscaler (KPA). To learn how to configure KPA, see Configuring the Autoscaler.
Expose an application externally with a Route, Knative Route, or Ingress resource.
To expose an application externally with a route in a non-Knative deployment, set .spec.expose
to true.
The operator creates a secured route based on the application service when .spec.manageTLS
is enabled. To use custom certificates, see information about .spec.service.certificateSecretRef
and .spec.route.certificateSecretRef
.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
To expose an application externally with Ingress in a non-Knative deployment, complete the following steps.
-
To use the
Ingress
resource to expose your application, install an Ingress
controller such as Nginx or Traefik. -
Ensure that a
Route
resource is not on the cluster. The Ingress resource is created only if the Route
resource is not available on the cluster. -
To use the
Ingress
resource, set the defaultHostName
variable in the open-liberty-operator
ConfigMap
object to a hostname such as mycompany.com
. -
Enable TLS. Generate a certificate and specify the secret that contains the certificate with the
.spec.route.certificateSecretRef
field.spec: applicationImage: quay.io/my-repo/my-app:1.0 expose: true route: certificateSecretRef: mycompany-tls
-
Specify
.spec.route.annotations
to configure the Ingress
resource. Annotations are specific to the Ingress
controller implementation (for example, Nginx, HAProxy, or Traefik). The following example specifies annotations, an existing TLS secret, and a custom hostname.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
route:
annotations:
# You can use this annotation to specify the name of the ingress controller to use.
# You can install multiple ingress controllers to address different types of incoming traffic such as an external or internal DNS.
kubernetes.io/ingress.class: "nginx"
# The following nginx annotation enables a secure pod connection:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# The following traefik annotation enables a secure pod connection:
traefik.ingress.kubernetes.io/service.serversscheme: https
# Use a custom hostname for the Ingress
host: app-v1.mycompany.com
# Reference a pre-existing TLS secret:
certificateSecretRef: mycompany-tls
To expose an application as a Knative service, set .spec.createKnativeService
and .spec.expose
to true
. The operator creates an unsecured Knative route. To configure secure HTTPS connections for your Knative deployment, see Configuring HTTPS with TLS certificates.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
createKnativeService: true
expose: true
By default, network policies for an application isolate incoming traffic.
-
The default network policy created for applications that are not exposed limits incoming traffic to pods in the same namespace that are part of the same application. Traffic is limited to only the ports that are configured by the service. By default, traffic is exposed to
.spec.service.targetPort
when specified; otherwise, it falls back to
.spec.service.port
. Using the same logic, traffic is exposed for each additional targetPort
or port
provided in the .spec.service.ports[]
array. -
Red Hat OpenShift supports network policies by default. For exposed applications on Red Hat OpenShift, the network policy allows incoming traffic from the Red Hat OpenShift ingress controller on the ports in the service configuration. The network policy also allows incoming traffic from the Red Hat OpenShift monitoring stack.
-
For exposed applications on other Kubernetes platforms, the network policy allows incoming traffic from any pods in any namespace on the ports in the service configuration. For deployments to other Kubernetes platforms, ensure that your network plug-in supports the Kubernetes network policies.
To disable the creation of network policies for an application, set .spec.networkPolicy.disable
to true
.
spec:
networkPolicy:
disable: true
You can change the network policy to allow incoming traffic from specific namespaces or pods. By default, .spec.networkPolicy.namespaceLabels
is set to the same namespace to which the application is deployed, and .spec.networkPolicy.fromLabels
is set to pods that belong to the same application specified by .spec.applicationName
. The following example allows incoming traffic from pods that are labeled with the frontend
role and are in the same namespace.
spec:
networkPolicy:
fromLabels:
role: frontend
The following example allows incoming traffic from pods that belong to the same application in the example
namespace.
spec:
networkPolicy:
namespaceLabels:
kubernetes.io/metadata.name: example
The following example allows incoming traffic from pods that are labeled with the frontend
role in the example
namespace.
spec:
networkPolicy:
namespaceLabels:
kubernetes.io/metadata.name: example
fromLabels:
role: frontend
Bind applications with operator-managed backing services (.status.binding.name
and .spec.service.bindable
)
The Service Binding Operator enables application developers to bind applications together with operator-managed backing services. If the Service Binding Operator is installed on your cluster, you can bind applications by creating a ServiceBindingRequest
custom resource.
You can configure an Open Liberty application to behave as a Provisioned Service that is defined by the Service Binding Specification. According to the specification, a Provisioned Service resource must define a .status.binding.name
that refers to a Secret
. To expose your application as a Provisioned Service, set the .spec.service.bindable
field to a value of true. The operator creates a binding secret that is named CR_NAME-expose-binding
and adds the host
, port
, protocol
, basePath
, and uri
entries to the secret.
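For example, a minimal sketch that exposes an application as a Provisioned Service might look like the following; the image name is a placeholder and the secure port assumes the default TLS configuration.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  service:
    port: 9443
    bindable: true   # exposes the application as a Provisioned Service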
To override the default values for the entries in the binding secret or to add new entries to the secret, create an override secret that is named CR_NAME-expose-binding-override
and add any entries to the secret. The operator reads the content of the override secret and overrides the default values in the binding secret.
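For example, a sketch of an override secret for a CR named my-app might look like the following; the entry values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-expose-binding-override   # CR_NAME is my-app in this sketch
stringData:
  host: app.mycompany.com                # overrides the default host entry in the binding secret
  uri: https://app.mycompany.com/api     # adds or overrides the uri entry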
After an Open Liberty application is exposed as a Provisioned Service, a service binding request can refer to the application as a backing service.
The instructions that follow show how to bind Open Liberty applications as services or producers to other workloads (such as pods or deployments).
Note
|
Two Open Liberty applications that are deployed through the Open Liberty Operator cannot be bound. |
-
Set up the Service Binding operator to access Open Liberty applications. By default, the Service Binding operator does not have permission to interact with Open Liberty applications that are deployed through the Open Liberty operator. You must create two RoleBindings to give the Service Binding operator view and edit access for Open Liberty applications; a YAML equivalent of these bindings is sketched after these steps.
-
In the Red Hat OpenShift dashboard, navigate to User Management > RoleBindings.
-
Select Create binding.
-
Set the Binding type to
Cluster-wide role binding
(ClusterRoleBinding
). -
Enter a name for the binding. Choose a name that is related to service bindings and view access for Open Liberty applications.
-
For the role name, enter
openlibertyapplications.apps.openliberty.io-v1-view
. -
Set the Subject to
ServiceAccount
. -
A Subject namespace menu appears. Select
openshift-operators
. -
In the Subject name field, enter
service-binding-operator
. -
Click Create.
Now that you have set up the first role binding, navigate to the
RoleBindings
list and click Create binding again. Set up edit access by using the following instructions. -
Set Binding type to
Cluster-wide role binding
(ClusterRoleBinding
). -
Enter a name for the binding. Choose a name that is related to service bindings and edit access for Open Liberty applications.
-
In the Role name field, enter
openlibertyapplications.apps.openliberty.io-v1-edit
. -
Set Subject to
ServiceAccount
. -
In the Subject namespace list, select
openshift-operators
. -
In the Subject name field, type
service-binding-operator
. -
Click Create.
Service bindings from Open Liberty applications (or "services") to pods or deployments (or "workloads") now succeed. After a binding is made, the bound workload restarts or scales to mount the binding secret to
/bindings
in all containers. -
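If you prefer to apply the role bindings from the command line rather than the dashboard, a sketch of the view binding might look like the following; the metadata name is an arbitrary choice, and the edit binding is analogous with the openlibertyapplications.apps.openliberty.io-v1-edit role.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: olo-service-binding-view   # arbitrary name chosen for this sketch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: openlibertyapplications.apps.openliberty.io-v1-view
subjects:
- kind: ServiceAccount
  name: service-binding-operator
  namespace: openshift-operators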
-
Set up a service binding by using the Red Hat method. For more information, see the Red Hat documentation or the Red Hat tutorial.
-
On the Red Hat OpenShift web dashboard, click Administrator in the sidebar and select Developer.
-
In the Topology view for the current namespace, hover over the border of the Open Liberty application to be bound as a service, and drag an arrow to the Pod or Deployment workload. A tooltip appears entitled Create Service Binding.
-
The Create Service Binding window opens. Change the name to a value that is fewer than 63 characters. The Service Binding operator might fail to mount the secret as a volume if the name exceeds 63 characters.
-
Click Create.
-
A sidebar opens. To see the status of the binding, click the name of the secret and then scroll until the status appears.
-
Check the pod/deployment workload and verify that a volume is mounted. You can also open a terminal session into a container and run
ls /bindings
.
-
-
Set up a service binding using the Spec API Tech Preview / Community method.
This method is newer than the Red Hat method but achieves the same results. You must add a label to your Open Liberty application, such as
app=frontend
, if it does not have any unique labels. Set the binding to use a label selector so that the Service Binding operator looks for an Open Liberty application with a specific label.-
Install the Service Binding operator by using the Red Hat OpenShift Operator Catalog.
-
Select Operators > Installed Operators and set the namespace to the same one used by both your Open Liberty application and pod/deployment workload.
-
Open the Service Binding (Spec API Tech Preview) page.
-
Click Create ServiceBinding.
-
Choose a short name for the binding. Names that exceed 63 characters might cause the binding secret volume mount to fail.
-
Expand the Service section.
-
In the Api Version field, enter
apps.openliberty.io/v1
. -
In the Kind field, enter
OpenLibertyApplication
. -
In the Name field, enter the name of your application. You can get this name from the list of applications on the Open Liberty operator page.
-
Expand the Workload section.
-
Set the Api Version field to the value of apiVersion in your target workload YAML. For example, if the workload is a deployment, the value is
apps/v1
. -
Set the Kind field to the value of kind in your target workload YAML. For example, if the workload is a deployment, the value is
Deployment
. -
Expand the Selector subsection, and then expand the Match Expressions subsection.
-
Click Add Match Expression.
-
In the Key field, enter the label key that you set earlier. For example, for the label
app=frontend
, the key is app
.
In the Operator field, enter
Exists
. -
Expand the Values subsection and click Add Value.
-
In the Value field, enter the label value that you set earlier. For example, if using the label
app=frontend
, the value isfrontend
. -
Click Create.
-
Check the Pod/Deployment workload and verify that a volume is mounted, either by scrolling down or by opening a terminal session into a container and running
ls /bindings
.
-
Use .spec.affinity
to constrain a Pod to run only on specified nodes.
To set required labels for pod scheduling on specific nodes, use the .spec.affinity.nodeAffinityLabels
field.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
affinity:
nodeAffinityLabels:
customNodeLabel: label1, label2
customNodeLabel2: label3
The following example requires a large
node type and prefers two zones, which are named zoneA
and zoneB
.
metadata:
name: my-app
namespace: test
spec:
applicationImage: quay.io/my-repo/my-app:1.0
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node.kubernetes.io/instance-type
operator: In
values:
- large
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 60
preference:
matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- zoneA
- weight: 20
preference:
matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- zoneB
Use pod affinity and anti-affinity to constrain which nodes your pod is eligible to be scheduled on, based on the labels of pods that are already running on the node rather than the labels of the node itself.
The following example shows that pod affinity is required and that the pods for Service-A
and Service-B
must be in the same zone. Through pod anti-affinity, it is preferable not to schedule Service-B
and Service-C
on the same host.
metadata:
name: Service-B
namespace: test
spec:
applicationImage: quay.io/my-repo/my-app:1.0
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: service
operator: In
values:
- Service-A
topologyKey: topology.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: service
operator: In
values:
- Service-C
topologyKey: kubernetes.io/hostname
Use the .spec.topologySpreadConstraints
YAML object to specify constraints on how pods of the application instance (and if enabled, the Semeru Cloud Compiler instance) are spread between nodes and zones of the cluster.
Using the .spec.topologySpreadConstraints.constraints
field, you can specify a list of Pod TopologySpreadConstraints to add, as in the following example:
spec:
topologySpreadConstraints:
constraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/instance: example
By default, the operator adds the following Pod topology spread constraints to the application instance’s pods (and if applicable, the Semeru Cloud Compiler instance’s pods). The default behaviour is to constrain the spread of pods that are owned by the same application instance (or Semeru Cloud Compiler generation instance), denoted by <instance name>
with a maxSkew
of 1
.
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/instance: <instance name>
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: ScheduleAnyway
labelSelector:
matchLabels:
app.kubernetes.io/instance: <instance name>
To remove the operator’s default topology spread constraints from above, set the .spec.topologySpreadConstraints.disableOperatorDefaults
flag to true
.
spec:
topologySpreadConstraints:
disableOperatorDefaults: true
Alternatively, override each constraint manually by creating a new TopologySpreadConstraint under .spec.topologySpreadConstraints.constraints
for each topologyKey
you want to modify.
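For example, a sketch that replaces the default hostname constraint with a stricter one, while leaving the default zone constraint in place, might look like the following; the instance label value is a placeholder.
spec:
  topologySpreadConstraints:
    constraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname
      whenUnsatisfiable: DoNotSchedule   # stricter than the operator default of ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/instance: example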
Note
|
When the disableOperatorDefaults: true flag is set and cluster-level default constraints are not enabled, the Kubernetes scheduler uses its own internal default Pod topology spread constraints, as outlined in https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#internal-default-constraints.
|
An administrator can configure TLS certificates to secure applications that run on Kubernetes-based clusters. By default, the operator generates certificates. Instead, an administrator can specify certificates for the Route and Service.
By default, the OpenLibertyApplication
.spec.manageTLS
parameter is set to true
and the operator attempts to generate certificates and mount them to the pod at /etc/x509/certs
. You must enable TLS within your application image to use this capability. Port 9443
is used as the default service port. If your OpenLibertyApplication
custom resource specifies the port explicitly by using the .spec.service.port
field, make sure that its value is set to the secure TLS port. Specifying 9080
as the value for .spec.service.port
can cause the application to not work properly. If .spec.expose
is set to true
, the Route is also configured automatically to enable TLS by using reencrypt
termination. This configuration ensures end-to-end encryption from an external source to the application or pod.
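For example, a sketch that relies on the default TLS management and explicitly sets the secure service port might look like the following; the image name is a placeholder.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  manageTLS: true   # default value, shown here for clarity
  expose: true      # the Route is secured with reencrypt termination
  service:
    port: 9443      # secure TLS port, not 9080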
To change this default configuration, see the following sections.
Note
|
If your application CR sets .spec.manageTLS to false , then the operator does not manage the certificate. You must provide your own TLS certificates and configure probes, monitoring, routes, and other parameters.
|
Note
|
Only available for operator version v1.4.0+ |
DNS can be configured in the OpenLibertyApplication CR by using the .spec.dns.policy
field or the .spec.dns.config
field. The .spec.dns.policy
field is the DNS policy for the application pod and defaults to the ClusterFirst
policy. The .spec.dns.config
field is the DNS config for the application pod.
Kubernetes supports the following pod-specific DNS policies, which can be specified by using the .spec.dns.policy
field:
-
Default
: The pod inherits the name resolution configuration from the node that the pods run on. -
ClusterFirst
: Any DNS query that does not match the configured cluster domain suffix, such as www.kubernetes.io, is forwarded to an upstream name server by the DNS server. Cluster administrators can have extra stub-domain and upstream DNS servers configured. -
ClusterFirstWithHostNet
: Set the DNS policy to ClusterFirstWithHostNet
if the pod runs with hostNetwork
. Pods that run with hostNetwork
and the ClusterFirst
policy behave like the Default
policy.
Note
|
ClusterFirstWithHostNet is not supported on Windows. For more information, see DNS Resolution on Windows.
|
-
None
: A pod can ignore DNS settings from the Kubernetes environment. All DNS settings are provided by using the .spec.dns.config
field of the OpenLibertyApplication CR.
For more information, see Customizing DNS Service.
Note
|
Default is not the default DNS policy. If .spec.dns.policy is not explicitly specified, then ClusterFirst is used.
|
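For example, a sketch that sets the pod DNS policy to Default, so that the pod inherits name resolution from the node it runs on, might look like the following; the image name is a placeholder.
spec:
  applicationImage: quay.io/my-repo/my-app:1.0
  dns:
    policy: Default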
DNS config gives users more control over the DNS settings for an application Pod.
The .spec.dns.config
field is optional and it can work with any .spec.dns.policy
settings. However, when a .spec.dns.policy
is set to None
, the .spec.dns.config
field must be specified.
The following properties are specified within the .spec.dns.config
field:
-
.spec.dns.config.nameservers
: a list of IP addresses that are used as DNS servers for the Pod. Up to three IP addresses can be specified. When .spec.dns.policy
is set to None
, the list must contain at least one IP address; otherwise, this property is optional. The listed servers are combined with the base name servers that are generated from the specified DNS policy, and duplicate addresses are removed. -
.spec.dns.config.searches
: a list of DNS search domains for hostname lookup in the Pod. This property is optional. When specified, the provided list is merged into the base search domain names that are generated from the chosen DNS policy. Duplicate domain names are removed. Kubernetes allows up to 32 search domains. -
.spec.dns.config.options
: an optional list of objects where each object must have a name property and can have a value property. The contents of this property are merged with the options that are generated from the specified DNS policy. Duplicate entries are removed.
spec:
  dns:
    policy: "None"
    config:
      nameservers:
      - 192.0.2.1 # this is an example
      searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
      options:
      - name: ndots
        value: "2"
      - name: edns0
For more information on DNS, see the Kubernetes DNS documentation.
Note
|
Only available for operator version v1.4.0+ |
Node affinity is a property that attracts pods to a set of nodes either as a preference or a hard requirement. However, taints allow a node to repel a set of pods.
Tolerations are applied to pods and allow a scheduler to schedule pods with matching taints. The scheduler also evaluates other parameters as part of its function.
Taints and tolerations work together to help ensure that application pods are not scheduled onto inappropriate nodes. If one or more taints are applied to a node, the node cannot accept any pods that do not tolerate the taints.
Tolerations can be configured in OpenLibertyApplication CR by using the .spec.tolerations
field.
spec:
tolerations:
- key: "key1"
operator: "Equal"
value: "value1"
For more information on taints and toleration, see the Kubernetes taints and toleration documentation.
Note
|
When .spec.manageTLS is set to true (the default), a certificate manager must be installed on the Kubernetes cluster.
|
When certificate manager is installed on the cluster, the service certificate is generated with the cert-manager.io/v1
Certificate
kind. The cert-manager tool enables the operator to automatically provision TLS certificates for pods and routes. Certificates are mounted into containers from a Kubernetes secret so that the certificates are automatically refreshed when they update. For more information about the cert-manager tool, see https://cert-manager.io/.
The operator creates a certificate authority (CA) Issuer
instance to be shared by applications within a single namespace. The secret (or issuer) must be created in the same namespace as the OpenLibertyApplication
. The issuer is used to generate a service certificate for each application that is mounted into the pod. The tls.crt
, tls.key
, and ca.crt
files are mounted to the pod. The location is set by the TLS_DIR
environment variable. The same secret (or issuer) is used for all instances of the application in the namespace.
By default, the operator creates its own certificate authority (CA) for issuing service certificates. However, you can use your own CA certificate. To use your CA certificate, create a Kubernetes secret named olo-custom-ca-tls
. This secret must contain the CA’s tls.crt
and tls.key
files. After this secret is provided, the operator reissues certificates for the service by using the provided CA.
See the following example CA secret:
apiVersion: v1
kind: Secret
metadata:
name: olo-custom-ca-tls
data:
tls.crt: >-
LS0tLS.....
tls.key: >-
LS0tL.....
type: kubernetes.io/tls
You can provide a custom Issuer (for example, certificate authority (CA), or Vault) for the service certificates.
The issuer must be named olo-custom-issuer
.
See the following example custom issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: olo-custom-issuer
spec:
vault:
auth:
tokenSecretRef:
key: token
name: vault-token
path: pki/sign/cert-manager
server: >-
https://vault-internal.vault.svc:8201
If the operator runs on Red Hat OpenShift Container Platform, the operator can automatically generate service certificates with Red Hat OpenShift Service CA.
This method is the default, and is the simplest way to generate certificates if the certificate manager operator is not installed on the cluster.
The tls.crt
and tls.key
files are mounted to the pod and the location is set by the TLS_DIR
environment variable. The Red Hat OpenShift CA certificate is in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
file.
If the certificate manager is installed on the cluster, the certificate manager generates service certificates unless otherwise specified by the application. For example, to force use of the Red Hat OpenShift service CA, add an annotation to the application YAML file with .spec.service.annotations
.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
manageTLS: true
expose: true
service:
annotations:
service.beta.openshift.io/serving-cert-secret-name: my-app-svc-tls-ocp
port: 9443
Specifying certificates for a secured Route and Service (.spec.service.certificateSecretRef
and .spec.route.certificateSecretRef
)
Specify your own certificate for a secured Route
with the OpenLibertyApplication
CR .spec.route.certificateSecretRef
parameter. Specify your own certificate for a secured Service
with the .spec.service.certificateSecretRef
parameter.
The following examples specify certificates for a route.
For .spec.route.certificateSecretRef
, replace my-app-rt-tls
with the name of a secret that contains the TLS key, certificate, and CA certificate to use in the route. It can also contain a destination CA certificate.
spec:
applicationImage: quay.io/my-repo/my-app:1.0
expose: true
route:
host: myapp.mycompany.com
termination: reencrypt
certificateSecretRef: my-app-rt-tls
service:
port: 9443
The following example manually provides a route secret. For the secret, replace my-app-rt-tls
with the name of the secret. For a route, the following keys are valid in the secret.
-
ca.crt
-
destCA.crt
-
tls.crt
-
tls.key
kind: Secret
apiVersion: v1
metadata:
name: my-app-rt-tls
data:
ca.crt: >-
Certificate Authority public certificate...(base64)
tls.crt: >-
Route public certificate...(base64)
tls.key: >-
Route private key...(base64)
destCA.crt: >-
    Pod/Service Certificate Authority certificate (base64). Might be required when using the reencrypt termination policy.
type: kubernetes.io/tls
For an example that uses .spec.route.certificateSecretRef
and makes applications available externally, see the .spec.expose
examples.
-
The corresponding
OpenLibertyApplication
must already have storage for serviceability configured in order to use the day-2 operations. -
The custom resource (CR) for a day-2 operation must be created in the same namespace as the
OpenLibertyApplication
.
To allow auto-discovery of supported day-2 operations from external tools, the following annotation has been added to the OpenLibertyApplication
CRD:
annotations:
openliberty.io/day2operations: OpenLibertyTrace,OpenLibertyDump
Additionally, each day-2 operation CRD has the following annotation, which indicates the Kubernetes Kind
(s) that the operation applies to:
annotations:
day2operation.openliberty.io/targetKinds: Pod
You can request a snapshot of the server status, including different types of server dumps, from an instance of a Liberty server running inside a Pod
by using the Open Liberty Operator and the OpenLibertyDump
custom resource (CR). To use this feature, the OpenLibertyApplication
must already have storage for serviceability configured. Also, the OpenLibertyDump
CR must be created in the same namespace as the Pod
to operate on.
The configurable fields are:
Field |
Description |
podName |
The name of the Pod, which must be in the same namespace as the OpenLibertyDump CR. |
include |
Optional. List of memory dump types to request: thread, heap, system. |
Example including thread dump:
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyDump
metadata:
name: example-dump
spec:
podName: Specify_Pod_Name_Here
include:
- thread
The dump file name is added to the OpenLibertyDump
CR status, and the file is stored in the serviceability
folder with a format such as /serviceability/namespace/pod_name/timestamp.zip
. After the dump has started, the CR cannot be reused to take more dumps. A new CR must be created for each server dump.
You can check the status of a dump operation using the status
field inside the CR YAML. You can also run the command oc get oldump -o wide
to see the status of all dump operations in the current namespace.
Note: System dump might not work on certain Kubernetes versions, such as OpenShift 4.x
You can request server traces from an instance of a Liberty server running inside a Pod
by using the Open Liberty Operator and the OpenLibertyTrace
custom resource (CR). To use this feature, the OpenLibertyApplication
must already have storage for serviceability configured. Also, the OpenLibertyTrace
CR must be created in the same namespace as the Pod
to operate on.
The configurable fields are:
Field |
Description |
podName |
The name of the Pod, which must be in the same namespace as the OpenLibertyTrace CR. |
traceSpecification |
The trace string to be used to selectively enable trace. The default is *=info. |
maxFileSize |
The maximum size (in MB) that a log file can reach before it is rolled. To disable this attribute, set the value to 0. By default, the value is 20. This setting does not apply to the console.log file. |
maxFiles |
If an enforced maximum file size exists, this setting is used to determine how many of each of the log files are kept. This setting also applies to the number of exception logs that summarize exceptions that occurred on any particular day. |
disable |
Set to true to stop tracing. |
Example:
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyTrace
metadata:
name: example-trace
spec:
podName: Specify_Pod_Name_Here
traceSpecification: "*=info:com.ibm.ws.webcontainer*=all"
maxFileSize: 20
maxFiles: 5
Generated trace files, along with messages.log files, are stored in the serviceability
folder with a format such as /serviceability/namespace/pod_name/timestamp.zip
.
After the trace has started, it can be stopped by setting the .spec.disable
field to true
. Deleting the CR also stops the tracing. Changing the podName
first stops the tracing on the old Pod before enabling traces on the new Pod.
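For example, a sketch that stops a previously started trace might look like the following; the pod name is a placeholder.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyTrace
metadata:
  name: example-trace
spec:
  podName: Specify_Pod_Name_Here
  traceSpecification: "*=info:com.ibm.ws.webcontainer*=all"
  disable: true   # stops the tracing that this CR enabled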
You can check the status of a trace operation using the status
field inside the CR YAML. You can also run the command oc get oltrace -o wide
to see the status of all trace operations in the current namespace.
Important: The Liberty server must allow configuration dropins. The following configuration must not be set on the server: <config updateTrigger="disabled"/>
. Otherwise, the OpenLibertyTrace operation does not work on the server.
Note: The operator does not monitor the Pods. If the Pod is restarted or deleted after the trace is enabled, the tracing is not automatically re-enabled when the Pod comes back up. In that case, the status of the trace operation might not correctly report whether the trace is enabled.
See the troubleshooting guide for information on how to investigate and resolve deployment problems.