The OpenTelemetry Operator is an implementation of a Kubernetes Operator.
The operator manages:
- OpenTelemetry Collector
- auto-instrumentation of the workloads using OpenTelemetry instrumentation libraries
To install the operator in an existing cluster, make sure you have cert-manager installed and run:

```sh
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```
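If you want to confirm that the prerequisites and the operator are up before proceeding, a quick check could look like the following; the namespace names assume the default manifests for cert-manager and the operator:

```sh
# Namespaces below are the defaults used by the upstream manifests; adjust if you installed elsewhere.
kubectl get pods -n cert-manager
kubectl get pods -n opentelemetry-operator-system
```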
Once the `opentelemetry-operator` deployment is ready, create an OpenTelemetry Collector (otelcol) instance, like:
```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:

    exporters:
      logging:

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
EOF
```
WARNING: Until the OpenTelemetry Collector format is stable, changes may be required in the above example to remain compatible with the latest version of the OpenTelemetry Collector image being referenced.
This will create an OpenTelemetry Collector instance named `simplest`, exposing the OTLP ports (gRPC and HTTP) to consume spans from your instrumented applications and exporting those spans via the `logging` exporter, which writes the spans to the console (`stdout`) of the OpenTelemetry Collector instance that receives them.
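To confirm that the operator reconciled the instance, you can list the custom resources and the workload it manages; the Deployment name below assumes the operator's usual `<name>-collector` naming convention:

```sh
kubectl get otelcol                        # short name for opentelemetrycollectors
kubectl get deployment simplest-collector  # assumes the <name>-collector naming convention
```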
The `config` node holds the YAML that should be passed down as-is to the underlying OpenTelemetry Collector instances. Refer to the OpenTelemetry Collector documentation for a reference of the possible entries.
At this point, the Operator does not validate the contents of the configuration file: if the configuration is invalid, the instance will still be created but the underlying OpenTelemetry Collector might crash.
The `CustomResource` for the `OpenTelemetryCollector` exposes a property named `.Spec.Mode`, which can be used to specify whether the collector should run as a `DaemonSet`, `Sidecar`, or `Deployment` (default). Look at this sample for reference.
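For instance, a minimal sketch of a DaemonSet-mode instance (the name is arbitrary, and the config simply reuses the `simplest` example above) could look like:

```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: daemonset-example   # arbitrary name for illustration
spec:
  mode: daemonset           # instead of the default "deployment"
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [logging]
EOF
```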
A sidecar with the OpenTelemetry Collector can be injected into pod-based workloads by setting the pod annotation `sidecar.opentelemetry.io/inject` to either `"true"`, or to the name of a concrete `OpenTelemetryCollector` from the same namespace, like in the following example:
```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: sidecar-for-my-app
spec:
  mode: sidecar
  config: |
    receivers:
      jaeger:
        protocols:
          grpc:
      otlp:
        protocols:
          grpc:
          http:
    processors:

    exporters:
      logging:

    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          processors: []
          exporters: [logging]
EOF
```
```yaml
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    sidecar.opentelemetry.io/inject: "true"
spec:
  containers:
  - name: myapp
    image: jaegertracing/vertx-create-span:operator-e2e-tests
    ports:
    - containerPort: 8080
      protocol: TCP
EOF
```
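To verify the injection, you can inspect the pod's containers; the sidecar container name in the comment is an assumption based on the operator's typical behavior, not something guaranteed by this document:

```sh
kubectl get pod myapp -o jsonpath='{.spec.containers[*].name}'
# Expect the application container plus an injected collector container (commonly named otc-container).
```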
When there are multiple `OpenTelemetryCollector` resources with a mode set to `Sidecar` in the same namespace, a concrete name should be used. When there's only one `Sidecar` instance in the same namespace, this instance is used when the annotation is set to `"true"`.
The annotation value can come either from the namespace or from the pod. The most specific annotation wins, in this order (a namespace-level example follows this list):

- the pod annotation is used when it's set to a concrete instance name or to `"false"`
- the namespace annotation is used when the pod annotation is either absent or set to `"true"`, and the namespace is set to a concrete instance or to `"false"`
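As a sketch, annotating a namespace (hypothetical name `my-namespace`) so that every pod in it gets the sidecar from the `sidecar-for-my-app` instance could look like:

```sh
# Namespace name is hypothetical; the value refers to the OpenTelemetryCollector created above.
kubectl annotate namespace my-namespace sidecar.opentelemetry.io/inject=sidecar-for-my-app
```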
When using a pod-based workload, such as `Deployment` or `StatefulSet`, make sure to add the annotation to the `PodTemplate` part, like:
```yaml
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
  annotations:
    sidecar.opentelemetry.io/inject: "true" # WRONG
spec:
  selector:
    matchLabels:
      app: my-app
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.opentelemetry.io/inject: "true" # CORRECT
    spec:
      containers:
      - name: myapp
        image: jaegertracing/vertx-create-span:operator-e2e-tests
        ports:
        - containerPort: 8080
          protocol: TCP
EOF
```
The operator can inject and configure OpenTelemetry auto-instrumentation libraries. At this moment, the operator can inject only OpenTelemetry Java auto-instrumentation.
The injection of the Java agent can be enabled by adding an annotation to the namespace, so that all pods within that namespace will get the instrumentation, or by adding the annotation to individual PodSpec objects, available as part of Deployment, StatefulSet, and other resources:

```yaml
instrumentation.opentelemetry.io/inject-java: "true"
```
The value can be:

- `"true"` - inject and use the `Instrumentation` resource from the namespace
- `"java-instrumentation"` - the name of an `Instrumentation` CR instance
- `"false"` - do not inject
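For example, a sketch of the namespace-level option mentioned above (the namespace name is hypothetical), which makes the operator inject the agent into every pod created in that namespace:

```sh
# Hypothetical namespace; "true" picks up the Instrumentation resource from that namespace.
kubectl annotate namespace my-namespace instrumentation.opentelemetry.io/inject-java=true
```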
In addition to the annotation, the following CR has to be created. The `Instrumentation` resource provides configuration for the OpenTelemetry SDK and auto-instrumentation.
```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4318
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest # <1>
EOF
```
- Container image with OpenTelemetry Java auto-instrumentation. The image must contain the Java agent JAR `/javaagent.jar`, and the operator will copy it to a shared volume mounted to the application container.
The above CR can be queried by `kubectl get otelinst`.
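As a sketch of opting a workload in, you could add the annotation to an existing Deployment's pod template; the Deployment name here is hypothetical, and the annotation value names the `Instrumentation` CR created above:

```sh
# "my-app" is a hypothetical Deployment; the annotation goes on the pod template, not the Deployment itself.
kubectl patch deployment my-app --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"instrumentation.opentelemetry.io/inject-java":"java-instrumentation"}}}}}'
```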
The OpenTelemetry Operator follows the same versioning as the operand (OpenTelemetry Collector) up to the minor part of the version. For example, the OpenTelemetry Operator v0.18.1 tracks OpenTelemetry Collector 0.18.0. The patch part of the version indicates the patch level of the operator itself, not that of OpenTelemetry Collector. Whenever a new patch version is released for OpenTelemetry Collector, we'll release a new patch version of the operator.
We strive to be compatible with the widest possible range of Kubernetes versions, but some changes to Kubernetes itself require us to break compatibility with older Kubernetes versions, be it because of code incompatibilities or in the name of maintainability.
Our promise is that we'll follow what's common practice in the Kubernetes world and support N-2 versions, based on the release date of the OpenTelemetry Operator.
For instance, when we released v0.27.0, the latest Kubernetes version was v1.21.1. As such, the minimum version of Kubernetes we support for OpenTelemetry Operator v0.27.0 is v1.19 and we tested it with up to 1.21.
The OpenTelemetry Operator might work on versions outside of the given range, but when opening new issues, please make sure to test your scenario on a supported version.
| OpenTelemetry Operator | Kubernetes     |
|------------------------|----------------|
| v0.37.1                | v1.20 to v1.22 |
| v0.37.0                | v1.20 to v1.22 |
| v0.36.0                | v1.20 to v1.22 |
| v0.35.0                | v1.20 to v1.22 |
| v0.34.0                | v1.20 to v1.22 |
| v0.33.0                | v1.20 to v1.22 |
| v0.32.0 (skipped)      | n/a            |
| v0.31.0                | v1.19 to v1.21 |
| v0.30.0                | v1.19 to v1.21 |
| v0.29.0                | v1.19 to v1.21 |
| v0.28.0                | v1.19 to v1.21 |
| v0.27.0                | v1.19 to v1.21 |
| v0.26.0                | v1.18 to v1.20 |
Please see CONTRIBUTING.md.
Approvers (@open-telemetry/operator-approvers):
Maintainers (@open-telemetry/operator-maintainers):
- @open-telemetry/collector-maintainers
- Juraci Paixão Kröhling, Grafana Labs
- Vineeth Pothulapati, Timescale
Learn more about roles in the community repository.
Thanks to all the people who already contributed!