Showing 22 changed files with 1,684 additions and 0 deletions.
29 changes: 29 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/README.rst
@@ -0,0 +1,29 @@
OpenTelemetry Prometheus Remote Write Exporter
==============================================

|pypi|

.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-prometheus-remote-write.svg
   :target: https://pypi.org/project/opentelemetry-exporter-prometheus-remote-write/

This package contains an exporter to send metrics from the OpenTelemetry Python SDK directly to a Prometheus Remote Write integrated backend
(such as Cortex or Thanos) without having to run an instance of the Prometheus server.


Installation
------------

::

    pip install opentelemetry-exporter-prometheus-remote-write


.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
.. _Prometheus Remote Write integrated backend: https://prometheus.io/docs/operating/integrations/


References
----------

* `OpenTelemetry Project <https://opentelemetry.io/>`_
* `Prometheus Remote Write Integration <https://prometheus.io/docs/operating/integrations/>`_
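A minimal usage sketch of the exporter, mirroring the sample app added later in this commit; the endpoint URL, tenant header, and metric names are placeholder values rather than anything mandated by the package:

```python
# Sketch: wire the Remote Write exporter into the OpenTelemetry SDK.
# The endpoint and X-Scope-Org-ID header below are placeholders for your own backend.
from opentelemetry import metrics
from opentelemetry.exporter.prometheus_remote_write import (
    PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

exporter = PrometheusRemoteWriteMetricsExporter(
    endpoint="http://localhost:9009/api/prom/push",  # e.g. a local Cortex instance
    headers={"X-Scope-Org-ID": "demo"},              # only needed for multi-tenant backends
)
reader = PeriodicExportingMetricReader(exporter, export_interval_millis=5000)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter(__name__)
counter = meter.create_counter("demo_requests", unit="1", description="demo counter")
counter.add(1, {"environment": "demo"})
```

The reader pushes accumulated metrics to the Remote Write endpoint on the configured interval, so no Prometheus server or scrape configuration is involved.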
11 changes: 11 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/example/Dockerfile
@@ -0,0 +1,11 @@
FROM python:3.8

RUN apt-get update -y && apt-get install libsnappy-dev -y

WORKDIR /code
COPY . .

RUN pip install -e .
RUN pip install -r ./examples/requirements.txt

CMD ["python", "./examples/sampleapp.py"]
42 changes: 42 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/example/README.md
@@ -0,0 +1,42 @@
# Prometheus Remote Write Exporter Example
This example uses [Docker Compose](https://docs.docker.com/compose/) to set up:

1. A Python program that creates 5 instruments with 5 unique aggregators and a randomized load generator
2. An instance of [Cortex](https://cortexmetrics.io/) to receive the metrics data
3. An instance of [Grafana](https://grafana.com/) to visualize the exported data

## Requirements
* Have Docker Compose [installed](https://docs.docker.com/compose/install/)

*Users do not need to install Python, as the app runs in the Docker container.*

## Instructions
1. Run `docker-compose up -d` in the `examples/` directory

   The `-d` flag runs all services in detached mode, freeing up your terminal session. Because of this, no logs are shown; you can follow a service's logs manually with `docker logs ${CONTAINER_ID} --follow`

2. Log into the Grafana instance at [http://localhost:3000](http://localhost:3000)
   * Login credentials are `username: admin` and `password: admin`
   * There may be an additional screen prompting you to set a new password; this is optional and can be skipped

3. Navigate to the `Data Sources` page
   * Look for a gear icon on the left sidebar and select `Data Sources`

4. Add a new Prometheus Data Source
   * Use `http://cortex:9009/api/prom` as the URL
   * (OPTIONAL) Set the scrape interval to `2s` to make updates appear quickly
   * Click `Save & Test`

5. Go to `Metrics Explore` to query metrics
   * Look for a compass icon on the left sidebar
   * Click `Metrics` for a dropdown list of all the available metrics
   * (OPTIONAL) Adjust the time range by clicking the `Last 6 hours` button on the upper right side of the graph
   * (OPTIONAL) Set up auto-refresh by selecting an option under the dropdown next to the refresh button on the upper right side of the graph
   * Click the refresh button and data should show up on the graph

6. Shut down the services when finished
   * Run `docker-compose down` in the `examples/` directory
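Beyond the Grafana UI, one quick way to confirm that Cortex is actually receiving data is to hit its Prometheus-compatible query API from the host. This is only a sketch under two assumptions: that Cortex serves the standard `/api/v1/query` endpoint beneath the `/api/prom` prefix used for the data source above, and that the `requests` counter created by `sampleapp.py` is stored under that name (the exporter may normalize it differently):

```python
# Sanity-check sketch (assumptions: Cortex exposes the Prometheus query API
# under the /api/prom prefix mapped to localhost:9009, and the sample app's
# "requests" counter is stored under that name).
import requests

resp = requests.get(
    "http://localhost:9009/api/prom/api/v1/query",
    params={"query": "requests"},
    timeout=5,
)
resp.raise_for_status()
for series in resp.json().get("data", {}).get("result", []):
    print(series["metric"], series["value"])
```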
101 changes: 101 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/example/cortex-config.yml
@@ -0,0 +1,101 @@
# This Cortex Config is copied from the Cortex Project documentation
# Source: https://github.com/cortexproject/cortex/blob/master/docs/configuration/single-process-config.yaml

# Configuration for running Cortex in single-process mode.
# This configuration should not be used in production.
# It is only for getting started and development.

# Disable the requirement that every request to Cortex has a
# X-Scope-OrgID header. `fake` will be substituted in instead.
# pylint: skip-file
auth_enabled: false

server:
  http_listen_port: 9009

  # Configure the server to allow messages up to 100MB.
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600
  grpc_server_max_concurrent_streams: 1000

distributor:
  shard_by_all_labels: true
  pool:
    health_check_ingesters: true

ingester_client:
  grpc_client_config:
    # Configure the client to allow messages up to 100MB.
    max_recv_msg_size: 104857600
    max_send_msg_size: 104857600
    use_gzip_compression: true

ingester:
  # We want our ingesters to flush chunks at the same time to optimise
  # deduplication opportunities.
  spread_flushes: true
  chunk_age_jitter: 0

  walconfig:
    wal_enabled: true
    recover_from_wal: true
    wal_dir: /tmp/cortex/wal

  lifecycler:
    # The address to advertise for this ingester. Will be autodiscovered by
    # looking up address on eth0 or en0; can be specified if this fails.
    # address: 127.0.0.1

    # We want to start immediately and flush on shutdown.
    join_after: 0
    min_ready_duration: 0s
    final_sleep: 0s
    num_tokens: 512
    tokens_file_path: /tmp/cortex/wal/tokens

    # Use an in memory ring store, so we don't need to launch a Consul.
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1

# Use local storage - BoltDB for the index, and the filesystem
# for the chunks.
schema:
  configs:
  - from: 2019-07-29
    store: boltdb
    object_store: filesystem
    schema: v10
    index:
      prefix: index_
      period: 1w

storage:
  boltdb:
    directory: /tmp/cortex/index

  filesystem:
    directory: /tmp/cortex/chunks

  delete_store:
    store: boltdb

purger:
  object_store_type: filesystem

frontend_worker:
  # Configure the frontend worker in the querier to match worker count
  # to max_concurrent on the queriers.
  match_max_concurrent: true

# Configure the ruler to scan the /tmp/cortex/rules directory for prometheus
# rules: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules
ruler:
  enable_api: true
  enable_sharding: false
  storage:
    type: local
    local:
      directory: /tmp/cortex/rules
33 changes: 33 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/example/docker-compose.yml
@@ -0,0 +1,33 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

version: "3.8"

services:
  cortex:
    image: quay.io/cortexproject/cortex:v1.5.0
    command:
      - -config.file=./config/cortex-config.yml
    volumes:
      - ./cortex-config.yml:/config/cortex-config.yml:ro
    ports:
      - 9009:9009
  grafana:
    image: grafana/grafana:latest
    ports:
      - 3000:3000
  sample_app:
    build:
      context: ../
      dockerfile: ./examples/Dockerfile
7 changes: 7 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/example/requirements.txt
@@ -0,0 +1,7 @@
psutil
protobuf>=3.13.0
requests>=2.25.0
python-snappy>=0.5.4
opentelemetry-api
opentelemetry-sdk
opentelemetry-proto
114 changes: 114 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/example/sampleapp.py
@@ -0,0 +1,114 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import random
import sys
import time
from logging import INFO

import psutil

from opentelemetry import metrics
from opentelemetry.exporter.prometheus_remote_write import (
    PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.metrics import Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)


testing_labels = {"environment": "testing"}

exporter = PrometheusRemoteWriteMetricsExporter(
    endpoint="http://cortex:9009/api/prom/push",
    headers={"X-Scope-Org-ID": "5"},
)
reader = PeriodicExportingMetricReader(exporter, 1000)
provider = MeterProvider(metric_readers=[reader])
metrics.set_meter_provider(provider)
meter = metrics.get_meter(__name__)


# Callback to gather cpu usage
def get_cpu_usage_callback(observer):
    for (number, percent) in enumerate(psutil.cpu_percent(percpu=True)):
        labels = {"cpu_number": str(number)}
        yield Observation(percent, labels)


# Callback to gather RAM usage
def get_ram_usage_callback(observer):
    ram_percent = psutil.virtual_memory().percent
    yield Observation(ram_percent, {})


requests_counter = meter.create_counter(
    name="requests",
    description="number of requests",
    unit="1",
)

request_min_max = meter.create_counter(
    name="requests_min_max",
    description="min max sum count of requests",
    unit="1",
)

request_last_value = meter.create_counter(
    name="requests_last_value",
    description="last value number of requests",
    unit="1",
)

requests_active = meter.create_up_down_counter(
    name="requests_active",
    description="number of active requests",
    unit="1",
)

meter.create_observable_counter(
    callbacks=[get_ram_usage_callback],
    name="ram_usage",
    description="ram usage",
    unit="1",
)

meter.create_observable_up_down_counter(
    callbacks=[get_cpu_usage_callback],
    name="cpu_percent",
    description="per-cpu usage",
    unit="1",
)

request_latency = meter.create_histogram("request_latency")

# Load generator
num = random.randint(0, 1000)
while True:
    # counters
    requests_counter.add(num % 131 + 200, testing_labels)
    request_min_max.add(num % 181 + 200, testing_labels)
    request_last_value.add(num % 101 + 200, testing_labels)

    # updown counter
    requests_active.add(num % 7231 + 200, testing_labels)

    request_latency.record(num % 92, testing_labels)
    logger.log(level=INFO, msg="completed metrics collection cycle")
    time.sleep(1)
    num += 9791
1 change: 1 addition & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/proto/.gitignore
@@ -0,0 +1 @@
opentelemetry
3 changes: 3 additions & 0 deletions
exporter/opentelemetry-exporter-prometheus-remote-write/proto/README.md
@@ -0,0 +1,3 @@
## Instructions
1. Install the protobuf tools. You can use your package manager or download them from [GitHub](https://github.com/protocolbuffers/protobuf/releases/tag/v21.7)
2. Run `generate-proto-py.sh` from inside the `proto/` directory