Custom Metrics #58 (merged 2 commits on Jul 1, 2019)
deploy/README.md: 46 additions, 0 deletions
@@ -16,6 +16,7 @@ __NOTE__ This page describes preview software. If you have comments or issues, p
* [Step 2: Configure Prometheus](#step-2-configure-prometheus)
* [Filter metrics](#filter-metrics)
* [Trim and relabel metrics](#trim-and-relabel-metrics)
* [Custom metrics](#custom-metrics)
* [Tear down](#tear-down)

### [Debugging the Kubernetes Collection Pipeline](#debugging-the-kubernetes-collection-pipeline-1)
@@ -265,6 +266,51 @@ This filter will:
* Rename the label/metadata `container_name` to `container`, and `pod_name` to `pod`.
* Only apply to metrics with the `kube-system` namespace

### Custom Metrics

If you have custom metrics you'd like to send to Sumo via Prometheus, you need to expose a `/metrics` endpoint in the Prometheus exposition format and instruct Prometheus, via a ServiceMonitor, to pull data from that endpoint. In this section, we'll walk through collecting custom metrics with Prometheus.

#### Step 1: Expose a `/metrics` endpoint on your service
The community has built many client libraries that expose these endpoints for you, but any output that follows the Prometheus format will work; see the list of [client libraries](https://prometheus.io/docs/instrumenting/clientlibs). Manually verify that your metrics are exposed by hitting the metrics endpoint and checking that the output follows the [Prometheus exposition format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md).
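As a sketch of what an exposition endpoint involves, here is a minimal hand-rolled `/metrics` handler using only the Python standard library. In a real service you would normally use one of the client libraries above; the metric name `myapp_requests_total` and port 8000 are illustrative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_metrics(request_count):
    """Render one counter in the Prometheus text exposition format."""
    return (
        "# HELP myapp_requests_total Total requests handled.\n"
        "# TYPE myapp_requests_total counter\n"
        f"myapp_requests_total {request_count}\n"
    )

class MetricsHandler(BaseHTTPRequestHandler):
    request_count = 0  # illustrative counter, incremented per scrape

    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        type(self).request_count += 1
        body = render_metrics(type(self).request_count).encode()
        self.send_response(200)
        # Content type Prometheus expects for the text format.
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    # Blocks forever; run this inside your service's process.
    HTTPServer(("", port), MetricsHandler).serve_forever()
```

With the server running, `curl http://localhost:8000/metrics` should return the counter in plain text, which is exactly what Prometheus will scrape.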

#### Step 2: Set up a ServiceMonitor so that Prometheus pulls the data

A ServiceMonitor tells Prometheus which endpoints and sources to pull metrics from. To define one, create a YAML file on disk with information templated as follows:

```
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  labels:
    team: frontend
spec:
  selector:
    matchLabels:
      app: example-app
  endpoints:
  - port: web
```

Replace `name` with a name that relates to your service, and set `matchLabels` to match the labels on the service fronting the pods you want this ServiceMonitor to scrape. By default it scrapes the `/metrics` endpoint; use the `port` field to configure which port gets scraped.
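If your service exposes metrics on a non-default path, or you want to control how often it is scraped, the endpoint entry can carry extra fields (field names per the Prometheus Operator API; the values here are illustrative):

```
  endpoints:
  - port: web
    path: /custom-metrics
    interval: 30s
```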

Once you have created this YAML file, run `kubectl create -f name_of_yaml.yaml -n sumologic`. This will create the ServiceMonitor in the `sumologic` namespace.

#### Step 3: Update the overrides.yaml file to forward the metrics to Sumo.
The `overrides.yaml` file controls which metrics get forwarded to Sumo Logic. To get your custom metrics flowing into Sumo Logic, update `overrides.yaml` to include a rule that forwards them. Here is an example addition to `overrides.yaml` that will forward metrics to Sumo:

```
- url: http://fluentd:9888/prometheus.metrics
  writeRelabelConfigs:
  - action: keep
    regex: <YOUR_CUSTOM_MATCHER>
    sourceLabels: [__name__]
```
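For instance, if all of your custom metric names share a common prefix, a concrete matcher might look like this (the `myapp_` prefix is illustrative):

```
- url: http://fluentd:9888/prometheus.metrics
  writeRelabelConfigs:
  - action: keep
    regex: myapp_.*
    sourceLabels: [__name__]
```

Any metric whose name matches the regex is kept and forwarded on this remote write; everything else is dropped.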

After adding this to the YAML, run `helm upgrade prometheus-operator stable/prometheus-operator -f overrides.yaml` to upgrade your `prometheus-operator` deployment.

If all goes well, you should now have your custom metrics piping into Sumo Logic.

## Step 3: Deploy FluentBit

In this step, you will deploy FluentBit to forward logs to Fluentd.