chore: Fix readme linter errors for input plugins E-L (#11214)
reimda authored and MyaLongmire committed Jul 6, 2022
1 parent c1915c5 commit 9573d81
Showing 54 changed files with 531 additions and 309 deletions.
11 changes: 5 additions & 6 deletions plugins/inputs/ecs/README.md
@@ -1,15 +1,14 @@
# Amazon ECS Input Plugin

The Amazon ECS input plugin (Fargate compatible) uses the Amazon ECS metadata
and stats [v2][task-metadata-endpoint-v2] or [v3][task-metadata-endpoint-v3] API
endpoints to gather stats on running containers in a Task.

The telegraf container must be run in the same Task as the workload it is
inspecting.

This is similar to (and reuses a few pieces of) the [Docker][docker-input] input
plugin, with some ECS specific modifications for AWS metadata and stats formats.

The amazon-ecs-agent (though it _is_ a container running on the host) is not
present in the metadata/stats endpoints.
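
For orientation, a minimal configuration might look like the sketch below; the
option names and the auto-detection behaviour described in the comments are
assumptions based on the plugin's sample configuration, not a verified
reference:

```toml
# Sketch of an ECS input configuration; option names are assumed, not verified.
[[inputs.ecs]]
  ## Assumed behaviour: when endpoint_url is left empty, the task metadata
  ## endpoint is discovered from the environment (v3) or the fixed v2 address.
  endpoint_url = ""
  timeout = "5s"
```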
34 changes: 20 additions & 14 deletions plugins/inputs/elasticsearch/README.md
@@ -1,25 +1,31 @@
# Elasticsearch Input Plugin

The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
[Node Stats][1] and optionally [Cluster-Health][2] metrics.

In addition, the following optional queries are only made by the master node:
[Cluster Stats][3], [Indices Stats][4] and [Shard Stats][5].

Specific Elasticsearch endpoints that are queried:

- Node: either /_nodes/stats or /_nodes/_local/stats depending on 'local'
configuration setting
- Cluster Health: /_cluster/health?level=indices
- Cluster Stats: /_cluster/stats
- Indices Stats: /_all/_stats
- Shard Stats: /_all/_stats?level=shards

Note that specific statistics information can change between Elasticsearch
versions. In general, this plugin attempts to stay as version-generic as
possible by tagging high-level categories only and using a generic JSON parser
to build unique field names from whatever statistics names are provided at the
mid-low level.
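
To make the endpoint selection concrete, a configuration sketch along these
lines shows which settings drive which queries. The option names and their
mapping to endpoints are assumed from the sample configuration:

```toml
# Sketch: assumed toggles controlling which of the endpoints above are queried.
[[inputs.elasticsearch]]
  servers = ["http://localhost:9200"]
  ## assumed: true queries /_nodes/_local/stats, false queries /_nodes/stats
  local = true
  ## assumed: also query /_cluster/health?level=indices
  cluster_health = false
  ## assumed: also query /_cluster/stats plus the indices/shard statistics
  ## (master node only)
  cluster_stats = false
```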

[1]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html
[2]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html
[3]: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html
[4]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html
[5]: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-stats.html

## Configuration

48 changes: 33 additions & 15 deletions plugins/inputs/elasticsearch_query/README.md
@@ -1,17 +1,19 @@
# Elasticsearch Query Input Plugin

This [elasticsearch](https://www.elastic.co/) query plugin queries endpoints to
obtain metrics from data stored in an Elasticsearch cluster.

The following is supported:

- return number of hits for a search query
- calculate the avg/max/min/sum for a numeric field, filtered by a query,
  aggregated per tag
- count number of terms for a particular field

## Elasticsearch Support

This plugin is tested against Elasticsearch 5.x and 6.x releases. Currently it
is known to break on 7.x or greater versions.

## Configuration

@@ -91,7 +93,8 @@ Currently it is known to break on 7.x or greater versions.

## Examples

Please note that `[[inputs.elasticsearch_query]]` is still required for all of
the examples below.

### Search the average response time, per URI and per response status code

@@ -151,17 +154,32 @@

### Required parameters

- `measurement_name`: The target measurement in which to store the results of
  the aggregation query.
- `index`: The index name to query on Elasticsearch
- `query_period`: The time window to query (e.g. "1m" to query documents from
  the last minute). Normally this should be set to the same value as the
  collection interval.
- `date_field`: The date/time field in the Elasticsearch index

### Optional parameters

- `date_field_custom_format`: Not needed if using one of the built-in date/time
  formats of Elasticsearch, but may be required if using a custom date/time
  format. The format syntax uses the [Joda date format][joda].
- `filter_query`: Lucene query to filter the results (default: "\*")
- `metric_fields`: The list of fields to perform metric aggregation (these must
be indexed as numeric fields)
- `metric_function`: The single-value metric aggregation function to be
  performed on the `metric_fields` defined. Currently supported aggregations
  are "avg", "min", "max", "sum" (see the [aggregation docs][agg]).
- `tags`: The list of fields to be used as tags (these must be indexed as
non-analyzed fields). A "terms aggregation" will be done per tag defined
- `include_missing_tag`: Set to true to include documents where the tag(s)
  specified above do not exist. (If false, documents without the specified tag
  field will be ignored in `doc_count` and in the metric aggregation)
- `missing_tag_value`: The value of the tag that will be set for documents in
which the tag field does not exist. Only used when `include_missing_tag` is
set to `true`.

[joda]: https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-aggregations-bucket-daterange-aggregation.html#date-format-pattern
[agg]: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-metrics.html
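
Putting the parameters together, a configuration could look roughly like the
sketch below. The `urls` option and the nested `aggregation` table layout are
assumptions based on the sample configuration, not a verified reference:

```toml
# Sketch: average a numeric field per tag over the last minute.
[[inputs.elasticsearch_query]]
  urls = ["http://localhost:9200"]

  [[inputs.elasticsearch_query.aggregation]]
    measurement_name = "measurement"
    index = "index-*"
    date_field = "@timestamp"
    query_period = "1m"
    filter_query = "*"
    metric_fields = ["value"]
    metric_function = "avg"
    tags = ["host.keyword"]
```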
3 changes: 2 additions & 1 deletion plugins/inputs/ethtool/README.md
@@ -1,6 +1,7 @@
# Ethtool Input Plugin

The ethtool input plugin pulls ethernet device stats. Fields pulled will depend
on the network device and driver.

## Configuration

9 changes: 6 additions & 3 deletions plugins/inputs/eventhub_consumer/README.md
@@ -6,9 +6,12 @@ This plugin provides a consumer for use with Azure Event Hubs and Azure IoT Hub.

The main focus for development of this plugin is Azure IoT hub:

1. Create an Azure IoT Hub by following any of the guides provided here: [Azure
IoT Hub](https://docs.microsoft.com/en-us/azure/iot-hub/)
2. Create a device, for example a [simulated Raspberry
Pi](https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-raspberry-pi-web-simulator-get-started)
3. The connection string needed for the plugin is located under *Shared access
   policies*; both the *iothubowner* and *service* policies should work (see
   the sketch below)
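
A minimal sketch of how that connection string is used; the `connection_string`
and `data_format` option names are assumed from the sample configuration, and
every bracketed value is a placeholder:

```toml
# Sketch: consume from the IoT Hub's Event Hub-compatible endpoint.
[[inputs.eventhub_consumer]]
  ## Connection string from "Shared access policies"; every bracketed value is
  ## a placeholder.
  connection_string = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=service;SharedAccessKey=<key>;EntityPath=<event-hub-name>"
  data_format = "json"
```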

## Configuration

23 changes: 11 additions & 12 deletions plugins/inputs/example/README.md
@@ -4,33 +4,32 @@ The `example` plugin gathers metrics about example things. This description
explains at a high level what the plugin does and provides links to where
additional information can be found.

Telegraf minimum version: Telegraf x.x; Plugin minimum tested version: x.x

## Configuration

This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.

```toml @sample.conf
# This is an example plugin
[[inputs.example]]
example_option = "example_value"
```

Running `telegraf --usage <plugin-name>` also gives the sample TOML
configuration.

### example_option

A more in-depth description of an option can be provided here, but only do so if
the option cannot be fully described in the sample config.

## Metrics

Here you should add an optional description and links to where the user can get
more information about the measurements.

If the output is determined dynamically based on the input source, or there are
more metrics than can reasonably be listed, describe how the input is mapped to
the output.

- measurement1
- tags:
11 changes: 7 additions & 4 deletions plugins/inputs/exec/README.md
@@ -1,7 +1,8 @@
# Exec Input Plugin

The `exec` plugin executes all the `commands` in parallel on every interval and
parses metrics from their output in any one of the accepted [Input Data
Formats](../../../docs/DATA_FORMATS_INPUT.md).

This plugin can be used to poll for custom metrics from any source.

@@ -41,14 +42,16 @@ scripts that match the pattern will cause them to be picked up immediately.

## Example

This script produces static values; since no timestamp is specified, the values
are at the current time.

```sh
#!/bin/sh
echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
```

It can be paired with the following configuration and will be run at the
`interval` of the agent.

```toml
[[inputs.exec]]
14 changes: 7 additions & 7 deletions plugins/inputs/execd/README.md
@@ -1,10 +1,10 @@
# Execd Input Plugin

The `execd` plugin runs an external program as a long-running daemon. The
program must output metrics in any one of the accepted [Input Data Formats][]
on its STDOUT and is expected to stay running. If you'd instead like the
process to collect metrics and then exit, check out the [inputs.exec][] plugin.

The `signal` can be configured to send a signal to the running daemon on each
collection interval. This is used when you want to have Telegraf notify the
@@ -125,5 +125,5 @@ end
signal = "none"
```

[Input Data Formats]: ../../../docs/DATA_FORMATS_INPUT.md
[inputs.exec]: ../exec/README.md
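
As a rough sketch, a typical setup looks like the following; the `command` and
`data_format` option names are assumed from the sample configuration, and the
daemon path is a placeholder:

```toml
# Sketch: run a long-lived collector that prints line protocol on STDOUT.
[[inputs.execd]]
  ## Placeholder path to the daemon binary or script.
  command = ["/opt/telegraf/collector-daemon"]
  ## Assumed: "none" sends no signal; the daemon decides when to emit metrics.
  signal = "none"
  data_format = "influx"
```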
6 changes: 3 additions & 3 deletions plugins/inputs/fail2ban/README.md
@@ -3,8 +3,8 @@
The fail2ban plugin gathers the count of failed and banned ip addresses using
[fail2ban](https://www.fail2ban.org).

This plugin runs the `fail2ban-client` command, which generally requires root
access. Acquiring the required permissions can be done using several methods:

- [Use sudo](#using-sudo) to run fail2ban-client.
- Run telegraf as root. (not recommended)
@@ -49,7 +49,7 @@ Defaults!FAIL2BAN !logfile, !syslog, !pam_session
- failed (integer, count)
- banned (integer, count)

## Example Output

```shell
# fail2ban-client status sshd
5 changes: 3 additions & 2 deletions plugins/inputs/fibaro/README.md
@@ -1,7 +1,8 @@
# Fibaro Input Plugin

The Fibaro plugin makes HTTP calls to the Fibaro controller API to gather values
of hooked devices. Those values could be true (1) or false (0) for switches,
percentage for dimmers, temperature, etc.
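
A minimal configuration sketch; the `url`, `username`, `password`, and `timeout`
option names are assumptions, and all values shown are placeholders:

```toml
# Sketch: poll a Fibaro controller for device values (placeholder values).
[[inputs.fibaro]]
  url = "http://192.168.1.2:80"
  username = "telegraf"
  password = "changeme"
  timeout = "5s"
```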

## Configuration

9 changes: 7 additions & 2 deletions plugins/inputs/file/README.md
@@ -1,7 +1,7 @@
# File Input Plugin

The file plugin parses the **complete** contents of a file **every interval**
using the selected [input data format][].

**Note:** If you wish to parse only newly appended lines use the [tail][] input
plugin instead.
@@ -38,5 +38,10 @@ plugin instead.
# file_tag = ""
```

## Metrics

The format of metrics produced by this plugin depends on the content and data
format of the file.

[input data format]: /docs/DATA_FORMATS_INPUT.md
[tail]: /plugins/inputs/tail
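
For example, a minimal configuration along these lines re-parses one JSON file
on every interval; the file path is a placeholder, and `files` and
`data_format` are assumed from the sample configuration:

```toml
# Sketch: read and parse the whole file on each collection interval.
[[inputs.file]]
  files = ["/var/run/example/stats.json"]  # placeholder path
  data_format = "json"
```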
2 changes: 1 addition & 1 deletion plugins/inputs/filestat/README.md
@@ -1,4 +1,4 @@
# Filestat Input Plugin

The filestat plugin gathers metrics about file existence, size, and other stats.

15 changes: 11 additions & 4 deletions plugins/inputs/fluentd/README.md
@@ -1,10 +1,14 @@
# Fluentd Input Plugin

The fluentd plugin gathers metrics from the plugin endpoint provided by the
[in_monitor plugin][1]. This plugin understands data provided by the
/api/plugins.json resource (/api/config.json is not covered).

You might need to adjust your fluentd configuration in order to reduce series
cardinality in case your fluentd restarts frequently. Every time fluentd starts,
the `plugin_id` value is given a new random value. According to the [fluentd
documentation][2], you can add the `@id` parameter for each plugin to avoid this
behaviour and define a custom `plugin_id`.

Example configuration with the `@id` parameter for the http plugin:

@@ -16,6 +20,9 @@ example configuration with `@id` parameter for http plugin:
</source>
```

[1]: https://docs.fluentd.org/input/monitor_agent
[2]: https://docs.fluentd.org/configuration/config-file#common-plugin-parameter

## Configuration

```toml @sample.conf
13 changes: 7 additions & 6 deletions plugins/inputs/github/README.md
@@ -34,7 +34,7 @@ alternative method for collecting repository information.
# additional_fields = []
```

## Metrics

- github_repository
- tags:
@@ -61,17 +61,18 @@ When the [internal][] input is enabled:
- remaining - How many requests you have remaining (per hour)
- blocks - How many requests have been blocked due to rate limit

When specifying `additional_fields` the plugin will collect the specified
properties. **NOTE:** Querying these additional fields might require additional
API calls. Please make sure you don't exceed the query rate-limit by specifying
too many additional fields. The following lists the available options with the
required API calls and the resulting fields:

- "pull-requests" (2 API-calls per repository)
- fields:
- open_pull_requests (int)
- closed_pull_requests (int)
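
For illustration, enabling the pull-request fields might look like the sketch
below; `repositories` and `access_token` are assumed from the sample
configuration, while `additional_fields` appears in the snippet above:

```toml
# Sketch: collect the extra pull-request counts for one repository.
# Note: this costs two additional API calls per repository per interval.
[[inputs.github]]
  repositories = ["influxdata/telegraf"]
  ## assumed: optional personal access token to raise the hourly rate limit
  # access_token = ""
  additional_fields = ["pull-requests"]
```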

## Example Output

```shell
github_repository,language=Go,license=MIT\ License,name=telegraf,owner=influxdata forks=2679i,networks=2679i,open_issues=794i,size=23263i,stars=7091i,subscribers=316i,watchers=7091i 1563901372000000000