diff --git a/plugins/inputs/jenkins/README.md b/plugins/inputs/jenkins/README.md
index b4a484457fe3d..c9aed567cbea6 100644
--- a/plugins/inputs/jenkins/README.md
+++ b/plugins/inputs/jenkins/README.md
@@ -1,8 +1,10 @@
# Jenkins Input Plugin
-The jenkins plugin gathers information about the nodes and jobs running in a jenkins instance.
+The jenkins plugin gathers information about the nodes and jobs running in a
+Jenkins instance.
-This plugin does not require a plugin on jenkins and it makes use of Jenkins API to retrieve all the information needed.
+This plugin does not require any plugin to be installed on Jenkins; it uses
+the Jenkins API to retrieve all the information needed.
## Configuration
diff --git a/plugins/inputs/jolokia/README.md b/plugins/inputs/jolokia/README.md
index ba9b90854564c..2d71870cc4bc7 100644
--- a/plugins/inputs/jolokia/README.md
+++ b/plugins/inputs/jolokia/README.md
@@ -1,6 +1,6 @@
# Jolokia Input Plugin
-## Deprecated in version 1.5: Please use the [jolokia2][] plugin
+**Deprecated in version 1.5: Please use the [jolokia2][] plugin**
## Configuration
diff --git a/plugins/inputs/jolokia2/README.md b/plugins/inputs/jolokia2/README.md
index 665a4f1dacb17..65abc22761d2b 100644
--- a/plugins/inputs/jolokia2/README.md
+++ b/plugins/inputs/jolokia2/README.md
@@ -1,6 +1,8 @@
# Jolokia2 Input Plugin
-The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP protocol](https://jolokia.org/reference/html/protocol.html).
+The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX
+metrics from an HTTP endpoint using Jolokia's [JSON-over-HTTP
+protocol](https://jolokia.org/reference/html/protocol.html).
* [jolokia2_agent Configuration](jolokia2_agent/README.md)
* [jolokia2_proxy Configuration](jolokia2_proxy/README.md)
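For orientation, the protocol exchange can be sketched in Python. The helper below is illustrative only (it is not part of the plugin) and shows the JSON body of a Jolokia `read` request:

```python
import json

def jolokia_read_body(mbean, attribute, path=None):
    """Build the JSON body of a Jolokia 'read' POST request.

    Illustrative only: the plugin builds these requests internally."""
    body = {"type": "read", "mbean": mbean, "attribute": attribute}
    if path is not None:
        body["path"] = path  # drill into a nested attribute value
    return json.dumps(body)

# Request the 'used' part of the Memory MBean's HeapMemoryUsage attribute.
print(jolokia_read_body("java.lang:type=Memory", "HeapMemoryUsage", "used"))
```

POSTing such a body to an agent's `/jolokia` endpoint returns the attribute value as JSON.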
@@ -9,7 +11,8 @@ The [Jolokia](http://jolokia.org) _agent_ and _proxy_ input plugins collect JMX
### Jolokia Agent Configuration
-The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia agent](https://jolokia.org/agent/jvm.html) REST endpoints.
+The `jolokia2_agent` input plugin reads JMX metrics from one or more [Jolokia
+agent](https://jolokia.org/agent/jvm.html) REST endpoints.
```toml @sample.conf
[[inputs.jolokia2_agent]]
@@ -39,7 +42,9 @@ Optionally, specify TLS options for communicating with agents:
### Jolokia Proxy Configuration
-The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_ by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html) REST endpoint.
+The `jolokia2_proxy` input plugin reads JMX metrics from one or more _targets_
+by interacting with a [Jolokia proxy](https://jolokia.org/features/proxy.html)
+REST endpoint.
```toml
[[inputs.jolokia2_proxy]]
@@ -84,7 +89,8 @@ Optionally, specify TLS options for communicating with proxies:
### Jolokia Metric Configuration
-Each `metric` declaration generates a Jolokia request to fetch telemetry from a JMX MBean.
+Each `metric` declaration generates a Jolokia request to fetch telemetry from a
+JMX MBean.
| Key | Required | Description |
|----------------|----------|-------------|
@@ -110,7 +116,8 @@ The preceeding `jvm_memory` `metric` declaration produces the following output:
jvm_memory HeapMemoryUsage.committed=4294967296,HeapMemoryUsage.init=4294967296,HeapMemoryUsage.max=4294967296,HeapMemoryUsage.used=1750658992,NonHeapMemoryUsage.committed=67350528,NonHeapMemoryUsage.init=2555904,NonHeapMemoryUsage.max=-1,NonHeapMemoryUsage.used=65821352,ObjectPendingFinalizationCount=0 1503762436000000000
```
-Use `*` wildcards against `mbean` property-key values to create distinct series by capturing values into `tag_keys`.
+Use `*` wildcards against `mbean` property-key values to create distinct series
+by capturing values into `tag_keys`.
```toml
[[inputs.jolokia2_agent.metric]]
@@ -120,7 +127,9 @@ Use `*` wildcards against `mbean` property-key values to create distinct series
tag_keys = ["name"]
```
-Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and `name` is used as a tag, the preceeding `jvm_garbage_collector` `metric` declaration produces two metrics.
+Since `name=*` matches both `G1 Old Generation` and `G1 Young Generation`, and
+`name` is used as a tag, the preceding `jvm_garbage_collector` `metric`
+declaration produces two metrics.
```shell
jvm_garbage_collector,name=G1\ Old\ Generation CollectionCount=0,CollectionTime=0 1503762520000000000
@@ -138,7 +147,8 @@ Use `tag_prefix` along with `tag_keys` to add detail to tag names.
tag_prefix = "pool_"
```
-The preceeding `jvm_memory_pool` `metric` declaration produces six metrics, each with a distinct `pool_name` tag.
+The preceding `jvm_memory_pool` `metric` declaration produces six metrics, each
+with a distinct `pool_name` tag.
```text
jvm_memory_pool,pool_name=Compressed\ Class\ Space PeakUsage.max=1073741824,PeakUsage.committed=3145728,PeakUsage.init=0,Usage.committed=3145728,Usage.init=0,PeakUsage.used=3017976,Usage.max=1073741824,Usage.used=3017976 1503764025000000000
@@ -149,7 +159,10 @@ jvm_memory_pool,pool_name=G1\ Survivor\ Space Usage.max=-1,Usage.init=0,Collecti
jvm_memory_pool,pool_name=Metaspace PeakUsage.init=0,PeakUsage.used=21852224,PeakUsage.max=-1,Usage.max=-1,Usage.committed=22282240,Usage.init=0,Usage.used=21852224,PeakUsage.committed=22282240 1503764025000000000
```
-Use substitutions to create fields and field prefixes with MBean property-keys captured by wildcards. In the following example, `$1` represents the value of the property-key `name`, and `$2` represents the value of the property-key `topic`.
+Use substitutions to create fields and field prefixes with MBean property-keys
+captured by wildcards. In the following example, `$1` represents the value of
+the property-key `name`, and `$2` represents the value of the property-key
+`topic`.
```toml
[[inputs.jolokia2_agent.metric]]
@@ -159,13 +172,16 @@ Use substitutions to create fields and field prefixes with MBean property-keys c
tag_keys = ["topic"]
```
-The preceeding `kafka_topic` `metric` declaration produces a metric per Kafka topic. The `name` Mbean property-key is used as a field prefix to aid in gathering fields together into the single metric.
+The preceding `kafka_topic` `metric` declaration produces a metric per Kafka
+topic. The `name` MBean property-key is used as a field prefix to aid in
+gathering fields together into the single metric.
```text
kafka_topic,topic=my-topic BytesOutPerSec.MeanRate=0,FailedProduceRequestsPerSec.MeanRate=0,BytesOutPerSec.EventType="bytes",BytesRejectedPerSec.Count=0,FailedProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.EventType="requests",MessagesInPerSec.RateUnit="SECONDS",BytesInPerSec.EventType="bytes",BytesOutPerSec.RateUnit="SECONDS",BytesInPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.EventType="requests",TotalFetchRequestsPerSec.MeanRate=146.301533938701,BytesOutPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.MeanRate=0,BytesRejectedPerSec.FifteenMinuteRate=0,MessagesInPerSec.FiveMinuteRate=0,BytesInPerSec.Count=0,BytesRejectedPerSec.MeanRate=0,FailedFetchRequestsPerSec.MeanRate=0,FailedFetchRequestsPerSec.FiveMinuteRate=0,FailedFetchRequestsPerSec.FifteenMinuteRate=0,FailedProduceRequestsPerSec.Count=0,TotalFetchRequestsPerSec.FifteenMinuteRate=128.59314292334466,TotalFetchRequestsPerSec.OneMinuteRate=126.71551273850747,TotalFetchRequestsPerSec.Count=1353483,TotalProduceRequestsPerSec.FifteenMinuteRate=0,FailedFetchRequestsPerSec.OneMinuteRate=0,FailedFetchRequestsPerSec.Count=0,FailedProduceRequestsPerSec.FifteenMinuteRate=0,TotalFetchRequestsPerSec.FiveMinuteRate=130.8516148751592,TotalFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.RateUnit="SECONDS",BytesInPerSec.MeanRate=0,FailedFetchRequestsPerSec.RateUnit="SECONDS",BytesRejectedPerSec.OneMinuteRate=0,BytesOutPerSec.Count=0,BytesOutPerSec.OneMinuteRate=0,MessagesInPerSec.FifteenMinuteRate=0,MessagesInPerSec.MeanRate=0,BytesInPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.RateUnit="SECONDS",FailedProduceRequestsPerSec.OneMinuteRate=0,TotalProduceRequestsPerSec.EventType="requests",BytesRejectedPerSec.FiveMinuteRate=0,BytesRejectedPerSec.EventType="bytes",BytesOutPerSec.FiveMinuteRate=0,FailedProduceRequestsPerSec.FiveMinuteRate=0,MessagesInPerSec.Count=0,TotalProduceRequestsPerSec.FiveMinuteRate=0,TotalProduceRequestsPerSec.OneMinuteRate=0,MessagesInPerSec.EventType="me
ssages",MessagesInPerSec.OneMinuteRate=0,TotalFetchRequestsPerSec.EventType="requests",BytesInPerSec.RateUnit="SECONDS",BytesInPerSec.FifteenMinuteRate=0,TotalProduceRequestsPerSec.Count=0 1503767532000000000
```
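The wildcard capture behind `$1` and `$2` can be sketched with a small regex helper; this is a hypothetical illustration, not the plugin's implementation:

```python
import re

def capture_substitutions(pattern, mbean):
    """Capture '*' wildcard values from a matching MBean name.

    Hypothetical sketch: '$1', '$2', ... correspond to the captured
    groups in declaration order."""
    regex = re.escape(pattern).replace(r"\*", "(.+?)") + "$"
    match = re.match(regex, mbean)
    return match.groups() if match else None

print(capture_substitutions(
    "kafka.server:name=*,topic=*,type=BrokerTopicMetrics",
    "kafka.server:name=BytesOutPerSec,topic=my-topic,type=BrokerTopicMetrics"))
# ('BytesOutPerSec', 'my-topic')
```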
-Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configurations that apply to every `metric` declaration.
+Both `jolokia2_agent` and `jolokia2_proxy` plugins support default
+configurations that apply to every `metric` declaration.
| Key | Default Value | Description |
|---------------------------|---------------|-------------|
@@ -187,4 +203,5 @@ Both `jolokia2_agent` and `jolokia2_proxy` plugins support default configuration
* [Weblogic](/plugins/inputs/jolokia2/examples/weblogic.conf)
* [ZooKeeper](/plugins/inputs/jolokia2/examples/zookeeper.conf)
-Please help improve this list and contribute new configuration files by opening an issue or pull request.
+Please help improve this list and contribute new configuration files by opening
+an issue or pull request.
diff --git a/plugins/inputs/jti_openconfig_telemetry/README.md b/plugins/inputs/jti_openconfig_telemetry/README.md
index deeb7cae1d260..c325b2305e535 100644
--- a/plugins/inputs/jti_openconfig_telemetry/README.md
+++ b/plugins/inputs/jti_openconfig_telemetry/README.md
@@ -1,7 +1,11 @@
# JTI OpenConfig Telemetry Input Plugin
-This plugin reads Juniper Networks implementation of OpenConfig telemetry data from listed sensors using Junos Telemetry Interface. Refer to
-[openconfig.net](http://openconfig.net/) for more details about OpenConfig and [Junos Telemetry Interface (JTI)](https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html).
+This plugin reads Juniper Networks' implementation of OpenConfig telemetry data
+from listed sensors using Junos Telemetry Interface. Refer to
+[openconfig.net](http://openconfig.net/) for more details about OpenConfig and
+[Junos Telemetry Interface (JTI)][1].
+
+[1]: https://www.juniper.net/documentation/en_US/junos/topics/concept/junos-telemetry-interface-oveview.html
## Configuration
diff --git a/plugins/inputs/kafka_consumer/README.md b/plugins/inputs/kafka_consumer/README.md
index 1bc3f2cb1ddd9..c69e5f11b5dba 100644
--- a/plugins/inputs/kafka_consumer/README.md
+++ b/plugins/inputs/kafka_consumer/README.md
@@ -3,8 +3,8 @@
The [Kafka][kafka] consumer plugin reads from Kafka
and creates metrics using one of the supported [input data formats][].
-For old kafka version (< 0.8), please use the [kafka_consumer_legacy][] input plugin
-and use the old zookeeper connection method.
+For old Kafka versions (< 0.8), please use the [kafka_consumer_legacy][] input
+plugin and use the old zookeeper connection method.
## Configuration
diff --git a/plugins/inputs/kafka_consumer_legacy/README.md b/plugins/inputs/kafka_consumer_legacy/README.md
index b9d9e97bbf5a2..dfff013c387d6 100644
--- a/plugins/inputs/kafka_consumer_legacy/README.md
+++ b/plugins/inputs/kafka_consumer_legacy/README.md
@@ -1,12 +1,13 @@
# Kafka Consumer Legacy Input Plugin
-## Deprecated in version 1.4. Please use [Kafka Consumer input plugin][]
+**Deprecated in version 1.4. Please use [Kafka Consumer input plugin][]**
The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
-topic and adds messages to InfluxDB. The plugin assumes messages follow the
-line protocol. [Consumer Group](http://godoc.org/github.com/wvanbergen/kafka/consumergroup)
-is used to talk to the Kafka cluster so multiple instances of telegraf can read
-from the same topic in parallel.
+topic and adds messages to InfluxDB. The plugin assumes messages follow the line
+protocol. [Consumer Group][1] is used to talk to the Kafka cluster so multiple
+instances of telegraf can read from the same topic in parallel.
+
+[1]: http://godoc.org/github.com/wvanbergen/kafka/consumergroup
## Configuration
@@ -45,4 +46,4 @@ from the same topic in parallel.
Running integration tests requires running Zookeeper & Kafka. See Makefile
for kafka container command.
-[Kafka Consumer input plugin]: /plugins/inputs/kafka_consumer
+[Kafka Consumer input plugin]: ../kafka_consumer/README.md
diff --git a/plugins/inputs/kapacitor/README.md b/plugins/inputs/kapacitor/README.md
index b73db98230a02..55cf9e356b118 100644
--- a/plugins/inputs/kapacitor/README.md
+++ b/plugins/inputs/kapacitor/README.md
@@ -90,9 +90,12 @@ The Kapacitor plugin collects metrics from the given Kapacitor instances.
## kapacitor
-The `kapacitor` measurement stores fields with information related to
-[Kapacitor tasks](https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks)
-and [subscriptions](https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/).
+The `kapacitor` measurement stores fields with information related to [Kapacitor
+tasks][tasks] and [subscriptions][subs].
+
+[tasks]: https://docs.influxdata.com/kapacitor/latest/introduction/getting-started/#kapacitor-tasks
+
+[subs]: https://docs.influxdata.com/kapacitor/latest/administration/subscription-management/
### num_enabled_tasks
@@ -115,23 +118,30 @@ The `kapacitor_alert` measurement stores fields with information related to
### notification-dropped
-The number of internal notifications dropped because they arrive too late from another Kapacitor node.
-If this count is increasing, Kapacitor Enterprise nodes aren't able to communicate fast enough
-to keep up with the volume of alerts.
+The number of internal notifications dropped because they arrive too late from
+another Kapacitor node. If this count is increasing, Kapacitor Enterprise nodes
+aren't able to communicate fast enough to keep up with the volume of alerts.
### primary-handle-count
-The number of times this node handled an alert as the primary. This count should increase under normal conditions.
+The number of times this node handled an alert as the primary. This count should
+increase under normal conditions.
### secondary-handle-count
-The number of times this node handled an alert as the secondary. An increase in this counter indicates that the primary is failing to handle alerts in a timely manner.
+The number of times this node handled an alert as the secondary. An increase in
+this counter indicates that the primary is failing to handle alerts in a timely
+manner.
---
## kapacitor_cluster
-The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to communicate](https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications) with one another. Specifically, these metrics track the gossip communication between the Kapacitor nodes.
+The `kapacitor_cluster` measurement reflects the ability of [Kapacitor nodes to
+communicate][cluster] with one another. Specifically, these metrics track the
+gossip communication between the Kapacitor nodes.
+
+[cluster]: https://docs.influxdata.com/enterprise_kapacitor/v1.5/administration/configuration/#cluster-communications
### dropped_member_events
@@ -146,8 +156,9 @@ The number of gossip user events that were dropped.
## kapacitor_edges
The `kapacitor_edges` measurement stores fields with information related to
-[edges](https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines)
-in Kapacitor TICKscripts.
+[edges][] in Kapacitor TICKscripts.
+
+[edges]: https://docs.influxdata.com/kapacitor/latest/tick/introduction/#pipelines
### collected
@@ -161,8 +172,8 @@ The number of messages emitted by TICKscript edges.
## kapacitor_ingress
-The `kapacitor_ingress` measurement stores fields with information related to data
-coming into Kapacitor.
+The `kapacitor_ingress` measurement stores fields with information related to
+data coming into Kapacitor.
### points_received
@@ -173,7 +184,9 @@ The number of points received by Kapacitor.
## kapacitor_load
The `kapacitor_load` measurement stores fields with information related to the
-[Kapacitor Load Directory service](https://docs.influxdata.com/kapacitor/latest/guides/load_directory/).
+[Kapacitor Load Directory service][load-dir].
+
+[load-dir]: https://docs.influxdata.com/kapacitor/latest/guides/load_directory/
### errors
@@ -183,7 +196,8 @@ The number of errors reported from the load directory service.
## kapacitor_memstats
-The `kapacitor_memstats` measurement stores fields related to Kapacitor memory usage.
+The `kapacitor_memstats` measurement stores fields related to Kapacitor memory
+usage.
### alloc_bytes
@@ -341,14 +355,17 @@ The total number of unique series processed.
#### write_errors
-The number of errors that occurred when writing to InfluxDB or other write endpoints.
+The number of errors that occurred when writing to InfluxDB or other write
+endpoints.
---
### kapacitor_topics
-The `kapacitor_topics` measurement stores fields related to
-Kapacitor topics]().
+The `kapacitor_topics` measurement stores fields related to [Kapacitor
+topics][topics].
+
+[topics]: https://docs.influxdata.com/kapacitor/latest/working/using_alert_topics/
#### collected (kapacitor_topics)
diff --git a/plugins/inputs/kernel/README.md b/plugins/inputs/kernel/README.md
index a2063a332499b..8b38e2a415a5b 100644
--- a/plugins/inputs/kernel/README.md
+++ b/plugins/inputs/kernel/README.md
@@ -3,8 +3,9 @@
This plugin is only available on Linux.
The kernel plugin gathers info about the kernel that doesn't fit into other
-plugins. In general, it is the statistics available in `/proc/stat` that are
-not covered by other plugins as well as the value of `/proc/sys/kernel/random/entropy_avail`
+plugins. In general, it is the statistics available in `/proc/stat` that are not
+covered by other plugins, as well as the value of
+`/proc/sys/kernel/random/entropy_avail`.
The metrics are documented in `man proc` under the `/proc/stat` section.
The metrics are documented in `man 4 random` under the `/proc/sys/kernel/random` section.
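As a sketch of the entropy value mentioned above, the proc file can be read directly (Linux only; this helper is illustrative, not part of the plugin):

```python
import os

def read_entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate as an int, or None
    when the proc file is unavailable (e.g. non-Linux hosts)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

print(read_entropy_avail())
```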
diff --git a/plugins/inputs/kernel_vmstat/README.md b/plugins/inputs/kernel_vmstat/README.md
index f5a9773b6eb0d..0da56783391e6 100644
--- a/plugins/inputs/kernel_vmstat/README.md
+++ b/plugins/inputs/kernel_vmstat/README.md
@@ -1,10 +1,13 @@
# Kernel VMStat Input Plugin
-The kernel_vmstat plugin gathers virtual memory statistics
-by reading /proc/vmstat. For a full list of available fields see the
-/proc/vmstat section of the [proc man page](http://man7.org/linux/man-pages/man5/proc.5.html).
-For a better idea of what each field represents, see the
-[vmstat man page](http://linux.die.net/man/8/vmstat).
+The kernel_vmstat plugin gathers virtual memory statistics by reading
+/proc/vmstat. For a full list of available fields see the /proc/vmstat section
+of the [proc man page][man-proc]. For a better idea of what each field
+represents, see the [vmstat man page][man-vmstat].
+
+[man-proc]: http://man7.org/linux/man-pages/man5/proc.5.html
+
+[man-vmstat]: http://linux.die.net/man/8/vmstat
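To illustrate the shape of the data, `/proc/vmstat` is a list of `name value` pairs and can be parsed as below (hypothetical helper, not the plugin's code):

```python
def parse_vmstat(text):
    """Parse /proc/vmstat-style 'name value' lines into a dict of ints."""
    fields = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if value.strip().isdigit():
            fields[name] = int(value)
    return fields

sample = "nr_free_pages 78730\npgpgin 5245426\npswpout 0"
print(parse_vmstat(sample))
# {'nr_free_pages': 78730, 'pgpgin': 5245426, 'pswpout': 0}
```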
```text
/proc/vmstat
diff --git a/plugins/inputs/kibana/README.md b/plugins/inputs/kibana/README.md
index e0d95f61a4da0..9c09850005483 100644
--- a/plugins/inputs/kibana/README.md
+++ b/plugins/inputs/kibana/README.md
@@ -63,10 +63,17 @@ Requires the following tools:
- [Docker](https://docs.docker.com/get-docker/)
- [Docker Compose](https://docs.docker.com/compose/install/)
-From the root of this project execute the following script: `./plugins/inputs/kibana/test_environment/run_test_env.sh`
+From the root of this project execute the following script:
+`./plugins/inputs/kibana/test_environment/run_test_env.sh`
-This will build the latest Telegraf and then start up Kibana and Elasticsearch, Telegraf will begin monitoring Kibana's status and write its results to the file `/tmp/metrics.out` in the Telegraf container.
+This will build the latest Telegraf and then start up Kibana and Elasticsearch.
+Telegraf will begin monitoring Kibana's status and write its results to the file
+`/tmp/metrics.out` in the Telegraf container.
-Then you can attach to the telegraf container to inspect the file `/tmp/metrics.out` to see if the status is being reported.
+Then you can attach to the telegraf container to inspect the file
+`/tmp/metrics.out` to see if the status is being reported.
-The Visual Studio Code [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension provides an easy user interface to attach to the running container.
+The Visual Studio Code [Remote - Containers][remote] extension provides an easy
+user interface to attach to the running container.
+
+[remote]: https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers
diff --git a/plugins/inputs/kinesis_consumer/README.md b/plugins/inputs/kinesis_consumer/README.md
index d571d887fb0c6..d85f3653d9cc4 100644
--- a/plugins/inputs/kinesis_consumer/README.md
+++ b/plugins/inputs/kinesis_consumer/README.md
@@ -89,8 +89,8 @@ DynamoDB:
### DynamoDB Checkpoint
-The DynamoDB checkpoint stores the last processed record in a DynamoDB. To leverage
-this functionality, create a table with the following string type keys:
+The DynamoDB checkpoint stores the last processed record in a DynamoDB table. To
+leverage this functionality, create a table with the following string type keys:
```shell
Partition key: namespace
diff --git a/plugins/inputs/knx_listener/README.md b/plugins/inputs/knx_listener/README.md
index 04acac83a254d..f77511bcb8522 100644
--- a/plugins/inputs/knx_listener/README.md
+++ b/plugins/inputs/knx_listener/README.md
@@ -7,8 +7,6 @@ underlying "knx-go" project site ().
## Configuration
-This is a sample config for the plugin.
-
```toml @sample.conf
# Listener capable of handling KNX bus messages provided through a KNX-IP Interface.
[[inputs.knx_listener]]
diff --git a/plugins/inputs/kube_inventory/README.md b/plugins/inputs/kube_inventory/README.md
index 0f3150e7b08a6..711ba4d603622 100644
--- a/plugins/inputs/kube_inventory/README.md
+++ b/plugins/inputs/kube_inventory/README.md
@@ -1,6 +1,7 @@
# Kubernetes Inventory Input Plugin
-This plugin generates metrics derived from the state of the following Kubernetes resources:
+This plugin generates metrics derived from the state of the following Kubernetes
+resources:
- daemonsets
- deployments
@@ -86,7 +87,13 @@ avoid cardinality issues:
## Kubernetes Permissions
-If using [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/), you will need to create a cluster role to list "persistentvolumes" and "nodes". You will then need to make an [aggregated ClusterRole](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) that will eventually be bound to a user or group.
+If using [RBAC authorization][rbac], you will need to create a cluster role to
+list "persistentvolumes" and "nodes". You will then need to make an [aggregated
+ClusterRole][agg] that will eventually be bound to a user or group.
+
+[rbac]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+
+[agg]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles
```yaml
---
@@ -115,7 +122,8 @@ aggregationRule:
rules: [] # Rules are automatically filled in by the controller manager.
```
-Bind the newly created aggregated ClusterRole with the following config file, updating the subjects as needed.
+Bind the newly created aggregated ClusterRole with the following config file,
+updating the subjects as needed.
```yaml
---
@@ -135,8 +143,9 @@ subjects:
## Quickstart in k3s
-When monitoring [k3s](https://k3s.io) server instances one can re-use already generated administration token.
-This is less secure than using the more restrictive dedicated telegraf user but more convienient to set up.
+When monitoring [k3s](https://k3s.io) server instances one can re-use the
+already generated administration token. This is less secure than using the more
+restrictive dedicated telegraf user but more convenient to set up.
```console
# an empty token will make telegraf use the client cert/key files instead
@@ -294,7 +303,8 @@ tls_key = "/run/telegraf-kubernetes-key"
### pv `phase_type`
-The persistentvolume "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
+The persistentvolume "phase" is saved in the `phase` tag with a correlated
+numeric field called `phase_type` corresponding with that tag value.
| Tag value | Corresponding field value |
| --------- | ------------------------- |
@@ -307,7 +317,8 @@ The persistentvolume "phase" is saved in the `phase` tag with a correlated numer
### pvc `phase_type`
-The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated numeric field called `phase_type` corresponding with that tag value.
+The persistentvolumeclaim "phase" is saved in the `phase` tag with a correlated
+numeric field called `phase_type` corresponding with that tag value.
| Tag value | Corresponding field value |
| --------- | ------------------------- |
diff --git a/plugins/inputs/kubernetes/README.md b/plugins/inputs/kubernetes/README.md
index c0b4c96c7d3a9..6a5bda4fe86aa 100644
--- a/plugins/inputs/kubernetes/README.md
+++ b/plugins/inputs/kubernetes/README.md
@@ -6,13 +6,15 @@ is running as part of a `daemonset` within a kubernetes installation. This
means that telegraf is running on every node within the cluster. Therefore, you
should configure this plugin to talk to its locally running kubelet.
-To find the ip address of the host you are running on you can issue a command like the following:
+To find the ip address of the host you are running on you can issue a command
+like the following:
```sh
curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME --header "Authorization: Bearer $TOKEN" --insecure | jq -r '.status.hostIP'
```
-In this case we used the downward API to pass in the `$POD_NAMESPACE` and `$HOSTNAME` is the hostname of the pod which is set by the kubernetes API.
+In this case we used the downward API to pass in `$POD_NAMESPACE`, and
+`$HOSTNAME` is the hostname of the pod, which is set by the kubernetes API.
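A sketch of the downward API wiring in the daemonset's container spec; the exact manifest is an assumption (`HOSTNAME` defaults to the pod name, so wiring it explicitly is optional):

```yaml
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: HOSTNAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```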
Kubernetes is a fast moving project, with a new minor release every 3 months. As
such, we will aim to maintain support only for versions that are supported by
@@ -65,8 +67,8 @@ avoid cardinality issues:
## DaemonSet
-For recommendations on running Telegraf as a DaemonSet see [Monitoring Kubernetes
-Architecture][k8s-telegraf] or view the Helm charts:
+For recommendations on running Telegraf as a DaemonSet see [Monitoring
+Kubernetes Architecture][k8s-telegraf] or view the Helm charts:
- [Telegraf][]
- [InfluxDB][]
diff --git a/plugins/inputs/lanz/README.md b/plugins/inputs/lanz/README.md
index 650039ccaf899..3c63aa596a97e 100644
--- a/plugins/inputs/lanz/README.md
+++ b/plugins/inputs/lanz/README.md
@@ -1,9 +1,11 @@
# Arista LANZ Consumer Input Plugin
-This plugin provides a consumer for use with Arista Networks’ Latency Analyzer (LANZ)
+This plugin provides a consumer for use with Arista Networks’ Latency Analyzer
+(LANZ).
Metrics are read from a stream of data via TCP through port 50001 on the
-switches management IP. The data is in Protobuffers format. For more information on Arista LANZ
+switch's management IP. The data is in Protocol Buffers format. For more
+information on Arista LANZ
-
@@ -13,11 +15,6 @@ This plugin uses Arista's sdk.
## Configuration
-You will need to configure LANZ and enable streaming LANZ data.
-
--
--
-
```toml @sample.conf
# Read metrics off Arista LANZ, via socket
[[inputs.lanz]]
@@ -28,9 +25,15 @@ You will need to configure LANZ and enable streaming LANZ data.
]
```
+You will need to configure LANZ and enable streaming LANZ data.
+
+-
+-
+
## Metrics
-For more details on the metrics see
+For more details on the metrics see
+
- lanz_congestion_record:
- tags:
diff --git a/plugins/inputs/leofs/README.md b/plugins/inputs/leofs/README.md
index dcb1448c7f645..078145519afb1 100644
--- a/plugins/inputs/leofs/README.md
+++ b/plugins/inputs/leofs/README.md
@@ -1,6 +1,8 @@
# LeoFS Input Plugin
-The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using SNMP. See [LeoFS Documentation / System Administration / System Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
+The LeoFS plugin gathers metrics of LeoGateway, LeoManager, and LeoStorage using
+SNMP. See [LeoFS Documentation / System Administration / System
+Monitoring](https://leo-project.net/leofs/docs/admin/system_admin/monitoring/).
## Configuration
diff --git a/plugins/inputs/linux_sysctl_fs/README.md b/plugins/inputs/linux_sysctl_fs/README.md
index 1fbf6e9fb4772..84d79ee73ed7a 100644
--- a/plugins/inputs/linux_sysctl_fs/README.md
+++ b/plugins/inputs/linux_sysctl_fs/README.md
@@ -1,6 +1,8 @@
# Linux Sysctl FS Input Plugin
-The linux_sysctl_fs input provides Linux system level file metrics. The documentation on these fields can be found at .
+The linux_sysctl_fs input provides Linux system level file metrics. The
+documentation on these fields can be found at
+.
Example output:
diff --git a/plugins/inputs/logparser/README.md b/plugins/inputs/logparser/README.md
index f8b8125707842..9498c65ceb706 100644
--- a/plugins/inputs/logparser/README.md
+++ b/plugins/inputs/logparser/README.md
@@ -1,6 +1,7 @@
# Logparser Input Plugin
-## Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the [`grok` data format][grok parser]
+**Deprecated in Telegraf 1.15: Please use the [tail][] plugin along with the
+[`grok` data format][grok parser]**
The `logparser` plugin streams and parses the given logfiles. Currently it
has the capability of parsing "grok" patterns from logfiles, which also supports
diff --git a/plugins/inputs/logstash/README.md b/plugins/inputs/logstash/README.md
index 1f09eb830dd75..15963430050b3 100644
--- a/plugins/inputs/logstash/README.md
+++ b/plugins/inputs/logstash/README.md
@@ -1,7 +1,7 @@
# Logstash Input Plugin
-This plugin reads metrics exposed by
-[Logstash Monitoring API](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html).
+This plugin reads metrics exposed by [Logstash Monitoring
+API](https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html).
Logstash 5 and later is supported.
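The monitoring API is plain HTTP; the endpoint URLs the plugin reads can be sketched as follows (the base address is an assumption matching the default Logstash API port):

```python
BASE = "http://localhost:9600"  # assumed default Logstash API address

def stats_url(section):
    """URL for one section of the Logstash node stats API."""
    return f"{BASE}/_node/stats/{section}"

for section in ("jvm", "process", "events", "pipelines"):
    print(stats_url(section))
```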
@@ -43,7 +43,8 @@ Logstash 5 and later is supported.
## Metrics
-Additional plugin stats may be collected (because logstash doesn't consistently expose all stats)
+Additional plugin stats may be collected (because logstash doesn't consistently
+expose all stats).
- logstash_jvm
- tags:
diff --git a/plugins/inputs/lustre2/README.md b/plugins/inputs/lustre2/README.md
index 8d2e562c3a762..8cd3ed9139064 100644
--- a/plugins/inputs/lustre2/README.md
+++ b/plugins/inputs/lustre2/README.md
@@ -1,9 +1,10 @@
# Lustre Input Plugin
-The [Lustre][]® file system is an open-source, parallel file system that supports
-many requirements of leadership class HPC simulation environments.
+The [Lustre][]® file system is an open-source, parallel file system that
+supports many requirements of leadership class HPC simulation environments.
-This plugin monitors the Lustre file system using its entries in the proc filesystem.
+This plugin monitors the Lustre file system using its entries in the proc
+filesystem.
## Configuration
@@ -28,7 +29,8 @@ This plugin monitors the Lustre file system using its entries in the proc filesy
## Metrics
-From `/proc/fs/lustre/obdfilter/*/stats` and `/proc/fs/lustre/osd-ldiskfs/*/stats`:
+From `/proc/fs/lustre/obdfilter/*/stats` and
+`/proc/fs/lustre/osd-ldiskfs/*/stats`:
- lustre2
- tags:
diff --git a/plugins/inputs/lvm/README.md b/plugins/inputs/lvm/README.md
index f0267077676e3..bc07c37cb208e 100644
--- a/plugins/inputs/lvm/README.md
+++ b/plugins/inputs/lvm/README.md
@@ -5,9 +5,6 @@ physical volumes, volume groups, and logical volumes.
## Configuration
-The `lvm` command requires elevated permissions. If the user has configured
-sudo with the ability to run these commands, then set the `use_sudo` to true.
-
```toml @sample.conf
# Read metrics about LVM physical volumes, volume groups, logical volumes.
[[inputs.lvm]]
@@ -15,6 +12,9 @@ sudo with the ability to run these commands, then set the `use_sudo` to true.
use_sudo = false
```
+The `lvm` command requires elevated permissions. If the user has configured sudo
+with the ability to run these commands, then set the `use_sudo` to true.
+
### Using sudo
If your account does not already have the ability to run commands