- Use Ubuntu
# Install FPM
sudo aptitude install ruby ruby-dev rubygems build-essential rpm
sudo gem install --no-ri --no-rdoc fpm
make package-no-upload
Then upload the artifacts from the build directory to packagecloud, e.g.:
https://packagecloud.io/medallia/ops/packages/el/7/telegraf-1.3.0_medallia_2~db1e208-0.x86_64.rpm
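One way to script that upload is with the package_cloud CLI. This is only a sketch: the medallia/ops/el/7 repository path is inferred from the example URL above, and the RPM filename will depend on the build.

```sh
# Sketch: push a freshly built RPM to packagecloud.
# Assumes the package_cloud gem can be installed and an API token is configured;
# the repository path (medallia/ops/el/7) is inferred from the URL above, and
# the artifact name below is just the example from that URL.
gem install package_cloud
package_cloud push medallia/ops/el/7 telegraf-1.3.0_medallia_2~db1e208-0.x86_64.rpm
```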
Telegraf is an agent written in Go for collecting, processing, aggregating, and writing metrics.
Design goals are to have a minimal memory footprint with a plugin system so that developers in the community can easily add support for collecting metrics from well known services (like Hadoop, Postgres, or Redis) and third party APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
Telegraf is plugin-driven and has the concept of 4 distinct plugins:
- Input Plugins collect metrics from the system, services, or 3rd party APIs
- Processor Plugins transform, decorate, and/or filter metrics
- Aggregator Plugins create aggregate metrics (e.g. mean, min, max, quantiles, etc.)
- Output Plugins write metrics to various destinations
For more information on Processor and Aggregator plugins, please see the aggregator and processor plugin documentation.
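As a quick illustration of how the four plugin types fit together, the config subcommand can emit a configuration limited to one plugin of each kind. This is a sketch; the printer processor and minmax aggregator are simply example plugin names bundled with Telegraf.

```sh
# Sketch: generate a config containing one plugin of each type:
# cpu (input), printer (processor), minmax (aggregator), influxdb (output).
telegraf --input-filter cpu \
         --processor-filter printer \
         --aggregator-filter minmax \
         --output-filter influxdb \
         config > telegraf.conf
```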
New plugins are designed to be easy to contribute; we'll eagerly accept pull requests and will manage the set of plugins that Telegraf supports. See the contributing guide for instructions on writing new plugins.
You can download the binaries directly from the downloads page.
A few alternate installs are available as well:
Ansible role: https://github.com/rossmcdonald/telegraf
Telegraf manages dependencies via gdm, which gets installed via the Makefile if you don't have it already. You must also build with Go version 1.8+.
- Install Go
- Set up your GOPATH
- Run go get github.com/influxdata/telegraf
- Run cd $GOPATH/src/github.com/influxdata/telegraf
- Run make
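Put together, a from-source build looks roughly like this (a sketch, assuming Go 1.8+ is installed and GOPATH is already exported):

```sh
# Sketch of the full from-source build; assumes Go 1.8+ and an exported GOPATH.
go get github.com/influxdata/telegraf
cd "$GOPATH/src/github.com/influxdata/telegraf"
make
```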
See usage with:
telegraf --help
Generate a telegraf config file:
telegraf config > telegraf.conf

Generate a config with only the cpu input and influxdb output plugins defined:
telegraf --input-filter cpu --output-filter influxdb config

Run a single collection, printing the gathered metrics to stdout:
telegraf --config telegraf.conf -test

Run telegraf with all plugins defined in the config file:
telegraf --config telegraf.conf

Run telegraf, enabling only the cpu and memory inputs and the influxdb output:
telegraf --config telegraf.conf -input-filter cpu:mem -output-filter influxdb
See the configuration guide for a rundown of the more advanced configuration options.
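For reference, a minimal hand-written configuration is only a few TOML tables. The following is a sketch with placeholder values (the InfluxDB URL and database name in particular are assumptions):

```sh
# Sketch: write a minimal config (placeholder values) and verify it with -test.
cat > telegraf.conf <<'EOF'
[agent]
  interval = "10s"

[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]   # placeholder InfluxDB endpoint
  database = "telegraf"              # placeholder database name
EOF

telegraf --config telegraf.conf -test
```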
Telegraf can collect metrics from the following input plugins:
- aerospike
- amqp_consumer (rabbitmq)
- apache
- aws cloudwatch
- bcache
- cassandra
- ceph
- cgroup
- chrony
- consul
- conntrack
- couchbase
- couchdb
- disque
- dns query time
- docker
- dovecot
- elasticsearch
- exec (generic executable plugin; supports JSON, influx, graphite and nagios data formats)
- filestat
- haproxy
- hddtemp
- http_response
- httpjson (generic JSON-emitting http service plugin)
- internal
- influxdb
- interrupts
- ipmi_sensor
- iptables
- jolokia
- kubernetes
- leofs
- lustre2
- mailchimp
- memcached
- mesos
- mongodb
- mysql
- net_response
- nginx
- nsq
- nstat
- ntpq
- phpfpm
- phusion passenger
- ping
- postgresql
- postgresql_extensible
- powerdns
- procstat
- prometheus
- puppetagent
- rabbitmq
- raindrops
- redis
- rethinkdb
- riak
- sensors
- snmp
- snmp_legacy
- sql server (microsoft)
- twemproxy
- varnish
- zfs
- zookeeper
- win_perf_counters (windows performance counters)
- sysstat
- system
- cpu
- mem
- net
- netstat
- disk
- diskio
- swap
- processes
- kernel (/proc/stat)
- kernel_vmstat (/proc/vmstat)
- linux_sysctl_fs (/proc/sys/fs)
Telegraf can also collect metrics via the following service plugins, which run in the background and receive pushed metrics rather than polling on the collection interval (a statsd sketch follows the list):
- http_listener
- kafka_consumer
- mqtt_consumer
- nats_consumer
- nsq_consumer
- logparser
- statsd
- socket_listener
- tail
- tcp_listener
- udp_listener
- webhooks
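As a sketch of how a service plugin is wired up, the statsd input listens on a socket and any statsd client can push metrics to it; the port and metric name below are assumptions.

```sh
# Sketch: enable the statsd service input on an assumed UDP port.
cat >> telegraf.conf <<'EOF'
[[inputs.statsd]]
  protocol = "udp"
  service_address = ":8125"
EOF

# After restarting telegraf, push a test counter from any statsd client
# (the metric name is hypothetical):
echo "deploys.prod:1|c" | nc -u -w1 127.0.0.1 8125
```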
Telegraf is able to parse several input data formats into metrics, such as InfluxDB line protocol, JSON, Graphite, Value, and Nagios; these formats may be used with input plugins that support the data_format option (see the exec sketch after the output plugin list below).

Telegraf can write metrics to the following output plugins:
- influxdb
- amon
- amqp (rabbitmq)
- aws kinesis
- aws cloudwatch
- datadog
- discard
- elasticsearch
- file
- graphite
- graylog
- instrumental
- kafka
- librato
- mqtt
- nats
- nsq
- opentsdb
- prometheus
- riemann
- riemann_legacy
- socket_writer
- tcp
- udp
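The exec sketch referenced above shows the data_format option in use. The script path and name suffix are hypothetical, and the script is assumed to print a flat JSON object of numeric fields.

```sh
# Sketch: parse JSON emitted by a custom script via the exec input.
# /usr/local/bin/mystats.sh is a hypothetical script printing something like
# {"queue_depth": 3, "latency_ms": 12.5}
cat >> telegraf.conf <<'EOF'
[[inputs.exec]]
  commands = ["/usr/local/bin/mystats.sh"]
  data_format = "json"
  name_suffix = "_mystats"
EOF
```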
Please see the contributing guide for details on contributing a plugin to Telegraf.