Add polling method to logparser and tail inputs #3213

Merged 1 commit on Sep 11, 2017
13 changes: 8 additions & 5 deletions plugins/inputs/logparser/README.md
@@ -15,12 +15,15 @@ regex patterns.
## /var/log/*/*.log -> find all .log files with a parent dir in /var/log
## /var/log/apache.log -> only tail the apache log file
files = ["/var/log/apache/access.log"]

## Read files that currently exist from the beginning. Files that are created
## while telegraf is running (and that match the "files" globs) will always
## be read from the beginning.
from_beginning = false

## Method used to watch for file updates. Can be either "inotify" or "poll".
# watch_method = "inotify"

## Parse logstash-style "grok" patterns:
## Telegraf built-in parsing patterns: https://goo.gl/dkay10
[inputs.logparser.grok]
@@ -34,15 +37,15 @@ regex patterns.

## Name of the outputted measurement name.
measurement = "apache_access_log"

## Full path(s) to custom pattern files.
custom_pattern_files = []

## Custom patterns can also be defined here. Put one pattern per line.
custom_patterns = '''
'''

## Timezone allows you to provide an override for timestamps that
## don't already include an offset
## e.g. 04/06/2016 12:41:45 data one two 5.43µs
##
@@ -145,7 +148,7 @@ Wed Apr 12 13:10:34 PST 2017 value=42
For cases where the timestamp itself carries no offset, the `timezone` config option is available
to supply one. By default (with `timezone` omitted, blank, or set to `"UTC"`), the times
are processed as if in the UTC timezone. If specified as `timezone = "Local"`, the timestamp
will be processed based on the current machine timezone configuration. Lastly, if using a
timezone from the list of Unix [timezones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones), the logparser grok will attempt to offset
the timestamp accordingly. See the test cases for more detailed examples.

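Taken together, the README changes above can be exercised with a minimal logparser configuration such as the following sketch (the log path and grok pattern are illustrative placeholders, not part of this PR):

```toml
[[inputs.logparser]]
  ## Hypothetical example log file
  files = ["/var/log/apache/access.log"]
  from_beginning = false
  ## Opt into polling, e.g. on filesystems where inotify
  ## events are unavailable or unreliable
  watch_method = "poll"

  [inputs.logparser.grok]
    patterns = ["%{COMBINED_LOG_FORMAT}"]
    measurement = "apache_access_log"
```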
18 changes: 17 additions & 1 deletion plugins/inputs/logparser/logparser.go
@@ -19,6 +19,10 @@ import (
"github.com/influxdata/telegraf/plugins/inputs/logparser/grok"
)

const (
defaultWatchMethod = "inotify"
)

// LogParser is the primary interface for the plugin
type LogParser interface {
ParseLine(line string) (telegraf.Metric, error)
@@ -34,6 +38,7 @@ type logEntry struct {
type LogParserPlugin struct {
Files []string
FromBeginning bool
WatchMethod string

tailers map[string]*tail.Tail
lines chan logEntry
@@ -61,6 +66,9 @@ const sampleConfig = `
## be read from the beginning.
from_beginning = false

## Method used to watch for file updates. Can be either "inotify" or "poll".
# watch_method = "inotify"

## Parse logstash-style "grok" patterns:
## Telegraf built-in parsing patterns: https://goo.gl/dkay10
[inputs.logparser.grok]
@@ -167,6 +175,11 @@ func (l *LogParserPlugin) tailNewfiles(fromBeginning bool) error {
seek.Offset = 0
}

var poll bool
if l.WatchMethod == "poll" {
poll = true
}

// Create a "tailer" for each file
for _, filepath := range l.Files {
g, err := globpath.Compile(filepath)
@@ -188,6 +201,7 @@ func (l *LogParserPlugin) tailNewfiles(fromBeginning bool) error {
Follow: true,
Location: &seek,
MustExist: true,
Poll: poll,
Logger: tail.DiscardingLogger,
})
if err != nil {
@@ -285,6 +299,8 @@ func (l *LogParserPlugin) Stop() {

func init() {
inputs.Add("logparser", func() telegraf.Input {
return &LogParserPlugin{
WatchMethod: defaultWatchMethod,
}
})
}
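The Go changes above reduce to a simple mapping: the configured `watch_method` string selects whether the tail library's boolean `Poll` flag is set. A self-contained sketch of that selection logic (the function name here is illustrative, not telegraf's API):

```go
package main

import "fmt"

// pollRequested mirrors the logic this PR adds to both plugins:
// watch_method = "poll" enables polling; anything else, including
// the "inotify" default, leaves it disabled.
func pollRequested(watchMethod string) bool {
	return watchMethod == "poll"
}

func main() {
	fmt.Println(pollRequested("inotify")) // inotify: Poll stays false
	fmt.Println(pollRequested("poll"))    // poll: Poll becomes true
}
```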
3 changes: 3 additions & 0 deletions plugins/inputs/tail/README.md
@@ -39,6 +39,9 @@ The plugin expects messages in one of the
## Whether file is a named pipe
pipe = false

## Method used to watch for file updates. Can be either "inotify" or "poll".
# watch_method = "inotify"

## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
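The same option applies to the tail input. A minimal configuration sketch that opts into polling (the file path is a placeholder):

```toml
[[inputs.tail]]
  files = ["/var/log/example.log"]
  from_beginning = false
  pipe = false
  ## Poll for file updates instead of relying on inotify
  watch_method = "poll"

  ## Data format to consume.
  data_format = "influx"
```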
14 changes: 14 additions & 0 deletions plugins/inputs/tail/tail.go
@@ -15,10 +15,15 @@ import (
"github.com/influxdata/telegraf/plugins/parsers"
)

const (
defaultWatchMethod = "inotify"
)

type Tail struct {
Files []string
FromBeginning bool
Pipe bool
WatchMethod string

tailers []*tail.Tail
parser parsers.Parser
@@ -50,6 +55,9 @@ const sampleConfig = `
## Whether file is a named pipe
pipe = false

## Method used to watch for file updates. Can be either "inotify" or "poll".
# watch_method = "inotify"

## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
@@ -83,6 +91,11 @@ func (t *Tail) Start(acc telegraf.Accumulator) error {
}
}

var poll bool
if t.WatchMethod == "poll" {
poll = true
}

// Create a "tailer" for each file
for _, filepath := range t.Files {
g, err := globpath.Compile(filepath)
@@ -96,6 +109,7 @@ func (t *Tail) Start(acc telegraf.Accumulator) error {
Follow: true,
Location: seek,
MustExist: true,
Poll: poll,
Pipe: t.Pipe,
Logger: tail.DiscardingLogger,
})