
filebeat panic with json overwrite_keys and elasticsearch output #1378

Closed
jeffearl opened this issue Apr 12, 2016 · 2 comments · Fixed by #1421
jeffearl commented Apr 12, 2016

I'm testing the new JSON support in filebeat 5.0.0-alpha1 and I'm having an issue sending JSON output to Elasticsearch with the `overwrite_keys` option. If `overwrite_keys` is enabled, the filebeat process crashes; debug output shows "panic: interface conversion: interface is string, not common.Time". The source timestamp in my test log entry is "2016-04-05T18:47:18.444Z", which seems to be in the proper format. Both the Logstash and local file outputs work as expected under the same conditions with the same log. If I disable `overwrite_keys`, logs are sent to Elasticsearch correctly and the filebeat process keeps running.

Version: filebeat 5.0.0-alpha1
Operating System: CentOS 6.7 64-bit

Steps to Reproduce:

  json:
    message_key: msg
    keys_under_root: true
    overwrite_keys: true

Log entry:
```
{"@timestamp":"2016-04-05T18:47:18.444Z","level":"INFO","logger":"iapi.logger","thread":"JobCourier4","appInfo":{"appname":"SessionManager","appid":"Pooler","host":"demohost.mydomain.com","ip":"192.168.128.113","pid":13982},"userFields":{"ApplicationId":"PROFAPP_001","RequestTrackingId":"RetrieveTBProfileToken-6066477"},"source":"DataAccess\/FetchActiveSessionToken.process","msg":"FetchActiveSessionToken process ended"}
```

debug message:

```
2016/04/12 20:44:32.702501 prospector.go:86: INFO Starting prospector of type: log
2016/04/12 20:44:32.702505 prospector.go:94: INFO Run prospector
2016/04/12 20:44:32.702509 prospector_log.go:46: DBG  Start next scan
2016/04/12 20:44:32.702526 prospector_log.go:79: DBG  scan path /var/log/tibco/*/iapi-logs/*-watch.log
2016/04/12 20:44:32.702568 prospector_log.go:92: DBG  Check file for harvesting: /var/log/tibco/pooler/iapi-logs/One-iapi-watch.log
2016/04/12 20:44:32.702577 prospector_log.go:213: DBG  Update existing file for harvesting: /var/log/tibco/pooler/iapi-logs/One-iapi-watch.log
2016/04/12 20:44:32.702581 prospector_log.go:243: DBG  Not harvesting, file didn't change: /var/log/tibco/pooler/iapi-logs/One-iapi-watch.log
2016/04/12 20:44:33.702677 reader.go:138: DBG  End of file reached: /var/log/tibco/pooler/iapi-logs/One-iapi-watch.log; Backoff now.
2016/04/12 20:44:35.702881 reader.go:138: DBG  End of file reached: /var/log/tibco/pooler/iapi-logs/One-iapi-watch.log; Backoff now.
2016/04/12 20:44:37.702454 spooler.go:132: DBG  Flushing spooler because of timeout. Events flushed: 1
2016/04/12 20:44:37.702897 client.go:193: DBG  Publish: {
  "@timestamp": "2016-04-05T18:47:18.444Z",
  "appInfo": {
    "appid": "Pooler",
    "appname": "SessionManager",
    "host": "demohost.mydomain.com",
    "ip": "192.168.128.113",
    "pid": 13982
  },
  "beat": {
    "hostname": "localhost.localdomain",
    "name": "localhost.localdomain"
  },
  "input_type": "log",
  "level": "INFO",
  "logger": "iapi.logger",
  "msg": "FetchActiveSessionToken process ended",
  "offset": 0,
  "source": "DataAccess/FetchActiveSessionToken.process",
  "thread": "JobCourier4",
  "type": "log",
  "userFields": {
    "ApplicationId": "PROFAPP_001",
    "RequestTrackingId": "RetrieveTBProfileToken-6066477"
  }
}
2016/04/12 20:44:37.702940 output.go:107: DBG  output worker: publish 1 events
2016/04/12 20:44:37.705989 output.go:107: DBG  output worker: publish 1 events
panic: interface conversion: interface is string, not common.Time


goroutine 12 [running]:
panic(0x9bbec0, 0xc82019cb00)
        /usr/local/go/src/runtime/panic.go:464 +0x3e6
github.com/elastic/beats/libbeat/outputs/elasticsearch.getIndex(0xc8201ae5a0, 0xc82014a180, 0x4, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/elasticsearch/client.go:195 +0xd9
github.com/elastic/beats/libbeat/outputs/elasticsearch.eventBulkMeta(0xc82014a180, 0x4, 0xc8201ae5a0, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/elasticsearch/client.go:180 +0x58
github.com/elastic/beats/libbeat/outputs/elasticsearch.bulkEncodePublishRequest(0xc82011ec80, 0xc82014a180, 0x4, 0xc8200221c8, 0x1, 0x1, 0x0, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/elasticsearch/client.go:166 +0xe0
github.com/elastic/beats/libbeat/outputs/elasticsearch.(*Client).PublishEvents(0xc82007e1c0, 0xc8200221c8, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/elasticsearch/client.go:127 +0x2f1
github.com/elastic/beats/libbeat/outputs/mode.(*SingleConnectionMode).PublishEvents.func1(0xc82010b200)
        /go/src/github.com/elastic/beats/libbeat/outputs/mode/single.go:76 +0x9c
github.com/elastic/beats/libbeat/outputs/mode.(*SingleConnectionMode).publish(0xc82010b200, 0x7ff8af234e58, 0xc820159980, 0x1, 0xc820239b40, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/mode/single.go:131 +0x319
github.com/elastic/beats/libbeat/outputs/mode.(*SingleConnectionMode).PublishEvents(0xc82010b200, 0x7ff8af234e58, 0xc820159980, 0x1, 0xc8200221c8, 0x1, 0x1, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/mode/single.go:86 +0x75
github.com/elastic/beats/libbeat/outputs/elasticsearch.(*elasticsearchOutput).BulkPublish(0xc820156120, 0x7ff8af234e58, 0xc820159980, 0x1, 0xc8200221c8, 0x1, 0x1, 0x0, 0x0)
        /go/src/github.com/elastic/beats/libbeat/outputs/elasticsearch/output.go:220 +0x94
github.com/elastic/beats/libbeat/publisher.(*outputWorker).sendBulk(0xc820013630, 0xc820239d40, 0xc8200221c8, 0x1, 0x1)
        /go/src/github.com/elastic/beats/libbeat/publisher/output.go:110 +0x190
github.com/elastic/beats/libbeat/publisher.(*outputWorker).onBulk(0xc820013630, 0xc820239d40, 0xc8200221c8, 0x1, 0x1)
        /go/src/github.com/elastic/beats/libbeat/publisher/output.go:86 +0x39d
github.com/elastic/beats/libbeat/publisher.(*outputWorker).onMessage(0xc820013630, 0xffffffffffff0101, 0x7ff8af234e58, 0xc820159980, 0x0, 0xc8200221c8, 0x1, 0x1)
        /go/src/github.com/elastic/beats/libbeat/publisher/output.go:69 +0x79
github.com/elastic/beats/libbeat/publisher.(*messageWorker).onEvent(0xc820013630, 0xc820150101, 0x7ff8af234e58, 0xc820159980, 0x0, 0xc8200221c8, 0x1, 0x1)
        /go/src/github.com/elastic/beats/libbeat/publisher/worker.go:74 +0xb2
github.com/elastic/beats/libbeat/publisher.(*messageWorker).run(0xc820013630)
        /go/src/github.com/elastic/beats/libbeat/publisher/worker.go:60 +0x34c
created by github.com/elastic/beats/libbeat/publisher.(*messageWorker).init
        /go/src/github.com/elastic/beats/libbeat/publisher/worker.go:48 +0x126
```
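The crash site in the trace is the index-name calculation in the Elasticsearch output (`getIndex` in client.go:195), which reads `@timestamp` from the event. A rough sketch of the failure mode, with the comma-ok assertion shown for contrast; this is illustrative only, not the exact beats code:

```go
package main

import (
	"fmt"
	"time"
)

// commonTime stands in for libbeat's common.Time in this sketch.
type commonTime time.Time

// indexName mirrors the shape of the failing code: an unchecked type
// assertion that panics once @timestamp has been overwritten with a string.
func indexName(event map[string]interface{}, base string) string {
	ts := event["@timestamp"].(commonTime) // panics: interface is string, not commonTime
	return fmt.Sprintf("%s-%s", base, time.Time(ts).Format("2006.01.02"))
}

// indexNameSafe uses the comma-ok form and degrades gracefully instead of
// crashing the whole process.
func indexNameSafe(event map[string]interface{}, base string) string {
	ts, ok := event["@timestamp"].(commonTime)
	if !ok {
		ts = commonTime(time.Now())
	}
	return fmt.Sprintf("%s-%s", base, time.Time(ts).Format("2006.01.02"))
}

func main() {
	event := map[string]interface{}{"@timestamp": "2016-04-05T18:47:18.444Z"}
	fmt.Println(indexNameSafe(event, "filebeat")) // prints an index name
	fmt.Println(indexName(event, "filebeat"))     // panics with the interface conversion error
}
```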
@monicasarbu (Contributor)

Thanks for reporting a 5.0.0-alpha1 bug! I'm adding the Pioneer Program label to the issue to make you eligible for a gift package!

@jeffearl (Author)

Thanks for the Pioneer tag! I'll have to go find my coonskin cap.

tsg self-assigned this Apr 19, 2016
monicasarbu added P1 and removed priority labels Apr 19, 2016
tsg pushed a commit to tsg/beats that referenced this issue Apr 20, 2016
If `json.keys_under_root` and `json.overwrite_keys` are set, and the keys
`@timestamp` or `type` show up in the decoded JSON, they can cause issues because
the Elasticsearch output makes some assumptions about them: `@timestamp` has to
be of type `common.Time` and `type` has to be of type `string`.

The fix attempts to make use of the `@timestamp` and `type` from the JSON, but if
parsing fails or the resulting values are invalid, the fields are not overwritten
and an error key is added to help with troubleshooting.

Fixes elastic#1378.

Also hardens the `getIndex` call in libbeat, to protect against wrong types there
as well.
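A minimal sketch of the behavior the commit message describes, assuming a helper along these lines (the function name and the error key are illustrative, not the actual implementation):

```go
package main

import (
	"fmt"
	"time"
)

// Sketch of the described fix (not the actual beats code): overwrite
// @timestamp only when the JSON value parses as a timestamp; otherwise keep
// the original value and record an error key (key name illustrative).
func mergeTimestamp(event, decoded map[string]interface{}) {
	raw, present := decoded["@timestamp"]
	if !present {
		return
	}
	s, isString := raw.(string)
	if !isString {
		event["json_error"] = fmt.Sprintf("@timestamp not a string: %T", raw)
		return
	}
	ts, err := time.Parse(time.RFC3339, s) // accepts 2016-04-05T18:47:18.444Z
	if err != nil {
		event["json_error"] = "@timestamp parse error: " + err.Error()
		return
	}
	event["@timestamp"] = ts // only now replace the original value
}

func main() {
	event := map[string]interface{}{"@timestamp": time.Now()}
	decoded := map[string]interface{}{"@timestamp": "2016-04-05T18:47:18.444Z"}
	mergeTimestamp(event, decoded)
	fmt.Println(event["@timestamp"]) // 2016-04-05 18:47:18.444 +0000 UTC
}
```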
andrewkroh pushed a commit that referenced this issue Apr 20, 2016