In logtrail.json, changing the "timezone" field to a value other than "UTC" raises an error #24
Getting the same with America/Los_Angeles. Removing the timezone setting appears to fix the issue.
{
"msg":"[parse_exception] failed to parse date field [1484267001673] with format [epoch_millis]",
"path":"/logstash-*/_search",
"query":{
"size":500
},
"body":{
"sort":[
{
"@timestamp":{
"order":"asc",
"unmapped_type":"boolean"
}
}
],
"query":{
"bool":{
"must":{
"query_string":{
"analyze_wildcard":true,
"default_field":"message",
"query":"logGroup.keyword: \"/var/log/squid/access.log\""
}
},
"filter":{
"bool":{
"must":[
{
"range":{
"@timestamp":{
"gte":1484267001673,
"time_zone":"America/Los_Angeles",
"format":"epoch_millis"
}
}
}
],
"must_not":[
]
}
}
}
}
},
"statusCode":400,
"response":{
"error":{
"root_cause":[
{
"type":"parse_exception",
"reason":"failed to parse date field [1484267001673] with format [epoch_millis]"
}
],
"type":"search_phase_execution_exception",
"reason":"all shards failed",
"phase":"query",
"grouped":true,
"failed_shards":[
{
"shard":0,
"index":"logstash-2016.12.07",
"node":"v13nGpFcQrC9EUTjRh8FGA",
"reason":{
"type":"parse_exception",
"reason":"failed to parse date field [1484267001673] with format [epoch_millis]",
"caused_by":{
"type":"illegal_argument_exception",
"reason":"Parse failure at index [0] of [1484267001673]"
}
}
}
],
"caused_by":{
"type":"parse_exception",
"reason":"failed to parse date field [1484267001673] with format [epoch_millis]",
"caused_by":{
"type":"illegal_argument_exception",
"reason":"Parse failure at index [0] of [1484267001673]"
}
}
},
"status":400
}
}
Guessing this may be an issue with ES, as the search filter is valid as per the documentation. It also barfs on absolute TZ offsets ("-8:00", etc.), which the docs claim are supported.
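The failure is consistent once you note that an epoch_millis value already denotes an absolute instant, so there is nothing zone-dependent left for `time_zone` to reinterpret. A quick illustration (the value is taken from the failing query above):

```python
from datetime import datetime, timezone

# An epoch_millis value already names one absolute instant (UTC-based);
# there is nothing zone-dependent left to reinterpret, which is likely
# why Elasticsearch rejects time_zone combined with epoch formats.
millis = 1484267001673  # the lower bound from the failing query above
instant = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(instant.strftime("%Y-%m-%d %H:%M:%S"))  # 2017-01-13 00:23:21
```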
Found it: If I'm reading this correctly,
Logtrail uses time_zone in range queries. Per this documentation, https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-range-query.html#_time_zone_in_range_queries, time_zone specifies the timezone of the timestamps used in range queries, which are then converted to UTC based on the specified timezone. Can I ask why you want to change the time_zone in logtrail.json? Is the @timestamp field in your event docs not in UTC?
The reason is that I want to search logs using a time range in my time zone.
I guess I was also expecting it to convert the event timestamps to the specified timezone for display. Kibana does this based on the browser's TZ setting, but it seems like Logtrail does not: the timestamps are all UTC. I agree that the documentation SAYS they should be supported, but it seems that the code itself does not allow them for epoch timestamps. You have to provide the timestamp in a different format (not epoch_second or epoch_millis) if you want to use a TZ.
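A rough sketch of that workaround: express the lower bound as a date string instead of an epoch number, so the time_zone hint has something to apply to. The helper name and exact format string here are illustrative, not taken from logtrail's code:

```python
from datetime import datetime

# Hypothetical helper: build a range clause using a date string,
# since epoch-formatted bounds reject the time_zone parameter.
def tz_range_filter(field, local_dt, zone):
    return {
        "range": {
            field: {
                # Elasticsearch interprets this string in `zone`
                "gte": local_dt.strftime("%Y-%m-%dT%H:%M:%S"),
                "time_zone": zone,
                "format": "yyyy-MM-dd'T'HH:mm:ss",
            }
        }
    }

f = tz_range_filter("@timestamp", datetime(2017, 1, 12, 15, 43, 21),
                    "America/Los_Angeles")
```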
Opened upstream issue: elastic/elasticsearch#22621 |
Thanks for the elasticsearch issue and logtrail PR ! |
Automatic timezone detection in browser would be great and thus coherent with Kibana. |
@fbaligand it's not too bad, it looks like I could steal most of it from the Kibana code. Might take a shot at it this weekend. |
Nice news ! |
@fbaligand To understand your requirement again: if you have 2 systems, A (running in IST) and B (running in PST), shipping logs to ES via logstash, with Correct me if I am wrong? Today, since display_timestamp is mapped to the source host time by default, each event will show its time in its respective source time zone (the default config assumed all hosts are running in the same timezone). If we map this to
@sivasamyk can you publish a release with the latest tz stuff in it? I'm trying to do a local build but I cannot for the life of me get the plugin to package properly.
@brandond You can download the latest release from https://github.com/sivasamyk/logtrail/releases/tag/0.1.7 This release also contains support for multiple index patterns. Introduced display_timezone and display_timestamp_format in logtrail.json. Check out the README for more info. Let me know if you face any issues.
Thanks for the release. It's currently broken if you set display_timezone to something other than 'local', due to a bad require of moment-timezone. See #37
Thanks! |
Now something else weird is going on... it loads properly the first time, showing the most recent data, but the first auto-refresh gives me data from December 8th, which is the oldest index I currently have loaded in Elasticsearch.
I uncommented the log on line 116 of server.js, here's what it shows: {"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gt":1484905521000,"format":"epoch_millis"}}}],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gt":1481187931881,"format":"epoch_millis"}}}],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gt":1481187978645,"format":"epoch_millis"}}}],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[],"must_not":[]}}}}}}
{"index":"logstash-*","size":500,"body":{"sort":[{"@timestamp":{"order":"asc","unmapped_type":"boolean"}}],"query":{"bool":{"must":{"query_string":{"analyze_wildcard":true,"default_field":"message","query":"*"}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gt":1481188026481,"format":"epoch_millis"}}}],"must_not":[]}}}}}}
Not sure why it does a search with no timestamp in ascending order; that would definitely explain the jump back to the start of my indexes, though.
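For contrast, here is a minimal sketch of what a tail refresh has to do: carry the timestamp of the newest event already shown into every request, and only omit the range filter when nothing has been shown yet. This is an illustration of the expected behavior, not the plugin's actual code:

```python
def build_tail_query(last_millis):
    """Build the body for one tail refresh. The range filter must
    always carry the timestamp of the newest event already shown;
    dropping it (last_millis=None) restarts from the oldest index."""
    must = []
    if last_millis is not None:
        must.append({"range": {"@timestamp": {"gt": last_millis,
                                              "format": "epoch_millis"}}})
    return {
        "sort": [{"@timestamp": {"order": "asc", "unmapped_type": "boolean"}}],
        "query": {"bool": {
            "must": {"query_string": {"analyze_wildcard": True,
                                      "default_field": "message",
                                      "query": "*"}},
            "filter": {"bool": {"must": must, "must_not": []}},
        }},
    }
```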
Can you let me know the timestamp of the last event shown in the event viewer when you open logtrail for the first time? And can you let me know the changes you made in logtrail.json from the default values? I assume you did not edit the es_index_margin_in_seconds param.
Not sure how to get the raw timestamp of the last event? Is there a command I can run on the JS console? Here's my config:
{
"index_patterns" : [
{
"es": {
"default_index": "logstash-*",
"allow_url_parameter": true
},
"tail_interval_in_seconds": 15,
"es_index_time_offset_in_seconds": 0,
"display_timezone": "America/Los_Angeles",
"display_timestamp_format": "MMM DD HH:mm:ss",
"max_buckets": 500,
"nested_objects" : false,
"default_time_range_in_days" : 0,
"fields" : {
"mapping" : {
"timestamp" : "@timestamp",
"display_timestamp" : "@timestamp",
"hostname" : "logStream",
"program": "logGroup",
"message": "message"
}
}
}
]
}
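For reference, display_timezone and display_timestamp_format only affect rendering, not the stored @timestamp. Roughly the equivalent of the plugin's moment-timezone formatting, sketched in Python (the mapping of moment's "MMM DD HH:mm:ss" onto strftime codes is an assumption):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# A UTC event timestamp, rendered in the configured display_timezone.
utc_event = datetime(2017, 1, 20, 10, 19, 1, tzinfo=timezone.utc)
local = utc_event.astimezone(ZoneInfo("America/Los_Angeles"))

# moment's "MMM DD HH:mm:ss" roughly corresponds to "%b %d %H:%M:%S"
print(local.strftime("%b %d %H:%M:%S"))  # Jan 20 02:19:01
```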
The timestamp of the last event in the event viewer will be in PST time, based on your config. If we convert that to UTC (epoch_millis) you will get the timestamp used for tail. So if you can paste the last few events shown in the event viewer, that should be good.
Here's the last event returned by the first search:
{"id":"33114544732589944933745996832382567856447798587079393285","timestamp":"2017-01-20T10:19:01.779Z","display_timestamp":"2017-01-20T10:19:01.779Z","hostname":"i-0cc87399e3e7f7154","message":"type=USER_END msg=audit(1484907541.779:136134): pid=9462 uid=0 auid=0 ses=19843 msg='op=PAM:session_close grantors=pam_loginuid,pam_limits acct=\"root\" exe=\"/usr/sbin/crond\" hostname=? addr=? terminal=cron res=success'","program":"/var/log/audit/audit.log"}
which converts to 1484936341779
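That conversion follows the previous comment's instruction to treat the displayed wall-clock time as America/Los_Angeles local time. Reproducing the arithmetic:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Interpret 2017-01-20 10:19:01.779 as Los Angeles local time (PST,
# UTC-8 in January), then convert to UTC epoch milliseconds.
shown = datetime(2017, 1, 20, 10, 19, 1, 779000,
                 tzinfo=ZoneInfo("America/Los_Angeles"))
millis = round(shown.timestamp() * 1000)
print(millis)  # 1484936341779
```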
logtrail should be sending the 1484936341779 timestamp to the server during tail. I am trying to reproduce this in my local setup; it's working fine. Are you seeing any exceptions in the JS debug console? I assume you are using the 5.1.2 version of Kibana and Elasticsearch.
@brandond I have added a few logger statements in this build. The logs will be printed in the JS console of the browser. Can you please use this build and send me the logs?
Here's what I've got in the console log in Chrome after opening it and letting it sit for about a minute.
Notice that it runs searches back to back in pairs: when I open the page it runs two searches, one after another; it waits 15 seconds and then runs two more, and so on.
Thanks again. These debug logs helped. From these logs I see there are some issues with the queries sent by the logtrail client. Will debug over the weekend!
Hi @sivasamyk, actually my need is simple: I want the same timestamp behaviour as Kibana, so:
The first need already works as I expected => Nice! Concerning the second need, currently the date is displayed in logtrail using the UTC timezone. Finally, as you say, I think that @brandond's PR solves my second need. By the way, I use Kibana 4.6.4.
@brandond The issue is fixed (it was due to a wrong variable reference in app.js). Please download the latest plugin from https://github.com/sivasamyk/logtrail/releases/download/0.1.7/logtrail-5.x-0.1.7.zip @fbaligand regarding the second issue, download the latest 0.1.7 release and set the display_timezone to
@sivasamyk Thanks, I'm getting consistent data refreshes now. However, it no longer stays scrolled to the bottom of the window: when new data is added it scrolls off the bottom of the page. It also seems to take a TON of CPU time to add new events. I'm guessing this is related to the change that added sorting and deduplication of events as they're added? Hopefully there's some opportunity for optimization there. Also, is there any setting to control how many entries it keeps in the window? After a few refreshes it gets REALLY slow. Guessing this is a combination of not rolling off old events + having to sort and filter each new event as it's added.
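One way the insert path could stay cheap and bounded, sketched with illustrative names and a hypothetical MAX_EVENTS cap (not a real logtrail setting): dedup by id in O(1), insert into an already-sorted list, and roll off the oldest entries past the cap.

```python
import bisect

MAX_EVENTS = 2000  # illustrative cap, not an actual logtrail option

def add_events(buffer, seen_ids, new_events):
    """Merge new events into a timestamp-sorted, deduplicated,
    size-capped buffer. Each event is a (timestamp_millis, id) pair."""
    for ev in new_events:
        if ev[1] in seen_ids:          # dedup by id, O(1) per event
            continue
        bisect.insort(buffer, ev)      # keep the buffer sorted
        seen_ids.add(ev[1])
    while len(buffer) > MAX_EVENTS:    # roll off the oldest events
        old = buffer.pop(0)
        seen_ids.discard(old[1])
```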
I have tested it with Kibana 5.1.2, and it works great!
Thanks a lot for the work done! That said, I use Kibana 4.6.4 in production at my company; is it possible to port the fix to the 4.x branch?
@fbaligand The same changes are available in the 4.x-0.1.7 release: https://github.com/sivasamyk/logtrail/releases/download/0.1.7/logtrail-4.x-0.1.7.tar.gz
@brandond Can you let me know your average event incoming rate and tail refresh interval? And the max number of events shown in your event viewer (like 10K events)?
Hi @sivasamyk, I have tested logtrail-4.x-0.1.7 and it works great!
In logtrail.json, if I set "timezone": "Europe/Paris", I get the following error each time the logtrail plugin posts a search request: