Super slow Kibana UI on first load .. related to /_field_stats API call. #9386
Comments
Note I just found this post back from March where someone else was reporting a similar issue: https://discuss.elastic.co/t/kibana-field-stats-request-takes-20000-ms-if-number-of-indices-and-shards-are-more/43468/15
30 seconds is quite unusual. I'd suggest seeking some tuning advice in the Elasticsearch discuss forum. There might be some changes you can make to your ES setup to improve field stats performance.
@Bargs, From a performance perspective, our cluster looks something like this: Masters
The main cause of slowness in field_stats I've seen is the number of shards. field_stats needs to run per shard, so reducing the number of shards per index might help. Beyond that I don't have a ton of tuning advice myself; did you try posting in the Elasticsearch discuss forum? Since the issue here is field_stats performance, we first need to diagnose why field_stats is slow in this particular case. If you post in Discuss and don't get any good advice I can try to bug some ES team members.
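A minimal sketch of the shard-reduction suggestion, assuming daily `logs-*` indices on Elasticsearch 2.x (the template name and shard count are placeholders to adapt to your data volume): an index template applies the lower shard count to every newly created daily index.

```sh
# Hypothetical template that gives future daily logs-* indices fewer primary shards.
# Fewer shards per index means fewer per-shard field_stats executions.
curl -XPUT 'http://localhost:9200/_template/logs_fewer_shards' -d '{
  "template": "logs-*",
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}'
```

Existing indices keep their shard count, so the effect only shows up as old indices age out.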
@Bargs,
@diranged can you please suggest if you got any inputs on the issue? We are still facing this issue.
@diranged we are also using the `logstash-*` pattern with the field_stats API.
@diranged @ersushantsood can you try to bisect the issue a bit and run the field stats against only a subset of the indices? What would be interesting is:
@diranged Have you tried isolating the field stats lookup test to just hot and/or warm nodes by altering the pattern? That should prove whether it is indeed the warm nodes that are causing the slowdown or not.
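A rough sketch of that kind of bisection, using the Elasticsearch 2.x `_field_stats` endpoint directly (the host, index subsets, and the `@timestamp` field name are placeholders): time the same call against the full pattern and against progressively narrower subsets, e.g. a single week or only indices that live on hot nodes, and compare.

```sh
# Full pattern -- this is roughly what Kibana triggers on load.
curl -s -o /dev/null -w 'logs-*: %{time_total}s\n' -XPOST \
  'http://localhost:9200/logs-*/_field_stats?level=indices' \
  -d '{ "fields": ["@timestamp"] }'

# Narrower subset (placeholder date range); a large timing gap points at the
# indices, or the nodes holding them, that were excluded.
curl -s -o /dev/null -w 'logs-2016-11-0*: %{time_total}s\n' -XPOST \
  'http://localhost:9200/logs-2016-11-0*/_field_stats?level=indices' \
  -d '{ "fields": ["@timestamp"] }'
```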
I am experiencing the same issue. Even when I look at just one index, the _field_stats query takes 8 seconds on a 5+1 shard, 800 GB index.
Work being done in #11014 should resolve this issue.
oh this makes my day! 👍
The new field_caps API won't help us on this issue. This issue is caused by the field_stats call that figures out which indices to query based on the selected time range. To fix this we need to consider #6644
@Bargs we will get rid of field stats so I wonder why you still need it?
I don't think we still need it; we can probably just remove our use of it in this scenario. I just wanted to point out that switching to the field_caps API to get
The solution here is to not use the field_stats behavior for index patterns. Since Elasticsearch now has performance improvements that help when querying across time-based indices, the "expand" behavior for index patterns that we added as the default early in 4.x, which leveraged the field_stats API, is no longer necessary and can even be harmful. Starting in 5.4.0, we changed the default for all new index patterns to not use the expanding behavior, and we will soon be deprecating that option entirely. I'd recommend anyone on 5.4.0 onward uncheck the "Expand index pattern when searching" option for all of their index patterns, and this problem should go away.
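For anyone who would rather audit their existing index patterns than click through each one, the saved index-pattern documents can be listed straight from the `.kibana` index (a sketch assuming the pre-6.x `.kibana` layout; the exact field backing the "Expand index pattern when searching" checkbox may differ between versions, so check the returned documents).

```sh
# List saved index patterns; inspect each document for the flag that backs the
# "Expand index pattern when searching" checkbox (field name varies by version).
curl -s 'http://localhost:9200/.kibana/index-pattern/_search?pretty' -d '{
  "size": 50
}'
```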
w00t
Kibana version: 4.6.2
Elasticsearch version: 2.4.1
Server OS version: Ubuntu 14.04.3
Browser version: Chrome 54
Browser OS version: OSX
Description of the problem including expected versus actual behavior:
Issue #4342 introduced a new way to dynamically find indices when searching against them. This works just fine for actual searches, but it causes problems when it comes time for Kibana to figure out which indexes to actually search: the change uses the `/_field_stats` API to determine which indexes to query.

We store 30 days worth of indexes named `logs-YYYY-MM-DD` and `events-YYYY-MM-DD` in our cluster. Each day we store between 1-2 TB of raw data (unduplicated; double it for replication). In any given month we're storing ~90-100 TB of duplicated data and around 30-40 billion events.

We have two clusters ingesting identical data and running the same versions of Kibana and Elasticsearch. The only difference is that on the newer cluster we used the `logs-*` and `events-*` patterns rather than `[logs-]YYYY-MM-DD` and `[events-]YYYY-MM-DD`.

A quick bit of debugging showed that the `logs-*`-configured Kibana instances were issuing different API calls, in particular an extremely long-lasting `/_field_stats` call. This call takes an average of 32-35 seconds on our cluster almost every single time, so the question I'm asking here is whether or not this is expected behavior. Is this a normal amount of time to load up this data? Is it something that's not really well tested on large clusters? Or should we simply go back to the old model where we explicitly list the daily index strategy in our index pattern?