Metricbeat pct fields can be float and long which causes elasticsearch to throw an exception #5032
Comments
@randude Metricbeat comes with its own template, which you should make sure to load into ES. Normally, when sending the data directly to ES, this happens automatically, but not when using Logstash as an intermediary. There are two ways of solving this. You can run
Or you can export the template and then use the
I will close this one as a "question" for now, because we prefer questions to go to the discuss forums.
@tsg I'm not sure why you closed this. Metricbeat should NOT sometimes send float values as integers. It should always send them as floats.
Generally speaking, using Metricbeat without its template is going to result in a lot of errors, so the correct solution is to use the Metricbeat template. That said, we do have code that should write all floats in the dotted format, so I'm reopening this to investigate that.
@tsg I see something similar here. I have a metricbeat export (json dump). Indexing using the template results in some pct fields being mapped as float, others as long - see below. How should core pct values be mapped, given the number is undefined? Looking at the template, this appears to be dynamic.
I think I know what causes this: 0 values for pct cause the node to try to map the field as a long, while anything else is mapped as a float. If two docs are indexed at the same time, one with a 0 and another with a float for the cpu pct value, the second attempt at a dynamic mapping can be rejected.
Adding this to the metricbeat mapping resolves the issue, I think:
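The race described above can be sketched with a toy model of Elasticsearch's dynamic type inference (the `infer_es_type` helper and the two-sample list are illustrative only, not a real API):

```python
def infer_es_type(value):
    # Toy version of Elasticsearch dynamic mapping for JSON numbers:
    # an integer token is mapped as "long", a fractional one as "float".
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"unsupported sample: {value!r}")
    return "float" if isinstance(value, float) else "long"

field = "system.cpu.total.pct"
mapping = {}    # field -> type the index locked in on first contact
conflicts = []  # (existing, attempted) pairs the index would reject

# An idle CPU sample serialized as 0, then a busy one serialized as 0.53.
for value in (0, 0.53):
    inferred = infer_es_type(value)
    locked = mapping.setdefault(field, inferred)
    if locked != inferred:
        conflicts.append((locked, inferred))

print(conflicts)  # the second doc triggers the long-to-float rejection
```

Whichever document arrives first wins the mapping; every later document of the other numeric shape is rejected, which is exactly the `cannot be changed from type [long] to [float]` error quoted later in this thread.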
The alternative would be to just ensure "0" is passed as a float.
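The mapping addition itself was stripped from the comment above, but judging by the full workaround template quoted at the end of this thread, it is presumably the long-to-float dynamic template along these lines:

```json
{
  "percentage_fields_long_to_float": {
    "path_match": "*.pct",
    "match_mapping_type": "long",
    "mapping": {
      "type": "float"
    }
  }
}
```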
@gingerwizard What you report above we should have in our template. Could you open a separate issue for that? How to do it was kind of an open question: https://github.com/elastic/beats/blob/master/metricbeat/module/docker/cpu/_meta/fields.yml#L37 And you have the solution, I think. If you also have JSON events which are not part of the template, this upcoming feature should help: #6024
@gingerwizard @ruflin I just did a fresh install and the problem is definitely still happening with 6.2.3:
@ctindel It seems we never opened an issue for it, so we forgot about it :-( As it's a different issue from the one reported here initially, we should have a separate issue for it. Could you open one?
Hi @ruflin - Is this treated as an open issue? I'm able to reproduce it very easily when I try to point metricbeat at logstash instead of direct to elasticsearch
logstash-simple.conf:
elasticsearch log error:
@beirtipol the fix was already merged for the 6.3 branch.
@ctindel thanks. Any ETA on when 6.3 might be released? (I'm hunting around the elastic.co site but can't see any indication.)
Closing this issue as it will be resolved in 6.3. @beirtipol We don't announce exact release dates, but you can expect it in a few weeks. If you want to try it earlier, I can share some snapshot builds from master.
I can hold off for a few weeks, thanks Nicolas. I can work around it by pointing metricbeat at elastic directly.
We are seeing the same issue with dynamic fields from the windows.perfmon module (metricbeat 6.3.0 and 6.4.0). Is it possible that this default is the cause:
Sending 0 instead of 0.0 (in the float-format case) seems to cause a lot of trouble.
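The wire format is the whole story here: Elasticsearch only sees the serialized JSON token, so whether the producer writes `0` or `0.0` decides which type dynamic mapping picks. A minimal illustration in Python:

```python
import json

# An integral sample loses its "floatness" on the wire unless the
# serializer keeps the trailing ".0".
as_long = json.dumps({"pct": 0})     # dynamic mapping would pick long
as_float = json.dumps({"pct": 0.0})  # dynamic mapping would pick float

print(as_long)
print(as_float)
```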
I have the same issue with dynamic mapping of the perfmon module.
Currently we are using
I also got this issue the other day. I look forward to the update 👍
Same here with metricbeat (metricbeat-6.5.4-1.x86_64) and logstash (logstash-6.5.4-1.noarch). The error:

```
[2019-01-04T05:35:34,452][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.5.4-2019.01.04", :_type=>"doc", :routing=>nil}, #LogStash::Event:0x3415972f], :response=>{"index"=>{"_index"=>"filebeat-6.5.4-2019.01.04", "_type"=>"doc", "_id"=>"7QcAGGgBoaWI95dE0-57", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.process.memory.rss.pct] cannot be changed from type [long] to [float]"}}}}
```
I have a similar problem, too.
I didn't have data I needed to keep, so I stopped all the metricbeats on the network, then in Kibana I deleted metricbeat-* elasticsearch indices and kibana index patterns. |
Hello, I am facing the same issue with 6.6.2.
It looks like
I use filebeat and heartbeat on my hosts, and I have no trouble with the index template on those.
@Raphyyy Judging by the
Closing this issue based on the above. |
I have a fresh setup of Elastic Stack 6.7 and encounter the exact same issue.
The beat was then enrolled with the system module active and the following extra configuration:
Logstash immediately throws the following errors:
As this is a fresh installation with no special configuration, I'm not sure whether this is indeed a configuration error.
Can we please take this to discuss? Happy to open a fresh issue if it turns out it's an actual bug. For the LS config, make sure it looks like the one here: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html including the version in the index name.
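For reference, the beats-to-Elasticsearch pipeline from the linked Logstash docs looks roughly like this (host and port are the documented defaults; adjust for your environment):

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

Keeping the beat name and version in the index name matters here, because it lets the versioned Metricbeat template (with its pct field mappings) match the index.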
Yeah, I can't seem to reproduce this on two "clean" 6.7 installs, in cloud and in docker.
v6.0.0-beta1:
I'm using metricbeat to send normalized pct fields. Metricbeat sends the data to logstash, which sends it to elasticsearch. All versions are v6.0.0-beta1.
I got this error on my elasticsearch server:

```
[metricbeat-2017.08.28][0] failed to execute bulk item (index) BulkShardRequest [[metricbeat-2017.08.28][0]] containing [4] requests
java.lang.IllegalArgumentException: mapper [system.process.cpu.total.norm.pct] cannot be changed from type [float] to [long]
```
This is because metricbeat sometimes sends the values as 2 rather than 2.0, i.e. not always as a float.
I was able to find a work around by setting a default template for my metricbeat index:
This will map all pct fields that arrive as integers back to float, as they should be.
Either this should be part of the default template, or it should just be fixed at the lower level of value creation (the latter is preferred).
```json
{
  "template": "metricbeat-*",
  "version": 60001,
  "settings": {
    "index.refresh_interval": "30s"
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "string_fields": {
            "path_unmatch": "*.pct",
            "match_mapping_type": "string",
            "mapping": {
              "type": "keyword",
              "norms": false
            }
          }
        },
        {
          "percentage_fields_long_to_float": {
            "path_match": "*.pct",
            "match_mapping_type": "long",
            "mapping": {
              "type": "float"
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "@version": {
          "type": "keyword"
        }
      }
    }
  }
}
```
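The preferred "fix at value creation" would live in Beats' own (Go) serialization code, but the idea can be sketched in Python: coerce every pct leaf to float before encoding so an integral sample keeps its decimal point on the wire. `coerce_pct_fields` is a hypothetical helper for illustration, not Beats code:

```python
import json

def coerce_pct_fields(event):
    # Recursively force every "pct" leaf to float so that 0 serializes
    # as 0.0 and Elasticsearch dynamically maps the field as float on
    # first contact instead of locking it in as long.
    out = {}
    for key, value in event.items():
        if isinstance(value, dict):
            out[key] = coerce_pct_fields(value)
        elif (key == "pct" and isinstance(value, (int, float))
              and not isinstance(value, bool)):
            out[key] = float(value)
        else:
            out[key] = value
    return out

event = {"system": {"process": {"cpu": {"total": {"norm": {"pct": 0}}}}}}
print(json.dumps(coerce_pct_fields(event)))
```

With this in place both the 0 and the 0.53 samples arrive as JSON floats, so dynamic mapping never sees a long for a pct field in the first place.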