
Metricbeat pct fields can arrive as both float and long, which causes Elasticsearch to throw an exception #5032

Closed
randude opened this issue Aug 28, 2017 · 28 comments
Labels: bug, libbeat, question, Team:Integrations (Label for the Integrations team)

@randude commented Aug 28, 2017

v6.0.0-beta1:

I'm using Metricbeat to send normalized pct fields. Metricbeat sends the data to Logstash, which sends it to Elasticsearch. All versions are v6.0.0-beta1.

I got this error on my elasticsearch server:
[metricbeat-2017.08.28][0] failed to execute bulk item (index) BulkShardRequest [[metricbeat-2017.08.28][0]] containing [4] requests
java.lang.IllegalArgumentException: mapper [system.process.cpu.total.norm.pct] cannot be changed from type [float] to [long]

This is because Metricbeat sometimes sends the values as 2 instead of 2.0, i.e. not always as a float.

I was able to find a workaround by setting a default template for my metricbeat index.
It maps all pct fields that arrive as integers back to float, as they should be.
Either this should be part of the default template, or it should be fixed at the lower level where the values are created (the latter is preferred).

{ "template": "metricbeat-*", "version": 60001, "settings": { "index.refresh_interval": "30s" }, "mappings": { "_default_": { "dynamic_templates": [ { "string_fields": { "path_unmatch": "*.pct", "match_mapping_type": "string", "mapping": { "type": "keyword", "norms": false } } }, { "percentage_fields_long_to_float": { "path_match": "*.pct", "match_mapping_type": "long", "mapping": { "type": "float" } } }], "properties": { "@timestamp": { "type": "date" }, "@version": { "type": "keyword" } } } } }

@tsg (Contributor) commented Aug 28, 2017

@randude Metricbeat comes with its own template, which you should make sure to load in ES. Normally, when sending the data directly to ES, this happens automatically, but not when using Logstash as an intermediary point.

There are two ways of solving this. You can run metricbeat setup from a machine that has access to ES to set up the templates and the dashboards:

metricbeat setup -e -E output.elasticsearch.hosts=...

Or you can export the template and then use the manage_template options of the Logstash elasticsearch output to load it:

metricbeat export template > metricbeat-template.json
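
A sketch of what the Logstash output could then look like, assuming the exported file has been copied to the Logstash host (the path, template name, and index pattern below are illustrative, not taken from this thread):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    manage_template => true
    template => "/etc/logstash/metricbeat-template.json"
    template_name => "metricbeat"
    template_overwrite => true
  }
}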

I will close this one as a "question" for now, because we prefer questions to go to the discuss forums.

@tsg tsg closed this as completed Aug 28, 2017
@randude (Author) commented Aug 28, 2017

@tsg I'm not sure why you closed this. Metricbeat should NOT send float numbers sometimes as integers. It should always send them as floats.
My mapping is a workaround and it doesn't solve the issue at hand.

@tsg tsg added the bug and libbeat labels and removed the Metricbeat label Aug 28, 2017
@tsg (Contributor) commented Aug 28, 2017

Generally speaking, using Metricbeat without its template is going to result in a lot of errors, so the correct solution is to use the Metricbeat template.

That said, we do have code that should write all floats in the dotted format, so I'm reopening this to investigate that.

@tsg tsg reopened this Aug 28, 2017
@gingerwizard

@tsg I see something similar here. I have a Metricbeat export (JSON dump). Indexing using the template results in some pct fields being mapped as float and others as long - see below. How should core pct values be mapped, given that the number of cores is undefined? Looking at the template, this appears to be dynamic.

{
  "cpu": {
    "properties": {
      "core": {
        "properties": {
          "0": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "1": {
            "properties": {
              "pct": {
                "type": "long"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "2": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "3": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "4": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "5": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "6": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "7": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "8": {
            "properties": {
              "pct": {
                "type": "long"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "9": {
            "properties": {
              "pct": {
                "type": "long"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "10": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "11": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "12": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "13": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "14": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          },
          "15": {
            "properties": {
              "pct": {
                "type": "float"
              },
              "ticks": {
                "type": "long"
              }
            }
          }
        }
      },
      "kernel": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          },
          "ticks": {
            "type": "long"
          }
        }
      },
      "system": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          },
          "ticks": {
            "type": "long"
          }
        }
      },
      "total": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          }
        }
      },
      "user": {
        "properties": {
          "pct": {
            "type": "scaled_float",
            "scaling_factor": 1000
          },
          "ticks": {
            "type": "long"
          }
        }
      }
    }
  }
}

@gingerwizard

I think I know what causes this: 0 values for pct cause the node to try to map the field as a long, while anything else is mapped as a float. If two docs are indexed at the same time, one with a 0 and another with a float for the cpu pct value, the second attempt at a dynamic mapping can be rejected.
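
To illustrate the dynamic-mapping behaviour described above, a minimal sketch against a throwaway index (the index name is hypothetical, ES 6.x defaults assumed):

# Index a document whose pct value is the integer 0
curl -XPOST 'localhost:9200/pct-test/doc' -H 'Content-Type: application/json' -d '{"pct": 0}'
# Check the resulting dynamic mapping: "pct" comes back as "type": "long"
curl -XGET 'localhost:9200/pct-test/_mapping?pretty'
# Any later mapping update that needs "float" for the same field is then rejected with
# "mapper [pct] cannot be changed from type [long] to [float]"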

@gingerwizard

Adding this to the metricbeat mapping resolves the issue, I think:

{
	"docker.cpu.core.pct": {
	  "path_match": "docker.cpu.core.*.pct",
	  "mapping": {
	    "type": "float"
	  }
	}
},
{
	"docker.cpu.core.ticks": {
	  "path_match": "docker.cpu.core.*.ticks",
	  "mapping": {
	    "type": "long"
	  }
	}
}
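
For context, these entries would sit inside the dynamic_templates array of the index template, in the same way as the workaround template earlier in this thread, e.g. (the surrounding structure is shown for illustration only):

{
  "template": "metricbeat-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        { "docker.cpu.core.pct": { "path_match": "docker.cpu.core.*.pct", "mapping": { "type": "float" } } },
        { "docker.cpu.core.ticks": { "path_match": "docker.cpu.core.*.ticks", "mapping": { "type": "long" } } }
      ]
    }
  }
}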

@gingerwizard

The alternative would be just to ensure "0" is passed as a float.

@ruflin (Contributor) commented Feb 6, 2018

@gingerwizard What you report above is something we should have in our template. Could you open a separate issue for that? How to do it was kind of an open question: https://github.com/elastic/beats/blob/master/metricbeat/module/docker/cpu/_meta/fields.yml#L37 and I think you have the solution.

If you also have json events which are not part of the template, this upcoming feature should help: #6024

@ctindel (Contributor) commented Apr 2, 2018

@gingerwizard @ruflin I just did a fresh install and the problem is definitely still happening with 6.2.3:

[2018-04-01T20:00:05,442][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-6.2.3-2018.04.02", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x2aa897b5>], :response=>{"index"=>{"_index"=>"metricbeat-6.2.3-2018.04.02", "_type"=>"doc", "_id"=>"2_Wng2IBJsKPXoGhVOah", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [docker.cpu.core.23.pct] cannot be changed from type [long] to [float]"}}}}

@ruflin (Contributor) commented Apr 4, 2018

@ctindel It seems we never opened an issue for it, so we forgot about it :-( As it's a different issue from the one reported here initially, we should have a separate issue for it. Could you open one?

@beirtipol

Hi @ruflin - Is this treated as an open issue? I'm able to reproduce it very easily when I point Metricbeat at Logstash instead of directly at Elasticsearch:

  • Ubuntu 16.04 VM
  • ELK stack 6.2.4
  • Set up Elasticsearch, Kibana, Logstash, and Metricbeat from zips
  • No configuration changes
  • Create a simple pipeline for Logstash as per the wiki (below)
  • Start Elasticsearch, Kibana, Logstash
  • Run 'metricbeat setup -e'
  • Run 'metricbeat' and see entries appear in Kibana
  • Configure metricbeat.yml to output to Logstash and comment out the elasticsearch output
  • Launch Metricbeat
  • See errors in the Elasticsearch logs (below)

logstash-simple.conf:

input{
	beats {
		port => "5044"
	}
}

filter {
}

output {
	elasticsearch {
		hosts => [ "localhost:9200" ]
	}
}

elasticsearch log error:

java.lang.IllegalArgumentException: mapper [system.filesystem.used.pct] cannot be changed from type [float] to [long]
	at org.elasticsearch.index.mapper.MappedFieldType.checkTypeName(MappedFieldType.java:150) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.index.mapper.MappedFieldType.checkCompatibility(MappedFieldType.java:162) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.index.mapper.FieldTypeLookup.checkCompatibility(FieldTypeLookup.java:128) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.index.mapper.FieldTypeLookup.copyAndAddAll(FieldTypeLookup.java:94) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:426) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:353) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:285) ~[elasticsearch-6.2.4.jar:6.2.4]
	at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:313) ~[elasticsearch-6.2.4.jar:6.2.4]

@ctindel (Contributor) commented Apr 24, 2018

@beirtipol the fix was already merged for the 6.3 branch

@beirtipol

@ctindel thanks. Any ETA on when 6.3 might be released? (I'm hunting around the elastic.co site but can't see any indications)

@ruflin (Contributor) commented Apr 25, 2018

Closing this issue as it will be resolved in 6.3

@beirtipol We don't announce any exact release dates but you can expect it in a few weeks. If you want to try it earlier, I can share some snapshot builds from master.

@beirtipol commented Apr 26, 2018 via email

@sjancke commented Sep 11, 2018

We are seeing the same issue with dynamic fields from the windows.perfmon module (Metricbeat 6.3.0 and 6.4.0).

Is it possible that this default is the cause:

event.MetricSetFields.Put(r.measurement[counterPath], 0)

Sending 0 instead of 0.0 (in the case of a float-formatted field) seems to cause a lot of trouble.
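
For what it's worth, Go's standard-library JSON encoder does not distinguish an integer zero from a float64 zero when serialising, so the "dotted format" for floats that @tsg mentioned earlier has to come from libbeat's own encoding or be forced by the template mapping. A small self-contained illustration (not Beats code):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Both values serialise to plain "0", so Elasticsearch's dynamic
	// mapping sees an integer either way on the first document it receives.
	a, _ := json.Marshal(map[string]interface{}{"pct": 0})
	b, _ := json.Marshal(map[string]interface{}{"pct": float64(0)})
	fmt.Println(string(a)) // {"pct":0}
	fmt.Println(string(b)) // {"pct":0}
}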

@willemdh

I have the same issue with dynamic mapping of the perfmon module.

@sjancke commented Sep 11, 2018

Currently we are using setup.template.append_fields, but it's experimental (https://www.elastic.co/guide/en/beats/metricbeat/master/configuration-template.html).
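
For reference, a minimal sketch of that setting in metricbeat.yml (the field name below is just an example, not one of the actual perfmon fields):

setup.template.append_fields:
- name: windows.perfmon.example.pct
  type: float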

@dmccuk commented Sep 12, 2018

I also got this issue the other day:
https://discuss.elastic.co/t/cannot-be-changed-from-type-float-to-long/147710

I look forward to the update 👍

@fromanm1 commented Jan 4, 2019

Same here with Metricbeat (metricbeat-6.5.4-1.x86_64) and Logstash (logstash-6.5.4-1.noarch).

the error:

[2019-01-04T05:35:34,452][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.5.4-2019.01.04", :_type=>"doc", :routing=>nil}, #LogStash::Event:0x3415972f], :response=>{"index"=>{"_index"=>"filebeat-6.5.4-2019.01.04", "_type"=>"doc", "_id"=>"7QcAGGgBoaWI95dE0-57", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.process.memory.rss.pct] cannot be changed from type [long] to [float]"}}}}

@ruflin ruflin added the Team:Integrations label Jan 7, 2019
@ondrejgr commented Jan 7, 2019

I have a similar problem, too.
I'm using Metricbeat (Windows 2012 R2) version 6.5.4 (amd64), libbeat 6.5.4 [bd8922f built 2018-12-17 20:29:15].
Default configuration (system module). Logstash and ES 6.5.4 on Docker (Ubuntu 18.04).
Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-6.5.4-2019.01.07", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x78b7a8cb>], :response=>{"index"=>{"_index"=>"metricbeat-6.5.4-2019.01.07", "_type"=>"doc", "_id"=>"OAmSKGgBjbqzJqEZHTQF", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.process.cpu.total.pct] cannot be changed from type [long] to [float]"}}}}

@ondrejgr commented Jan 7, 2019

I didn't have data I needed to keep, so I stopped all the Metricbeat instances on the network, then in Kibana I deleted the metricbeat-* Elasticsearch indices and the Kibana index patterns.
I ran the metricbeat and filebeat dashboards setup (just to be sure it's OK).
Then I started all the Metricbeat instances again.
All the dashboards are now OK and collecting correct data.

@Raphyyy commented Mar 27, 2019

Hello

I am facing the same issue with 6.6.2.
My config: [metricbeat] -> [logstash -> ES]. The Metricbeat hosts can't access the ES service.
My system is quite simple and I followed the guide without trouble.
I set up the template once with curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_template/metricbeat-6.6.2 -d@/tmp/metricbeat.template.json, but each day at midnight UTC, when the index rotates, I get this:

[2019-03-27T00:00:01,151][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [heartbeat-6.6.2-2019.03.27] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2019-03-27T00:00:01,856][INFO ][o.e.c.m.MetaDataMappingService] [e1] [heartbeat-6.6.2-2019.03.27/0tUxOUVMQUK0wNPywu_h6g] create_mapping [doc]
[2019-03-27T00:00:01,911][INFO ][o.e.c.m.MetaDataMappingService] [e1] [heartbeat-6.6.2-2019.03.27/0tUxOUVMQUK0wNPywu_h6g] update_mapping [doc]
[2019-03-27T00:00:02,813][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [.monitoring-es-6-2019.03.27] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [doc]
[2019-03-27T00:00:05,325][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [metricbeat-6.6.2-2019.03.27] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2019-03-27T00:00:05,579][INFO ][o.e.c.m.MetaDataCreateIndexService] [e1] [.monitoring-kibana-6-2019.03.27] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [doc]
[2019-03-27T00:00:05,963][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] create_mapping [doc]
[2019-03-27T00:00:05,967][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] update_mapping [doc]
[2019-03-27T00:00:09,134][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] update_mapping [doc]
[2019-03-27T00:00:09,226][INFO ][o.e.c.m.MetaDataMappingService] [e1] [metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg] update_mapping [doc]
[2019-03-27T00:00:09,227][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [e1] failed to put mappings on indices [[[metricbeat-6.6.2-2019.03.27/3OucW39hT5G-WLuoZmhbUg]]], type [doc]
java.lang.IllegalArgumentException: mapper [system.process.cpu.total.pct] cannot be changed from type [long] to [float]

It looks like the update_mapping process does not look at the index template.
So the only workaround I found is a cron job each day at midnight, which includes deleting all my data:

curl -XDELETE 'http://localhost:9200/metricbeat-*'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_template/metricbeat-6.6.2 -d@/tmp/metricbeat.template.json

I use Filebeat and Heartbeat on my hosts and I have no trouble with the index templates for those.
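
One thing worth checking, given the "templates []" in the creating-index log lines above, is whether the installed template's pattern actually matches the daily index names; a quick way to look (the template name matches the curl above, the interpretation is only a guess):

curl -s 'http://localhost:9200/_template/metricbeat-6.6.2?pretty'
# If its "index_patterns" does not cover "metricbeat-6.6.2-*", new daily indices are created with
# "templates []" and dynamic mapping decides the pct field types, reproducing the error above.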

@fearful-symmetry fearful-symmetry self-assigned this Mar 27, 2019
@fearful-symmetry (Contributor)

@Raphyyy Judging by the "failed to put mappings on indices" log line, I'm guessing this is some sort of setup or config problem. You can ask for help on https://discuss.elastic.co.

@ruflin (Contributor) commented Mar 29, 2019

Closing this issue based on the above.

@ruflin ruflin closed this as completed Mar 29, 2019
@matthenning commented Mar 29, 2019

I have a fresh setup of Elastic Stack 6.7 and I'm encountering the exact same issue.
First the template was imported:

# metricbeat setup --template -E 'output.elasticsearch.hosts=["https://***:9200"]' -E 'output.elasticsearch.username=elastic' -E 'output.elasticsearch.password=***' -E 'setup.kibana.host="https://***:5601"'
Loaded index template

Then the beat was enrolled with the system module active and the following extra configuration:

metricsets:
  - cpu
  - load
  - memory
  - network
  - process
  - process_summary
  - uptime
  - socket_summary
  - core
  - diskio
  - filesystem
  - fsstat
enabled: true
processes:
  - '.*'

Logstash immediately throws the following errors:

[2019-03-29T14:26:26,769][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-2019.03.29", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x155eaf4b>], :response=>{"index"=>{"_index"=>"metricbeat-2019.03.29", "_type"=>"doc", "_id"=>"2b-hyWkBEZkmuxpyTNip", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.filesystem.used.pct] cannot be changed from type [float] to [long]"}}}}
[2019-03-29T14:26:27,277][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"metricbeat-2019.03.29", :_type=>"doc", :routing=>nil}, #<LogStash::Event:0x1212e991>], :response=>{"index"=>{"_index"=>"metricbeat-2019.03.29", "_type"=>"doc", "_id"=>"4L-hyWkBEZkmuxpyTNiq", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [system.diskio.iostat.request.avg_size] cannot be changed from type [float] to [long]"}}}}

As this is a fresh installation with no special configuration, I'm not sure this is indeed a configuration error.
I also tried stopping Logstash, deleting all metricbeat-* indices and the template, importing the template again, and starting Logstash.

@ruflin (Contributor) commented Mar 29, 2019

Can we please take this to discuss? Happy to open a fresh issue if it turns out it's an actual bug. For the LS config, make sure it looks like the example here: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html including the version in the index name ...

@fearful-symmetry (Contributor)

Yeah, I can't seem to reproduce this on two "clean" 6.7 installs, in cloud and in Docker.
