[filebeat] Failed to parse kubernetes.labels.app #8773
Comments
I'm seeing this as well.
Me too, are there any solutions? Thanks.
Hi everyone, thank you for your detailed report. This issue is caused by label/annotation dots (`.`): Elasticsearch treats dotted keys as nested objects, so a label such as `app.kubernetes.io/component` turns `kubernetes.labels.app` into an object, which then conflicts with documents where that field is a plain keyword. We have some countermeasures in place to detect these situations when the annotations belong to the same object, but it seems it's not enough. My thinking goes toward allowing dots to be replaced with underscores (`_`). In the meanwhile, the workaround would be to drop the offending annotations. You can use the `drop_fields` processor for that (see the sketch below).
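A minimal sketch of that workaround, assuming Filebeat's `drop_fields` processor; the field names are illustrative, so list whichever labels/annotations actually contain dots in your cluster:

```yaml
processors:
  - add_kubernetes_metadata:
      in_cluster: true
  # Drop the offending dotted label/annotation fields before the events
  # are shipped to Elasticsearch (illustrative field names).
  - drop_fields:
      fields:
        - "kubernetes.labels.app"
        - "kubernetes.annotations.app"
```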
Thank you! This fix works in the meantime!
Hello, this workaround also helped me, but I have almost the same problem with a basic kubernetes component, on metricbeat 6.4.2.
(I think this metric should be something like `kubernetes.labels.role.kubernetes.io/networking: 1`.)
Oddly enough, I have two separate clusters with the same version (6.4.2) and identical (as far as I can tell) configurations. In my case, this issue is only occurring for logs which are being parsed as JSON.
We see this issue with similar labels as well. Is there a fix for this other than dropping the labels?
@exekias Looking forward to your fix for label dots. Maybe like the Docker fix with `labels.dedot`?
Also getting this, but I'm seeing it primarily in metricbeat. I just want to add that this label format is what is recommended by the Kubernetes team.
I've opened #9860 to tackle this issue, thank you everyone for your feedback.
Hello @exekias. However, the messages were logged to ES.
The fix for metricbeat made it into 6.7.0, but it is not enabled by default. I didn't read the code correctly and it does not fix filebeat at all from what I can tell. I think filebeat needs the config options exposed in the add_kubernetes_metadata processor?
Hi everyone, you should be able to enable the new behavior by passing the dedot parameters in your configuration (see the sketch below). This will be the default starting with 7.0.
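A minimal sketch of where those parameters would go for Filebeat's `add_kubernetes_metadata` processor, assuming the `labels.dedot`/`annotations.dedot` options referenced later in this thread:

```yaml
processors:
  - add_kubernetes_metadata:
      in_cluster: true
      # Replace dots in label/annotation keys with underscores, e.g.
      # app.kubernetes.io/name becomes app_kubernetes_io/name
      labels.dedot: true
      annotations.dedot: true
```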
For reference, this was fixed here: #9939, closing this.
I think you can set an elasticsearch index mapping or template to resolve this, for example:
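The original example was not preserved. As a hedged sketch, one way to avoid the keyword-vs-object conflict at the mapping level is the `flattened` field type (Elasticsearch 7.3+), which stores all label keys under a single field; the template name and index patterns below are illustrative:

```json
PUT _template/kubernetes-labels-flattened
{
  "index_patterns": ["filebeat-*", "metricbeat-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": { "type": "flattened" },
          "annotations": { "type": "flattened" }
        }
      }
    }
  }
}
```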
Hi everyone. That issue wasn't fixed at all. I'm getting the same behavior, even using the latest version of metricbeat (7.1.1) with this config:
@nerddelphi Thanks for the feedback! What error message are you seeing?
@kaiyan-sheng These errors:
Same here.
Same for me too, with the latest 7.2 version.
Is `labels.dedot: true` set in your config?
Where do I add it? I am using this: https://github.com/elastic/beats/blob/master/deploy/kubernetes/metricbeat-kubernetes.yaml
OK, you are using metricbeat, so for each k8s module entry, add `labels.dedot: true` (see the sketch below).
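A hedged sketch of such a module entry in metricbeat-kubernetes.yaml; everything except the two dedot lines is illustrative and should match your existing manifest:

```yaml
metricbeat.modules:
  - module: kubernetes
    metricsets: ["pod", "container", "node", "volume", "system"]
    period: 10s
    hosts: ["localhost:10255"]
    # Replace dots in label/annotation keys with underscores
    labels.dedot: true
    annotations.dedot: true
```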
I can confirm this works with the 7.1 metricbeat kubernetes.yml deployment.
Hello 😄 I have the problem in version 6.8.1 with filebeat, even with labels/annotations.dedot set to true. Is this expected in this version?
@bcouetil We found the same in v6.8.2. I'm now upgrading our Elastic stack to v7 in the hope this will resolve it...
I've done that successfully. Updating only filebeat works.
Hmmm, I wasn't able to run Filebeat 7 against Elastic 6. But thanks for confirming it's properly fixed in v7! Things are looking happier on our end too.
Just an FYI: even adding the dedot doesn't necessarily solve the issue 100%, at least not immediately. The problem seems to be that `kubernetes.labels.app`, depending on which document makes it into the daily index first and establishes the field as either a keyword or an object hierarchy, will basically kick out the other kind. The key is that you'll see `kubernetes.labels.app` in Kibana as a "conflict" field in the index pattern, since it might be mapped one way one day and another way the next. I'd imagine the dedot setting will prevent this conflict when the daily index rolls over; maybe I'm just being impatient and not waiting to see what happens with the next day's index. I just added this at the bottom of the processors to get rid of that field (see the sketch below):
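The snippet the comment refers to was not preserved; a hedged sketch of dropping that field at the end of the processors list:

```yaml
processors:
  # ... existing processors ...
  # Remove the conflicting field entirely so the keyword/object mapping
  # conflict on kubernetes.labels.app cannot occur.
  - drop_fields:
      fields: ["kubernetes.labels.app"]
```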
I think this describes the problem better...
Hello, we have similar issues since 7.6.1 in metricbeat... Etc.
@willemdh Did you add `labels.dedot: true` to your config?
@kaiyan-sheng Yes, we added dedot true a long time ago. A colleague of mine created a ticket in the meantime.
I'm using helm/stable/filebeat, which is based on `docker.elastic.co/beats/filebeat-oss:6.4.2`; all other components are also oss:6.4.2.

I'm having a problem where filebeat fails to parse `kubernetes.labels.app` for many containers. This shows up in the filebeat log when I'm sending directly to elasticsearch, and the same error appears from logstash when I'm sending from filebeat to logstash. This is a part of my filebeat config file:

I noticed that the parsing problems appear only for pods with the label format recommended for helm charts, and the problem doesn't appear for a simpler label format (see the hedged example below).
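The original label examples were not preserved; a hedged reconstruction based on the Kubernetes recommended labels mentioned elsewhere in this thread (names and values are illustrative):

```yaml
# Helm-chart / Kubernetes-recommended label format (dotted keys) that
# triggers the parse failure:
metadata:
  labels:
    app.kubernetes.io/name: myapp
    app.kubernetes.io/component: server
---
# Simpler label format that does not trigger it:
metadata:
  labels:
    app: myapp
```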
Expected behavior from filebeat is a successful parse of such labels and successful shipment to elasticsearch, instead of a parse error and failure to send the log. For example, Fluentd parses `app.kubernetes.io/component` and it appears in elasticsearch as `kubernetes.labels.app_kubernetes_io/component`. During bug research, I found similar problems.