Bug Report
Describe the bug
Some JSON-formatted logs are not being parsed correctly.
I've checked multiple other issues related to incorrect parsing of JSON logs. I'm not entirely certain, but I feel this might be specifically related to #337.
It's very difficult to get a raw log, because this happens very rarely relative to the number of logs which are processed. I have to retrieve these from the destination log storage.
This is the resulting jsonPayload from Fluent Bit:

And here's the resulting jsonPayload from Fluentd:

Based on the above, for some reason the log is broken at network.server.ip. I can't account for the rest of the log. However, I do have different logs that are broken in different places.
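For illustration only, and assuming my guess about #337 is right (that it concerns the runtime splitting application log lines longer than 16KB across multiple json-file entries), the raw entries on disk for one long line would look roughly like this hypothetical reconstruction, cut mid-field at network.server.ip:

{"log":"{\"message\":\"...\",\"network\":{\"server\":{\"ip\":\"10.","stream":"stdout","time":"2020-10-07T12:00:00.000000000Z"}
{"log":"0.0.1\"}},\"more\":\"fields\"}\n","stream":"stdout","time":"2020-10-07T12:00:00.000000001Z"}

Neither piece is valid JSON on its own, and only the last piece carries the trailing newline, so a per-line merge could end up producing exactly the kind of truncated payload I'm seeing.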
Expected behavior
Ideally, the JSON would be parsed correctly. Failing that, some warning/error message would be helpful.
Your Environment
We have two Fluent* environments running side by side: one Fluentd based (managed by a third party) and one Fluent Bit based, where the filter chain looks something like:
Tail -> Kubernetes -> Stackdriver
Version used: We had been using v1.5.7 until recently, but I'm now using a Docker image built from this commit, as I wanted to test these changes.
Environment name and version (e.g. Kubernetes? What version?): Kubernetes (v1.15)
Configuration:
[SERVICE]
Parsers_File parsers.conf
HTTP_Server On
Log_Level warning
storage.metrics On
[INPUT]
Name tail
DB /var/run/fluent-bit/pos-files/flb_kube.db
DB.Sync Normal
Buffer_Chunk_Size 256K
Buffer_Max_Size 2M
Mem_Buf_Limit 16M
Parser docker
Refresh_Interval 5
Rotate_Wait 10
Skip_Long_Lines On
Tag kube.*
Path /var/log/containers/*.log
[FILTER]
Name kubernetes
Match *
Annotations Off
Buffer_Size 0
Keep_Log Off
Merge_Log On
K8S-Logging.Exclude On
[FILTER]
Name nest
Match *
Operation lift
Nested_under kubernetes
Add_prefix k8s.
[FILTER]
Name nest
Match *
Operation lift
Nested_under k8s.labels
Add_prefix k8s-pod/
[FILTER]
Name nest
Match *
Operation nest
Nest_under k8s.labels
Wildcard k8s-pod/*
[FILTER]
Name modify
Match *
Hard_rename k8s.labels root.labels
[FILTER]
Name modify
Match *
Remove_wildcard k8s.
[FILTER]
Name modify
Match *
Hard_rename log message
[OUTPUT]
Name stackdriver
Match *
k8s_cluster_name ${CLUSTER}
k8s_cluster_location ${ZONE}
labels_key root.labels
resource k8s_container
severity_key level
[PARSER]
Name docker
Format json
Time_Key time
Time_Format %Y-%m-%dT%H:%M:%S.%L%z
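To help narrow this down, here's a minimal sketch (my own, not a confirmed reproduction; the field names and the assumed 16KB json-file split threshold are mine) of a container that prints one JSON log line long enough to be split by the runtime, so the record that reaches Stackdriver can be checked for the same mid-field break:

#!/usr/bin/env python3
# Minimal sketch: emit one JSON log line well over 16KB so the container
# runtime's json-file driver is forced to split it across several entries.
# The "network.server.ip" and "padding" fields are made up for illustration.
import json
import sys

payload = {
    "message": "long-line reproduction attempt",
    "network": {"server": {"ip": "10.0.0.1"}},
    "padding": "x" * 20000,  # pushes the serialized line well past 16KB
}
sys.stdout.write(json.dumps(payload) + "\n")
sys.stdout.flush()

If the merged record for this line also arrives broken partway through a field, that would point at the handling of split lines rather than at the JSON parser itself.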
Additional Context
We're hoping to move to Fluent Bit, as Fluentd has proved not to be quite performant enough. However, there have been many teething issues while trying to get Fluent Bit to work in a production setting, and this might be considered one of the final major blockers to adopting Fluent Bit entirely.
Even though this happens rarely, it causes issues for many users once the logs are in long-term storage, because the log name is delivered as:

instead of the regular projects/my-project/logs/stdout or projects/my-project/logs/stderr.