Some background can be found at the end of PR #8768 (comment).
The Traefik Filebeat module relies on two steps to process incoming logs. The first step is the dissect part, covered by this file, which tokenizes the first 8 fields of a Traefik log message; the message follows the Combined Log Format, as you can see in the example written here.
NOTE: the Traefik docs contain a mistake, stating that their logs use the Common Log Format instead of the Combined Log Format. As you can see here, the example in the Traefik docs doesn't match the Common format, but it does match the Combined format here.
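For illustration, here is a minimal sketch of what that Filebeat-side dissect step might look like. The sample log line and the key names (`remote_ip`, `user_identifier`, and so on) are assumptions for this sketch, not the module's actual definitions:

```yaml
# Sketch only: a dissect processor tokenizing the first 8 fields of a
# Combined Log Format line such as:
#   192.168.33.1 - - [25/Sep/2018:15:22:10 +0200] "GET /whoami HTTP/1.1" 200 402 "-" "curl/7.54.0" 1 "Host-host1" "http://172.17.0.2:80" 2ms
processors:
  - dissect:
      # Captures remote IP through response code (8 fields) and leaves the
      # rest of the line in a single catch-all key for the next step.
      tokenizer: '%{remote_ip} %{user_identifier} %{user_name} [%{time}] "%{method} %{url} HTTP/%{http_version}" %{response_code} %{rest}'
      field: "message"
      target_prefix: "traefik.access"
```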
The second step in tokenizing any log line involves the pipeline.json file, which receives the output of the dissect step (done within Filebeat itself) and tokenizes everything that wasn't tokenized in the dissect part: from the 9th field, corresponding to traefik.access.body_sent.bytes in any log format, onward. A sketch of this step follows.
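Under the same assumptions, this second step might look roughly like the following in pipeline.json: a grok processor that picks up the catch-all key left by the dissect and tokenizes the remaining fields (field names are again illustrative):

```json
{
  "description": "Sketch: grok over the remainder the dissect step left behind",
  "processors": [
    {
      "grok": {
        "field": "traefik.access.rest",
        "patterns": [
          "%{NUMBER:traefik.access.body_sent.bytes} \"%{DATA:traefik.access.referrer}\" \"%{DATA:traefik.access.agent}\" %{NUMBER:traefik.access.request_count} \"%{DATA:traefik.access.frontend_name}\" \"%{DATA:traefik.access.backend_url}\" %{NUMBER:traefik.access.duration}ms"
        ]
      }
    }
  ]
}
```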
The expected result is to tokenize all incoming content in one place, directly in the pipeline.json file, to effectively use the Ingest node for tokenization.
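In other words, the whole line would be handled by a single grok processor in pipeline.json, roughly along these lines. `COMBINEDAPACHELOG` is a stock grok pattern covering the Combined Log Format portion; the trailing Traefik-specific captures are assumptions for this sketch:

```json
{
  "description": "Sketch: one grok over the full line, entirely in the Ingest node",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{COMBINEDAPACHELOG} %{NUMBER:traefik.access.request_count} \"%{DATA:traefik.access.frontend_name}\" \"%{DATA:traefik.access.backend_url}\" %{NUMBER:traefik.access.duration}ms"
        ]
      }
    }
  ]
}
```

A real pipeline would still need to map the `COMBINEDAPACHELOG` capture names (clientip, response, bytes, and so on) onto the traefik.access.* fields, either via a custom pattern or rename processors.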