
Logging Field Formats #7191

Closed
andy-townsend opened this issue Oct 9, 2024 · 2 comments
Labels: bug (Something isn't working)

andy-townsend commented Oct 9, 2024

Description

Observed Behavior:

In our environment we forward all logs into OpenSearch, and we've noticed that some Karpenter logs are getting dropped due to field mapping conflicts.

It looks as though the provisioner controller logs the pods field as a number, e.g.:

{
  "_index": "fluentd-karpenter-2024.10.08",
  "_id": "KUZqjSRYcLGdxgT2h2qIsg==",
  "_score": 1,
  "_source": {
    "@timestamp": "2024-10-08T21:00:03.445840Z",
    "stream": "stdout",
    "logtag": "F",
    "applog": {
      "level": "INFO",
      "time": "2024-10-08T21:00:03.445Z",
      "logger": "controller",
      "caller": "provisioning/provisioner.go:355",
      "message": "computed new nodeclaim(s) to fit pod(s)",
      "commit": "62a726c",
      "controller": "provisioner",
      "namespace": "",
      "name": "",
      "reconcileID": "4321d8f8-863b-4d34-aab6-053cf86c5355",
      "nodeclaims": 1,
      "pods": 1
    },
    "kubernetes": {
      "pod_name": "karpenter-c56c94bd7-kkzg6",
      "namespace_name": "karpenter",
      "pod_id": "8784a404-2220-485e-9796-8fbfebe8a843",
      "labels": {
        "app_kubernetes_io/instance": "karpenter",
        "app_kubernetes_io/name": "karpenter",
        "pod-template-hash": "c56c94bd7"
      },
      "host": "ip-10-14-1-26.eu-west-1.compute.internal",
      "container_name": "controller",
      "docker_id": "2bfdd7bb37253688f04a3d86e1fa1943b9b4c55cd779bf540f36ad53b2ff6a07",
      "container_hash": "public.ecr.aws/karpenter/controller@sha256:fc54495b35dfeac6459ead173dd8452ca5d572d90e559f09536a494d2795abe6",
      "container_image": "sha256:55b44b6c5d4782365148b46fc219a5b99b2581927cdebfa65a9ea0c42cfb036a"
    },
    "fluentd": "fluentd-os-694b685d78-j7m8z",
    "_hash": "KUZqjSRYcLGdxgT2h2qIsg==",
    "tag": "kube.karpenter"
  },
  "fields": {
    "applog.time": [
      "2024-10-08T21:00:03.445Z"
    ],
    "@timestamp": [
      "2024-10-08T21:00:03.445Z"
    ]
  }
}

But at other times it writes the same field as a string, e.g.:

"pods\"=>\"logging/opensearch-cluster-data-4\"

Full log extract:

  "2024-10-08 15:43:57 +0000 [warn]: #1 dump an error event: error_class=Fluent::Plugin::OpenSearchErrorHandler::OpenSearchError error=\"400 - Rejected by OpenSearch [error type]: mapper_parsing_exception [reason]: 'failed to parse field [applog.pods] of type [long] in document with id '9GwohKfj8XSRAPFXmeiwYQ=='. Preview of field's value: 'logging/opensearch-cluster-data-4''\" location=nil tag=\"kube.karpenter\" time=2024-10-08 15:43:28.185502890 +0000 record={\"@timestamp\"=>\"2024-10-08T15:43:26.770130Z\", \"stream\"=>\"stdout\", \"logtag\"=>\"F\", \"applog\"=>{\"level\"=>\"INFO\", \"time\"=>\"2024-10-08T15:43:26.767Z\", \"logger\"=>\"controller\", \"caller\"=>\"provisioning/provisioner.go:174\", \"message\"=>\"pod(s) have a preferred TopologySpreadConstraint which can prevent consolidation\", \"commit\"=>\"62a726c\", \"controller\"=>\"provisioner\", \"namespace\"=>\"\", \"name\"=>\"\", \"reconcileID\"=>\"1140900a-2e88-4efa-9a95-c2cb59e2a0c9\", \"pods\"=>\"logging/opensearch-cluster-data-4\"}, \"kubernetes\"=>{\"pod_name\"=>\"karpenter-c56c94bd7-kkzg6\", \"namespace_name\"=>\"karpenter\", \"pod_id\"=>\"8784a404-2220-485e-9796-8fbfebe8a843\", \"labels\"=>{\"app_kubernetes_io/instance\"=>\"karpenter\", \"app_kubernetes_io/name\"=>\"karpenter\", \"bosun_jspaas_uk/costcentre\"=>\"PD7825\", \"pod-template-hash\"=>\"c56c94bd7\"}, \"host\"=>\"ip-10-14-1-26.eu-west-1.compute.internal\", \"container_name\"=>\"controller\", \"docker_id\"=>\"2bfdd7bb37253688f04a3d86e1fa1943b9b4c55cd779bf540f36ad53b2ff6a07\", \"container_hash\"=>\"public.ecr.aws/karpenter/controller@sha256:fc54495b35dfeac6459ead173dd8452ca5d572d90e559f09536a494d2795abe6\", \"container_image\"=>\"sha256:55b44b6c5d4782365148b46fc219a5b99b2581927cdebfa65a9ea0c42cfb036a\"}, \"bosun_env\"=>\"lab\", \"bosun_cluster\"=>\"lab-ie-core\", \"bosun_cluster_type\"=>\"core\", \"fluentd\"=>\"fluentd-os-694b685d78-9jq4v\", \"_hash\"=>\"9GwohKfj8XSRAPFXmeiwYQ==\"}"
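
For context (an illustration only, not Karpenter's actual code): assuming zap-style structured logging like the JSON output above suggests, two call sites reusing the same key with different value types is enough to trigger this OpenSearch mapping conflict:

package main

import "go.uber.org/zap"

func main() {
	// JSON production config, similar in shape to the controller output above.
	logger, _ := zap.NewProduction()
	defer logger.Sync()

	// One call site emits "pods" as a count, so OpenSearch maps applog.pods as long...
	logger.Info("computed new nodeclaim(s) to fit pod(s)",
		zap.Int("nodeclaims", 1),
		zap.Int("pods", 1),
	)

	// ...while another emits "pods" as a namespace/name string, which the existing
	// long mapping then rejects with mapper_parsing_exception.
	logger.Info("pod(s) have a preferred TopologySpreadConstraint which can prevent consolidation",
		zap.String("pods", "logging/opensearch-cluster-data-4"),
	)
}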

Expected Behavior:
Ideally, each field should always be logged with the same type.
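
As a purely hypothetical sketch (the key names below are made up, not a maintainer decision), continuing the example above: keeping the count and the pod identifiers under separate, consistently typed keys would avoid the mapping conflict.

// Hypothetical: "pods" stays a number everywhere; names move to their own string-valued key.
logger.Info("pod(s) have a preferred TopologySpreadConstraint which can prevent consolidation",
	zap.Int("pods", 1),
	zap.Strings("pod-names", []string{"logging/opensearch-cluster-data-4"}),
)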

Reproduction Steps (Please include YAML):

Versions:

  • Chart Version: v1.0.0
  • Kubernetes Version (kubectl version): 1.30
andy-townsend added the bug (Something isn't working) and needs-triage (Issues that need to be triaged) labels on Oct 9, 2024
@engedaam (Contributor) commented

engedaam removed the needs-triage (Issues that need to be triaged) label on Oct 17, 2024
engedaam self-assigned this on Oct 21, 2024
@andy-townsend (Author) commented

Sure thing, thanks. I'll re-raise tomorrow.
