Kubeturbo Logging

Kubeturbo by default writes logs to a file in the container's local storage at /var/log, provided that location is mounted as a writable file system within the container. If /var/log is not writable within the container, the log files are written to /tmp instead.
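To confirm which location is in use in a running pod, you can list both directories directly; a quick check, using the same {namespace} and {podName} placeholders as the commands below:

  kubectl exec -n {namespace} {podName} -- ls -l /var/log /tmp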

NOTE: The default logging level for Kubeturbo is level: 2. Do not change it except under the guidance of Support, Engineering or Product Management, and set it back to the default of level: 2 once the required increased logging has been captured.

NOTE: When the Kubeturbo pod (operator or release) is restarted, you lose all historical logs because Kubeturbo uses no persistent storage. With the new Turbonomic Secure Client (TSC), you can now persist Kubeturbo logs in the Turbonomic server logs; see the section below.
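If only the kubeturbo container restarted (rather than the pod being deleted and recreated), the previous container's log may still be recoverable with the standard --previous flag; a small sketch:

  kubectl logs -n {namespace} {podName} --previous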

Collecting and Configuring kubeturbo logs in Kubernetes

The kubeturbo log contains a lot of useful information even at the default logging level of v=2. To get the logs, use the "kubectl logs" or "oc logs" command:

  kubectl logs -n {namespace} {podName}

You can pipe this out to a text file to send off to support:

  kubectl logs -n {namespace} {podName} > kubeturbo.log

If you are running kubeturbo via an operator, or in OpenShift from the OperatorHub, please include BOTH pod logs: the kubeturbo-operator and kubeturbo-release pods. Note that the OCP console does not always show the full log. Either use the CLI or select the "Raw" option in the UI to copy off the full log contents to send to support.
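A minimal sketch for capturing both logs from the CLI, assuming the turbo namespace and the default kubeturbo-operator and kubeturbo-release deployment names:

  kubectl logs -n turbo deployment/kubeturbo-operator > kubeturbo-operator.log
  kubectl logs -n turbo deployment/kubeturbo-release > kubeturbo-release.log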

Collecting Kubeturbo logs when using Turbonomic Secure Client (TSC)

If you have the Turbonomic Secure Client deployed and have targeted Kubeturbo with it, then starting in 8.9.3 the Kubeturbo logs are written to the Turbonomic rsyslog pod, like all other Turbonomic Server components. For details on how to review and collect the logs from rsyslog, go here.
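If you also have CLI access to the Turbonomic server cluster, a rough sketch of filtering Kubeturbo entries out of the aggregated stream follows; the turbonomic namespace, the rsyslog pod naming, and whether the aggregated logs appear on stdout are all assumptions that may vary with your server version:

  kubectl logs -n turbonomic $(kubectl get pods -n turbonomic -o name | grep rsyslog) | grep -i kubeturbo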

Increasing logging level (the new dynamic way)

Since 8.9.5, Kubeturbo supports changing the logging level dynamically, meaning you can update the logging level without a pod restart. Note that it takes around 1 minute for the change to take effect.

Changing logging level

Support or Engineering may instruct you to increase the logging level, to as high as v=5 (the default is v=2). You will need to edit your deployment based on the method used to deploy kubeturbo; depending on the method, the change is either picked up dynamically or requires a pod restart, as noted below. Wait for a full discovery to complete, and ideally recreate the error before gathering the logs. Set the level back to v=2 when done.

YAML method: Update the kubeturbo deployment, either with "kubectl edit deployment {deployment name} -n {namespace}" or with the kubectl patch command. Update the args value of --v to --v=5 (the default is --v=2). The pod should restart when the deployment is updated.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeturbo
  namespace: turbo
spec:
  template:
    spec:
      containers:
        - name: kubeturbo # the container name in your deployment may differ
          args:
            - --v=5
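If you prefer the kubectl patch route mentioned above, the sketch below first prints the current args so you can find the position of the --v flag, then replaces it; the deployment name kubeturbo, the turbo namespace, container index 0, and args index 1 are assumptions to verify against the first command's output:

  kubectl get deployment kubeturbo -n turbo -o jsonpath='{.spec.template.spec.containers[0].args}'
  kubectl patch deployment kubeturbo -n turbo --type=json \
    -p '[{"op": "replace", "path": "/spec/template/spec/containers/0/args/1", "value": "--v=5"}]'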

Operator method: Add or update logging.level in the kubeturbo release Custom Resource.

apiVersion: charts.helm.k8s.io/v1
kind: Kubeturbo
metadata:
  name: kubeturbo-release
spec:
  logging:
    level: 5
...
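The same CR change can be applied without opening an editor; a minimal sketch, assuming the CR is named kubeturbo-release and lives in the turbo namespace:

  kubectl patch kubeturbo kubeturbo-release -n turbo --type=merge -p '{"spec":{"logging":{"level":5}}}'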

Helm method: Upgrade your kubeturbo release. Substitute your values for {}:

helm upgrade {helmChart} {chart location or repo} --namespace turbo --set logging.level=4

Verify: Check the kubeturbo configmap and verify the logging level is changed.

kubectl get configmap turbo-config-kubeturbo-release -n turbo -o yaml
apiVersion: v1
data:
  turbo-autoreload.config: |-
    {
        "logging": {
           "level": 4
        }
    }
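Since the dynamic logging level is reflected in this configmap, patching it directly is another way to change the level on the fly; a sketch, assuming the configmap name and turbo namespace shown above (per the earlier note, allow about a minute for the change to take effect):

  kubectl patch configmap turbo-config-kubeturbo-release -n turbo --type=merge \
    -p '{"data":{"turbo-autoreload.config":"{\"logging\": {\"level\": 5}}"}}'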

Changing logging output

Kubeturbo sends its log stream to the STDERR file descriptor of the container, which is generally available via kubectl logs, or to the logging pipeline if one is configured within the k8s cluster. It is possible to change this behaviour. To disable writing logs to the file system and ensure that logs are sent only to STDERR, update the kubeturbo command-line parameters as below:

--logtostderr=true
--alsologtostderr=false
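For a YAML-deployed kubeturbo, one way to apply these flags is to append them to the container args with a JSON patch; a sketch, assuming the deployment name kubeturbo, the turbo namespace, container index 0, and that the flags are not already present (if they are, use a replace operation instead):

  kubectl patch deployment kubeturbo -n turbo --type=json \
    -p '[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--logtostderr=true"}, {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--alsologtostderr=false"}]'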

If an operator-based deployment is being used, the "args:" section in the spec of the Kubeturbo CR should be updated as below:

args:
  logtostderr: true
  alsologtostderr: false

Affinity/Anti-Affinity related logs

Kubeturbo can log details about pod-to-node and pod-to-pod affinities and anti-affinities. It can log statistical information, e.g. how many pods have affinities and how many of those pods have pod-to-pod affinities. At a higher log level, it can also log additional details, for example the list of pods which carry these affinity rules, the exact rule strings, and topologies. Below is the list of information that is available in the logs and the associated log levels.

Log level 3 or higher (counts)

  • Total pods with all Affinities/Anti-Affinities; includes pod-to-node, pod-to-pod, and affinities carried over from PVs associated with pods
  • Total pods with node Affinities and Anti-Affinities
  • Total pods with pod Affinities and Anti-Affinities
  • Total pods with volumes that specify node Anti-Affinities
  • Total unique Affinity terms by rule string across all pods
  • Total unique Anti-Affinity terms by rule string across all pods
  • Total unique label key=value pairs on pods
  • Total unique label key=value pairs (topologies) on nodes

Log level 5 or higher

  • List of all pods with Affinities/Anti-Affinities
  • List of pods with node Affinities/Anti-Affinities
  • List of pods with pod-to-pod Affinities/Anti-Affinities
  • List of pods with volumes that specify node Affinities/Anti-Affinities
  • List of all unique Affinity terms by rule string
  • List of all unique Anti-Affinity terms by rule string
  • List of all unique label pairs on pods
  • List of unique label pairs (topologies) on nodes
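To pull just these affinity-related lines out of a large log, a simple filter can help; a sketch, assuming the turbo namespace and the kubeturbo-release deployment name (the exact log message wording is not guaranteed, so treat the grep pattern as an approximation):

  kubectl logs -n turbo deployment/kubeturbo-release | grep -i affinit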
