We're currently using kubectl's built-in logic for selecting which pod to dump logs from for deployments/replicaSets. That code is here; the tl;dr is that it tries to select a pod that is most likely to actually have logs. When all the pods in an RS are failing, this is perfect. However, when some are succeeding, this logic is likely to select the good ones, which often produce a large volume of irrelevant content. We should consider something like:
```ruby
if @pods.map(&:deploy_succeeded?).uniq.length > 1
  # split-result ReplicaSet: prefer the logs of a failed pod over a healthy one
  most_useful_pod = @pods.find(&:deploy_failed?) || @pods.find(&:deploy_timed_out?)
  most_useful_pod.fetch_logs
else
  # current logic
end
```
It's worth noting that in most cases I've seen, the bad pods in a split-result ReplicaSet are failing at a very early stage (can't pull the image, can't mount a volume, etc.), so in practice the effect might be to suppress irrelevant logs rather than to surface relevant ones.
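For illustration, here's a minimal, self-contained sketch of the proposed selection rule. The `Pod` struct and its methods are hypothetical stand-ins that mirror the names used in the snippet above (`deploy_succeeded?`, `deploy_failed?`, `deploy_timed_out?`, `fetch_logs`); they are not the project's actual pod class or API:

```ruby
# Hypothetical stand-in for the real pod wrapper; method names mirror the
# snippet above, but everything here is illustrative, not the actual API.
Pod = Struct.new(:name, :succeeded, :failed, :timed_out, :logs) do
  def deploy_succeeded?; succeeded; end
  def deploy_failed?;    failed;    end
  def deploy_timed_out?; timed_out; end
  def fetch_logs;        logs;      end
end

# Pick the pod whose logs are most likely to explain a failed rollout:
# in a split-result ReplicaSet, prefer a failed (or timed-out) pod, since
# the healthy pods mostly emit large volumes of irrelevant application logs.
def most_useful_logs(pods)
  if pods.map(&:deploy_succeeded?).uniq.length > 1
    pod = pods.find(&:deploy_failed?) || pods.find(&:deploy_timed_out?)
    # Guard against a split caused by pods that are neither failed nor
    # timed out yet (e.g. still pending).
    pod&.fetch_logs
  else
    # Fall back to the current behaviour (kubectl's own pod selection);
    # a plain first-pod pick stands in for it here.
    pods.first&.fetch_logs
  end
end

good = Pod.new("web-1", true,  false, false, "lots of routine request logs")
bad  = Pod.new("web-2", false, true,  false, "ImagePullBackOff: can't pull image")
puts most_useful_logs([good, bad])  # => "ImagePullBackOff: can't pull image"
```

One detail the real implementation would need to settle is what to do when the split is caused by pods that haven't failed or timed out yet; without a guard, the snippet in the issue body would call `fetch_logs` on `nil`.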
cc @kirs @karanthukral