We have a daemon that handles a custom resource called Workflow. Workflow objects manage batch processing. Once the processing associated with a Workflow finishes, the daemon exits, but the Workflow object stays in the API server so it can be examined later for the processing status/results. I know that daemons in Kopf are intended to run for the lifetime of an object, but you can return from the daemon handler, and I assumed that would release any resources needed for the daemon. What we are seeing, though, is that our operator's memory use keeps going up as we create new Workflow objects, and goes down when we delete Workflow objects from the API server. This seems to imply that Kopf is using some memory for each Workflow object, even the ones whose daemon handlers have exited. Is it expected that Kopf would use memory for each object in the API server that has an associated daemon handler, even if the daemon handler has exited?
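Here is a minimal model of the behavior we think we are seeing (all names are made up for illustration; this is plain Python, not Kopf's actual internals or our real operator): per-object state is retained even after the object's handler returns, and is released only when the object is deleted from the API server.

```python
# Hypothetical model of the observed behavior: the operator keeps a record
# per object it has seen; the daemon handler returning does NOT remove the
# record, only deleting the object does.

class OperatorModel:
    def __init__(self):
        self._per_object_state = {}  # object key -> remembered state

    def on_event(self, key, body):
        # Every watch event (re)stores the latest body for this object.
        self._per_object_state[key] = {"body": body, "daemon_exited": False}

    def on_daemon_exit(self, key):
        # The daemon returns, but the entry is kept -- only a flag changes,
        # so memory is still held for this object.
        self._per_object_state[key]["daemon_exited"] = True

    def on_delete(self, key):
        # Memory is reclaimed only when the object is deleted.
        self._per_object_state.pop(key, None)

    def tracked_objects(self):
        return len(self._per_object_state)


model = OperatorModel()
for i in range(100):
    model.on_event(f"workflow-{i}", {"spec": {"batch": i}})
    model.on_daemon_exit(f"workflow-{i}")
print(model.tracked_objects())  # prints 100: state held despite exited daemons
for i in range(100):
    model.on_delete(f"workflow-{i}")
print(model.tracked_objects())  # prints 0: released only after deletion
```

This matches what we observe: memory tracks the number of Workflow objects in the API server, not the number of running daemons.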
Hi. Thanks for asking.
Yes, Kopf keeps track of which daemons have actually exited, so that it does not start them again. If that list is deleted, the daemons are respawned as for a new object. But that memory should be minimal, a few bytes. Here is the list of daemon/handler ids per resource: https://github.com/nolar/kopf/blob/1.35.6/kopf/_core/engines/daemons.py#L416-L429
What could be leaking memory is the "live body" object of the resource. It is remembered once the daemon is started and updated on every new event from Kubernetes:
kopf/kopf/_core/reactor/processing.py
Lines 54 to 65 in 7b45690
I think there is no place where this remembered body is released once the daemon has exited. Can you please make a PR?
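To make the scale difference concrete, here is a rough, hypothetical illustration (these names and structures are not Kopf's actual internals) of the two per-object records: the exited-daemon id set is a few short strings, while the remembered live body is a full copy of the resource and grows with the object itself.

```python
import json

# Hypothetical per-object bookkeeping (illustrative names, not Kopf's code):
#  1. a small set of handler ids for daemons that have exited, and
#  2. the full "live body" of the resource, refreshed on every watch event.

exited_daemon_ids = {"process_workflow"}  # cheap: a few bytes per object

live_body = {  # expensive: kept for as long as the object exists
    "metadata": {"name": "wf-1", "resourceVersion": "12345"},
    "spec": {"batchItems": list(range(10_000))},
    "status": {"results": ["done"] * 10_000},
}

# Compare their serialized footprints as a crude proxy for memory use.
ids_bytes = len(json.dumps(sorted(exited_daemon_ids)))
body_bytes = len(json.dumps(live_body))
print(ids_bytes, body_bytes)  # the body dominates per-object memory
```

So even though the exit-tracking list grows with the number of objects, the symptom described (memory proportional to object count and size, freed on deletion) points at the retained bodies rather than the id bookkeeping.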