One of our Java applications needs a cache of Workflows and WorkflowTemplates to make various decisions, because requesting them regularly from the Argo server causes bandwidth issues due to the number and size of our workflows. For some things we could use field selection to limit the amount of data coming back, but that's not ideal when we want the full objects. We would still like to use the Argo server for other less-frequent, low-bandwidth operations (submit, resubmit, retry, delete, suspend, resume, etc.).
We saw that the Argo Java client transitively pulls in the Kubernetes Java SDK, so we thought to simply use a Kubernetes informer to maintain the Workflow/WorkflowTemplate cache for us. Unfortunately, there are several differences between the way Workflow/WorkflowTemplate objects are serialized when talking to the Kubernetes API server versus the Argo server:
To test this, after a couple of quick hacks for those differences and adding `implements KubernetesObject` to the Workflow and WorkflowTemplate classes, the Kubernetes API informer worked.
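For reference, the informer approach we tried can be sketched roughly as below. This is a minimal, untested sketch: it assumes the Argo Java client's Workflow/WorkflowList models have already been patched to implement KubernetesObject/KubernetesListObject (the "quick hacks" above), and the package/class names may differ depending on client versions.

```java
import io.argoproj.workflow.models.Workflow;
import io.argoproj.workflow.models.WorkflowList;
import io.kubernetes.client.informer.ResourceEventHandler;
import io.kubernetes.client.informer.SharedIndexInformer;
import io.kubernetes.client.informer.SharedInformerFactory;
import io.kubernetes.client.informer.cache.Lister;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.util.Config;
import io.kubernetes.client.util.generic.GenericKubernetesApi;

public class WorkflowCache {

  public static Lister<Workflow> start(String namespace) throws Exception {
    ApiClient client = Config.defaultClient();

    // Typed access to the Workflow CRD through the generic client.
    // Requires Workflow/WorkflowList to implement the Kubernetes
    // client's KubernetesObject/KubernetesListObject interfaces.
    GenericKubernetesApi<Workflow, WorkflowList> wfApi =
        new GenericKubernetesApi<>(
            Workflow.class, WorkflowList.class,
            "argoproj.io", "v1alpha1", "workflows", client);

    SharedInformerFactory factory = new SharedInformerFactory(client);
    SharedIndexInformer<Workflow> informer =
        factory.sharedIndexInformerFor(wfApi, Workflow.class, 60_000L, namespace);

    // Optional: react to watch events as the cache is kept in sync.
    informer.addEventHandler(new ResourceEventHandler<Workflow>() {
      @Override public void onAdd(Workflow wf) {}
      @Override public void onUpdate(Workflow oldWf, Workflow newWf) {}
      @Override public void onDelete(Workflow wf, boolean finalStateUnknown) {}
    });

    factory.startAllRegisteredInformers();
    // Read full cached objects locally instead of listing from the Argo server.
    return new Lister<>(informer.getIndexer(), namespace);
  }
}
```

A second informer for WorkflowTemplates would be wired the same way, using the "workflowtemplates" plural, while submits/retries/etc. continue to go through the Argo server client.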
Questions
What's the best way to retain a full cache of Workflows and WorkflowTemplates in our applications, while still leveraging the Argo server for other operations on the cluster?