Resource leak when creating private keys #4509
Comments
@sfc-gh-jkowalski I'm not sure this is a resource leak on the part of the client. The issue seems to be with how the Cavium provider works. The KeyFactory.generatePrivate call goes to importRSAPrivateKey, which seems to have a stateful side effect. That does not happen with other providers, such as BouncyCastle. Also, the java.security.KeyFactory interface does not provide any mechanism to "destroy" a key - I'm not sure there is a built-in way to do that. Without knowing more of their internals / expectations (perhaps they have some cleanup logic based upon garbage collection, or some provider-specific method to call) it's hard to tell what to do here. Can you find docs / source on the Cavium provider related to this exception?
It is not. For other providers that would be just an in-memory operation to clear sensitive fields. Can you confirm for Cavium that it allows for more generate calls?
Hmm, looks like CaviumRSAPrivateKey has a default implementation of
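For context on the "destroy" question above: `java.security.PrivateKey` extends `javax.security.auth.Destroyable`, and the interface's default `destroy()` simply throws `DestroyFailedException` unless a provider overrides it. A minimal sketch against the stock JDK RSA provider (not Cavium, whose source we don't have), just to illustrate that there is no guaranteed built-in cleanup:

```java
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import javax.security.auth.DestroyFailedException;

// PrivateKey extends javax.security.auth.Destroyable, but the default
// destroy() throws DestroyFailedException unless a provider overrides it,
// so there is no portable way to wipe key material through this API.
public class DestroyDemo {
    public static void main(String[] args) throws Exception {
        PrivateKey key = KeyPairGenerator.getInstance("RSA")
                .generateKeyPair().getPrivate();
        try {
            key.destroy(); // most JDK software providers do not override this
            System.out.println("destroyed");
        } catch (DestroyFailedException e) {
            System.out.println("destroy not supported: " + e);
        }
        System.out.println("isDestroyed = " + key.isDestroyed());
    }
}
```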
Without their code / docs, I'm not sure how to make progress. Is it possible to come at this problem by reducing the number of times you open / close clients? |
Fixes fabric8io#4509 This took a while to find the root cause: the internal SPI fallback logic inside `KeyFactory.generatePrivate()` has the weird side effect of latching onto the LAST registered provider (which in our case was Cavium) after `InvalidKeySpecException` is thrown. This choice is sticky for a single `KeyFactory` instance, and the fix for our issue is to get a fresh `KeyFactory` instance when retrying.
We found a very surprising fix for this issue, which ends up being caused by reusing the same `KeyFactory` instance across retries. Here is a repro showing that re-using the KeyFactory after an exception results in a different Provider than using a fresh KeyFactory:
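The original repro snippet did not survive in this thread. As a hedged sketch of the fix the PR describes (fresh `KeyFactory` per attempt; the class and method names here are illustrative, not the actual fabric8 code, and only standard JDK providers are used, so the Cavium provider switch itself cannot be shown):

```java
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.KeySpec;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;

// Sketch of the fix pattern: obtain a FRESH KeyFactory for every attempt,
// so a failed generatePrivate() call cannot pin the factory to the
// provider that threw the exception.
public class FreshKeyFactoryRetry {

    static PrivateKey generateWithFreshFactory(String algorithm, KeySpec... specs)
            throws Exception {
        InvalidKeySpecException last = null;
        for (KeySpec spec : specs) {
            // New instance per attempt: provider selection starts over.
            KeyFactory kf = KeyFactory.getInstance(algorithm);
            try {
                return kf.generatePrivate(spec);
            } catch (InvalidKeySpecException e) {
                last = e; // remember and retry with the next spec
            }
        }
        if (last == null) {
            throw new IllegalArgumentException("no key specs supplied");
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        // The first spec is deliberately wrong (a public-key encoding fed to
        // generatePrivate), forcing an InvalidKeySpecException before the
        // valid PKCS#8 spec succeeds on a fresh factory.
        KeySpec bad = new X509EncodedKeySpec(kp.getPublic().getEncoded());
        KeySpec good = new PKCS8EncodedKeySpec(kp.getPrivate().getEncoded());
        PrivateKey key = generateWithFreshFactory("RSA", bad, good);
        System.out.println("Recovered key algorithm: " + key.getAlgorithm());
    }
}
```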
Describe the bug
We're frequently creating and closing `KubernetesClient` instances in our code, but it appears there's some kind of resource leak associated with private keys, leading to:

Stack trace:
Fabric8 Kubernetes Client version
6.0.0
Steps to reproduce
Expected behavior
Clients can be created indefinitely.
Runtime
Kubernetes (vanilla)
Kubernetes API Server version
1.23
Environment
Linux
Fabric8 Kubernetes Client Logs
Additional context
No response