Support key persistence #175
Comments
Any reason why we can't have a private endpoint like
Is this why? Is there a reason why we can't simply omit the key information from the logs? When using the
It's not just logs. Having a private HTTP server is error-prone; one routing mistake and you're publishing your private keys on the internet. Writing keys to stdout and reading them from env is safer. @kdenhartog do you have a preference on this design element?
What if we had a separate listener on some other port? That way we can have two separate routers to greatly reduce the chance of that happening.
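If the private-endpoint route were pursued, the separation being suggested could look roughly like the sketch below: two independent routers bound to different listeners, so a key-management route can never be reached through the public port. This is not code from this repo, just a minimal illustration assuming axum 0.7 and tokio; the routes, handlers, and ports are made up.

```rust
use std::future::IntoFuture;

use axum::{
    routing::{get, put},
    Router,
};

#[tokio::main]
async fn main() {
    // Public router: only the endpoints clients are meant to reach.
    let public = Router::new().route("/randomness", get(|| async { "public" }));

    // Private router: key management, bound to a separate port that is
    // only exposed on loopback / the internal network.
    let private = Router::new().route("/key", put(|| async { "key accepted" }));

    let public_listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    let private_listener = tokio::net::TcpListener::bind("127.0.0.1:9090").await.unwrap();

    // Serve both routers concurrently; a route registered on one router
    // can never be reached through the other listener.
    tokio::try_join!(
        axum::serve(public_listener, public).into_future(),
        axum::serve(private_listener, private).into_future(),
    )
    .unwrap();
}
```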
Just reviewed the key sync doc. Couldn't we just generate the key and send it to
I prefer ENV vars on startup over private endpoints because the only time we need to set this key is at start, and from there the service can perform the rotation. So, in theory, we reduce the possible side effects of this endpoint being called maliciously (if someone gets on the network) if we go with the startup ENV variable approach. Is there a reason we'd want to update this key other than at service startup?
We do need to do key rotation in this application. The PPOPRF key we're using is good for a certain number (256) of randomness epochs. When those are exhausted the server can't continue without a new key. For an isolated server instance we can generate new keys and proceed from there; that is what the current code does. When the keying is controlled by some outside process (reloading with persistent state, propagating shared state across a cluster) we need a way to handle replacement at expiry. One approach is to have star-randsrv terminate on key exhaustion so the outer framework can restart it with new material. But a private endpoint to poke in new keys is another way to handle that.
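One way to picture the "terminate on key exhaustion" option: the server tracks how many epochs remain on the current key and exits cleanly once they run out, leaving the restart (and fresh key material) to the surrounding orchestration. The sketch below is only illustrative; `serve_epoch` is a made-up placeholder for the real per-epoch work, and only the 256-epoch limit comes from the discussion above.

```rust
use std::process::ExitCode;

/// Number of randomness epochs a single PPOPRF key can serve before it
/// is exhausted (256 in the design discussed above).
const EPOCHS_PER_KEY: u64 = 256;

/// Placeholder for the real per-epoch work (advancing/puncturing the
/// PPOPRF key, answering randomness requests, and so on).
fn serve_epoch(epoch: u64) {
    println!("serving epoch {epoch}");
}

fn main() -> ExitCode {
    for epoch in 0..EPOCHS_PER_KEY {
        serve_epoch(epoch);
    }

    // Key exhausted: exit instead of generating a new key in-process.
    // The outer framework (e.g. Kubernetes) restarts the service, which
    // then picks up fresh key material at startup.
    eprintln!("key exhausted after {EPOCHS_PER_KEY} epochs; exiting for restart");
    ExitCode::SUCCESS
}
```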
I was thinking it would be more like each node deterministically generates its next key instead of syncing between the different nodes. Would that be another possible option here?
@claucece said it was only safe to do that a few times. That would still give us over a year of service even for daily epochs, which in practice may be longer than our Kubernetes deployment would stay up, so stretching the key for 3-4 rotations and then terminating to force a restart might work. FWIW, our current thinking on state transfer with the nitriding proxy/sync daemon is to have star-randsrv (or a wrapper) pull new keys, rather than the nitriding sync daemon pushing. In either case, we can limit updates to times when star-randsrv needs new key material.
Hmm, thinking about this a bit further in relation to the other key syncing issue, I'm noticing there's a broader pattern of needing to sync arbitrary data (secrets and, I suspect, non-secrets in the future) between different nitro enclave pods. I'm starting to backtrack on my original thinking and going down the path of trying to figure out how we generically solve data syncing between enclaves. On first thought, using the shim seems like a useful way to do this and to handle the authorization between the various pods (I like our key sync idea of using the container image). Then have the shim pass the data into the enclave, which can register an arbitrary handler with the shim to validate the data passed in. In this case, having these internal endpoints open is probably the way to go, but we need some way to maintain at least integrity and confidentiality guarantees between the pods. WDYT?
Yes, there are definitely two levels here. For the nitriding framework supporting execution within the enclave, we want a general solution for synchronizing configuration from both external and internal sources. For the purposes of this repo, I'd like to maintain some separation between the two applications, since the randomness server is still useful outside a secure enclave.
SGTM
For some applications, it would be helpful to persist the OPRF key across restarts, or clone it among a cluster of instances. Implementing this is somewhat sensitive, since the whole point of the PPOPRF is to keep the private key private. Currently the `ppoprf` crate doesn't expose the private key.

I suggest the following design:

- `star-randsrv --generate-key` will create a `ppoprf::Server` and dump the private key to stdout, then terminate.
- On startup, check the `STAR_RANDSRV_PRIVATE_KEY` env variable, and if set, use that key to construct the `OPRFServer` state instead of a random one.

Terminating the application after generating the key separates that step from normal invocation, making it easier to keep the key material out of logs. Likewise with reading an existing key from the environment, rather than a command-line argument.
The shared key will be unpunctured. Passing the correct epoch synchronization arguments will take care of puncturing no-longer-valid epochs, just as it would with a random key.
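A rough sketch of how this startup flow could fit together is below. Only the `--generate-key` flag and the `STAR_RANDSRV_PRIVATE_KEY` variable come from the proposal above; `generate_key_base64` and `server_from_key_base64` are hypothetical stand-ins for whatever the extended `ppoprf` API ends up exposing.

```rust
use std::env;
use std::process::ExitCode;

/// Hypothetical stand-in: create a fresh PPOPRF server and return its
/// private key in a printable encoding. The real version would come
/// from the extended ppoprf crate.
fn generate_key_base64() -> String {
    "base64-encoded-private-key".to_string()
}

/// Hypothetical stand-in: reconstruct the OPRF server state from a
/// previously exported key instead of sampling a random one.
fn server_from_key_base64(_key: &str) -> Result<(), String> {
    Ok(())
}

fn main() -> ExitCode {
    // `star-randsrv --generate-key`: print the key and terminate, so the
    // key never mixes with normal operational logging.
    if env::args().any(|a| a == "--generate-key") {
        println!("{}", generate_key_base64());
        return ExitCode::SUCCESS;
    }

    // Normal startup: use an injected key if present, otherwise fall
    // back to the current behaviour of generating a random one.
    match env::var("STAR_RANDSRV_PRIVATE_KEY") {
        Ok(key) => {
            server_from_key_base64(&key).expect("invalid injected key");
            eprintln!("using injected OPRF key");
        }
        Err(_) => eprintln!("no injected key; generating a random one"),
    }

    // ... run the server as usual ...
    ExitCode::SUCCESS
}
```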
To implement this, we will also need to extend the `ppoprf` crate with something like the following interface: