Explore how to keep local cache for beamline alignment #537
Comments
I would recommend that this not become a feature of blueapi, for the sake of keeping it simple and small in scope. Previously there has been discussion of using databases for lookup tables; I believe MX have had some success with Redis. They have also used shared files as a temporary solution until the database was ready.
This equates to Redis in a different pod in the namespace, with the plan calling out to it as part of execution, without it being a part of blueapi. It's just a pattern for an adaptive scan: plan_start -> fetch values from external store -> plan_body -> store values in external store -> plan_finish.
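A minimal sketch of that pattern, assuming a Redis pod reachable from the worker and using bluesky plan stubs (the host, key name, and devices are all illustrative, not part of blueapi):

```python
# Sketch only: Redis host, key name, and devices are hypothetical.
import json

import bluesky.plan_stubs as bps
from bluesky.preprocessors import run_decorator
import redis

store = redis.Redis(host="redis.beamline-ns", decode_responses=True)


@run_decorator()
def adaptive_plan(detectors, motor):
    # plan_start: fetch previously cached values from the external store
    cached = json.loads(store.get("alignment:lookup") or "{}")
    # plan_body: drive the scan from the cached values
    yield from bps.mv(motor, cached.get("start_position", 0.0))
    yield from bps.trigger_and_read(list(detectors) + [motor])
    # plan_finish: persist the measured values back to the external store
    final_position = yield from bps.rd(motor)
    store.set("alignment:lookup", json.dumps({"start_position": final_position}))
```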
There's also a non-numpy Redis Python client, https://redis.io/docs/latest/develop/connect/clients/python/, for when you're using it as plain key-value storage and don't need the array support.
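For plain key-value use, redis-py is about as small as it gets (assuming a reachable Redis instance; the key name is illustrative):

```python
import redis

# decode_responses=True returns str rather than bytes
r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("lookup:harmonic_7", "12.345")
value = float(r.get("lookup:harmonic_7"))
```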
@callumforrester This is a necessary feature, and the data captured would be cached directly after running the plan. We also know it's not beamline-specific, so this will be a centrally developed and maintained feature; whether it's part of blueapi OR a microservice wrapping Redis that blueapi talks to directly is an architectural decision. The 'worse is better' style would be to add this nicely into the blueapi runner, whether here or in the #504 engine service.
@stan-dot Yes, we could have a microservice wrapping Redis and then interact with it from your own plans. What functionality is needed in the core of blueapi?
Possible routes with postgres, differing by protocol: REST, gRPC, or GraphQL.
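For illustration, the REST route could be a very small lookup-table service. This hypothetical FastAPI sketch stubs the storage with a dict where a real service would query postgres (e.g. via asyncpg):

```python
# Hypothetical REST sketch; endpoint names and storage are illustrative.
from fastapi import FastAPI, HTTPException

app = FastAPI()
_tables: dict[str, dict] = {}  # stand-in for a postgres table


@app.get("/lookup/{name}")
def get_table(name: str) -> dict:
    if name not in _tables:
        raise HTTPException(status_code=404, detail=f"no table {name}")
    return _tables[name]


@app.put("/lookup/{name}")
def put_table(name: str, table: dict) -> dict:
    _tables[name] = table
    return table
```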
Of those, I'd be most interested in where you get with Graphile.
Without trying to be that guy, is it worth stating the specific use(s) for this? Are we talking about things like "we want to change the focal length of the mirror to the sample position, what are the 16 voltages we need for the bimorphs?" (i.e. things which rarely change), or more like "make a note of this motor position, we'll probably want to come back to it later" (i.e. short-term storage)?
What I got from the beamline interview is the need to replace this: [attachment]
https://github.com/etcd-io/etcd and https://github.com/apache/ignite look the most promising: the former for its 46k stars, the latter for being in the Apache ecosystem (but with fewer stars).
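If etcd were chosen, the community python-etcd3 client offers a similarly simple key-value interface (a sketch; assumes an etcd instance on localhost and illustrative keys):

```python
# Sketch using the python-etcd3 client; keys and values are illustrative.
import etcd3

etcd = etcd3.client(host="localhost", port=2379)
etcd.put("lookup/harmonic_7", "12.345")
value, _metadata = etcd.get("lookup/harmonic_7")  # value is bytes, or None if missing
```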
RE: chatting with Stan. We probably want some simple way of tying into a key-value store instance that can be configured per beamline and interacted with as though it were just a dict. This probably lives in dodal, as it is generic enough to be useful across beamlines, e.g.

```python
lookuptables = PersistedStore()
harmonic_7 = lookuptables["harmonic_7"]

def beamline_stub():
    harmonic_7 = do_alignment()
    lookuptables["harmonic_7"] = harmonic_7
```
If we do as much, what does 'harmonic 7' even mean? Also, FWIW, I think that 'inspect alignment' could be a whole other screen in the GUI.
But what would that GUI call? Eventually, a plan, and that plan calls whatever stubs it requires. It doesn't matter what harmonic_7 means, because this is equally applicable to any lookup table or stored beamline configuration. Moving this data from a lookup file on the filesystem to an in-memory store lets us view time series, recover previous states, add access control, etc.
Beamline alignment is a behavior applied to all beamlines, so we'd like to have consistent lookup tables, at least at the level of science groups. Also, from the k8s perspective, there is a well-maintained Bitnami chart for etcd, so I'd go with that one.
Direct link to the etcd Helm Chart's ArtifactHub page |
dodal reads this in from the filesystem: [attachment]
We need to keep some cached lookup tables for the purposes of beamline alignment. That could be a local pod-specific *.npy file, or Redis in a different pod on the cluster. We certainly need some solution for this.
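For concreteness, the pod-local option might look something like this sketch (the path and helper names are illustrative only):

```python
# Sketch of the simplest option: a pod-local .npy cache file.
# The path is illustrative; a real cache would live on a mounted volume.
import numpy as np

CACHE_PATH = "/scratch/alignment_lookup.npy"


def store_lookup(table: np.ndarray) -> None:
    np.save(CACHE_PATH, table)


def load_lookup() -> np.ndarray:
    return np.load(CACHE_PATH)
```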