Add client pool to measurementlink-labview #262
Conversation
There are a lot of new VIs, so I am not going to post screenshots unless requested.
...ntime/MeasurementLink Measurement Server/Classes/IMeasurementService/Invoke Measure Logic.vi
Source/Tests/Tests.Runtime/Measurement Server/MeasurementServerTests.lvlib
@pbirkhol-ni This will be updated in a follow-up PR once we have a pre-release with the client pool, I assume?
Waiting for an update where the tests library is not missing files.
Source/Tests/Tests.Runtime/Measurement Server/Helpers/Get Enum Getters.vi
Source/Tests/Tests.Runtime/Measurement Server/MeasurementServerTests.lvlib
I thought about updating the examples, but I am hoping that …
So where do we expect to use this client pool first? Only internally, in the session manager VIs we rewrite? Do we expect customers to use them?
For now, we would use the client pool internally and it would not be directly exposed to customers. That is why all of the VIs are either private or community scoped. We could consider making it public, but I think what we would do instead is provide an API through the MeasurementContext class. Maybe something like …
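The comment above is truncated, so the exact proposal is unknown, but the idea of handing out pooled clients through the MeasurementContext class can be sketched. The Python below is purely hypothetical; every name in it is a placeholder, not the real LabVIEW API.

```python
# Hypothetical illustration only: the real API would be LabVIEW VIs on the
# MeasurementContext class, and the comment above is cut off, so the exact
# shape is unknown. All names here are placeholders.
from typing import Callable, Dict


class MeasurementContext:
    """Stand-in for the context object handed to each measurement."""

    def __init__(self, create_client: Callable[[str], object]) -> None:
        self._create_client = create_client    # factory supplied by the framework
        self._clients: Dict[str, object] = {}  # cache, hidden from the measurement

    def get_client(self, service_name: str) -> object:
        """Return a cached client for the named service, creating it on first use."""
        if service_name not in self._clients:
            self._clients[service_name] = self._create_client(service_name)
        return self._clients[service_name]


# A measurement only ever talks to the context; the pool itself stays private.
ctx = MeasurementContext(create_client=lambda name: object())
client_a = ctx.get_client("SessionManagementService")
client_b = ctx.get_client("SessionManagementService")
assert client_a is client_b  # the second call reuses the pooled client
```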
Does grpc-labview's pointer manager prevent this case from crashing? I have seen crashes due to gRPC ID lifetime bugs, but none of them specifically involved calling CloseClient while another VI was using the client.
Regarding when to destroy clients and clear the cache: …
Source/Runtime/MeasurementLink Measurement Server/Classes/GrpcClientPool/Destroy.vi
It appears to handle it correctly. I ran the following VI for a few minutes, and while it alternated between working correctly and throwing an error, it never crashed.
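The stress VI itself is a LabVIEW diagram that is not shown in this thread. Purely to illustrate the scenario being exercised (one caller using a client while another path closes and recreates it), here is a rough Python analogue; `FakeClient` is a stand-in, not a grpc-labview API, and errors are the expected outcome while crashes are not.

```python
# Rough analogue of the stress test described above: one thread keeps calling
# a client while another repeatedly closes and replaces it. Errors are an
# acceptable outcome; a crash would not be.
import threading
import time


class FakeClient:
    def __init__(self) -> None:
        self.closed = False

    def call(self) -> None:
        if self.closed:
            raise RuntimeError("client was closed")  # expected, recoverable error


client = FakeClient()
stop = threading.Event()
errors = 0


def caller() -> None:
    global errors
    while not stop.is_set():
        try:
            client.call()
        except RuntimeError:
            errors += 1  # tolerated, like the VI that alternated into an error state


def closer() -> None:
    global client
    while not stop.is_set():
        client.closed = True   # mimic CloseClient while the caller is still active
        client = FakeClient()  # mimic recreating the client afterwards
        time.sleep(0.01)


threads = [threading.Thread(target=caller), threading.Thread(target=closer)]
for t in threads:
    t.start()
time.sleep(1.0)
stop.set()
for t in threads:
    t.join()
print(f"finished without crashing; {errors} recoverable errors observed")
```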
What does this Pull Request accomplish?
This PR adds a gRPC client pool.

- `GrpcClientPool` class. The main API for this class is a LV2-style global that allows the pool to be shared across all measurements. Note that this could also cause problems, which I will detail below.
- `Location to Insecure Address.vi` VI. This VI has been written before, but now it is part of a reusable library. It is marked as community scoped for now, but could be made public if desired.

`GrpcClientPool` class

Currently, the `GrpcClientPool` is accessed via a LV2-style global. This means that it is shared by all measurements. This generally shouldn't be a problem, but it could be an issue when/if `Destroy` is called, which will destroy the clients for all measurements. However, even this isn't necessarily a problem: if the Session Manager service crashes, it will have crashed for all measurements, so it makes sense to destroy the client for all measurements.
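To make that concrete, here is a minimal sketch in Python (the real pool is a set of LabVIEW VIs behind a LV2-style global, so this is only an illustration). It assumes the pool caches one client per client type and address, and that `Destroy` clears the whole cache at once; the function names and the `create_client` factory are placeholders, not the actual VI names.

```python
# Minimal sketch of the behavior described above, assuming the pool caches one
# client per (client type, address) and that Destroy drops the entire cache.
# Illustrative Python only; create_client is a placeholder factory.
import threading
from typing import Callable, Dict, Tuple

_lock = threading.Lock()
_clients: Dict[Tuple[str, str], object] = {}  # module state ~ LV2-style global


def get_client(client_type: str, address: str,
               create_client: Callable[[str], object]) -> object:
    """Return the cached client for (client_type, address), creating it once."""
    key = (client_type, address)
    with _lock:
        if key not in _clients:
            _clients[key] = create_client(address)
        return _clients[key]


def destroy() -> None:
    """Drop every cached client; this affects all measurements at once."""
    with _lock:
        _clients.clear()
```

Because the cache is shared process-wide, a single call to `destroy()` in this sketch invalidates every measurement's cached client, which is the trade-off discussed next.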
The real issue is how we call `Destroy`. Currently it is called by the measurementlink-labview framework when a measurement throws any error, even if the error is unrelated to gRPC. That could result in clients getting destroyed even though the client is still valid. That might not matter much for the measurement that errored, but it will result in longer run times for other measurements. Below are a few ways we could fix this:

- `Destroy`. Don't call it for every error; instead, call it for errors that indicate the client needs to be recreated (see the sketch below). This has a few potential issues: …

Finally, this all depends on proper error handling from the customer. If the customer's measurement swallows errors and a gRPC client goes bad, they will be stuck.
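As a rough sketch of the selective-`Destroy` option above (Python again, purely illustrative): the status codes below are an assumed set chosen for the example, not a decision about which LabVIEW or gRPC errors actually warrant recreating a client.

```python
# Sketch of the selective-Destroy idea above. The set of status codes is an
# assumption for illustration; deciding which errors truly require recreating
# the client is part of what the description flags as a potential issue.
from typing import Optional

RECREATE_CLIENT_STATUSES = {"UNAVAILABLE", "UNKNOWN", "INTERNAL"}


def should_destroy_clients(grpc_status: Optional[str]) -> bool:
    """Destroy pooled clients only when the failure suggests a bad channel."""
    if grpc_status is None:
        return False  # the measurement failed for a non-gRPC reason
    return grpc_status in RECREATE_CLIENT_STATUSES


# A parameter-validation failure leaves the pool alone; a dead Session Manager
# connection triggers a rebuild.
assert not should_destroy_clients(None)
assert should_destroy_clients("UNAVAILABLE")
```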
Why should this Pull Request be merged?
This is something we should have done a long time ago, but it will be especially important with the upcoming session management work.
What testing has been done?
New automated tests. I also modified the DCPower shipping example to use the new VIs and verified that it worked as expected.