
feat!: correct configuration params, stream reconnects, add tests #104

Draft: aepfli wants to merge 43 commits into main from feat/grpc-sync-addition
Conversation

@aepfli (Member) commented Nov 18, 2024

This pull request contains all my previous changes, and it maintains all the configuration properties for everything.

I changed the initialization times to ms, to be in sync configuration-wise with the Java provider (which might not be a good idea).

I removed timeout and replaced it with deadline; additionally, I added a deprecation note.
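
For illustration, the deprecation can look roughly like this (class and parameter names here are hypothetical, not necessarily what this PR implements):

```python
import warnings

class FlagdProviderConfig:  # hypothetical name, for illustration only
    def __init__(self, deadline_ms: int = 500, timeout: float | None = None):
        # Accept the old seconds-based `timeout` for now, but warn callers
        # and convert it to the new millisecond-based `deadline_ms`.
        if timeout is not None:
            warnings.warn(
                "'timeout' is deprecated; use 'deadline_ms' (milliseconds) instead",
                DeprecationWarning,
                stacklevel=2,
            )
            deadline_ms = int(timeout * 1000)
        self.deadline_ms = deadline_ms
```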

Not every implementation takes all the corresponding configurations into account.

Resolves: #57
Resolves: #56
Resolves: #55

@toddbaert toddbaert changed the title from "[draft] syng pr with gherkin and testcontainers" to "[draft] sync pr with gherkin and testcontainers" Nov 18, 2024
@aepfli aepfli force-pushed the feat/grpc-sync-addition branch from 2db5904 to e826b3d Compare November 19, 2024 06:47

codecov bot commented Nov 19, 2024

Codecov Report

Attention: Patch coverage is 96.20690% with 11 lines in your changes missing coverage. Please review.

Project coverage is 94.28%. Comparing base (b62d3d1) to head (a37e5a3).
Report is 1 commit behind head on main.

Files with missing lines                                  Patch %   Lines
.../flagd/resolvers/process/connector/file_watcher.py    94.28%    4 Missing ⚠️
.../flagd/resolvers/process/connector/grpc_watcher.py    94.87%    4 Missing ⚠️
...d/src/openfeature/contrib/provider/flagd/config.py    96.66%    1 Missing ⚠️
...src/openfeature/contrib/provider/flagd/provider.py    85.71%    1 Missing ⚠️
.../contrib/provider/flagd/resolvers/process/flags.py    96.96%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #104      +/-   ##
==========================================
+ Coverage   93.92%   94.28%   +0.36%     
==========================================
  Files          14       17       +3     
  Lines         675      858     +183     
==========================================
+ Hits          634      809     +175     
- Misses         41       49       +8     


@aepfli aepfli force-pushed the feat/grpc-sync-addition branch from e826b3d to 96ecdc1 Compare November 19, 2024 06:52
aepfli and others added 18 commits November 22, 2024 14:06
(commits signed off by Simon Schrottner and Cole Bailey)
@aepfli aepfli force-pushed the feat/grpc-sync-addition branch from 96ecdc1 to 8a15f87 Compare November 22, 2024 15:19
@aepfli aepfli force-pushed the feat/grpc-sync-addition branch from 8a15f87 to 36269b5 Compare November 23, 2024 12:55
        return
    except grpc.RpcError as e:
        logger.error(f"SyncFlags stream error, {e.code()=} {e.details()=}")
        if e.code() == grpc.StatusCode.UNAVAILABLE:
@aepfli (Member Author)

Destroying the existing channel and creating a new one actually fixed the reconnect issue. I am not sure if this is best practice or whether it should be avoided.

Member

I wonder if we need to handle more cases than this? UNKNOWN perhaps? Just wondering if other sorts of network errors that result in the same bad state might not be handled because they return a different status.

@aepfli (Member Author)

For me it is really hard to judge; my gRPC knowledge is limited, and I would fully rely on your opinion. If you think it makes sense, let's go for it.

@toddbaert (Member) Nov 25, 2024

My gut says since we don't 100% understand why we observe this behavior, we should play it safe and recreate the stub on any grpc.RpcError; in particular I'm worried that if we don't, the stream deadline functionality might not work reliably.

I think there's not much risk here since this is only for the stream call and if we're having a problem with that, pretty much nothing else in this resolver mode is working anyway.
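
A minimal sketch of that approach, recreating the channel and stub on any grpc.RpcError (the stub factory, request message, and handler here are placeholders, not this provider's actual code):

```python
import logging
import time

import grpc

logger = logging.getLogger(__name__)

def sync_flags_forever(target, make_stub, request, handle):
    """Stream SyncFlags, rebuilding the channel on any RpcError."""
    while True:
        channel = grpc.insecure_channel(target)
        stub = make_stub(channel)  # e.g. the generated FlagSyncServiceStub
        try:
            for response in stub.SyncFlags(request):  # server-streaming call
                handle(response)
        except grpc.RpcError as e:
            logger.error(f"SyncFlags stream error, {e.code()=} {e.details()=}")
        finally:
            # Tearing down the channel, not just retrying the call, is what
            # fixed the reconnect issue observed in this PR.
            channel.close()
        time.sleep(2)  # illustrative fixed backoff before reconnecting
```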

Member

Done here: f91dd5c

@beeme1mr (Member)

Could you please clarify what part of the PR is breaking?

@toddbaert (Member)

> Could you please clarify what part of the PR is breaking?

@beeme1mr ya, like I said above:

> In any case, I will merge this EOD unless I hear objections.

(When I do, I will make sure the git log message includes details about the breaking changes, some of which I made with the grpc -> rpc rename.)

@gruebel (Member) commented Nov 25, 2024

Sorry, hard reject from me. This PR has way too many changes, which have nothing to do with each other and are debatable on their own, like adding a caching library or adding suppression comments such as # type:ignore[import-not-found]. This raises big red flags!

I don't really understand the reason to put everything into one big PR instead of tackling each topic on its own. Three issues are referenced, each of which could be done on its own, and #55 is just a collection of TODOs, which again can be separated into multiple PRs.

It's great @aepfli tackled all the issues, but not like this.

@toddbaert (Member) commented Nov 26, 2024

> Sorry, hard reject from me. This PR has way too many changes, which have nothing to do with each other and are debatable on their own, like adding a caching library or adding suppression comments such as # type:ignore[import-not-found]. This raises big red flags!
>
> I don't really understand the reason to put everything into one big PR instead of tackling each topic on its own. Three issues are referenced, each of which could be done on its own, and #55 is just a collection of TODOs, which again can be separated into multiple PRs.
>
> It's great @aepfli tackled all the issues, but not like this.

I appreciate that keeping PRs to a single topic can aid in reviewing and just make things easier.

However, this PR addresses real issues with this provider, some of which are hard to disentangle and benefit from going together (for instance, the provider has an issue with the specified reconnect functionality, and this PR both fixes that and implements the associated e2e tests).

Rejecting this PR is totally acceptable from my perspective, but I feel you should offer @aepfli some actionable feedback and next steps if you are going to do so (such as how he should decompose the PR and resolve the suppressions).

cc @federicobond @lukas-reining @thomaspoignant @kinyoklion

@gruebel (Member) commented Nov 26, 2024

@toddbaert you can check the latest PRs @aepfli created in this repository; he then suddenly closed them all and merged everything into this one. For what reason? They were good on their own and addressed separate topics, like auto-generating the gRPC files or caching.

@gruebel (Member) commented Nov 26, 2024

#108
#109
#110
#111

@lukas-reining (Member)

@gruebel I am with you that this PR is quite large, that it is a lot of work to review, and that each PR should ideally tackle one thing. Also, if this is really breaking, I would like to be sure that it is really needed. Especially changing from seconds to milliseconds could be a thing to avoid so as not to break the API.

But I would like to be sure that we look for a way forward instead of "just rejecting". We should be sure that the change can even be untangled before saying that we are not accepting it because of its size or the multiple solved issues. We could also give some hints on what we think should be separated, and try to make the work good for reviewers but also for the people opening PRs.

Maybe @aepfli can tell us whether these are separable, or we accept it this once and review this large PR, while continuing to try to keep PRs as small as possible.

@aepfli (Member Author) commented Nov 26, 2024

I can, and gladly will, split it up into four parts again:

  1. Protobuf generation: "build(flagd): auto generate proto files from schema" #109
  2. Events for RPC: "feat(flagd-rpc)!: add events for rpc mode, some breaking config fixes" #108
  3. Caching for RPC: "feat(flagd-rpc): add caching" #110 (depends on 1 and 2)
  4. Sync implementation with config adaptations (this will still stay big, as it is a new feature overall): this PR (depends on 3)

The config adaptations (ms vs. seconds) also make sense because they sync the behavior of environment variables across all providers. Java uses ms and is, IMHO, the most advanced flagd provider; hence, we should adhere to its environment variable usage and values, or else we would have far more things to change. We use the same environment variables to make it easy to configure multiple providers the same way. This provider has not reached version 1, making it a better target for change than Java.
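
For illustration, the millisecond-based settings this aligns with (variable names follow the flagd provider spec; the defaults shown are assumptions and may differ in this provider):

```python
import os

# Millisecond-based configuration, matching the Java provider / flagd spec.
deadline_ms = int(os.getenv("FLAGD_DEADLINE_MS", "500"))
retry_backoff_ms = int(os.getenv("FLAGD_RETRY_BACKOFF_MS", "1000"))

# Python's grpc API takes seconds, so convert at the call boundary.
timeout_s = deadline_ms / 1000
```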

The same applies to the resolver's name. Everywhere else it is named RPC; here we have GRPC. We should normalize this for consistency and easier maintenance in the long run.

A good thing is that all the implemented features fulfill the test-harness Gherkin files; we are compliant with flagd's expectations for a provider, which is confidence we did not have before. I also suggest not holding out for perfect code; this confidence allows us to iterate faster and ship improvements safely.

@beeme1mr (Member)

Thanks @aepfli! That should make it easier to review and make our release notes more accurate.

@toddbaert (Member) commented Nov 26, 2024

> The config adaptations (ms vs. seconds) also make sense because they sync the behavior of environment variables across all providers. Java uses ms and is, IMHO, the most advanced flagd provider; hence, we should adhere to its environment variable usage and values, or else we would have far more things to change. We use the same environment variables to make it easy to configure multiple providers the same way. This provider has not reached version 1, making it a better target for change than Java.
>
> The same applies to the resolver's name. Everywhere else it is named RPC; here we have GRPC. We should normalize this for consistency and easier maintenance in the long run.

Besides the fact that this is what Java does, these are actually in the spec for flagd providers and are important for interop with the Operator.

But it sounds like we have a plan for moving forward! Feel free to link your other PRs to this one and close this one whenever @aepfli

@aepfli (Member Author) commented Nov 26, 2024

> But it sounds like we have a plan for moving forward! Feel free to link your other PRs to this one and close this one whenever @aepfli

This one is the last in the chain; I will update it and mark it ready as soon as we are.

@aepfli aepfli marked this pull request as draft November 26, 2024 13:50
@aepfli aepfli force-pushed the feat/grpc-sync-addition branch from 28914fa to 6b21f96 Compare December 6, 2024 21:18
@aepfli aepfli force-pushed the feat/grpc-sync-addition branch from 6b21f96 to a37e5a3 Compare December 6, 2024 21:24