
feat: dependent feature toggles #168

Merged 8 commits from feat/dependent-feature-toggles into main on Oct 17, 2023
Conversation

daveleek (Collaborator)

Description

Adds support for dependent features to the .NET SDK. Also adds a warn-once component.

Type of change

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • Running spec tests locally
  • Adding and running a suite of unit tests

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

@kwasniew (Contributor) left a comment:

Overall it looks very good. Just some questions:

  • Why do we have to write all the tests inside this repo? Doesn't the specification cover everything we need?
  • Can we add a test showing that metrics are not recorded for the parent features?
  • Do we have impression events in the .NET SDK? If so, how do they behave with dependent features?

daveleek (Collaborator, Author) commented Oct 16, 2023

Good questions @kwasniew!

> Why do we have to write all the tests inside this repo? Doesn't the specification cover everything we need?

It's been the convention so far to also add local unit tests; they help with development and debugging. I'll rope in @sighphyre for opinions, though.

> Can we add a test showing that metrics are not recorded for the parent features?

Coming right up!

> Do we have impression events in the .NET SDK? If so, how do they behave with dependent features?

Yes, there's support for impression events. I'll test it and get back to you!

@daveleek daveleek closed this Oct 16, 2023
@daveleek daveleek reopened this Oct 16, 2023
kwasniew (Contributor) commented:

@daveleek, in the Node SDK I decided to emit impression events for dependent features' parents, to make debugging easier. However, metrics are not reported for the parent, to avoid skewing the results.
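The parent/child accounting described here can be sketched roughly as follows. This is a hypothetical illustration only, not the Node or .NET SDK's actual API; every name in it (`FeatureToggle`, `isEnabled`, the `metrics`/`impressions` arrays) is invented for the example.

```typescript
// Hypothetical sketch of the behaviour described above: parent checks fire
// impression events, but only the toggle the caller asked about is counted
// in metrics. None of these names come from the actual SDKs.

interface FeatureToggle {
  name: string;
  enabled: boolean;
  dependencies?: { feature: string; enabled?: boolean }[];
}

const metrics: string[] = [];     // toggles counted towards usage metrics
const impressions: string[] = []; // toggles that fired impression events

function isEnabled(
  repo: Map<string, FeatureToggle>,
  name: string,
  isParentCheck = false
): boolean {
  const toggle = repo.get(name);
  if (!toggle) return false;

  // Parent checks still fire impression events, to make debugging easier...
  impressions.push(name);

  // ...but parent evaluations are excluded from metrics, so they
  // don't skew the results.
  if (!isParentCheck) metrics.push(name);

  if (!toggle.enabled) return false;

  // Every parent must evaluate to its expected state.
  return (toggle.dependencies ?? []).every(
    (dep) => isEnabled(repo, dep.feature, true) === (dep.enabled ?? true)
  );
}

const repo = new Map<string, FeatureToggle>([
  ["parent", { name: "parent", enabled: true }],
  ["child", { name: "child", enabled: true, dependencies: [{ feature: "parent" }] }],
]);

isEnabled(repo, "child");
// impressions now holds ["child", "parent"]; metrics holds only ["child"]
```

The asymmetry is the design choice under discussion: impression events trace every evaluation (including parent checks) for debugging, while metrics count only user-facing evaluations.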

sighphyre (Member) commented Oct 16, 2023

> Good questions @kwasniew!
>
> Why do we have to write all the tests inside this repo? Doesn't the specification cover everything we need?
>
> It's been the convention so far to also add local unit tests; they help with development and debugging. I'll rope in @sighphyre for opinions, though.

I'd lean towards reducing the tests here. This is a really well-covered feature in the client specs, and I don't see much in your tests that isn't already doing what the spec is doing. Usually I'm fussy about having tests that cover cases the spec doesn't for your particular implementation, or cases where a unit test is much easier to deal with than the spec. This is a pretty simple feature, though.

Things I'd like to see tested:

  • What do metrics do here? (I don't think this is covered; we could use some tests here)
  • What should impression events do? (I think you're working on this already)
  • Does the response parse correctly and set the right defaults? (this does look covered)
  • Does the feature work as expected? (covered through the client spec; I don't see anything in the code that would require additional testing for the actual implementation)

daveleek (Collaborator, Author) commented Oct 17, 2023

@sighphyre

> I'd lean towards reducing the tests here...
>
> ... or cases where a unit test is much easier to deal with than the spec. This is a pretty simple feature, though.

I guess I could trim some tests, but they've been valuable for verifying my understanding at development time and as debugging tools, so I'd like to keep a few in.

> Things I'd like to see tested:
>
> What do metrics do here? (I don't think this is covered; we could use some tests here)

Added a test for that, and moved the counting around to fit the use case (@kwasniew).

> What should impression events do? (I think you're working on this already)

Also added a test for this. As @kwasniew noted, and contrary to metrics, impression events should be fired for every evaluation, including parent checks.

> Does the response parse correctly and set the right defaults? (this does look covered)

This should be covered; I have attempted to cover it, at least.

> Does the feature work as expected? (covered through the client spec; I don't see anything in the code that would require additional testing for the actual implementation)

👍

sighphyre (Member) commented:

> @sighphyre
>
> > I'd lean towards reducing the tests here...
> >
> > ... or cases where a unit test is much easier to deal with than the spec. This is a pretty simple feature, though.
>
> I guess I could trim some tests, but they've been valuable for verifying my understanding at development time and as debugging tools, so I'd like to keep a few in.

If you find them useful, I'm happy for them to stay. At some point we should get to where it's nice to use the client spec, but that's definitely out of scope here (JS and Ruby are pretty good at this).

@sighphyre (Member) left a comment:

Happy if you are

@kwasniew (Contributor) left a comment:

I'm curious whether you used the client spec in development? You said that manually written tests were useful in development, but in the Node SDK I used the client spec for that. On the other hand, I had to wade through hundreds of lines of test code to find the metrics test I wanted to check. I'm trying to understand what exactly we are missing from the shared client spec tests. Are the .NET tests easier to work with?

daveleek (Collaborator, Author) commented Oct 17, 2023

@kwasniew

> I'm curious whether you used the client spec in development?

I did use that as well, but mostly after finishing large portions of code. Whenever something was wrong, I could reproduce it with a specific unit test and debug from there.

> I'm trying to understand what exactly we are missing from the shared client spec tests. Are the .NET tests easier to work with?

I'm not going to say the .NET tests are awesome, but for me it boils down to a combination of two things:

  • Habit: I'm used to unit tests showing me that what I'm building, and my understanding of it, is sane
  • Convenience: unit tests are (for now) easier to work with on a case-by-case basis. We could probably instrument the spec test helpers to give more granular control over which tests to run, but I have no idea how much work that would involve, so short term it's easier to just keep doing what I'm doing

I'm also less comfortable with the implementation of the spec parsing; I worry about what might be missing there while still giving false positives, so I use unit tests as a way of reassuring myself.

With that said, I find the spec massively useful as an overview of where I'm at with the implementation (30/40/90% "done", checking off a green board), and also as a source of knowledge and insight into the feature and how it's expected to work.

@daveleek daveleek merged commit d447906 into main Oct 17, 2023
1 check passed
@daveleek daveleek deleted the feat/dependent-feature-toggles branch October 17, 2023 11:49