Revised distributions for stochasticmux #171
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##              main     #171      +/-   ##
==========================================
- Coverage    97.78%   97.67%   -0.12%
==========================================
  Files            8        8
  Lines          542      559      +17
==========================================
+ Hits           530      546      +16
- Misses          12       13       +1
There's a bit of weirdness here in the initialization with binomial mode. (Will come back to this later...)

The binomials are parametrized in terms of the weight distribution over the active set. The weirdness arises when we initialize streamers one at a time: the weight distribution is not fully known until the first batch of active streamers is fully initialized, so the calculations for the binomial parameters will be off at first.

Now, we could hack around this by pre-determining the weights so that everything is primed properly. However, I think it might actually be beneficial to leave it as is, because it injects more randomness in the rate distributions early on, which ought to have a comparable effect to having random offsets in a burn-in phase as @ejhumphrey suggested in #132.

It's a bit weird, but given the potentially dynamic nature of the active stream distribution (especially in exhaustive mode), it won't be possible to always ensure that the rate distribution for a streamer is "correct" over time. The best we can do is sample the rate value according to whatever the distribution will be at the time the streamer is activated.
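To make the ordering issue concrete, here's a minimal sketch of the kind of draw I'm describing. This is not pescador's actual implementation: the names (`sample_n_samples`, `active_weights`, `new_weight`) and the exact binomial parametrization are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_n_samples(mode, rate, active_weights, new_weight):
    """Hypothetical per-streamer sample-count draw (not the real pescador code).

    Only the binomial mode depends on the weights of the already-active
    streamers, which is why activation order matters for it.
    """
    if mode == "const":
        return rate                       # deterministic, weight-independent
    if mode == "poisson":
        return rng.poisson(rate)          # random, still weight-independent
    if mode == "binomial":
        # This streamer's share of the (currently known) active weight mass.
        # If the active set starts out as all zeros, p == 1 for the very
        # first streamer and the draw degenerates to exactly `rate` samples.
        p = new_weight / (np.sum(active_weights) + new_weight)
        n = int(np.ceil(rate / p))        # chosen so that E[draw] is roughly rate
        return rng.binomial(n, p)
    raise ValueError(f"unknown mode: {mode}")
```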
Having slept on it, I think a better solution here is to initialize the active set weights by a random draw from the weights array instead of with zeros. This won't have any effect on const or poisson, but it will put the binomial mode in a less quirky position at initialization time.
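Roughly what I have in mind; again this is just a sketch with made-up names (`weights`, `n_active`), not the real initialization code:

```python
import numpy as np

rng = np.random.default_rng(0)

# `weights` is the full per-streamer weight array; `n_active` is the size
# of the active set.
weights = np.array([0.5, 1.0, 2.0, 1.0, 0.5])
n_active = 3

# Current behavior (sketch): active-set weights start at zero, so the first
# binomial draws see a degenerate weight distribution.
active_weights = np.zeros(n_active)

# Proposed behavior (sketch): seed the active-set weights with random draws
# from the known weights array, so early binomial draws see a plausible
# (if not exact) weight distribution.  const and poisson are unaffected.
active_weights = rng.choice(weights, size=n_active, replace=True)
```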
@cjacoby 👋 I know it's been a gajillion years, but do you have any interest in looking this over? I think it's basically good to go, but it does have some kinda breaky behavior relative to older versions that I'd like to get another set of eyes on. Quick TLDR is summarized in #148 (comment)
lgtm, other than I think a small comment improvement would help (for me in 6 months to a year from now, when I come back and can't remember what this is about).
Ok, doc section is added and back-link is included. I tried to clean it up a bit from my original notebook (4 years ago!) and put in some expository text. Hopefully it makes sense?
This PR implements several changes described in #148:

- new distribution modes: const and binomial (binomial is the new default); both are sketched below, along with poisson
- distributions are now parametrized by rate and not rate+1
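A rough sketch of how the three modes relate to rate; the parametrizations here are illustrative, not necessarily the exact ones in this PR:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 4

n_const = rate                                # E[n] = rate, no randomness
n_poisson = rng.poisson(rate)                 # E[n] = rate
n_binomial = rng.binomial(n=2 * rate, p=0.5)  # E[n] = n * p = rate

# For contrast, an implicit +1 offset (e.g. 1 + rng.poisson(rate)) is one
# way to end up with an expectation of rate + 1 instead of rate.
```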
I've also relaxed the uniform convergence unit tests. A p-value of >= 0.95 was probably overkill for the sample size we were drawing, and I've reduced it to 0.5. Strangely, poisson was giving me the most trouble here, while const and binomial were behaving better. It's probably an artifact of setting rate=2 in the test.
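For what it's worth, the shape of the check being relaxed looks roughly like this; it's a hypothetical sketch, not the actual test in the suite, and `check_uniform` and the example counts are made up:

```python
import numpy as np
import scipy.stats


def check_uniform(counts, p_threshold=0.5):
    """Fail only if per-streamer sample counts are wildly non-uniform.

    With a small sample size, requiring p >= 0.95 rejects too many
    perfectly reasonable runs; p >= 0.5 is a much looser sanity check.
    """
    _, p_value = scipy.stats.chisquare(counts)
    assert p_value >= p_threshold


# Example: how often each of 5 streamers was sampled in a run.
check_uniform(np.array([190, 205, 198, 210, 197]))
```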