
Taking weighting seriously #487

Open · wants to merge 89 commits into master

Conversation

@gragusa commented Jul 15, 2022

This PR addresses several problems with the current GLM implementation.

Current status
In master, GLM/LM only accept weights through the keyword argument wts. These weights are implicitly treated as frequency weights.

With this PR
FrequencyWeights, AnalyticWeights, and ProbabilityWeights are now supported. The API is the following:

## Frequency Weights
lm(@formula(y ~ x), df; wts=fweights(df.wts))
## Analytic Weights
lm(@formula(y ~ x), df; wts=aweights(df.wts))
## Probability Weights
lm(@formula(y ~ x), df; wts=pweights(df.wts))

The old behavior of passing a plain vector (wts=df.wts) is deprecated; for the moment, the array df.wts is coerced to FrequencyWeights.

To allow dispatching on the weights, CholPred takes a type parameter T<:AbstractWeights. Unweighted LM/GLM models use UnitWeights as this parameter.
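A minimal mock of this design (hypothetical names throughout, not GLM.jl's actual definitions) showing how parameterizing the predictor on the weight type lets each case get its own method:

```julia
# Mock stand-ins for StatsBase's weight types and GLM's CholPred,
# purely to illustrate dispatch on the weight-type parameter.
abstract type MockWeights end
struct MockUnitWeights <: MockWeights end
struct MockFrequencyWeights <: MockWeights
    w::Vector{Float64}
end

struct MockCholPred{W<:MockWeights}
    X::Matrix{Float64}
    wts::W
end

# Unweighted model: plain cross-product X'X
crossprod(p::MockCholPred{MockUnitWeights}) = p.X' * p.X
# Weighted model: X'WX, with the diagonal weight matrix applied row-wise
crossprod(p::MockCholPred{MockFrequencyWeights}) = p.X' * (p.wts.w .* p.X)

X = [1.0 0.0; 0.0 1.0; 1.0 1.0]
unweighted = MockCholPred(X, MockUnitWeights())
weighted = MockCholPred(X, MockFrequencyWeights([1.0, 1.0, 2.0]))
crossprod(unweighted)  # hits the UnitWeights method
crossprod(weighted)    # hits the FrequencyWeights method
```

The point of the type parameter is that the compiler selects the right method at specialization time, with no runtime `if wts isa UnitWeights` branch.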

This PR also implements residuals(r::RegressionModel; weighted::Bool=false) and modelmatrix(r::RegressionModel; weighted::Bool=false). The new signature for these two methods is pending in StatsAPI.

There are many changes I had to make to get everything working. Tests are passing, but some of the new features need new tests. Before implementing them, I wanted to make sure the overall approach is acceptable.

I have also implemented momentmatrix, which returns the estimating function of the estimator. I arrived at the conclusion that it does not make sense to have a weighted keyword argument there, so I will amend JuliaStats/StatsAPI.jl#16 to remove that keyword from the signature.
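To make the idea concrete, here is a generic sketch (hypothetical helper name, not the PR's code) of what a moment matrix looks like for weighted least squares: row i holds the estimating-function contribution wᵢ(yᵢ - xᵢ'β̂)xᵢ', and at the fitted coefficients the columns sum to zero, since those are exactly the first-order conditions:

```julia
# Weighted-OLS estimating function: row i is wᵢ * residualᵢ * xᵢ'.
function momentmatrix_ols(X, y, w)
    β = (X' * (w .* X)) \ (X' * (w .* y))  # weighted least-squares fit
    r = y .- X * β                         # residuals at the fitted β
    return (w .* r) .* X                   # n×p matrix of score contributions
end

X = [ones(5) collect(1.0:5.0)]
y = [1.1, 1.9, 3.2, 3.9, 5.1]
w = [1.0, 2.0, 1.0, 2.0, 1.0]
M = momentmatrix_ols(X, y, w)
sum(M; dims=1)  # ≈ zero row vector: the first-order conditions X'W(y - Xβ̂) = 0
```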

Update

I think I covered all the suggestions/comments, with one exception that I still have to think about; maybe it can be addressed later. The new standard errors (the ones for ProbabilityWeights) also work in the rank-deficient case (and so does cooksdistance).

Tests are passing, and I think they cover everything I have implemented. I also added a section to the documentation about using weights and updated the jldoctests with the new signature of CholeskyPivoted.
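For reference, the standard errors mentioned above for probability weights follow the textbook sandwich form. This is an illustrative sketch of that formula (not GLM.jl's internal code; the function name is hypothetical):

```julia
using LinearAlgebra

# Sandwich covariance for probability-weighted linear regression:
#   V̂(β̂) = (X'WX)⁻¹ · (Σᵢ wᵢ² êᵢ² xᵢxᵢ') · (X'WX)⁻¹
function pweights_vcov(X, y, w)
    bread = inv(X' * (w .* X))             # (X'WX)⁻¹
    β = bread * (X' * (w .* y))            # weighted LS coefficients
    e = y .- X * β                         # residuals
    meat = X' * ((w .^ 2 .* e .^ 2) .* X)  # Σ wᵢ² êᵢ² xᵢxᵢ'
    return bread * meat * bread
end

X = [ones(6) collect(1.0:6.0)]
y = [1.0, 2.1, 2.9, 4.2, 4.8, 6.1]
w = [1.0, 0.5, 1.5, 1.0, 0.5, 1.5]
V = pweights_vcov(X, y, w)
```

Unlike the frequency-weight variance, this estimator does not assume the weights count repeated observations, which is why the two weight types yield different standard errors for the same coefficients.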

To do:

  • Deal with weighted standard errors with rank deficient designs
  • Document the new API
  • Improve testing

Closes #186.

@codecov-commenter commented Jul 16, 2022

Codecov Report

Attention: Patch coverage is 74.58746% with 77 lines in your changes missing coverage. Please review.

Project coverage is 85.38%. Comparing base (3e4114f) to head (c4f7959).
Report is 1 commit behind head on master.

Files with missing lines Patch % Lines
src/linpred.jl 72.22% 30 Missing ⚠️
src/lm.jl 71.08% 24 Missing ⚠️
src/glmfit.jl 79.24% 22 Missing ⚠️
src/glmtools.jl 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #487      +/-   ##
==========================================
- Coverage   90.11%   85.38%   -4.74%     
==========================================
  Files           8        8              
  Lines        1123     1286     +163     
==========================================
+ Hits         1012     1098      +86     
- Misses        111      188      +77     


@lrnv commented Jul 20, 2022

Hey,

Would that fix the issue I am having? If rows of the data contain missing values, GLM discards those rows but does not discard the corresponding entries of df.weights, and then complains that there are too many weights.

I think the interface should allow passing the weights as a DataFrame column, which would take care of such things (as it does for the other variables).
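Until something like that lands, a plain-Julia workaround (a hypothetical helper, not part of GLM) is to filter the response, predictors, and weights with the same completeness mask so their lengths stay in sync:

```julia
# Keep only rows where neither y nor x is missing, and subset the
# weight vector with the same mask so the lengths stay consistent.
function complete_rows(y, x, w)
    keep = .!(ismissing.(y) .| ismissing.(x))
    return y[keep], x[keep], w[keep]
end

y = [1.0, missing, 3.0, 4.0]
x = [0.5, 1.0, missing, 2.0]
w = [1.0, 2.0, 3.0, 4.0]
y2, x2, w2 = complete_rows(y, x, w)  # rows 2 and 3 dropped everywhere
```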

@gragusa (Author) commented Jul 20, 2022

Would that fix the issue I am having? If rows of the data contain missing values, GLM discards those rows but does not discard the corresponding entries of df.weights, and then complains that there are too many weights.

Not really, but it would be easy to add as a feature. Before digging further into this, though, I would like to know whether there is consensus on the approach of this PR.

@alecloudenback commented Aug 14, 2022

FYI, this appears to fix #420; a PR was started in #432, but the author closed it for lack of time to investigate the CI failures.

Here's the test case pulled from #432, which passes with the changes in #487.

@testset "collinearity and weights" begin
    rng = StableRNG(1234321)
    x1 = randn(100)
    x1_2 = 3 * x1
    x2 = 10 * randn(100)
    x2_2 = -2.4 * x2
    y = 1 .+ randn() * x1 + randn() * x2 + 2 * randn(100)
    df = DataFrame(y = y, x1 = x1, x2 = x1_2, x3 = x2, x4 = x2_2, weights = repeat([1, 0.5],50))
    f = @formula(y ~ x1 + x2 + x3 + x4)
    lm_model = lm(f, df, wts = df.weights)#, dropcollinear = true)
    X = [ones(length(y)) x1_2 x2_2]
    W = Diagonal(df.weights)
    coef_naive = (X'W*X)\X'W*y
    @test lm_model.model.pp.chol isa CholeskyPivoted
    @test rank(lm_model.model.pp.chol) == 3
    @test isapprox(filter(!=(0.0), coef(lm_model)), coef_naive)
end

Can this test set be added?

Is there any other feedback for @gragusa ? It would be great to get this merged if good to go.

@nalimilan (Member)

Sorry for the long delay, I hadn't realized you were waiting for feedback. Looks great overall, please feel free to finish it! I'll try to find the time to make more specific comments.

@nalimilan (Member) left a comment

I've read the code. Lots of comments, but all of these are minor. The main one is mostly stylistic: in most cases it seems that using if wts isa UnitWeights inside a single method (like the current structure) gives simpler code than defining several methods. Otherwise the PR looks really clean!

What are your thoughts regarding testing? There are a lot of combinations to test, and it's not easy to see how to integrate that into the current organization of the tests. One way would be to add code for each kind of test to each @testset that checks a given model family (or a particular case, like collinear variables). There's also the issue of testing the QR factorization, which isn't used by default.

@bkamins (Contributor) commented Aug 31, 2022

A very nice PR. In the tests, can we have a test set that compares the results of aweights, fweights, and pweights on the same data (coefficients, predictions, covariance matrix of the estimates, p-values, etc.)?

@nalimilan (Member) commented Nov 20, 2022

CI failures on Julia 1.0 can be fixed by requiring Julia 1.6 (more and more packages have started doing that).

@alecloudenback

Sorry for the noise, but thank you @gragusa and reviewers for this big PR. As a user, I've been waiting for weighting support for a while and appreciate the technical expertise and dedication to quality here.

@gragusa (Author) commented Nov 22, 2022

@nalimilan let's give this a final push. Should I rebase this PR against #339? (rhetorical question!) What's the most efficient way?

@nalimilan (Member)

Yes the PR needs to be rebased against master -- or, simpler, merge master into the branch. Most conflicts seem relatively simple to resolve. You can try doing this online on GitHub, though there's always a chance that it won't be 100% correct the first time. Otherwise you can do that locally with git fetch; git merge origin/master. Or I can do it in a few days if you want.

@gragusa (Author) commented Nov 23, 2022 via email

@nalimilan (Member) left a comment

Thanks for rebasing! I have more comments, and @bkamins had made a few above too.

1.8686815106332157 0.0 0.0 0.0 1.8686815106332157;
0.010149793505874801 0.010149793505874801 0.0 0.0 0.010149793505874801;
-1.8788313148033928 -0.0 -1.8788313148033928 -0.0 -1.8788313148033928]
@test mm0_pois ≈ GLM.momentmatrix(gm_pois) atol=1e-06
Review comment (Member): Remove double space here and elsewhere.

f = @formula(admit ~ 1 + rank)
gm_bin = fit(GeneralizedLinearModel, f, admit_agr, Binomial(); rtol=1e-8)
gm_binw = fit(GeneralizedLinearModel, f, admit_agr, Binomial(),
wts=aweights(admit_agr.count); rtol=1e-08)
Review comment (Member): Any reason to use analytic weights rather than frequency weights? Here I think the latter make more sense for this dataset.

Comment on lines +136 to +140
- `FrequencyWeights` describe the inverse of the sampling probability for each observation,
providing a correction mechanism for under- or over-sampling certain population groups.
These weights may also be referred to as sampling weights.
- `ProbabilityWeights` describe how the sample can be scaled back to the population.
Usually are the reciprocals of sampling probabilities.
Review comment (Member): Let's use the same wording as in StatsBase for simplicity. If we want to improve it, we'll change it everywhere.

Suggested change
- `FrequencyWeights` describe the inverse of the sampling probability for each observation,
providing a correction mechanism for under- or over-sampling certain population groups.
These weights may also be referred to as sampling weights.
- `ProbabilityWeights` describe how the sample can be scaled back to the population.
Usually are the reciprocals of sampling probabilities.
- `FrequencyWeights` describe the number of times (or frequency) each observation was seen.
These weights may also be referred to as case weights or repeat weights.
- `ProbabilityWeights` represent the inverse of the sampling probability for each observation,
providing a correction mechanism for under- or over-sampling certain population groups.
These weights may also be referred to as sampling weights.

fitted, fit, fit!, model_response, response, modelmatrix, r2, r², adjr2, adjr²,
cooksdistance, hasintercept, dispersion
cooksdistance, hasintercept, dispersion, weights, AnalyticWeights, ProbabilityWeights, FrequencyWeights,
UnitWeights, uweights, fweights, pweights, aweights, leverage

Review comment (Member): Add the description of the weights types to COMMON_FIT_KWARGS_DOCS below.

@jeremiedb

@nalimilan are there remaining fixes needed to complete this PR? I was worried the important work brought by this PR would lose its momentum.

@gragusa (Author) commented Mar 1, 2023 via email

@nalimilan (Member)

Sure!

@bkamins (Contributor) commented Mar 5, 2023

While we are at weights, my question is whether we should not also update the ftest implementation. The issue is that ftest currently assumes that nobs and dof are integers (which need not hold with weights).

@ParadaCarleton commented Mar 5, 2023

While we are at weights, my question is whether we should not also update the ftest implementation. The issue is that ftest currently assumes that nobs and dof are integers (which need not hold with weights).

ftest, in its original form, won’t work here, I think. Mentioning @smishr who might know more about the specifics here.

@bkamins (Contributor) commented Mar 5, 2023

ftest, in its original form, won’t work here

I agree that we would need to carefully consider all cases of weights. I have not thought about probability weights. However, for frequency weights and analytic weights, assuming we produce a correct deviance and dof_residual, things should be correct.

I have just checked against the examples in Wooldridge, chapter 8, and properly scaled analytic weights produce the correct results.
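A quick sanity check of the scaling point (a generic weighted-least-squares property, not tied to GLM.jl's code): the WLS coefficients are invariant to rescaling the weight vector, while quantities like sum(w) are not, which is exactly why analytic weights must be scaled properly before dof-based statistics are computed:

```julia
# WLS coefficients: solve (X'WX) β = X'Wy
wls_coef(X, y, w) = (X' * (w .* X)) \ (X' * (w .* y))

X = [ones(5) collect(1.0:5.0)]
y = [1.0, 2.2, 2.8, 4.1, 5.0]
w = [1.0, 2.0, 3.0, 2.0, 1.0]
β1 = wls_coef(X, y, w)
β2 = wls_coef(X, y, 10 .* w)  # rescaled weights: identical coefficients
```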

Also, in general, I think we should ensure that every function in GLM.jl that accepts model estimated with weighting should either:

  • error if it does not produce a correct result, or
  • work correctly

(it does not have to be in this PR, but if we are taking weighting seriously, I think we should ensure this property when we make a release)

Thank you for working on this!

@smishr commented Mar 31, 2023

In survey datasets, weights are commonly calibrated to sum up to an (integral) population size.

While we are at weights, my question is whether we should not also update the ftest implementation. The issue is that ftest currently assumes that nobs and dof are integers (which need not hold with weights).

While most applications of the F-test have integral dof, the F distribution is continuous and well-defined for non-integral values of dof.

In R:

> df(1.2, df1 = 10, df2 = 20)
[1] 0.5626125
> df(1.2, df1 = 10, df2 = 20.1)
[1] 0.5630353

Julia and R agree:

julia> using Distributions

julia> d = FDist(10, 20)
FDist{Float64}(ν1=10.0, ν2=20.0)

julia> pdf(d, 1.2)
0.5626124566227022

julia> d = FDist(10, 20.1)
FDist{Float64}(ν1=10.0, ν2=20.1)

julia> pdf(d, 1.2)
0.5630352744353205

There is a StackExchange post discussing non-integral dof for t-tests, and another one for GAMs.

ftest, in its original form, won’t work here, I think. Mentioning @smishr who might know more about the specifics here.

The F-test is essentially a ratio of two variances. For the weighted GLM case, variances based on weighted least squares could be used to compute the test statistic.
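A sketch of that idea (standard nested-model F formula; the helper names are hypothetical, and taking the residual dof as sum(w) - p is one possible convention, not GLM.jl's implementation). Note that the second dof can come out non-integral, which, as shown above, the F distribution handles fine:

```julia
# Weighted residual sum of squares of a least-squares fit
wrss(X, y, w) = (β = (X' * (w .* X)) \ (X' * (w .* y)); sum(w .* (y .- X * β) .^ 2))

# F statistic comparing a full model against a nested restricted one
function weighted_ftest(X_full, X_restr, y, w)
    rss1, rss0 = wrss(X_full, y, w), wrss(X_restr, y, w)
    q  = size(X_full, 2) - size(X_restr, 2)  # number of restrictions
    ν2 = sum(w) - size(X_full, 2)            # residual dof, possibly non-integral
    F  = ((rss0 - rss1) / q) / (rss1 / ν2)
    return F, q, ν2
end

X_full  = [ones(6) collect(1.0:6.0)]
X_restr = ones(6, 1)  # intercept-only model
y = [1.0, 2.1, 2.9, 4.2, 4.8, 6.1]
w = [1.0, 0.7, 1.3, 1.0, 0.7, 1.4]  # weights summing to a non-integer
F, q, ν2 = weighted_ftest(X_full, X_restr, y, w)
```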

Note: whether an (adjusted) F-test is the right approach for comparing weighted GLM models at all is up for debate...

@ParadaCarleton

Hmm, did any of the people who worked on Survey.jl leave comments here? @iuliadmtru @aviks

@gragusa (Author) commented Jun 16, 2023

I finally found the time to rebase this PR against the latest main repository. Tests pass locally; let's see whether they pass on the CI.

I have a few days of "free" time and would like to finish this. @nalimilan, it is difficult to track the comments and which ones were addressed by the various commits. On my side, the primary outstanding decision is about weight scaling. But before engaging in that conversation, I will add documentation so that whoever contributes to the discussion can do so coherently.

Test passed!

@nalimilan (Member)

Cool. Do you need any input from my side?

@SamuelMathieu-code

Hi there! I wonder what will happen to this PR? As I understand it, one review from a person with write access is needed?

@gragusa (Author) commented Feb 12, 2024 via email

@andreasnoack (Member)

@gragusa Any chance that you'd be able to look at the remaining items here? It would be good to get this in for a 2.0 release.

@gragusa (Author) commented Nov 19, 2024

@andreasnoack I merged my branch with base. Tests are passing (documentation is failing, but that is easy to fix). There were a few outstanding decisions to make (mostly about ftest and other peripheral methods), but I need to review the code and see where we stand. I only have a little time, but with some help I could add the finishing touches. For instance, JuliaStats/StatsAPI.jl#16 eventually needs to be merged.


Successfully merging this pull request may close these issues.

Path towards GLMs with fweights, pweights, and aweights