Add updates to wald_test method and unit testing files #536
Conversation
I think the alert might be raised because the `vcov` method allows a … (see `pyfixest/pyfixest/estimation/feols_.py`, line 360 in 405ba73, and line 441 in 405ba73).

I think if we apply … Anyway, if that is not it, I would suggest you continue with …
Yes, this certainly fixes the issue. It also puzzles me, as I didn't encounter this error before ;). Stay tuned for my next update!

Looking forward to it! 😄
Several points should be mentioned regarding the Wald test.

One concern I have is that I didn't test the case with …

Thanks!
Hi @Jayhyung, thanks for the PR! I have started with a first review. Overall, this looks very good! =) I am really surprised that …

One way to test for different values of … I.e. we could test that …
On the … Here is the code for `fixest::wald`:

```r
wald = function (x, keep = NULL, drop = NULL, print = TRUE, vcov, se,
    cluster, ...)
{
    check_arg(x, "class(fixest)")
    check_arg(keep, drop, "NULL character vector no na")
    if (isTRUE(x$onlyFixef))
        return(NA)
    dots = list(...)
    if (!isTRUE(x$summary) || !missing(vcov) || !missing(se) ||
        !missing(cluster) || ...length() > 0) {
        x = summary(x, vcov = vcov, se = se, cluster = cluster,
            ...)
    }
    if (missing(keep) && missing(drop)) {
        drop = "\\(Intercept\\)"
    }
    coef_name = names(x$coefficients)
    coef_name = keep_apply(coef_name, keep)
    coef_name = drop_apply(coef_name, drop)
    if (length(coef_name) == 0)
        return(NA)
    qui = names(x$coefficients) %in% coef_name
    my_coef = x$coefficients[qui]
    df1 = length(my_coef)
    df2 = x$nobs - x$nparams
    stat = drop(my_coef %*% solve(x$cov.scaled[qui, qui]) %*%
        my_coef)/df1
    p = pf(stat, df1, df2, lower.tail = FALSE)
    vcov = attr(x$cov.scaled, "type")
    vec = list(stat = stat, p = p, df1 = df1, df2 = df2, vcov = vcov)
    if (print) {
        cat("Wald test, H0: ", ifsingle(coef_name, "", "joint "),
            "nullity of ", enumerate_items(coef_name), "\n",
            sep = "")
        cat("stat = ", numberFormatNormal(stat), ", p-value ",
            ifelse(p < 2.2e-16, "< 2.2e-16", paste0("= ", numberFormatNormal(p))),
            ", on ", numberFormatNormal(df1), " and ", numberFormatNormal(df2),
            " DoF, ", "VCOV: ", vcov, ".", sep = "")
        return(invisible(vec))
    }
    else {
        return(vec)
    }
}
```

The relevant part of the code is this one:

```r
df1 = length(my_coef)
df2 = x$nobs - x$nparams
```

```r
library(fixest)
data(mtcars)

fit = feols(mpg ~ cyl + as.factor(hp), data = mtcars)
my_coef = fit$coefficients
df1 = length(my_coef)
df2 = fit$nobs - fit$nparams
df1          # 23
df2          # 9
fit$nparams  # 23 -> covariates + swept-out fixed effects

fit = feols(mpg ~ cyl | hp, data = mtcars)
my_coef = fit$coefficients
df1 = length(my_coef)
df2 = fit$nobs - fit$nparams
df1          # 1
df2          # 9
fit$nparams  # 23
```

For …, `pyfixest` uses

```python
dfn = _N - _k_fe - _k
dfd = _k
```

which is already what is implemented =) Interestingly, there is no special logic for clustered errors, i.e. no

```python
if self._is_clustered:
    self._dfd = np.min(np.array(self._G)) - 1
```

clause? Additionally, would you mind adding an exact test of …?
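For concreteness, the F-form Wald statistic that `fixest::wald` computes above (stat = b′V⁻¹b / df1, p-value from an F(df1, df2) distribution) can be sketched in a few lines of numpy/scipy. This is an illustrative sketch, not pyfixest's actual implementation; the function name and signature are made up:

```python
import numpy as np
from scipy import stats


def wald_f(coef, vcov, n_obs, n_params):
    """Joint-nullity Wald test in the F form used by fixest::wald.

    coef: (k,) estimated coefficients under test
    vcov: (k, k) their variance-covariance matrix
    n_obs, n_params: give the denominator dof, df2 = n_obs - n_params
    """
    coef = np.asarray(coef, dtype=float)
    df1 = coef.size
    df2 = n_obs - n_params
    # stat = b' V^{-1} b / df1, solved rather than explicitly inverting V
    stat = float(coef @ np.linalg.solve(vcov, coef)) / df1
    p = stats.f.sf(stat, df1, df2)  # upper-tail F probability
    return stat, p


# toy example: two coefficients with identity vcov -> stat = (1 + 4) / 2
stat, p = wald_f([1.0, 2.0], np.eye(2), n_obs=100, n_params=3)
```

The same formula reproduces the `df1 = length(my_coef)` / `df2 = x$nobs - x$nparams` bookkeeping from the R source above.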
Oh, and of course there is another way to test against `fixest`: suppose we want to test the null of $\beta = q$ with $q \neq 0$. Then we can define

$$\tilde{y} = y - Xq$$

with the property that a regression of $\tilde{y}$ on $X$ has coefficients $\beta - q$, and then we can simply test the null of joint nullity of the coefficients from that regression.
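A quick numpy check of this offset trick (simulated data; the variable names are illustrative). By linearity of least squares, regressing `y - X @ q` on `X` yields exactly the original estimates minus `q`, so a joint-nullity test on the transformed fit is a test of beta = q:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([0.5, 1.0, -2.0])
y = X @ beta + rng.normal(size=n)

q = np.array([0.5, 1.0, -2.0])  # hypothesized coefficient values
y_tilde = y - X @ q             # offset the outcome by the hypothesis

b = np.linalg.lstsq(X, y, rcond=None)[0]
b_tilde = np.linalg.lstsq(X, y_tilde, rcond=None)[0]

# the transformed fit estimates beta - q exactly
assert np.allclose(b_tilde, b - q)
```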
Thanks for the comment!

The degrees of freedom in the denominator of the F test are set up to be …

Thanks for the comments!
Ah, great! Then let's follow what Cameron & Miller recommend. Even better if this aligns closely with the implementation in …
Hm, in this case, the described method indeed does not work with …

Alternatively, we could use R's `car::linearHypothesis` function to test against, which is nice because the …

```r
linearHypothesis(model, hypothesis.matrix, rhs=NULL,
    test=c("F", "Chisq"), vcov.=NULL,
    white.adjust=c(FALSE, TRUE, "hc3", "hc0", "hc1", "hc2", "hc4"),
    singular.ok=FALSE, ...)
```

It's only available for objects of type … This is how it would work via `rpy2`:

```python
%load_ext autoreload
%autoreload 2

import numpy as np
import pyfixest as pf
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri

# rpy2 imports
from rpy2.robjects.packages import importr

pandas2ri.activate()
fixest = importr("fixest")
stats = importr("stats")
broom = importr("broom")
car = importr("car")

data = pf.get_data()
fit_r = stats.lm("Y ~ X1 + X2", data=data)
R = ro.r.matrix(np.array([0, 1, 1]), nrow=1)
car.linearHypothesis(fit_r, R, 1)
```
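For reference, the same F test of Rβ = r can also be computed directly in numpy/scipy, mirroring what `car::linearHypothesis` does with `test = "F"` and the default iid vcov. This is a hedged sketch on simulated data; `linear_hypothesis_f` is a made-up helper name, not part of pyfixest or car:

```python
import numpy as np
from scipy import stats


def linear_hypothesis_f(X, y, R, r):
    """F test of the linear hypothesis R @ beta = r for an OLS fit,
    using the classical (iid) covariance estimate."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    sigma2 = resid @ resid / (n - k)          # iid error variance estimate
    vcov = sigma2 * XtX_inv
    R = np.atleast_2d(np.asarray(R, dtype=float))
    q = R.shape[0]                            # number of restrictions
    diff = R @ b - np.atleast_1d(r)
    stat = float(diff @ np.linalg.solve(R @ vcov @ R.T, diff)) / q
    p = stats.f.sf(stat, q, n - k)
    return stat, p


# toy data mimicking the call above: test beta_1 + beta_2 = 1,
# where the true coefficients satisfy the restriction (0.4 + 0.6 = 1)
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([0.3, 0.4, 0.6]) + rng.normal(size=n)
stat, p = linear_hypothesis_f(X, y, R=[0.0, 1.0, 1.0], r=1.0)
```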
I just added Wald test cases to the testing scripts. Let me know if something extra should be done!
Looks phenomenal @Jayhyung. I particularly dig the super thorough unit tests! Thank you 🎉
```python
], "distribution must be either 'F' or 'chi2'."

if self._is_clustered:
    self._dfd = np.min(np.array(self._G)) - 1
else:
```
Is it standard to include the fixed effects in the dof computation? I am actually not sure about this and will take a look at the `fixest` docs!
Oh, sorry for the late reply, I didn't notice your comment until now. From my reading of the `clubSandwich` package, the fixed effects didn't change anything related to the inference part (it is just like demeaning the whole dataset, running the estimation on the demeaned data, and then doing inference). I will take a close look at this and update if serious corrections are necessary (I will open a PR if they really are).
No worries - this is at least what `fixest` seems to be doing (see the function I posted somewhere in the PR) =) I think `clubSandwich` does not support fixed effects / `fixest` as far as I am aware (due to difficulties computing CRV2 and CRV3 vcov's with fixed effects, I believe) - so I am afraid your chances of finding anything there are probably not super big :/
Ah, I actually meant to say `fixest`. Thanks for letting me know. :)
`tests/test_wald_test.py` (Outdated)
```python
r_fstat = pd.DataFrame(r_wald).T[1].values[0]
r_pvalue = pd.DataFrame(r_wald).T[5].values[0]

np.testing.assert_allclose(f_stat, r_fstat, rtol=1e-02, atol=1e-02)
```
Do the tests still pass if we tighten the error tolerance to e.g. 1e-04? At some point I think not exactly matching the `clubSandwich` dof defaults might bite us?
This is the testing case for the regression specifications without clusters. I just tested with different error tolerances: it survives 1e-05 for the relative error tolerance, but fails at 1e-03 for the absolute tolerance. Here is the comparison of statistics between the two packages:

Python f_stat: 1.3298150228659409, R f_stat: 1.3220891299061177
Python p_stat: 0.2628217235735699, R p_stat: 0.2653441458766282

Reasonably close, but there is a bit of discrepancy. The following is the case when I set the constant term to 0.1:

Python f_stat: 15.748365910640432, R f_stat: 15.708691053689362
Python p_stat: 3.67809116497142e-10, R p_stat: 3.8949330392594636e-10

Do you think this is reasonable enough? I may take another look at this in a different PR if it is not.

Thanks!
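To make the tolerance discussion concrete: `np.testing.assert_allclose` passes iff `|actual - desired| <= atol + rtol * |desired|`, so the two tolerances combine additively. A small sketch using the reported f-statistics above:

```python
import numpy as np

py_f, r_f = 1.3298150228659409, 1.3220891299061177

# passes: the gap (~0.0077) is below atol + rtol * |r_f| = 0.01 + 0.0132
np.testing.assert_allclose(py_f, r_f, rtol=1e-2, atol=1e-2)

# fails: the ~0.6% relative gap exceeds a 1e-4 relative tolerance
try:
    np.testing.assert_allclose(py_f, r_f, rtol=1e-4, atol=0)
    raised = False
except AssertionError:
    raised = True
assert raised
```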
Hi @s3alfisc!

More details will follow with this PR soon, but I have one question before I proceed.

If you see the first commit of this PR, I added `# type: ignore` to several spots in `feols_.py` to avoid some error messages. I don't know why, but I encountered an incompatible data frame type error, saying something like the following: …

Fixing this issue is not hard. Just converting the `data` dataframe into a pandas DataFrame if it is a Polars frame fixes the issue, but I was wondering whether this is okay to do in `feols_.py`. I thought that any dataframe that the variable `data` takes is already processed into a pandas DataFrame by the dev util files `dev_utils.py` and `vcov_utils.py`. How should I deal with this?

Thanks!
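One defensive way to handle the Polars/pandas mismatch described above is a small coercion shim. Note that `_ensure_pandas` is a hypothetical name for illustration, not an existing pyfixest utility:

```python
import pandas as pd


def _ensure_pandas(data):
    """Hypothetical helper (not pyfixest's actual dev_utils code):
    pass pandas frames through, and coerce a Polars frame - or anything
    exposing .to_pandas() - into a pandas DataFrame."""
    if isinstance(data, pd.DataFrame):
        return data
    if hasattr(data, "to_pandas"):  # Polars, pyarrow tables, ...
        return data.to_pandas()
    raise TypeError(f"Expected a pandas- or Polars-like frame, got {type(data)}")


df = _ensure_pandas(pd.DataFrame({"Y": [1.0, 2.0], "X1": [0.1, 0.2]}))
```

Duck typing on `.to_pandas()` avoids importing Polars at all, so the conversion point would not add a hard dependency.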