Rules for det are not general and likely unstable #468
The rules for determinant `det` are written in terms of the matrix inverse, which does not exist for singular matrices. The derivative of the determinant is better expressed through the adjugate, the transpose of the cofactor matrix. As an example, the cofactor matrix for a 2x2 matrix has a simple closed form. Unlike the inverse, the adjugate is defined for every square matrix, singular or not. Using the SVD decomposition, one can assemble the determinant and its derivative without ever forming an inverse. But we can get the same thing using the LU decomposition. I implemented the LU rules, so I've been looking into this for the LU case. As mentioned in #456 (comment), I think the right thing to do is to have only …
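
To make the failure mode concrete, here is a minimal sketch of the adjugate form of the forward rule for the 2x2 case (my own illustration; `adj2x2` and `det_pushforward` are hypothetical names, not ChainRules API):

```julia
using LinearAlgebra

# Minimal sketch (adj2x2 and det_pushforward are made-up helpers): the
# forward-mode rule d(det(A)) = tr(adj(A) * dA), written via the adjugate so
# it stays defined when A is singular. For 2x2, adj([a b; c d]) = [d -b; -c a].
adj2x2(A) = [A[2,2] -A[1,2]; -A[2,1] A[1,1]]
det_pushforward(A, dA) = tr(adj2x2(A) * dA)

A  = [1.0 2.0; 3.0 6.0]  # exactly singular: det(A) == 0
dA = [0.1 0.0; 0.0 0.2]  # an arbitrary tangent direction

det_pushforward(A, dA)   # 0.8, finite; det(A) * tr(inv(A) * dA) throws SingularException here
```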
It is easy to make a stable `Ddet` using the SVD. Yes, it is slower, but it is not less accurate; where did you hear that? As a matter of fact, I would expect the SVD to be more accurate/more stable than LU. Cf. e.g. here: "Computing the SVD is always numerically stable for any matrix, but is typically more expensive than other decompositions."
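
For reference, a determinant computed from the SVD factors looks like the following minimal sketch (`det_via_svd` is a made-up name, not a library function): the singular values carry the magnitude and the orthogonal factors carry the sign.

```julia
using LinearAlgebra

function det_via_svd(A::AbstractMatrix{<:Real})
    F = svd(A)
    # det(U) and det(Vt) are ±1 since both factors are orthogonal
    return det(F.U) * prod(F.S) * det(F.Vt)
end
```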
Interesting thought about the eps on the diagonal. But what do you do if the matrix is singular and the LU decomposition won't even compute?
This is correct and can be checked with a quick round-off error analysis.
For the purposes of computing the determinant, the LU factorization can detect exact singularity where the SVD does not:

```julia
julia> A = [1 2; 3 6];

julia> iszero(det(lu(A; check=false))) # check=false silences the error for a "failed" factorization
true

julia> iszero(prod(svd(A).S))
false
```

This can happen with any singular matrix: round-off typically leaves the smallest computed singular value tiny but nonzero, so the product over the singular values is not exactly zero.
A "failed" LU factorization is still completed; it's just that it can't be used for solving without some special-casing. See http://www.netlib.org/lapack/explore-3.1.1-html/dgetrf.f.html
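
A quick REPL illustration of that point (my own example): the factors are fully formed even though the factorization reports failure.

```julia
julia> F = lu([1.0 2.0; 3.0 6.0]; check=false);

julia> F.U  # fully formed, just with an exactly zero pivot
2×2 Matrix{Float64}:
 3.0  6.0
 0.0  0.0

julia> issuccess(F)
false
```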
I came across this paper, "On the adjugate matrix" by G. W. Stewart (https://doi.org/10.1016/S0024-3795(98)10098-8), which does an error analysis to show that when working with decompositions of the form A = XDY, with D diagonal and X and Y well-conditioned, the adjugate can be computed accurately even for singular or nearly singular matrices. The way I suggest proceeding is: …
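
For illustration, here is Stewart's construction specialized to the SVD, in a minimal sketch of my own (`adjugate_via_svd` is a made-up name): since adj(XY) = adj(Y) adj(X) and the adjugate of an orthogonal Q is det(Q) * Q', the adjugate of A = U * Diagonal(S) * Vt reduces to a diagonal of leave-one-out products of singular values between the orthogonal factors, with no division by singular values.

```julia
using LinearAlgebra

# Minimal sketch: adjugate from the SVD. The i-th diagonal entry of
# adj(Diagonal(S)) is the product of the other singular values, so exact
# zeros among the singular values are harmless.
function adjugate_via_svd(A::AbstractMatrix{<:Real})
    F = svd(A)
    n = length(F.S)
    d = [prod((F.S[j] for j in 1:n if j != i); init=1.0) for i in 1:n]
    s = det(F.U) * det(F.Vt)  # ±1, since both factors are orthogonal
    return s * (F.Vt' * Diagonal(d) * F.U')
end
```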
My two cents from the PyTorch end of things. We implement the algorithm mentioned in the cited paper in PyTorch. It works fine for the first derivative, but the second derivative sometimes fails (we compute the second derivative by differentiating through the first derivative). https://github.com/pytorch/pytorch/blob/6afb414c21543fb62e2f862268a707657c83cd72/aten/src/ATen/native/BatchLinearAlgebra.cpp#L3748-L3760 The tricky part is coming up with a reasonable epsilon for the perturbation; we currently use the machine epsilon of the given datatype. The implementation in PyTorch is not particularly clean (we plan to give it a clean-up sometime in the future), but the checks show that it is correct and pretty stable for a range of singular matrices :) For the second derivative we fall back to a "not-so-good" SVD-based algorithm when the matrix is singular. This works in practice, but the algorithm could be very much improved. Now, that is a story for another time :)
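
A hedged sketch of the perturbation idea described above, written in this thread's language rather than PyTorch's C++ (the helper name and the threshold policy are my assumptions, not PyTorch's exact code):

```julia
using LinearAlgebra

# Replace (near-)zero pivots on the diagonal of U with the datatype's eps, so
# that a gradient formula that divides by the pivots stays finite.
function perturb_pivots(U::AbstractMatrix{T}) where {T<:AbstractFloat}
    d = diag(U)
    return map(u -> abs(u) < eps(T) ? copysign(eps(T), u) : u, d)
end
```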
If you change the rules for `det`, …
To be fair, the LU approach is remarkably resilient...
But I am nervous that this is too simple a special case to draw any conclusions.
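
One way to probe that resilience (my own experiment, not the elided example above): compare the inverse-based rule against the exact 2x2 adjugate value as the matrix approaches singularity, printing the relative error rather than asserting a tolerance.

```julia
using LinearAlgebra

dA = [0.5 0.1; 0.2 0.3]
for e in (1e-3, 1e-6, 1e-9, 1e-12)
    A = [1.0 2.0; 3.0 6.0 + e]                          # nearly singular
    lu_rule = det(A) * tr(A \ dA)                       # inverse-based rule via the LU solve
    exact = tr([A[2,2] -A[1,2]; -A[2,1] A[1,1]] * dA)   # adjugate-based value, exact in 2x2
    println("e = ", e, "  relative error = ", abs(lu_rule - exact) / abs(exact))
end
```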
Either way, it seems to me that an implementation that always works and is guaranteed numerically stable is preferable. If the maintainers agree, then I would be happy to make a PR and implement `frule` and `rrule` via SVD instead of LU.

Questions: there are already rules for `svd`; I'm guessing this means one will be able to differentiate the `frule` or `rrule` based on SVD?
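
For concreteness, an SVD-backed `rrule` might look roughly like the sketch below. This is my illustration under stated assumptions, not the package's implementation; it reuses the hypothetical `adjugate_via_svd` helper sketched earlier and relies on the identity that the gradient of det(A) with respect to A is the transpose of the adjugate.

```julia
using LinearAlgebra
using ChainRulesCore

# Sketch only: reverse-mode rule for det computed through the SVD.
function ChainRulesCore.rrule(::typeof(det), A::AbstractMatrix{<:Real})
    F = svd(A)
    Ω = det(F.U) * prod(F.S) * det(F.Vt)  # determinant from the SVD factors
    function det_pullback(ΔΩ)
        # d(det(A))/dA = adj(A)', so the cotangent of A is ΔΩ * adj(A)'
        Ā = ΔΩ * adjugate_via_svd(A)'
        return NoTangent(), Ā
    end
    return Ω, det_pullback
end
```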