0.0000000000000000000000000000 is parsed completely differently than 1.0000000000000000000000000000 for no good reason #8896
Comments
comment:1
This is because 0.00000000000000000000000000000000005 is not considered to have "high precision." We should special-case 0.
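To illustrate the special-casing being proposed, here is a minimal hypothetical sketch (not Sage's actual parser; the function name `significant_digits` is invented for illustration): for an all-zero literal, every fractional digit counts as significant, while for a non-zero literal only the leading zeros are dropped.

```python
def significant_digits(s):
    """Hypothetical digit counter sketching the proposed rule."""
    # Drop an optional exponent part and a sign.
    mantissa = s.split('e')[0].split('E')[0].lstrip('+-')
    digits = mantissa.replace('.', '')
    if digits.strip('0') == '':
        # All zeros: treat every fractional digit as significant,
        # so a long literal like "0.000...0" requests high precision.
        frac = mantissa.split('.')[1] if '.' in mantissa else ''
        return max(len(frac), 1)
    # Non-zero literal: ignore leading zeros only.
    return len(digits.lstrip('0'))

print(significant_digits('1.0000000000000000000000000000'))  # 29
print(significant_digits('0.0000000000000000000000000000'))  # 28
```

Under this sketch the two literals from the ticket title request comparable precision, instead of the zero falling back to the default.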
comment:2
Btw, Sage does not distinguish +0.0 and -0.0 (and 0.0). Nice how trac interprets long decimal fractions of floats. :D
Attachment: 8896-highprec-zero.patch.gz
comment:5
Looks okay in general, but the last block
needs to be indented, doesn't it? In the tests, there should be (brief) words to the effect that the first two have the default minimum precision and that is what is being tested for, while in the new tests we are testing for getting high precision.
comment:6
Replying to @kcrisman:
No, the
I'm not sure if this is intentional; it's at least uncommon. It looks as if a decimal point is counted as a significant digit. And, sorry, the code is extremely ugly (not due to the patch) and inefficient. I'm not quite sure what kind of strings actually get in (i.e., where/if illegal syntax is caught, it seems all is left to
comment:7
Also, is it intentional that the exponent is ignored when computing the required precision?
comment:8
(It doesn't "strip" trailing/fractional zeros, but also doesn't treat a given exponent correctly. Bases (other than 10) aren't supported; perhaps it should just call
apply on top of previous
comment:9
Attachment: 8896-part2.patch.gz Replying to @nexttime:
Exactly.
That's the point of this ticket. For 0, all zeros are leading.
Good catch, fixed. That required some adjustment to keep the input/str in sync.
Updated the docstring a bit. This is mostly for use by the preparser, though of course it gets used directly as well.
comment:10
Replying to @nexttime:
Yes, of course. The size of the exponent is completely orthogonal to the number of significant figures. I made create_ComplexNumber defer to create_RealNumber as well.
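The "exponent is orthogonal to significant figures" point can be sketched as follows (a hypothetical helper, not Sage's code; `mantissa_digits` is an invented name): the exponent part is stripped before counting digits, so `1.25e-1000` and `1.25` request the same precision.

```python
import re

def mantissa_digits(s):
    """Count significant digits of a decimal literal, ignoring the exponent."""
    # Accept an optional sign, a mantissa, and an optional e/E exponent.
    m = re.match(r'[+-]?([0-9]*\.?[0-9]*)(?:[eE][+-]?[0-9]+)?$', s.strip())
    if not m:
        raise ValueError("not a decimal literal: %r" % s)
    # Only the mantissa's digits matter; leading zeros are skipped.
    return len(m.group(1).replace('.', '').lstrip('0'))

print(mantissa_digits('1.25e-1000'))  # 3
print(mantissa_digits('1.25'))        # 3
```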
comment:11
Here is the reference for this - I hadn't seen that before. Thanks for the additional docstring.
comment:12
Just took a first glance:
More to come... ;-) (I've only looked at the patch to the patch.)
comment:13
Replying to @robertwb:
Not really. We were (only I think) considering zeros of the fractional part. Truncation only makes sense from the right, and you don't express precision by padding zeros to the left. Stating that the Earth's diameter is about 013 million meters doesn't give more information than stating it is about 13 million meters.
I was thinking of scientific notation syntax, too. (Also the examples do not contain that form.)
comment:14
Replying to @nexttime:
To be more precise, 0.0 and 00.0 denote the same "thing", i.e. have the same meaning, while 0.0 and 0.00 do not.
comment:15
For those interested in printing real numbers in general, I'll just plug #7682, which could use just a bit of polishing work, I believe, and makes printing real/complex numbers much more consistent and easy to adjust.
comment:16
Replying to @robertwb:
I don't agree either. If the "effective" exponent (the one in normalized form) exceeds the maximum, one should either increase If not, this could be a pitfall. But I don't mind... ;-)
comment:17
Replying to @nexttime:
Which has the same information as saying the Earth's diameter is about 0.000013 trillion meters. Whether or not a digit is significant is not a function of whether it is on the left or right of the decimal. Would you then say that
comment:18
Replying to @nexttime:
Ah, oops.
Yes, but I'm not following what you're trying to say here.
I don't care if pad is negative (I'll fix the docstring). Same with min_prec, I just pass whatever I get on to the RealField constructor, which will raise a perfectly fine exception if the precision is not valid (as defined by MPFR_MIN_PREC and MPFR_MAX_PREC).
Ah, I'll fix that.
Well, I'd say the above has only two significant digits of precision, no matter what the exponent, which is all this function tries to deduce. The fact that it's subnormal is outside of the scope of this ticket--if an exception should be raised, it's not here. (And in this case you can't raise the precision enough, due to memory/library limitations.)
comment:19
Replying to @robertwb:
If you test for
comment:20
Replying to @nexttime:
Must have been some spot on the screen that covered Of course it's pretty ok to return
comment:21
Attachment: 8896-doctests.patch.gz Doctest fixes for 4.6.1. |
Folded and rebased on 4.7. Apply only this patch.
comment:22
Attachment: 8896-rebased.patch.gz Patch fixes problem. Ran 'make testlong'. All tests passed. Positive review!
Reviewer: Mariah Lenox
Author: Robert Bradshaw
comment:23
Thanks! |
comment:25
In my opinion, this ticket is a huge can of worms and a bad idea. I don't see any mathematically consistent reason why 0.0 should be treated differently than 0.0000000000000. I really think the current behaviour of Sage is what makes the most sense (mathematically) so I am not in favour of this ticket. Of course, if the majority thinks this patch is a good idea then I'm all for it. One thing about the patch which is very unclear is why zero is treated differently from other numbers. Consider::
comment:26
Replying to @jdemeyer:
It's exactly the same reason that 1.0 is treated differently than 1.0000000000000000000000. The (value-preserving) trailing zeros indicate higher precision.
Because otherwise there's no way to write a high-precision zero (in fact I can't think of any other interpretation of a large number of trailing zeros).
comment:27
Another argument is that the BDFL suggested this change fairly strongly when opening this ticket :) More seriously, I do have a question here.
But I agree with Jeroen that the following seems problematic. Didn't you say yourself (robertwb) that leading zeros before the decimal point shouldn't make a difference? So in the case here maybe we should have the default precision...
Keeping in mind that I have very little invested in either outcome for now, so take these only as observations.
comment:28
Replying to @robertwb:
It is absolutely not the same because with non-zero numbers only digits after a non-zero digit contribute to the precision. Indeed,
Well, you can always do:
Another way to demonstrate the problem with this patch is that truncating decimal numbers (i.e. removing digits from the right) can increase the precision, which is very illogical::
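Jeroen's objection can be sketched with a hypothetical digit counter implementing the proposed rule (invented names `requested_bits` and the default of 53 bits are assumptions, not Sage source): dropping the final "1" from a tiny literal turns a one-significant-digit number into an all-zero literal that now requests high precision.

```python
import math

def requested_bits(s, default=53):
    """Bits of precision the proposed rule would request for a literal."""
    mantissa = s.split('e')[0].split('E')[0].lstrip('+-')
    digits = mantissa.replace('.', '')
    if digits.strip('0') == '':
        # Proposed rule: for a zero literal, all fractional zeros count.
        frac = mantissa.split('.')[1] if '.' in mantissa else ''
        sig = len(frac)
    else:
        # Usual rule: skip leading zeros only.
        sig = len(digits.lstrip('0'))
    return max(default, int(math.ceil(sig * math.log2(10))))

full = '0.' + '0' * 29 + '1'   # one significant digit
truncated = full[:-1]          # drop the "1": now an all-zero literal
print(requested_bits(full))       # 53 (the default)
print(requested_bits(truncated))  # 97: truncation *increased* the precision
```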
comment:29
Replying to @jdemeyer:
This hinges on the ambiguity of the term "trailing zero." Sage interprets trailing zeros after the decimal point as additional precision. To me, trailing zeros are those that are not followed by a non-zero digit; to you, trailing zeros must also follow a non-zero digit.
Yeah, that's a pain, but to me the fact that 0.00000000000000000000000000000000 is not high precision is more surprising, as it has many significant digits.
It also changes the relative value by a huge amount :) Is it worth taking this discussion to sage-devel?
comment:30
Replying to @robertwb:
I would say that it does have a lot of digits but that none of them is a significant digit.
Changed keywords from none to sd32 |
In Sage, 0.0 and 0.00000000000000000000000000000000000000 denote exactly the same thing (a zero with the same precision). However, 1.0 and 1.00000000000000000000000000000000000000 are different in Sage:
Some people consider the above inconsistency a bug.
See the discussion below and at sage-devel.
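As a rough back-of-the-envelope sketch of why the long 1.0... literal gets more than the default 53 bits (assumed heuristic, not Sage source; `bits_for` is an invented name): each decimal digit is worth about log2(10), roughly 3.32 bits, and the result is floored at the default precision.

```python
import math

def bits_for(ndigits, default=53):
    """Approximate bit precision requested by an ndigits-long literal."""
    return max(default, int(math.ceil(ndigits * math.log2(10))))

print(bits_for(2))   # "1.0": the 53-bit default wins
print(bits_for(39))  # e.g. a 39-digit literal: 130 bits
```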
Apply attachment: 8896-rebased.patch
CC: @jasongrout
Component: basic arithmetic
Keywords: sd32
Author: Robert Bradshaw
Reviewer: Mariah Lenox
Issue created by migration from https://trac.sagemath.org/ticket/8896