make i0 a compile error #1593
Comments
pow(2, -1) is 0.5, which gives an i0 range of −0.5 to −0.5. The value behaviors for i0 and i1 seem reasonable to me:
From a memory perspective I don't think […]. EDIT: Well, it mathematically could.
Obviously i0 has no bits, as does u0, and I think the "virtual" value assigned to the type is by convention, so I don't see the need for an explicit sign bit. Also, the notion of a negative number of bits doesn't seem that different from having no bits. I'm not trying to be flip; my suggestion is based on the goal that all types that are members of TypeId.Int should support all operations of every other member. If that goal is not possible, and a rational reason can't be found for deviating from what the other members can do, then that type should be considered for removal. For example, taking the address of a value, or determining a struct field's offset, is possible for some types but not for others. I would suggest that if a type has no address or offset, it should be considered for removal. On the other hand, I find the current lack of support for addresses and offsets for u0 odd. From my perspective, u0 always has an address or offset whenever a non-zero-sized type does, and I see no reason i0 is any different.
JavaScript BigInt allows the equivalent of […]. I believe the status quo […].

If we look at the sequence of maximum and minimum values for signed integers as the number of bits approaches 0, we see:

i3: maximum 3, minimum −4
i2: maximum 1, minimum −2
i1: maximum 0, minimum −1
i0: maximum −0.5, minimum −0.5

The pattern here is: maximum = pow(2, n − 1) − 1, minimum = −pow(2, n − 1).
Now that I've worked through this on my own, I see that this is exactly what @winksaville reported above. Nowhere else in the integer types do we see non-integer limits. For unsigned integers the formula is: maximum = pow(2, n) − 1, minimum = 0.
That clearly suggests that […]. I do not agree that it makes sense to convert […].
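The limit formulas discussed in this thread can be checked with plain arithmetic. This is a sketch in Python, not Zig; the helper names are ours, and it assumes only the pow(2, n − 1) formulas quoted in the issue:

```python
# Signed and unsigned integer limits as a function of bit width n,
# using the formulas from the issue body.
def signed_limits(n):
    # min = -pow(2, n-1), max = pow(2, n-1) - 1
    return (-(2 ** (n - 1)), 2 ** (n - 1) - 1)

def unsigned_limits(n):
    # min = 0, max = pow(2, n) - 1
    return (0, 2 ** n - 1)

for n in (3, 2, 1, 0):
    lo, hi = signed_limits(n)
    print(f"i{n}: min={lo}, max={hi}")
# i3: min=-4, max=3
# i2: min=-2, max=1
# i1: min=-1, max=0
# i0: min=-0.5, max=-0.5   <- non-integer limits, unlike every other width

print(unsigned_limits(0))  # (0, 0): u0 has the single value 0
```

Evaluating the same formula at every width makes the discontinuity visible: only n = 0 produces non-integer signed limits, while the unsigned formula stays well-behaved at n = 0.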
I disagree that i0 is weird enough to remove at this stage. If there were no reasonable value for i0 then it should be removed, but I'd say 0 is a reasonable value. If we can agree on the "properties" of TypeId.Int, and all iX's and uX's can adhere to those properties, that would be best. The consistency of allowing X to range over 0..N, where N is now 128, would be preferable to exceptions for any particular value of X. IMHO.
Perhaps some fresh eyes can help untangle this. The only material difference between […].
In the case of […], there is no need for negative bits or non-integer ranges. Those arise from the assumption that the range of signed integers is evenly partitioned into negative and non-negative numbers, which is obviously false if there is only one integer value. Under this logical scheme, […].
The minimum/maximum makes sense if you phrase the upper bound exclusively (Dijkstra argues this is the preferred way to phrase an interval): the signed range is [−pow(2, n − 1), pow(2, n − 1)), and with n = 0 that is [−0.5, 0.5), which contains exactly one integer, 0. Also, IMO it makes sense to have it just for symmetry with u0.
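The exclusive-upper-bound phrasing above can be spelled out numerically. A minimal sketch in Python (not Zig; the function name is ours) that enumerates the integers inside the half-open interval [−pow(2, n − 1), pow(2, n − 1)):

```python
import math

def signed_interval_integers(n):
    """Integers x with -pow(2, n-1) <= x < pow(2, n-1)."""
    lo = -(2 ** (n - 1))   # inclusive lower bound
    hi = 2 ** (n - 1)      # exclusive upper bound
    return list(range(math.ceil(lo), math.ceil(hi)))

print(signed_interval_integers(2))  # [-2, -1, 0, 1]
print(signed_interval_integers(1))  # [-1, 0]
print(signed_interval_integers(0))  # [0] -- [-0.5, 0.5) holds exactly one integer
```

With this phrasing n = 0 needs no special case: the interval degenerates to one that contains only 0, matching the "single reasonable value" argument made earlier in the thread.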
There was a rejected proposal to remove u0: #1530

However, I believe it does make sense to remove i0. The range of unsigned integers is 0 to pow(2, n) − 1; with n = 0, this gives the range [0, 0]. OK. But the range of signed integers is −pow(2, n − 1) to pow(2, n − 1) − 1; with n = 0, this makes us compute pow(2, −1), and that doesn't make sense.

i0 would still be an identifier that maps to a signed 0-bit integer. The compile error would happen if you referenced i0 directly or if you used @IntType to construct a signed 0-bit integer.