Proposal: int types #4639
Comments
For the more common cases:

Unsigned:

```ts
function toUint8(num: number): uint8 {
    return num & 255;
}

function toUint16(num: number): uint16 {
    return num & 65535;
}

function toUint32(num: number): uint32 {
    return (num | 0) >= 0 ? (num | 0) : (num | 0) + 4294967296;
}
```

Signed:

```ts
function toInt8(num: number): int8 {
    return (num & 255) < 128 ? (num & 255) : (num & 255) - 256;
}

function toInt16(num: number): int16 {
    return (num & 65535) < 32768 ? (num & 65535) : (num & 65535) - 65536;
}

function toInt32(num: number): int32 {
    return num | 0;
}
```

These functions would truncate to the least significant bits. I've run automated testing for these with millions of random values against the expected truncation behavior in typed arrays, and they seem to give identical results, including edge cases.

There is an alternative method that may or may not be faster (depending on the runtime), but it would require runtime support for typed arrays (only available in IE10+):

Unsigned:

```ts
const uint8Array_dummy = new Uint8Array(1);
const uint16Array_dummy = new Uint16Array(1);
const uint32Array_dummy = new Uint32Array(1);

function toUint8(num: number): uint8 {
    uint8Array_dummy[0] = num;
    return uint8Array_dummy[0];
}

function toUint16(num: number): uint16 {
    uint16Array_dummy[0] = num;
    return uint16Array_dummy[0];
}

function toUint32(num: number): uint32 {
    uint32Array_dummy[0] = num;
    return uint32Array_dummy[0];
}
```

Signed:

```ts
const int8Array_dummy = new Int8Array(1);
const int16Array_dummy = new Int16Array(1);
const int32Array_dummy = new Int32Array(1);

function toInt8(num: number): int8 {
    int8Array_dummy[0] = num;
    return int8Array_dummy[0];
}

function toInt16(num: number): int16 {
    int16Array_dummy[0] = num;
    return int16Array_dummy[0];
}

function toInt32(num: number): int32 {
    int32Array_dummy[0] = num;
    return int32Array_dummy[0];
}
```

Anyway, this should be used as a reference to test the correctness of the truncations. [Edit: I retested the first version and it does seem to return matching results.] |
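The claimed equivalence can be spot-checked directly. Below is a small sketch (with plain `number` return types, since the proposed `uint8`/`int8` types don't exist in today's TypeScript) comparing the bitwise truncations above against typed-array assignment:

```typescript
// Bitwise truncations from above, retyped with plain `number`.
function toUint8(num: number): number {
  return num & 255;
}
function toInt8(num: number): number {
  const u = num & 255;
  return u < 128 ? u : u - 256;
}

// Reference behavior: typed-array assignment truncates the same way.
const u8 = new Uint8Array(1);
const i8 = new Int8Array(1);

for (const n of [0, 1, 127, 128, 255, 256, -1, -129, 1e6, 3.7]) {
  u8[0] = n;
  i8[0] = n;
  console.log(toUint8(n) === u8[0], toInt8(n) === i8[0]); // both true for each input
}
```

This mirrors the automated test described in the comment, just with a handful of hand-picked boundary values instead of random ones.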
I've changed the emit for […]. @rotemdan Those conversions exist for all integer types up to 32 bits. You can see them all in the section "Cast function". TS should inline these functions, and browsers can optimize these constructs. […] |
Great find! So let's try to summarize the simplest, most efficient implementations we have so far for the most important cases:

Unsigned:

Signed:

I ran them through the automated tests (1M random FP values up to about ±2^71) and they seem to work correctly. It seems reasonable to have the behavior standardized against assignments to typed arrays, so these should all yield the same results. [Edit: I confirmed […].] Another minor detail: I'm not sure if adding […] |
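The snippets themselves did not survive in this copy of the thread, but the shortest forms discussed reduce to the standard asm.js-style idioms; this is a sketch with plain `number` types, not necessarily the exact variants the comment listed:

```typescript
// Standard coercion idioms for 32- and 16-bit integer truncation.
const toUint32 = (x: number): number => x >>> 0;         // unsigned 32-bit
const toInt32  = (x: number): number => x | 0;           // signed 32-bit
const toUint16 = (x: number): number => x & 0xffff;      // unsigned 16-bit
const toInt16  = (x: number): number => (x << 16) >> 16; // signed 16-bit via sign extension

console.log(toUint32(-1));        // 4294967295
console.log(toInt32(0x80000000)); // -2147483648
console.log(toInt16(0x8000));     // -32768
```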
@rotemdan Most JS engines can optimize […] |
You mentioned that operations like […]. I mean that:

```ts
let num: uint8 = 255;
num = num + 1;
```

Would emit:

```js
var num = 255;
num = num + 1;
num &= 255;
```

The automatic casts would only happen on assignments; "anonymously typed" intermediate results would not be affected:

```ts
let num: uint8 = 255;
num = (num * num) + (num * 2);
```

Still emits only a single truncation:

```js
var num = 255;
num = (num * num) + (num * 2);
num &= 255;
```

The reason I wanted to "optimize" and simplify the casts (including the logic for them) as much as possible was to make them extremely cheap for these purposes (in terms of performance, I think the operations turned out cheap enough to be called very frequently, including in tight loops). Since typed arrays automatically truncate on assignment and don't error on overflows or even on type mismatches, it seems like a reasonable compromise to have somewhat more permissive logic. Not having these automatic casts would make it difficult to work with these types in practice (in some cases an explicit cast would be needed at every single line).

It is still possible to limit automatic casts/truncations to only work between integer types, though they would not be required to be compatible:

```ts
let num: uint8 = 255;
num = num * 23; // OK, truncated to least significant 8 bits on assignment ("modulo 2^8").
num = num * 23.23423425; // Error, incompatible numeric types
num = <uint8> num * 23.23423425; // OK
```

Edit: It is possible to limit this even further to only apply to anonymous/intermediate expressions. Assignments between named variables may still be strict:

```ts
let num1: uint8 = 45;
let num2: uint16 = 11533;
num1 = num2; // Error: incompatible numeric types
num1 = <uint8> num2; // OK
```

It might be possible to strengthen it back with even more restrictions. I guess there's a balance here between safety and usability. |
@rotemdan The behavior you suggest would require emit based on type information. That means […]. If you want fewer restrictions you can use `uint`:

```ts
let x: uint = 0;
x++;
```

[…] |
@ivogabe Very correct. I would think it would make more sense to just start out with four types: `int`, `uint`, `float`, and `double`.

This is very similar to the asm.js types, which are already heavily optimized (V8 and SpiderMonkey both generate type-specific code for non-asm.js code that relies on these types). As a matter of fact, almost all the asm.js types can be represented by these types + the type system.

Note: asm.js requires […]. As for implementation, there is no realistic way to guarantee these types without relying on explicit coercions:

```ts
int -> x | 0
uint -> x >>> 0
int & uint -> x & <int & uint> y // e.g. y = 0x7fffffff
float -> Math.fround(x) or a Float32Array member
double -> +x
// int, uint, and float are each subtypes of double
// double is a subtype of number
```

```ts
function f(x: number): int {
    // Error!
    // return x
    return x | 0
}
```

Even asm.js has to use such coercions to keep the math correct. Unlike asm.js, though, you don't have to coerce everything, so you don't have to write […]. |
Floats and doubles would indeed be a good addition. For those who don't see the advantage, I'd suggest reading this: https://blog.mozilla.org/javascript/2013/11/07/efficient-float32-arithmetic-in-javascript/. I would call these types […]. The reason I chose to introduce more integer types is type safety. My proposal introduces syntactic sugar for those casts, as I'd rather write […]. @IMPinball […] |
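To illustrate the float32 point from the linked post: `Math.fround` (which a `float` type would coerce through, per the coercion list in the previous comment) rounds to the nearest 32-bit float, matching `Float32Array` assignment:

```typescript
const f32 = new Float32Array(1);
f32[0] = 0.1;                             // stored as the nearest float32
console.log(Math.fround(0.1) === f32[0]); // true: same rounding as typed-array assignment
console.log(Math.fround(0.1) === 0.1);    // false: 0.1 is not exactly representable in float32
console.log(Math.fround(1.5) === 1.5);    // true: 1.5 is exactly representable
```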
@ivogabe I forgot about that... Oops. ;) And I'm aware that type casts that end up in the emit would definitely be useful. But I'm taking into account that the TypeScript compiler devs generally steer away from that. If you want to create a third-party compiler (à la CoffeeScriptRedux) with that as a nonstandard extension, go for it! |
Or even edit the current one to do that, that'd still be fine. |
And as for the link, I read it while creating this post. (I optimized the polyfill shortly after.) One thing that will have to be addressed with yours is right shifts on integers smaller than 32 bits; that can easily bring garbage in. And that would require those types making it into the emit. |
@IMPinball At first I found it strange that they didn't want to use type info, but now I think they're right about that. Using […]. I don't really like the idea of forking; it's not hard to create a fork, maintenance is usually the problem.

I'm not sure what you mean by this, can you clarify? Right shift (`>>`) […] |
Where the shifts can lead to unexpected behavior is this: let's assume only the rightmost 8 bits are used. JavaScript numbers are only 32-bit in bitwise operations, but for the sake of brevity, let's assume they're 16-bit. Let's try a left shift:

The example of the left shift introduces artifacts for signedness, because if you then try to add, you're adding a positive number. The example of the right shift shows that you're literally introducing garbage into the program, like what you can get from allocating a raw memory pointer without zeroing the region out first. Now, let's assume only the leftmost 8 bits are used. That would address the issue of sign propagation and addition, subtraction, etc., but what about left and right shifts?

Now there's a similar garbage problem with left shifts, simply because the other half wasn't zeroed out first. Obviously, these are only 16-bit integers and JavaScript uses 32 bits for its bitwise operations, but the argument could easily be expanded to cover those. In order to fix this, you need those types to correct the emit. And I didn't get into addition, subtraction, etc., because that case has already been made.

As another nit, I don't like the generic syntax, because a) you'd have to special-case the parser (numbers aren't valid identifiers for generics), b) they're a finite, limited set of types, not some sort of parameterized type, and c) it doesn't seem right with the rest of the language, as primitives are just a simple identifier word. |
And every time I can recall a suggestion that would use or require type info in the emit, they've declined it, including optimizing based on types (they declared that out of scope). |
@IMPinball I don't think you understand the idea. If the rightmost 8 bits are used, I think you mean we're using an int8. If that's the case, the value must be in the range [-128, 127]. That means the 8 rightmost bits are used to represent the value; the other bits have the same value as the 8th bit, which is 0 for a positive number and 1 for a negative number. There's no situation where only the leftmost 8 bits are used. There is no difference in the representation of the same number in different types (though the JS engine might do that internally). In short, by using compile-time checks we know that a number is of a certain size. You can trick the type system with casts […]. When you use the […]. I'll demonstrate this using some code:

```ts
let a: int32;
let b: int8;
b = 200; // Compile time error, 200 not in the range of an int8
b = 100; // Ok, since -128 <= 100 <= 127
// b >> a should return an int8, so this is ok
b = b >> a;
// b << a should return an int32, so
a = b << a; // this is ok
b = b << a; // and this is a (compile time) error
b = int8(b << a); // but this is ok
```

I'm not yet sure about the syntax either. I chose the generics style since it's a family of types that have a small difference between them; you can describe generics on interfaces or objects the same way. There is no limit on the size of the integer; you can declare an […]. |
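The shift behavior described above can be checked against today's runtime with plain numbers: JavaScript's two's-complement bitwise operations already sign-extend, so `>>` on a small negative value pulls in copies of the sign bit, not garbage:

```typescript
const b = -100;               // fits in int8
console.log(b >> 2);          // -25: arithmetic shift propagates the sign bit
console.log(b << 1);          // -200: no longer fits in int8, hence the int8(...) cast above
console.log((b << 1) & 0xff); // 56: the low 8 bits an int8 truncation would keep
```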
When will this be implemented? |
@xooxdoo I don't think this got to any conclusive decision, and for what it's worth, this is probably not as important to me as some of the other more recent additions like non-nullable types. |
I'm with this; one thing I've found lacking is more numeric types in TypeScript. There's definitely a difference between a byte, an int, a float, and a double, but there's no difference in TypeScript at the moment. I'd love to be able to require a specific type of number rather than the generic number type. |
I think the addition of int and float types will pave the way for efficient WebAssembly code generation in the future. |
TypeScript needs more refined types for numbers indeed. At least […]. |
@yahiko00 Well, JavaScript numbers are doubles by default, so we should probably have a double type to start with. |
As a tribute to Turbo Pascal, I would suggest |
@yahiko00 I'd personally prefer […]. I will note that […] |
Int over Integer is also preferable, since the typed array classes all start with "Int". |
Holding off on this until ES gets the int64 type or we get WebAssembly as a compile target |
That's a pity. I'm actually working on a TS-to-WebAssembly prototype as part of my master's thesis and hoped that this feature would land soon so that I could take advantage of using ints in computations instead of doubles. In this case, I might have to fall back to a simple implementation that just adds int as a TS type without changing the emitted JS code or type inference. |
Apparently, current TS already supports a subset of my proposal above:

```ts
type byte = 0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|47|48|49|50|51|52|53|54|55|56|57|58|59|60|61|62|63|64|65|66|67|68|69|70|71|72|73|74|75|76|77|78|79|80|81|82|83|84|85|86|87|88|89|90|91|92|93|94|95|96|97|98|99|100|101|102|103|104|105|106|107|108|109|110|111|112|113|114|115|116|117|118|119|120|121|122|123|124|125|126|127|128|129|130|131|132|133|134|135|136|137|138|139|140|141|142|143|144|145|146|147|148|149|150|151|152|153|154|155|156|157|158|159|160|161|162|163|164|165|166|167|168|169|170|171|172|173|174|175|176|177|178|179|180|181|182|183|184|185|186|187|188|189|190|191|192|193|194|195|196|197|198|199|200|201|202|203|204|205|206|207|208|209|210|211|212|213|214|215|216|217|218|219|220|221|222|223|224|225|226|227|228|229|230|231|232|233|234|235|236|237|238|239|240|241|242|243|244|245|246|247|248|249|250|251|252|253|254|255;
type sbyte = -128|-127|-126|-125|-124|-123|-122|-121|-120|-119|-118|-117|-116|-115|-114|-113|-112|-111|-110|-109|-108|-107|-106|-105|-104|-103|-102|-101|-100|-99|-98|-97|-96|-95|-94|-93|-92|-91|-90|-89|-88|-87|-86|-85|-84|-83|-82|-81|-80|-79|-78|-77|-76|-75|-74|-73|-72|-71|-70|-69|-68|-67|-66|-65|-64|-63|-62|-61|-60|-59|-58|-57|-56|-55|-54|-53|-52|-51|-50|-49|-48|-47|-46|-45|-44|-43|-42|-41|-40|-39|-38|-37|-36|-35|-34|-33|-32|-31|-30|-29|-28|-27|-26|-25|-24|-23|-22|-21|-20|-19|-18|-17|-16|-15|-14|-13|-12|-11|-10|-9|-8|-7|-6|-5|-4|-3|-2|-1|0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15|16|17|18|19|20|21|22|23|24|25|26|27|28|29|30|31|32|33|34|35|36|37|38|39|40|41|42|43|44|45|46|47|48|49|50|51|52|53|54|55|56|57|58|59|60|61|62|63|64|65|66|67|68|69|70|71|72|73|74|75|76|77|78|79|80|81|82|83|84|85|86|87|88|89|90|91|92|93|94|95|96|97|98|99|100|101|102|103|104|105|106|107|108|109|110|111|112|113|114|115|116|117|118|119|120|121|122|123|124|125|126|127;

function takeByte(x: byte): byte {
    x++;
    return x + 1;
    // ~~~~~~~~~~~~~ Type 'number' is not assignable to type 'byte'.
}
takeByte(0);
takeByte(10);
takeByte(888);
//       ~~~ Argument of type '888' is not assignable to parameter of type 'byte'.

function byte2sbyte(x: byte): sbyte {
    if (this) return x + 1;
    //        ~~~~~~~~~~~~~ Type 'number' is not assignable to type 'byte'.
    else return x;
    //   ~~~~~~~~~ Type 'byte' is not assignable to type 'sbyte'.
    //             Type '128' is not assignable to type sbyte.
}
```

The missing pieces are: […] |
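The same literal-union technique works at any small width. Here is a compact sketch with a hypothetical 2-bit unsigned type, including the kind of cast helper the comment lists as a missing piece:

```typescript
type uint2 = 0 | 1 | 2 | 3;

// Runtime truncation plus a compile-time assertion back into the union.
function toUint2(x: number): uint2 {
  return (x & 3) as uint2;
}

let v: uint2 = 3;
v = toUint2(v + 1); // v + 1 widens to number; the helper narrows it again
console.log(v);     // 0: wrapped around modulo 4
```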
|
```ts
class KeyFrameManager {
    currentFrame: number
    // the rest of the class exposes an API for switching to different frames of a key-framed animation.
}
```

There's currently no way to enforce that frames should be whole numbers. Let us enforce this. 👍 For example:

```ts
class KeyFrameManager {
    currentFrame: int
    // the rest of the class exposes an API for switching to different frames of a key-framed animation.
}
```

This is simply a matter of enforcing program correctness for me; I don't care that […]. |
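Until such a type exists, the closest approximation is a runtime guard. This setter-based sketch is my own illustration, not part of any proposal; it enforces whole-number frames with `Number.isInteger`:

```typescript
class KeyFrameManager {
  private frame = 0;

  get currentFrame(): number {
    return this.frame;
  }

  set currentFrame(f: number) {
    // What an `int` type would catch at compile time, checked at runtime instead.
    if (!Number.isInteger(f)) {
      throw new RangeError(`currentFrame must be an integer, got ${f}`);
    }
    this.frame = f;
  }
}

const mgr = new KeyFrameManager();
mgr.currentFrame = 3;
console.log(mgr.currentFrame); // 3
```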
Here is a definition file for the numeric types used in AssemblyScript, an experimental compiler from TypeScript to WebAssembly, written in TypeScript: https://github.com/dcodeIO/AssemblyScript/blob/master/assembly.d.ts. This could be a good basis for future numeric types in the official TypeScript compiler. |
@yahiko00 There's also TurboScript, which does this: https://github.com/01alchemist/TurboScript |
That's right. I've also had a look at this project, but it seems a little bit more "experimental" than AssemblyScript, and its definition file is much less complete. |
@yahiko00 The main difference is that TurboScript aims to be a bit more fully featured and is closer to a TS derivative, whereas AssemblyScript is more or less just a retargeted TS. |
@isiahmeadows You are probably right, since I am just discovering these repos. Although, I like the idea of AssemblyScript being a subset of TypeScript. Maybe I am wrong, but I think this could make it easier, in the future, to merge AssemblyScript into the official TypeScript compiler. |
Here is another project, wasm-util, from TypeScript to WebAssembly: https://github.com/rsms/wasm-util |
In case you missed #15096: I think TC39 BigInt is the best way forward to get true integers in TypeScript. This proposal would enable an arbitrary-precision integer type. The proposal is presently stage 2 and has multi-vendor support. In theory the syntax is mostly stable at this point. There's a proposal to support it in Babel as well: babel/proposals#2 |
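For readers unfamiliar with it: BigInt has since shipped in all major engines (and is typed in TypeScript 3.2+ when targeting ES2020), as a separate primitive that never mixes implicitly with `number`:

```typescript
const big: bigint = 2n ** 64n; // arbitrary precision: no rounding above 2^53
console.log(big);              // 18446744073709551616n
console.log(typeof big);       // "bigint"
// big + 1                     // error: cannot mix bigint and number operands
```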
I like how TypeScript follows the spec and tries not to add too much to it, but an integer type is just so fundamental. I would love minimal integer support based on safe integers in ES6. Here's a concise description of them: http://2ality.com/2015/04/numbers-math-es6.html |
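The safe-integer machinery ES2015 already ships, which such a minimal `int` type could lean on:

```typescript
console.log(Number.isInteger(42));                    // true
console.log(Number.isInteger(42.5));                  // false
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991, i.e. 2^53 - 1
console.log(Number.isSafeInteger(Math.pow(2, 53)));   // false: beyond this, integers lose precision
```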
GraphQL supports signed 32 bit integers and doesn't address other integer types. IMO that's a great starting point. http://graphql.org/learn/schema/ |
@benatkin If TC39 BigInt ever ships, there's a pretty big drawback to defining integer types based on […]. If people were to add […]. The TC39 BigInt proposal is now stage 3 and I think it stands a pretty decent chance of eventually shipping, so I think it's probably best to "reserve" any potential integer types for ones which are actually integers at the VM level. |
So, generally speaking: other than using the slightly shorter array on the Bits and BinaryBits types, not much can be done regarding the unsigned int types. Your salvation may come with this proposal, should the day come: microsoft/TypeScript#15480. But TypeScript will probably not support base uintx types; as this discussion (microsoft/TypeScript#4639) concluded, JS is expecting BigInt, and other types may be added someday, so no extra primitives will be added until TC39 adds them. On a side note, this regex proposal, should it get implemented, would be interesting: microsoft/TypeScript#6579
This proposal introduces four new number types: `int`, `uint`, `int<N>` and `uint<N>`, where N is any positive integer literal. An `int` is signed: it can contain negative integers, positive integers and zero. A `uint` is an unsigned integer: it can contain positive integers and zero. `int<N>` and `uint<N>` are integers limited to `N` bits: `int<N>` can contain integers in the range `-Math.pow(2, N-1) ... Math.pow(2, N-1) - 1`, and `uint<N>` can contain integers in the range `0 ... Math.pow(2, N) - 1`.

This proposal doesn't use type information for emit. That means it doesn't break compilation using `isolatedModules`.

Note: since JavaScript uses 64 bit floating point numbers, not all integers can be used at runtime. Declaring a variable with a type like `uint<1000>` doesn't mean it can actually store a number like `Math.pow(2, 999)`. These types are here for completeness, and they can be used in the type widening used in type inference of binary operators. Most languages only support integer types with 8, 16, 32 and 64 bits; integers with other sizes are supported because they are used in the type inference algorithm.

Goals
References
Ideas were borrowed from:
Overview

`int` and `uint` are both subtypes of `number`. These types are the easiest integer types; they are not limited to a certain number of bits. `int<N>` and `uint<N>` are subtypes of `int` and `uint`; these types have a limited size.

In languages like C, the result type of an operator is usually the same as the input type. That means you can get strange behavior, and possibly bugs:
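The C-style pitfall being referred to can be emulated with plain numbers (a sketch; the proposal's original code sample did not survive in this copy and may have differed):

```typescript
// In C, uint8_t + uint8_t stays uint8_t and silently wraps.
const a = 200, b = 100;
console.log(a + b);          // 300: JavaScript's (and this proposal's widened) result
console.log((a + b) & 0xff); // 44: what fixed-width C arithmetic would silently produce
```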
To mimic that behavior we would need to add type converters everywhere in the emitted code, and those rely heavily on type information. We don't want that, so instead we widen the return types. That means there is no runtime overhead. For example, adding two values of type `uint<8>` would result in a `uint<9>`. To assign that to a `uint<8>`, you can use a conversion function, like `uint<8>(x)`. That function converts x to a `uint<8>`.

This design means that operations like `x++`, `x += 10` and `x /= 2` are not allowed on the sized types, as they are not safe. Instead you can use `x = uint<8>(x + 1)` and `x = uint<8>(x / 2)`. `int` and `uint` allow more operations: since they are not defined with a limited size, `x++` and `x += 10` are allowed. `x--` is only allowed on an `int`, as a `uint` might become negative. Below is an overview of which operators are allowed on which number types.

Since emit isn't based on type information, integers can be used in generics. They can also be used in unions.
Assignability

- `int` and `uint` are assignable to `number`
- `uint` is assignable to `int`
- `int<N>` is assignable to `int` and `number`
- `uint<N>` is assignable to `int`, `uint` and `number`
- `int<N>` is assignable to `int<M>` iff N <= M
- `uint<N>` is assignable to `uint<M>` iff N <= M
- `int<N>` is not assignable to `uint<M>` for all N, M
- `uint<N>` is assignable to `int<M>` iff N < M

`Infinity`, `-Infinity` and `NaN` are not assignable to any integer type.

If a type is not assignable to some other type, you can use a normal cast (`<int<8>> x` or `x as int<8>`), which has no impact at runtime, or a cast function (`int<8>(x)`).

Cast function
A cast function takes a number and converts it to the target integer type.

Syntax:

Note: even though an `int` doesn't have a fixed size, we use the 32 bit cast function, as that's easy and fast in JavaScript.

Semantics:

This gives the same behavior as type casts in languages like C. If the operand is not a number, TS should give a compile time error. Emit should succeed (unless `--noEmitOnError` is set); the operand should be converted to a number at runtime, using the same semantics as `+x`. `undefined` and `null` should also be converted the same way.

Implemented in TypeScript:
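The TypeScript implementations referenced here did not survive in this copy of the thread. The following is a hedged reconstruction matching the described semantics (convert with `+x`, then reduce modulo 2^n with two's-complement signedness); the names `__castToUint`/`__castToInt` come from the proposal, but the bodies are my assumption:

```typescript
function __castToUint(x: number, bits: number): number {
  const m = Math.pow(2, bits);
  let v = Math.trunc(+x) % m;  // +x semantics, then reduce modulo 2^bits
  if (!isFinite(v)) v = 0;     // NaN/Infinity end up as 0, like typed arrays
  return v < 0 ? v + m : v;
}

function __castToInt(x: number, bits: number): number {
  const u = __castToUint(x, bits);
  return u < Math.pow(2, bits - 1) ? u : u - Math.pow(2, bits);
}

console.log(__castToUint(-1, 8)); // 255
console.log(__castToInt(200, 8)); // -56
```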
These functions are not always used in the generated code. When `n <= 32`, these functions are not needed.

The generated code:

If `n === 32`:

If `n < 32`: ~~Question: can we make the emit of `int<n>(x)` better? The current one isn't very nice and performs badly.~~ Solved it using http://blog.vjeux.com/2013/javascript/conversion-from-uint8-to-int8-x-24.html

If `n > 32`: `__castToUint` and `__castToInt` are the functions above, emitted as helper functions.
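For reference, the trick from the linked vjeux post: shift the value into the high bits and arithmetically shift back, so the engine does the sign extension without branches:

```typescript
// int8 cast without branches: (x << 24) >> 24 sign-extends bit 7.
const int8  = (x: number): number => (x << 24) >> 24;
const int16 = (x: number): number => (x << 16) >> 16;

console.log(int8(200));     // -56
console.log(int8(100));     // 100
console.log(int16(0x8000)); // -32768
```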
We cannot solve that with helper functions, as
int
wouldn't be equal toint
if they don't come from the same file when using external modules.Instead we should dissallow this.
Type inference

Introducing integer types can break existing code, like this:

But we do want to type `1` as an `int`:

There are several options:

In option two we infer to `int`, as `let a = 0` would otherwise infer to `uint<1>`, which would mean that the variable could only contain 0 and 1.

Examples of option 3:

A literal will be inferred to the smallest integer type that can contain the number. Examples:

Operators:

Operators should always infer to the smallest integer type that can contain all possible values.
Let
Op
be an operator.x Op= y
is allowed iffx = x Op y
is allowed.That means that the following operators are not supported and usage will give an error:
Breaking changes
Type inference can change depending on how it will be implemented. Also, changing existing definitions can break things:

Changing the definition of the `length` property to a `uint` would break this code, though I don't think this pattern is used a lot. Such problems can easily be fixed using a type annotation (in this case `let length: number = arr.length;`).
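The break in question, sketched with today's types (the `uint` typing is the proposal's; the annotation fix works today):

```typescript
const arr = [1, 2, 3];
// If arr.length were inferred as uint, `len - 10` (which can go negative)
// would become an error. An explicit annotation keeps today's behavior.
// (Named `len` here only to avoid clashing with the DOM global `length`.)
let len: number = arr.length;
len = len - 10;
console.log(len); // -7
```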
).

Questions

1. Should the types be namespaced under `number`? Thus, `let x: int` vs. `let x: number.int`.
2. `int<8>`/`number.int<8>` or `int8`/`number.int8`?
3. ~~Can we make the emit of `int<n>(x)` (n < 32) better? The current looks and performs badly.~~ Solved using http://blog.vjeux.com/2013/javascript/conversion-from-uint8-to-int8-x-24.html
4. Should `undefined` and `null` be assignable to an integer type? The conversion functions convert them to 0. Allowing `undefined` would mean that `int + int` could be `NaN` (if one operand is `undefined`), while `NaN` is not assignable to an `int`. I'd say that `undefined` and `null` shouldn't be assignable to an integer type, and that declaring a variable (also a class property) with an integer type and without an initializer would be an error.

All feedback is welcome. If you're responding to one of these questions, please include the number of the question. If this proposal is accepted, I can try to create a PR for this.