feat: type conversion, opInc, opDec for bigint and bigdec type #676
Conversation
Awesome! This will improve big integer ergonomics on the contract side.
Force-pushed from 7fdf79e to cb12b41.
In my opinion, it's imperative to integrate math.Big into Gno at some stage to enable the smooth transfer of Go libraries that rely on it, all while upholding Go's signature legibility. I strongly support the addition of new simple types such as bigint. The question that arises is whether we should create bigdec as well.

Can you add the unit tests you wrote in the PR body to the PR?

Edit: assuming we are confident we can convert later.
gnovm/stdlibs/big/bigdec.gno
// should always panic
// XXX how to catch go panic in gno ??
// shouldPanic(t, func() { bigdec("abc") })
I'm not sure about this, but can we catch go's panic in gno?
If it comes from a machine.Panic instead of a panic inside Go code, yes; however, we need to think about whether it makes sense here...
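For reference, a minimal sketch of what such a test helper could look like, assuming Gno's defer/recover behaves as in Go and that the panic is raised at the Gno level (machine.Panic) rather than inside native Go code:

```go
package big_test

import "testing"

// shouldPanic fails the test if f does not panic. Per the caveat
// above, recover can only observe Gno-level panics (machine.Panic),
// not panics raised inside native Go code.
func shouldPanic(t *testing.T, f func()) {
	defer func() {
		if recover() == nil {
			t.Errorf("expected panic, got none")
		}
	}()
	f()
}
```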
conversion that might lead to precision loss
I think bignum makes sense as a native type, and in general I agree with Manfred's argument. While this is probably the first instance in which Gno deviates from Go at the language level, it is for good reason in the context of applications. I have primarily two concerns: …
We should definitely discuss this in depth before pulling the trigger on it. While I see BigInt as more straightforward and beneficial (while acknowledging the variable-size blowup issue), I think BigDec is much worse. Looking at the way BigDec works across different libraries (I chose cockroach's impl after comparing many libraries), there is no universal spec for it, especially in how precision is handled across multiplication/division.

I don't understand BigDec's size properties very well, and I haven't needed to thus far, because BigInt and BigDec are only used to compute constant values in a limited way. That is, when we declare constant expressions in the form of decimal or floating-point constant expressions, we don't manipulate the value (say) in a for-loop. But once these become first-class runtime values, that changes.

IMO supporting BigInt is more straightforward. But we would want the Gno->Go precompiler to do the right thing to ensure type safety. I guess we can have a shim.BigInt type for that. The main question is: are we now officially breaking away from Go? If we can transpile Gno code to Go code that uses big.Int (and surely we can do this), I don't see why not. Maybe people might even end up using this for Go.
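To illustrate the shim idea (none of this exists yet; the package and names are hypothetical), the precompiler could emit Go code against a small wrapper that gives big.Int value semantics, so transpiled arithmetic stays type-safe:

```go
package shim

import "math/big"

// BigInt is a hypothetical value-semantics wrapper around math/big
// that precompiled Gno code could target.
type BigInt struct{ v *big.Int }

func New(x int64) BigInt { return BigInt{big.NewInt(x)} }

// Add returns a+b without mutating either operand, matching the
// immutability of a builtin numeric type.
func (a BigInt) Add(b BigInt) BigInt {
	return BigInt{new(big.Int).Add(a.ref(), b.ref())}
}

// ref treats the zero value of BigInt as 0.
func (a BigInt) ref() *big.Int {
	if a.v == nil {
		return new(big.Int)
	}
	return a.v
}

func (a BigInt) String() string { return a.ref().String() }
```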
@jaekwon thx for your opinion. I think we don't have much problem making bigint official. OTOH, for bigdec, …
If we're putting a hard limit at 512 bits, we should not call it "bignum" but "uint512". If we put a limit, we can also use an ad-hoc library such as uint256 (which could be adapted to 512 bits without too much trouble, I'd think, though 10^77 possible values are probably enough for most use cases), which has much better CPU/memory benchmarks than big.Int.

I didn't comment on this in my last comment, though I do agree that we should probably leave bigdec out. To me there's not really a compelling case for having it as a builtin type: any quantity that is discrete (i.e. money and tokens) should be in integers, and anything that should try to get close to the mathematical Real/Rational numbers should be a float, with very, very few exceptions.
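For comparison, a quick sketch of the fixed-width alternative, assuming the API of the holiman/uint256 Go library (which is presumably the one meant above):

```go
package main

import (
	"fmt"

	"github.com/holiman/uint256"
)

func main() {
	one := uint256.NewInt(1)
	// 1 << 255 still fits: storage is a fixed [4]uint64, so the
	// allocation cost is known statically, unlike big.Int.
	x := new(uint256.Int).Lsh(one, 255)
	sum := new(uint256.Int).Add(x, uint256.NewInt(42))
	fmt.Println(sum)
}
```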
The maximum memory size for GVM execution is determined by the allocator. This brings up another question: how do we set the max memory size? I opened a new issue for the discussion: #761.
In Solidity, uint256 and int256 use a maximum memory allocation of 32 bytes in the Ethereum Virtual Machine (EVM), which is comparable to an unrestricted bigint in GNO once the maximum allocation value is set. However, GNO's memory allocation tracking requires knowledge of the type size in order to monitor allocations.

To emulate Go's bigint behavior, we would need to define a Bigint structure type in gno and port the big.Int implementation from Go to GNO, allowing the GNO allocator to see the number of Words (unit types) used to store the Bigint. Otherwise, as in the current implementation, we need to assign a fixed size for bigint in the allocator, verify this size, and trigger a panic if an overflow occurs. In the current implementation, the GNO bigint type value is set at 200 bytes in the GNO allocator.
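A rough sketch of the word-aware approach described above (hypothetical names; a real port would mirror math/big's internals):

```go
package gnolang

// Word mirrors math/big's machine-word limb type.
type Word uintptr

// Bigint is a hypothetical ported big.Int whose real footprint the
// allocator could observe, instead of charging a flat 200 bytes.
type Bigint struct {
	neg bool   // sign
	abs []Word // magnitude, least significant word first, as in math/big
}

// AllocBytes is what a word-aware allocator might charge for b.
func (b *Bigint) AllocBytes() int64 {
	const structOverhead = 32 // rough fixed cost of the header
	return structOverhead + int64(len(b.abs))*8 // assuming 64-bit words
}
```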
In Go, big.dec is an internal data type within the big package, serving solely as a decimal representation of big.Float for display purposes; it is not exposed to users for arithmetic. In Solidity, Decimal is not a type but rather a literal (value format) of the fixed-point number type, since Solidity uses fixed-point numbers instead of floating-point numbers; there are no floating-point numbers in Solidity. Users can create custom decimal struct types and arithmetic functions that wrap int256 operations to simulate decimal arithmetic. CockroachDB is an RDBMS that adheres to SQL specifications and supports both fixed-point types (decimal type) and floating-point numbers (float type). In SQL, Decimal is a distinct number type rather than a representation of floating-point numbers.

In GNO, we implement bigdec with fixed-point numbers to achieve deterministic big floating-point numbers. This seems to be the original intention (adopting the fixed-point number concept from CockroachDB). Moving forward, it might not be necessary to connect Go's big.dec to GNO's bigdec, as they represent different kinds of point numbers: Go's big.dec is for floating-point numbers, while Gno's bigdec is for fixed-point numbers, since it is borrowed from CockroachDB.

A potential optimization involves creating the Bigdec structure in GNO, allowing the allocator to use the internal []Word size to calculate the Bigdec size, thereby leveraging the full memory of GNO for a dynamically-sized big number. Presently, the allocator has a fixed bigdec size of 200 bytes. For the time being, if we think adjusting the fixed type size and verifying overflow suffices, this would be a relatively simple and satisfactory solution.
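Reusing the hypothetical Bigint sketch above, the fixed-point representation borrowed from CockroachDB boils down to a coefficient and a base-10 exponent, which would let the allocator size bigdec dynamically as well:

```go
// Bigdec is a hypothetical fixed-point decimal in the apd style:
// value = Coeff * 10^Exponent, so 1.23 is {Coeff: 123, Exponent: -2}.
type Bigdec struct {
	Coeff    Bigint // arbitrary-precision integer coefficient
	Exponent int32
}

// AllocBytes again exposes the dynamic size to the allocator.
func (d *Bigdec) AllocBytes() int64 {
	return 4 + d.Coeff.AllocBytes()
}
```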
We can't limit the bigint size without breaking the Go spec for constant expressions, so we can't do that. Which means we will have to account for memory allocations.
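Concretely, untyped constant arithmetic in Go is arbitrary-precision at the language level (the spec only permits implementations to cap integer constants at no less than 256 bits), so oversized intermediates like the one below are legal even though no runtime integer type could hold them:

```go
package main

// The intermediate 1 << 200 exceeds every runtime integer type,
// yet this is a valid constant expression.
const two = (1 << 200) >> 199

func main() {
	println(two) // 2
}
```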
Since this PR includes both bigint and bigdec, …
How about supporting just bigint? If …
Description
can be converted from bigint to …
can be converted to bigint from …
can be converted from bigdec to …
can be converted to bigdec from …
How has this been tested?
included testcase.gno
Additional
related PR & issue: #306, #650
to support big numbers => bigint
to support arbitrary precision => bigdec
for better compatibility => type conversion
bigdec <-> float(64) conversion has not been implemented, due to determinism concerns
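A hypothetical usage sketch of what this PR enables, assuming Go-style conversion syntax for the builtin bigint/bigdec types (the exact conversion set is the one listed above):

```go
package main

func main() {
	b := bigint(7) // convert a fixed-size integer to bigint (assumed syntax)
	b++            // opInc on bigint
	d := bigdec(b) // bigint -> bigdec conversion
	d--            // opDec on bigdec
	n := int(b)    // narrow back to a fixed-size integer
	println(n)     // 8
}
```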