Add bigints to standard library #14696
Yeah, maybe we should really do that. The compiler itself would benefit. |
Before sanctioning a particular implementation of bigints into the stdlib, some performance improvements are in order. The fact is, there is also https://github.com/FedeOmoto/bignum, which is a wrapper around GMP, and if I recall correctly it is significantly faster (maybe things have improved lately). See also the discussion in https://forum.nim-lang.org/t/3677. The ideal would be to have a fast, pure Nim implementation of bigints. |
for something like bigint, I'd say performance matters more than |
For the Nim compiler I don't care much about the speed, it's much more important that it has no dependencies. |
def-/nim-bigints is maintained by contributors. It is not a good candidate. |
Why not? Sometimes software is finished. |
If you look at the TODO, it's clear that this isn't finished yet. It's a good start though. I think it's important to have a pure Nim BigInt library, rather than calling out to some potentially faster other (C) based library. IMO, Nim is a systems programming language, and if we can't have a Nim (maybe Nim + asm) library that's roughly as performant as a C + asm library, then Nim needs to be enhanced. |
I think |
Er uh, I think that's not how it works. ;-) |
I was thinking |
GMP is a big no-no due to the license and not working at compile-time. I don't mind contributing a bigint from scratch, it's not like I wrote those 3 times already:
Note that the Constantine backend and the current Stint refactor range from 2x faster than GMP on modular arithmetic to sometimes slower for bigints below 1024 bits (I didn't test much beyond that, and didn't test non-modular arithmetic). The main reasons behind the stew dependencies are threefold:
In terms of BigInt usage, I would need to know more about the use-cases:
|
If the compiler needs big integers, shouldn't it finally start to use Nimble packages? |
@zah Agreed. However, I'm in no rush in adding big integer support for Nim and busy with other work considered more important. |
But in competitive programming, bigints are important, and you can't use external libraries. |
What is competitive programming? Sites like HackerRank? IOI competitions? No libraries allowed there? Can't bring your own hard drive? |
If it helps competitive programming, that's a side effect, but I don't think Nim should fall into competitive-programming driven development, unless those people doing competitive programming have demonstrated that they stick around in the Nim community, so that we can rely on them to maintain the code in the standard library (which is the main blocker for any module there). |
Maybe some inspiration can be taken from D's stdlib BigInt: https://dlang.org/phobos/std_bigint.html. I think competitive programming is an example use case, but maybe others benefit too, like scientific Nim projects. |
I agree; what this would require is:
|
@timotheecour, tracking the dependencies as git submodules solves both problems. You only have to do a little bit of manual work in adding the transitive dependencies by yourself. |
@zah since you brought this up can you please create either an issue or RFC so it can be discussed exclusively and separately in a dedicated location instead of here (it's more general than |
proposal
The only potential concern is licensing; as I understand it, dynamic linking to GMP shouldn't cause issues under the LGPL. If that's a problem (e.g. if static linking is needed), an alternative (slower but binary-compatible) implementation could be bundled with Nim, and users could opt to use mini-gmp or GMP for performance as needed.
vmops should solve the problem of working at compile time (that's how the new hashes work in the VM, for example); and there's also benchmark: computing
|
Hard no for GMP in the compiler. The license would be a nightmare. Coincidentally, I've started to export Constantine bigints internal into a self-contained variable-size bigint library https://github.com/SciNim/megalo Regarding Stint perf, the blocker is this: status-im/nim-stint#87 which leads to quadratic bad behavior. A complete rewrite of the backend is WIP: status-im/nim-stint#104 |
considering that there are 5 options already for bigints in this thread and new ones keep popping up, and that the standard library is a graveyard for unmaintained code given how few people care to keep it up to date, perhaps it's wiser to let the dust settle before adding yet another library? |
Dear Nim-devs,

About two years ago I wrote a C++ constexpr bigint library (https://github.com/niekbouman/ctbignum). Recently, I gave Nim a try and translated a small part of the library to Nim.

Kind regards,

template doubleWidthType(x: type): type =
  ## Maps an unsigned type to the type of twice its width.
  when x is uint8:
    uint16
  elif x is uint16:
    uint32
  elif x is uint32:
    uint64

template bitWidth(x: type): int =
  when x is uint8:
    8
  elif x is uint16:
    16
  elif x is uint32:
    32
  elif x is uint64:
    64

assert doubleWidthType(uint8) is uint16
assert bitWidth(uint8) == 8

func addIgnoreLastCarry[N: static int; T](a: array[N, T], b: array[N, T]): array[N, T] =
  ## Limb-wise addition; the carry out of the most significant limb is dropped.
  var carry = T(0)
  for i in 0..<N:
    let aa: T = a[i]
    let sum: T = aa + b[i]
    let res: T = sum + carry
    # An unsigned addition wrapped around iff the result is smaller than an operand.
    carry = T((sum < aa) or (res < sum))
    result[i] = res

func mul[M, N: static int; T](u: array[M, T], v: array[N, T]): array[M + N, T] =
  ## Schoolbook multiplication on little-endian limbs, accumulating
  ## intermediate products in a double-width type.
  type TT = doubleWidthType(T)
  for j in 0..<N:
    var k = T(0)
    for i in 0..<M:
      var t: TT = TT(u[i]) * TT(v[j]) + result[i + j] + k
      result[i + j] = cast[T](t)   # low half of the product
      k = T(t shr bitWidth(T))     # high half becomes the carry
    result[j + M] = k

func partialMul[M, N: static int; T](L: static int, u: array[M, T], v: array[N, T]): array[L, T] =
  ## Like `mul`, but only the lowest L limbs of the product are kept.
  type TT = doubleWidthType(T)
  for j in 0..<N:
    var k = T(0)
    var m = min(M, L - j)
    for i in 0..<m:
      var t: TT = TT(u[i]) * TT(v[j]) + result[i + j] + k
      result[i + j] = cast[T](t)
      k = T(t shr bitWidth(T))
    if j + M < L:
      result[j + M] = k

func toDigitArray(t: type, str: static string): array[str.len(), t] =
  ## Converts a decimal string literal to an array of digit values.
  for i, c in str:
    let x = int(c) - 48
    assert 0 <= x and x < 10
    result[i] = t(x)

func tightLen[N: static int; T](x: array[N, T]): int =
  ## Number of limbs up to and including the most significant nonzero one.
  for i in countdown(x.len - 1, 0):
    if x[i] != 0:
      return i + 1
  return 0

func parseBigintHelper(T: type, x: static string): auto =
  const l = x.len
  # 10/3 bits per decimal digit over-approximates log2(10) ~ 3.32.
  const N = 1 + (10 * l) div (3 * bitWidth(T))
  let digits = toDigitArray(T, x)
  var num: array[N, T]
  var powOfTen: array[N, T]
  powOfTen[0] = T(1)
  for i in countdown(l - 1, 0):
    num = addIgnoreLastCarry(num, partialMul(N, [digits[i]], powOfTen))
    powOfTen = partialMul(N, [T(10)], powOfTen)
  return num

func parseBigint(T: type, x: static string): auto =
  const num = parseBigintHelper(T, x)
  const l = tightLen(num)
  var res: array[l, T]
  for i in 0..<l:
    res[i] = num[i]
  return res

template bn32(x: static[string]): auto =
  parseBigint(uint32, x)

const example = bn32"4294967296"
echo example |
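If the definitions above compile as given, they also compose at compile time. A small, hypothetical usage sketch (all names come from the snippet above, not from any published library):

```nim
# Hypothetical usage of the snippet above: parse two numbers at compile
# time and multiply their limb arrays with `mul`.
const a = bn32"3"            # limbs: [3]
const b = bn32"5"            # limbs: [5]
const c = mul(a, b)          # array of a.len + b.len limbs, little-endian
static: doAssert c[0] == 15  # checked during compilation
```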
|
I agree with @def-. Why fork and redirect instead of moving over the repository? From the outside it looks like there wasn't even an attempt to include them in the discussion. Licensing aside, trying to talk to people first is imo the right way. |
After I move the repo, we can force-push the current nim-lang/bigints state onto it. |
I thought you might, for example, want to keep operations on I personally don't mind having both repos, especially now when nimble points bigints to the nim-lang version; but if y'all think it is better to just have one repo - fine by me. |
I'm not planning to maintain my own nim-bigints repo when there is an official one. Makes more sense to only have one. @narimiran I get this error trying to move: "nim-lang already has a repository in the def-/nim-bigints network ", I guess you have to remove it first. |
@narimiran any update on this? Is it planned to move the repo? I'd like to contribute to nim-lang/bigints, but it still has no issues enabled and I'm not sure if I should wait for a move. |
I'm guessing GitHub disables them automatically for forked repos? Whatever it might be, issues are now enabled.
No updates, sorry: it fell behind some other stuff. But it is time to finally solve this. I'm not sure how to deal with:
i.e. if we remove our repo, do we lose our commits? Can we just cherry-pick the missing commits from |
@narimiran I think one way would be to create a PR from nim-lang/bigints to def-/bigints, merge it, then remove the nim-lang repo and move the def- one? |
In that case we still have two repos, so nothing is gained.
Yes, that would work. |
2/3 done (PR and remove of nim-lang's version), now it is time to move def-'s one over. |
Done: https://github.com/nim-lang/nim-bigints |
Speaking of permissions: I don't know if it is now up to you or @Araq, but I will need maintainer permissions for this new repo :) |
Not up to me, I don't have permissions to add maintainers either. |
Core team group should add more repos. |
The double redirect works btw, old link from https://github.com/def-/nim-bigints goes to https://github.com/nim-lang/bigints |
I think this can be closed now, since https://github.com/nim-lang/bigints exists. |
The library does not even appear in the docs (https://nim-lang.org/docs/lib.html). I know that @Araq does not care about performance, but the computations need to stay practically feasible. Benchmarking will enable us to detect regressions and/or improvements. For benchmarking and test purposes, we will need to generate random bigints. There are still some discrepancies between bigints and int, see: nim-lang/bigints#90 (div/mod behaviour) |
@timotheecour proposed some benchmarking at the beginning of the thread: |
This kind of benchmark is not a good indicator, for at least three kinds of reasons. I would like to list them |
I wonder which operations will be the most useful for the compiler. Which number of limbs interests us? There are several other pitfalls in benchmarking, but the best approach I could think of is to test all the exported functions of the libraries on random bigints of given bit sizes, and to do this on batches of bigints. P.S.: I'll edit the post for simplification when I'm home; sorry in the meantime, I am writing this from my job. |
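A minimal sketch of such a batch benchmark, assuming only the Nim standard library. The bigint operation is stubbed with wrapping uint64 multiplication, since the exact bigints API is not fixed in this thread; a real run would substitute operations from the bigints package and vary the bit size:

```nim
import std/[monotimes, times, random]

# Hypothetical harness: time one operation over a batch of random
# operands with a fixed seed, as suggested above.
proc bench(name: string; batch: int; op: proc (x, y: uint64): uint64) =
  var rng = initRand(42)   # fixed seed for reproducible inputs
  var sink = 0'u64         # consumed so the loop cannot be optimized away
  let start = getMonoTime()
  for _ in 0 ..< batch:
    sink = sink xor op(rng.next(), rng.next())
  let elapsed = getMonoTime() - start
  echo name, ": ", elapsed.inMicroseconds, " us (checksum: ", sink, ")"

# Stand-in for a bigint operation; unsigned multiplication wraps safely.
bench("mul64 (stand-in)", 100_000, proc (x, y: uint64): uint64 = x * y)
```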
Yes, but that's a different issue (and it also applies to the other official libraries, like https://github.com/nim-lang/threading). It would be nice to get some more exposure for it though, so that hopefully more people suggest features, report bugs, etc.
Agreed, that's also a different issue.
I'm well aware that there are still problems to be solved (heck, I opened those issues) and functions to be optimized and I appreciate your comments on that, but I don't think this issue is the right place to discuss that, https://github.com/nim-lang/bigints/issues is. This issue is about adding bigints to std and we have an official library for bigints now, so it has been solved from my POV. |
The issue is about adding a bigints library to the stdlib. We have a good candidate (https://github.com/nim-lang/bigints), which is now managed by the nim-lang organization, but it still is not inside the stdlib; it is an external package. The difference is that you need to install bigints with nimble to access it (it does not ship with Nim) and its versioning is independent from Nim (which has its pros and cons). The issue can be closed when:
|
I know that you are. Others aren't. I wasn't only addressing you. I have been confused just like you by the difference between standard and official library. So:
"Add bigints to standard library" is actually clear so no need to rename the issue. |
No, if bit manipulation were needed, a BitSeq would be very simple and easy to optimize.
The compiler's needs are already covered by int128, which has been tested extensively over the past 3 years (though it introduced many regressions at first).
Furthermore, the current nim-lang/bigints will be slower because of its limb size and its heap allocation. Re benchmarking:
|
But then |
No, it doesn't; it's compiler-only. A public int128 should instead use the C compiler builtins, like: https://github.com/rockcavera/nim-nint128/blob/f88047b/src/nint128/nint128_cint128.nim#L72-L77 |
I understand, then maybe a public and documented |
https://github.com/nim-lang/bigints/ is official. |
Summary
Add arbitrary-precision integers to the standard library
Solution
Add the bigints library (it is on nimble) to the standard library
https://github.com/def-/nim-bigints