
Add bigints to standard library #14696

Closed
uninhm opened this issue Jun 17, 2020 · 88 comments

@uninhm

uninhm commented Jun 17, 2020

Summary

Add arbitrary precision integers to standard library

Solution

Add the bigints library (it is in nimble) to standard library

https://github.com/def-/nim-bigints

@Araq
Member

Araq commented Jun 17, 2020

Yeah, maybe we should really do that. The compiler itself would benefit.

@andreaferretti
Collaborator

Before sanctioning a particular implementation of bigints into the stdlib, some performance improvements are in order. The fact is, there is also https://github.com/FedeOmoto/bignum, which is a wrapper around GMP, and if I recall correctly it is significantly faster (maybe things have improved lately). See also the discussion in https://forum.nim-lang.org/t/3677. The ideal would be to have a fast, pure Nim implementation of bigints.

@timotheecour
Member

The ideal would be to have a fast, pure Nim implementation of bigints.

For something like bigint, I'd say performance matters more than a pure Nim implementation if I had to choose between the two, unless the performance difference is small enough. But it also has to work at compile time (CT), which requires one of: CT FFI, a pure Nim implementation as a fallback for the VM, or integration via vmops (which is not ideal, as it ties the compiler to a particular bignum library).
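To make the "pure Nim fallback for the VM" option concrete, here is a minimal sketch (illustrative only, not any library's actual code; addLimb is a made-up helper) of how one proc can carry a VM-evaluable path next to a faster native path:

func addLimb(a, b: uint32, carry: var uint32): uint32 =
  when nimvm:
    # portable code the VM can always evaluate
    let s = uint64(a) + uint64(b) + uint64(carry)
    carry = uint32(s shr 32)
    result = uint32(s and 0xFFFF_FFFF'u64)
  else:
    # at run time this branch could call intrinsics or C code instead;
    # the portable code is repeated here only to keep the sketch self-contained
    let s = uint64(a) + uint64(b) + uint64(carry)
    carry = uint32(s shr 32)
    result = uint32(s and 0xFFFF_FFFF'u64)

static:   # proves the proc is usable at compile time, inside the VM
  var c = 0'u32
  doAssert addLimb(high(uint32), 1'u32, c) == 0'u32 and c == 1'u32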

@Araq
Member

Araq commented Jun 17, 2020

For the Nim compiler I don't care much about the speed, it's much more important that it has no dependencies.

@planetis-m
Contributor

planetis-m commented Jun 17, 2020

def-/nim-bigints is maintained by contributors. It is not a good candidate.

@Araq
Member

Araq commented Jun 17, 2020

Why not? Sometimes software is finished.

@bpr

bpr commented Jun 18, 2020

Why not? Sometimes software is finished.

If you look at the TODO, it's clear that this isn't finished yet. It's a good start though.

I think it's important to have a pure Nim BigInt library, rather than calling out to some other, potentially faster, C-based library. IMO, Nim is a systems programming language, and if we can't have a Nim (maybe Nim + asm) library that's roughly as performant as a C + asm library, then Nim needs to be enhanced.

@alaviss
Collaborator

alaviss commented Jun 18, 2020

Personally I'd prefer stint due to its maturity, but it appears that the library depends on parts of nim-stew...

Also, shouldn't something like this be added to fusion instead of the stdlib?

@juancarlospaco
Collaborator

juancarlospaco commented Jun 18, 2020

I think varints could be replaced by a new bigints module in the stdlib, so the stdlib stays roughly constant in size.

@Araq
Member

Araq commented Jun 18, 2020

I think varints could be replaced by a new bigints module in the stdlib, so the stdlib stays roughly constant in size.

Er uh, I think that's not how it works. ;-)

@juancarlospaco
Collaborator

I was thinking varints was supposed to be a bigint kind of thing but was never completely implemented, sorry.

@mratsim
Collaborator

mratsim commented Jun 18, 2020

GMP is a big no-no due to the license and not working at compile-time

I don't mind contributing a bigint from scratch, it's not like I wrote those 3 times already:

Note that the Constantine backend and the current Stint refactor range from 2x faster than GMP on modular arithmetic to sometimes slower, for bigints below 1024 bits (I didn't test much beyond that, and didn't test non-modular arithmetic).

The main reasons behind the stew dependencies are threefold:

  • Nim's endians doesn't work at compile time and calls things integers that are actually raw bytes
  • there are no utilities for seq[byte] and byte arrays in the stdlib
  • bitops' noUndefined returning 0 instead of the bit width of the type forces everyone to reimplement countLeadingZero or countTrailingZero (a zero-safe wrapper is sketched after this list):

    Nim/lib/pure/bitops.nim, lines 397 to 399 in 234f4a2:

    when noUndefined:
      if x == 0:
        return 0

    I am unaware of any use case where 0 is the correct answer.
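As an illustration of that last point, a minimal sketch of the kind of zero-safe wrapper every bigint library ends up carrying (assuming std/bitops' countLeadingZeroBits; clz is just an illustrative name, not an existing stdlib proc):

import std/bitops

# sketch only: return the type's bit width for zero input instead of 0
func clz[T: SomeUnsignedInt](x: T): int =
  if x == 0: sizeof(T) * 8
  else: countLeadingZeroBits(x)

doAssert clz(0'u32) == 32   # the answer bignum code actually wants
doAssert clz(1'u32) == 31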

In terms of BigInt usage, I would need to know more about the use cases:

  • What kind of API: out-of-place and integer-like, i.e. proc `+`(a, b: BigInt): BigInt, or in-place like GMP (both styles are sketched below)
  • Do we target embedded? BigInt requires dynamic memory management, so there the API would have to be GMP-like instead of "integer-like", i.e. only in-place operations
  • Do we want it thread-safe?
  • Do we assume that people might do BigInt operations in a loop? In that case, if we use the out-of-place API, we might want a caching allocator, as memory allocation will be a bottleneck
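For reference, a minimal sketch of the two API shapes in question (unsigned magnitudes only, no aliasing handling; the type and proc bodies are illustrative, not an existing library's API):

type BigInt = object
  limbs: seq[uint32]   # dynamic storage: this is what makes embedded targets harder

# in-place, GMP-like: the caller owns the destination, no hidden allocation
proc add(dst: var BigInt; a, b: BigInt) =
  dst.limbs.setLen(max(a.limbs.len, b.limbs.len) + 1)
  var carry = 0'u64
  for i in 0 ..< dst.limbs.len:
    let x = if i < a.limbs.len: uint64(a.limbs[i]) else: 0'u64
    let y = if i < b.limbs.len: uint64(b.limbs[i]) else: 0'u64
    let s = x + y + carry
    dst.limbs[i] = uint32(s and 0xFFFF_FFFF'u64)
    carry = s shr 32
  while dst.limbs.len > 1 and dst.limbs[^1] == 0'u32:   # trim leading zero limbs
    dst.limbs.setLen(dst.limbs.len - 1)

# out-of-place, "integer-like": convenient, but allocates a fresh result per call
proc `+`(a, b: BigInt): BigInt =
  result.add(a, b)

The in-place form avoids allocations in hot loops; the out-of-place form reads like ordinary integer code.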

@zah
Member

zah commented Jun 18, 2020

If the compiler needs big integers, shouldn't it finally start to use Nimble packages? stint, as it exists right now, is a mature implementation of fixed-size integers. Duplicating it wouldn't be beneficial for anyone. The notion that the standard library is what the compiler needs to use is a damaging policy that prevents the Nim team from benefiting from collaboration with any other team. It wouldn't be that hard to add a bunch of git submodules and -p:<path> directives to the compiler's nim.cfg file and solve the problem in an afternoon.
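For illustration, the vendoring could look roughly like this (repositories and paths are only examples, not a worked-out setup):

# add the package and its transitive deps as submodules
git submodule add https://github.com/status-im/nim-stint dist/stint
git submodule add https://github.com/status-im/nim-stew dist/stew

# then, in the compiler's nim.cfg (or on the command line via -p:)
--path:"dist/stint"
--path:"dist/stew"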

@Araq
Member

Araq commented Jun 18, 2020

@zah Agreed. However, I'm in no rush in adding big integer support for Nim and busy with other work considered more important.

@uninhm
Author

uninhm commented Jun 18, 2020

If the compiler needs big integers, shouldn't it finally start to use Nimble packages? stint, as it exists right now, is a mature implementation of fixed-size integers. Duplicating it wouldn't be beneficial for anyone. The notion that the standard library is what the compiler needs to use is a damaging policy that prevents the Nim team from benefiting from collaboration with any other team. It wouldn't be that hard to add a bunch of git submodules and -p:<path> directives to the compiler's nim.cfg file and solve the problem in an afternoon.

But in competitive programming, bigints are important, and you can't use external libraries.

@zah
Member

zah commented Jun 18, 2020

What is competitive programming? Sites like HackerRank? IOI competitions? No libraries allowed there? Can't bring your own hard drive?

@mratsim
Collaborator

mratsim commented Jun 18, 2020

If it helps competitive programming, that's a side effect, but I don't think Nim should fall into competitive-programming-driven development, unless the people doing competitive programming have demonstrated that they stick around in the Nim community, so that we can rely on them to maintain the code in the standard library (which is the main blocker for any module there).

@juancarlospaco
Collaborator

juancarlospaco commented Jun 18, 2020

Maybe D's stdlib BigInt can be some inspiration?: https://dlang.org/phobos/std_bigint.html

I think competitive programming is an example use case, but maybe others benefit too, like scientific Nim projects.

@timotheecour
Member

If the compiler needs big integers, shouldn't it finally start to use Nimble packages?

I agree; what this would require is:

  • locking down all recursive dependencies
    to avoid a situation in which some unrelated change breaks bootstrap
  • introducing fork mirrors regularly updated for each recursive dependency
    to avoid a situation in which a bad force push breaks bootstrap

@zah
Member

zah commented Jun 18, 2020

@timotheecour, tracking the dependencies as git submodules solves both problems. You only have to do a little bit of manual work in adding the transitive dependencies by yourself.

@timotheecour
Member

timotheecour commented Jun 18, 2020

@zah since you brought this up, can you please create either an issue or an RFC so it can be discussed exclusively and separately in a dedicated location instead of here (it's more general than bigints)? I'll follow up there.

@timotheecour
Member

timotheecour commented Oct 5, 2020

proposal

mini-gmp (subset of https://gmplib.org/) could be a good candidate to port/wrap:

  • single header, single source, no dependency
  • API-compatible with GMP, so users could link against -lgmp for GMP performance
  • could be used at CT using vmops
  • most importantly, performance is better than other alternatives apart from gmp itself:

The performance target for mini-gmp is to be at most 10 times slower than the real GMP library, for numbers of size up to a few hundred bits.

The only potential concern is licensing; as I understand it, dynamic linking to GMP shouldn't cause issues under the LGPL. If that's a problem (e.g. if static linking is needed), an alternative (slower but binary-compatible) implementation could be bundled with Nim, and users could opt to use mini-gmp or GMP for performance as needed.
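For illustration, a minimal (untested) sketch of what wrapping the bundled mini-gmp sources directly could look like; the mpz_* procs and __mpz_struct are the public (mini-)GMP names, while the file paths and the Mpz alias are assumptions:

{.compile: "mini-gmp.c".}   # the single C source file shipped in GMP's mini-gmp/ directory

type Mpz {.importc: "__mpz_struct", header: "mini-gmp.h".} = object   # treated as opaque here

proc mpz_init(x: var Mpz) {.importc, header: "mini-gmp.h".}
proc mpz_clear(x: var Mpz) {.importc, header: "mini-gmp.h".}
proc mpz_set_ui(x: var Mpz, v: culong) {.importc, header: "mini-gmp.h".}
proc mpz_mul(r, a, b: var Mpz) {.importc, header: "mini-gmp.h".}
proc mpz_get_str(s: cstring, base: cint, a: var Mpz): cstring {.importc, header: "mini-gmp.h".}

var a, b, c: Mpz
mpz_init(a); mpz_init(b); mpz_init(c)
mpz_set_ui(a, 12345)
mpz_set_ui(b, 67890)
mpz_mul(c, a, b)
echo mpz_get_str(nil, 10, c)   # GMP allocates the result string when s is nil
mpz_clear(a); mpz_clear(b); mpz_clear(c)

Because the header is included, the same Nim declarations should build against either the bundled mini-gmp or the real GMP, which is the API-compatibility point above.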

GMP is a big no-no due to the license and not working at compile-time

vmops should solve the problem of working at CT (that's how the new hashes work in the VM, for example); and there's also --experimental:compiletimeFFI

benchmark: computing factorial(100) 1 million times

obviously a very partial benchmark but it's a start, feel free to suggest improvements/corrections

(all tests with clang -O2 or similar; on a recent mac)

# times in seconds
gmp: 1.48
EDIT: kaushalmodi/bignum (nimble install bignum): 1.50
mini-gmp: 2.24
python: 11
std.bigint (D): 12
nim using pkg/bigints: 14
nim using pkg/stint: 41
Big-Integer-C: 45
tiny-bignum-c: 2260

D:

for D's std.bigint (https://dlang.org/phobos/std_bigint.html) with:

ldmd2 -of/tmp/z05 -O -inline -release $timn_D/tests/bigints/z04.d
version   1.23.0 (DMD v2.093.1, LLVM 9.0.1)
import std.bigint;
import std.stdio;
void test1(bool isEcho){
  BigInt n = "100";
  BigInt ret = BigInt("1");
  while(n!=0){
    ret = ret * n;
    n = n-1;
  }
  if(isEcho) writeln(ret);
}

void main(){
  for(int i=0;i<1000000;i++){
    test1(false);
  }
  test1(true);
}

Big-Integer-C:

see test here: ilia3101/Big-Integer-C#3

python

def test1():
  n = 100
  ret = 1
  while(n!=0):
    ret = ret * n
    n = n-1
  # print(ret)

def main():
  for i in range(0,1000000): test1()
main()

nim + pkg/bigints

import pkg/bigints

proc test1(isEcho: bool) =
  let n = 100
  var n2 = n.initBigInt
  var ret = 1.initBigInt
  while n2 != 0:
    # ret = ret * n2
    ret *= n2
    n2.dec
  if isEcho:
    echo ret

proc main=
  for i in 0..<1000000:
    test1(isEcho=false)
  test1(isEcho=true)

main()

nim + pkg/stint

https://github.com/status-im/nim-stint

import pkg/stint

proc test1(isEcho: bool)=
  const d = 1024
  var n = 100.stint(d)
  var ret = 1.stint(d)
  while n != 0.stint(d):
    ret = ret * n
    n = n-1.stint(d)
  if isEcho:
    echo ret

proc main=
  for i in 0..<1_000_000:
    test1(isEcho=false)
  test1(isEcho=true)
main()

EDIT: nim + pkg/bignum

This option (https://github.com/kaushalmodi/bignum) hasn't been considered in the thread above; it's a fork of https://github.com/FedeOmoto/bignum that got a few updates to make it work (unfortunately under the same nimble name, see FedeOmoto/bignum#3).
It (unsurprisingly) gives the same performance as GMP, via a high-level wrapper around Nim's gmp wrapper; test file used in the benchmark above:

when true:
  import pkg/bignum
  proc test1(n,ret: var Int, isEcho: bool)=
    discard n.set 100
    discard ret.set 1
    while n != 0:
      ret *= n
      n.dec(1)
    if isEcho:
      echo ret

  proc main=
    var n = 100.newInt
    var ret = 1.newInt
    for i in 0..<1_000_000:
      test1(n, ret, isEcho=false)
    test1(n, ret, isEcho=true)
  main()

links

alternative not evaluated: openssl bn

see https://www.openssl.org/docs/man1.0.2/man3/bn.html, its license should not cause concern

@mratsim
Collaborator

mratsim commented Oct 5, 2020

Hard no for GMP in the compiler. The license would be a nightmare.

Coincidentally, I've started to export Constantine's bigint internals into a self-contained variable-size bigint library: https://github.com/SciNim/megalo

Regarding Stint perf, the blocker is this: status-im/nim-stint#87, which leads to bad quadratic behavior. A complete rewrite of the backend is WIP: status-im/nim-stint#104

@arnetheduck
Contributor

Considering that there are already 5 options for bigints in this thread and new ones keep popping up, and that the standard library is a graveyard for unmaintained code given how few people care to keep it up to date, perhaps it's wiser to let the dust settle before adding yet another library?

@niekbouman

Dear Nim-devs,

About two years ago I wrote a C++ constexpr bigint library (https://github.com/niekbouman/ctbignum)

Recently, I gave Nim a try and translated a small part of the library to Nim.
Maybe it could help as inspiration for a standard library bignum implementation.

kind regards,
Niek

template doubleWidthType(x: type): type =
  when x is uint8:  
      uint16
  elif x is uint16:  
      uint32
  elif x is uint32:  
      uint64

template bitWidth(x: type): int =
  when x is uint8:  
      8
  elif x is uint16:  
      16
  elif x is uint32:  
      32
  elif x is uint64:  
      64
  
assert doubleWidthType(uint8) is uint16  
assert bitWidth(uint8) == 8 

func addIgnoreLastCarry[N: static int; T](a: array[N, T], b: array[N, T]): array[N, T] = 
  var carry = T(0)
  for i in 0..<N:
    let aa: T = a[i]
    let sum: T = aa + b[i]
    let res: T = sum + carry
    carry = T((sum < aa) or (res < sum))
    result[i] = res

func mul[M, N: static int; T](u: array[M, T], v: array[N, T]): array[M + N, T] =
  type TT = doubleWidthType(T)
  for j in 0..<N:
    var k = T(0)
    for i in 0..<M:
      var t : TT  = TT(u[i]) * TT(v[j]) + result[i + j] + k
      result[i + j] = cast[T](t)
      k = T(t shr bitWidth(T))
    result[j + M] = k

func partialMul[M, N: static int; T](L: static int, u: array[M, T], v: array[N, T]): array[L, T] =
  type TT = doubleWidthType(T)
  for j in 0..<N:
    var k = T(0)
    var m = min(M, L - j)
    for i in 0..<m:
      var t : TT  = TT(u[i]) * TT(v[j]) + result[i + j] + k
      result[i + j] = cast[T](t)
      k = T(t shr bitWidth(T))
    if j + M < L:  
      result[j + M] = k

func toDigitArray(t: type, str: static string): array[str.len(), t] =
  for i, c in str:
    let x = int(c) - 48
    assert 0 <= x and x < 10
    result[i] = t(x)

func tightLen[N: static int; T](x : array[N,T]) : int = 
  for i in countdown(x.len - 1 ,0):
    if x[i] != 0:
      return i + 1
  return 0

func parseBigintHelper(T: type, x: static string): auto =
  const l = x.len;
  const N = int(1 + (10 * l) / (3 * bitWidth(T)))
  let digits = toDigitArray(T,x)
  var num : array[N, T]
  var powOfTen : array[N, T]
  powOfTen[0] = T(1)
  for i in countdown(l - 1, 0):
    num = addIgnoreLastCarry(num, partialMul(N,[digits[i]], powOfTen))
    powOfTen = partialMul(N,[T(10)], powOfTen)
  return num

func parseBigint(T: type, x: static string): auto =
  const num = parseBigintHelper(typeof T, x)
  const l = tightLen(num)
  var result : array[l, T]
  for i in 0..<l:
    result[i] = num[i]
  return result  
  
template bn32(x: static[string]): auto =  
  parseBigint(uint32, x)    

const example = bn32"4294967296"
echo example

@pietroppeter
Collaborator

add a big disclaimer on def-/bigints to point at nim-lang/bigints

https://github.com/def-/nim-bigints/pull/39

@Vindaar
Contributor

Vindaar commented Nov 10, 2021

add a big disclaimer on def-/bigints to point at nim-lang/bigints

https://github.com/def-/nim-bigints/pull/39

I agree with @def-. Why fork and redirect instead of moving over the repository? From the outside it looks like there wasn't even an attempt to include them in the discussion. Licensing aside, trying to talk to people first is imo the right way.

@def-
Member

def- commented Nov 10, 2021

nim-lang/bigints was forked some time ago and the commit histories diverged. I think the effort would be worth it though.

After I move the repo, we can force-push the current nim-lang/bigints state onto it.
@narimiran is that ok for you?

@narimiran
Member

we can force-push the current nim-lang/bigints state onto it

I thought you might, for example, want to keep operations on BigInt and int32, which we deliberately don't support.

I personally don't mind having both repos, especially now when nimble points bigints to the nim-lang version; but if y'all think it is better to just have one repo - fine by me.

@def-
Member

def- commented Nov 11, 2021

I'm not planning to maintain my own nim-bigints repo when there is an official one. Makes more sense to only have one.

@narimiran I get this error trying to move: "nim-lang already has a repository in the def-/nim-bigints network ", I guess you have to remove it first.

@konsumlamm
Contributor

@narimiran any update on this? Is it planned to move the repo? I'd like to contribute to nim-lang/bigints, but it still has no issues enabled and I'm not sure if I should wait for a move.

@narimiran
Member

it still has no issues enabled

I'm guessing github disables them automatically for forked repos? Whatever it might be, issues are now enabled.

any update on this?

No updates, sorry: it fell behind some other stuff. But it is time to finally solve this.

I'm not sure how to deal with:

"nim-lang already has a repository in the def-/nim-bigints network ", I guess you have to remove it first.

i.e. if we remove our repo, do we lose our commits? Can we just cherry-pick the missing commits from def-/nim-bigints?

@ghost

ghost commented Nov 23, 2021

@narimiran I think one way would be to create a PR from nim-lang/bigints to def-/bigints, merge it, then remove the nim-lang repo and move the def- one?

@def-
Member

def- commented Nov 23, 2021

i.e. if we remove our repo, do we lose our commits? Can we just cherry-pick the missing commits from def-/nim-bigints?

In that case we still have two repos, so nothing won.

@narimiran I think one way would be to create a PR from nim-lang/bigints to def-/bigints, merge it, then remove the nim-lang repo and move the def- one?

Yes, that would work.

@narimiran
Member

@narimiran I think one way would be to create a PR from nim-lang/bigints to def-/bigints, merge it, then remove the nim-lang repo and move the def- one?

2/3 done (the PR and the removal of nim-lang's version); now it is time to move def-'s one over.

@def-
Member

def- commented Nov 24, 2021

Done: https://github.com/nim-lang/nim-bigints
I think I'm missing the permissions to move it from nim-bigints to bigints though (and I'm not 100% sure if Github will keep the redirects properly if we do two in a row)

@narimiran
Member

I think I'm missing the permissions

Speaking of permissions: I don't know if it is now up to you or @Araq, but I will need maintainer permissions for this new repo :)

@def-
Member

def- commented Nov 24, 2021

Not up to me, I don't have permissions to add maintainers either.

@ringabout
Member

Core team group should add more repos.

@def-
Member

def- commented Nov 25, 2021

The double redirect works btw, old link from https://github.com/def-/nim-bigints goes to https://github.com/nim-lang/bigints

@konsumlamm
Contributor

I think this can be closed now, since https://github.com/nim-lang/bigints exists.

@dlesnoff
Contributor

dlesnoff commented Jul 28, 2022

The library does not even appear in the docs https://nim-lang.org/docs/lib.html
Maybe the issue should be renamed (or closed and a new issue opened) for integration within the compiler.
I would like to know what kind of critical feature is still missing for that integration, before spending time on efficiency improvements (if I can actually find the time for it).
@mratsim raised an important issue about the library: we should consider using uint64 instead of uint32 limbs, which gives better constants for all operations, especially if we implement naive algorithms like O(n^2) multiplication.
Half the number of limbs means four times fewer limb operations.
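Spelling out that arithmetic for 640-bit operands (schoolbook multiplication does roughly n^2 limb products for n-limb numbers):

const bits = 640
echo (bits div 32) * (bits div 32)   # uint32 limbs: 20*20 = 400 limb products
echo (bits div 64) * (bits div 64)   # uint64 limbs: 10*10 = 100 limb products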

I know that @Araq does not care about performance, but the computations need to stay practically feasible. Benchmarking will enable us to detect regressions and/or improvements. For benchmarking and test purposes, we will need to generate random bigints.

There are still some discrepancies between bigints and int, see nim-lang/bigints#90 (div/mod behaviour)
and nim-lang/bigints#51 (bitwise operators for negative numbers).

@dlesnoff
Contributor

dlesnoff commented Aug 1, 2022

@timotheecour proposed some benchmarking earlier in the thread:

benchmark: computing factorial(100) 1 million times
obviously a very partial benchmark but it's a start, feel free to suggest improvements/corrections

This kind of benchmark is not a good indicator, for at least three kinds of reasons. I would like to list them:

  • this is not the algorithm we want to use in practice to compute a factorial. The binary-splitting algorithm (sketched after this list) is more practical than the naïve one, and depending on the number of digits there are even better algorithms. GMP comes with a pre-built factorial, mpz_fac_ui, see https://gmplib.org/manual/Number-Theoretic-Functions.
    I have done some tests with the factorial too at https://forum.nim-lang.org/t/8967.
    We are on the order of several seconds for one factorial computation of the six-digit number 123456!, while it only takes 23 ms in software dedicated to arithmetic, like Pari/GP (or GMP).
    It gets worse when scaling, but aren't those already acceptable timings for what we want to do?

  • Anyway, n! has about n*log2(n) bits (Stirling's approximation), so 100! is on the order of 600-700 bits. Divided by 64, that is a number of only about 10 limbs.
    So we are only testing a very specific case of the library! With the factorial computation we are only testing multiplication!
    When increasing the number of limbs, memory management and cache tuning become more crucial and need to be taken into account.

  • we run the exact same computation over and over. Processors with branch prediction will optimize repetitive computations,
    so results may vary greatly between different processor models/manufacturers.

  • we should run some unmeasured computation before benchmarking, to "warm up" the processor. These "blank" computations do not appear in @timotheecour's scripts.
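As a point of comparison for the first bullet, a sketch of binary-splitting factorial on top of the bigints package (initBigInt, `*`, and echo on a BigInt are used elsewhere in this thread; prodRange and factorialSplit are illustrative names):

import bigints

proc prodRange(lo, hi: int): BigInt =
  # product lo*(lo+1)*...*hi; splitting the range keeps the two operands of each
  # multiplication balanced in size, which is what beats the naive left-to-right
  # product once the numbers get large
  if lo > hi:
    result = initBigInt(1)
  elif lo == hi:
    result = initBigInt(lo)
  else:
    let mid = (lo + hi) div 2
    result = prodRange(lo, mid) * prodRange(mid + 1, hi)

proc factorialSplit(n: int): BigInt =
  prodRange(1, n)

echo factorialSplit(100)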

I wonder which operations will be the most useful for the compiler. Which number of limbs interests us?
Aren't bit manipulations like setBit, logical operations, and inc/dec more needed than costly arithmetic ops (essentially multiplication and division)?
I believe we should tune the library specifically for the needs of the compiler. (If you disagree, please consider explaining why rather than just 👎🏽-ing this post).
The benefits of this library are very limited compared to the SciNim wrapper of GMP for other applications like scientific computation.

There are several other pitfalls in benchmarking, but the best approach I could think of is to actually test all the exported functions of the libraries on random bigints of given bit sizes, in batches.
Measuring a single multiplication might not be noticeable, so we want to measure multiple multiplications on random numbers, without letting the generation of those random numbers leak into the measurement.
If we measure the timings of, say, ten multiplications, we can average them.
So I would generate a batch of 2*10 BigInts and do the multiplication on each pair of operands; a sketch follows.
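A sketch of that shape of benchmark (initBigInt, `*`, and `+` as used elsewhere in the thread; randomBigInt and benchMul are illustrative helpers, and the sizes are arbitrary):

import std/[random, monotimes, times]
import bigints

proc randomBigInt(digits: int): BigInt =
  # build a pseudo-random number with roughly `digits` decimal digits,
  # nine digits at a time, so generation stays outside the measured region
  result = initBigInt(1)
  for _ in 0 ..< digits div 9:
    result = result * initBigInt(1_000_000_000) + initBigInt(rand(999_999_999))

proc benchMul(batch: seq[(BigInt, BigInt)]): Duration =
  let start = getMonoTime()
  for (a, b) in batch:
    discard a * b
  result = getMonoTime() - start

randomize(42)   # fixed seed so runs stay comparable
var batch: seq[(BigInt, BigInt)]
for _ in 0 ..< 10:
  batch.add((randomBigInt(200), randomBigInt(200)))
discard benchMul(batch)                  # warm-up pass, not measured
echo "10 multiplications: ", benchMul(batch)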

P.S.: I'll edit this post to simplify it when I'm home. Sorry in the meantime; I am writing this from work.

@konsumlamm
Contributor

The library does not even appear in the docs https://nim-lang.org/docs/lib.html

Yes, but that's a different issue (and it also applies to the other official libraries, like https://github.com/nim-lang/threading). It would be nice to get some more exposure for it though, so that hopefully more people suggest features, report bugs, etc.

Maybe the issue should be renamed (or closed and a new issue opened) for integration within the compiler.

Agreed, that's also a different issue.

There are still some discrepancies between bigints and int, see nim-lang/bigints#90 (div/mod behaviour)
and nim-lang/bigints#51 (bitwise operators for negative numbers).

I'm well aware that there are still problems to be solved (heck, I opened those issues) and functions to be optimized and I appreciate your comments on that, but I don't think this issue is the right place to discuss that, https://github.com/nim-lang/bigints/issues is.

This issue is about adding bigints to the stdlib and we have an official library for bigints now, so it has been solved from my POV.
@juancarlospaco can you explain why you don't think this issue should be closed?

@pietroppeter
Collaborator

The issue is about adding a bigints library to the stdlib. We have a good candidate (https://github.com/nim-lang/bigints), which is now managed by the nim-lang organization, but it is still not inside the stdlib; it is an external package. The difference is that you need to install bigints with nimble to access it (it does not ship with Nim) and its versioning is independent from Nim (which has its pros and cons). The issue can be closed when:

@dlesnoff
Contributor

dlesnoff commented Aug 1, 2022

I'm well aware that there are still problems to be solved (heck, I opened those issues) and functions to be optimized and I appreciate your comments on that, but I don't think this issue is the right place to discuss that, https://github.com/nim-lang/bigints/issues is.

I know that you are. Others aren't. I wasn't only addressing you.
I wanted to summarize the important issues that I felt necessary to tackle for integration into the stdlib (before or after v2).
Yes, people may check https://github.com/nim-lang/bigints/issues, but in practice nobody has the time to go through each of the ~15 currently open issues, and the numerous technical posts in each of them, just to understand the state of the library.
That is why I created a dedicated issue for a roadmap (not up to date), nim-lang/bigints#76, as well as to discuss the versioning system.

I have been confused just like you by the difference between standard and official library. So:

Maybe the issue should be renamed (or closed and a new issue opened) for integration within the compiler.

"Add bigints to standard library" is actually clear so no need to rename the issue.

@mratsim
Collaborator

mratsim commented Aug 2, 2022

Aren't bit manipulations like setBit, logical operations, and inc/dec more needed than costly arithmetic ops (essentially multiplication and division)?

No; if bit manipulation were needed, a BitSeq is very simple and easy to optimize.
Multiplication is the backbone of bigints.

I believe we should tune the library specifically for the needs of the compiler. (If you disagree, please consider explaining why rather than just 👎🏽-ing this post).

The compiler's needs are already covered by int128, which has been tested extensively over the past 3 years, but introduced many regressions at first.

Furthermore, the current nim-lang/bigints will be slower because of the limb size and also heap allocation.
Combine that with huge regression risks (for example mod/remainder behavior) and I only see downsides to having the compiler use it.

Re benchmarking:

@juancarlospaco
Collaborator

juancarlospaco commented Aug 2, 2022

needs are already covered by int128, which has been tested extensively over the past 3 years

But then int128 needs to be made public and documented properly. 🤔

@mratsim
Collaborator

mratsim commented Aug 3, 2022

No it doesn't, it's compiler only.

A public int128 should instead use the C compiler builtins like: https://github.com/rockcavera/nim-nint128/blob/f88047b/src/nint128/nint128_cint128.nim#L72-L77

@juancarlospaco
Collaborator

I understand; then maybe a public and documented int128 backed by the compiler is doable,
just documenting its limitations should be enough.

@dlesnoff
Contributor

dlesnoff commented Aug 3, 2022

I post here an old discussion (31 January 2022) from Discord concerning the desire to switch to bigints:
[screenshot of the Discord discussion]

@Araq
Member

Araq commented Aug 27, 2023

https://github.com/nim-lang/bigints/ is official.

@Araq Araq closed this as completed Aug 27, 2023