[Meta] Generics/Static early symbol resolution #8677

Open
12 of 16 tasks
mratsim opened this issue Aug 17, 2018 · 5 comments

mratsim (Collaborator) commented Aug 17, 2018

I am starting to lose track of all bug reports linked to generics and static early symbol resolution.

Context

Quoting @Araq

The pre-instantiation lookup is required to implement the proper "open vs closed symbol lookup" rules.

http://nim-lang.org/docs/manual.html#generics-symbol-lookup-in-generics

It's a tough issue, maybe we need to weaken the prepass if a 'this' parameter is used.
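
For readers who don't want to chase the manual link, here is a minimal sketch of the open vs closed distinction (illustrative only, not taken from any of the linked reports; the procs foo and bar are made up). With mixin, bar stays an open symbol and is re-resolved against the instantiation scope, so an overload declared after the generic is still picked up:

proc bar(x: int): string = "int"

proc foo[T](x: T): string =
  mixin bar   # keep `bar` open: candidates from the instantiation scope are considered
  bar(x)

proc bar(x: float): string = "float"   # declared after `foo`

echo foo(1)     # "int"
echo foo(1.5)   # "float", found only because `bar` is an open symbol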

Impact

Solution

The most advanced discussion is in #8603
Quoting @zah

Er, it's a bug, and a rather simple one. The generic pre-pass in semgnrc.nim has to examine the when condition expressions the same way it examines all other expressions.

The fix I imagine is that you should just call semGenericStmt on the when condition expression (n.sons[0])

I'll have to look at some specific examples. Inside the compiler, we keep track of which generic symbols are in "unresolved" state and we need to add guards in the code detecting such symbols in many places. I suspect something similar is going on here as well.
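
To make the shape of the problem concrete, here is a hedged, minimal illustration of why the prepass has to look inside when conditions (this is not the actual repro from #8603; the SmallOrHeap type is made up). The condition refers to static parameters that are still unresolved when the generic body is first examined, so it can only be evaluated at instantiation time:

type
  SmallOrHeap[M, N: static int; T] = object
    when M * N <= 16:
      data: array[M * N, T]       # small payloads stay on the stack
    else:
      data: ref array[M * N, T]   # larger ones go behind a ref

var small: SmallOrHeap[2, 2, float]   # instantiates the array branch
var big: SmallOrHeap[8, 8, float]     # instantiates the ref branch
echo small.data.len, " ", big.data.isNil   # prints: 4 true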

mratsim (Collaborator, Author) commented May 9, 2019

Relevant discussion and planning to tackle the issue: https://irclogs.nim-lang.org/09-05-2019.html#11:12:59

mratsim (Collaborator, Author) commented May 13, 2020

Another instance of the bug affecting nimpy:

You cannot use the natural numpy syntax (or nimpy at all until we have a workaround) in generic procs due to early symbol resolution:

import ../src/arraymancer
import nimpy
import nimpy/raw_buffers
import sequtils

proc `+`[T](p: ptr T, val: int): ptr T {.inline.} =
  cast[ptr T](cast[uint](p) + cast[uint](val * sizeof(T)))

proc pyBufToTensor[T](ndArray: PyObject): Tensor[T]=
  # Convert PyObject to RawPyBuffer
  var aBuf: RawPyBuffer
  ndArray.getBuffer(aBuf, PyBUF_WRITABLE or PyBUF_ND)
  aBuf.release()

  # Copy buffer into Tensor
  var shape: seq[int]
  for i in 0..<aBuf.ndim:
    shape.add((aBuf.shape+i)[])

  # Get memory address of buffer
  var bufPtr = cast[ptr UncheckedArray[T]](aBuf.buf)
  # Get underlying data buffer of Tensor
  result = newTensor[T](shape)
  var tensorDataPtr = cast[ptr UncheckedArray[T]](result.get_data_ptr)
  # Copy Buffer into Tensor
  var length = shape.foldl(a*b)
  copyMem(tensorDataPtr, bufPtr, length*sizeof(T))

let np = pyImport("numpy")

proc nimfftshift[T](t: Tensor[T]): Tensor[T]=
  ## Reshape PyObject to Arraymancer Tensor
  #var ndArray = wrappy.w_fftshift(t.toSeq, t.shape.toSeq)
  var shape = np.`array`(t.shape.toSeq)
  var ndArray = np.`array`(t.toSeq)
  ndArray = np.reshape(ndArray, shape)
  ndArray = np.fft.fftshift(ndArray)

  # Convert RawPyBuffer to Tensor
  result = pyBufToTensor[T](ndArray)

proc main() =
  let a = randomTensor([3, 3], 3).asType(float32)
  echo a
  let b = nimfftshift(a)
  echo b

main()

This raises a semcheck error:

proc reshape(t: Tensor; new_shape: MetadataArray): Tensor
  first type mismatch at position: 1
  required type for t: Tensor
  but expression 'np' is of type: PyObject
proc reshape(t: Tensor; new_shape: varargs[int]): Tensor
  first type mismatch at position: 1
  required type for t: Tensor
  but expression 'np' is of type: PyObject
proc reshape[TT](a: Variable[TT]; shape: MetadataArray): Variable[TT]
  first type mismatch at position: 1
  required type for a: Variable[reshape.TT]
  but expression 'np' is of type: PyObject
proc reshape[TT](a: Variable[TT]; shape: varargs[int]): Variable[TT]
  first type mismatch at position: 1
  required type for a: Variable[reshape.TT]
  but expression 'np' is of type: PyObject

expression: reshape(np, ndArray, shape)

while this non-generic version compiles:

import ../src/arraymancer
import nimpy
import nimpy/raw_buffers
import sequtils

proc `+`[T](p: ptr T, val: int): ptr T {.inline.} =
  cast[ptr T](cast[uint](p) + cast[uint](val * sizeof(T)))

proc pyBufToTensor[T](ndArray: PyObject): Tensor[T]=
  # Convert PyObject to RawPyBuffer
  var aBuf: RawPyBuffer
  ndArray.getBuffer(aBuf, PyBUF_WRITABLE or PyBUF_ND)
  aBuf.release()

  # Copy buffer into Tensor
  var shape: seq[int]
  for i in 0..<aBuf.ndim:
    shape.add((aBuf.shape+i)[])

  # Get memory address of buffer
  var bufPtr = cast[ptr UncheckedArray[T]](aBuf.buf)
  # Get underlying data buffer of Tensor
  result = newTensor[T](shape)
  var tensorDataPtr = cast[ptr UncheckedArray[T]](result.get_data_ptr)
  # Copy Buffer into Tensor
  var length = shape.foldl(a*b)
  copyMem(tensorDataPtr, bufPtr, length*sizeof(T))

let np = pyImport("numpy")

proc nimfftshift(t: Tensor[float32]): Tensor[float32]=
  ## Reshape PyObject to Arraymancer Tensor
  #var ndArray = wrappy.w_fftshift(t.toSeq, t.shape.toSeq)
  var shape = np.`array`(t.shape.toSeq)
  var ndArray = np.`array`(t.toSeq)
  ndArray = np.reshape(ndArray, shape)
  ndArray = np.fft.fftshift(ndArray)

  # Convert RawPyBuffer to Tensor
  result = pyBufToTensor[float32](ndArray)

proc main() =
  let a = randomTensor([3, 3], 3).asType(float32)
  echo a
  let b = nimfftshift(a)
  echo b

main()
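
The float32 version only sidesteps the problem because the whole proc is non-generic. A hedged sketch of the general workaround pattern (illustrative only; fftshiftPy and process are made-up names, not part of nimpy or Arraymancer): keep the dynamic Python calls in a non-generic helper so the generic prepass never sees the dot calls, and let the generic proc delegate to it:

import nimpy
import sequtils

let np = pyImport("numpy")

proc fftshiftPy(data: seq[float]): PyObject =
  ## Non-generic helper: nimpy's dot operators resolve normally here.
  np.fft.fftshift(np.`array`(data))

proc process[T](data: seq[T]): PyObject =
  # Generic caller: it only sees an ordinary proc, so no early binding occurs.
  fftshiftPy(data.mapIt(it.float))

discard process(@[3, 1, 2])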

Araq (Member) commented May 13, 2020

Well, this is not all "the same bug"; there are different bugs, some of which are already closed. This is not actionable. It's not "early symbol resolution" either: symbol resolution has two steps, and the instantiation scope is also considered for mixin symbols. And frankly, the fact that nimpy uses experimental dot operators makes it all even less actionable.
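
For context, a hedged sketch of the experimental dot-operator mechanism being referred to (PyObjectLike and its .() template are illustrative stand-ins, not nimpy's actual implementation): a call that does not resolve to an existing routine falls back to the .() operator, and that fallback is exactly what early binding in the generic prepass can defeat:

{.experimental: "dotOperators".}

type PyObjectLike = object
  name: string

template `.()`(o: PyObjectLike, field: untyped, args: varargs[untyped]): untyped =
  # Fallback for `o.field(args)` when no proc named `field` matches.
  echo "dynamic call: ", astToStr(field), " on ", o.name

let np = PyObjectLike(name: "numpy")
np.reshape(1, 2)   # resolved through the .() fallback, prints "dynamic call: reshape on numpy"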

Araq (Member) commented May 13, 2020

I might as well call this issue "Different DSLs don't compose well" or "untyped parameters must die". We need a decent problem analysis instead of conflating everything under the "Generics" label. And yes, different DSLs tend not to compose well; it's a tough open research problem.

mratsim added a commit to mratsim/constantine that referenced this issue Jul 12, 2024
* feat(bench): PoC of integration with zkalc

* feat(bench): zkalc prepare for adding pairing benches - generic type resolution issue

* feat(bench): add pairing and G2 bench for zkalc and nimble build script

* fix nimble dependencies

* feat(bench-zkalc): polish and output json bench file

* fix(nimble): version gate task-level dependencies

* fix(deps): std/os instead of commandline for 1.6.x and gmp@#head

* fix(deps): .nimble and nimble have different versioning format

* fix(zkalc): workaround generic sandwich bug in Nim 1.6.x nim-lang/Nim#8677 nim-lang/Nim#11225

* fix(zkalc CI): skip zkalc build on 32-bit CI as it needs clang multilib

* fix(zkalc CI): skip Windows as Clang throws fatal error LNK1107