
brainpylib #702

Closed
Emily3027244007 opened this issue Dec 3, 2024 · 8 comments · Fixed by #704
Labels
bug Something isn't working

Comments

@Emily3027244007

pip install brainpylib
ERROR: Could not find a version that satisfies the requirement brainpylib (from versions: none)
ERROR: No matching distribution found for brainpylib

@Emily3027244007 added the bug label on Dec 3, 2024
@Routhleck
Collaborator

Could you provide your Python version?

@alexfanqi
Copy link

I'm having the same issue. My Python version is 3.12.

@Routhleck
Collaborator

brainpylib only supports Python versions 3.8 through 3.11, so to install brainpylib you would need to downgrade your Python.
However, I recommend simply installing the latest version of BrainPy (which supports Python 3.9 through 3.12), as we have removed the brainpylib dependency.
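A quick way to decide which path applies, given the version ranges stated above — a minimal sketch (the helper name `recommended_install` is invented for illustration, not part of BrainPy):

```python
import sys

# Supported ranges quoted from the comment above:
#   brainpylib:     Python 3.8 - 3.11
#   latest BrainPy: Python 3.9 - 3.12 (no brainpylib dependency)
def recommended_install(version_info=sys.version_info):
    """Suggest an install command based on the interpreter version."""
    major, minor = version_info[0], version_info[1]
    if not (3, 8) <= (major, minor) <= (3, 12):
        return "unsupported"
    if (3, 9) <= (major, minor):
        return "pip install -U brainpy"   # preferred: brainpylib no longer needed
    return "pip install brainpylib"       # only Python 3.8 still requires this

print(recommended_install())
```

For a Python 3.12 interpreter, as in this report, this suggests upgrading BrainPy rather than downgrading Python.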

@alexfanqi

Thanks for the quick reply! Version 2.6.0 seems to still depend on brainpylib, so I switched to the master branch, but now I'm having issues importing brainpy.

import brainpy as bp
import brainpy.math as bm
import numpy as np

bp.math.set_platform('cpu')

shows

{
	"name": "AttributeError",
	"message": "'NoneType' object has no attribute 'XLACustomOp'",
	"stack": "---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[2], line 1
----> 1 import brainpy as bp
      2 import brainpy.math as bm
      3 import numpy as np

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/__init__.py:77
     72 from brainpy._src.delay import (
     73   VarDelay as VarDelay,
     74 )
     76 # building blocks
---> 77 from brainpy import (
     78   dnn, layers,  # module for dnn layers
     79   dyn,  # module for modeling dynamics
     80 )
     81 NeuGroup = NeuGroupNS = dyn.NeuDyn
     83 # common tools

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/dnn/__init__.py:5
      3 from .conv import *
      4 from .interoperation import *
----> 5 from .linear import *
      6 from .normalization import *
      7 from .others import *

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/dnn/linear.py:2
----> 2 from brainpy._src.dnn.linear import (
      3   Dense as Dense,
      4   Linear as Linear,
      5   Identity as Identity,
      6   AllToAll as AllToAll,
      7   OneToOne as OneToOne,
      8   MaskedLinear as MaskedLinear,
      9   CSRLinear as CSRLinear,
     10   EventCSRLinear as EventCSRLinear,
     11   JitFPHomoLinear as JitFPHomoLinear,
     12   JitFPUniformLinear as JitFPUniformLinear,
     13   JitFPNormalLinear as JitFPNormalLinear,
     14   EventJitFPHomoLinear as EventJitFPHomoLinear,
     15   EventJitFPNormalLinear as EventJitFPNormalLinear,
     16   EventJitFPUniformLinear as EventJitFPUniformLinear,
     17 )

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dnn/linear.py:279
    275         else:
    276             out_w[i, j] = old_w[i, j]
--> 279 dense_on_post_prim = bti.XLACustomOp(cpu_kernel=_dense_on_post, gpu_kernel=_dense_on_post)
    282 # @numba.njit(nogil=True, fastmath=True, parallel=False)
    283 # def _cpu_dense_on_pre(weight, spike, trace, w_min, w_max, out_w):
    284 #   out_w[:] = weight
    285 #   for i in numba.prange(spike.shape[0]):
    286 #     if spike[i]:
    287 #       out_w[i] = np.clip(out_w[i] + trace, w_min, w_max)
    289 @ti.kernel
    290 def _dense_on_pre(
    291     old_w: ti.types.ndarray(ndim=2),
   (...)
    296     out_w: ti.types.ndarray(ndim=2)
    297 ):

AttributeError: 'NoneType' object has no attribute 'XLACustomOp'"
}

@Routhleck
Collaborator

Thank you for your report!
The command pip install braintaichi should fix this. We will improve our error handling in the future.
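To confirm the install took effect before re-running a model, one might check that the package is visible to the import system — a minimal standard-library sketch (the helper name `is_importable` is invented):

```python
import importlib.util

def is_importable(pkg_name: str) -> bool:
    """Return True if the import system can find the named package."""
    return importlib.util.find_spec(pkg_name) is not None

# e.g. after running `pip install braintaichi`:
for name in ("braintaichi", "brainpy"):
    print(name, "found" if is_importable(name) else "missing")
```

Note this only confirms the package is discoverable; whether its compiled CPU kernels built correctly is only exercised at runtime, as the next comment shows.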

@alexfanqi

I tried both the pip version and the master branch of braintaichi. Running the run_fun() for_loop from ei_nets/Tian_2020_EI_net_for_fast_response.ipynb in the brainpy examples repo then fails with the following error.

{
	"name": "RuntimeError",
	"message": "The CPU kernels do not build correctly. Please check the installation of braintaichi.",
	"stack": "---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/jax/_src/interpreters/mlir.py:2072, in _lower_jaxpr_to_fun_cached(ctx, fn_name, call_jaxpr, effects, name_stack, arg_names, result_names)
   2071 try:
-> 2072   func_op = ctx.cached_primitive_lowerings[key]
   2073 except KeyError:

KeyError: (None, let atleast_1d = { lambda ; a:f32[]. let
    b:f32[1] = broadcast_in_dim[broadcast_dimensions=() shape=(1,)] a
  in (b,) } in
{ lambda ; c:i32[16000000] d:i32[8001] e:i32[4000000] f:i32[8001] g:i32[4000000]
    h:i32[2001] i:i32[1000000] j:i32[2001] k:f32[8000] l:f32[2000] m:bool[8000] n:f32[2000]
    o:f32[8000] p:f32[2000] q:f32[8000] r:bool[2000]. let
    s:f32[] = sqrt 10000.0
    t:f32[] = mul 3.0 s
    u:f32[] = mul t 0.1
    v:f32[] = sqrt 10000.0
    w:f32[] = mul 2.0 v
    x:f32[] = mul w 0.1
    y:f32[1] = pjit[name=atleast_1d jaxpr=atleast_1d] 0.25
    z:f32[8000] = convert_element_type[new_dtype=float32 weak_type=False] m
    ba:f32[8000] = braintaichi_custom_op_2[
      outs=(ShapedArray(float32[8000]),)
      shape=(8000, 8000)
      transpose=True
    ] y c d z
    bb:f32[8000] = add q ba
    bc:f32[1] = pjit[name=atleast_1d jaxpr=atleast_1d] 0.4
    bd:f32[8000] = convert_element_type[new_dtype=float32 weak_type=False] m
    be:f32[2000] = braintaichi_custom_op_2[
      outs=(ShapedArray(float32[2000]),)
      shape=(8000, 2000)
      transpose=True
    ] bc e f bd
    bf:f32[2000] = add p be
    bg:f32[1] = pjit[name=atleast_1d jaxpr=atleast_1d] -1.0
    bh:f32[2000] = convert_element_type[new_dtype=float32 weak_type=False] r
    bi:f32[8000] = braintaichi_custom_op_2[
      outs=(ShapedArray(float32[8000]),)
      shape=(2000, 8000)
      transpose=True
    ] bg g h bh
    bj:f32[8000] = add k bi
    bk:f32[1] = pjit[name=atleast_1d jaxpr=atleast_1d] -1.0
    bl:f32[2000] = convert_element_type[new_dtype=float32 weak_type=False] r
    bm:f32[2000] = braintaichi_custom_op_2[
      outs=(ShapedArray(float32[2000]),)
      shape=(2000, 2000)
      transpose=True
    ] bk i j bl
    bn:f32[2000] = add l bm
    bo:f32[8000] = neg bb
    bp:f32[8000] = div bo 6.0
    bq:f32[8000] = broadcast_in_dim[broadcast_dimensions=() shape=(8000,)] 1.0
    br:f32[8000] = div bq 6.0
    bs:f32[8000] = neg br
    bt:f32[8000] = mul 0.10000000149011612 bs
    bu:f32[8000] = abs bt
    bv:bool[8000] = le bu 9.999999747378752e-06
    bw:f32[8000] = div bt 2.0
    bx:f32[8000] = add 1.0 bw
    by:f32[8000] = mul bt bt
    bz:f32[8000] = div by 6.0
    ca:f32[8000] = add bx bz
    cb:f32[8000] = exp bt
    cc:f32[8000] = sub cb 1.0
    cd:f32[8000] = div cc bt
    ce:f32[8000] = select_n bv cd ca
    cf:f32[8000] = mul 0.10000000149011612 ce
    cg:f32[8000] = mul cf bp
    ch:f32[8000] = add bb cg
    ci:f32[8000] = neg bj
    cj:f32[8000] = div ci 5.0
    ck:f32[8000] = broadcast_in_dim[broadcast_dimensions=() shape=(8000,)] 1.0
    cl:f32[8000] = div ck 5.0
    cm:f32[8000] = neg cl
    cn:f32[8000] = mul 0.10000000149011612 cm
    co:f32[8000] = abs cn
    cp:bool[8000] = le co 9.999999747378752e-06
    cq:f32[8000] = div cn 2.0
    cr:f32[8000] = add 1.0 cq
    cs:f32[8000] = mul cn cn
    ct:f32[8000] = div cs 6.0
    cu:f32[8000] = add cr ct
    cv:f32[8000] = exp cn
    cw:f32[8000] = sub cv 1.0
    cx:f32[8000] = div cw cn
    cy:f32[8000] = select_n cp cx cu
    cz:f32[8000] = mul 0.10000000149011612 cy
    da:f32[8000] = mul cz cj
    db:f32[8000] = add bj da
    dc:f32[] = convert_element_type[new_dtype=float32 weak_type=False] u
    dd:f32[8000] = add dc ch
    de:f32[8000] = add dd db
    df:f32[8000] = neg o
    dg:f32[8000] = add df de
    dh:f32[8000] = div dg 15.0
    di:f32[8000] = mul dh 0.10000000149011612
    dj:f32[8000] = mul di 1.0
    dk:f32[8000] = add o dj
    dl:bool[8000] = ge dk 15.0
    dm:f32[8000] = pjit[
      name=_where
      jaxpr={ lambda ; dn:bool[8000] do:f32[] dp:f32[8000]. let
          dq:f32[] = convert_element_type[new_dtype=float32 weak_type=False] do
          dr:f32[8000] = broadcast_in_dim[broadcast_dimensions=() shape=(8000,)] dq
          ds:f32[8000] = select_n dn dp dr
        in (ds,) }
    ] dl 0.0 dk
    dt:f32[2000] = neg bf
    du:f32[2000] = div dt 6.0
    dv:f32[2000] = broadcast_in_dim[broadcast_dimensions=() shape=(2000,)] 1.0
    dw:f32[2000] = div dv 6.0
    dx:f32[2000] = neg dw
    dy:f32[2000] = mul 0.10000000149011612 dx
    dz:f32[2000] = abs dy
    ea:bool[2000] = le dz 9.999999747378752e-06
    eb:f32[2000] = div dy 2.0
    ec:f32[2000] = add 1.0 eb
    ed:f32[2000] = mul dy dy
    ee:f32[2000] = div ed 6.0
    ef:f32[2000] = add ec ee
    eg:f32[2000] = exp dy
    eh:f32[2000] = sub eg 1.0
    ei:f32[2000] = div eh dy
    ej:f32[2000] = select_n ea ei ef
    ek:f32[2000] = mul 0.10000000149011612 ej
    el:f32[2000] = mul ek du
    em:f32[2000] = add bf el
    en:f32[2000] = neg bn
    eo:f32[2000] = div en 5.0
    ep:f32[2000] = broadcast_in_dim[broadcast_dimensions=() shape=(2000,)] 1.0
    eq:f32[2000] = div ep 5.0
    er:f32[2000] = neg eq
    es:f32[2000] = mul 0.10000000149011612 er
    et:f32[2000] = abs es
    eu:bool[2000] = le et 9.999999747378752e-06
    ev:f32[2000] = div es 2.0
    ew:f32[2000] = add 1.0 ev
    ex:f32[2000] = mul es es
    ey:f32[2000] = div ex 6.0
    ez:f32[2000] = add ew ey
    fa:f32[2000] = exp es
    fb:f32[2000] = sub fa 1.0
    fc:f32[2000] = div fb es
    fd:f32[2000] = select_n eu fc ez
    fe:f32[2000] = mul 0.10000000149011612 fd
    ff:f32[2000] = mul fe eo
    fg:f32[2000] = add bn ff
    fh:f32[] = convert_element_type[new_dtype=float32 weak_type=False] x
    fi:f32[2000] = add fh em
    fj:f32[2000] = add fi fg
    fk:f32[2000] = neg n
    fl:f32[2000] = add fk fj
    fm:f32[2000] = div fl 10.0
    fn:f32[2000] = mul fm 0.10000000149011612
    fo:f32[2000] = mul fn 1.0
    fp:f32[2000] = add n fo
    fq:bool[2000] = ge fp 15.0
    fr:f32[2000] = pjit[
      name=_where
      jaxpr={ lambda ; fs:bool[2000] ft:f32[] fu:f32[2000]. let
          fv:f32[] = convert_element_type[new_dtype=float32 weak_type=False] ft
          fw:f32[2000] = broadcast_in_dim[broadcast_dimensions=() shape=(2000,)] fv
          fx:f32[2000] = select_n fs fu fw
        in (fx,) }
    ] fq 0.0 fp
    debug_callback[
      callback=<function debug_callback.<locals>._flat_callback at 0x7f4b3b12f7e0>
      effect=Debug
    ] 
  in (db, fg, dl, fr, dm, em, ch, fq, dl, fq) }, ())

During handling of the above exception, another exception occurred:

JaxStackTraceBeforeTransformation         Traceback (most recent call last)
File <frozen runpy>:198, in _run_module_as_main()

File <frozen runpy>:88, in _run_code()

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel_launcher.py:18
     16 from ipykernel import kernelapp as app
---> 18 app.launch_new_instance()

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/traitlets/config/application.py:1075, in launch_instance()
   1074 app.initialize(argv)
-> 1075 app.start()

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/kernelapp.py:739, in start()
    738 try:
--> 739     self.io_loop.start()
    740 except KeyboardInterrupt:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/tornado/platform/asyncio.py:205, in start()
    204 def start(self) -> None:
--> 205     self.asyncio_loop.run_forever()

File ~/micromamba/envs/ml-py312/lib/python3.12/asyncio/base_events.py:641, in run_forever()
    640 while True:
--> 641     self._run_once()
    642     if self._stopping:

File ~/micromamba/envs/ml-py312/lib/python3.12/asyncio/base_events.py:1986, in _run_once()
   1985     else:
-> 1986         handle._run()
   1987 handle = None

File ~/micromamba/envs/ml-py312/lib/python3.12/asyncio/events.py:88, in _run()
     87 try:
---> 88     self._context.run(self._callback, *self._args)
     89 except (SystemExit, KeyboardInterrupt):

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/kernelbase.py:545, in dispatch_queue()
    544 try:
--> 545     await self.process_one()
    546 except Exception:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/kernelbase.py:534, in process_one()
    533         return
--> 534 await dispatch(*args)

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/kernelbase.py:437, in dispatch_shell()
    436     if inspect.isawaitable(result):
--> 437         await result
    438 except Exception:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/ipkernel.py:362, in execute_request()
    361 self._associate_new_top_level_threads_with(parent_header)
--> 362 await super().execute_request(stream, ident, parent)

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/kernelbase.py:778, in execute_request()
    777 if inspect.isawaitable(reply_content):
--> 778     reply_content = await reply_content
    780 # Flush output before sending the reply.

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/ipkernel.py:449, in do_execute()
    448 if accepts_params[\"cell_id\"]:
--> 449     res = shell.run_cell(
    450         code,
    451         store_history=store_history,
    452         silent=silent,
    453         cell_id=cell_id,
    454     )
    455 else:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/ipykernel/zmqshell.py:549, in run_cell()
    548 self._last_traceback = None
--> 549 return super().run_cell(*args, **kwargs)

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/IPython/core/interactiveshell.py:3075, in run_cell()
   3074 try:
-> 3075     result = self._run_cell(
   3076         raw_cell, store_history, silent, shell_futures, cell_id
   3077     )
   3078 finally:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/IPython/core/interactiveshell.py:3130, in _run_cell()
   3129 try:
-> 3130     result = runner(coro)
   3131 except BaseException as e:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/IPython/core/async_helpers.py:128, in _pseudo_sync_runner()
    127 try:
--> 128     coro.send(None)
    129 except StopIteration as exc:

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/IPython/core/interactiveshell.py:3334, in run_cell_async()
   3331 interactivity = \"none\" if silent else self.ast_node_interactivity
-> 3334 has_raised = await self.run_ast_nodes(code_ast.body, cell_name,
   3335        interactivity=interactivity, compiler=compiler, result=result)
   3337 self.last_execution_succeeded = not has_raised

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/IPython/core/interactiveshell.py:3517, in run_ast_nodes()
   3516     asy = compare(code)
-> 3517 if await self.run_code(code, result, async_=asy):
   3518     return True

File ~/micromamba/envs/ml-py312/lib/python3.12/site-packages/IPython/core/interactiveshell.py:3577, in run_code()
   3576     else:
-> 3577         exec(code_obj, self.user_global_ns, self.user_ns)
   3578 finally:
   3579     # Reset our crash handler in place

Cell In[18], line 7
      6 indices = np.arange(1000)  # 100. ms
----> 7 e_sps, i_sps = bm.for_loop(run_fun, indices, progress_bar=True)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/math/object_transform/controls.py:891, in for_loop()
    890 if jit:
--> 891   dyn_vals, out_vals = transform(operands)
    892 else:

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/math/object_transform/controls.py:736, in call()
    735 def call(operands):
--> 736   return jax.lax.scan(f=fun2scan,
    737                       init=dyn_vars.dict_data(),
    738                       xs=operands,
    739                       reverse=reverse,
    740                       unroll=unroll)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/math/object_transform/controls.py:727, in fun2scan()
    726   dyn_vars[k]._value = carry[k]
--> 727 results = body_fun(*x, **unroll_kwargs)
    728 if progress_bar:

Cell In[18], line 4, in run_fun()
      3 i_inp = f_I * bm.sqrt(num) * mu_f
----> 4 return net.step_run(i, e_inp, i_inp)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:213, in step_run()
    212 share.save(i=i, t=i * bm.dt)
--> 213 out = self.update(*args, **kwargs)
    214 clear_input(self)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:370, in _compatible_update()
    369     return ret
--> 370 return update_fun(*args, **kwargs)

Cell In[16], line 17, in update()
     16 def update(self, e_inp, i_inp):
---> 17   self.E2E()
     18   self.E2I()

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:421, in __call__()
    420 # update the model self
--> 421 ret = self.update(*args, **kwargs)
    423 # ``after_updates``

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:370, in _compatible_update()
    369     return ret
--> 370 return update_fun(*args, **kwargs)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:605, in update()
    604   for node in nodes:
--> 605     node.update(*args, **kwargs)
    606 else:

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:370, in _compatible_update()
    369     return ret
--> 370 return update_fun(*args, **kwargs)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dyn/projections/align_post.py:273, in update()
    272 x = self.refs['pre'].get_aft_update(delay_identifier).at(self.name)
--> 273 current = self.comm(x)
    274 self.refs['syn'].add_current(current)  # synapse post current

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:421, in __call__()
    420 # update the model self
--> 421 ret = self.update(*args, **kwargs)
    423 # ``after_updates``

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dynsys.py:370, in _compatible_update()
    369     return ret
--> 370 return update_fun(*args, **kwargs)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/dnn/linear.py:666, in update()
    665 if x.ndim == 1:
--> 666     return bm.sparse.csrmv(self.weight, self.indices, self.indptr, x,
    667                            shape=(self.conn.pre_num, self.conn.post_num), transpose=self.transpose)
    668 elif x.ndim > 1:

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/math/sparse/csr_mv.py:67, in csrmv()
     65   raise_braintaichi_not_found()
---> 67 return bti.csrmv(data, indices, indptr, vector, shape=shape, transpose=transpose)

File /mnt/zpool-febdash/src/novolume/braintaichi/braintaichi/_sparseop/main.py:207, in csrmv()
    205     return jnp.zeros(shape[1] if transpose else shape[0], dtype=data.dtype)
--> 207 return raw_csrmv_taichi(data, indices, indptr, vector, shape=shape, transpose=transpose)[0]

File /mnt/zpool-febdash/src/novolume/braintaichi/braintaichi/_sparseop/csrmv.py:57, in raw_csrmv_taichi()
     55         prim = _csr_matvec_homo_p
---> 57 return prim(data,
     58             indices,
     59             indptr,
     60             vector,
     61             outs=[jax.ShapeDtypeStruct((out_shape,), dtype=data.dtype)],
     62             transpose=transpose,
     63             shape=shape)

File /mnt/zpool-febdash/src/novolume/braintaichi/braintaichi/_primitive/_xla_custom_op.py:116, in __call__()
    115 ins = jax.tree.map(jax.numpy.asarray, ins)
--> 116 return self.primitive.bind(*ins, outs=outs, **kwargs)

JaxStackTraceBeforeTransformation: RuntimeError: The CPU kernels do not build correctly. Please check the installation of braintaichi.

The preceding stack trace is the source of the JAX operation that, once transformed by JAX, triggered the following exception.

--------------------

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
Cell In[18], line 7
      4   return net.step_run(i, e_inp, i_inp)
      6 indices = np.arange(1000)  # 100. ms
----> 7 e_sps, i_sps = bm.for_loop(run_fun, indices, progress_bar=True)

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/math/object_transform/controls.py:891, in for_loop(body_fun, operands, reverse, unroll, remat, jit, progress_bar, unroll_kwargs, dyn_vars, child_objs)
    887 transform = _get_for_loop_transform(body_fun, stack, bar,
    888                                     progress_bar, remat, reverse,
    889                                     unroll, unroll_kwargs)
    890 if jit:
--> 891   dyn_vals, out_vals = transform(operands)
    892 else:
    893   with jax.disable_jit():

File /mnt/zpool-febdash/src/novolume/BrainPy/brainpy/_src/math/object_transform/controls.py:736, in _get_for_loop_transform.<locals>.call(operands)
    735 def call(operands):
--> 736   return jax.lax.scan(f=fun2scan,
    737                       init=dyn_vars.dict_data(),
    738                       xs=operands,
    739                       reverse=reverse,
    740                       unroll=unroll)

    [... skipping hidden 38 frame]

File /mnt/zpool-febdash/src/novolume/braintaichi/braintaichi/_primitive/_mlir_translation_rule.py:441, in _taichi_mlir_cpu_translation_rule(kernel, c, *ins, **kwargs)
    439 def _taichi_mlir_cpu_translation_rule(kernel, c, *ins, **kwargs):
    440     if cpu_ops is None:
--> 441         raise RuntimeError(
    442             'The CPU kernels do not build correctly. '
    443             'Please check the installation of braintaichi.'
    444         )
    446     in_out_info = _compile_kernel(c.avals_in, kernel, 'cpu', **kwargs)
    447     ins = [mlir.ir_constant(v) for v in in_out_info] + list(ins)

RuntimeError: The CPU kernels do not build correctly. Please check the installation of braintaichi."
}

@alexfanqi

This error is fixed after applying your new patch from PR #704.

@Routhleck
Collaborator

@alexfanqi Thank you for your feedback! We will merge the corresponding PR as soon as possible, and then we will close this issue.
