BrainPy is a flexible, efficient, and extensible framework for computational neuroscience and brain-inspired computation, based on Just-In-Time (JIT) compilation (built on top of JAX, Taichi, Numba, and others). It provides an integrative ecosystem for brain dynamics programming, covering model building, simulation, training, and analysis.
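As a quick taste of the programming interface, the sketch below builds and simulates a small population of leaky integrate-and-fire neurons. It follows the patterns shown in the BrainPy documentation; class names and arguments (e.g. `bp.neurons.LIF`, `bp.DSRunner`) may differ slightly between releases, so treat it as an illustrative example rather than version-exact code.

```python
import brainpy as bp

# A population of 100 leaky integrate-and-fire neurons (illustrative parameters).
neu = bp.neurons.LIF(100, V_rest=-60., V_th=-50., tau=20.)

# Run the model for 200 ms with a constant external input current,
# recording the membrane potential 'V'.
runner = bp.DSRunner(neu, monitors=['V'], inputs=('input', 6.))
runner.run(200.)

# Plot the recorded membrane potentials over time.
bp.visualize.line_plot(runner.mon.ts, runner.mon.V, show=True)
```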
- Website (documentation and APIs): https://brainpy.readthedocs.io/en/latest
- Source: https://github.com/brainpy/BrainPy
- Bug reports: https://github.com/brainpy/BrainPy/issues
- Source on OpenI: https://git.openi.org.cn/OpenI/BrainPy
BrainPy is based on Python (>=3.8) and can be installed on Linux (Ubuntu 16.04 or later), macOS (10.12 or later), and Windows platforms.
For detailed installation instructions, please refer to the documentation: Quickstart/Installation
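If you install BrainPy locally (e.g. from PyPI with `pip install brainpy`), a quick way to confirm that the package and its JAX backend work is to import it and list the available devices. The snippet below is only a sanity check, not part of the official installation guide.

```python
import brainpy as bp
import jax

print(bp.__version__)   # installed BrainPy version
print(jax.devices())    # devices visible to the JAX backend, e.g. CPU/GPU/TPU
```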
We provide a Docker image for BrainPy. You can use the following command to pull the image:
$ docker pull brainpy/brainpy:latest
Then, you can run the image with the following command:
$ docker run -it --platform linux/amd64 brainpy/brainpy:latest
We also provide a Binder environment for BrainPy, so you can try it in the browser without a local installation.
- BrainPy: The solution for general-purpose brain dynamics programming.
- brainpy-examples: Comprehensive examples of BrainPy computation.
- brainpy-datasets: Neuromorphic and Cognitive Datasets for Brain Dynamics Modeling.
- Neural Modeling in Action (《神经计算建模实战》)
- First Training Course on Neural Modeling and Programming (第一届神经计算建模与编程培训班)
- Second Training Course on Neural Modeling and Programming (第二届神经计算建模与编程培训班)
BrainPy is developed by a team at the Neural Information Processing Lab, Peking University, China. Our team is committed to the long-term maintenance and development of the project.
If you are using BrainPy, please consider citing the corresponding papers.
We highlight the key features and functionalities that are currently under active development.
We also welcome your contributions (see Contributing to BrainPy).
- model and data parallelization on multiple devices for dense connection models
- model parallelization on multiple devices for sparse spiking network models
- data parallelization on multiple devices for sparse spiking network models
- pipeline parallelization on multiple devices for sparse spiking network models
- multi-compartment modeling
- measurements, analysis, and visualization methods for large-scale spiking data
- online learning methods for large-scale spiking network models
- classical plasticity rules for large-scale spiking network models