nk-diffusion is a project dedicated to transferring amazing diffusion model-based projects from PyTorch to Jittor, harnessing Jittor's high performance and unique advantages.
At the core of Jittor is its JIT compiler, which translates Python operators into efficient CUDA kernels at runtime, automatically optimizing computation for speed and memory efficiency based on input shapes and types.
By leveraging these features, nk-diffusion not only enhances performance but also provides flexibility and ease of use. Furthermore, these projects serve as exemplars for future high-quality Jittor projects, showcasing the framework's potential for research, education, and production environments.
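As a minimal, hypothetical sketch (not part of the installation steps below), this is roughly what the basic Jittor workflow looks like; the shapes and ops are arbitrary placeholders, and kernels are generated and cached the first time they run:

```python
import jittor as jt

# Use CUDA when available; ops are then JIT-compiled into CUDA kernels,
# otherwise Jittor falls back to compiled CPU kernels.
jt.flags.use_cuda = jt.has_cuda

# Arbitrary example tensors; the fused kernel for this expression is
# generated on first execution and reused afterwards.
x = jt.randn(8, 3, 64, 64)
y = jt.nn.relu(x * 2.0 + 1.0).sum()
print(y.item())
```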
Our work is based on JDiffusion:
```bash
git clone https://github.com/JittorRepos/JDiffusion.git
# We recommend using conda to configure the Python environment.
conda create -n jdiffusion python=3.9
conda activate jdiffusion
```
Our code is based on JTorch, a high-performance, dynamically compiled deep learning framework fully compatible with the PyTorch interface. Please install our versions of the following libraries:
```bash
pip install git+https://github.com/JittorRepos/jittor
pip install git+https://github.com/JittorRepos/jtorch
pip install git+https://github.com/JittorRepos/diffusers_jittor
pip install git+https://github.com/JittorRepos/transformers_jittor
```
or just:

```bash
pip install -r requirement.txt
```
Then install JDiffusion itself:

```bash
cd JDiffusion
pip install -e .
```
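As a quick sanity check after installation, you can import the stack and run a trivial op (a hedged sketch; the printed versions will differ on your machine, and it assumes the forks above install under the upstream package names `diffusers` and `transformers`, which they are designed to do as drop-in replacements):

```python
import jittor as jt
import diffusers
import transformers

print("jittor:", jt.__version__)
print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)

jt.flags.use_cuda = jt.has_cuda          # use the GPU when CUDA is available
print(jt.array([1.0, 2.0, 3.0]).sum())   # forces a small JIT compile + execution
```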
We also provide a Docker image (md5: 62c305028dae6e62d3dff885d5bc9294) with our environment preconfigured.
If you encounter `No module named 'cupy'`:
```bash
# Install CuPy from source
pip install cupy
# Install CuPy for CUDA 11.2 (recommended; change to match the CUDA version you use)
pip install cupy-cuda112
```
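To confirm that CuPy sees the expected CUDA runtime, a small check like the following can help (a sketch only; the reported number depends on your local CUDA install, e.g. 11020 for CUDA 11.2):

```python
import cupy

# Prints the CUDA runtime version CuPy is linked against.
print(cupy.cuda.runtime.runtimeGetVersion())
```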
Follow the README in the example folder to install the application you want.
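For a rough idea of what such an application looks like, the sketch below uses the standard diffusers text-to-image API that diffusers_jittor mirrors; the model ID, prompt, and output filename are placeholders, and the actual pipelines and entry points differ per example (see each example's README):

```python
import jittor as jt
from diffusers import StableDiffusionPipeline

jt.flags.use_cuda = 1  # assumes a CUDA-capable GPU

# Placeholder checkpoint; follow the example's README for the exact model to use.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut_rides_horse.png")
```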
Projects currently supported in their Jittor versions:
- PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation