BlackMamba: Mixture of Experts for State-Space Models
Quentin Anthony*, Yury Tokpanov*, Paolo Glorioso*, Beren Millidge*
Paper: https://arxiv.org/abs/2402.01771
In this repository we provide inference code for our BlackMamba model.
BlackMamba is a novel architecture that combines state-space models (SSMs) with mixture-of-experts (MoE). It uses Mamba as its SSM block and a Switch Transformer-style MoE block. BlackMamba has extremely low latency for generation and inference, providing significant speedups over classical transformers, MoE models, and Mamba SSM models alike. Additionally, due to its SSM sequence mixer, BlackMamba retains linear computational complexity in the sequence length.
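As a rough illustration of the block structure described above, here is a minimal PyTorch sketch that alternates a Mamba sequence mixer with a switch-style top-1 MoE MLP. It assumes the mamba_ssm package provides the Mamba module; the SwitchMoE and BlackMambaBlock classes and their hyperparameters are illustrative, not the exact implementation used in this repository.

# Illustrative sketch only: BlackMamba-style blocks alternate a Mamba SSM
# sequence mixer with a switch-style top-1 MoE MLP. Expert and router details
# here are simplified assumptions, not this repository's implementation.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumed available via `pip install mamba-ssm`

class SwitchMoE(nn.Module):
    """Top-1 (switch-style) mixture-of-experts MLP."""
    def __init__(self, d_model: int, num_experts: int = 8, d_ff: int = 4096):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Route each token to its single highest-scoring expert.
        scores = self.router(x).softmax(dim=-1)   # (batch, seq, num_experts)
        weight, expert_idx = scores.max(dim=-1)   # top-1 routing
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out

class BlackMambaBlock(nn.Module):
    """One residual block: Mamba sequence mixer followed by an MoE MLP."""
    def __init__(self, d_model: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.mixer = Mamba(d_model=d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.moe = SwitchMoE(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mixer(self.norm1(x))  # linear-time sequence mixing
        x = x + self.moe(self.norm2(x))    # sparse channel mixing
        return x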
pip install causal-conv1d>=1.1.0
Required for Mamba. The rest of the kernels should be built locally.
Other requirements:
Linux, an NVIDIA GPU, PyTorch 1.12+, and CUDA 11.6+
To install from source from this repository:
pip install torch packaging
pip install .
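After installation, an optional sanity check (a suggestion, not part of the official setup) is to confirm that PyTorch sees a CUDA device, since the Mamba kernels require an NVIDIA GPU:

import torch
# BlackMamba's kernels run on GPU, so CUDA must be available.
print(torch.__version__)
print(torch.cuda.is_available())  # should print True on a correctly configured machine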
Our pretrained models are uploaded to our HuggingFace:
*Since our models are MoE, they are named according to (Forward Pass Parameters) / (Total Parameters) for clarity.
import torch
from mamba_model import MambaModel

# Load the pretrained 2.8B checkpoint and move it to GPU in half precision.
model = MambaModel.from_pretrained(pretrained_model_name="Zyphra/BlackMamba-2.8B")
model = model.cuda().half()

# Toy input: a batch containing a single sequence of two token ids.
inputs = torch.tensor([1, 2]).cuda().long().unsqueeze(0)
out = model(inputs)
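Building on the forward pass above, the following is a minimal greedy-decoding sketch. It assumes the model returns next-token logits of shape (batch, sequence length, vocabulary size); that is an assumption about the output format, so adapt it to the actual return type of MambaModel.

# Greedy decoding sketch: repeatedly append the argmax of the last position's logits.
# Assumes the model output is a logits tensor of shape (batch, seq_len, vocab_size).
generated = inputs
for _ in range(20):  # generate 20 new tokens
    logits = model(generated)
    next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
    generated = torch.cat([generated, next_token], dim=1)
print(generated)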