Official PyTorch implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone.
Ali Hatamizadeh and Jan Kautz.
For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing
MambaVision demonstrates strong performance, achieving a new SOTA Pareto front in terms of Top-1 accuracy and throughput.
We introduce a novel mixer block that creates a symmetric path without SSM to enhance the modeling of global context (a simplified sketch is shown below). MambaVision has a hierarchical architecture that employs both self-attention and mixer blocks.
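To make the mixer design concrete, here is a minimal, illustrative PyTorch sketch of the two-branch idea described above: both branches project to half the channel dimension and apply a depthwise Conv1d with SiLU, but only one branch applies the SSM. The `SelectiveScan` module below is a placeholder stand-in, not the real Mamba selective scan, and the layer names and kernel sizes are assumptions; refer to the released code for the actual implementation.

```python
# Illustrative sketch of the MambaVision mixer: two symmetric branches,
# only one of which applies the SSM. NOT the official implementation.
import torch
import torch.nn as nn

class SelectiveScan(nn.Module):
    """Placeholder for the Mamba selective-scan (SSM) operator."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # stand-in; the real op is a state-space scan

    def forward(self, x):
        return self.proj(x)

class MambaVisionMixerSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        half = dim // 2
        self.in_proj_ssm = nn.Linear(dim, half)   # SSM branch, half channels
        self.in_proj_sym = nn.Linear(dim, half)   # symmetric branch, half channels
        self.conv_ssm = nn.Conv1d(half, half, kernel_size=3, padding=1, groups=half)
        self.conv_sym = nn.Conv1d(half, half, kernel_size=3, padding=1, groups=half)
        self.act = nn.SiLU()
        self.ssm = SelectiveScan(half)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (B, L, C) token sequence
        def branch(proj, conv, tokens):
            t = proj(tokens).transpose(1, 2)       # (B, C/2, L) for Conv1d
            return self.act(conv(t)).transpose(1, 2)
        x_ssm = self.ssm(branch(self.in_proj_ssm, self.conv_ssm, x))  # SSM path
        x_sym = branch(self.in_proj_sym, self.conv_sym, x)            # symmetric path, no SSM
        return self.out_proj(torch.cat([x_ssm, x_sym], dim=-1))       # concat -> project
```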
- [07.14.2024] We added support for processing images of any resolution.
- [07.12.2024] The paper is now available on arXiv!
- [07.11.2024] The MambaVision pip package is released!
- [07.10.2024] We have released the code and model checkpoints for MambaVision!
We can import pre-trained MambaVision models with one line of code. First, install the pip package:

```bash
pip install mambavision
```
A pretrained MambaVision model with default hyper-parameters can be created as follows:

```python
>>> from mambavision import create_model

# Define mamba_vision_T model

>>> model = create_model('mamba_vision_T', pretrained=True, model_path="/tmp/mambavision_tiny_1k.pth.tar")
```
The available pretrained models are `mamba_vision_T`, `mamba_vision_T2`, `mamba_vision_S`, `mamba_vision_B`, `mamba_vision_L` and `mamba_vision_L2`.
We can also simply test the model by passing a dummy image with any resolution; the output is the logits:

```python
>>> import torch

>>> image = torch.rand(1, 3, 512, 224).cuda() # place image on cuda
>>> model = model.cuda() # place model on cuda

>>> output = model(image) # output logit size is [1, 1000]
```
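Since the output is raw logits over the 1,000 ImageNet classes, probabilities and top-5 predictions can be obtained in the usual way:

```python
>>> probabilities = torch.softmax(output, dim=-1)  # shape [1, 1000]
>>> top5_prob, top5_idx = probabilities.topk(5, dim=-1)
>>> print(top5_idx)   # ImageNet-1K class indices
>>> print(top5_prob)  # corresponding probabilities
```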
Using the pretrained models from our pip package, you can simply run validation:

```bash
python validate_pip_model.py --model mamba_vision_T --data_dir=$DATA_PATH --batch-size $BS
```
- Does MambaVision support processing images with any input resolution?

Yes! You can pass images with any arbitrary resolution without needing to change the model.
- Can I apply MambaVision to downstream tasks like detection and segmentation?

Yes! We are working to release support for these tasks very soon. In the meantime, employing MambaVision backbones for such tasks is very similar to using other models in the mmseg or mmdet packages; see the sketch below.
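As a rough illustration, the following shows how a MambaVision backbone might feed a dense-prediction head. Note that `forward_features` is an assumed, timm-style method name and may not match the actual API of the released models; treat this as a sketch, not a definitive recipe.

```python
# Hypothetical sketch: using a MambaVision backbone for dense prediction.
# `forward_features` is an assumed (timm-style) method name; the actual hook
# for extracting multi-scale features may differ in the released code.
import torch
from mambavision import create_model

backbone = create_model('mamba_vision_T', pretrained=True,
                        model_path="/tmp/mambavision_tiny_1k.pth.tar").cuda().eval()
image = torch.rand(1, 3, 512, 512).cuda()
with torch.no_grad():
    feats = backbone.forward_features(image)  # assumed API: backbone features
# A detection/segmentation head (e.g. from mmdet/mmseg) would consume `feats`.
```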
- I am interested in re-implementing MambaVision in my own repository. Can I use the pretrained weights?

Yes! The pretrained weights are released under CC-BY-NC-SA-4.0. Please submit an issue in this repo and we will add your repository to the README of our codebase and properly acknowledge your efforts.
MambaVision ImageNet-1K Pretrained Models
| Name | Acc@1 (%) | Acc@5 (%) | Throughput (img/sec) | Resolution | #Params (M) | FLOPs (G) | Download |
|---|---|---|---|---|---|---|---|
| MambaVision-T | 82.3 | 96.2 | 6298 | 224x224 | 31.8 | 4.4 | model |
| MambaVision-T2 | 82.7 | 96.3 | 5990 | 224x224 | 35.1 | 5.1 | model |
| MambaVision-S | 83.3 | 96.5 | 4700 | 224x224 | 50.1 | 7.5 | model |
| MambaVision-B | 84.2 | 96.9 | 3670 | 224x224 | 97.7 | 15.0 | model |
| MambaVision-L | 85.0 | 97.1 | 2190 | 224x224 | 227.9 | 34.9 | model |
| MambaVision-L2 | 85.3 | 97.2 | 1021 | 224x224 | 241.5 | 37.5 | model |
We provide a Dockerfile. In addition, assuming that a recent PyTorch package is installed, the dependencies can be installed by running:

```bash
pip install -r requirements.txt
```
The MambaVision models can be evaluated on the ImageNet-1K validation set using the following command:

```bash
python validate.py \
  --model <model-name> \
  --checkpoint <checkpoint-path> \
  --data_dir <imagenet-path> \
  --batch-size <batch-size-per-gpu>
```
Here `--model` is the MambaVision variant (e.g. `mambavision_tiny_1k`), `--checkpoint` is the path to the pretrained model weights, `--data_dir` is the path to the ImageNet-1K validation set, and `--batch-size` is the per-GPU batch size. We also provide a sample script here.
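For reference, the following is a minimal sketch of what such a validation run computes, assuming the standard torchvision `ImageFolder` layout for the ImageNet-1K validation set; the path, batch size, and transforms are illustrative and not the exact ones used by `validate.py`.

```python
# Minimal ImageNet-1K validation sketch (illustrative; validate.py has more options).
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from mambavision import create_model

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
val_set = datasets.ImageFolder('/path/to/imagenet/val', transform=transform)  # placeholder path
loader = DataLoader(val_set, batch_size=128, num_workers=8, pin_memory=True)

model = create_model('mamba_vision_T', pretrained=True,
                     model_path="/tmp/mambavision_tiny_1k.pth.tar").cuda().eval()

correct = total = 0
with torch.no_grad():
    for images, targets in loader:
        logits = model(images.cuda())
        correct += (logits.argmax(dim=-1).cpu() == targets).sum().item()
        total += targets.size(0)
print(f"Top-1 accuracy: {100.0 * correct / total:.2f}%")
```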
If you find MambaVision to be useful for your work, please consider citing our paper:
```bibtex
@article{hatamizadeh2024mambavision,
  title={MambaVision: A Hybrid Mamba-Transformer Vision Backbone},
  author={Hatamizadeh, Ali and Kautz, Jan},
  journal={arXiv preprint arXiv:2407.08083},
  year={2024}
}
```
Copyright © 2024, NVIDIA Corporation. All rights reserved.
This work is made available under the NVIDIA Source Code License-NC. Click here to view a copy of this license.
The pre-trained models are shared under CC-BY-NC-SA-4.0. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original.
For license information regarding the timm repository, please refer to its repository.
For license information regarding the ImageNet dataset, please see the ImageNet official website.
This repository is built on top of the timm repository. We thank Ross Wightman for creating and maintaining this high-quality library.