Welcome to the THOP repository, your comprehensive solution for profiling PyTorch models by computing the number of Multiply-Accumulate Operations (MACs) and parameters. This tool is essential for deep learning practitioners to evaluate model efficiency and performance.
THOP offers an intuitive API to profile PyTorch models by calculating the number of MACs and parameters. This functionality is crucial for assessing the computational efficiency and memory footprint of deep learning models.
You can install THOP via pip:
```bash
pip install ultralytics-thop
```
Alternatively, install the latest version directly from GitHub:
```bash
pip install --upgrade git+https://github.com/ultralytics/thop.git
```
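To verify the installation, a quick check from Python is usually enough; this sketch assumes the package exports a `__version__` attribute (adjust if your release does not):

```python
import thop

# Print the installed THOP version (assumes __version__ is exported by the package).
print(thop.__version__)
```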
To profile a model, you can use the following example:
```python
import torch
from torchvision.models import resnet50
from thop import profile

model = resnet50()
input = torch.randn(1, 3, 224, 224)  # dummy input matching the model's expected shape
macs, params = profile(model, inputs=(input,))
```
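`profile` returns raw counts. A common reporting convention (not something THOP enforces) is to quote FLOPs as roughly twice the MAC count, since each multiply-accumulate is one multiply plus one add; the conversion below is plain arithmetic, continuing from the snippet above:

```python
# Continuing from the profiling example above.
# MACs -> approximate FLOPs using the common 2x convention (one multiply + one add per MAC).
flops = 2 * macs
print(f"MACs: {macs / 1e9:.3f} G, approx FLOPs: {flops / 1e9:.3f} G, params: {params / 1e6:.3f} M")
```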
You can define custom rules for unsupported modules:
```python
import torch
import torch.nn as nn
from thop import profile

class YourModule(nn.Module):
    # your definition
    pass

def count_your_model(model, x, y):
    # your rule here: model is the module instance, x its inputs, y its output
    pass

model = YourModule()  # illustrative; use any model that contains YourModule
input = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(input,), custom_ops={YourModule: count_your_model})
```
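For concreteness, here is a hedged sketch of what a counting rule might look like. The `ScaledLinear` module and its cost model are purely illustrative (not part of THOP), and accumulating into the module's `total_ops` buffer is assumed to match THOP's hook convention:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from thop import profile

# Hypothetical module THOP has no built-in rule for (purely illustrative).
class ScaledLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return 2.0 * F.linear(x, self.weight, self.bias)

def count_scaled_linear(module, x, y):
    # x is the tuple of forward() inputs, y is the output tensor.
    in_features = module.weight.shape[1]
    macs = y.numel() * in_features  # one MAC per weight element per output value
    # Accumulate into the module's total_ops buffer (assumed THOP hook convention).
    module.total_ops += torch.DoubleTensor([int(macs)])

model = nn.Sequential(nn.Flatten(), ScaledLinear(3 * 224 * 224, 8))
input = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(input,), custom_ops={ScaledLinear: count_scaled_linear})
```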
Use `thop.clever_format` for more readable output:
```python
from thop import clever_format

macs, params = clever_format([macs, params], "%.3f")
```
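After formatting, `macs` and `params` are strings with unit suffixes (ending in "M", "G", and so on) rather than numbers, so they can be printed or logged directly:

```python
# macs and params are now human-readable strings, e.g. "25.557M" for ResNet-50's parameter count.
print(f"MACs: {macs}, parameters: {params}")
```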
Parameter counts and MACs for popular models can be reproduced using the script `benchmark/evaluate_famous_models.py`.
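The general approach can be approximated with a short loop over torchvision models; this is a hedged sketch, not the actual contents of `benchmark/evaluate_famous_models.py`:

```python
import torch
import torchvision.models as models
from thop import profile, clever_format

# Profile a few torchvision classifiers with a standard 224x224 input.
for name in ["resnet18", "resnet50", "mobilenet_v2"]:
    model = getattr(models, name)()
    input = torch.randn(1, 3, 224, 224)
    macs, params = profile(model, inputs=(input,), verbose=False)
    macs, params = clever_format([macs, params], "%.3f")
    print(f"{name}: {params} params, {macs} MACs")
```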
We welcome community contributions to enhance THOP. Please check our Contributing Guide for more details. Your feedback and suggestions are highly appreciated!
THOP is licensed under the AGPL-3.0 License. For more information, see the LICENSE file.
For bugs or feature requests, please open an issue on GitHub Issues. Join our community on Discord for discussions and support.