
Large discrepancy between ptflops 0.7.0 and 0.7.1 #143

Open
postrational opened this issue Sep 3, 2024 · 2 comments
Labels
question Further information is requested

Comments

@postrational

I'm having a problem with ptflops 0.7.1 and higher.

I managed to isolate the problem to this sample code:

import torch.nn as nn

class Model(nn.Module):

    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 20, 5)
        self.conv2 = nn.Conv1d(20, 20, 5)

    def forward(self, x):
        y = nn.functional.relu(self.conv1(x))
        y = nn.functional.relu(self.conv2(y))
        return y

from ptflops import get_model_complexity_info

model = Model()
input_shape = (48000,)
get_model_complexity_info(model, input_shape, as_strings=True, print_per_layer_stat=True)

In ptflops 0.7.0 I get 40.52 KMac:

Model(
  2.14 k, 100.000% Params, 40.52 KMac, 100.000% MACs,
  (conv1): Conv1d(120, 5.607% Params, 120.0 Mac, 0.296% MACs, 1, 20, kernel_size=(5,), stride=(1,))
  (conv2): Conv1d(2.02 k, 94.393% Params, 40.4 KMac, 99.704% MACs, 20, 20, kernel_size=(5,), stride=(1,))
)
Out: ('40.52 KMac', '2.14 k')

In version 0.7.1 and higher I get a value of 1.96 MMac:

Model(
  2.14 k, 100.000% Params, 40.52 KMac, 2.067% MACs,
  (conv1): Conv1d(120, 5.607% Params, 120.0 Mac, 0.006% MACs, 1, 20, kernel_size=(5,), stride=(1,))
  (conv2): Conv1d(2.02 k, 94.393% Params, 40.4 KMac, 2.061% MACs, 20, 20, kernel_size=(5,), stride=(1,))
)
Out[2]: ('1.96 MMac', '2.14 k')

Where does the large discrepancy come from, and which value is more reliable?

@sovrasov sovrasov added the question Further information is requested label Sep 26, 2024
@sovrasov (Owner) commented Sep 26, 2024

Hi @postrational,
actually, neither of them ;) When the batch dimension is omitted in the 1d case, the ptflops pytorch engine can get a bit wild. I'd suggest running the following instead:

input_shape = (1, 48000,)
get_model_complexity_info(model, input_shape, as_strings=True, print_per_layer_stat=True, backend="pytorch")

Then you'll get

Model(
  2.14 k, 100.000% Params, 102.7 MMac, 100.000% MACs, 
  (conv1): Conv1d(120, 5.607% Params, 5.76 MMac, 5.608% MACs, 1, 20, kernel_size=(5,), stride=(1,))
  (conv2): Conv1d(2.02 k, 94.393% Params, 96.94 MMac, 94.392% MACs, 20, 20, kernel_size=(5,), stride=(1,))
)
('104.62 MMac', '2.14 k')

The difference between the versions is that newer ptflops counts nn.functional.relu as well. If relu is omitted from your sample, the result is 102.7 MMac. You can also pass backend="aten" to double-check (only available in newer ptflops), or always use the aten backend; it does not count relu either.
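As a sanity check on the 102.7 MMac figure, the Conv1d MACs can be derived by hand (this is a back-of-the-envelope calculation, not ptflops code): for stride 1 and no padding, each layer costs out_channels × in_channels × kernel_size × L_out multiply-accumulates, plus out_channels × L_out for the bias.

```python
def conv1d_macs(in_ch, out_ch, kernel, l_in, bias=True):
    """MACs for nn.Conv1d with stride 1, no padding, no dilation."""
    l_out = l_in - kernel + 1
    macs = out_ch * in_ch * kernel * l_out
    if bias:
        macs += out_ch * l_out  # one extra accumulate per output element
    return macs, l_out

m1, l1 = conv1d_macs(1, 20, 5, 48000)  # conv1: 5,759,520 ~ 5.76 MMac
m2, _ = conv1d_macs(20, 20, 5, l1)     # conv2: 96,943,840 ~ 96.94 MMac
print(m1 + m2)                         # 102,703,360 ~ 102.7 MMac
```

The per-layer numbers match the ptflops output above, which confirms the batched run is the reliable one.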

@sovrasov (Owner) commented

[image: table of accepted conv1d input shapes]
For 1d conv, the input must use one of these layouts. ptflops automatically inserts the batch dimension into the input it is given, and that was the source of the confusion: on a (48000,) input, plain PyTorch would raise the following error:

RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [48000]
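The layout rule can be sketched as a plain shape check (a simplified stand-in for PyTorch's actual validation, not its real code): conv1d accepts an unbatched (C, L) or a batched (N, C, L) input, and nothing else.

```python
def check_conv1d_input(shape, in_channels):
    """Mimic conv1d's input validation: accept (C, L) or (N, C, L)."""
    if len(shape) not in (2, 3):
        raise RuntimeError(
            f"Expected 2D (unbatched) or 3D (batched) input to conv1d, "
            f"but got input of size: {list(shape)}")
    if shape[-2] != in_channels:
        raise RuntimeError(
            f"expected {in_channels} input channels, got {shape[-2]}")

check_conv1d_input((1, 48000), 1)     # unbatched (C, L): accepted
check_conv1d_input((1, 1, 48000), 1)  # batched (N, C, L): accepted
try:
    check_conv1d_input((48000,), 1)   # 1-D input: neither layout
except RuntimeError as e:
    print(e)
```

This is why passing input_shape=(1, 48000) to get_model_complexity_info resolves the issue: ptflops prepends the batch dimension itself, yielding the valid (1, 1, 48000) layout.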
