
Exceed INT_MAX #113

Open
sdwfzczck opened this issue Feb 27, 2024 · 1 comment
Comments

@sdwfzczck

When I use the "vit_h" model to run inference on images, I get an error saying INT_MAX is exceeded whenever the image is too large, e.g. 1500×2000. When I reduce the image size, the error no longer occurs. How can I solve this issue?

Looking forward to the author's response, thank you!

[screenshot of the INT_MAX error traceback]
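(A back-of-the-envelope illustration, not a confirmed diagnosis: a single 1500×2000 mask is only 3M elements, well under INT_MAX, so the overflow plausibly comes from many predicted masks being batched into one flattened int32-indexed tensor.)

```python
# Illustrative arithmetic only: why a 1500x2000 image might overflow
# int32 indexing even though one mask alone is far below the limit.
INT_MAX = 2**31 - 1  # 2147483647

h, w = 1500, 2000
pixels_per_mask = h * w                      # 3,000,000 elements per mask
masks_before_overflow = INT_MAX // pixels_per_mask

print(pixels_per_mask)        # 3000000
print(masks_before_overflow)  # 715 -> flattening ~716+ masks together
                              #        exceeds int32 indexing range
```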

@cpuhrsch
Contributor

Hello @sdwfzczck - I think this is a limitation of segment-anything itself? The paper says they trained on up to 1024x1024.

Now, the error you report can potentially be worked around by turning off this optimization: update the code to use mask_to_rle_pytorch instead of mask_to_rle_pytorch_2. If that resolves the issue, I can add a flag to disable the optimization at runtime. But even if it does work, I'm surprised you got this far, since the model itself seems to be limited to 1024x1024.
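For context on why the two functions should be interchangeable: both produce the uncompressed RLE format used by pycocotools (column-major scan, with counts always starting with the run of zeros). A minimal pure-Python sketch of that encoding, written for illustration and not taken from the library code:

```python
import numpy as np

def mask_to_rle_sketch(mask: np.ndarray) -> dict:
    """Hypothetical reference encoder for SAM-style uncompressed RLE:
    scan the mask in column-major (Fortran) order and record run lengths,
    with the counts list always beginning with the run of 0s."""
    h, w = mask.shape
    flat = mask.flatten(order="F").astype(np.uint8)
    counts = []
    cur, run = 0, 0          # start from value 0 so counts[0] is the 0-run
    for v in flat:
        if v == cur:
            run += 1
        else:
            counts.append(run)
            cur, run = v, 1
    counts.append(run)
    return {"size": [h, w], "counts": counts}

mask = np.array([[0, 1],
                 [0, 1]])
print(mask_to_rle_sketch(mask))  # {'size': [2, 2], 'counts': [2, 2]}
```

Swapping the optimized encoder for the plain one should therefore change only speed (and, apparently, the intermediate tensor sizes that trigger the INT_MAX error), not the encoded output.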
