Adding confidence to SAM predictions from box prompts #4904
Conversation
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
fiftyone/utils/sam.py (1)

274-286: LGTM! Consider adding a comment for clarity.

The changes successfully implement the addition of confidence scores to the SAM predictions, aligning with the PR objectives. The `scores` variable is correctly captured from the `predict_torch` method and added to the output dictionary.

For improved clarity, consider adding a brief comment explaining the significance of the `scores` variable:

```diff
 masks, scores, _ = sam_predictor.predict_torch(
     point_coords=None,
     point_labels=None,
     boxes=transformed_boxes,
     multimask_output=False,
 )
+# scores represent the confidence of each prediction
 outputs.append(
     {
         "boxes": input_boxes,
         "labels": labels,
         "masks": masks,
         "scores": scores,
     }
 )
```
fiftyone/utils/sam2.py (1)

Line range hint 204-219: LGTM! Consider a minor readability improvement.

The changes successfully implement the addition of confidence scores to the SAM2 predictions, aligning with the PR objective. The scores are correctly handled and added to the output dictionary.

To improve code readability, consider extracting the device assignment to a separate line:

```diff
+device = sam2_predictor.device
 masks, scores, _ = sam2_predictor.predict(
     point_coords=None,
     point_labels=None,
     box=sam_boxes[None, :],
     multimask_output=False,
 )
 if masks.ndim == 3:
     masks = np.expand_dims(masks, axis=0)
 outputs.append(
     {
         "boxes": input_boxes,
         "labels": labels,
-        "masks": torch.tensor(masks, device=sam2_predictor.device),
-        "scores": torch.tensor(
-            scores, device=sam2_predictor.device
-        ),
+        "masks": torch.tensor(masks, device=device),
+        "scores": torch.tensor(scores, device=device),
     }
 )
```

This change reduces repetition and makes the code slightly more maintainable.
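The mask-shape normalization step in the SAM2 snippet above can be illustrated with a NumPy-only sketch (the `torch.tensor(..., device=device)` conversion is omitted so the example runs without torch; the shapes are illustrative assumptions):

```python
import numpy as np

# Assume the predictor returned a 3-D mask array of shape
# (num_masks, H, W); the diff above adds a leading axis so the
# layout matches the 4-D multimask case.
masks = np.zeros((1, 4, 4))
scores = np.array([0.87])  # one confidence score per predicted mask

if masks.ndim == 3:
    masks = np.expand_dims(masks, axis=0)

print(masks.shape)  # now 4-D, with a leading multimask axis
```

After this normalization, masks and scores can be converted to tensors on a single shared `device`, which is what the suggested refactor factors out.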
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (2)
- fiftyone/utils/sam.py (1 hunks)
- fiftyone/utils/sam2.py (2 hunks)
🧰 Additional context used
🔇 Additional comments (2)
fiftyone/utils/sam.py (2)

Line range hint 1-486: Overall assessment: Changes successfully implement confidence scores.

The modifications to `fiftyone/utils/sam.py` effectively add confidence scores to SAM predictions from box prompts, meeting the PR objectives. The implementation is consistent across different input types (`_forward_pass_boxes` and `_forward_pass_points`), and the existing `_forward_pass_auto` method already includes a similar score. These changes enhance the output of the model's prediction methods by providing valuable confidence information for each prediction.
Line range hint 1-486: Verify consistency of score implementation across methods.

The addition of `scores` to the output dictionary in `_forward_pass_boxes` is consistent with the existing implementation in `_forward_pass_points`. The `_forward_pass_auto` method already includes a score (`predicted_iou`) in its output, maintaining uniformity across different input types.

To ensure complete consistency, run the following script to check if all relevant methods include a score or confidence value in their outputs:

This script will help confirm that all forward pass methods consistently include a score or confidence value in their outputs.
Not a review comment, just a tangential observation - @prernadh any reason why we shouldn't also include the final return value (
No comments, fairly straightforward addition to expose prediction IOU scores.
Force-pushed from 00c8ffb to c6a2e33
I would like to avoid saving too many additional fields in the sample unless necessary. My understanding of the
What changes are proposed in this pull request?
Added confidence scores to SAM predictions that are generated from bounding box prompts.
How is this patch tested? If it is not, please explain why.
Tested manually by applying model on a dataset and confirming that confidence scores are saved on the predictions.
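The manual check described above can be sketched in plain Python. `Detection` here is a hypothetical minimal stand-in for FiftyOne's detection label, assumed only to expose a `confidence` attribute; the labels and scores are made-up example data:

```python
# Hypothetical sketch of the manual verification: after applying the
# model, every predicted detection should carry a confidence score.

class Detection:
    """Minimal stand-in for a detection label with a confidence field."""

    def __init__(self, label, confidence=None):
        self.label = label
        self.confidence = confidence

# Example predictions as they might look after this PR's change
predictions = [
    Detection("cat", confidence=0.91),
    Detection("dog", confidence=0.84),
]

missing = [d.label for d in predictions if d.confidence is None]
print("all detections have confidence:", not missing)
```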
Release Notes
Is this a user-facing change that should be mentioned in the release notes?

Yes. Give a description of this change to be included in the release notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What areas of FiftyOne does this PR affect?
fiftyone - Python library changes

Summary by CodeRabbit
New Features
Bug Fixes