Enable global coordinates in spatial crop transforms #8206
Hi @surajpaib, thanks for the proposal! The reason I ask is that while some of our cropping transforms do allow specifying … (see `MONAI/monai/transforms/croppad/array.py`, line 462 at commit 746a97a).
Thanks for your prompt response! The use-cases I'm thinking of are mainly when I want to crop based on objects such as a markup annotation from Slicer, which would be in physical coordinates by default. Even when using coordinates from objects like masks, I generally prefer using global coordinates so that I don't have to recalculate image coordinates if I later change the resolution/spacing of the image. I was wondering if this functionality might be of interest to a larger audience.
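(A minimal numeric sketch of that motivation, with made-up numbers and an identity orientation, just to show how the voxel index of the same physical point shifts with spacing:)

```python
import numpy as np

# Same physical point, two resamplings of the image (identity direction, zero origin).
point_mm = np.array([100.0, 50.0, 20.0])
for spacing in ([1.0, 1.0, 1.0], [2.0, 2.0, 2.0]):
    voxel_index = point_mm / np.array(spacing)  # would need recomputing after every respacing
    print(spacing, "->", voxel_index)
# [1.0, 1.0, 1.0] -> [100.  50.  20.]
# [2.0, 2.0, 2.0] -> [50. 25. 10.]
```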
Does …
I can definitely use a lot of the MONAI transforms with my mask object without any trouble. My use-case was specifically for when I have extracted physical coordinates from these mask objects and then want to use those points to crop out regions in the image - very similar to the Slicer scenario I mentioned.
A potential solution from today's meeting: a helper function to convert global coordinates to image coordinates.
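A minimal sketch of what such a helper could look like, assuming the image is a `MetaTensor` carrying its affine; the name `world_to_image_coords` and its exact home are hypothetical:

```python
import torch
from monai.data import MetaTensor


def world_to_image_coords(image: MetaTensor, points: torch.Tensor) -> torch.Tensor:
    """Map (N, 3) world-space points into the voxel space of `image` (hypothetical helper)."""
    affine = image.affine.to(torch.float64)  # 4x4 voxel -> world matrix
    world_to_voxel = torch.linalg.inv(affine)
    homogeneous = torch.cat(
        [points.to(torch.float64), torch.ones(points.shape[0], 1, dtype=torch.float64)], dim=1
    )
    return (world_to_voxel @ homogeneous.T).T[:, :3]
```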
@KumoLiu This sounds good to me. Where would this helper function go ideally? Somewhere like …? Happy to send a PR then.
Perhaps the helper function should be a member function of an image? The image has the necessary and sufficient info.
My thought process on this was to have a function wrapped as a dictionary transform, applied to a key containing the physical coordinates, which would transform those coordinates as needed. The ROI for subsequent transforms could then be defined in terms of that key name rather than literal values. This seems like a more modular way of doing things than adding more options to existing transforms, and it could take advantage of ApplyTransformToPoints as well. An alternative is a transform used as a wrapper around another that manipulates its argument values.
Ok - I see your point. I agree, it should be a dictionary transform. What I'm a bit uncertain about is how we specify which physical transform matrix is applied to the points. It would be best (imo) if the concept/scope of this dictionary transform were limited to transforming physical points into a specific image's coordinate frame (and not simply arbitrarily transformed). And we should have a parallel dictionary transform that can go from image coordinates back to physical coordinates. What do you see as the parameter list for those dictionary transforms?
Consider these transforms in a transform sequence with a dictionary containing ROI start and end coordinates under their own keys. We would use:

```python
[
    ...
    ApplyTransformToPointsd(keys="roi_start_key", refer_keys="image_to_crop"),
    ApplyTransformToPointsd(keys="roi_end_key", refer_keys="image_to_crop"),
    SpatialCropd(keys="image_to_crop", roi_start="roi_start_key", roi_end="roi_end_key"),
    ...
]
```
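For reference, the input dictionary for such a sequence would need to carry the points under those keys; `ApplyTransformToPointsd` expects point tensors shaped `(C, N, 2 or 3)`, so a single 3D corner would look like the sketch below (key names and values made up, `image` loaded elsewhere). Note that `SpatialCropd` reading its ROI bounds from key names is the proposed extension here, not current behavior:

```python
data = {
    "image_to_crop": image,                                 # MetaTensor with an affine
    "roi_start_key": torch.tensor([[[95.0, 45.0, 15.0]]]),  # world-space corner, shape (1, 1, 3)
    "roi_end_key": torch.tensor([[[105.0, 55.0, 25.0]]]),   # world-space corner, shape (1, 1, 3)
}
```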
Thanks for the clarification! Nice implementation. One suggestion... perhaps we need three functions: …
For the other two, they make it explicit what spaces we're moving the points between and in what directions. The question is the name for those two functions. Here are options: Option A (my favorite, but that's because it is similar to ITK): …; or Option B: …; or Option C: …; or ??? Aside from images, are there …
This would be something like:

```python
import torch
from monai.config import DtypeLike, KeysCollection
from monai.transforms import ApplyTransformToPointsd


class TransformPointsWorldToImaged(ApplyTransformToPointsd):
    """[Sensible docstring here]"""

    def __init__(
        self,
        keys: KeysCollection,
        refer_keys: KeysCollection,
        dtype: DtypeLike | torch.dtype = torch.float64,
        affine_lps_to_ras: bool = False,
        allow_missing_keys: bool = False,
    ):
        # affine=None, invert_affine=True: apply the inverse of the image affine (world -> image)
        super().__init__(keys, refer_keys, dtype, None, True, affine_lps_to_ras, allow_missing_keys)


class TransformPointsImageToWorldd(ApplyTransformToPointsd):
    """[Sensible docstring here]"""

    def __init__(
        self,
        keys: KeysCollection,
        refer_keys: KeysCollection,
        dtype: DtypeLike | torch.dtype = torch.float64,
        affine_lps_to_ras: bool = False,
        allow_missing_keys: bool = False,
    ):
        # affine=None, invert_affine=False: apply the image affine directly (image -> world)
        super().__init__(keys, refer_keys, dtype, None, False, affine_lps_to_ras, allow_missing_keys)
```

These vary by whether the transform applied is inverted or not. The …
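A usage sketch under the same assumptions (the subclass names are the proposal above; `image_to_crop` and the `roi_*_key` names are placeholders):

```python
from monai.transforms import Compose

to_image_space = Compose([
    TransformPointsWorldToImaged(keys="roi_start_key", refer_keys="image_to_crop"),
    TransformPointsWorldToImaged(keys="roi_end_key", refer_keys="image_to_crop"),
    # ... crop using the now image-space points, then optionally map them back:
    # TransformPointsImageToWorldd(keys="roi_start_key", refer_keys="image_to_crop"),
])
```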
Nice! Agreed! `TransformImageToWorldPointsd`
Hi all - sorry for being a bit checked out on this - was on a sabbatical. The suggestions look great and would be perfect for the use-case I'd envisioned. Also adding a suggestion, based on personal preference, for the naming: …
Is your feature request related to a problem? Please describe.
Currently, image coordinates are passed as `roi_center` in the spatial crop transforms. It would be nice to be able to pass physical coordinates instead, since all the information required to derive the image coordinates already exists in the metadata. This would allow reusing coordinates without worrying about things like spacing changes.
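(As a rough sketch of the request, assuming `image` is a `MetaTensor` with an affine, loaded elsewhere; only the conversion step is new, `SpatialCrop` itself is the existing transform:)

```python
import torch
from monai.transforms import SpatialCrop

center_world = torch.tensor([100.0, 50.0, 20.0, 1.0], dtype=torch.float64)  # homogeneous physical coordinate
center_voxel = (torch.linalg.inv(image.affine.to(torch.float64)) @ center_world)[:3]
cropped = SpatialCrop(roi_center=center_voxel.round().int().tolist(), roi_size=(50, 50, 50))(image)
```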
Describe the solution you'd like
I've got an implementation lying around in a custom package, but I'd like to be able to use just MONAI moving forward:
https://github.com/AIM-Harvard/foundation-cancer-image-biomarker/blob/e447a6f69f5240b658f1a6e874bdbd4f4f45df99/fmcib/preprocessing/seed_based_crop.py#L121