TFDSv4 carla_video_tracking upgrade. #1836
Conversation
… much about keeping track of module name, function name and how it registers within armory to global variable 'supported', a MetricNameSpace, and how it is passed as input to GlobalMeter/Meter creation in MetricsLogger
* update ci tests
* combine dockerfiles
* update image map
* remove carla from `build.py`
* fix issue 1787
* remove `tf2` container
* remove `deepspeech`
* update documentation
…)]] to represent each batch rather than returning Dict[ndarray: (batch, frames, coords)].
Reapplying review comments
armory/datasets/preprocessing.py
Outdated
```python
@register
def carla_video_tracking_dev(element, max_frames=None):
    return carla_video_tracking(
        element["video"], max_frames=max_frames, split="dev"
    ), carla_video_tracking_labels(
        element["video"],
        (element["bboxes"], element["patch_metadata"]),
        max_frames=max_frames,
    )


@register
def carla_video_tracking_test(element, max_frames=None):
    return carla_video_tracking(
        element["video"], max_frames=max_frames, split="test"
    ), carla_video_tracking_labels(
        element["video"],
        (element["bboxes"], element["patch_metadata"]),
        max_frames=max_frames,
    )
```
I am assuming `max_frames` gets passed in as a `dataset_kwargs`, but this is untested as I cannot find an example of it being used previously. Planning to circle back and test over the next couple of days.
It should be passed in the config in `preprocessor_kwargs`. See the configs in #1788 as an example.
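For reference, a hypothetical sketch of what that config fragment might look like; everything except `preprocessor_kwargs` and `max_frames` is illustrative, so check the configs in #1788 for the actual field layout:

```json
"dataset": {
    "name": "carla_video_tracking_dev",
    "preprocessor_kwargs": {
        "max_frames": 10
    }
}
```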
Addressed in 9716c58. Tested working with `max_frames` of 2 and 10. Throws an error due to a dimension mismatch when `max_frames=1`; the tensor must be getting squeezed somewhere within the underlying model, causing it to fail:

```
File "/opt/conda/lib/python3.9/site-packages/art/estimators/object_tracking/pytorch_goturn.py", line 360, in _preprocess
    img = img.permute(2, 0, 1)
    │     └ <method 'permute' of 'torch._C._TensorBase' objects>
    └ tensor([[322.6400, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
             [349.3067, 0.0000, 0.0000, ..., 0....
RuntimeError: number of dims don't match in permute
```
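The failure mode can be reproduced outside ART. If a singleton frame axis gets squeezed away somewhere upstream, a 2-D image array no longer supports a 3-axis permute. A minimal sketch using numpy in place of torch (the shapes here are illustrative, not taken from the dataset):

```python
import numpy as np

# Expected single-frame input: (height, width, channels)
img = np.zeros((600, 800, 3))
chw = np.transpose(img, (2, 0, 1))  # analogous to img.permute(2, 0, 1) in torch
assert chw.shape == (3, 600, 800)

# If max_frames=1 causes the frame axis to be squeezed away upstream,
# a 2-D array arrives and the 3-axis permute fails, mirroring torch's
# "number of dims don't match in permute" RuntimeError.
squeezed = np.zeros((600, 800))
try:
    np.transpose(squeezed, (2, 0, 1))
except ValueError as err:
    print("permute failed:", err)
```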
...rio_configs/eval5/carla_video_tracking/carla_video_tracking_goturn_advtextures_defended.json
armory/scenarios/scenario.py
Outdated
```diff
@@ -355,6 +355,11 @@ def evaluate_all(self):
     def next(self):
```
I think you need to be updating `next()` in the `CarlaVideoTracking` class, not the base `Scenario`. The former already overrides `next()` and separates the `y` from `y_patch_metadata` in the 2-tuple here.
Also, the conversion from `Dict` to `List[Dict]` should, I'd think, be as simple as something like:

```python
if isinstance(y, dict):
    y = [y]
```

I assume there should be a cleaner way than needing to reference specific keys in the base scenario class.
To your former comment: yes, I can implement it within `CarlaVideoTracking`, but I wasn't sure whether this should be a fix for other dataloaders as well; I will move it to this function.

Regarding the conversion from `Dict`: the problem is that the tensors within the dictionary have an extra leading dimension to represent batches, (batch, frames, boxes). I am iterating through that first dimension with the for loop and sending each "batch" to its own dictionary with two dimensions (frames, boxes) as each value in the resultant list. I am assuming I should still try to support multi-batch, correct?
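As a concrete sketch of what that loop does (numpy with made-up shapes; the `"boxes"` key and the sizes are illustrative, not the actual dataset schema):

```python
import numpy as np

# Hypothetical batched labels as produced by the preprocessor:
# each value carries a leading batch dimension, e.g. (batch, frames, coords).
y = {"boxes": np.arange(2 * 3 * 4).reshape(2, 3, 4)}

# Iterate over the batch axis, giving each batch its own dict whose
# values are (frames, coords) arrays.
batch_size = len(next(iter(y.values())))
y_list = [{key: value[i] for key, value in y.items()} for i in range(batch_size)]

assert len(y_list) == 2
assert y_list[0]["boxes"].shape == (3, 4)
```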
Inevitably, I think, we need a way to handle this extra batch dimension that the new tf preprocessing adds. Note that with the CARLA OD PR from a while earlier, I punted on this question for now. Wherever we end up adding the logic to handle it should be as specific to the issue as possible (i.e. I would rather not add to a base `scenario.py` that every Armory run touches). Ideally it would be added to the carla preprocessing code, but I'm not sure how to get that to work. If we can find a way to do it there, that'd be best. Offhand, the other places I can think of would be whichever inherited `Scenario` class is most appropriate, or the metrics code, though the latter changes our API as far as expected format.
Unfortunately there doesn't seem to be a way to implement this within the preprocessor code, as the data is not batched when it passes through there; I have moved this fixup to `CarlaVideoTracking`, see jprokos26@4837db0.

Additionally, to avoid referencing a specific key, it can be changed to:

```python
def next(self):
    super().next()
    self.y, self.y_patch_metadata = self.y
    # Fix for tfdsv4 upgrade - separate batches into list
    if isinstance(self.y, dict):
        assert (
            len(self.y.keys()) == 1
        ), f"Expected only one key in labels dict, got {self.y.keys()}"
        key = next(iter(self.y))
        self.y = [{key: batch} for batch in self.y[key]]
    self.probe.update(y=self.y, y_patch_metadata=self.y_patch_metadata)
```

Which do you think?
@lcadalzo Are you alright with the changes in 4a29dba? I felt it unnecessary to split the preprocessor up by data & label due to #1836 (comment).
I would still keep
This reverts commit 4a29dba.
Reverted in 7c2da86.
can we keep the old files as-is in
…ta field. Set default frame_rate for scenarios that use it as well.
LGTM
Throws an error when run without flags but completes successfully with `--skip-attack`. See #1814 for more information.
Both `dev` and `test` datasets uploaded to s3 and ready for review.

Note, duplicate of #1829 to properly apply rebase.