
ignoring point order in skeleton annotations when comparing annotations #57

Merged: 8 commits merged from dl/yolo8-again into develop on Aug 28, 2024

Conversation

@Eldies commented Aug 21, 2024

Summary

  • ignoring point order in skeleton annotations when comparing annotations (a sketch of the idea follows after this list)
  • do not create empty annotation files in yolo8
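
To make the first bullet concrete, here is a minimal sketch of the idea, assuming Datumaro-style skeleton annotations whose elements are Points; the helper name and the exact filtering condition are illustrative assumptions, not the PR's literal code:

from datumaro.components.annotation import Points  # import path may differ between Datumaro versions


def normalized_skeleton_elements(skeleton):
    """Return skeleton elements in a canonical order (hypothetical helper)."""
    # Ignore elements whose point is marked as absent: they carry no position information.
    present = [
        e for e in skeleton.elements
        if e.visibility and e.visibility[0] != Points.Visibility.absent
    ]
    # Sorting by label makes two skeletons with shuffled elements compare equal.
    return sorted(present, key=lambda e: e.label)

With such a normalization applied to both sides, comparing skeletons reduces to comparing the normalized element lists, regardless of the order in which the points were annotated.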

How to test

Checklist

License

  • I submit my code changes under the same MIT License that covers the project.
    Feel free to contact the maintainers if that's a concern.
  • I have updated the license header for each file (see an example below)
# Copyright (C) 2022 CVAT.ai Corporation
#
# SPDX-License-Identifier: MIT

Summary by CodeRabbit

  • New Features

    • Enhanced annotation comparison logic for skeleton annotations.
    • Introduced a method for saving YOLO annotations to improve code modularity.
  • Bug Fixes

    • Improved robustness in annotation export for empty YOLO annotations.
  • Tests

    • Added a test case specifically for comparing skeleton annotations.
    • Updated tests to ensure no annotation files are created when saving datasets with unsupported annotations.
  • Documentation

    • Improved type safety and clarity in function signatures across various components.

@Eldies mentioned this pull request on Aug 21, 2024

coderabbitai bot commented Aug 21, 2024

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

The recent changes enhance the robustness and clarity of annotation comparison and export functionalities across the Datumaro framework. Key improvements include refined type annotations for better code safety, improved handling of skeleton annotations, and modularized file operations for YOLO format. Additionally, comprehensive testing has been introduced to validate these enhancements, ensuring more reliable behavior in annotation processing.

Changes

  • datumaro/components/operations.py: Updated the _compare_annotations method to specify parameter types and added new logic for handling skeleton annotations, improving comparison robustness.
  • datumaro/plugins/yolo_format/converter.py: Introduced a _save_annotation_file method for better file handling in YOLO annotation exports and adjusted _export_item_annotation for improved control flow and robustness.
  • datumaro/util/test_utils.py: Enhanced the compare_annotations function with updated parameter types and improved logic for handling skeleton annotations, enhancing adaptability and clarity.
  • tests/test_diff.py: Added a new test for comparing skeleton annotations, improving the testing framework to cover more comprehensive scenarios around skeleton handling.
  • tests/unit/data_formats/test_yolo_format.py: Renamed a test for saving/loading datasets to clarify its focus; ensured that unsupported annotations do not create files, enhancing clarity and intent in testing.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Converter
    participant Annotation

    User->>Converter: Call export method with item
    Converter->>Annotation: Check if annotation is valid
    Annotation-->>Converter: Return validation status
    Converter->>Converter: Save annotation file if valid
    Note right of Converter: Uses _save_annotation_file method

sequenceDiagram
    participant User
    participant TestSuite
    participant AnnotationComparator

    User->>TestSuite: Run skeleton annotation comparison test
    TestSuite->>AnnotationComparator: Compare expected and actual annotations
    AnnotationComparator->>AnnotationComparator: Sort and filter elements
    AnnotationComparator-->>TestSuite: Return comparison result
    TestSuite-->>User: Assert results

Poem

In a world of code, so bright and fair,
Changes abound, with care and flair.
Annotations dance, in skeletons' grace,
Robust and clear, they find their place.
With tests to ensure, all flaws are caught,
Hopping through logic, in joy we ought! 🐇✨



@coderabbitai bot left a comment

Actionable comments posted: 0

Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 125840f and 89ee634.

Files selected for processing (5)
  • datumaro/components/operations.py (3 hunks)
  • datumaro/plugins/yolo_format/converter.py (3 hunks)
  • datumaro/util/test_utils.py (2 hunks)
  • tests/test_diff.py (2 hunks)
  • tests/unit/data_formats/test_yolo_format.py (2 hunks)
Additional comments not posted (8)
datumaro/util/test_utils.py (2)

118-118: Type annotations improve clarity.

The addition of type annotations for expected and actual parameters enhances code readability and type safety.


132-144: Verify sorting logic for skeleton annotations.

Ensure that the sorting of elements based on visibility and label is correct and aligns with the intended functionality.

Run the following script to verify the sorting logic in the codebase:

Verification successful

Sorting logic for skeleton annotations is correctly implemented.

The elements are filtered to exclude those with Points.Visibility.absent and sorted by their label, which aligns with the intended functionality. No issues found with the sorting logic.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the sorting logic for skeleton annotations.

# Test: Search for the sorting logic in the codebase. Expect: Correct implementation of sorting by visibility and label.
rg --type python -A 5 $'elements=sorted'

Length of output: 70


Script:

#!/bin/bash
# Description: Verify the sorting logic for skeleton annotations.

# Test: Search for the sorting logic in the codebase. Expect: Correct implementation of sorting by visibility and label.
rg --type py -A 5 $'elements=sorted'

Length of output: 780

datumaro/plugins/yolo_format/converter.py (2)

197-199: Encapsulation of file operations improves modularity.

The _save_annotation_file method centralizes the file writing operation, enhancing code reusability and maintainability.


294-296: Check for empty annotations enhances robustness.

The check for non-empty yolo_annotation before saving prevents unnecessary file operations and improves error handling.
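
Taken together, these two hunks suggest a structure roughly like the following sketch; the method names follow the review notes, while _make_yolo_annotation and the exact file layout are assumptions made for illustration:

import os.path as osp


class _YoloConverterSketch:
    def _save_annotation_file(self, annotation_path, yolo_annotation):
        # Single place that writes a label file, so every export path behaves the same way.
        with open(annotation_path, "w", encoding="utf-8") as f:
            f.write(yolo_annotation)

    def _export_item_annotation(self, item, subset_dir):
        yolo_annotation = self._make_yolo_annotation(item)  # hypothetical helper
        if not yolo_annotation:
            # Nothing YOLO can represent: skip writing so no empty file is created.
            return
        self._save_annotation_file(osp.join(subset_dir, "%s.txt" % item.id), yolo_annotation)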

tests/test_diff.py (1)

282-419: New test case for skeleton annotation comparison is well-structured.

The test_skeleton_annotation_comparison method effectively validates the comparison logic for skeleton annotations using ExactComparator.
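
As a rough illustration of the shape of such a test (the fixtures dataset_a and dataset_b are assumed to contain identical skeletons whose elements are listed in different orders; the return shape of compare_datasets follows current Datumaro and may differ):

from datumaro.components.operations import ExactComparator


def check_skeletons_match_regardless_of_element_order(dataset_a, dataset_b):
    comparator = ExactComparator()
    # compare_datasets reports matched/unmatched annotations, items missing on
    # either side, and per-item errors (assumed return shape).
    matched, unmatched, a_extra, b_extra, errors = comparator.compare_datasets(
        dataset_a, dataset_b
    )
    assert not unmatched and not a_extra and not b_extra and not errors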

tests/unit/data_formats/test_yolo_format.py (1)

471-489: Test correctly ensures no annotation files for unsupported types.

The renamed test method test_can_save_without_creating_annotation_file_and_load accurately validates that no annotation files are created for unsupported types like Mask.
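
A condensed, hypothetical version of the check (the real test also re-imports the exported dataset and compares it against the original, as the diff further down shows):

import os
import os.path as osp


def check_unsupported_annotations_produce_no_label_files(converter, source_dataset, test_dir):
    # source_dataset is assumed to contain only annotations YOLO cannot represent (e.g. Mask).
    converter.convert(source_dataset, test_dir, save_media=True)
    # No per-image label files should have been written.
    assert os.listdir(osp.join(test_dir, "labels", "train")) == []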

datumaro/components/operations.py (2)

1860-1860: Type annotations improve clarity.

The addition of type annotations for parameters a and b enhances code readability and type safety.


1870-1878: Verify filtering and sorting logic for skeleton annotations.

Ensure that the filtering and sorting of elements based on visibility and label are correctly implemented.

Run the following script to verify the filtering and sorting logic in the codebase:

Verification successful

Filtering and Sorting Logic Verified

The filtering and sorting logic for skeleton annotations is correctly implemented in the codebase. The elements are filtered based on visibility and sorted by label, as expected. This logic is found in datumaro/util/test_utils.py, indicating that it is likely tested. No issues were found.

Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the filtering and sorting logic for skeleton annotations.

# Test: Search for the filtering and sorting logic in the codebase. Expect: Correct implementation of filtering and sorting by visibility and label.
rg --type python -A 5 $'elements=sorted'

Length of output: 70


Script:

#!/bin/bash
# Description: Verify the filtering and sorting logic for skeleton annotations.

# Test: Search for the filtering and sorting logic in the codebase. Expect: Correct implementation of filtering and sorting by visibility and label.
rg --type py -A 5 $'elements=sorted'

Length of output: 780

@Eldies requested a review from SpecLad on August 21, 2024 at 12:22
@KTXKIKI commented Aug 23, 2024

cvat-ai/cvat#8340
There is now a PR to control this. Whether an empty label file should exist depends on whether the user trains with empty background images; training with empty backgrounds can help reduce false positives, so empty annotation files are sometimes needed as well.
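
For context on the trade-off raised here: in the YOLO layout, a background (negative) image is usually represented either by an empty label file or by omitting the label file entirely, depending on the tooling. A small illustration with made-up paths:

from pathlib import Path

labels_dir = Path("dataset/labels/train")  # hypothetical layout
labels_dir.mkdir(parents=True, exist_ok=True)

# An image with objects gets one "class x_center y_center width height" line per object.
(labels_dir / "street_001.txt").write_text("0 0.51 0.43 0.20 0.35\n")

# A pure-background image can be given an empty label file, which some trainers
# treat as an explicit negative sample; this PR stops writing such empty files,
# and the linked cvat-ai/cvat PR is about making that behaviour controllable.
(labels_dir / "background_002.txt").write_text("")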

Resolved review threads (outdated):
  • datumaro/components/operations.py
  • datumaro/util/test_utils.py
  • tests/test_diff.py

The review comment below refers to this excerpt from tests/unit/data_formats/test_yolo_format.py:
self.CONVERTER.convert(source_dataset, test_dir, save_media=True)

assert os.listdir(osp.join(test_dir, "labels", "train")) == []
parsed_dataset = Dataset.import_from(test_dir, self.IMPORTER.NAME)
self.compare_datasets(source_dataset, parsed_dataset)
Dataset.import_from(test_dir, self.IMPORTER.NAME)
It would be useful to check that the imported dataset has the expected value (i.e. the same as the original dataset, just without the annotation).

@Eldies (author) replied:
done


sonarcloud bot commented Aug 28, 2024

@SpecLad merged commit 393cb66 into develop on Aug 28, 2024
19 checks passed
@SpecLad deleted the dl/yolo8-again branch on August 28, 2024 at 11:09