forked from facebookresearch/maskrcnn-benchmark
update with main stream #1
Merged
Conversation
* make pixel indexes 0-based for bounding boxes in the Pascal VOC dataset
* replace all instances of torch.distributed.deprecated with torch.distributed
* add GroupNorm
* add GroupNorm -- sort out yaml files
* use torch.nn.GroupNorm instead, replace 'use_gn' with 'conv_block', and use 'BaseStem' & 'Bottleneck' to simplify the code
* modify the 'group_norm' and 'conv_with_kaiming_uniform' functions
* modify the yaml files in configs/gn_baselines/ and reduce the amount of indentation and code duplication
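The GroupNorm commits above replace a `use_gn` flag with a `conv_block` callable built around `torch.nn.GroupNorm`. Below is a minimal sketch of what such helpers can look like; the names `group_norm` and `conv_with_kaiming_uniform` come from the commit messages, but the bodies are an illustrative reconstruction, not the repository's exact code.

```python
import torch
from torch import nn


def group_norm(out_channels, num_groups=32):
    # GroupNorm requires num_groups to divide out_channels evenly.
    return nn.GroupNorm(num_groups, out_channels)


def conv_with_kaiming_uniform(use_gn=False):
    def make_conv(in_channels, out_channels, kernel_size, stride=1, dilation=1):
        conv = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size,
            stride=stride,
            padding=dilation * (kernel_size - 1) // 2,
            dilation=dilation,
            bias=not use_gn,  # GroupNorm already carries a learnable bias
        )
        nn.init.kaiming_uniform_(conv.weight, a=1)
        if not use_gn:
            nn.init.constant_(conv.bias, 0)
            return conv
        return nn.Sequential(conv, group_norm(out_channels))

    return make_conv
```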
…371)
* Add new section "Projects using maskrcnn-benchmark".
* Update README.md: update the format.
* Update README.md
* Add new section "Projects using maskrcnn-benchmark".
* Update README.md: update the format.
* Update README.md
* Add coco_2017_train and coco_2017_val
* Update README.md: add instructions for COCO_2017
…#377)
* make pixel indexes 0-based for bounding boxes in the Pascal VOC dataset
* replace all instances of torch.distributed.deprecated with torch.distributed
* add GroupNorm
* add GroupNorm -- sort out yaml files
* use torch.nn.GroupNorm instead, replace 'use_gn' with 'conv_block', and use 'BaseStem' & 'Bottleneck' to simplify the code
* modify the 'group_norm' and 'conv_with_kaiming_uniform' functions
* modify the yaml files in configs/gn_baselines/ and reduce the amount of indentation and code duplication
* use 'kaiming_uniform' to initialize ResNet, disable GN after the fc layer, and add dilation into ResNetHead
Since IMS_PER_BATCH is global, it shouldn't be multiplied by the number of GPUs.
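A minimal sketch of what treating `IMS_PER_BATCH` as a global value implies, assuming `cfg.SOLVER.IMS_PER_BATCH` holds the total batch size across all processes and `torch.distributed` reports the GPU count:

```python
import torch.distributed as dist


def images_per_gpu(cfg):
    num_gpus = dist.get_world_size() if dist.is_initialized() else 1
    assert cfg.SOLVER.IMS_PER_BATCH % num_gpus == 0, (
        "SOLVER.IMS_PER_BATCH ({}) must be divisible by the number of GPUs ({})"
        .format(cfg.SOLVER.IMS_PER_BATCH, num_gpus)
    )
    # The per-GPU batch size is the global value divided by the GPU count,
    # so IMS_PER_BATCH itself is never multiplied by the number of GPUs.
    return cfg.SOLVER.IMS_PER_BATCH // num_gpus
```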
* make pixel indexes 0-based for bounding boxes in the Pascal VOC dataset
* replace all instances of torch.distributed.deprecated with torch.distributed
* add GroupNorm
* add GroupNorm -- sort out yaml files
* use torch.nn.GroupNorm instead, replace 'use_gn' with 'conv_block', and use 'BaseStem' & 'Bottleneck' to simplify the code
* modify the 'group_norm' and 'conv_with_kaiming_uniform' functions
* modify the yaml files in configs/gn_baselines/ and reduce the amount of indentation and code duplication
* use 'kaiming_uniform' to initialize ResNet, disable GN after the fc layer, and add dilation into ResNetHead
* agnostic regression for bbox
…eilDiv. Add a type cast to (long) on the first parameter to fix the errors. (#409)
* Registry for RoI Box Predictors
  - Add a registry ROI_BOX_PREDICTOR
  - Use the registry in roi_box_predictors.py, replacing the local factory
  - Minor changes in structures/bounding_box.py: when copying a box with fields, check if the field exists
  - Minor changes in logger.py: make filename an optional argument with a default value of "log.txt"
* Add argument skip_missing=False
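A hedged sketch of the registry pattern this commit describes; the `Registry` class and the `FPNPredictor` example below are illustrative, not the repository's exact implementation.

```python
from torch import nn


class Registry(dict):
    def register(self, name):
        def decorator(module_class):
            assert name not in self, "{} is already registered".format(name)
            self[name] = module_class
            return module_class

        return decorator


ROI_BOX_PREDICTOR = Registry()


@ROI_BOX_PREDICTOR.register("FPNPredictor")
class FPNPredictor(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.cls_score = nn.Linear(in_channels, num_classes)
        self.bbox_pred = nn.Linear(in_channels, num_classes * 4)

    def forward(self, x):
        return self.cls_score(x), self.bbox_pred(x)


# A config string then selects the predictor instead of a local factory, e.g.:
# predictor_class = ROI_BOX_PREDICTOR[cfg.MODEL.ROI_BOX_HEAD.PREDICTOR]
```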
* [WIP] Keypoints inference on C2 models works
* Training seems to work, but still gives slightly worse results
* e2e training works but gives 3 and 5 mAP less
* Add modification proposed by @ChangErgou; improves mAP by 1.5 points, to 0.514 and 0.609
* Keypoints reproduce expected results
* Clean coco.py
* Linter + remove unnecessary code
* Merge criteria for empty bboxes in has_valid_annotation
* Remove trailing print
* Add demo support for keypoints; still needs further cleanups and improvements, like adding fields support for the other ops in Keypoints
* More cleanups and misc improvements
* Fixes after rebase
* Add information to the readme
* Fix md formatting
* Add RPN config files
* Add more RPN models
* Add support for the Caffe2 ResNeXt-152-32x8d-FPN-IN5k backbone for Mask R-CNN
* Clean up
* Fix path_catalogs.py
* Back to the old ROIAlign_cpu.cpp file
* Add RetinaNet parameters in cfg.
* Hot fix.
* Add the RetinaNet head module.
* Add the function to generate the anchors for RetinaNet.
* Add the SigmoidFocalLoss cuda operator.
* Fix the bug in the extra layers.
* Change the normalizer for SigmoidFocalLoss.
* Support multiscale in training.
* Add RetinaNet training script.
* Add the inference part of RetinaNet.
* Fix the bug when building the extra layers in RetinaNet. Update the matching part in retinanet_loss.
* Add the first version of RetinaNet inference. Need to check it again to see if there is any room for speed improvement.
* Remove the retinanet_R-50-FPN_2x.yaml first.
* Optimize the RetinaNet postprocessing.
* Quick fix.
* Add script for training RetinaNet with a ResNet101 backbone.
* Move cfg.RETINANET to cfg.MODEL.RETINANET.
* Remove the variables which are not used.
* Revert boxlist_ops. Generate empty BoxLists instead of [] in retinanet_infer.
* Remove the unused commented lines. Add NUM_DETECTIONS_PER_IMAGE.
* Remove the unused code.
* Move RetinaNet-related files under modeling/rpn/retinanet.
* Add retinanet_X_101_32x8d_FPN_1x.yaml script. This model is not fully validated; I only trained it for around 5000 iterations and everything is fine.
* Set RETINANET.PRE_NMS_TOP_N to 0 in level 5 (p7), because the previous setting may generate zero detections and could cause the program to break. This part follows the original Detectron setting.
* Fix the RPN-only bug when the training ends.
* Minor improvements.
* Comments and add a Python-only implementation.
* Bugfix and remove commented code.
* Keep generalized_rcnn the same. Move build_retinanet inside build_rpn.
* Add USE_C5 in MODEL.RETINANET.
* Add two configs using P5 to generate P6.
* Fix the bug when loading the Caffe2 ImageNet pretrained model.
* Reduce the code duplication between the RPN loss and the RetinaNet loss.
* Remove the unused comment.
* Remove the hard-coded number of classes.
* Share the forward part of RPN inference.
* Fix the bug in RPN inference.
* Remove the conditional part in the inference.
* Bug fix: add the utils file for permute and flatten of the box prediction layers.
* Update the comment.
* Quick fix: add import cat.
* Quick fix: forgot to include an import.
* Adjust the normalization part according to Detectron's setting.
* Use the bbox reg normalization term.
* Clean the code according to the recent review.
* Use the CUDA version for training now, and the Python version for training on CPU.
* Rename the directory to retinanet.
* Make the train and val datasets consistent with the Mask R-CNN setting.
* Add comment.
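The RetinaNet commits above mention both a CUDA SigmoidFocalLoss operator and a Python-only implementation. Below is a small Python-only sketch of the standard sigmoid focal loss; the shapes, the binary target encoding, and the sum reduction are assumptions for illustration, not the repository's exact operator.

```python
import torch
import torch.nn.functional as F


def sigmoid_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    # logits and targets share the same shape; targets are 0/1 per anchor-class pair.
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()
```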
A few additions: I added the top-level directory `cityscapes`, since the `tools/cityscapes/convert_cityscapes_to_coco.py` script has the directory structure `gtFine_trainvaltest/gtFine` hardcoded into it, which is fine but was not clear at first. I also added a **Note** warning people to install Detectron as well, since the script uses the `detectron.utils.boxes` and `detectron.utils.segm` modules, which have further dependencies in the Detectron lib.
Need this)
fix minor typos
install local pip with conda
Update INSTALL.md
Add uncompressed rle condition in Binarymask
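A sketch of what handling uncompressed RLE alongside polygons and compressed RLE can look like when building a binary mask; it relies on the standard pycocotools API, and the helper name `segmentation_to_mask` is hypothetical.

```python
import numpy as np
from pycocotools import mask as mask_utils


def segmentation_to_mask(segm, height, width):
    if isinstance(segm, list):
        # polygons: a list of [x1, y1, x2, y2, ...] lists
        rles = mask_utils.frPyObjects(segm, height, width)
        rle = mask_utils.merge(rles)
    elif isinstance(segm["counts"], list):
        # uncompressed RLE: 'counts' is still a plain Python list of run lengths
        rle = mask_utils.frPyObjects(segm, height, width)
    else:
        # already-compressed RLE
        rle = segm
    return mask_utils.decode(rle).astype(np.uint8)
```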
Update predictor.py
remove redundant reshape of box_regression
Update lr scheduling to pytorch 1.1.0
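A minimal sketch of the PyTorch 1.1.0 ordering change this commit refers to: the LR scheduler is stepped after the optimizer, not before it. The model, optimizer, and scheduler below are placeholders.

```python
import torch
from torch import nn, optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()
    loss.backward()
    optimizer.step()
    scheduler.step()  # PyTorch >= 1.1.0: step the scheduler after the optimizer
```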
Validation during training (version 2)
…r is deprecated in pytorch v1.2.0
replacing dtype torch.uint8 with torch.bool for indexing in pytorch 1.2.0
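A small sketch of the dtype change: from PyTorch 1.2.0 on, boolean tensors are the supported mask-index type, and legacy uint8 masks can be converted with `.to(torch.bool)`. The tensors here are placeholders.

```python
import torch

scores = torch.tensor([0.9, 0.2, 0.7, 0.1])
boxes = torch.arange(16, dtype=torch.float32).reshape(4, 4)

keep = scores > 0.5                 # comparisons already yield torch.bool
assert keep.dtype == torch.bool
selected = boxes[keep]              # boolean mask indexing, no deprecation warning

legacy_mask = torch.tensor([1, 0, 1, 0], dtype=torch.uint8)
selected_again = boxes[legacy_mask.to(torch.bool)]  # convert old uint8 masks
```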
Add native CityScapes evaluation tool
Extend COCO evaluation for AbstractDataset
bugfix: use correct config for tta and device handling during inference
Add native CityScapes dataset
Move horizontal flip probability to config
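A hedged sketch of reading the flip probability from the config instead of hardcoding it; the key `INPUT.HORIZONTAL_FLIP_PROB_TRAIN` and the transform below are assumptions for illustration.

```python
import random

from PIL import Image


class RandomHorizontalFlip:
    def __init__(self, prob=0.5):
        self.prob = prob

    def __call__(self, image, target):
        if random.random() < self.prob:
            image = image.transpose(Image.FLIP_LEFT_RIGHT)
            target = target.transpose(0)  # assumes the target supports a left-right transpose
        return image, target


def build_flip_transform(cfg, is_train):
    # previously the training probability was hardcoded to 0.5
    flip_prob = cfg.INPUT.HORIZONTAL_FLIP_PROB_TRAIN if is_train else 0.0
    return RandomHorizontalFlip(flip_prob)
```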
add the MULAN project to README.md, under "Projects using maskrcnn-benchmark"
Thanks for your excellent work! @fmassa Based on this repository, we have been researching object detectors without sampling heuristics (e.g., Focal Loss, GHM, undersampling). The paper (https://arxiv.org/abs/1909.04868) and code (https://github.com/ChenJoya/sampling-free) have been released. Thank you again for maskrcnn-benchmark. It is a really simple, efficient, high-performance object detection benchmark.
update with main repository