
Logging #9

Open
michaelisc opened this issue Oct 1, 2019 · 1 comment
Labels
bug Something isn't working

Comments

@michaelisc
Contributor

🐛 Bug

Logging in the LVIS API can interfere with the logging used in object detection toolboxes like mmdetection.

To Reproduce

  • Clone and install mmdetection:

    git clone https://github.com/open-mmlab/mmdetection.git
    cd mmdetection
    pip install -v -e .

  • Add from lvis.lvis import LVIS to a file (e.g. mmdet/datasets/coco.py).
  • Run training:

    python3 tools/train.py configs/faster_rcnn_r50_fpn_1x.py

The training run then starts with output like:

[10/02 00:35:51] root WARNING: The model and loaded state dict do not match exactly

and no further logging output is printed during training.

Expected behavior

Normally the first printed lines are:

2019-10-02 00:38:32,040 - INFO - Distributed training: False
2019-10-02 00:38:32,557 - INFO - load model from: torchvision://resnet50
2019-10-02 00:38:33,420 - WARNING - The model and loaded state dict do not match exactly

and logging outputs are printed during training, e.g.:

2019-10-02 00:40:31,962 - INFO - Start running, host: xxx@zzz, work_dir: $CURRENT_DIR/work_dirs/faster_rcnn_r50_fpn_1x
2019-10-02 00:40:31,967 - INFO - workflow: [('train', 1)], max: 12 epochs
2019-10-02 00:40:03,400 - INFO - Epoch [1][50/14186]    lr: 0.00199, eta: 1 day, 5:42:13, time: 0.628, data_time: 0.031, memory: 3852, loss_rpn_cls: 0.4321, loss_rpn_bbox: 0.0947, loss_cls: 1.2581, acc: 90.4229, loss_bbox: 0.1021, loss: 1.8871
2019-10-02 00:40:27,730 - INFO - Epoch [1][100/14186]   lr: 0.00233, eta: 1 day, 2:20:42, time: 0.487, data_time: 0.019, memory: 3852, loss_rpn_cls: 0.3118, loss_rpn_bbox: 0.0992, loss_cls: 0.7181, acc: 93.3760, loss_bbox: 0.1250, loss: 1.2541

Additional context

This seems to be a conflict between the two loggers. My quick fix was to remove the logger in the lvis API and replace it with print functions where appropriate. I am happy to submit this as a pull request, but I guess the issue needs some discussion and a decision from your side on how to proceed.
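An alternative to replacing the logger with print() would be the standard library-logging convention: a named module logger with a NullHandler, which never touches the root logger. A sketch of that pattern (the logger name and function are illustrative, not the actual lvis code):

```python
import logging

# Library-friendly pattern: a named module logger plus a NullHandler.
# The library never calls basicConfig or attaches handlers to the root
# logger, so it cannot shadow the application's own logging setup.
logger = logging.getLogger("lvis.lvis")
logger.addHandler(logging.NullHandler())

def load_annotations(path):
    # Records propagate up to whatever handlers the APPLICATION configured;
    # if it configured none, the NullHandler swallows them silently.
    logger.info("loading annotations from %s", path)

load_annotations("annotations.json")  # silent unless the app set up logging
```

With this, mmdetection's own logger configuration would keep working unchanged, and LVIS messages would show up only when the surrounding application asks for them.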

@michaelisc michaelisc added the bug Something isn't working label Oct 1, 2019
@wondervictor

I've met the same problem.
