This repository has been archived by the owner on Oct 31, 2023. It is now read-only.
PyTorch 1.0 currently doesn't support Synchronized Batch Norm, but there are ongoing discussions about adding it; see, for example, pytorch/pytorch#2584 and pytorch/pytorch#12198.
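For reference, later PyTorch releases did add `torch.nn.SyncBatchNorm`, which synchronizes batch statistics across processes when used with `DistributedDataParallel`. A minimal sketch of converting an existing model (the model here is just an illustrative example):

```python
import torch
from torch import nn

# A small example model using ordinary BatchNorm2d.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(inplace=True),
)

# Replace every BatchNorm*D layer with SyncBatchNorm. The conversion
# itself needs no distributed setup; synchronized statistics only take
# effect when training under a distributed process group.
sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(type(sync_model[1]).__name__)
```

Note that `SyncBatchNorm` only helps at training time; at inference it behaves like a regular batch norm layer using the stored running statistics.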
Because the discussion on how to support Synchronized Batch Norm is still ongoing, we decided to follow the Detectron implementation and freeze the batch norm statistics during training, so that we don't run into issues when training with small batch sizes.
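A minimal sketch of a frozen batch norm layer in the Detectron style: the statistics and affine parameters are registered as buffers rather than parameters, so the optimizer never updates them and the layer reduces to a fixed per-channel scale and shift. (This is an illustrative sketch, not the repository's exact implementation.)

```python
import torch
from torch import nn

class FrozenBatchNorm2d(nn.Module):
    """BatchNorm2d with frozen statistics and affine parameters.

    Buffers are used instead of nn.Parameter, so nothing here is
    updated by the optimizer or by running-stat accumulation.
    """
    def __init__(self, num_features):
        super().__init__()
        self.register_buffer("weight", torch.ones(num_features))
        self.register_buffer("bias", torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):
        # Fold the frozen statistics into a per-channel scale and shift.
        scale = self.weight * self.running_var.rsqrt()
        shift = self.bias - self.running_mean * scale
        return x * scale.view(1, -1, 1, 1) + shift.view(1, -1, 1, 1)

fbn = FrozenBatchNorm2d(3)
x = torch.randn(2, 3, 4, 4)
out = fbn(x)  # identity with the default buffers (mean 0, var 1)
```

In practice the buffers are filled from a pretrained checkpoint (e.g. ImageNet-pretrained ResNet weights), so the layer applies the statistics learned at pretraining time unchanged.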
A possible workaround for now is to train with GroupNorm instead, which makes training with small batches possible.
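GroupNorm normalizes over channel groups within each sample, so it is independent of the batch size. A small sketch of swapping it in for batch norm (the layer sizes and the common choice of 32 groups are illustrative):

```python
import torch
from torch import nn

# GroupNorm(num_groups, num_channels): statistics are computed per
# sample over groups of channels, so batch size 1 works fine.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.GroupNorm(32, 64),  # 32 groups over 64 channels
    nn.ReLU(inplace=True),
)

out = block(torch.randn(1, 3, 8, 8))  # single-image batch
```

Unlike batch norm, GroupNorm behaves identically in train and eval modes, since it keeps no running statistics.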
🚀 Feature
Does PyTorch 1.0 support Synchronized BatchNorm? And does the FrozenBatchNorm in this code serve the same function as Synchronized BN?
Why use FrozenBatchNorm at all? Other third-party Faster R-CNN implementations don't freeze batch norm in their ResNet backbones.