
Fix Deformable Convolutional Op Support #179 #1129

Closed

Conversation

smallsunsun1

Add CPU and GPU implementations for Deformable Conv2d and DeformablePSROIAlign according to https://arxiv.org/abs/1811.11168, and EquiConv according to https://arxiv.org/pdf/1903.08094.pdf.
The original #1056 was closed because the wrong email address was used in the commit history.


@gabrieldemarmiesse
Member

We used black to format some files in the master branch. To avoid future conflicts, I updated your pull request. Please do git pull before working on it again locally.

@vlordier

vlordier commented Feb 28, 2020

@smallsunsun1 when swapping Conv2D with EquiConv, kernel_size and strides need to be changed from int to tuple.
It might be helpful to align the EquiConv prototype with the Conv2D prototype, where kernel_size or strides is an integer or a tuple/list of 2 integers.
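The int-to-tuple alignment suggested above can be sketched with a small normalizer, similar in spirit to what Keras does internally for Conv2D arguments (the function name here is illustrative, not part of the PR):

```python
def normalize_tuple(value, n, name):
    """Accept an int or an iterable of n ints, as Conv2D's
    kernel_size/strides arguments do, and return a tuple of n ints."""
    if isinstance(value, int):
        return (value,) * n
    value = tuple(value)
    if len(value) != n:
        raise ValueError(
            f"{name} must be an int or a tuple/list of {n} ints, got {value!r}"
        )
    return value
```

With a helper like this in the constructor, EquiConv could accept `kernel_size=3` and `kernel_size=(3, 3)` interchangeably, matching Conv2D.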

@smallsunsun1
Author

> @smallsunsun1 when swapping Conv2D with EquiConv, kernel_size and strides need to be changed from int to tuple.
> It might be helpful to align the EquiConv prototype with the Conv2D prototype, where kernel_size or strides is an integer or a tuple/list of 2 integers.

Thanks for your suggestion; I will align the Conv2D and EquiConv prototypes in a later commit. EquiConv now supports dynamic input, and all NumPy operations have been changed into TF operations.

@smallsunsun1
Author

I think I have already finished all the work. Can anyone review this PR and give some comments? @seanpmorgan @Squadrick

@Squadrick
Member

@smallsunsun1 Can you pull from master and update this PR?

Also, please take a look at #1328, we've changed our testing procedure.

Sorry about the delay, I'll review the PR soon.

@Squadrick Squadrick self-assigned this Mar 25, 2020
@Squadrick
Member

@mdv3101 This is quite a massive PR; it'll take me a while to review and test its correctness. If you want it immediately, you can clone @smallsunsun1's repo and test it out yourself (and report back in case there are any issues). Also, any help with reviews would be much appreciated.

@tuanhanhdmprof

tuanhanhdmprof commented May 4, 2020

> @mdv3101 This is quite a massive PR, it'll take me a while to review and test its correctness. If you want it immediately, you can clone @smallsunsun1's repo and test it out yourself (in case there's any issues you can report back). Also, any help with reviews would be much appreciated too.

Hi bro,
What is the status of this PR?
Thank you

@bhack
Contributor

bhack commented May 6, 2020

> This is quite a massive PR, it'll take me a while to review and test its correctness. If you want it immediately

It is +4,614 −0; you probably need to split this into multiple PRs.

DeformableConv2D: serialization fixes
@ujjwal-ai

Any progress on this?

@mohamedmindee

It would be nice to use deformable convolutions with TensorFlow. When do you think it will be merged?

deformable_conv2d.py serialization fix
@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment @googlebot I fixed it.. If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@googlebot googlebot added cla: no and removed cla: yes labels Jul 8, 2020
@WindQAQ
Member

WindQAQ commented Jul 12, 2020

This PR is quite stale. I would recommend splitting the kernels/layers and the example into different PRs to help us review faster. Also, many code segments can be reused directly from core TF, such as CheckFormatConstraintsOnShape, DimensionsFromShape, FastBoundsCheck, etc. Please update them accordingly. If @smallsunsun1 doesn't mind, I can help with this.

@smallsunsun1
Author

> This PR is quite stale. I would recommend splitting the kernels/layers and the example into different PRs to help us review faster. Also, many code segments can be reused directly from core TF, such as CheckFormatConstraintsOnShape, DimensionsFromShape, FastBoundsCheck, etc. Please update them accordingly. If @smallsunsun1 doesn't mind, I can help with this.

It would be appreciated if you could help deal with these. I don't have much time to update this PR now.

@WindQAQ
Member

WindQAQ commented Jul 13, 2020

> This PR is quite stale. I would recommend splitting the kernels/layers and the example into different PRs to help us review faster. Also, many code segments can be reused directly from core TF, such as CheckFormatConstraintsOnShape, DimensionsFromShape, FastBoundsCheck, etc. Please update them accordingly. If @smallsunsun1 doesn't mind, I can help with this.

> It would be appreciated if you could help deal with these. I don't have much time to update this PR now.

Sure, then I'll open a new PR and ping you.

@oubathAI

Hi @smallsunsun1

Thanks for the nice PR.
I merged your PR locally and was able to run DeformableConv2D without any major problems.

However, when I test DeformableConv2D on scaled MNIST, the results vary considerably depending on the im2col parameter.
When im2col = 1 and batch_size = 32, the results are rather good (accuracy > 95% in 5 epochs); but when im2col = 32 with batch_size = 32, the results are very bad (accuracy < 50% in 5 epochs).
Is this drop in accuracy normal? If so, can you explain the reasons?

I also found this implementation, which is very similar to yours: https://github.com/BIGKnight/deformable_conv2d_v2_tensorflow

In this repository, the author mentions that it is necessary to perform a swapaxis if we want to use im2col > 1. In the file deformable_conv2d_op/deformable_conv2d.cc, line 333, you can even see where the author planned to do it (https://github.com/BIGKnight/deformable_conv2d_v2_tensorflow/blob/e0491b3bae08561a6a932000e535759c605b0837/deformable_conv2d_op/deformable_conv2d.cc#L333); however, I don't see any trace of swapaxis in your code.
Can you explain why you removed the swapaxis? Isn't that the reason for the bad learning when im2col > 1?

Thank you in advance for your answer

@smallsunsun1
Author

smallsunsun1 commented Jul 21, 2020 via email

@oubathAI

Thanks for your quick reply.

> when im2col is not equal to 1, the current implementation does not work

This corresponds to my observation; thanks for your confirmation.

> The C++ part of DeformableConv2D is largely based on this repo: https://github.com/BIGKnight/deformable_conv2d_v2_tensorflow

I thought so. There's a lot of identical code between your two repositories. But why didn't you mention that anywhere? I didn't see any reference to his work in your code.

> I think the parameter im2col does nothing to the speed of kernel execution

I'm not sure about that statement; what do you base it on?
The code you rely on is itself an adaptation of the Deformable-ConvNets code repository.
It is clearly stated in the FAQ of the original code that im2col can enhance performance; I quote: "The updated operator is significantly faster than the existing one when the image batch size is large".
This seems logical when you look at the code; indeed, in the PR you submitted, if you look at the file deformable_conv_op.cc, line 1017, you can see that the loop for (int32_t n = 0; n < num_ / im2col_step_; ++n) depends on im2col; the greater im2col is, the fewer iterations there are, and therefore the computation is faster.
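To make the loop-count argument concrete, here is a small Python sketch of that iteration count (a simplification; the real kernel runs this loop in C++):

```python
def outer_iterations(batch_size, im2col_step):
    # Mirrors `for (n = 0; n < num_ / im2col_step_; ++n)`: each iteration
    # processes im2col_step images at once, so a larger step means fewer
    # (but larger) GEMM calls over the im2col buffer.
    assert batch_size % im2col_step == 0, "im2col_step must divide the batch size"
    return batch_size // im2col_step
```

For a batch of 32, im2col_step=1 gives 32 outer iterations, while im2col_step=32 gives a single one.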

From what I've seen in BIGKnight's code and in the original code, the swapaxis is necessary to perform the operations with im2col (even if I didn't understand why). I guess the reason you didn't include the swapaxis in your code is that you couldn't port it?
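The layout issue behind that swapaxis can be illustrated with NumPy. This is only a hypothetical sketch of the shape bookkeeping (the shapes and axis order are illustrative, chosen to show why an axis swap is needed when a block of im2col_step images is processed together, not taken from the PR's actual kernel):

```python
import numpy as np

step, channels, h, w = 4, 8, 5, 5

# Suppose the batched GEMM over the im2col buffer emits one block of
# `step` images with the step axis nested inside the channel axis:
gemm_out = np.arange(channels * step * h * w).reshape(channels, step, h, w)

# A swapaxes restores the per-image layout (step, channels, h, w)
# before the block is written back into the output tensor:
per_image = np.swapaxes(gemm_out, 0, 1)
assert per_image.shape == (step, channels, h, w)
```

Without this transpose, the per-image channel data would be interleaved across the images in the block, which would be consistent with the bad learning observed for im2col > 1.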

Thanks in advance for your answer

@smallsunsun1
Author

smallsunsun1 commented Jul 22, 2020 via email

@axeldavy

axeldavy commented Aug 3, 2020

Hi, I tried to run the PR on TensorFlow 2.3 and hit the following errors:

- Some #include "tensorflow/core/framework/kernel_shape_util.h" lines were missing, causing compilation failures.
- When using mixed_float16, the layer triggers an error saying it needs float32 or float64. If passing dtype='float32', it fails saying offset is float16 while the input is float32. The fix is to pass the dtype argument to the Conv2D that produces 'offset'.
- get_config is missing 'kernel_initializer', etc.

I thought I'd let you know so it can be fixed before the PR is merged :-).
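The get_config gap mentioned above is a round-trip problem: Keras recreates a layer via cls(**config), so every constructor argument must appear in get_config. A minimal pure-Python stand-in of the pattern (the class and argument names are illustrative, not the PR's actual API; the real layer would serialize initializers via the Keras helpers rather than plain strings):

```python
class DeformableConv2DSketch:
    """Stand-in illustrating the get_config fix suggested above."""

    def __init__(self, filters, kernel_size,
                 kernel_initializer="glorot_uniform",
                 bias_initializer="zeros"):
        self.filters = filters
        self.kernel_size = kernel_size
        self.kernel_initializer = kernel_initializer
        self.bias_initializer = bias_initializer

    def get_config(self):
        # Include *every* constructor argument so that
        # cls(**layer.get_config()) reconstructs an equivalent layer.
        return {
            "filters": self.filters,
            "kernel_size": self.kernel_size,
            "kernel_initializer": self.kernel_initializer,
            "bias_initializer": self.bias_initializer,
        }
```

If any argument (here, kernel_initializer) is left out of the returned dict, reloading a saved model silently falls back to the default value instead of the one the user configured.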

@alexattia

alexattia commented Sep 29, 2020

Looking forward to it! Do you have any news about the progress of this PR?

@qin2294096

I have tried BIGKnight's code in my project, replacing three conv2d layers with deformable_conv2d in center-net-resnet18.
And I observed:

  1. the training time is about twice as much as before
  2. backward is really slow
  3. im2col_step has no effect on training speed

environment: tf1.15, cuda10, cudnn7, gtx1080ti
batchsize=16, im2col_step=1
[screenshot: training log]

batchsize=16, im2col_step=16
[screenshot: training log]

No deformable_conv2d
[screenshot: training log]

By the way, in my project, using deformable_conv2d with im2col_step=1 brings good performance (I haven't tried im2col_step=16). If the training speed becomes faster, it will be a very good contribution!

@bhack
Contributor

bhack commented Apr 27, 2022

I don't think we have the bandwidth to maintain these custom ops. If you are still interested, please contribute the Python layer to https://github.com/keras-team/keras-cv.
If you need more native ops, please contribute them to TensorFlow.
