Add Conv1DTranspose, Conv2DTranspose, Conv3DTranspose layers #124
Comments
I would like to take this issue to get to know the ConvTranspose layers, if possible 😊 @zaleslaw I suppose we can also add the
I'm not sure about the Conv1DTranspose layer; it's not added to Keras. Please write here if you have any ideas or have found useful articles about its implementation.
I found it here and then thought about adding it to KotlinDL as well. Isn't it the layer that we are thinking about?
@zaleslaw Actually, it exists in TF Keras: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv1DTranspose
@avan1235 partially implemented this but now has no time to finish this PR. It could be a good starting point for future implementations.
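For intuition about what these layers compute: a transposed convolution scatters each input element across the output with the kernel, rather than gathering inputs into each output. A minimal 1D sketch in NumPy (illustrative only; the function name is an assumption, not KotlinDL or TF code):

```python
import numpy as np

def conv1d_transpose(x, w, stride=1):
    """Transposed 1D convolution with 'valid' padding: each input
    element scatters a scaled copy of the kernel into the output
    at offset i * stride."""
    out = np.zeros((len(x) - 1) * stride + len(w))
    for i, xi in enumerate(x):
        out[i * stride : i * stride + len(w)] += xi * w
    return out

# A length-2 input with a length-3 kernel and stride 2 yields a
# length-5 output: (2 - 1) * 2 + 3 = 5.
print(conv1d_transpose(np.array([1.0, 2.0]), np.array([1.0, 1.0, 1.0]), stride=2))
```

The overlapping scatter is why transposed convolution is often used for learned upsampling: strided regions of the output receive contributions from neighboring input elements.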
Currently, support for `Conv2DTranspose`, `Conv3DTranspose`, and `Conv1DTranspose` is missing, and it would be great to add these layers. The desired PR addressing this issue should include the following layers in the `api` module:
- `Conv2DTranspose` (you can take inspiration from the implementation of `Conv2D` as a reference)
- `Conv3DTranspose` (you can take inspiration from the implementation of `Conv3D` as a reference)
- `Conv1DTranspose`

This operation is sometimes called "deconvolution", after Deconvolutional Networks. De-convolution could be implemented internally via `tf.nn.conv2dBackpropInput` or `tf.nn.conv3dBackpropInput`; you also need to implement an analogue of the `convOutputLength` function. Also, if needed, you can take a look at the Keras documentation for `Conv2DTranspose` and `Conv3DTranspose`.

NOTE: for the moment, there is no need to add support for "data format" (i.e., channels last vs. channels first) in your PR; you can assume the channels are always in the last dimension.
P.S. If you want to take this ticket, please leave a comment below.
P.P.S. Read the Contributing Guidelines.