I don't quite understand how Upsampling works. #1412
Comments
Bilinear upsampling is implemented via convolution, hence the weights. If you don't want them to be modified by training, you need to set lr_scale to 0 for them.
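A minimal sketch of the idea in the comment above, assuming the MXNet symbol API (variable names are illustrative). Newer MXNet versions expose the per-parameter learning rate directly as `lr_mult` on the `Variable`; the `lr_scale` mentioned above was the older optimizer-side spelling of the same thing:

```python
import mxnet as mx

data = mx.sym.Variable('data')

# lr_mult=0 keeps this weight fixed at its initial value during training,
# so the layer behaves as a pure (non-learned) bilinear resize.
up_weight = mx.sym.Variable('up_weight', lr_mult=0.0)

up = mx.sym.UpSampling(data, up_weight, scale=2, num_filter=21,
                       sample_type='bilinear', num_args=2)
```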
So it works like standard bilinear interpolation if I don't update the weights? Could you explain what changes if I do update them? I ask because I think of bilinear interpolation as a fixed method.
If you update the weights, it is trained like a deconvolution layer, which learns the upsampling filter from data instead of keeping it fixed at the bilinear coefficients.
So does it mean I need to create a convolution filter for it and use it to do bilinear interpolation? The problem is that the program complains that I did not put the weights into the net parameters.
You need a weight matrix. It can be initialized with the bilinear initializer.
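For what it's worth, a sketch of using the bilinear initializer mentioned above: `mx.init.Bilinear` and `mx.init.Mixed` are in MXNet's initializer module, while the variable name `up_weight` is illustrative:

```python
import mxnet as mx

# Initialize the upsampling weight with bilinear interpolation
# coefficients and everything else with Xavier. Mixed dispatches
# by regex on the parameter name, matched in order.
init = mx.init.Mixed(
    ['up_weight', '.*'],
    [mx.init.Bilinear(), mx.init.Xavier()]
)
# Pass this object wherever the training API takes an `initializer`
# argument, e.g. Module.init_params(initializer=init).
```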
I did not find the corresponding documentation. I found Xavier, Normal, Uniform, and so on. Could you tell me how to do it specifically? I suggest adding this to the documentation.
I ran into an error, "Floating point exception: 8". The above is my minimal working code. I am not sure whether it is caused by UpSampling itself or by something I missed. Could you look into this?
Can we reopen this so that we have a reminder to fix the error message? @pluskid The error for this on the Julia side was quite opaque.
Can I ask what num_args is in the UpSampling layer? I don't quite understand what I should put there.
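To illustrate (a sketch based on the documented UpSampling signature; the symbol names are mine): `num_args` counts the input symbols passed to the operator. Nearest-neighbor upsampling takes only data inputs, while bilinear takes one data symbol plus one weight symbol, so `num_args=2`:

```python
import mxnet as mx

data = mx.sym.Variable('data')

# Nearest-neighbor: one data input, no weight, so num_args=1.
nn = mx.sym.UpSampling(data, scale=2, sample_type='nearest', num_args=1)

# Bilinear: data plus a weight symbol, so num_args=2.
w = mx.sym.Variable('up_weight')
bl = mx.sym.UpSampling(data, w, scale=2, num_filter=21,
                       sample_type='bilinear', num_args=2)
```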
Do I still need to initialize the weight of UpSampling when using 'bilinear' mode?
Each upsample layer consists of (1) an upsample step (bilinear, zero padding, or nearest-neighbor) and (2) a convolution with a transposed filter. That is why you need the convolution operation in these layers. Convolution always reduces the input to a smaller output; to reach a bigger output, you have to upsample the input first, and potentially zero-pad the output later to fix the dimensions. The filters are called learned filters because the un-transposed version (the filter used to compute the transposed filter) learned its features during training. A sketch of the transposed filter's coefficients follows below.
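To make the "transposed filter" concrete, here is a plain-NumPy sketch of the fixed kernel used for 2x bilinear upsampling; this mirrors what a bilinear initializer computes (the function name and the choice of scale are illustrative):

```python
import numpy as np

def bilinear_kernel(scale):
    """Bilinear interpolation weights for a deconvolution filter."""
    size = 2 * scale - scale % 2          # e.g. scale=2 -> a 4x4 kernel
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size]
    # Weight falls off linearly with distance from the kernel center.
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor))

print(bilinear_kernel(2))  # peaks at 0.5625 in the middle, 0.0625 at corners
```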
I am trying to implement a fully convolutional neural network, which requires an upsampling step before SoftmaxOutput. The small score maps need to be upsampled to the same size as the ground-truth label. I tried to use the UpSampling layer, but it seems the layer requires weight parameters. I simply want to use a bilinear method to scale the score maps: for example, I have 21 score maps of size 64x64 and just need them resized to, say, 128x128. I don't see the need for those weights. Could anyone explain to me how this UpSampling layer works? If this layer is not what I am looking for, what else can I use to achieve this?
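Pulling the comments above together, a hedged end-to-end sketch of this exact use case: a fixed (non-trained) bilinear 2x upsampling of 21 score maps from 64x64 to 128x128 before the softmax. All symbol names here are illustrative:

```python
import mxnet as mx

score = mx.sym.Variable('score')                  # (batch, 21, 64, 64)
w = mx.sym.Variable('up_weight', lr_mult=0.0)     # frozen: pure bilinear resize

up = mx.sym.UpSampling(score, w, scale=2, num_filter=21,
                       sample_type='bilinear', num_args=2)  # -> (batch, 21, 128, 128)
out = mx.sym.SoftmaxOutput(up, multi_output=True, name='softmax')

# Pair this with a bilinear initializer (see earlier sketch) so the frozen
# weight actually holds interpolation coefficients, not random values.
init = mx.init.Mixed(['up_weight', '.*'],
                     [mx.init.Bilinear(), mx.init.Xavier()])
```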