
Preserve shape of extracted features #1457

Closed

Conversation

jyegerlehner
Contributor

Is there a reason why extract_features changes the shape of a feature datum from channels x height x width to 1 x (channels*height*width) x 1? This caused a problem when I extracted features to use as the training data for the next stacked encoder/decoder pair of layers: I was unable to train with them because their shape was lost. The convolutional layer needs to know the correct width and height of its input, and that information had been discarded. I couldn't find a way to tell the net to reshape the training data back to the correct feature dimensions. The simplest solution appears to me to be to just not mangle the dimensions in the first place.
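A minimal sketch of the problem with numpy (the shapes here are hypothetical, not from the actual PR): once a channels x height x width blob is flattened to 1 x (channels*height*width) x 1, the element count is preserved but the spatial factorization is not, so a downstream convolutional layer cannot recover height and width from the datum alone.

```python
import numpy as np

# A hypothetical extracted feature blob: 256 channels, 6x6 spatial map.
channels, height, width = 256, 6, 6
feature = np.arange(channels * height * width, dtype=np.float32).reshape(
    channels, height, width)

# What extract_features did: flatten to 1 x (channels*height*width) x 1.
flattened = feature.reshape(1, channels * height * width, 1)

# The element count survives, but the spatial layout does not: many
# (c, h, w) factorizations produce the same flat length, so height and
# width cannot be inferred from the flattened shape.
assert flattened.size == feature.size
print(flattened.shape)  # (1, 9216, 1)
print(feature.shape)    # (256, 6, 6)
```

The fix proposed in this PR is simply to write the datum with its original channels/height/width instead of flattening.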

@shelhamer
Member

The extract_features tool was contributed by the community, so a few choices like this might have been particular to a given user's application. The core devs just use Python or MATLAB for feature extraction.

This seems like a reasonable change of behavior. I'm traveling, but we'll pick up the review pace soon and double-check this.

@jyegerlehner
Contributor Author

OK, that makes sense. Not being a Python person, I wasn't aware of the net surgery you referenced in the other thread. Maybe I need to take the plunge. Thanks for the reply. We'll be glad to get you all back.

shelhamer added a commit that referenced this pull request on Mar 8, 2015:

  extract_features preserves feature shape
@shelhamer
Member

I merged this to master in c942dc1 since other users have encountered this. On the whole, though, the extract_features tool seems like it should either go away or be rewritten, especially after #1970.

Thanks for the change @jyegerlehner.

@jyegerlehner
Contributor Author

> the extract_features tool seems like it should either go away or be rewritten especially after #1970.

@shelhamer I started looking at the rewrite. Datum is still used heavily in the code, and it is hard-wired to have channels/height/width explicitly. So is the way forward on that to have a deprecated V1Datum (like we have with V0LayerParameter and V1LayerParameter), and the new Datum would have repeated dim instead of channels/height/width? If so, the scope of that change goes beyond just the extract_features util. I can attempt that, but I am a bit daunted by my lack of understanding of how all these classes fit together.
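As a rough illustration of the idea above, a shape-general Datum could follow the repeated-dim pattern. This is a hypothetical sketch only; the field names and numbers are illustrative and not taken from Caffe's actual caffe.proto:

```protobuf
// Hypothetical shape-general Datum; field names/numbers are illustrative.
// The existing fixed-rank message would be kept as a deprecated V1Datum,
// mirroring the V0LayerParameter/V1LayerParameter migration.
message Datum {
  repeated int64 dim = 1 [packed = true];  // e.g. [channels, height, width]
  optional bytes data = 2;                 // raw element bytes
  repeated float float_data = 3;           // alternative float storage
  optional int32 label = 4;
}
```

A datum of any rank could then round-trip through extract_features without losing its spatial dimensions, at the cost of touching every code path that reads channels/height/width today.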
