On-the-fly net resizing, without reallocation (where possible) #594
Commits on Sep 18, 2014

Commit 69bf6b5: use Blob directly instead of shared_ptr for EltwiseLayer::max_idx_
This is in keeping with BVLC#742.

Commit 3194bb1

Commit 4fff966: don't reallocate blobs when shrinking memory use
This allows nets to be reshaped very quickly (essentially for free) as long as sufficient memory has already been allocated. Calling Blob::Reshape in order to free up memory becomes impossible; however, this is not a normal use case (and deleting blobs does free memory).
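
The grow-only behavior can be sketched in isolation (a minimal standalone illustration with a hypothetical GrowOnlyBuffer class, not Caffe's actual Blob/SyncedMemory code):

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical buffer illustrating the capacity-tracking idea: the backing
// allocation grows when a reshape needs more room, but is kept (not freed)
// when the requested size shrinks, so repeated reshapes cost no allocation.
class GrowOnlyBuffer {
 public:
  GrowOnlyBuffer() : data_(NULL), count_(0), capacity_(0) {}
  ~GrowOnlyBuffer() { std::free(data_); }

  void Reshape(size_t count) {
    count_ = count;
    if (count_ > capacity_) {  // grow: this is the only reallocation path
      std::free(data_);
      capacity_ = count_;
      data_ = static_cast<float*>(std::malloc(capacity_ * sizeof(float)));
    }
    // shrink or equal: keep the existing allocation untouched
  }

  size_t count() const { return count_; }

 private:
  float* data_;
  size_t count_;     // logical number of elements
  size_t capacity_;  // allocated number of elements
};
```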

Commit 87de5ed: enable reshaping in the forward pass
Note that calling Reshape when no reshape is necessary should be effectively a no-op, so this is not a performance regression.
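
A minimal sketch of the pattern (hypothetical LayerSketch class; Caffe's real Layer interface takes vectors of bottom and top blobs):

```cpp
#include <vector>

// Sketch only (not Caffe's real signatures): Forward calls Reshape on every
// invocation, and Reshape bails out immediately when the incoming shape
// matches the cached one, so the common case adds essentially no cost.
class LayerSketch {
 public:
  virtual ~LayerSketch() {}
  void Forward(const std::vector<int>& bottom_shape) {
    Reshape(bottom_shape);  // effectively a no-op if shapes are unchanged
    Forward_impl();
  }

 protected:
  virtual void Reshape(const std::vector<int>& bottom_shape) {
    if (bottom_shape == cached_shape_) return;  // fast path: nothing to do
    cached_shape_ = bottom_shape;
    // ...recompute top shapes here; buffers grow only, as sketched above...
  }
  virtual void Forward_impl() = 0;

 private:
  std::vector<int> cached_shape_;
};
```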

Commit 5ce519c: separate setTensor4dDesc from createTensor4dDesc
This will make it possible to add reshaping to cuDNN layers.
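
The create/set split can be sketched against the cuDNN C API as follows. This uses the current cudnnCreateTensorDescriptor/cudnnSetTensor4dDescriptor entry points; the cuDNN v1 API available at the time spelled these names slightly differently, so treat this as an assumption about the shape of the helpers rather than the exact code:

```cpp
#include <cudnn.h>

// Create the descriptor once, e.g. in LayerSetUp; this is the expensive,
// shape-independent part (error handling elided for brevity).
void createTensor4dDesc(cudnnTensorDescriptor_t* desc) {
  cudnnCreateTensorDescriptor(desc);
}

// (Re)set the dimensions on every Reshape; this is cheap and does not
// recreate the descriptor, so cuDNN layers can be reshaped on the fly.
void setTensor4dDesc(cudnnTensorDescriptor_t desc,
                     int n, int c, int h, int w) {
  cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                             n, c, h, w);
}
```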

Commit d7e8f2a
Commit 4b34c72
Commit 62bc0a8
Commit 256209d
Commit 07d6246

Commit 6c63b8c: split off Reshape for vision layers
Note that we are dropping some checks from LRN layer. However, these checks are fairly redundant; something is very wrong if these layers are producing top blobs that are different sizes than their inputs, and tests are the right place to catch that. The thing that really should be checked (but isn't) is that local_size needs to be odd; this will be added in a future commit.
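
The missing validation could look like the following sketch (glog-style CHECK macros, as used throughout Caffe; the placement inside LRNLayer's setup is an assumption):

```cpp
#include <glog/logging.h>

// Sketch of the missing check: an even LRN window has no center element,
// so reject it up front rather than producing misaligned output.
void CheckLocalSize(int local_size) {
  CHECK_EQ(local_size % 2, 1) << "LRN local_size must be odd, got "
                              << local_size;
}
```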

Commit d2de2ee
Strictly speaking, Reshape doesn't need to be called until the first Forward call; however, much existing code (especially tests) assumes that top blobs will be set up in SetUp, so we may as well do it there.

Commit 4f1b668: default LayerSetUp to no-op instead of NOT_IMPLEMENTED
Now that top blobs are set up in Layer::Reshape, it's Reshape that is mandatory, and simple layers often don't need to implement LayerSetUp. Reshape is (already) declared abstract, so not implementing it is a compile-time error.
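
The resulting division of labor, sketched with illustrative names (not the exact Caffe headers): one-time work goes in LayerSetUp, which now defaults to an empty body, while all shape-dependent work goes in the pure-virtual Reshape.

```cpp
// Illustrative base class: LayerSetUp has an empty default body, Reshape is
// pure virtual, so forgetting to implement Reshape fails at compile time.
struct SimpleBlob { int n, c, h, w; };

class LayerBase {
 public:
  virtual ~LayerBase() {}
  void SetUp(const SimpleBlob& bottom, SimpleBlob* top) {
    LayerSetUp(bottom, top);  // optional one-time setup
    Reshape(bottom, top);     // mandatory; also called before each Forward
  }
  virtual void LayerSetUp(const SimpleBlob& bottom, SimpleBlob* top) {}
  virtual void Reshape(const SimpleBlob& bottom, SimpleBlob* top) = 0;
};

// A simple element-wise layer only needs Reshape: the top mirrors the bottom.
class MirrorShapeLayer : public LayerBase {
 public:
  virtual void Reshape(const SimpleBlob& bottom, SimpleBlob* top) {
    *top = bottom;
  }
};
```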

Commit db5bb15

Commit 24350a6
Since we are now calling Reshape in the forward pass, it's only fair to include it when timing. Reshape calls should normally be four or so orders of magnitude faster than Forward calls; this change also makes it easy to notice a mistake that causes something slow to happen in Reshape.

Commit 490077e: add Net::Reshape for only reshaping
Note that it is not normally necessary to call this function when using reshapable nets, but sometimes it can be useful to compute the sizes of intermediate layers without waiting for the forward pass.
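
A hedged usage sketch: Net::Reshape and input_blobs() are the real Caffe entry points, but the surrounding helper is illustrative.

```cpp
#include "caffe/net.hpp"

// Illustrative helper: resize the net's (first) input blob, then propagate
// the new shape through every layer without running any computation.
void ResizeNetInput(caffe::Net<float>* net, int n, int c, int h, int w) {
  net->input_blobs()[0]->Reshape(n, c, h, w);
  net->Reshape();  // recomputes all intermediate blob shapes; no Forward pass
}
```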

Commit fdf2de1
Commit 0b5e11d
Commit d833ab3