Benchmark TensorFlow #66
The benchmark scripts and raw outputs are located here: https://github.com/soumith/convnet-benchmarks/tree/master/tensorflow
The lack of in-place operations is rather surprising. Once you have the full DAG it should be rather easy to apply a liveness algorithm to it to optimize tensor allocations. For an example see this: http://www.diku.dk/hjemmesider/ansatte/torbenm/ICD/Register.pdf (just replace register with tensor).

I'm kind of curious if there's any support for automatically compounding operations together, or for leveraging kernels that have some compounding built in (like the alpha/beta params of gemm). I'm pretty close to maximizing the amount of compounding that's possible in my benchmark networks. And because I write all my own kernels, I can further compound things that aren't possible with closed-source libraries like cuDNN. For example, I'm now able to compute the mean along the PQN dimension inside the conv and gemm kernels at no cost. This cuts the bandwidth required by batch norm in fprop by a third.

Though on the whole, I think TensorFlow seems like a great platform to build on. I'd say there's a good chance my kernels will make their way there sooner rather than later. You can find new benchmarks of my latest Winograd kernels in the updated paper here: http://arxiv.org/abs/1509.09308

What I'll be working on next is taking a lot of what I learned implementing Winograd and refreshing all of my conv/pooling/gemm kernels to support very small minibatches at near full utilization. This should have a big impact on the level at which you can scale these networks and the speed at which they converge. Here's a great paper exploring this: http://arxiv.org/abs/1509.04210
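The register-allocation analogy above can be made concrete with a small sketch (an illustration of the idea, not any framework's actual allocator): walk the ops in topological order, record the last consumer of each tensor, and hand freed buffers to later tensors, exactly like linear-scan register allocation with buffers in place of registers.

```python
# Hypothetical sketch: liveness-based buffer reuse over a linear op schedule.
# Each op produces one tensor; a tensor's buffer is freed after its last
# consumer runs, and freed buffers are handed to later ops.

def plan_buffers(ops):
    """ops: list of (name, inputs) in topological order.
    Returns {tensor_name: buffer_id}, reusing freed buffers."""
    # Liveness: the last op index that reads each tensor.
    last_use = {}
    for i, (_, inputs) in enumerate(ops):
        for t in inputs:
            last_use[t] = i

    free, assignment, next_id = [], {}, 0
    for i, (name, inputs) in enumerate(ops):
        if free:
            assignment[name] = free.pop()   # reuse a dead tensor's buffer
        else:
            assignment[name] = next_id      # otherwise allocate a fresh one
            next_id += 1
        for t in inputs:
            if last_use[t] == i:            # t is dead after this op
                free.append(assignment[t])
    return assignment

# A 4-op chain a -> b -> c -> d needs only 2 buffers instead of 4.
ops = [("a", []), ("b", ["a"]), ("c", ["b"]), ("d", ["c"])]
plan = plan_buffers(ops)
```

Running this on the chain gives `{"a": 0, "b": 1, "c": 0, "d": 1}`: a liveness pass alone already bounds memory by the peak number of live tensors rather than the total op count.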
Hi, I strongly recommend adding mxnet (https://github.com/dmlc/mxnet) to the comparison, which in my opinion may be the fastest DL library :)
+1 for benchmarking mxnet, the fastest now.
+1 for benchmarking mxnet
I would also love to see a comparison with Theano (http://deeplearning.net/software/theano/) as it is another widely adopted deep learning library.
Thanks for benchmarking! |
+1 would love to see tensorflow benchmarked against mxnet, Theano, Autograd for Torch, and Caffe.
Thanks @soumith! Yes, our only launch criterion for convnets was 'GoogLeNet within distance from CuDNN[R2]', and we've punted on a lot of performance work, including upgrading CuDNN, until after the initial release. Expect a lot of movement on that front in the coming weeks.
@aaronwro @fvisin it's already benchmarked against Torch, Theano, Caffe. Look at the readme on the main page (https://github.com/soumith/convnet-benchmarks/blob/master/README.md). @vincentvanhoucke thanks for your response. I assumed that you'll fix it over the next weeks / months :)
@scott-gray let us know if you need help with compounding / graph rewriting. The graph representation is meant to make these kinds of operations possible, and the common subexpression elimination that TF currently uses is also meant as a demonstration of that. I suspect we might need to do a bit more to provide good APIs to make it easier to bake in compound kernels.
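The common subexpression elimination mentioned above can be sketched in a few lines (a generic illustration of the technique, not TensorFlow's actual implementation): hash each node by its op and (already-deduplicated) children, and merge structurally identical nodes.

```python
# Hypothetical sketch of common subexpression elimination (CSE) on a tiny
# expression graph. Nodes are (op, tuple_of_child_indices) in topological
# order; structurally identical nodes are merged via a hash table.

def cse(nodes):
    """Returns (new_nodes, remap) where remap maps old index -> new index."""
    seen = {}       # canonical (op, children) -> index in new_nodes
    remap = {}
    new_nodes = []
    for i, (op, children) in enumerate(nodes):
        # Children are earlier nodes, so they are already remapped.
        key = (op, tuple(remap[c] for c in children))
        if key in seen:
            remap[i] = seen[key]            # duplicate: reuse existing node
        else:
            seen[key] = len(new_nodes)
            remap[i] = len(new_nodes)
            new_nodes.append(key)
    return new_nodes, remap

# (x + y) * (x + y): the two identical adds collapse into one.
nodes = [("x", ()), ("y", ()), ("add", (0, 1)), ("add", (0, 1)), ("mul", (2, 3))]
new_nodes, remap = cse(nodes)
```

Here the five-node graph shrinks to four nodes, and the multiply's operands both point at the single surviving add.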
there seems to be some misinterpretation by random people in social media that because I work for Facebook, I'm attacking TensorFlow. That seems super weird, because I love the vision of TensorFlow, and there's no competition (one can write a XXX frontend for a TensorFlow backend). My benchmarks have always been independently run and completely neutral. I've been running them forever now; it's sad that people misinterpret the slightest of things.
Will defend Soumith on this one – he has indeed been running these benchmarks.
@soumith Excellent, thank you!! |
@soumith no good deed goes unpunished ;) Please don't let this deter you from providing this valuable service to the community! |
@soumith , I am sorry that some people interpreted things that way. I've always appreciated your benchmark, which creates a great atmosphere for us to look at bottlenecks and push forward the field as a whole community. We all owe you a big debt of gratitude. |
@soumith thanks! |
As always, that's super interesting. Thanks for pushing all of us toward more performance. |
For memory optimizations, what we have found is that in-place optimization does not matter that much if the allocator is smart enough to do a static allocation before running the graph (as opposed to relying on a dynamic allocator). We have detailed what can be done here: https://mxnet.readthedocs.org/en/latest/developer-guide/note_memory.html, which I assume applies to computation graph frameworks such as TF, Caffe2 and CGT.
The general idea is not only to share memory between tensors of the same shape (i.e. in-place), but also across different shapes and sizes.
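The cross-shape sharing described above can be sketched as follows (a toy illustration of the idea, not mxnet's planner): because the whole graph is known before execution, a freed region can be handed to any later tensor whose byte size fits, regardless of shape.

```python
# Hypothetical sketch of static, ahead-of-time memory planning with
# cross-shape reuse: a freed region serves any later tensor that fits,
# not just one of identical shape. Best-fit, no region splitting.

def plan_static(ops, dtype_size=4):
    """ops: list of (name, shape, inputs) in topological order.
    Returns (offsets, total_bytes) for one shared arena."""
    def nbytes(shape):
        n = dtype_size
        for d in shape:
            n *= d
        return n

    sizes = {name: nbytes(shape) for name, shape, _ in ops}
    last_use = {}                           # last op index reading each tensor
    for i, (_, _, inputs) in enumerate(ops):
        for t in inputs:
            last_use[t] = i

    free = []                               # (size, offset) regions for reuse
    offsets, total = {}, 0
    for i, (name, _, inputs) in enumerate(ops):
        need = sizes[name]
        fits = [r for r in free if r[0] >= need]
        if fits:
            region = min(fits)              # best fit: smallest region that fits
            free.remove(region)
            offsets[name] = region[1]
        else:
            offsets[name] = total           # grow the arena
            total += need
        for t in inputs:
            if last_use.get(t) == i:        # t is dead: its region is reusable
                free.append((sizes[t], offsets[t]))
    return offsets, total

# Shrinking activations: "c" (64 bytes) reuses the slot freed by "a"
# (256 bytes) even though their shapes differ.
ops = [("a", (64,), []), ("b", (32,), ["a"]), ("c", (16,), ["b"])]
offsets, total = plan_static(ops)
```

With naive per-tensor allocation this graph would need 448 bytes; the plan above needs 384, and the saving grows with network depth.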
@soumith Thanks for running the benchmarks! As @vincentvanhoucke noted in this thread, our goal was to get an early release out so users can start playing with it and provide feedback on what they care about. We are committed to making TensorFlow fast and are actively working on the performance issues you highlight here. |
@soumith You're doing a good deed! Haters gonna hate. |
I'm a little confused by the numbers. 1300 samples/sec seems too fast even for AlexNet on a single Titan X. Is this standard training, i.e. io + forward + backward + update, or just forward + backward?
Nice work. |
@piiswrong I will help @soumith make the benchmark script. Anyway, we have opened everything since the beginning. The main purpose is to learn from each other, not to advertise boring numbers.
I will also add my support to Soumith. He has been running these benchmarks for some time with complete transparency and neutrality.
@koraykv +1, thanks Soumith! |
Someone on reddit suggested that I build TensorFlow from source to fix the speed issues. That did not help; it produces the same numbers as the pip version on my AlexNet script:
TF 0.7.0 released!
👍 +1
Great results 👍 👍 👍 Looking forward to the results with cuDNN v4 |
+1
As requested, TF 0.7 + cuDNN R4 has been benchmarked. cuDNN R4 + Torch has also been benchmarked as a baseline. Among Nervana's Neon, Torch + cuDNN4, and TensorFlow + cuDNN4 (Caffe + cuDNN is likely in the same ballpark as Torch), TensorFlow (at commit tensorflow/tensorflow@1d4f00d) still lags behind the others by 2x to 3x in performance on AlexNet, VGG and GoogLeNet. It is within 1.5x of Overfeat.
For full details, see the main README.md: https://github.com/soumith/convnet-benchmarks/blob/master/README.md and the raw logs are located here: 2888b23 |
I have not changed the benchmark scripts in any way, so if the TF benchmark scripts need any changes (such as new allocator settings etc.), I welcome the TF folks to let me know.
Thanks @soumith, this isn't quite where we had seen our numbers at. Thanks again for running these benchmarks!
Thanks Rajat, happy to investigate further. I built TF from source, and configured it with CUDA 7.5 + CuDNN-4, if that helps. The commit is tensorflow/tensorflow@1d4f00d |
I've had similar numbers using CUDA 7.0, cuDNN v4, and tensorflow/tensorflow@b889710 on a Titan X. Tried fiddling with device placement and the session config, but it made no material difference in the results. @rajatmonga, out of curiosity are you using cuDNN and nvcc internally, or gpucc? |
@nryant Thanks for the additional data point. I am honestly very nervous whenever I have to deliver any negative news on convnet-benchmarks. fwiw, @spezzer on reddit also confirmed that it was a data layout thing as well https://www.reddit.com/r/MachineLearning/comments/487fmo/convnetbenchmarks_updated_with_numbers_for/d0i7ord . |
@soumith: I think in this case it's a combination of layout and some Eigen improvements that hadn't made its way upstream -- we're looking at both of these actively. Thanks again for your effort -- we'll let you know when it makes sense to update the numbers (and provide our own for comparison). |
A recent commit adds NCHW support for BiasAdd, which results in about a 40% speedup.
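The layout difference behind that speedup can be illustrated with plain NumPy broadcasting (an illustration only, not TensorFlow's kernel): in NHWC the per-channel bias broadcasts over the trailing axis directly, while in NCHW it must be reshaped to line up with the channel axis.

```python
# Bias-add under the two data layouts, illustrated with NumPy (not TF).
# NHWC: bias broadcasts over the last (channel) axis directly.
# NCHW: bias must be reshaped to (1, C, 1, 1) to align with channels.
import numpy as np

N, C, H, W = 2, 3, 4, 4
bias = np.arange(C, dtype=np.float32)          # one value per channel

x_nhwc = np.zeros((N, H, W, C), dtype=np.float32)
y_nhwc = x_nhwc + bias                         # broadcasts over trailing C

x_nchw = np.zeros((N, C, H, W), dtype=np.float32)
y_nchw = x_nchw + bias.reshape(1, C, 1, 1)     # align bias with channel axis

# The two results agree once NHWC is transposed to NCHW.
assert np.allclose(y_nhwc.transpose(0, 3, 1, 2), y_nchw)
```

The arithmetic is identical either way; the win from NCHW on GPU comes from matching the layout cuDNN's kernels are tuned for and avoiding transposes around them.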
That's really cool, thanks for letting me know. I'm doing a new, complete set of benchmarks for deep learning, not just convnets, and will cover this commit in them.
Thanks @soumith! No rush though. We have most of the pieces together to support NCHW and expect to see more improvements.
How about tensorflow 0.7? |
Thanks for the benchmark @soumith. Looking forward to the new, updated TensorFlow.
Google's TensorFlow benchmarks are here!
I've run the benchmarks on the Imagenet Winners.
When I saw issues with the numbers, memory etc., I emailed @Yangqing to confirm what I'm seeing, and that it is expected.
With that disclaimer out of the way, here are some things that you should know about TensorFlow (as of the pip version that I installed today):
Coming to the benchmarks:
The largest batch-size I could fit is 32 (tried 32, 64).
AlexNet (One Weird Trick paper) - Input 128x3x224x224
Overfeat [fast] - Input 128x3x231x231
OxfordNet [Model-A] - Input 64x3x224x224
GoogleNet V1 - Input 16x3x224x224
Note that at a batch size of 16, GoogLeNet with CuDNN-R2 + Torch likely runs into dispatching overhead, so it's an exotic comparison, but not practically very interesting or encouraging.
There you go.
I'm assuming that the first release of TensorFlow is still quite unpolished, and that they will improve it over time with various memory and time optimizations baked in.