Freezing Graph Issue #286

Closed
brianantonelli opened this issue Jun 16, 2017 · 19 comments

@brianantonelli

brianantonelli commented Jun 16, 2017

Has anyone had issues freezing the exported protobuf file for TensorFlow mobile? To save my latest checkpoint (YOLOv2 VOC) to a .pb I'm running:

flow --model cfg/yolo-voc-6c.cfg \
     --load -1 \
     --savepb \
     --gpu 0.8

I've verified this graph/meta file pair is valid by running predictions (with valid results) with the following code:

import urllib2

import cv2
import numpy as np
from darkflow.net.build import TFNet

options = {'pbLoad': './built_graph/yolo-voc-6c.pb',
           'metaLoad': './built_graph/yolo-voc-6c.meta',
           'threshold': 0.1,
           'gpu': 1.0}
tfnet = TFNet(options)

# cv2.imdecode expects an encoded byte buffer, so fetch the raw bytes
# and let OpenCV decode them into a BGR array
raw = np.asarray(bytearray(urllib2.urlopen(imgurl).read()), dtype='uint8')
image = cv2.imdecode(raw, cv2.IMREAD_COLOR)
result = tfnet.return_predict(image)
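As an aside, `return_predict` gives back a list of detection dicts (`label`, `confidence`, `topleft`, `bottomright`). A minimal sketch of filtering them by confidence, using hypothetical sample values rather than real darkflow output:

```python
def boxes_above(results, threshold):
    """Keep detections at or above a confidence threshold."""
    return [r for r in results if r['confidence'] >= threshold]

# Hypothetical sample in darkflow's return_predict format
sample = [{'label': 'car', 'confidence': 0.84,
           'topleft': {'x': 10, 'y': 20},
           'bottomright': {'x': 110, 'y': 90}},
          {'label': 'dog', 'confidence': 0.05,
           'topleft': {'x': 0, 'y': 0},
           'bottomright': {'x': 5, 'y': 5}}]
print(boxes_above(sample, 0.1))  # only the 'car' detection survives
```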

I'm now trying to import my graph into TensorFlow for iOS following the instructions from TensorFlow. When I try to freeze the graph (referencing the protobuf file and last checkpoint) with the following command, I get an exception. I got the output_node_names from here. I have tried a variety of other node names (softmax, etc.) with no success.

Freeze Graph Command:

bazel-bin/tensorflow/python/tools/freeze_graph \
     --input_graph=/Users/brianantonelli/Desktop/yolo-voc-6c.pb \
     --input_checkpoint=/Users/brianantonelli/Desktop/yolo-voc-6c-4500 \
     --output_node_names=classes_prob,classes_arg,boxes \
     --input_binary \
     --output_graph=tensorflow/examples/ios/simple/data/tf_yolo_graph.pb

Freeze Graph Result:

Traceback (most recent call last):
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 255, in <module>
    app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 187, in main
    FLAGS.variable_names_blacklist)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 179, in freeze_graph
    variable_names_blacklist)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 105, in freeze_graph_with_def_protos
    saver = saver_lib.Saver(var_list=var_list)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1140, in __init__
    self.build()
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1162, in build
    raise ValueError("No variables to save")
ValueError: No variables to save
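For what it's worth, this traceback arises because freeze_graph builds a tf.train.Saver over the graph's variables, and a .pb written by darkflow's --savepb already has the weights baked in as constants, so there are no Variable nodes left to save. A rough illustration of the check, using made-up op-name lists rather than a real GraphDef:

```python
def find_variable_nodes(node_ops):
    """Ops that freeze_graph's Saver would need to convert; an empty
    result means the graph is already frozen."""
    variable_ops = {'Variable', 'VariableV2'}
    return [op for op in node_ops if op in variable_ops]

# An unfrozen training graph still contains Variable nodes...
training_ops = ['Placeholder', 'VariableV2', 'Conv2D', 'BiasAdd']
# ...while a darkflow --savepb graph holds constants only.
savepb_ops = ['Placeholder', 'Const', 'Conv2D', 'BiasAdd', 'Maximum']

print(find_variable_nodes(training_ops))  # ['VariableV2']
print(find_variable_nodes(savepb_ops))    # []
```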
@zhyj3038

output_node_names must be "output". But can you detect objects with your .pb file?

@brianantonelli (Author)

I changed output_node_names to output but it's still failing:

bazel-bin/tensorflow/python/tools/freeze_graph \
     --input_graph=/Users/brianantonelli/Desktop/tst/yolo-voc-6c.pb \
     --input_checkpoint=/Users/brianantonelli/Desktop/tst/yolo-voc-6c-22000 \
     --output_node_names=output \
     --input_binary \
     --output_graph=tensorflow/examples/ios/simple/data/tf_yolo_graph.pb
Traceback (most recent call last):
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 255, in <module>
    app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 187, in main
    FLAGS.variable_names_blacklist)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 179, in freeze_graph
    variable_names_blacklist)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 105, in freeze_graph_with_def_protos
    saver = saver_lib.Saver(var_list=var_list)
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1140, in __init__
    self.build()
  File "/Users/brianantonelli/Dev/ml/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/training/saver.py", line 1162, in build
    raise ValueError("No variables to save")
ValueError: No variables to save

If I run flow with my PB and metafile it works as expected:

root@cdda1bd2b477:~/workspace# flow --imgdir /yolodata/images/test \
>      --pbLoad /yolodata/tmp/yolo-voc-6c.pb \
>      --metaLoad /yolodata/tmp/yolo-voc-6c.meta \
>      --threshold 0.3 \
>      --gpu 0.9 \
>      --json


Loading from .pb and .meta
GPU mode with 0.9 usage
2017-06-22 15:30:22.416551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0)
Forwarding 16 inputs ...
Total time = 3.77898693085s / 16 inps = 4.2339389611 ips
Post processing 16 inputs ...
Total time = 0.246430873871s / 16 inps = 64.9269296037 ips

@zhyj3038

I think

bazel-bin/tensorflow/python/tools/freeze_graph \
     --input_graph=/Users/brianantonelli/Desktop/tst/yolo-voc-6c.pb \
     --input_checkpoint=/Users/brianantonelli/Desktop/tst/yolo-voc-6c-22000 \
     --output_node_names=output \
     --input_binary \
     --output_graph=tensorflow/examples/ios/simple/data/tf_yolo_graph.pb

may not be the right usage. It builds a .pb from a ckpt, so why does your input have a .pb?

@brianantonelli (Author)

You're completely right. I was following a few different tutorials and most of them had you freeze the graph with the Bazel tools; I see now that saving the output graph also freezes it. I ran the graph through the optimize_for_inference script with success, but now I'm getting this issue:

2017-06-23 10:11:06.603912: E /Users/brianantonelli/Dev/ml/tensorflow/tensorflow/examples/ios/simple/RunModelViewController.mm:151] Could not create TensorFlow Graph: Invalid argument: No OpKernel was registered to support Op 'ExtractImagePatches' with these attrs.  Registered devices: [CPU], Registered kernels:
  <no registered kernels>

	 [[Node: ExtractImagePatches = ExtractImagePatches[T=DT_FLOAT, ksizes=[1, 2, 2, 1], padding="VALID", rates=[1, 1, 1, 1], strides=[1, 2, 2, 1]](47-leaky)]]
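For context, ExtractImagePatches here is the op behind YOLOv2's reorg (space-to-depth) layer. A toy NumPy sketch of what the missing kernel computes with exactly the attrs in the error message; illustrative only, not TensorFlow's implementation:

```python
import numpy as np

def extract_image_patches_2x2(x):
    """Toy version of ExtractImagePatches with ksizes=[1,2,2,1],
    strides=[1,2,2,1], rates=[1,1,1,1], padding='VALID': each
    non-overlapping 2x2 window is flattened into the channel axis."""
    n, h, w, c = x.shape
    out = np.zeros((n, h // 2, w // 2, 4 * c), dtype=x.dtype)
    for i in range(h // 2):
        for j in range(w // 2):
            patch = x[:, 2 * i:2 * i + 2, 2 * j:2 * j + 2, :]
            out[:, i, j, :] = patch.reshape(n, -1)
    return out

x = np.arange(16, dtype=np.float32).reshape(1, 4, 4, 1)
print(extract_image_patches_2x2(x).shape)  # (1, 2, 2, 4)
```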

@zhyj3038

Sorry about this issue; it seems this operator is not recognized by TF, but I have never encountered it before.

@brianantonelli (Author)

Testing it now with this solution: tensorflow/tensorflow#8502 (comment)

@brianantonelli (Author)

To resolve this you have to remove the CocoaPod for TensorFlow and provide your own compiled TensorFlow. This also requires you to add tensorflow/core/kernels/extract_image_patches_op.cc to tensorflow/contrib/makefile/tf_op_files.txt prior to running ./build_all_ios.sh

Configure your xcode project following these steps: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios#creating-your-own-app-from-your-source-libraries

@jeffxtang

@brianantonelli so you're able to use the .pb generated from optimize_for_inference in iOS? I tried the same thing but got this error: Running model failed: Not found: FetchOutputs node output: not found. This is the command I use:

bazel-bin/tensorflow/python/tools/optimize_for_inference \
     --input=/Users/jeff/github_repos/darkflow/built_graph/yolo.pb \
     --output=/Users/jeff/github_repos/darkflow/built_graph/optimized_yolo.pb \
     --frozen_graph=True \
     --input_names=input \
     --output_names=output

If I use the non-optimized yolo.pb generated from flow --model cfg/yolo.cfg --load bin/yolo.weights --savepb, I get the same error Running model failed: Invalid argument: The first dimension of paddings must be the rank of inputs[4,2] [224,224,3] as in #170.

Any ideas or could you please share your iOS code and your optimize_for_inference command? Thanks! Jeff

@brianantonelli (Author)

Here's what I did. First I generated the .pb on my EC2 P2 instance:

flow --model cfg/yolo-voc-6c.cfg \
     --load -1 \
     --savepb \
     --gpu 0.8

Then I configured TensorFlow to support extract_image_patches and compiled all of the iOS dependencies. Finally I optimized the graph for inference before importing it into my Xcode project.

sudo pip install backports.weakref==1.0rc1

git clone tensorflow
cd tensorflow

echo "tensorflow/core/kernels/extract_image_patches_op.cc" >> tensorflow/contrib/makefile/tf_op_files.txt
cd tensorflow/contrib/makefile && ./build_all_ios.sh
cd -

bazel-bin/tensorflow/python/tools/optimize_for_inference \
     --input=/Users/brianantonelli/Desktop/tst/yolo-voc-6c.pb \
     --output=tensorflow/examples/ios/simple/data/tf_yolo_graph.pb \
     --input_names=input \
     --output_names=output \
     --frozen_graph=True

@jeffxtang

Thanks @brianantonelli for your reply. Two follow-up questions please: where did you get yolo-voc-6c.cfg? I don't see it at https://github.com/thtrieu/darkflow/tree/master/cfg. What TensorFlow version were you using for it; can you please share the result of git log |head -5 within your TensorFlow source root? Thanks!

@brianantonelli (Author)

I'm training my own dataset so that's my customized config.

My TF git SHA: 0d2f6918322c7bf29d1de3075b0d4ed3b1b72919

@jeffxtang

Thanks, @brianantonelli. I made some progress - am able to run optimize_for_inference after your helpful tip sudo pip install backports.weakref==1.0rc1 in TF 1.2 but when I use the optimized pb in iOS, I still get Running model failed: Not found: FetchOutputs node output: not found. If I use the non-optimized pb (yolo-tiny.pb, obtained after running flow --model cfg/v1/yolo-tiny.cfg --load bin/yolo-tiny.weights --savepb), I still get the Running model failed: Invalid argument: The first dimension of paddings must be the rank of inputs[4,2] [224,224,3] error.

I'm training a test dataset to see if it'll make the difference. If it's not too much trouble, could you please try the yolo-tiny.pb generated above and its optimized version in your iOS code to see if it works too? Much appreciated! Jeff

@brianantonelli (Author)

@jeffxtang I'll try and get that for you either this weekend or Monday.

@jeffxtang

Thanks @brianantonelli. Just some new progress: I found my iOS code was

  tensorflow::Tensor image_tensor(
      tensorflow::DT_FLOAT,
      tensorflow::TensorShape(
          {image_height, image_width, wanted_channels}));
  auto image_tensor_mapped = image_tensor.tensor<float, 3>();

After I changed it to:

  tensorflow::Tensor image_tensor(
      tensorflow::DT_FLOAT,
      tensorflow::TensorShape({
          1, wanted_height, wanted_width, wanted_channels}));
  auto image_tensor_mapped = image_tensor.tensor<float, 4>();

I'm able to run the yolo-tiny.pb successfully in iOS, but its optimized version still gives Running model failed: Not found: FetchOutputs node output: not found. I recall I used TensorBoard to look at the graph for yolo-tiny.pb and its optimized version, and indeed there's no output node in the optimized version. Does your retrained model still have the output node in the optimized version? I used the quantized versions of the .pb files before to reduce the size and make the code run without crashing on an actual iOS device, but haven't tried the optimized version yet. Thanks for your help. Jeff

@jeffxtang

Can @brianantonelli or anyone please share how to parse (post-process) the output tensor values in iOS and get the labels and boxes info? I'm kind of lost at the code in def box_constructor(meta, np.ndarray[float,ndim=3] net_out_in): of cy_yolo2_findboxes.pyx. Thanks!
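For anyone stuck at the same step: the decoding that box_constructor performs follows the standard YOLOv2 box equations. A hedged NumPy sketch under my own naming (anchors in grid-cell units, no NMS or thresholding, so not a drop-in replacement for the Cython code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_yolov2(net_out, anchors, num_classes):
    """net_out: (H, W, B*(5+num_classes)) raw network output;
    anchors: list of (w, h) pairs in grid-cell units.
    Returns (bx, by, bw, bh, class_probs) per box, coords in [0, 1]."""
    H, W, _ = net_out.shape
    B = len(anchors)
    out = net_out.reshape(H, W, B, 5 + num_classes)
    boxes = []
    for row in range(H):
        for col in range(W):
            for b, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, to = out[row, col, b, :5]
                bx = (col + sigmoid(tx)) / W        # box centre x
                by = (row + sigmoid(ty)) / H        # box centre y
                bw = aw * np.exp(tw) / W            # box width
                bh = ah * np.exp(th) / H            # box height
                conf = sigmoid(to)                  # objectness
                probs = np.exp(out[row, col, b, 5:])
                probs = conf * probs / probs.sum()  # softmax * objectness
                boxes.append((bx, by, bw, bh, probs))
    return boxes

# A 13x13 grid with one anchor yields one candidate box per cell
boxes = decode_yolov2(np.zeros((13, 13, 7), dtype=np.float32),
                      [(1.0, 1.0)], 2)
print(len(boxes))  # 169
```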

@jeffxtang

I figured this out and just shared a new repo: https://github.com/jeffxtang/yolov2_tf_ios

@macro-dadt

@brianantonelli did you resolve this problem? It seems like you froze the .pb used in this repo (https://github.com/KleinYuan/tensorflow-yolo-ios). I always get this error (Input checkpoint '' doesn't exist!) when I try to freeze the .pb using something like:

bazel-bin/tensorflow/python/tools/freeze_graph \
     --input_graph=/Users/brianantonelli/Desktop/yolo-voc-6c.pb \
     --input_checkpoint=/Users/brianantonelli/Desktop/yolo-voc-6c-4500 \
     --output_node_names=output \
     --input_binary \
     --output_graph=tensorflow/examples/ios/simple/data/tf_yolo_graph.pb

@Fattouh92

@brianantonelli Hi, I am facing similar issues with freezing my network. Would highly appreciate if you could take a look at my issue #903 Thanks

@macro-dadt

macro-dadt commented Sep 25, 2018

> @brianantonelli Hi, I am facing similar issues with freezing my network. Would highly appreciate if you could take a look at my issue #903 Thanks

Hi, I did freeze the graph successfully. This is what I did; hope it helps you:

sudo pip install backports.weakref==1.0rc1

bazel-bin/tensorflow/python/tools/optimize_for_inference \
     --input=/.../yolo.pb \
     --output=/.../ff_yolo_graph.pb \
     --input_names=input \
     --output_names=output \
     --frozen_graph=True
