How to use your own dataset: ValueError: Node 'generator_1/latent_vector/Pad' has an _output_shapes attribute inconsistent with the GraphDef for output #0: #1
I'm not completely sure, but I believe the problem is that the image size you used for training is different from the one you used for inference. The error is complaining that the padding size is different from the network created during training. Notice the
If that doesn't work, please copy/paste the exact command you used. Thanks.
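A minimal sketch of the point being made (this is not TwinGAN code, and `check_image_size` is a hypothetical name): the saved GraphDef bakes in the spatial size used at training time, so feeding a different size at inference triggers exactly this kind of shape mismatch.

```python
# Hypothetical sketch: a frozen graph only accepts the spatial size it was
# trained with, so inference inputs must match the training image size.
def check_image_size(trained_hw, inference_hw):
    """Raise a mismatch error analogous to the one the Pad node reports."""
    if trained_hw != inference_hw:
        raise ValueError(
            "graph was built for %dx%d inputs but received %dx%d; "
            "re-run inference with the training image size"
            % (trained_hw, trained_hw, inference_hw, inference_hw))

check_image_size(4, 4)  # OK when sizes match
```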
@jerryli27 (P.S. --train_image_size=4 is working well)
@jerryli27 image_translation_infer.py is also not working:
ValueError: Node 'generator_1/latent_vector/Pad' has an _output_shapes attribute inconsistent with the GraphDef for output #0: Dimension 1 in both shapes must be equal, but are 10 and 7. Shapes are [1,10,10,?] and [?,7,7,256].
You have a misunderstanding of what PGGAN is versus TwinGAN. PGGAN is not an image translation model, so it can't translate human faces. Try using TwinGAN, following the training guide. On a side note, I tried the Danbooru dataset as well. The quality was quite poor because there is much more variation in that dataset; not all faces are forward-facing, etc.
@jerryli27
So I understand I need to convert the CelebA dataset. I used this command to run convert_celeba.py
How do I solve AttributeError: 'NoneType' object has no attribute 'endswith'?
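A common cause of this AttributeError (a hedged sketch, not convert_celeba.py's actual code; `safe_endswith` is a hypothetical name): a filename flag defaults to None, and the script then calls a string method on it. Checking for None up front gives a clearer failure.

```python
# Hypothetical sketch: "'NoneType' object has no attribute 'endswith'" usually
# means a path flag was never set, so None reached code expecting a string.
def safe_endswith(filename, suffix=".tfrecord"):
    if filename is None:
        # Fail loudly instead of letting None.endswith(...) raise AttributeError.
        raise ValueError("filename is None -- check that the flag was set")
    return filename.endswith(suffix)
```

If the script raises this AttributeError, the fix is usually to pass the missing flag on the command line rather than to change the code.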
@jerryli27 I want to make a CelebA dataset like the preprocessed Getchu dataset.
I went through this process and got three .tfrecord files (train / test / validation).
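A quick way to sanity-check the generated files (a sketch with a hypothetical `count_tfrecords` helper, relying on the documented TFRecord on-disk layout: an 8-byte little-endian length, a 4-byte length CRC, the payload, and a 4-byte data CRC per record):

```python
import struct

# Hypothetical sketch: count records in a .tfrecord file without TensorFlow.
# CRCs are skipped, so this is a sanity check, not a full validator.
def count_tfrecords(path):
    n = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)  # uint64 little-endian payload length
            if not header:
                break
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length-crc, payload, data-crc
            n += 1
    return n
```

A nonzero count for each of the train/test/validation files is a good sign the conversion actually wrote examples.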
The first one you pointed out was a warning, not an error. It does not affect the code; it simply skips outputting the image translation of a fixed set of images (which is useful for seeing how training progresses, but is totally optional). The second one I think you figured out already. When you train, just point your
If that gives you an error, try to set
@jerryli27 I executed the command like this.
(All three commands produce the same error.)

INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.FailedPreconditionError'>, test/train-B; Is a directory
OutOfRangeError (see above for traceback): FIFOQueue '_5_prefetch_queue/fifo_queue' is closed and has insufficient elements (requested 1, current size 0)

[Folder Structure]

How can I solve this error? Please answer my question.
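The "test/train-B; Is a directory" message suggests the input pattern resolved to a directory rather than to the .tfrecord files inside it, so the reader queue closed empty. A hedged sketch of the check (the `resolve_tfrecord_files` helper is hypothetical, not part of TwinGAN):

```python
import glob
import os

# Hypothetical sketch: fail early when a dataset pattern matches a directory
# (e.g. test/train-B) instead of the *.tfrecord files it contains.
def resolve_tfrecord_files(pattern):
    matches = [p for p in glob.glob(pattern) if os.path.isfile(p)]
    if not matches:
        raise FileNotFoundError(
            "pattern %r matched no regular files; point it at the "
            "*.tfrecord files themselves, not their directory" % pattern)
    return matches
```

Under this reading, the fix is to make the dataset flag a file glob such as `test/train-B/*.tfrecord` rather than the directory path `test/train-B`.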
@jerryli27 What should I do? I executed the command like this, but it produces an error.
The error message was
I will double-check that everything provided in this repo works out of the box. Sorry for the inconvenience, but it will take some time. In the meantime, please try to debug it yourself, as that is probably the fastest way to get things done, and I can't provide 24/7 debugging support.
I'm making a dataset that contains CelebA (provided on the website) and the Danbooru face dataset, to create Japanese animated characters when a human face is input.
I visited your GitHub (https://github.com/jerryli27/TwinGAN) and referred to this link to make my own dataset (https://github.com/jerryli27/TwinGAN/blob/master/docs/use_your_dataset.md).
After training, I executed python image_translation_infer.py
(I changed image_translation_infer.py like this)
ValueError: Node 'generator_1/latent_vector/Pad' has an _output_shapes attribute inconsistent with the GraphDef for output #0: Dimension 1 in both shapes must be equal, but are 262 and 7. Shapes are [1,262,262,?] and [?,7,7,256].
This code is not working, but the pre-trained model works fine.
What should I do? How can I make this checkpoint work in this code?
Please answer my question!
Thank you!
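The failing dimension is at least consistent with the size-mismatch explanation given above. A sketch of the arithmetic (the pad width of 3 per side is an assumption read off the error shapes, not something confirmed by the TwinGAN source): 262 = 256 + 2 * 3, and the earlier error's [1,10,10,?] likewise matches 4 + 2 * 3 from `--train_image_size=4`.

```python
# Assumed: generator_1/latent_vector/Pad adds 3 pixels per side.
PAD = 3

def padded_size(image_hw, pad=PAD):
    """Spatial size after symmetric padding on both sides."""
    return image_hw + 2 * pad

print(padded_size(256))  # 262, the failing dimension in this error
print(padded_size(4))    # 10, the failing dimension in the earlier error
```

Both errors therefore point the same way: the inference input size must equal the size the checkpoint was trained with.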