The result of my own training, tested with image_translation_infer.py, looks very bad. #16

Open · c1a1o1 opened this issue Sep 25, 2018 · 4 comments


c1a1o1 commented Sep 25, 2018

The result of my own training, tested with image_translation_infer.py, looks very bad.

jerryli27 (Owner) commented

It'd be helpful if you could share the dataset used, the training script, and the results here. Thanks!


c1a1o1 commented Sep 26, 2018

@jerryli27
For CelebA-to-cat translation, my training parameters are:

--program_name=twingan
--dataset_name="ren"
--dataset_dir="datasets/ren2cat/ren/tfrecord/"
--unpaired_target_dataset_name="cat"
--unpaired_target_dataset_dir="datasets/ren2cat/cat/tfrecord/"
--train_dir="./checkpoints/rencat/"
--dataset_split_name=train
--preprocessing_name="danbooru"
--resize_mode=RESHAPE
--do_random_cropping=True
--learning_rate=0.0001
--learning_rate_decay_type=fixed
--is_training=True
--generator_network="pggan"
--use_unet=True
--num_images_per_resolution=300000
--loss_architecture=dragan
--gradient_penalty_lambda=0.25
--pggan_max_num_channels=256
--generator_norm_type=batch_renorm
--hw_to_batch_size="{4: 8, 8: 8, 16: 8, 32: 8, 64: 8, 128: 4, 256: 3, 512: 2}"
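
For reference, a minimal sketch of how these flags would be passed on the command line, assuming train.py at the repo root is the training entry point (the entry-point name is an assumption; substitute your actual trainer). Every flag value is copied verbatim from the list above:

```sh
# Sketch only: train.py is an assumed entry-point name.
python train.py \
  --program_name=twingan \
  --dataset_name="ren" \
  --dataset_dir="datasets/ren2cat/ren/tfrecord/" \
  --unpaired_target_dataset_name="cat" \
  --unpaired_target_dataset_dir="datasets/ren2cat/cat/tfrecord/" \
  --train_dir="./checkpoints/rencat/" \
  --dataset_split_name=train \
  --preprocessing_name="danbooru" \
  --resize_mode=RESHAPE \
  --do_random_cropping=True \
  --learning_rate=0.0001 \
  --learning_rate_decay_type=fixed \
  --is_training=True \
  --generator_network="pggan" \
  --use_unet=True \
  --num_images_per_resolution=300000 \
  --loss_architecture=dragan \
  --gradient_penalty_lambda=0.25 \
  --pggan_max_num_channels=256 \
  --generator_norm_type=batch_renorm \
  --hw_to_batch_size="{4: 8, 8: 8, 16: 8, 32: 8, 64: 8, 128: 4, 256: 3, 512: 2}"
```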

My test parameters are:

--model_path="../checkpoint/rencat/256/"
--image_hw=256
--input_tensor_name="sources_ph"
--output_tensor_name="custom_generated_t_style_source:0"
--input_image_path="../demo/face/var256/"
--output_image_path="../demo/face/cat256/"
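
And the matching inference invocation as a sketch; the script location inference/image_translation_infer.py is an assumption about the repo layout, and the flag values are copied from the list above:

```sh
# Sketch only: the script path is assumed; run from wherever makes the
# relative --model_path and image paths above resolve correctly.
python inference/image_translation_infer.py \
  --model_path="../checkpoint/rencat/256/" \
  --image_hw=256 \
  --input_tensor_name="sources_ph" \
  --output_tensor_name="custom_generated_t_style_source:0" \
  --input_image_path="../demo/face/var256/" \
  --output_image_path="../demo/face/cat256/"
```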

Is there something wrong with these settings?

Thank you very much!

jerryli27 (Owner) commented

Please try adding --do_pixel_norm=True to the training script; that stabilizes things a bit. Another tip: the image output during training should already look good at 32x32 or 64x64. If it doesn't, you don't need to train all the way to 256; that would be a waste of time.

And if you want, sharing the images and the TensorBoard output would help a lot with debugging. Let me know how it goes.
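
One way to check those intermediate outputs is to point TensorBoard at the training directory; a minimal sketch, assuming the event files (including image summaries) are written to the --train_dir from the training command above:

```sh
# Assumes image summaries land in --train_dir alongside checkpoints.
tensorboard --logdir=./checkpoints/rencat/
# Then open http://localhost:6006 and inspect the Images tab while training
# is still at the 32x32 / 64x64 stages, before committing to a 256x256 run.
```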


c1a1o1 commented Nov 2, 2018

Thank you very much!
