A basic implementation of the architecture from "DA-GAN: Instance-level Image Translation by Deep Attention Generative Adversarial Networks".
This is a basic implementation; the structure described in the paper is outlined below:
1. The image is encoded by a convolutional encoder.
2. Another set of convolutions, followed by a fully connected layer, predicts an attention region (similar to a bounding box in object detection), which is then used to mask the original image.
3. The masking operation does not use a hard 0/1 mask; instead it uses a sigmoid, which keeps the mask differentiable for backpropagation and makes the transition 'softer'.
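The soft masking in step 3 can be sketched as follows. This is a minimal NumPy illustration, not the repository's code: the box parametrisation `(cx, cy, bh, bw)` and the sharpness constant `k` are assumptions chosen for clarity.

```python
import numpy as np

def soft_box_mask(h, w, cx, cy, bh, bw, k=10.0):
    """Differentiable 'soft' box mask: sigmoid ramps instead of a hard
    0/1 crop, so gradients can flow back to the predicted box parameters.
    (cx, cy) is the box centre, (bh, bw) its half-height/half-width,
    and k controls edge sharpness. All parameter names are illustrative."""
    ys = np.arange(h)[:, None]   # column vector of row indices
    xs = np.arange(w)[None, :]   # row vector of column indices
    # clip the argument to avoid overflow in exp for points far from the box
    sig = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))
    # the product of two opposing sigmoid edges per axis approximates
    # an axis-aligned box with smooth boundaries
    mask_y = sig(k * (ys - (cy - bh))) * sig(k * ((cy + bh) - ys))
    mask_x = sig(k * (xs - (cx - bw))) * sig(k * ((cx + bw) - xs))
    return mask_y * mask_x       # shape (h, w), values in (0, 1)

mask = soft_box_mask(64, 64, cx=32, cy=32, bh=15, bw=15)
masked_image = mask[None, :, :] * np.ones((3, 64, 64))  # apply to a dummy CHW image
```

Because every pixel's mask value is a smooth function of the predicted box parameters, the attention branch can be trained end-to-end by ordinary gradient descent, which is exactly why a hard 0/1 crop is avoided.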
1. Some samples in the SVHN dataset are labelled incorrectly; clean them first:
python D:\SVHN_dataset\train\DAE_GAN\Data.py --op_type="clear" --im_dir="D:/SVHN_dataset/train/forcmd/" --raw_dir="D:/SVHN_dataset/train/forcmd/" --if_clip=False
2. Create positive and negative sample data:
python D:\SVHN_dataset\train\DAE_GAN\Data.py --op_type="create" --im_dir="D:/SVHN_dataset/train/forcmd/" --raw_dir="D:/SVHN_dataset/train/forcmd/" --if_clip=False
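The `create` step above builds positive and negative sample pairs. The exact pairing rule lives in `Data.py`; a plausible sketch, assuming positives are two images of the same SVHN digit class and negatives are images of different classes (all function and variable names here are illustrative):

```python
import random
from collections import defaultdict

def make_pairs(samples, n_pairs, seed=0):
    """Build (img_a, img_b, label) tuples: label 1 for two images of the
    same digit class (positive), 0 for different classes (negative).
    `samples` is a list of (image_path, digit) tuples. Illustrative only:
    the repository's Data.py may use a different pairing rule."""
    rng = random.Random(seed)
    by_digit = defaultdict(list)
    for path, digit in samples:
        by_digit[digit].append(path)
    digits = list(by_digit)
    pairs = []
    for _ in range(n_pairs):
        # roughly half positives, half negatives
        if rng.random() < 0.5 and any(len(v) > 1 for v in by_digit.values()):
            d = rng.choice([d for d in digits if len(by_digit[d]) > 1])
            a, b = rng.sample(by_digit[d], 2)
            pairs.append((a, b, 1))                      # positive pair
        else:
            d1, d2 = rng.sample(digits, 2)               # two distinct classes
            pairs.append((rng.choice(by_digit[d1]),
                          rng.choice(by_digit[d2]), 0))  # negative pair
    return pairs
```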
3. Train, or load a checkpoint for testing:
Train:
python D:\SVHN_dataset\train\DAE_GAN\train.py --is_train="train" --im_size=64 --batch=2 --epoch=100 --hw_size=30 --k=2e5 --alpa=0.9 --beta=0.5 --im_dir="D:/SVHN_dataset/train/forcmd/" --save_dir="D:/SVHN_dataset/train/forcmd/ckpt/" --saveS_dir="D:/SVHN_dataset/train/forcmd/SampleS/" --saveT_dir="D:/SVHN_dataset/train/forcmd/SampleT/"
Test:
python D:\SVHN_dataset\train\DAE_GAN\train.py --is_train="test" --load_dir="D:/SVHN_dataset/train/ckpt/" --raw_im_dir="D:/SVHN_dataset/test" --save_im_dir="D:/SVHN_dataset/test_save/"