This project generates anime faces using Generative Adversarial Networks (GANs). We implemented and trained a generator and a discriminator model for this task.
- Clone the repository:

```bash
git clone https://github.com/PJF9/Animefaces-GAN.git
cd AnimeFaces-GAN
```

- Create a virtual environment and activate it:

```bash
python -m venv env
source env/bin/activate  # On Windows use `env\Scripts\activate`
```

- Install the dependencies:

```bash
pip install -r requirements.txt
```
- Add your `kaggle.json` file to `/home/usr/.kaggle` (you can generate a key here).
- Modify `src/config.py` to adjust the settings as you prefer.
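For orientation, the configuration module centralizes paths and hyperparameters. The sketch below is illustrative only: the directory constants (`MODELS_PATH`, `PLOTS_PATH`, `IMAGES_PATH`) are mentioned later in this README, but every default value and the remaining names are assumptions, not the repo's actual settings.

```python
# Hypothetical contents of src/config.py -- names marked below as assumed
# may differ from the real module; adjust to taste.
MODELS_PATH = "checkpoints"   # training checkpoints and the best generator
PLOTS_PATH = "plots"          # loss curves saved after training
IMAGES_PATH = "generated"     # samples from training and predict.py

IMAGE_SIZE = 64               # assumed: images resized to IMAGE_SIZE x IMAGE_SIZE
LATENT_DIM = 100              # assumed: size of the generator's noise input
BATCH_SIZE = 128              # assumed
EPOCHS = 50                   # assumed
LEARNING_RATE = 2e-4          # assumed: a common Adam setting for DCGANs
```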
- To download the dataset, run:

```bash
python3 data.py
```

- To train the GAN models, run:

```bash
python3 train.py
```

- To make a single prediction, run:

```bash
python3 predict.py
```
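Conceptually, `predict.py` samples the latent space with the trained generator. A minimal sketch of that idea follows; the stand-in linear generator, the latent size, and the 64x64 output are all placeholders (the real script would instead build the model from `generator.py` and restore its trained weights with `load_state_dict`):

```python
# Hedged inference sketch -- NOT the repo's predict.py. The generator here is
# a toy stand-in for the trained model that the real script would load.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed noise-vector size; see src/config.py

# Toy stand-in generator: maps noise to a flattened 3x64x64 image in [-1, 1].
G = nn.Sequential(nn.Linear(LATENT_DIM, 64 * 64 * 3), nn.Tanh())

G.eval()
with torch.no_grad():
    z = torch.randn(1, LATENT_DIM)   # one sample from the latent prior
    img = G(z).view(3, 64, 64)       # fake image, values in (-1, 1)
    img = (img + 1) / 2              # rescale to [0, 1] before saving to disk
```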
- `generator.py`: Defines the generator model architecture.
- `discriminator.py`: Defines the discriminator model architecture.
- `trainer.py`: Contains the training loop for the GAN models.
- `data.py`: Handles data loading and preprocessing.
- `device.py`: Manages device configuration (CPU/GPU).
- `log.py`: Contains logging functionalities.
- `save.py`: Functions for saving models and outputs.
- `visualization.py`: Functions for visualizing generated images.
- `config.py`: Handles configuration file parsing.

- `data.py`: Script to preprocess the dataset.
- `train.py`: Script to start training the GAN models.
- `predict.py`: Script to generate images using the trained generator model.
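To make the generator/discriminator split concrete, here is a DCGAN-style sketch of the two models. This is an assumption about the architecture, not the repo's actual `generator.py`/`discriminator.py`: the layer widths, the 100-dimensional latent vector, and the 64x64 RGB output are all illustrative defaults.

```python
# Hypothetical DCGAN-style models -- the real architectures in generator.py
# and discriminator.py may differ.
import torch
import torch.nn as nn

LATENT_DIM = 100  # assumed size of the noise vector

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector to a 4x4 map, then upsample to 64x64.
            nn.ConvTranspose2d(LATENT_DIM, 512, 4, 1, 0, bias=False),
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized images
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Downsample 64x64 -> 4x4, then score with a single sigmoid unit.
            nn.Conv2d(3, 64, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1, bias=False),
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(256, 512, 4, 2, 1, bias=False),
            nn.BatchNorm2d(512), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(512, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),  # probability that the input image is real
        )

    def forward(self, x):
        return self.net(x).view(-1)

z = torch.randn(2, LATENT_DIM, 1, 1)
fake = Generator()(z)           # batch of two 3x64x64 fakes
score = Discriminator()(fake)   # one real/fake probability per image
```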
While running the scripts, some extra directories will be created:
- `./checkpoints` (`config.MODELS_PATH`): This directory will save all the checkpoints of training and the best generator model.
- `./plots` (`config.PLOTS_PATH`): This directory will save the loss curves after training.
- `./generated` (`config.IMAGES_PATH`): This directory will save the generated images during training and the results from the `predict.py` script.
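The checkpoints above come out of the alternating GAN updates that `trainer.py` implements. The sketch below shows one such step on toy tensors; it is a generic adversarial-training sketch, not the repo's loop, and the tiny linear models, optimizer settings, and checkpoint filename are all assumptions.

```python
# Hedged sketch of one GAN training step plus checkpointing -- generic, not
# the repo's trainer.py. Models here are toy stand-ins on 8-dim "images".
import os
import torch
import torch.nn as nn

latent_dim = 16
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 8)        # stand-in for a batch of real images
z = torch.randn(64, latent_dim)  # noise batch

# Discriminator step: push real scores toward 1, fake scores toward 0.
fake = G(z).detach()  # detach so the generator gets no gradient here
d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make the discriminator call fakes real.
g_loss = bce(D(G(z)), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

# Checkpoint, mirroring what training would write under config.MODELS_PATH.
os.makedirs("checkpoints", exist_ok=True)
torch.save({"generator": G.state_dict(),
            "d_loss": d_loss.item(),
            "g_loss": g_loss.item()}, "checkpoints/demo.pt")
```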
Contributions are welcome! Please open an issue or submit a pull request for any improvements or suggestions.