We use params-proto to specify hyperparameters and generate sweep.jsonl files, ml-logger to centralize metric logging and checkpointing, and jaynes to launch experiments on the cloud.
To set up the environment:
conda create -n sculpting python=3.8
conda install pycurl
pip install params-proto jaynes ml-logger cloudpickle==1.3.0
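A sweep.jsonl file is assumed here to hold one JSON object of hyperparameter overrides per line; the real files are generated with params-proto's sweep utilities, but the standard-library sketch below shows the format with illustrative parameter names (Args.seed, Args.lr are not the repo's actual keys).

# Minimal sketch (not the params-proto API): write a sweep file with one JSON
# hyperparameter configuration per line. Key names are illustrative only.
import json
from itertools import product

seeds = [100, 200, 300]
lrs = [1e-3, 1e-4]

with open("sweep.jsonl", "w") as f:
    for seed, lr in product(seeds, lrs):
        f.write(json.dumps({"Args.seed": seed, "Args.lr": lr}) + "\n")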
There are two main experiments: binary composition between two base diffusion models, and chained composition between three base diffusion models. The steps are:
- train the base diffusion models on various colored-MNIST digit distributions
- train a binary classifier over pairs of base models
- sample from the binary compositions (a generic sketch of this step follows the list)
- train the binary classifiers over the pairs used in the chaining experiments
- sample from the chained compositions
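For orientation, here is a generic classifier-guidance sketch of the binary composition step: two frozen score networks are averaged and nudged by the gradient of a binary classifier trained to tell their samples apart. The function and argument names are illustrative, and the actual composition rule implemented in bcomp_sampler.py may differ.

import torch

def composed_score(score_a, score_b, classifier, x, t, w=1.0):
    # Scores from the two frozen base diffusion models at noise level t.
    s_a = score_a(x, t).detach()
    s_b = score_b(x, t).detach()

    # Classifier guidance: gradient of log p(class = A | x_t, t) w.r.t. the input.
    with torch.enable_grad():
        x_in = x.detach().requires_grad_(True)
        log_p = torch.log_softmax(classifier(x_in, t), dim=-1)[:, 0]
        grad = torch.autograd.grad(log_p.sum(), x_in)[0]

    # Average the base scores and push samples toward regions the classifier
    # attributes to model A; w controls the guidance strength.
    return 0.5 * (s_a + s_b) + w * grad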
The experiments folder contains the launcher scripts and their sweep.jsonl files for training and sampling. The models folder contains the model definitions.
diffusion_chaining
├── README.md
├── __init__.py
├── experiments
│   ├── chain.jsonl
│   ├── chain.py
│   ├── ddpm.jsonl
│   ├── ddpm.py
│   ├── sculpting.jsonl
│   └── sculpting.py
├── bcomp.py
├── bcomp_sampler.py
├── chain.py
├── chain_sampler.py
├── ddpm.py
├── ddpm_sampler.py
└── models
    ├── classifier_model.py
    ├── score_model.py
    └── util.py

3 directories, 17 files
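A rough sketch of how a launcher in experiments/ could tie a sweep file to jaynes; the train entry point, the per-line override format, and the exact jaynes configuration are assumptions, so treat this as an illustration rather than the repo's actual launcher.

# Illustrative launcher (assumptions: jaynes.config/run/listen, a .jaynes.yml
# in the project root, and a hypothetical train(**overrides) entry point).
import json

import jaynes
from diffusion_chaining.ddpm import train  # hypothetical entry point

if __name__ == "__main__":
    jaynes.config()                         # pick the launch mode from .jaynes.yml
    with open("experiments/ddpm.jsonl") as f:
        for line in f:
            overrides = json.loads(line)    # one hyperparameter set per line
            jaynes.run(train, **overrides)  # ship the job to the configured backend
    jaynes.listen()                         # stream logs back from the workers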
To train the base diffusion models, run:
python ddpm.py
and to sample from these models:
python ddpm_sampler.py
To train the binary classifiers over pairs of base models, run:
python bcomp.py
Now, to sample from the binary compositions, run:
python bcomp_sampler.py
To train the classifiers used in the chaining experiments, run:
python chain.py
Now, to sample from the chained compositions, run:
python chain_sampler.py
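Conceptually, the chained composition repeats the binary step, treating the first composition as if it were a single base model and guiding it against the third. A schematic sketch under the same assumptions as the binary sketch above (the actual logic lives in chain.py and chain_sampler.py and may differ):

# Schematic only: reuse composed_score from the binary sketch to chain three models.
def chained_score(score_a, score_b, score_c, clf_ab, clf_abc, x, t):
    # Treat the A-B composition as a single score model...
    def ab_score(x, t):
        return composed_score(score_a, score_b, clf_ab, x, t)
    # ...then compose it with the third base model, guided by a second classifier.
    return composed_score(ab_score, score_c, clf_abc, x, t)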