
MPhil Data Intensive Science Dissertation Repository

This repository contains all the code used for generating figures and results in the report, as well as the neural network models.

Installation

Downloading the repository

To download the repository, simply clone it from the GitLab page:

Clone with SSH:

git clone git@gitlab.developers.cam.ac.uk:phy/data-intensive-science-mphil/projects/dw661.git

Clone with HTTPS:

git clone https://gitlab.developers.cam.ac.uk/phy/data-intensive-science-mphil/projects/dw661.git

Usage

Downloading the LoDoPaB-CT dataset

The dataset used for training and evaluating the models, the LoDoPaB-CT dataset, is hosted on Zenodo at https://zenodo.org/records/3384092. It can be downloaded with the zenodo_get tool; if zenodo_get hasn't been installed, it can be installed using the following command:

pip install zenodo_get

Note that zenodo_get is already included in environment.yml, so there is no need to run the above command if the Conda environment is built from that file.
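For example, the Conda environment can be created and activated along the following lines (the environment name below is an assumption; use the name specified in environment.yml if it differs):

# Create the environment from the specification file and activate it
conda env create -f environment.yml
conda activate dw661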

Next, to download the dataset, first make a data directory inside the repository root by running the following command:

mkdir data

Then, from inside the data directory, run the following command to download the dataset's .zip files:

zenodo_get 3384092

It's important to note that the dataset is approximately 55GB compressed and approximately 120GB after extraction, so ensure there is sufficient storage before downloading. For the same reason, we also advise downloading the dataset locally first and then mounting it into the Docker container with appropriate storage settings, rather than downloading it inside the container. For testing purposes, it's sufficient to mount only a subset of the whole dataset into the container. If only a subset is used, make sure to include the last file of each split (the files ending in *279.hdf5 for training and *027.hdf5 for both test and validation), as these final files contain fewer images than the usual $128$ per file, and the length attributes of the dataloaders are specifically configured to account for this difference.
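For example, a locally downloaded data directory could be mounted into the container built in the Docker Instructions section below along these lines (the container-side path /workspace/data is an assumption; adjust it to wherever the scripts expect the dataset):

# Mount the local data/ directory into the container (hypothetical mount point)
docker run --rm --gpus all -v "$(pwd)/data":/workspace/data -ti dw661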

Organisation of the repository

As mentioned in both the report and the documentation, the repository is organised into 4 directories and 2 helper files. Specifically:

  • src/models: this directory contains all the implementations of the PyTorch models. The models included are:
    • Learned Primal Dual (LPD, contained in the file primal_dual_nets.py)
    • Learned Primal Dual Hybrid Gradient (LPDHG, also called Learned Chambolle-Pock in the report, contained in the file learned_PDHG.py)
    • Learned Primal (LP, contained in the file learned_primal.py)
    • Continuous Learned Primal Dual (cLPD, contained in the file continuous_primal_dual_nets.py)
    • Total Variation Learned Primal Dual (TV-LPD, contained in the file tv_primal_dual_nets.py)
    • FBPConvNet (contained in the file u_net.py)
  • src/training_scripts: this directory contains all the training scripts for the aforementioned models.
  • src/evaluation_scripts: this directory contains all the evaluation scripts for the trained models; they import trained checkpoints and evaluate them on test images. The evaluated metrics (MSE, PSNR, SSIM) are saved to CSV files in the same directories the checkpoints are imported from.
  • src/plotting_scripts: this directory contains all the scripts used for generating figures that are used in the reports. The specific figure numbers generated by each script are highlighted in the comments at the top of the files.
  • src/dataloader.py: this file loads HDF5 files and converts them into torch.Tensor objects to be passed into the neural networks (a minimal sketch of this conversion is given after this list).
  • src/utils.py: this file contains miscellaneous helper functions, including functions to add noise to observations, custom plotting functions, and checkpoint-saving functions used during training.
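For reference, a minimal sketch of the kind of conversion src/dataloader.py performs is shown below; the HDF5 dataset key ("data") and the file name are assumptions, and the actual dataloader additionally handles batching and the shorter final files:

# Read one LoDoPaB-CT file and convert its images to a PyTorch tensor
import h5py
import torch

with h5py.File("data/ground_truth_train_000.hdf5", "r") as f:
    images = f["data"][:]  # NumPy array, typically 128 images per file

tensors = torch.from_numpy(images).float().unsqueeze(1)  # add a channel dimension
print(tensors.shape)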

Training the model

To train a model, run the corresponding training script as follows. For the models that only appear in the first subsection of the results (namely Learned Primal and Learned PDHG), the networks can be trained by running the corresponding script in the terminal.

Example:

# Trains the Learned PDHG model
python src/training_scripts/learned_PDHG_network_trainer.py

For the models mentioned in the extension section of the results, an additional command-line argument is required to specify the physical setup to train the model in:

  • default - Corresponds to the originally proposed physical geometry.
  • sparse - Has only $60$ angles of projection over $[0, 180]$ degrees, and initial intensity $I_{0} = 1000$.
  • limited - Has only $60$ angles of projection over $[0, 60]$ degrees, and initial intensity $I_{0} = 1000$.

Example:

# Trains the Learned Primal Dual model under `sparse` physical setup
python src/training_scripts/LPD_network_trainer.py sparse
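Assuming the same trainer script accepts each of the setup names listed above, the other geometries can be trained analogously, for example:

# Trains the Learned Primal Dual model under the `limited` physical setup
python src/training_scripts/LPD_network_trainer.py limited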

Docker Instructions

All the packages used and their versions are listed in the file environment.yml. This file can be used to replicate the Conda environment used during the development of this repository.

To run the Python scripts inside Docker, first build the image:

docker build -t dw661 .

This builds an image called dw661. To deploy and run the container, run the following command:

docker run --rm --gpus all -ti dw661

This starts an interactive session inside the container. The --gpus all flag is needed to enable GPU access inside the container, which is necessary for running most of the scripts. GPU access inside containers requires the NVIDIA Container Toolkit; on devices with GPUs, run the following commands to install it if it isn't already available:

  1. Get the .gpg key set up:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
  2. Install the toolkit:
sudo apt-get update
sudo apt-get install -y nvidia-docker2

After running the above commands, the NVIDIA Container Toolkit should be available for use.
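As an optional check that GPU access works inside containers, the standard NVIDIA test container can be run (the CUDA image tag below is only an example; any recent nvidia/cuda tag should work):

# Should print the same nvidia-smi output as on the host
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi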

Hardware Specifications

Since this project uses the tomosipo package extensively, a GPU is required, as tomosipo only supports GPU computation. It's recommended to train the networks on an HPC system, since training a full model may take roughly $10 \sim 36$ hours, depending on the specific model.
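Before launching a long training run, it may be worth confirming that PyTorch can see the GPU, for example:

# Prints True and the device name if a usable GPU is detected
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"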

Documentation

All files necessary to generate the documentation are under the docs directory. To generate HTML documentation, first ensure that Sphinx is installed (it is already specified in environment.yml), change the working directory to docs, then run the following command:

make html

Pretrained checkpoints

Trained checkpoints for Learned Primal Dual, Learned PDHG, Learned Primal, Continuous Learned Primal Dual and Total Variation Learned Primal Dual are included inside the repository, under the directory checkpoints. In the notebook tutorial/example.ipynb, an example use case for one of the models is given. This reference notebook can be modified accordingly to run pretrained checkpoints for other models, too.
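For a quick look at a checkpoint outside the notebook, a minimal sketch is given below; the file path is hypothetical (use one of the files under checkpoints/), and the layout of the saved object may differ from what is assumed here:

# Inspect a saved checkpoint before wiring it into a model (hypothetical path)
import torch

state = torch.load("checkpoints/LPD/checkpoint.pt", map_location="cpu")
# Depending on how it was saved, this is either a state_dict itself or a
# dictionary containing one (e.g. under a "model_state_dict" key)
print(type(state), list(state.keys())[:5] if isinstance(state, dict) else None)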

The pretrained FBPConvNet checkpoints take up a lot of storage, so they weren't included in the repository. Nevertheless, they can be made available on Google Drive if required.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

License

MIT

Acknowledgement of Generative AI Tools

During the completion of this thesis, generative AI tools such as ChatGPT and Copilot were used supportively and minimally. All code involving algorithms or calculations was entirely produced by myself; Copilot was only used to assist with docstrings and plotting, and ChatGPT was only used for LaTeX syntax queries. Examples of prompts include:

"Arrange the two following to figures such that they are aligned properly in latex."
