Algorithm template docker for submissions to
SynthRAD2023 Grand Challenge
Explore the docs »
View Demo
·
Report Bug
·
Request Feature
The goal of this repository is to provide a seamless and efficient integration of your algorithm into the Grand Challenge platform. This repository offers a template for packaging your algorithm in a Docker container, ensuring compatibility with the input and output formats of the SynthRAD2023 challenge.
With this template, you can submit your algorithm (with minimal effort) to the test phases of the challenge (and optionally the validation phases). You can use the preliminary test phase to make sure everything is in order with your submission and to get an initial estimate of your algorithm's performance.
To build the algorithm for submission, you will need Docker (https://docs.docker.com/) installed on your system. (Note: submissions to the test phase are accepted only as Docker containers.) This algorithm template was tested with Ubuntu 20.04.6 and Docker 23.0.4.
Please make sure to list the requirements for your algorithm in the requirements.txt file, as this will be picked up when building the Docker container.
- Clone the repo
git clone https://github.com/SynthRAD2023/algorithm-template.git
or
git clone git@github.com:SynthRAD2023/algorithm-template.git
Dummy data (randomly created scans) for testing the docker has already been provided in the test
folder. It should be in the following format:
algorithm-template
└── test
├── images
│ ├── body
│ │ └── 1BAxxx.nii.gz
│ └── mri
│ └── 1BAxxx.nii.gz
├── region.json
└── expected_output.json
test/images simulates the input data provided to the docker image when it is run on the test data by the Grand Challenge platform. test/expected_output.json is used to check whether your algorithm produces the expected output for the inputs provided to it.
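As a rough illustration of what such a check involves, the sketch below lists expected output files that are missing on disk. The schema (a list of cases with an "outputs" list of entries carrying a "filename" key) is an assumption based on the JSON that test.sh prints; consult your copy of expected_output.json for the actual format.

```python
import json
import os

def missing_outputs(expected_json_path: str, root: str = "") -> list:
    """Return output filenames from expected_output.json that do not exist.

    Schema assumed from the test.sh log in this README; `root` lets you remap
    absolute container paths (e.g. /output/...) to a local directory when
    checking results outside the docker.
    """
    with open(expected_json_path) as f:
        cases = json.load(f)
    missing = []
    for case in cases:
        for out in case.get("outputs", []):
            path = out["filename"]
            if root:
                path = os.path.join(root, path.lstrip("/"))
            if not os.path.exists(path):
                missing.append(path)
    return missing
```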
First, run test.sh. It will build the docker container (which contains the algorithm), provide the test folder as input to the container, and run the algorithm. The output of the algorithm will be placed in the output directory. Tests will also be run to check whether the algorithm produces the expected output.
Note: It is recommended to run this before you integrate your own algorithm into the template.
The output of the test.sh script should look similar to this:
########## ENVIRONMENT VARIABLES ##########
TASK_TYPE: mri
INPUT_FOLDER: /input
Scan region: Head and Neck
[
  {
    "outputs": [
      {
        "type": "metaio_image",
        "filename": "/output/images/synthetic-ct/1BAxxx.nii.gz"
      }
    ],
    "inputs": [
      {
        "type": "metaio_image",
        "filename": "/input/images/mri/1BAxxx.nii.gz"
      },
      {
        "type": "metaio_image",
        "filename": "/input/images/body/1BAxxx.nii.gz"
      },
      {
        "type": "String",
        "filename": "/input/region.json"
      }
    ],
    "error_messages": []
  }
]
Tests successfully passed...
To integrate your algorithm into this template, you need to modify the predict function of the SynthradAlgorithm class in the process.py file.
class SynthradAlgorithm(BaseSynthradAlgorithm):
    ...
    def predict(self, input_dict: Dict[str, sitk.Image]) -> sitk.Image:
        """
        Your algorithm implementation.
        """
        # Your code here
        return output_image
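As a minimal sketch of what a predict body might look like: pull the array out of the SimpleITK image, run a (placeholder) intensity mapping in place of real model inference, and wrap the result back into an image with the input's geometry. The input key "mri" and the fake_ct_from_mri mapping are illustrative assumptions, not part of the template.

```python
import numpy as np

def fake_ct_from_mri(mri: np.ndarray) -> np.ndarray:
    """Placeholder 'model': rescale MRI intensities to a CT-like HU range.
    Purely illustrative; replace with real inference."""
    lo, hi = float(mri.min()), float(mri.max())
    scaled = (mri - lo) / (hi - lo + 1e-8)
    return scaled * 3000.0 - 1000.0  # roughly [-1000, 2000] HU

def predict(self, input_dict):
    # Method body sketch for SynthradAlgorithm.predict; the "mri" key name
    # is assumed from the test folder layout.
    import SimpleITK as sitk  # already imported at the top of process.py
    mri = input_dict["mri"]
    arr = sitk.GetArrayFromImage(mri)
    out = sitk.GetImageFromArray(fake_ct_from_mri(arr))
    out.CopyInformation(mri)  # preserve spacing/origin/direction of the input
    return out
```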
You might need to load your model before running the algorithm; this can be done in the __init__ function of the SynthradAlgorithm class. For instance, to load a PyTorch model:
import os
import torch

class SynthradAlgorithm(BaseSynthradAlgorithm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Load the PyTorch model
        model_path = os.path.join(os.path.dirname(__file__), 'your_model.pt')
        self.model = torch.load(model_path)
        self.model.eval()
Grand Challenge processes algorithm dockers on a per-scan basis, as mentioned here. The test folder in this repo represents one such case and simulates how the algorithm docker will be run on a single case when evaluating on Grand Challenge. If you would like to test on multiple cases, create copies of the test folder and add each case's data to them. You can also make a trial submission and look at the debug instructions to see exactly how it is run on a case-by-case basis.
This is a newly added feature, based on requests from our participants. You can access a "region" key in the input_dict argument of the predict function. process.py shows a sample of how it can be obtained and used.
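For instance, one way to use the region value is to pick a region-specific checkpoint. The helper below is hypothetical: the "region" key comes from the template docs, but the checkpoint naming scheme is made up for illustration.

```python
def checkpoint_for_region(region: str) -> str:
    """Map a scan-region string (e.g. "Head and Neck", as read from
    /input/region.json) to a checkpoint filename. Naming is illustrative."""
    return region.lower().replace(" ", "_") + ".pt"
```

In predict, you could then select the model to run via checkpoint_for_region(input_dict["region"]).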
Since the challenge contains two tasks, you will need to provide separate docker containers for each (even if you run the exact same algorithm on both). To configure which task your docker will be built for, we have provided a .env file. You can modify it before building the docker image, and your docker will be built for the selected task.
TASK_TYPE="cbct" # Set to mri (Task 1) or cbct (Task 2)
INPUT_FOLDER="/input" # Do not change unless you want to test locally
Change TASK_TYPE to "cbct" or "mri" depending on the task you want to submit to. Do not change INPUT_FOLDER unless you are testing locally.
It is recommended to run test.sh first, as it will ensure that your new algorithm still runs as expected.
- Run the export.sh script. You can provide a name as the next argument to name your docker container, for example: ./export.sh cbct_docker
- You might need to wait a while for this process to complete (depending on your model size and the dependencies you added), as it builds the docker image and saves it as a .tar.gz file. Once you have this .tar.gz file, you can submit it on the Grand Challenge portal in the SynthRAD submissions!
Detailed video instructions can be found at https://www.youtube.com/watch?v=RYj9BOJJNV0
This step requires some familiarity with the Docker ecosystem, as you will need to edit the Dockerfile. The models will be embedded into the docker container, allowing it to run independently on any system! As a start, you can copy model files into the docker image by adding something like this to the Dockerfile:
COPY --chown=algorithm:algorithm your_model.pt /opt/algorithm/
Once you do this, your_model.pt will be accessible in the __init__ function as described above.
Ensure that you have the nvidia-container-toolkit installed along with your docker installation.
You can follow the instructions here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
Once you have this installed, you can run ./test_gpu.sh instead of test.sh.
In order to first test your algorithm locally (without the docker build process etc., which significantly speeds things up over multiple iterations):

- Configure .env for local mode by setting INPUT_FOLDER to the path you want to provide the inputs from. For instance, the test folder is a good starting place, but you could also provide your own data. (!!! NOTE: set INPUT_FOLDER back to /input before you build the docker.)
- Run python process.py in an environment with all your dependencies installed.
This should run your algorithm locally and allow you to test different iterations before making a docker container.
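For reference, the values from .env could be resolved in your local run roughly as below; the variable names come from the .env file shown above, and the fallback defaults mirror the Grand Challenge runtime.

```python
import os

# Resolve the task and input folder from the environment, falling back to
# the defaults used when running on Grand Challenge.
task_type = os.environ.get("TASK_TYPE", "mri")          # "mri" (Task 1) or "cbct" (Task 2)
input_folder = os.environ.get("INPUT_FOLDER", "/input")
```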
In the setup folder, create_dummy_data.py gives an example of how the dummy data for the docker container is created. You can reformat your own data accordingly so it can be run by the docker container.
For the MRI task, the data should be organized as follows:
data
├── images
│ ├── body
│ │ └── 1BAxxx.nii.gz
│ │ └── ...
│ ├── mri
│ │ └── 1BAxxx.nii.gz
│ │ └── ...
Similarly, for the CBCT task:
data
├── images
│ ├── body
│ │ └── 1BAxxx.nii.gz
│ │ └── ...
│ ├── cbct
│ │ └── 1BAxxx.nii.gz
│ │ └── ...
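The folder layout above can be recreated locally in one command (CBCT task shown; swap cbct for mri for Task 1). This only builds the directory tree; the actual scan files must come from setup/create_dummy_data.py or your own converted data.

```shell
# Create the expected directory structure for local testing
mkdir -p data/images/body data/images/cbct
```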
See the open issues for a list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Distributed under the GNU General Public License v3.0. See LICENSE.md for more information.
Suraj Pai - b.pai@maastrichtuniversity.nl Matteo Maspero - @matteomasperonl - m.maspero@umcutrecht.nl
Project Link: https://github.com/SynthRAD2023/algorithm-template