In this study, we assessed the standard U-Net for end-to-end image translation across three brain MR image contrasts.
Fig. 2. Comparison of synthetic MR images generated by the U-Net model for one subject from the test set. From left to right: input MR image (source contrast); synthetic MR image (target contrast); ground-truth/real MR image (target contrast); difference map (predicted − real); and SSIM map. Rows show image-to-image translations across the T1, T2, and FLAIR contrasts.
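The difference map and SSIM values shown in Fig. 2 follow standard definitions. As an illustrative sketch (not the authors' evaluation pipeline, which would typically use a library such as scikit-image with local sliding windows), the snippet below computes a "predicted − real" difference map and a global single-window SSIM with the usual constants; the `real` and `pred` arrays are synthetic stand-ins for target-contrast slices.

```python
import numpy as np

def ssim_global(pred, real, data_range=1.0):
    # Global (single-window) SSIM. Library SSIM *maps* use local sliding
    # windows, but the formula and stabilizing constants are the same.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_p, mu_r = pred.mean(), real.mean()
    var_p, var_r = pred.var(), real.var()
    cov = ((pred - mu_p) * (real - mu_r)).mean()
    return ((2 * mu_p * mu_r + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_r ** 2 + c1) * (var_p + var_r + c2)
    )

rng = np.random.default_rng(0)
real = rng.random((64, 64))  # stand-in for a real target-contrast slice
pred = np.clip(real + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)  # stand-in prediction

diff_map = pred - real  # "predicted - real" difference map, as in Fig. 2
print(float(ssim_global(pred, real)))
```

A perfect prediction yields SSIM = 1; values decrease toward 0 as the structural agreement between the synthetic and real images degrades.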
The datasets analyzed during the current study are available in the BRATS 2018 challenge repository at https://www.med.upenn.edu/sbia/brats2018/data.html.
Please cite this paper: Osman AFI, Tamam NM. Deep learning-based convolutional neural network for intramodality brain MRI synthesis. J Appl Clin Med Phys. 2022;e13530. https://pubmed.ncbi.nlm.nih.gov/35044073/.