
NiftyNet meeting 4th September 2017


Minutes of the NiftyNet meeting, 04/09/2017

Present: Wenqi, Irme, Guotai, Tom W, Tom V, Carole, Felix, Jorge (Chairing)

Update to 0.2 (merged into dev)

General code changes/new features:

The update to 0.2 was committed this morning. The key change is that NiftyNet now supports multiple functionalities apart from segmentation, such as auto-encoders and GANs, and the user can also specify their own application. Wenqi explained the newest features, including the different applications and the new layout of the config file. (This had to be changed due to the addition of the GANs and auto-encoders.)
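
For reference, a rough sketch of the kind of sectioned layout the new config file uses for a segmentation run; the section and parameter names below are illustrative assumptions and may differ from the released 0.2 configuration files:

```ini
; Illustrative only: names are assumptions, not the definitive 0.2 specification.

[T1]                       ; one section per input image source
path_to_search = ./data/T1
filename_contains = img

[label]
path_to_search = ./data/label
filename_contains = seg

[SYSTEM]
model_dir = ./models/demo

[NETWORK]
name = highres3dnet
batch_size = 1

[TRAINING]
lr = 0.0001

[SEGMENTATION]             ; application-specific section (GAN, AUTOENCODER, ...)
image = T1
label = label
num_classes = 2
```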

The syntax is more general; some changes may still be needed in a later version. Jorge has tested the TensorBoard support and it works really well. Wenqi has also run pylint on the engine part of the code.

Selective sampling and evaluation have been disabled in the current version (they were present in 0.1.1); the evaluation function is still there but is not exposed in the configuration.

The demos:

The PROMISE12 demo works, as does the brain parcellation one; the auto-encoders and the GANs are not fully working yet. No new features will be added this week. Wenqi is working on implementing Guotai's network in the current version. This will be added as a demo for BRATS.

Outstanding issues for the update to 0.2:

coding style: results vary depending on which tool is used to change the style

upgrade to TensorFlow 1.3 (change the requirements to 1.1-1.3)

publish examples and demos online: almost done

issue #18: for now there are only Dropbox links, but it can be implemented in 0.3; the public API is done

the mirror on GitHub is live at github.com/NifTK/NiftyNet (dev is the latest branch, master is two months old; more tests are needed, says Dzhosh)

the new version (0.2) should be pushed to PyPI so that pip install yields the latest version.

Version 0.3:

- Model zoo should be fixed/implemented

- Sampling issues: augmentation is currently applied to the entire image, and then the patches are created. This makes the sampling slow or produces samples outside the FOV. The new plan is to do it in a lazy way: give the sampler some coordinates and a description of how to augment the samples (see the sketch below). Jorge and Wenqi are planning to make the sampler work so that the network ‘knows’ what it needs and requests this from the sampler. Ideally the sampling would be implemented in NiftyNet and the resampling in TensorFlow (but this requires CUDA coding knowledge).

Jorge will ask Pawel/Marc/someone else about the CUDA code needed to implement this. Note that there is currently a 2D GPU version of bilinear interpolation: https://www.tensorflow.org/api_docs/python/tf/contrib/resampler/resampler and https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/contrib/resampler/kernels/resampler_ops_gpu.cu.cc
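
A minimal 2D sketch of the lazy-sampling idea using that resampler (TensorFlow 1.3): keep the full image untouched, apply the augmentation to the patch coordinates only, and let the GPU interpolate the patch. The image size, patch size, centre and rotation angle below are made-up illustrative values, not anything decided at the meeting.

```python
# Lazy patch extraction: rotate the sampling grid, not the image,
# then bilinearly interpolate the patch with tf.contrib.resampler.
import numpy as np
import tensorflow as tf

image = tf.constant(np.random.rand(1, 256, 256, 1).astype(np.float32))
patch_size, centre, angle = 64, (128.0, 128.0), 0.1   # illustrative values

# Regular sampling grid for a patch_size x patch_size patch, centred at the origin
ys, xs = np.meshgrid(np.arange(patch_size), np.arange(patch_size), indexing='ij')
xs = xs.astype(np.float32) - patch_size / 2.0
ys = ys.astype(np.float32) - patch_size / 2.0

# "Augmentation" = rotate the grid coordinates, then shift to the patch centre
cos_a, sin_a = np.cos(angle), np.sin(angle)
warp_x = centre[0] + cos_a * xs - sin_a * ys
warp_y = centre[1] + sin_a * xs + cos_a * ys
# warp has shape [batch, h, w, 2]; the last dimension is (x, y) pixel coordinates
warp = np.stack([warp_x, warp_y], axis=-1)[np.newaxis].astype(np.float32)

# Bilinear interpolation of the rotated patch, taken directly from the full image
patch = tf.contrib.resampler.resampler(image, tf.constant(warp))

with tf.Session() as sess:
    print(sess.run(patch).shape)  # (1, 64, 64, 1)
```

The same pattern would generalise to 3D once a trilinear GPU resampler exists, which is where the CUDA work mentioned above would come in.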

Superresolution: Jorge spoke to Ryu, hoping to convince them to implement superresolution in NiftyNet.

MICCAI: Tom is working on flyers for us to bring to MICCAI; we'll print some and hand them out there. They look really good and, apart from a minor change, they are ready to be sent to the printer.

Next meeting:

First Monday of October: 2-10-2017 14:00

Next journalclub:

In two weeks: 18-09-2017