UNet with weighted loss and morphological postprocessing
The fourth solution introduces three ideas into the computing pipeline: a weighted loss, morphological post-processing, and a fourth U-Net output, contour_touching.
Weighted loss is a scheme in which we assign a weight to the loss computed for each output; the total loss is the weighted sum over all outputs. The implementation is straightforward (models.py:L38):
loss_function = [('mask', segmentation_loss, 0.3),
('contour', segmentation_loss, 0.5),
('contour_touching', segmentation_loss, 0.1),
('center', segmentation_loss, 0.1)]
where:
- the weights sum to 1.0
- segmentation_loss is the loss function applied to each output, called as segmentation_loss(output, target)
We think that contours matter most for model performance, so we assign them the highest weight. Similarly, we did not observe the center playing a crucial role, so we decreased its importance.
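The weighted sum described above can be sketched as follows. This is a minimal illustration in plain numpy, not the repo's actual code: binary_cross_entropy here is a stand-in for segmentation_loss, and total_loss is a hypothetical helper name.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Per-pixel binary cross-entropy, averaged over the image
    (a stand-in for the repo's segmentation_loss)."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# (output_name, loss_function, weight) triples, as in models.py
loss_function = [('mask', binary_cross_entropy, 0.3),
                 ('contour', binary_cross_entropy, 0.5),
                 ('contour_touching', binary_cross_entropy, 0.1),
                 ('center', binary_cross_entropy, 0.1)]

def total_loss(outputs, targets):
    """Weighted sum of the per-output losses (hypothetical helper)."""
    return sum(weight * fn(outputs[name], targets[name])
               for name, fn, weight in loss_function)
```

Because the weights sum to 1.0, the total loss stays on the same scale as a single output's loss, which keeps learning rates comparable across experiments.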
We have created a procedure that runs on the predictions. It uses several morphological transformations, such as erosion, dilation, and watershed, to improve the final masks. The implementation is available here: postprocessing.py:L127.
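A minimal sketch of such a procedure, using scipy.ndimage rather than the repo's actual postprocessing.py: the function name, thresholds, and structuring element are assumptions. It binarizes the mask, removes speckle with a morphological opening, labels seed regions away from predicted contours, and grows the seeds back with a watershed so that touching nuclei come out as separate instances.

```python
import numpy as np
from scipy import ndimage as ndi

def postprocess(mask_prob, contour_prob, threshold=0.5):
    """Turn mask/contour probability maps into labeled instances (sketch)."""
    binary = mask_prob > threshold
    # morphological opening (erosion then dilation) removes small speckles
    clean = ndi.binary_opening(binary, structure=np.ones((3, 3)))
    # seeds: mask pixels away from predicted contours, so touching nuclei split
    seeds, _ = ndi.label(clean & (contour_prob < threshold))
    # watershed on the inverted distance transform grows seeds back outward
    distance = ndi.distance_transform_edt(clean)
    elevation = (distance.max() - distance).astype(np.uint16)
    labels = ndi.watershed_ift(elevation, seeds)
    labels[~clean] = 0  # keep labels only inside the cleaned mask
    return labels
```

The key design choice is that the contour channel only suppresses seeds; the watershed then reassigns the contour pixels to one of the adjacent instances, so no foreground is lost at the boundaries.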
For the contour_touching output, we create masks with non-zero values where nuclei overlap. This is an additional auxiliary target to learn. The implementation is here: preparation.py:L60.
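One way to build such a target, sketched with scipy.ndimage rather than the repo's preparation.py code (the function name and dilation amount are assumptions): dilate each instance mask slightly, then mark pixels covered by more than one dilated instance.

```python
import numpy as np
from scipy import ndimage as ndi

def touching_contours(instance_masks, iterations=1):
    """Non-zero where (dilated) nuclei overlap -- a sketch of the
    contour_touching auxiliary target (hypothetical helper)."""
    grown = [ndi.binary_dilation(m, iterations=iterations)
             for m in instance_masks]
    # count how many dilated instances cover each pixel
    overlap_count = np.sum(grown, axis=0)
    return (overlap_count > 1).astype(np.uint8)
```

Dilating first matters: nuclei that touch without strictly overlapping still produce a non-empty target along their shared border.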
Run command:
$ neptune login
$ neptune send main.py --worker gcp-gpu-large --environment pytorch-0.2.0-gpu-py3 -- train_evaluate_predict_pipeline --pipeline_name unet_multitask
When training is completed, collect the Kaggle submission from: /output/dsb/experiments/submission.csv.
- Solution 1: U-Net
- Solution 2: Multi-output U-Net
- Solution 3: Improved Multi-output U-Net
- Solution 4: U-Net with weighted loss and morphological postprocessing
- Solution 5: U-Net specialists, faster processing, weighted loss function and improved validation