When developing a segmentation model, a natural question is how well the model works. Quantitative metrics such as Intersection-over-Union (IoU) and the Dice score give a sense of model performance, as does visually examining the model outputs.
For segmentation of geospatial imagery, each image is tied to a real location. The goal of this repository is to examine quantitative metrics in geographic space to determine whether a model works better in some physical places than in others.
The imagery comes from NOAA and is collected after hurricanes and other large storms. It is segmented into four classes: water, sand, vegetation, and human development (roads, buildings, etc.).
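For reference, below is a minimal sketch of how per-class IoU and Dice (and their means, mIoU and mDice) can be computed from integer label masks; the class ordering used here (water, sand, vegetation, development) is an assumption for illustration and may not match the labeled data's actual indices.

```python
import numpy as np

def iou_and_dice(y_true, y_pred, num_classes=4):
    """Per-class IoU and Dice for two integer label masks of the same shape.

    Class indices 0..3 (water, sand, vegetation, development) are assumed
    here for illustration and may differ from the labels in the notebooks.
    """
    ious, dices = [], []
    for c in range(num_classes):
        t = (y_true == c)
        p = (y_pred == c)
        inter = np.logical_and(t, p).sum()
        union = np.logical_or(t, p).sum()
        total = t.sum() + p.sum()
        ious.append(inter / union if union > 0 else np.nan)   # class absent in both masks
        dices.append(2 * inter / total if total > 0 else np.nan)
    # mIoU / mDice: mean over classes that appear in either mask
    return np.nanmean(ious), np.nanmean(dices)
```

This is the kind of per-image mIoU/mDice value that the MetricsForAll notebooks compute over the labeled splits.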
Notebooks:
Examine segmentation model output: Model_example-Output.ipynb
Calculate mIoU and mDice for all labeled Training and Val images: MetricsForAll_TV.ipynb
Calculate mIoU and mDice for all labeled Testing images: MetricsForAll_Testing.ipynb
Make a TF model to predict metrics (IoU, Dice) from an image by cutting off the UNet decoder, freezing the encoder, and adding dense layers (see the sketch after this list): MetricsModel.ipynb
Predict metrics for all NOAA images: Metrics_Predict_all.ipynb
Predict metrics for Test set images: Metrics_Predict_Test.ipynb
Visualize metrics for all NOAA images: this is currently done with kepler.gl
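The metrics model in MetricsModel.ipynb reuses the UNet encoder as a frozen feature extractor with a small dense regression head on top. Below is a minimal Keras sketch of that idea, assuming a saved UNet and an encoder bottleneck layer named "bottleneck"; the model path, layer name, and head sizes are placeholders rather than the notebook's actual values.

```python
import tensorflow as tf

# Load the trained segmentation UNet (path and layer name are assumptions).
unet = tf.keras.models.load_model("unet.h5", compile=False)

# Cut off the decoder: keep the encoder up to its bottleneck layer.
encoder_out = unet.get_layer("bottleneck").output
encoder = tf.keras.Model(inputs=unet.input, outputs=encoder_out)
encoder.trainable = False  # freeze the encoder weights

# Add a small dense head that regresses the two metrics (IoU, Dice).
x = tf.keras.layers.GlobalAveragePooling2D()(encoder.output)
x = tf.keras.layers.Dense(128, activation="relu")(x)
outputs = tf.keras.layers.Dense(2, activation="sigmoid")(x)  # both metrics lie in [0, 1]

metrics_model = tf.keras.Model(inputs=encoder.input, outputs=outputs)
metrics_model.compile(optimizer="adam", loss="mse")
# metrics_model.fit(train_images, train_metrics, validation_data=(val_images, val_metrics))
# metrics_model.predict(images) then gives the predicted (IoU, Dice) used in the Metrics_Predict notebooks.
```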
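For the kepler.gl visualization, one simple route is to export each image's location together with its predicted metrics as a CSV, which kepler.gl can ingest directly; the column names and values below are illustrative placeholders, not the repository's exact output format.

```python
import pandas as pd

# Hypothetical per-image results: one row per NOAA image tile
# (coordinates and metric values are made-up placeholders).
df = pd.DataFrame({
    "latitude":  [30.12, 30.15, 30.18],
    "longitude": [-89.55, -89.52, -89.49],
    "pred_iou":  [0.81, 0.67, 0.74],
    "pred_dice": [0.88, 0.76, 0.82],
})

# kepler.gl reads a CSV with latitude/longitude columns directly;
# points can then be colored by pred_iou or pred_dice to see where the model struggles.
df.to_csv("noaa_predicted_metrics.csv", index=False)
```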