The goal of this project is to explore the task of manipulating weather in images using paired image-to-image translation. This problem has previously been approached by WeatherGAN through unsupervised translation. The main hypothesis of the project is that images from fixed webcams under different weather conditions are a very good source of data for paired translation and could yield better results than unsupervised translation.
The name Teshub comes from the Hurrian weather god.
This project is highly experimental and was done as a learning exercise. Since generative models require a lot of resources (data, model capacity, and training time), this is just a proof of concept, and the results are far from impressive. All models are trained on private data.
The WeatherInFormer multi-task model is based on SegFormer, and its goal is to "understand" weather in images. The model rates images on four possible conditions: snowy, cloudy, rainy, and foggy. To do this, it first segments weather cues in the image and then uses them to draw conclusions about the ratings.
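A minimal PyTorch sketch of this two-stage design, assuming a Hugging Face SegFormer backbone; the checkpoint name (`nvidia/mit-b0`), number of cue classes, and head sizes are illustrative assumptions, not the project's actual configuration:

```python
import torch
import torch.nn as nn
from transformers import SegformerModel

class WeatherInFormerSketch(nn.Module):
    """Multi-task head on a SegFormer encoder: segments weather cues,
    then regresses per-condition ratings from the cue statistics."""

    def __init__(self, num_cues: int = 5, num_conditions: int = 4):
        super().__init__()
        # Pretrained SegFormer encoder (MiT-b0 chosen for illustration).
        self.encoder = SegformerModel.from_pretrained("nvidia/mit-b0")
        hidden = self.encoder.config.hidden_sizes[-1]
        # Per-pixel weather-cue segmentation head.
        self.seg_head = nn.Conv2d(hidden, num_cues, kernel_size=1)
        # Rating head: maps pooled cue maps to 4 scores in [0, 1]
        # (snowy, cloudy, rainy, foggy).
        self.rating_head = nn.Sequential(
            nn.Linear(num_cues, 64), nn.ReLU(),
            nn.Linear(64, num_conditions), nn.Sigmoid(),
        )

    def forward(self, pixel_values: torch.Tensor):
        feats = self.encoder(pixel_values).last_hidden_state  # (B, C, H/32, W/32)
        cue_logits = self.seg_head(feats)                     # (B, num_cues, H/32, W/32)
        # Global cue statistics feed the ratings, mirroring the
        # "segment first, then rate" design described above.
        cue_stats = cue_logits.softmax(dim=1).mean(dim=(2, 3))  # (B, num_cues)
        ratings = self.rating_head(cue_stats)                   # (B, 4)
        return cue_logits, ratings
```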
WeatherMorph is based on Pix2Pix and makes use of the information obtained from WeatherInFormer. For each weather translation category, a specific configuration is chosen that specifies the importance of each weather cue, as sketched below.
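For illustration, such a configuration could look like the following; the cue names and weight values are hypothetical, not taken from the project:

```python
# Hypothetical weather-cue importance weights for one translation
# category; cue names and values are illustrative only.
ADD_CLOUDS_CONFIG = {
    "category": "add_clouds",
    "cue_weights": {    # how strongly each segmented cue
        "sky": 1.0,     # influences the Pix2Pix objective
        "clouds": 2.0,
        "fog": 0.5,
    },
}
```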
For training, the model is fed combinations of images from the same webcam/location, but with the attributes specified by the translation configuration. For example, if we want to develop a model that adds clouds, source images should have `cloudy < 0.2` and target images should have `cloudy > 0.8`.
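A sketch of how this pairing could be implemented, assuming each image comes with metadata containing `webcam`, `path`, and per-condition rating fields (field names are assumptions):

```python
from itertools import product

def make_training_pairs(images, source_max=0.2, target_min=0.8):
    """Pair images from the same webcam by their 'cloudy' rating:
    clear sources (< source_max) with cloudy targets (> target_min),
    mirroring the add-clouds example above."""
    sources = [im for im in images if im["cloudy"] < source_max]
    targets = [im for im in images if im["cloudy"] > target_min]
    return [
        (s["path"], t["path"])
        for s, t in product(sources, targets)
        if s["webcam"] == t["webcam"]  # same fixed webcam/location
    ]
```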
A very simple website (Flask backend + React frontend) is provided to showcase the models.
The website can be started locally by running `cd deploy && docker compose up` and then opening `localhost:5000` in a browser.
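For reference, the backend endpoint might look roughly like the following minimal Flask sketch; the route name and the `run_weathermorph` helper are hypothetical placeholders, not the project's actual API:

```python
import io
from flask import Flask, request, send_file

app = Flask(__name__)

def run_weathermorph(image_bytes: bytes) -> bytes:
    """Placeholder for the actual generator; returns the input unchanged."""
    return image_bytes

@app.route("/translate", methods=["POST"])
def translate():
    # Receive an image, run the WeatherMorph generator on it,
    # and return the translated image.
    image_bytes = request.files["image"].read()
    output_bytes = run_weathermorph(image_bytes)
    return send_file(io.BytesIO(output_bytes), mimetype="image/png")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```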