We implement a module that blurs objects in an image specified by user-provided text prompts. The module uses the pretrained models OWL-ViT v2 (OWLv2) and MobileSAM, provided by HuggingFace and Ultralytics, respectively. The demo is accessible at the HuggingFace space.
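Under the hood, the module follows a detect-then-segment-then-blur flow: OWLv2 localizes the prompted objects, MobileSAM produces their masks, and the masked pixels are blurred. The sketch below illustrates this idea; the checkpoint names (`google/owlv2-base-patch16-ensemble`, `mobile_sam.pt`), the score threshold, and the blur radius are illustrative assumptions, not necessarily the module's exact configuration.

```python
# Rough sketch of the detect -> segment -> blur pipeline (illustrative, not the module's exact code).
from pathlib import Path

import numpy as np
import torch
from PIL import Image, ImageFilter
from transformers import Owlv2Processor, Owlv2ForObjectDetection
from ultralytics import SAM

image = Image.open("images-to-blur/dogs.jpg").convert("RGB")
text_prompts = ["jacket"]

# 1) Detect boxes matching the text prompts with OWLv2.
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
detector = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")
inputs = processor(text=[text_prompts], images=image, return_tensors="pt")
with torch.no_grad():
    outputs = detector(**inputs)
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs=outputs, target_sizes=target_sizes, threshold=0.2
)[0]
# Note: OWLv2 pads inputs to squares, so boxes for non-square images may need rescaling.
boxes = detections["boxes"].tolist()  # [x_min, y_min, x_max, y_max] per detection

# 2) Segment the detected boxes with MobileSAM (assumes at least one detection).
sam = SAM("mobile_sam.pt")
results = sam(image, bboxes=boxes)
masks = results[0].masks.data.cpu().numpy()           # (num_objects, H, W)
union = np.any(masks, axis=0).astype(np.uint8) * 255  # union of the object masks
mask_img = Image.fromarray(union).resize(image.size)  # align the mask with the image

# 3) Blur only the masked pixels.
blurred = image.filter(ImageFilter.GaussianBlur(radius=25))
Path("blurred-images").mkdir(exist_ok=True)
Image.composite(blurred, image, mask_img).save("blurred-images/dogs.jpg")
```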
- Install Conda, if not already installed.
- Clone the repository:
git clone https://github.com/byrkbrk/blurring-image-via-prompts.git
- Change the directory:
cd blurring-image-via-prompts
- For macOS, run:
conda env create -f blurring-via-prompts_macos.yaml
- For Linux or Windows, run:
conda env create -f blurring-via-prompts_linux.yaml
- Activate the environment:
conda activate blurring-via-prompts
Check out how to use it:
python3 blur.py -h
Output:
Blurs image based on given text prompts
positional arguments:
image_name Name of the image file that be processed. Image file
must be in `images-to-blur` folder
text_prompts Text prompts for the objects that get blurred
options:
-h, --help show this help message and exit
--blur_intensity BLUR_INTENSITY
Intensity of the blur that be applied. Default: 50
--image_size IMAGE_SIZE [IMAGE_SIZE ...]
Size (width, height) to which the image be
transformed. Default: None
--device DEVICE Device that be used during inference. Default: None
Example:
python3 blur.py dogs.jpg "jacket"
The output image (see below, on the right) will be saved into the `blurred-images` folder.
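The optional arguments from the help output can be combined with the positional ones as needed; for instance (the values below are illustrative, and the device string is assumed to be a torch-style name such as cpu or cuda):
python3 blur.py dogs.jpg "jacket" --blur_intensity 80 --image_size 640 480 --device cpu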
To run the Gradio app on your local computer, execute
python3 app.py
Then visit the URL http://127.0.0.1:7860 to open the interface shown below.
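For reference, the sketch below shows a minimal Gradio interface of this kind; the `blur_objects` callback is a hypothetical placeholder standing in for the module's actual blurring routine, and the components and labels are assumptions rather than the repo's exact `app.py`.

```python
# Minimal Gradio sketch; `blur_objects` is a hypothetical placeholder, not the code in app.py.
import gradio as gr
from PIL import Image, ImageFilter


def blur_objects(image: Image.Image, prompts: str, blur_intensity: float) -> Image.Image:
    # Placeholder: a real app would run the OWLv2 + MobileSAM pipeline on the
    # comma-separated prompts and blur only the matching objects.
    return image.filter(ImageFilter.GaussianBlur(radius=blur_intensity))


demo = gr.Interface(
    fn=blur_objects,
    inputs=[
        gr.Image(type="pil", label="Input image"),
        gr.Textbox(label="Text prompts (comma-separated)"),
        gr.Slider(1, 100, value=50, label="Blur intensity"),
    ],
    outputs=gr.Image(type="pil", label="Blurred image"),
)

demo.launch()  # serves on http://127.0.0.1:7860 by default
```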
See the display below for an example of using the module via Gradio with the image hat_sunglasses.jpg (found in the `images-to-blur` directory).