salient-maps

Various open source salient maps.

Developer Usage

Example using the Deep Gaze model.

const models = require('salient-maps');
const cv = require('opencv4nodejs');

// load the Deep Gaze ("deep") model with a 200x200 map size
const Deep = models.deep.load();
const deep = new Deep({ width: 200, height: 200 });

// compute a saliency map from an image loaded via OpenCV
const salientMap = deep.computeSaliency(cv.imread('myimage.jpg'));
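The returned saliency map holds floating-point values. For quick visual inspection it can help to normalize those values to the 0-255 range; a small helper (a sketch in plain JS, assuming the map has been read out as a 2D array of numbers, e.g. via opencv4nodejs' `Mat#getDataAsArray`) might look like:

```javascript
// Normalize a 2D array of saliency values to 8-bit grayscale (0-255).
// Useful when dumping a map to an image file for visual inspection.
function toGrayscale(map) {
  const flat = [].concat(...map);
  const min = Math.min(...flat);
  const max = Math.max(...flat);
  const range = max - min || 1; // avoid divide-by-zero on flat maps
  return map.map(row =>
    row.map(v => Math.round(((v - min) / range) * 255))
  );
}
```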

Options

| Option | Type   | Default | Info |
|--------|--------|---------|------|
| width  | number | 200     | Width of the saliency map. It's not recommended to go above 300 or below 100. |
| height | number | 200     | Height of the saliency map. It's not recommended to go above 300 or below 100. |

What to do with salient map?

While it's entirely up to you how you use these maps, the original intent of this project was to pair with the salient-autofocus project to provide fast image auto-focus capabilities.
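As a rough illustration of the auto-focus idea (a sketch only, not the salient-autofocus API): the brightest region of a saliency map makes a natural focus point. Assuming the map has been read out as a 2D array of floats, a minimal focus-point picker could be:

```javascript
// Find the (x, y) position of the most salient pixel in a 2D map.
// `map` is assumed to be an array of rows of numeric saliency values.
function focusPoint(map) {
  let best = { x: 0, y: 0, value: -Infinity };
  map.forEach((row, y) => {
    row.forEach((value, x) => {
      if (value > best.value) best = { x, y, value };
    });
  });
  return best;
}
```

A real auto-focus pipeline would more likely weight a region rather than a single pixel, but the principle is the same.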

Models

| ID | License | Description | Usage |
|----|---------|-------------|-------|
| deep | MIT | Deep Gaze port of the FASA (Fast, Accurate, and Size-Aware Salient Object Detection) algorithm. | Recommended for most static usage where high accuracy is important and near-realtime performance is sufficient (tunable by reducing map size). May not be ideal for video unless you drop the map size to 150^2 or lower. |
| deep-rgb | MIT | A variant of the Deep Gaze port that leverages the RGB colour space instead of LAB. | Not recommended; mainly useful for comparison, though it can perform better on some images. |
| spectral | BSD | A port of the Spectral Residual model from OpenCV Contributions. | Amazing performance, great for video, but at the cost of quality/accuracy. |
| fine | BSD | A port of the Fine Grained model from OpenCV Contributions. | Interesting for testing but useless for realtime applications. |

Want to contribute?

Installation

Typical local setup.

git clone git@github.com:asilvas/salient-maps.git
cd salient-maps
npm i

Import Assets

By default, testing looks at trainer/image-source, so you can put any images you like there, or follow the instructions below to import a known dataset.

  1. Download and extract CAT2000
  2. Run node trainer/scripts/import-CAT2000.js {path-to-CAT2000}

The benefit of using the above script is that it'll separate the (optional) truth maps into trainer/image-truth.

Preview

You can run visual previews of the available saliency maps against the dataset via:

npm run preview

Benchmark

Compare performance data between models:

npm run benchmark

Export

Also available is the ability to export the salient map data to the trainer/image-saliency folder, broken down by saliency model. This permits reviewing maps from disk, and is a convenient format for submission to the MIT Saliency Benchmark for quality analysis against other models.

npm run export

License

While this project falls under an MIT license, each of the models is subject to its own license. See Models for details.
