A problem exists when building broad-scale models, for example across all of Australia: the many input data sets differ widely in quality.
Input data sets include:
- Geophysics (Gravity, Magnetics, Radiometrics, Seismic, Electromagnetic, Induced Polarisation, Magnetotellurics...)
- Geology (Lithology, Stratigraphy, Structure, Hydro..)
- Remote Sensing (Landsat, ASTER, Sentinel...)
- Geochemistry (Rock, Soil, Water, Assay techniques...)
- Direct observations
- Gridded Data
- Interpretations (Solid geology, SEEBase...)
- Derivations (e.g. ASTER band ratios, Rolling up of rock units...)
- Machine Learning Models (Regolith Depth...)
- Inversions
Factors affecting data quality:
- Age of science
- Technology used
- Resolution (Pixel size, map scale, survey spacing, detection limits..)
- Survey Type
- Human ratings? e.g. 1-10
- Downsampling/Upsampling
- Missing data (Geophysical survey blanks, remote sensing gaps on old satellites...)
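Downsampling/upsampling and missing-data handling can both be sketched in a raster fashion with NumPy. A minimal illustration, assuming a small hypothetical grid where NaN marks a survey blank or sensor gap (the array values and the 2x2 block size are made up for the example):

```python
import numpy as np

# Hypothetical 4x4 raster with missing pixels (NaN = survey blank / sensor gap).
grid = np.array([
    [1.0,    2.0,    np.nan, 4.0],
    [3.0,    4.0,    np.nan, np.nan],
    [5.0,    np.nan, 7.0,    8.0],
    [np.nan, 6.0,    9.0,    10.0],
])

def downsample_mean(a, block):
    """Block-mean downsample that ignores NaNs, so gaps do not poison neighbours."""
    h, w = a.shape
    blocks = a.reshape(h // block, block, w // block, block)
    return np.nanmean(blocks, axis=(1, 3))

coarse = downsample_mean(grid, 2)
print(coarse)  # 2x2 grid of block means; NaN only where a whole block was missing
```

Using `nanmean` rather than `mean` means a coarse pixel stays usable as long as at least one fine pixel under it was observed; a per-pixel count of valid inputs could equally be carried along as its own quality layer.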
Dimensionality:
- 1
- 2
- 3
- 4
- more? (Depth Slices...)
Spatial extent:
- World
- Country
- State
- Region
- Local
- Variance of different model runs
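Per-pixel variance across model runs can itself be turned into a quality layer. A minimal sketch, where the stack of runs, its shape, and the min-max rescaling are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack: 5 model runs over the same 3x3 raster grid.
runs = rng.normal(loc=100.0, scale=5.0, size=(5, 3, 3))

# Per-pixel spread across runs: high variance = low confidence in that pixel.
var_map = runs.var(axis=0)

# Rescale variance into a 0-1 quality score (1 = most consistent pixel on the map).
quality = 1.0 - (var_map - var_map.min()) / (var_map.max() - var_map.min())
print(quality.round(2))
```

The min-max rescaling only ranks pixels relative to each other within one map; an absolute variance threshold would be needed to compare quality across different model outputs.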
How, thinking in a raster fashion, can we derive a combined per-pixel Data Quality rating for a map output?
Possible approaches:
- Some sort of normalised ranking for each quality area?
- Weightings?
- Simple qualitative scores (3/2/1, Good/Average/Bad, High/Medium/Low, or other ordinals).
- Exists / Missing
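One way these ideas could combine: normalise each quality area to 0-1, apply weights, and use the exists/missing state to renormalise where a layer has no coverage. A sketch only; the layer names, values, and weights are all assumptions:

```python
import numpy as np

# Hypothetical per-pixel quality layers on a common 2x2 grid, each already
# normalised to 0-1 (e.g. resolution score, age-of-survey score, human rating).
layers = {
    "resolution": np.array([[1.0, 0.5], [0.8, 0.2]]),
    "age":        np.array([[0.9, 0.9], [0.4, np.nan]]),  # NaN = no survey here
    "rating":     np.array([[0.7, 0.6], [1.0, 0.3]]),
}

# Assumed relative weights for each quality area.
weights = {"resolution": 0.5, "age": 0.3, "rating": 0.2}

stack = np.stack([layers[k] for k in weights])            # (layer, row, col)
w = np.array([weights[k] for k in weights])[:, None, None]

# Weighted mean that renormalises where a layer is missing (exists/missing idea),
# so a pixel is scored on whatever evidence actually exists there.
valid = ~np.isnan(stack)
weighted = np.nansum(stack * w, axis=0)
weight_sum = np.sum(np.where(valid, w, 0.0), axis=0)
combined = weighted / weight_sum

print(combined.round(3))  # combined per-pixel Data Quality rating, 0-1
```

Renormalising by the sum of available weights is one design choice; an alternative is to treat a missing layer as quality 0, which would penalise pixels with sparse coverage instead of scoring them on the layers that remain.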