Product Evaluation Introduction
To assess the quality of the PROBA-V data products, comparisons with various reference datasets are performed. The validations use similar or comparable land surface variables (e.g. surface reflectance and NDVI) derived from other satellite platforms, such as the MODIS satellites. The product validation follows a standardised protocol, in which the following statistical metrics are calculated:
- Geometric mean (GM) regression: the geometric mean regression uses an orthogonal model to calculate the slope and intercept and accounts for errors in both satellite datasets.
- Root Mean Square Error (RMSE): a metric that expresses the overall difference (including both systematic and random differences) between two datasets. The RMSE is generally referred to as the uncertainty.
- Mean Bias Error (MBE): a measure of the overall average difference, generally referred to as the accuracy. Unlike the RMSE, the MBE preserves the sign of the difference.
- Systematic and unsystematic difference (MSDs and MSDu): these metrics express the amounts of systematic and unsystematic (random) difference and are derived from the Mean Square Difference (MSD).
- Relative difference: expresses the mean or median difference (depending on the skewness of the frequency distribution) relative to the reference dataset.
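The metrics above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the operational validation code: the systematic/unsystematic split is assumed to follow the common least-squares (Willmott-style) decomposition of the MSD, and the relative difference is taken here as the mean difference relative to the reference mean.

```python
import numpy as np

def evaluation_metrics(ref, exam):
    """Compare an examined dataset against a reference dataset.

    Illustrative sketch of the protocol's metrics: GM regression,
    RMSE (uncertainty), MBE (accuracy), and the MSD split into
    systematic (MSDs) and unsystematic (MSDu) parts.
    """
    x = np.asarray(ref, dtype=float)
    y = np.asarray(exam, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    # Geometric mean (reduced major axis) regression: treats errors
    # in both datasets symmetrically, unlike ordinary least squares.
    gm_slope = np.sign(r) * y.std() / x.std()
    gm_intercept = y.mean() - gm_slope * x.mean()
    d = y - x
    mbe = d.mean()                       # accuracy: sign-preserving mean bias
    rmse = np.sqrt((d ** 2).mean())      # uncertainty: overall difference
    # OLS fit of y on x, used only for the MSD decomposition
    b, a = np.polyfit(x, y, 1)
    y_hat = a + b * x
    msd_s = ((y_hat - x) ** 2).mean()    # systematic difference
    msd_u = ((y - y_hat) ** 2).mean()    # unsystematic (random) difference
    rel = 100.0 * mbe / x.mean()         # relative difference in %
    return {"gm_slope": gm_slope, "gm_intercept": gm_intercept,
            "RMSE": rmse, "MBE": mbe,
            "MSDs": msd_s, "MSDu": msd_u, "rel_diff_pct": rel}
```

A useful property of this decomposition is that MSDs + MSDu reproduces the total MSD (= RMSE²), so the systematic and random contributions can be reported as complementary fractions.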
In the product evaluations, the following aspects are examined:
- Completeness: the completeness of a dataset is assessed in both its spatial and temporal extent. Data can be incomplete due to, for example, bad radiometric quality in one of the four PROBA-V spectral bands.
- Difference magnitude: presented as histograms of the differences between the examined and reference datasets, as well as global and/or regional scatterplots.
- Spatial similarity: the similarities and differences between the examined dataset and one or more reference dataset time series are shown in global maps, using the metrics introduced above.
- Temporal evolution: the metrics are computed per time step to present the temporal evolution of the overall, systematic, and random differences between two or more datasets.
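The per-time-step evaluation can be sketched as a loop over composite dates. This is a hypothetical example, assuming the datasets are stacked as arrays shaped (time, y, x) with NaN marking missing pixels (e.g. clouds or bad radiometric quality); each date is evaluated only on pixels that are valid in both datasets, which also yields the completeness fraction.

```python
import numpy as np

def metrics_per_timestep(ref, exam):
    """Per-date completeness, MBE, and RMSE for stacks shaped (time, y, x).

    Illustrative sketch: NaNs flag missing data, and the metrics for
    each time step are computed over the jointly valid pixels only.
    """
    results = []
    for r, e in zip(np.asarray(ref, float), np.asarray(exam, float)):
        valid = np.isfinite(r) & np.isfinite(e)
        d = e[valid] - r[valid]
        results.append({
            "completeness": valid.mean(),                       # fraction of valid pixels
            "MBE": d.mean() if d.size else np.nan,              # accuracy per date
            "RMSE": np.sqrt((d ** 2).mean()) if d.size else np.nan,  # uncertainty per date
        })
    return results
```

Plotting these per-date values against time then shows the temporal evolution of the overall, systematic, and random differences described above.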
The Product Evaluation Section contains the following reports:
- PROBA-V re-processing Change Summary: From August 2016 to January 2017, the entire PROBA-V archive (spanning October 2013 – present) was re-processed. This document summarises the major and minor modifications to algorithms, data, and metadata.
- PROBA-V Collection 1 Evaluation: this paper provides a comparison of the reprocessed PROBA-V data (Collection 1) with the previous version of the data archive (Collection 0). The comparison was carried out on S10 surface reflectance and NDVI data over the entire Collection 1 and Collection 0 data archives. The evaluation focuses on (i) qualitative and quantitative assessment of the new cloud detection scheme; (ii) quantification of the effect of the reprocessing by comparing C1 to C0; and (iii) evaluation of the spatio-temporal stability of the combined SPOT/VGT and PROBA-V archive through comparison to METOP/AVHRR.
- SPOT-VGT Collection 3 Evaluation: The entire SPOT-VGT data archive (21 April 1998 – 31 May 2014) was reprocessed in 2015 – 2016. The paper describes the comparison of S10 surface reflectance and NDVI from the re-processed Collection 3 with the previous Collection 2, as well as a consistency analysis between Collection 3 VGT1 and VGT2 data. In addition, the Collection 3 data were compared with external satellite reference data (MODIS and METOP-AVHRR).
- Comparison SPOT-VGT – PROBA-V: A short note reporting on the comparison between SPOT-VGT and PROBA-V surface reflectances and NDVI, for data before and after the re-processing of each archive. The comparison was performed on S10 composites for the overlapping observational period (November 2013 – May 2014).