Error Analysis and Assessment Criteria for solar channel inter-calibration methods
Agenda
Please consider how Patrice's method of Error Analysis and Fred's Qualitative Assessment Criteria could be applied to "your" method, what information is needed to perform this analysis, and when we can expect results.
Draft Minutes
Attendees
EUMETSAT | Tim Hewison (Chair), Yves Govaerts
CNES | Patrice Henry, Bertrand Fougnie, Denis Blumstein (briefly)
NOAA | Fred Wu, Fangfang Yu, Andy Heidinger, Bob Iacovazzi
KMA | Dohyeong Kim, Seung-Hee Ham
JMA | Arata Okuyama, Hiromi Owada, Yuki Kosaka, Kenji Date, Ryuichiro Nakayama
CMA | Xiuqing Hu, Jingjing Liu, Ling Sun, Ronghua Wu, Chengli Qi
NIST | Raju Datla, Ruediger Kessel
NASA | Dave Doelling
Key Points
Patrice described his method of uncertainty analysis as applied to the solar channels of POLDER-MERIS using Rayleigh scattering. As was reinforced by other presentations, there is a need to identify whether each term in the error budget contributes to the systematic (bias) or random (noise) uncertainties.
Action: Patrice to forward draft procedure prepared by NPL for the assessment of these type I and II errors, which follows CEOS guidance.
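A minimal sketch of this distinction, using a made-up forward model and placeholder values (none of the names or numbers below come from the presentations): a constant error in an input parameter propagates to a bias that averaging over collocations cannot remove, whereas zero-mean scatter in the same parameter only contributes noise that averages down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: simulated top-of-atmosphere reflectance used as the
# calibration reference. Purely illustrative; not any agency's algorithm.
def simulated_reflectance(surface_pressure_hpa):
    return 0.05 + 1.0e-4 * surface_pressure_hpa

true_pressure = 1013.0  # hPa
n_collocations = 10_000

# Term treated as SYSTEMATIC: the same +5 hPa error in every collocation.
bias = simulated_reflectance(true_pressure + 5.0) - simulated_reflectance(true_pressure)

# Term treated as RANDOM: zero-mean 5 hPa scatter, different in each collocation.
scatter = (simulated_reflectance(true_pressure + rng.normal(0.0, 5.0, n_collocations))
           - simulated_reflectance(true_pressure))

print(f"systematic term -> bias : {bias:+.2e} (does not average down)")
print(f"random term     -> mean : {scatter.mean():+.2e} (averages towards zero)")
print(f"random term     -> std  : {scatter.std():.2e} (contributes noise)")
```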
Patrice also highlighted the need to repeat the analysis for each instrument being inter-calibrated, as the uncertainties can vary with instrument. It was also pointed out that the difference between absolute and relative errors should be noted, as the former are important for traceability, while the latter determine the methods' stability and ability to detect temporal trends in the instruments' calibration.
Bertrand Fougnie outlined six steps needed to conduct a successful error analysis:
- Select different sensors
- Complete the calibration for the selected dataset
- Decide how to include terms in the error budget - either as systematic or random
- Quantify the impact of each contribution on the systematic and random uncertainties
- Summarise the results in a single table - separating the systematic and random contributions
- Evaluate the scalability of the results with the size of the dataset (illustrated in the sketch below)
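A minimal sketch of the last three steps, assuming independent Gaussian error terms and illustrative placeholder contributions and magnitudes (not results from the meeting): systematic terms are combined in quadrature and left unchanged, while random terms are also combined in quadrature but scale as 1/sqrt(N) as the matched dataset grows.

```python
import math

# Hypothetical error budget for one sensor pair; values are placeholders.
# (contribution, uncertainty in %, type)
budget = [
    ("radiative transfer model",  1.0, "systematic"),
    ("spectral band adjustment",  0.8, "systematic"),
    ("surface pressure",          0.3, "random"),
    ("aerosol variability",       0.6, "random"),
    ("geometric mismatch",        0.4, "random"),
]

def combine(budget, n_matches):
    """Combine terms in quadrature; random terms scale as 1/sqrt(N)."""
    u_sys = math.sqrt(sum(u**2 for _, u, t in budget if t == "systematic"))
    u_rnd = math.sqrt(sum(u**2 for _, u, t in budget if t == "random")) / math.sqrt(n_matches)
    return u_sys, u_rnd, math.hypot(u_sys, u_rnd)

# Single summary table, separating systematic and random contributions.
print(f"{'contribution':28s} {'u (%)':>6s} {'type':>11s}")
for name, u, t in budget:
    print(f"{name:28s} {u:6.2f} {t:>11s}")

# Scalability of the combined uncertainty with the size of the dataset.
for n in (10, 100, 1000):
    u_sys, u_rnd, u_tot = combine(budget, n)
    print(f"N={n:5d}  systematic={u_sys:.2f}%  random={u_rnd:.2f}%  total={u_tot:.2f}%")
```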
Fred Wu presented his draft Assessment Criteria and later Fangfang Yu gave an example of these applied to the lunar method. While some criteria overlap with the quantitative uncertainty analysis, it was still felt to be useful to complete such assessments, particularly when considering how to combine different inter-calibration methods. Ruediger confirmed the terminology was in line with usual metrological conventions. It was later suggested that these criteria be extended to include an assessment of whether observable parameters (such as target scene variance) could be used to provide quality indicators.
Action: Fred to consider including this as an additional criterion.
Dave Doelling outlined his plans to conduct an uncertainty analysis for Deep Convective Clouds based on an empirical assessment of observational data. Ideally, this should be complemented by modelling work, whose results should be independent of the observations, i.e. not based on cloud properties derived from satellite data.
Seung-Hee Ham outlined her plans to conduct the analysis of Liquid Water Clouds as invariant targets. This again followed six generic steps, which are applicable to all methods:
- Short description of the method
- Define the required input parameters and their sources
- Estimate typical uncertainty range in input parameters
- Estimate the resultant calibration uncertainty due to the input parameters (from sensitivity tests and case studies; see the sketch after this list)
- Evaluate the calibration method by applying to reference sensors
- Examine dependence of calibration errors on each input parameter
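A minimal sketch of the sensitivity-test step, using a hypothetical forward model and assumed input uncertainties (the function, parameters, and values below are placeholders, not the Liquid Water Cloud algorithm): each input is perturbed by its typical uncertainty in turn, and the relative change in the predicted reference reflectance is taken as that parameter's contribution to the calibration uncertainty.

```python
import numpy as np

# Placeholder forward model: predicted TOA reflectance over a liquid water
# cloud target as a function of two input parameters. Illustrative only.
def predicted_reflectance(cloud_optical_depth, effective_radius_um):
    return 0.6 * (1.0 - np.exp(-0.08 * cloud_optical_depth)) - 0.002 * effective_radius_um

# Nominal inputs and their assumed (typical) 1-sigma uncertainties.
nominal = {"cloud_optical_depth": 30.0, "effective_radius_um": 12.0}
sigma   = {"cloud_optical_depth": 3.0,  "effective_radius_um": 2.0}

ref = predicted_reflectance(**nominal)

# One-at-a-time sensitivity test: perturb each input by its uncertainty and
# record the relative change in the predicted reference reflectance.
for name in nominal:
    perturbed = dict(nominal)
    perturbed[name] += sigma[name]
    delta = predicted_reflectance(**perturbed) - ref
    print(f"{name:22s}: {100.0 * delta / ref:+.2f} % calibration impact")
```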
Patrice Henry also described the use of a large dataset of observations to derive estimates for different terms in the error budget of the desert method. He emphasised the need for datasets covering >>1 year to account for seasonal effects.
Arata Okuyama presented an initial analysis of the uncertainties in the Liquid Water Cloud method. Again, he emphasised the need for a detailed description of the method (a draft ATBD) and to consider the contributions from every process in the algorithm, even if they are negligible. He also proposed we should agree on standard terms and typical uncertainty values for each common input (e.g. surface pressure, wind speed, ...).
Andy Heidinger presented an initial analysis of the uncertainties in the Sun Glint method. CNES have also performed such an analysis.
Action: Andy Heidinger and CNES to compare results.
It was suggested that the different terms in the error budget be combined in three ways: best case, typical, and worst case. Although this was thought to be beneficial for non-Gaussian error distributions, the overall uncertainty should be independent of this choice. It was also pointed out that attention should be paid to correlations between error terms, as neglecting these could lead to serious underestimates of the overall uncertainty, as sketched below.
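A minimal sketch of that last point, with two hypothetical error terms and an assumed correlation coefficient (all values are illustrative): combining the terms as if they were independent underestimates the overall uncertainty whenever the correlation is positive.

```python
import math

# Two hypothetical error terms with a positive correlation (e.g. both driven
# by the same radiative transfer model); values are illustrative only.
u1, u2 = 1.0, 0.8   # standard uncertainties (%)
rho = 0.7           # assumed correlation coefficient between the two terms

u_uncorrelated = math.hypot(u1, u2)
u_correlated = math.sqrt(u1**2 + u2**2 + 2.0 * rho * u1 * u2)

print(f"assuming independence : {u_uncorrelated:.2f} %")
print(f"including correlation : {u_correlated:.2f} %  (larger for rho > 0)")
```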
Tim Hewison introduced a draft uncertainty analysis he conducted on the inter-calibration of the infrared channels of SEVIRI-IASI. He noted the most difficult term to quantify was the contribution from spectral mismatches, which introduce systematic uncertainties that can dominate the error budget. GSICS developers are encouraged to review and comment on this. Dave Doelling also outlined plans to conduct a similar analysis for the ray-matching comparison of the solar channels, for which a more comprehensive analysis of the geometric mismatch terms would be needed. However, Dave felt the largest uncertainties are likely to be introduced by spectral mismatches. It was suggested that methods to account for this term in the error budget could be validated using comparisons with hyperspectral instruments such as GOME-2 or SCIAMACHY.
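A minimal sketch of how such a validation might look, using a synthetic scene spectrum and Gaussian spectral response functions as placeholders (not GOME-2 or SCIAMACHY data, nor any real imager's SRF): the hyperspectral scene is weighted by each channel's SRF, and the ratio of the two band-integrated values gives the spectral band adjustment for that scene.

```python
import numpy as np

# Common wavelength grid for the synthetic hyperspectral scene and both SRFs.
wavelength_nm = np.linspace(400.0, 900.0, 501)

def gaussian_srf(centre_nm, width_nm):
    """Normalised Gaussian spectral response function on the common grid."""
    srf = np.exp(-0.5 * ((wavelength_nm - centre_nm) / width_nm) ** 2)
    return srf / srf.sum()

srf_monitored = gaussian_srf(650.0, 20.0)   # placeholder GEO imager visible channel
srf_reference = gaussian_srf(640.0, 15.0)   # placeholder reference sensor channel

# Synthetic, smoothly varying scene reflectance spectrum standing in for a
# hyperspectral measurement.
scene = 0.3 + 1.0e-4 * (wavelength_nm - 400.0)

band_monitored = np.dot(scene, srf_monitored)
band_reference = np.dot(scene, srf_reference)

print(f"spectral band adjustment factor for this scene: "
      f"{band_monitored / band_reference:.4f}")
```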
Ruediger Kessel's presentation on the comparison of inter-calibration results without a common reference was deferred to a dedicated web meeting on the subject of consistency between different GSICS products, tentatively scheduled for July 2010.
It was agreed that another web meeting to review these uncertainty analyses and assessments would be useful. It was expected that most PIs would have provisional results ready by this autumn. This can then be followed by a strategy to combine different methods to generate GSICS products for the solar channels of GEO imagers.
Action: Tim Hewison to schedule follow-up web meeting in late October 2010.