Error Analysis and Assessment Criteria for solar channel inter-calibration methods

Agenda

10 min  T. Hewison    Aim of Meeting & AOB
20 min  P. Henry      Error Analysis for Rayleigh Scattering
20 min  X. Wu         Assessment Criteria for inter-calibration methods
 5 min  D. Doelling   Application of above to Deep Convective Clouds
 5 min  S-H. Ham      Application of above to Liquid Water Clouds
 5 min  P. Henry      Application of above to Deserts/Bright Land Surfaces
 5 min  A. Okuyama    Application of above to JMA Vicarious Calibration
 5 min  A. Heidinger  Application of above to Sun Glint
 5 min  X. Wu         Application of above to Moon and Stars
 5 min  T. Hewison    Application of above to GEO-LEO IR hyperspectral
 5 min  D. Doelling   Application of above to Direct Ray-matching Comparisons
10 min  R. Kessel     Potential for combining inter-calibration methods

Please consider how Patrice's method of Error Analysis and Fred's Qualitative Assessment Criteria could be applied to "your" method, what information is needed to perform this analysis and when we can expect results.

Draft Minutes

Attendees

EUMETSAT

Tim Hewison (Chair), Yves Govaerts

CNES

Patrice Henry, Bertrand Fougnie, Denis Blumstein (briefly)

NOAA

Fred Wu, Fangfang Yu, Andy Heidinger, Bob Iacovazzi

KMA

Dohyeong Kim, Seung-Hee Ham

JMA

Arata Okuyama, Hiromi Owada, Yuki Kosaka, Kenji Date, Ryuichiro Nakayama

CMA

Xiuqing Hu, Jingjing Liu, Ling Sun, Ronghua Wu, Chengli Qi

NIST

Raju Datla, Ruediger Kessel

NASA

Dave Doelling

Key Points

Patrice described his method of uncertainty analysis as applied to the solar channels of POLDER-MERIS using Rayleigh scattering. As was reinforced by other presentations, we heard of the need to identify whether each term in the error budget contributes to the systematic (bias) or random (noise) uncertainties. Action: Patrice to forward draft procedure prepared by NPL for the assessment of these type I and II errors, which follows CEOS guidance.
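
As a purely illustrative sketch of this distinction (using simulated collocation differences, not real data): the mean of the monitored-minus-reference differences estimates the systematic (bias) component, their scatter gives the random (noise) component, and only the random part averages down as the number of collocations grows.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical collocation differences (monitored minus reference, in %),
    # simulated here as a fixed bias plus random scatter.
    true_bias, true_noise, n = 1.5, 2.0, 400
    diff = true_bias + rng.normal(0.0, true_noise, n)

    systematic = diff.mean()                 # estimate of the bias term
    random_sd = diff.std(ddof=1)             # scatter of a single collocation
    random_of_mean = random_sd / np.sqrt(n)  # shrinks as 1/sqrt(N)

    print(f"bias = {systematic:.2f} %, scatter = {random_sd:.2f} %, "
          f"uncertainty of the mean = {random_of_mean:.2f} %")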

Patrice also highlighted the need to repeat the analysis for each instrument being inter-calibrated, as the uncertainties can vary from instrument to instrument. It was also pointed out that the difference between absolute and relative errors should be noted: the former are important for traceability, while the latter determine a method's stability and its ability to detect temporal trends in the instruments' calibration.

Bertrand Fougnie outlined six steps needed to conduct a successful error analysis (a brief sketch of steps 4-6 follows the list):
  1. Select different sensors
  2. Complete the calibration for the selected dataset
  3. Decide how to include terms in the error budget - either as systematic or random
  4. Quantify for each contribution the impact on the systematic and random uncertainty
  5. Summarise the results in a single table - separating the systematic and random contributions
  6. Evaluate the scalability of the results with the size of the dataset
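
As an illustration of steps 4-6, the following minimal sketch (all contribution values are hypothetical) summarises systematic and random contributions in a single table, combines each class in quadrature, and shows how only the random total averages down with the size of the dataset:

    import numpy as np

    # Hypothetical error budget for a solar-channel method (values in %).
    budget = {
        "geolocation":         {"systematic": 0.1, "random": 0.8},
        "aerosol model":       {"systematic": 0.5, "random": 0.3},
        "surface reflectance": {"systematic": 0.3, "random": 0.6},
    }

    sys_total = np.sqrt(sum(v["systematic"] ** 2 for v in budget.values()))
    rnd_total = np.sqrt(sum(v["random"] ** 2 for v in budget.values()))

    print(f"{'term':<22}{'systematic':>12}{'random':>10}")
    for name, v in budget.items():
        print(f"{name:<22}{v['systematic']:>12.2f}{v['random']:>10.2f}")
    print(f"{'combined':<22}{sys_total:>12.2f}{rnd_total:>10.2f}")

    # Step 6: the random total scales with dataset size N; the systematic total does not.
    for n in (10, 100, 1000):
        total = np.sqrt(sys_total ** 2 + (rnd_total / np.sqrt(n)) ** 2)
        print(f"N = {n:>5}: overall uncertainty = {total:.2f} %")
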
Fred Wu presented his draft Assessment Criteria and later Fangfang Yu gave an example of these applied to the lunar method. While some of these criteria overlap with the quantitative uncertainty analysis, it was still felt useful to complete such assessments, particularly when considering how to combine different inter-calibration methods. Ruediger confirmed the terminology was in line with usual metrological conventions. It was later suggested that these criteria be extended to include an assessment of whether observable parameters (such as target scene variance) could be used to provide quality indicators. Action: Fred to consider including this as an additional criterion.

Dave Doelling outlined his plans to conduct an uncertainty analysis for Deep Convective Clouds based on an empirical assessment of observational data. Ideally this should be complemented by modelling work, although the modelling results should be independent of the observations, i.e. not based on cloud properties derived from satellite data.

Seung-Hee Ham outlined her plans to conduct the analysis of Liquid Water Clouds as invariant targets. This again followed six generic steps, which are applicable to all methods:
  1. Short description of the method
  2. Define the required input parameters and their sources
  3. Estimate typical uncertainty range in input parameters
  4. Estimate the resulting calibration uncertainty for each input parameter (from sensitivity tests and case studies; see the sketch after this list)
  5. Evaluate the calibration method by applying to reference sensors
  6. Examine dependence of calibration errors on each input parameter
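
As an illustration of step 4, the sketch below runs a one-at-a-time sensitivity test; the function simulated_reflectance and all parameter values are hypothetical placeholders for the method's actual radiative transfer calculations and input uncertainties:

    import numpy as np

    def simulated_reflectance(cod, reff, wind_speed):
        # Placeholder forward model for a liquid water cloud target; a real
        # analysis would call the method's radiative transfer code here.
        return 0.6 + 0.002 * cod - 0.001 * reff - 0.0005 * wind_speed

    nominal = {"cod": 30.0, "reff": 12.0, "wind_speed": 5.0}  # step 2: input parameters
    sigma = {"cod": 5.0, "reff": 2.0, "wind_speed": 2.0}      # step 3: assumed 1-sigma uncertainties

    ref0 = simulated_reflectance(**nominal)
    for name, s in sigma.items():
        perturbed = dict(nominal, **{name: nominal[name] + s})
        dref = simulated_reflectance(**perturbed) - ref0
        print(f"{name:>10}: {100 * dref / ref0:+.2f} % impact on simulated reflectance")
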
Patrice Henry also described the use of a large dataset of observations to derive estimates for different terms in the error budget of the desert method. He emphasised the need for datasets covering >>1 year to account for seasonal effects.

Arata Okuyama presented an initial analysis of the uncertainties in the Liquid Water Cloud method. Again he emphasised the need for a detailed description of the method (a draft ATBD) and to consider the contributions from every process in the algorithm, even those that appear negligible. He also proposed we should agree standard terms and typical uncertainty values for each common input (e.g. surface pressure, wind speed, ...).

Andy Heidinger presented an initial analysis of the uncertainties in the Sun Glint method. CNES have also performed such an analysis. Action: Andy Heidinger and CNES to compare results.

It was suggested that the different terms in the error budget be combined in three ways: best case, typical, worst case. Although this was thought to be beneficial for non-Gaussian error distributions, the overall uncertainty should be independent of this choice. It was also pointed out that attention should be paid to correlations between error terms, as neglecting them could lead to serious underestimates of the overall uncertainty.
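
For two error terms u1 and u2 with correlation coefficient r, the combined standard uncertainty is sqrt(u1^2 + u2^2 + 2*r*u1*u2), so treating positively correlated terms as independent understates the total. A minimal sketch with assumed values:

    import numpy as np

    u1, u2 = 1.0, 1.5  # two hypothetical error terms (%)

    for r in (0.0, 0.5, 1.0):  # correlation coefficient between the terms
        combined = np.sqrt(u1 ** 2 + u2 ** 2 + 2 * r * u1 * u2)
        print(f"r = {r:.1f}: combined uncertainty = {combined:.2f} %")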

Tim Hewison introduced a draft uncertainty analysis he conducted on the inter-calibration of the infrared channels of SEVIRI-IASI. He noted the most difficult term to quantify was the contribution from spectral mismatches, which introduce systematic uncertainties that can dominate the error budget. GSICS developers are encouraged to review and comment on this. Dave Doelling also outlined plans to conduct a similar analysis for the ray-matching comparison of the solar channels, for which a more comprehensive analysis of the geometric mismatch terms would be needed. However, Dave felt the largest uncertainties are likely to be introduced by spectral mismatches. It was suggested that methods to account for this term in the error budget could be validated using comparisons with hyperspectral instruments such as GOME-2 or SCIAMACHY.
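
As a rough sketch of how such a validation could work (synthetic spectra and Gaussian spectral response functions, for illustration only): convolve hyperspectral-like spectra with the two imagers' spectral response functions and derive the mean band-adjustment ratio and its scene-to-scene scatter.

    import numpy as np

    rng = np.random.default_rng(1)
    wl = np.arange(400.0, 900.0, 1.0)  # wavelength grid (nm)

    def srf(centre, width):
        # Illustrative Gaussian spectral response function (not a real imager SRF).
        return np.exp(-0.5 * ((wl - centre) / width) ** 2)

    srf_a, srf_b = srf(640.0, 30.0), srf(670.0, 25.0)  # two hypothetical channels

    ratios = []
    for _ in range(200):  # stand-in for a set of hyperspectral scenes
        slope = rng.uniform(500.0, 1200.0)              # vary the spectral shape per scene
        spectrum = 100.0 * np.exp(-(wl - 400.0) / slope)
        rad_a = (spectrum * srf_a).sum() / srf_a.sum()  # SRF-weighted band radiance
        rad_b = (spectrum * srf_b).sum() / srf_b.sum()
        ratios.append(rad_a / rad_b)

    print(f"band adjustment ratio = {np.mean(ratios):.4f} +/- {np.std(ratios):.4f}")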

Ruediger Kessel's presentation on the comparison of inter-calibration results without a common reference was deferred to a dedicated web meeting on the subject of consistency between different GSICS products, tentatively scheduled for July 2010.

It was agreed that another web meeting to review these uncertainty analyses and assessments would be useful. It was expected that most PIs will have provisional results ready by this autumn. This can then be followed by a strategy to combine different methods to generate GSICS products for the solar channels of GEO imagers. Action: Tim Hewison to schedule follow-up web meeting in late October 2010.

Attachments

20100422_JMA_ErrorAnalysis.ppt (329 K, 21 Apr 2010, HiromiOwada): Trial error estimation, liquid cloud target, JMA/MSC
Absolute_Calibration_Using_Rayleigh_Scattering.pdf (218 K, 15 Apr 2010, TimHewison): Patrice Henry's error analysis for LEO Rayleigh Scattering
Action_GRWG05-01.ppt (429 K, 20 Apr 2010, XiangqianWu): "Criteria for Qualitative Evaluation" presentation from Fred Wu
Agenda.ppt (166 K, 21 Apr 2010, TimHewison): Agenda, Aims, Brief and Conclusions
Comparability_without_common_reference.pdf (72 K, 21 Apr 2010, TimHewison): Ruediger Kessel's Comparison without Common Reference
Desert_calibration_uncertainties.ppt (346 K, 22 Apr 2010, TimHewison): Patrice's review of uncertainties in desert calibration
GSICS_SEVIRI-IASI_Inter-calibration_Uncertainty_Analysis.DOC (441 K, 22 Apr 2010, TimHewison): Uncertainty Analysis for SEVIRI-IASI inter-calibration - Word document
GSICS_SEVIRI-IASI_Inter-calibration_Uncertainty_Analysis.xls (122 K, 22 Apr 2010, TimHewison): Uncertainty Analysis for SEVIRI-IASI inter-calibration - Excel spreadsheet
Liquid_water_cloud.ppt (829 K, 21 Apr 2010, TimHewison): Seung-Hee's Liquid Water Cloud
error.budget.GOES.vs.Moon_Star.ppt (468 K, 22 Apr 2010, FangfangYu)
glint_cal_update.ppt (551 K, 22 Apr 2010, TimHewison): Andy's error estimates of the solar glint method