Data Quality and Limitations

"Significant or known data limitations should be identified. Performance data need not be perfect to be reliable. At the same time, significant data limitations can lead to bad decisions." – U.S. Office of Management and Budget (OMB)

In ADS 203.3.3.5, USAID calls for the identification of known data limitations associated with every performance indicator included in a PMP, along with a description of the steps that will be taken to address them. When a Mission approves a PMP, it is also approving, and committing to implement, plans for reducing known threats to data timeliness, validity, reliability, precision and integrity.

To further ensure the quality of its performance data, USAID requires that a Data Quality Assessment be carried out for every performance indicator on which it will report externally, within the three-year period before that data is published or otherwise released. PMPs are expected to include plans for meeting this requirement. For new indicators included in a CDCS, this means that Missions may have very little time between CDCS approval and the first time they must report on new indicators for which Data Quality Assessments will be needed.

A key challenge for USAID staff involved in developing a PMP lies in identifying, often in advance of using them, the likely data limitations of new performance indicators and indicator data collection plans. The table below, which is keyed to USAID's data quality standards for timeliness, validity, reliability, precision and integrity, describes some of the issues that could threaten data quality.

USAID Data Quality Standards and Possible Data Quality Threats/Limitations
Timeliness
  • Late delivery
  • Out of date information
  • Collected too infrequently
Validity/Accuracy
  • Poorly structured instruments
  • Reliance on proxy measures
  • Inconsistencies in data collection process
  • Instruments not always completed
  • Transcription errors
  • Small or possibly biased sample/not representative
  • Under-skilled or under-supervised data collectors
Reliability
  • Data collection technique is unstructured
  • Data collection too costly to repeat
  • Changes were made in the instruments
Precision
  • Response categories not sufficiently fine-grained
  • Rounding at too high a level
  • Unacceptable margin of error
Integrity
  • Incentives in the data delivery system
  • Incentives in the partner performance arrangements
  • Uncertainty about data quality from secondary source
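As an illustration of the "unacceptable margin of error" threat listed under the Precision standard: for a survey-based indicator that estimates a proportion, the sampling margin of error can be approximated from the sample size before data collection begins. The function below is a hypothetical sketch for planning purposes, not part of USAID guidance; it uses the standard normal-approximation formula for a proportion.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion.

    p_hat: expected proportion (0.5 gives the most conservative estimate)
    n: sample size
    z: critical value for the confidence level (1.96 for 95% confidence)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A household survey with n = 400 and an expected 50% adoption rate:
moe = margin_of_error(0.5, 400)
print(f"{moe:.3f}")  # about 0.049, i.e. roughly +/- 5 percentage points
```

A check like this can flag, at the PMP design stage, whether a planned sample is too small to detect the changes an indicator is meant to track; quadrupling the sample size halves the margin of error.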