HOW TO CONDUCT A DATA QUALITY ASSESSMENT
This systematic approach is valuable because it examines the broader set of issues that determine data
quality over time, rather than merely checking whether one specific number is accurate. For example, a
number may be reported correctly yet still not be valid.
As mentioned above, the purpose of a DQA is to assess the data management systems of USAID’s
Implementing Partners (IPs), by analyzing program indicators using data quality standards of validity,
integrity, precision, reliability, and timeliness (V-I-P-R-T). These five standards are defined below in Table 1.
A DQA assesses the quality of data and information an IP submits by analyzing the process used to collect,
store, and transmit data to USAID/DRC. It highlights strengths and weaknesses of partners’ primary and
secondary data and provides recommendations for improving the data management system of the IP. In
sum, a DQA:
- Assesses the quality of data submitted by these IPs in relation to the V-I-P-R-T data quality standards.
- Assesses the system that the IP uses to collect and analyze data.
- Assesses the management information system the partner uses to record, maintain, and report data.
- Identifies areas of potential vulnerability that affect the general credibility and usefulness of the data.
- Recommends measures to address any identified weaknesses in the data submitted by the IP and in the
  M&E procedures and systems in place at the partner's level.
Table 1: DQA STANDARDS
Validity: Data should clearly and adequately represent the intended results. While proxy data may be
used, the Mission must consider how well the data measure the intended result. Another issue is whether
the data reflect bias, such as interviewer bias, unrepresentative sampling, or transcription bias.

Integrity: When data are collected, analyzed, and reported, there should be mechanisms in place to
reduce the possibility that they are intentionally manipulated for any reason. Data integrity is at greatest
risk of being compromised during data collection and analysis.

Precision: Data should be precise enough to present a fair picture of performance and enable
management decision making at the appropriate levels. One issue is whether data are at an appropriate
level of detail to influence related management decisions. A second issue is whether the margin of error
(the amount of variation normally expected from a given data collection process) is acceptable given the
management decisions likely to be affected.

Reliability: Data should reflect stable and consistent data collection processes and analysis methods
over time. The key issue is whether analysts and managers would come to the same conclusions if the
data collection and analysis process were repeated. The Mission should be confident that progress
toward performance targets reflects real changes rather than variations in data collection methods.
When data collection and analysis methods change, PMPs should be updated.

Timeliness: Data should be timely enough to influence management decision making at the appropriate
levels. One key issue is whether the data are available frequently enough to influence the appropriate
level of management decisions. A second is whether data are current enough when they are reported.
WHAT ARE THE MAIN TYPES OF DQAS?
The DQA is an assessment exclusively focused on the quality of data. It is not an audit of selected
indicators, even though data quality issues may sometimes call into question the choice of indicators for
a given project. USAID/DRC has decided to characterize three main categories of DQAs: