Methodology

This page describes the data sources, processing pipeline, and methodological choices used to construct the Target 3 funding dataset. The methodology was refined over a year of consultations with data and subject-matter experts to maximize transparency, replicability, and long-term sustainability. Because reporting is incomplete and heterogeneous across donors, the resulting figures should be interpreted as conservative estimates of Target 3 relevant funding.
1. Scope and Purpose. We track Target 3 relevant international public and philanthropic funding to ODA-eligible countries (2014–2024). The base unit of analysis is an IATI-style activity, which may include sub-activities when detailed budgets are available.
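To make the unit of analysis concrete, here is a minimal Python sketch; the field names (`activity_id`, `sub_activities`, etc.) are illustrative and far simpler than the harmonized schema described in Section 4.

```python
from dataclasses import dataclass, field

@dataclass
class SubActivity:
    sub_id: str
    title: str
    budget_usd: float  # component budget, where the donor reports one

@dataclass
class Activity:
    activity_id: str        # IATI-style identifier
    reporting_org: str
    recipient_country: str  # ISO 3166-1 alpha-3 code
    year: int
    sub_activities: list[SubActivity] = field(default_factory=list)

    @property
    def has_component_detail(self) -> bool:
        # Analysis can descend to sub-activities only when detailed
        # budgets are available.
        return bool(self.sub_activities)
```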
2. Data Sources and Ingestion. Primary sources include International Aid Transparency Initiative (IATI) activity archives, the OECD Creditor Reporting System (CRS), public philanthropic grant databases, multilateral fund records with project documentation, and a confidential aggregated marine philanthropy series.
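A hedged sketch of how multi-source ingestion can be organized follows; the loader functions are stubs standing in for the real source readers (IATI XML archives, CRS flat files, fund project tables), and the record keys are illustrative.

```python
from typing import Callable, Iterable

# Stub loaders; each yields dicts sharing a minimal key set.
def load_iati() -> Iterable[dict]:
    yield {"activity_id": "XM-EX-1-001", "title": "Marine protected area support"}

def load_crs() -> Iterable[dict]:
    yield {"activity_id": "CRS-2021-0042", "title": "Coastal conservation grant"}

SOURCES: dict[str, Callable[[], Iterable[dict]]] = {
    "iati": load_iati,
    "oecd_crs": load_crs,
}

def ingest_all() -> list[dict]:
    """Pull every configured source into one list, tagging provenance."""
    records: list[dict] = []
    for name, loader in SOURCES.items():
        for raw in loader():
            raw["source"] = name  # provenance supports later dedup and audits
            records.append(raw)
    return records
```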
3. Harmonization and Standardization. All records are mapped to a standard activity schema covering identifiers, organizations, geography, text fields, and validated transactions. Transactions are converted to constant 2024 USD using European Central Bank (ECB) FX rates and U.S. Bureau of Labor Statistics (BLS) inflation indices.
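The currency conversion is a two-step operation: nominal local currency to nominal USD at the FX rate for the transaction year, then to constant 2024 USD via a price deflator. The sketch below uses illustrative rate values; the pipeline itself draws on the ECB and BLS series.

```python
# Illustrative rate tables, not the actual ECB/BLS series.
FX_PER_USD = {("EUR", 2021): 0.845}   # units of currency per 1 USD
CPI = {2021: 271.0, 2024: 313.7}      # US CPI, 2024 taken as the base year

def to_constant_2024_usd(amount: float, currency: str, year: int) -> float:
    """Convert a nominal transaction to constant 2024 USD in two steps."""
    # Step 1: nominal local currency -> nominal USD at that year's rate.
    nominal_usd = amount if currency == "USD" else amount / FX_PER_USD[(currency, year)]
    # Step 2: nominal USD -> constant 2024 USD via the CPI deflator.
    return nominal_usd * CPI[2024] / CPI[year]

# e.g. to_constant_2024_usd(1_000_000, "EUR", 2021)
# converts at the 2021 rate, then inflates to 2024 price levels.
```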
4. Identifying Target 3 Relevant Activities. A layered approach combines rule-based preselection, LLM-assisted review, and expert judgment to classify relevance (principal, significant, needs review), site type, stage, and domain.
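As an illustration of the rule-based preselection layer, a minimal keyword screen might look like the following; the production rule set is broader, and a match routes the activity into LLM-assisted review and expert judgment rather than accepting it outright.

```python
import re

# Illustrative keyword rules only.
INCLUDE_PATTERNS = [
    re.compile(r"\bprotected areas?\b", re.I),
    re.compile(r"\bmarine (reserve|park)\b", re.I),
    re.compile(r"\bOECMs?\b"),  # case-sensitive: the acronym only
]

def preselect(text: str) -> bool:
    """True if any rule fires on the activity's merged title/description."""
    return any(p.search(text) for p in INCLUDE_PATTERNS)
```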
5. Construction of the Final Dataset. Human-approved activities from all sources are merged, and duplicate reporting hierarchies are removed so the same funds are not counted twice (e.g., a bilateral transfer to a fund reported alongside the fund's own projects, or GEF records that also appear in implementing agency reports). Special handling covers GEF components, debt-for-nature swaps, and annualization of multi-year grants.
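As one example of the special handling, a simple annualization rule spreads a multi-year grant evenly across the years it covers. Even spreading is an assumption of this sketch, reasonable when no yearly tranches are reported.

```python
def annualize(amount_usd: float, start_year: int, end_year: int) -> dict[int, float]:
    """Spread a multi-year grant evenly across the years it covers."""
    years = range(start_year, end_year + 1)
    per_year = amount_usd / len(years)
    return {year: per_year for year in years}

# e.g. annualize(3_000_000, 2020, 2022)
# -> {2020: 1000000.0, 2021: 1000000.0, 2022: 1000000.0}
```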
6. Relevant Share Adjustments and Target 3 Weights. Where budget detail exists, reviewers derive project- or component-level relevant shares. Otherwise, qualitative cues inform bespoke weights; remaining cases fall back to default adjustments of 100% for principal and 40% for significant relevance, applied to disbursement and expenditure transactions only.
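A minimal sketch of applying these weights per transaction follows. The 100%/40% defaults and the restriction to disbursements/expenditures come from the rule above; treating unresolved "needs review" items as contributing zero is an assumption of this sketch, not a stated rule.

```python
DEFAULT_WEIGHTS = {"principal": 1.0, "significant": 0.4}
COUNTED_TYPES = {"disbursement", "expenditure"}

def weighted_value(txn_type: str, amount_usd: float, relevance: str,
                   reviewed_share: float | None = None) -> float:
    """Target 3 weighted value of a single transaction."""
    if txn_type not in COUNTED_TYPES:
        return 0.0  # weights apply to disbursement/expenditure transactions only
    if reviewed_share is not None:
        return amount_usd * reviewed_share  # reviewer-derived share takes precedence
    # Assumption of this sketch: unresolved "needs review" items count as zero.
    return amount_usd * DEFAULT_WEIGHTS.get(relevance, 0.0)
```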
7. Quality Assurance, Audits, and Sensitivity Analysis. Quality assurance combines iterative spot checks of parsing, merged text, and LLM outputs with audits of rejected and principal-rated projects; sensitivity tests on relevance weights and classifications confirm that headline trends and rankings are stable.
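One such sensitivity check can be expressed as recomputing the weighted total under alternative significant weights; the weight grid and toy transactions below are illustrative.

```python
def total_under_weight(txns: list[dict], significant_weight: float) -> float:
    """Recompute the weighted total with an alternative significant weight."""
    return sum(
        t["amount_usd"] * (1.0 if t["relevance"] == "principal" else significant_weight)
        for t in txns
    )

toy = [
    {"amount_usd": 10.0, "relevance": "principal"},
    {"amount_usd": 5.0, "relevance": "significant"},
]
for w in (0.25, 0.40, 0.50):
    print(w, total_under_weight(toy, w))
# If trends and rankings hold across the weight grid, they are robust to this choice.
```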
8. Limitations and Interpretation. Coverage is constrained by what donors report publicly; totals should therefore be read as a conservative lower bound. LLMs accelerate review, but their outputs pass through schema validation and audits, and every approved activity is confirmed by a human reviewer.