Climate Claims: Fact-Checked Archive | Lie Library

A searchable, citation-backed archive of climate claims: misleading statements about climate change, renewable energy, and environmental policy. Every entry links to primary sources.

Why climate claims matter in public discourse

Climate policy shapes energy prices, infrastructure, disaster readiness, and jobs. The public debate is saturated with climate claims that range from carefully sourced to misleading and sometimes outright false. A topic landing page that aggregates and audits those statements helps journalists, educators, and developers avoid misinformation and find reliable primary sources quickly.

This guide explains how to evaluate and operationalize climate claims in a structured, developer-friendly way. It highlights the types of statements you will encounter, the data you need to test them, and the implementation details for building a searchable, citation-backed archive. It also shows how the archive connects to QR-coded merch that points to the receipts, so evidence travels with every sticker, tee, mug, or hat.

Throughout, you will find practical examples, code snippets, and checklists you can reuse. The goal is to make the process repeatable, transparent, and credible so that a single statement about climate can be traced to evidence in seconds.

Core concepts and fundamentals

Common types of climate claims

  • Temperature and trend claims - global or regional warming rates, record years, pauses
  • Emissions claims - national vs global CO2 trajectories, per capita comparisons, baseline year framing
  • Extreme weather attribution - hurricanes, wildfires, floods, droughts, and links to climate change
  • Energy mix and reliability - renewables vs fossil fuels, grid stability, capacity factors, storage
  • Economic impacts - jobs, costs of regulations, energy prices, consumer bills, subsidies and tax credits
  • Policy performance - Paris Agreement commitments, pipeline or drilling approvals, permitting speed

Data hygiene - metrics, scope, and baselines

Every climate statement lives or dies on scope, baseline, units, and timeframe. A credible archive always asks:

  • What is the geographic scope - global, national, state, or local
  • What is the timeframe - single year, rolling average, trend, projection
  • Which baseline year anchors the comparison - 1990, 2005, pre-industrial, or other
  • Which units - CO2 vs CO2e, kWh vs MWh, nameplate vs delivered energy
  • Is it observation or model projection - and what is the model
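To see why the baseline question matters, here is a minimal sketch with made-up inventory numbers (not real data): the same 2023 value reads as an increase against a 1990 baseline but a decrease against 2005.

```python
def pct_change_from_baseline(inventory, baseline_year, target_year):
    """Percent change in emissions from baseline_year to target_year.

    inventory: dict {year: MtCO2e}; returns None if either year is missing.
    """
    base = inventory.get(baseline_year)
    curr = inventory.get(target_year)
    if base is None or curr is None or base == 0:
        return None
    return 100.0 * (curr - base) / base

# Illustrative (made-up) national inventory in MtCO2e
inventory = {1990: 5000.0, 2005: 6000.0, 2023: 5100.0}

print(pct_change_from_baseline(inventory, 1990, 2023))  # 2.0 (up vs 1990)
print(pct_change_from_baseline(inventory, 2005, 2023))  # -15.0 (down vs 2005)
```

A credible entry reports both framings rather than letting the speaker's chosen baseline stand alone.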

Canonical sources for verification

  • Temperature and attribution - NASA GISTEMP, NOAA, Berkeley Earth, IPCC assessments
  • Emissions inventories - Global Carbon Project, EPA, EIA, UNFCCC National Inventory Reports
  • Energy mix and prices - U.S. EIA, FERC, regional grid operators like PJM, CAISO, ERCOT
  • Employment and economic impacts - BLS, BEA, EIA Annual Energy Outlook, CBO
  • Policy documents - treaty texts, executive orders, agency rules, court decisions

A minimal schema for climate statements

Use a normalized object to capture the claim, context, and evidence. This reduces ambiguity and supports precise search.

{
  "claim_id": "clm-2026-0001",
  "speaker": "Public figure",
  "date_uttered": "2019-04-08",
  "topic": ["climate", "energy", "health"],
  "statement": "Wind turbines cause cancer.",
  "verdict": "False",
  "metrics": [],
  "context": {
    "scope": "US",
    "medium": "speech",
    "location": "Event name, city"
  },
  "citations": [
    {
      "type": "primary",
      "title": "Speech transcript",
      "url": "https://example.gov/transcript",
      "retrieved": "2026-05-01"
    },
    {
      "type": "scientific",
      "title": "WHO fact sheet on wind turbine noise and health",
      "url": "https://www.who.int/",
      "retrieved": "2026-05-01"
    }
  ],
  "notes": "No peer-reviewed evidence links audible or infrasound from turbines to cancer."
}

Practical applications and examples

Search that surfaces evidence, not just quotes

A robust topic landing experience for climate claims should return statements alongside direct links to primary sources. Use fielded search so users can filter by speaker, date range, claim type, verdict, and dataset used in the evaluation.

For PostgreSQL full text search, index the statement, speaker, and notes fields, and use trigram search for fuzzy matching of misheard phrases. Example:

-- Enable extensions
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE EXTENSION IF NOT EXISTS unaccent;

-- unaccent() is only STABLE, so wrap it in an IMMUTABLE function
-- before using it in an index expression
CREATE FUNCTION immutable_unaccent(text) RETURNS text AS
$$ SELECT unaccent('unaccent', $1) $$
LANGUAGE sql IMMUTABLE PARALLEL SAFE;

-- Search index
CREATE INDEX idx_claim_fts ON claims
USING GIN (to_tsvector('english', immutable_unaccent(statement || ' ' || speaker || ' ' || coalesce(notes, ''))));

-- Fuzzy filter for misspellings
SELECT claim_id, speaker, date_uttered, verdict
FROM claims
WHERE to_tsvector('english', immutable_unaccent(statement || ' ' || speaker || ' ' || coalesce(notes, '')))
      @@ plainto_tsquery('english', 'wind turbines cancer')
   OR similarity(statement, 'windmill cancer') > 0.35
ORDER BY ts_rank(to_tsvector('english', immutable_unaccent(statement)), plainto_tsquery('english', 'wind turbines cancer')) DESC
LIMIT 25;

Example workflows

  • Myth-busting a health claim about renewables - collect the original quote, capture the exact wording, then list peer-reviewed studies and agency summaries that address the mechanism. Contrast mechanism vs mere correlation.
  • Evaluating a cost claim about climate rules - identify the baseline scenario, discount rate, and whether costs are private, social, or net of tax credits. Link to agency Regulatory Impact Analysis.
  • Assessing an emissions brag or blame - specify the baseline year and sectoral scope. Distinguish territorial emissions from consumption-based accounting.

From archive to QR-coded merch

Each entry can generate scannable receipts. When a user buys a sticker with a quote, the QR code should resolve to a stable claim URL with transcript, datasets, and verdict. This makes fact-checking portable and shareable at rallies, classrooms, and newsrooms.

  • Stable slugs - include claim_id and a short, human-friendly slug
  • UTM tags - track scans from merch vs social posts
  • Image alt text - accurate quote and context for accessibility
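The three requirements above can be folded into one URL builder. A sketch follows; the domain and UTM values are placeholders, not Lie Library's actual URL scheme:

```python
import re
from urllib.parse import urlencode

def claim_url(claim_id, title, source="merch-sticker", base="https://example.com/claims"):
    """Build a stable, trackable claim URL: claim_id + human-friendly slug + UTM tags."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")[:60]
    query = urlencode({
        "utm_source": source,      # distinguish merch scans from social posts
        "utm_medium": "qr",
        "utm_campaign": claim_id,
    })
    return f"{base}/{claim_id}/{slug}?{query}"

print(claim_url("clm-2026-0001", "Wind turbines cause cancer."))
# https://example.com/claims/clm-2026-0001/wind-turbines-cause-cancer?utm_source=merch-sticker&utm_medium=qr&utm_campaign=clm-2026-0001
```

Because the claim_id leads the path, the link survives later edits to the slug text.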

If you are bridging climate with other topics in ecommerce, see Best Immigration Claims Sources for Political Merch and Ecommerce for a source-first approach you can mirror for climate claims. For issue tie-ins around election narratives, browse 2020 Election and Aftermath Hats | Lie Library.

Programmatic claim enrichment

Automate unit conversions and baseline adjustments to keep cross-claim comparisons consistent.

# Helpers: normalize emissions to a chosen baseline year, and convert kWh to MWh
def normalize_emissions(value_mtco2e, baseline_year, target_year, inventory):
    """
    value_mtco2e: numeric emissions value
    baseline_year: int, e.g., 2005
    target_year: int, e.g., 2023
    inventory: dict {year: value_mtco2e}
    returns percent_change from baseline
    """
    base = inventory.get(baseline_year)
    curr = inventory.get(target_year)
    if base is None or curr is None or base == 0:
        return None
    return 100.0 * (curr - base) / base

def kwh_to_mwh(kwh):
    return kwh / 1000.0

Best practices and tips

A precision checklist for climate statements

  • Quote the claim exactly - do not paraphrase until after you capture the verbatim text
  • State scope and baseline up front - geography, sector, baseline year
  • Pin the metric - temperature anomaly, CO2 mass, $ per kWh, jobs, capacity factor
  • Cite at least one primary source - transcript, video, official document - plus a technical source if relevant
  • Separate observation from projection - label each clearly
  • Document uncertainty - confidence intervals, ensemble ranges, model caveats
  • Use plain units and supply conversions - do not bury readers in acronyms
  • Keep verdicts short and evidence rich - readers should see the receipts in two clicks
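Several of these checklist items can be enforced mechanically at ingestion. A minimal sketch, using the field names from the schema earlier in this guide; the specific rule set and verdict vocabulary are assumptions, not Lie Library's actual editorial gate:

```python
def checklist_violations(claim):
    """Return a list of precision-checklist problems for a claim dict."""
    problems = []
    if not claim.get("statement"):
        problems.append("missing verbatim statement")
    if not claim.get("context", {}).get("scope"):
        problems.append("missing geographic scope")
    if not any(c.get("type") == "primary" for c in claim.get("citations", [])):
        problems.append("no primary source cited")
    # Assumed verdict vocabulary for illustration
    if claim.get("verdict") not in {"True", "Mostly True", "Mixed", "Mostly False", "False"}:
        problems.append("verdict missing or non-standard")
    return problems

claim = {
    "statement": "Wind turbines cause cancer.",
    "verdict": "False",
    "context": {"scope": "US"},
    "citations": [{"type": "primary", "title": "Transcript", "url": "https://example.gov/transcript"}],
}
print(checklist_violations(claim))  # [] -> passes
```

Entries that fail any check can be queued for editorial review instead of publishing silently.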

Neutral, technically accurate language

Do not ascribe intent. Evaluate the words as spoken or written. Avoid charged adjectives. Where relevant, include the speaker's own corrections. For scientific topics, link to methodology pages so users can reproduce your calculations.

Leverage related checklists

Climate claims often intersect with foreign policy, industrial policy, and personal biography. When a statement crosses domains, apply the adjacent topic checklists from those archives so each facet is held to the right sourcing standard.

Implementation patterns for developers

Use a transparent pipeline for ingesting and reviewing claims. Example with a simple REST API pattern:

// Minimal Express.js route for claim submission and server-side validation
import express from 'express';
import Joi from 'joi';
const app = express();
app.use(express.json());

const claimSchema = Joi.object({
  speaker: Joi.string().min(2).required(),
  date_uttered: Joi.date().iso().required(),
  statement: Joi.string().min(10).required(),
  topic: Joi.array().items(Joi.string()).required(),
  citations: Joi.array().items(Joi.object({
    type: Joi.string().valid('primary', 'scientific', 'official', 'news').required(),
    title: Joi.string().required(),
    url: Joi.string().uri().required(),
    retrieved: Joi.date().iso() // optional; mirrors the claim schema's retrieved field
  })).min(1).required()
});

app.post('/claims', async (req, res) => {
  const { error, value } = claimSchema.validate(req.body);
  if (error) return res.status(400).json({ error: error.message });
  // persist to DB, enqueue for editorial review
  // respond with claim_id and review status
  return res.status(202).json({ claim_id: 'clm-2026-0002', status: 'under_review' });
});

app.listen(8080);

Common challenges and solutions

Baseline cherry-picking

Problem: A claim compares a low outlier year to a high outlier, exaggerating a trend. Solution: Normalize to a standard baseline like 1990 or 2005 for emissions, or use rolling means for temperature. Always show both the chosen baseline and a domain standard.
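A rolling mean is the simplest guard against single-year cherry-picks. This sketch smooths an annual series with a trailing window (the anomaly values are illustrative, not real observations):

```python
def rolling_mean(values, window):
    """Trailing rolling mean over a list of annual values.

    Returns one mean per full window; shorter prefixes are skipped.
    """
    if window < 1 or window > len(values):
        return []
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

# Illustrative annual temperature anomalies (degC)
anomalies = [0.6, 0.9, 0.7, 1.0, 0.8, 1.1]
print([round(x, 2) for x in rolling_mean(anomalies, 3)])  # [0.73, 0.87, 0.83, 0.97]
```

Plotting the smoothed series next to the raw years makes an outlier-to-outlier comparison visibly dishonest.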

Scope confusion

Problem: A statement about global emissions is evaluated with U.S. data, or vice versa. Solution: Force scope fields in your schema and validation checks that block mismatched datasets. For example, tag each citation with its geographic scope and filter at query time.
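The scope-tag-and-filter step can be sketched in a few lines. Treating "global" evidence as admissible for any claim is a design choice assumed here, not a rule from the archive:

```python
def citations_in_scope(claim_scope, citations):
    """Keep only citations whose scope matches the claim; 'global' evidence is allowed everywhere."""
    allowed = {claim_scope, "global"}
    return [c for c in citations if c.get("scope") in allowed]

citations = [
    {"title": "EPA GHG Inventory", "scope": "US"},
    {"title": "Global Carbon Budget", "scope": "global"},
    {"title": "EU ETS report", "scope": "EU"},
]
print([c["title"] for c in citations_in_scope("US", citations)])
# ['EPA GHG Inventory', 'Global Carbon Budget']
```

Running this at query time keeps an EU dataset from silently "refuting" a claim about U.S. emissions.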

Observation vs projection

Problem: A projection for 2030 is presented as an achieved result. Solution: Label projections distinctly, include the model name and version, and clearly separate realized values from forecast ranges in visuals and text.

Units and energy system misunderstandings

Problem: kW, kWh, and MWh are conflated, or nameplate capacity is compared to energy delivered. Solution: Store units in the schema and convert early. Provide capacity factor context when comparing solar or wind to thermal generation. Example helper:

# Capacity factor calculation
def capacity_factor(actual_mwh, nameplate_mw, period_hours):
    """Delivered energy as a fraction of nameplate potential over a period.

    Example: 2628 MWh from a 1 MW plant over 8760 hours -> 0.30.
    """
    if nameplate_mw == 0 or period_hours == 0:
        return None
    return actual_mwh / (nameplate_mw * period_hours)

Attribution overreach in extreme weather

Problem: A claim attributes a single event entirely to climate change or denies any link. Solution: Cite formal attribution studies that quantify the change in likelihood or intensity. Use confidence levels and ensemble ranges. Avoid binary language.

Cost and jobs counting

Problem: A statement highlights gross job losses without netting gains or ignores discount rates in cost analyses. Solution: Require net accounting and disclosure of discount rates, time horizons, and whether figures are nominal or real. Link to Regulatory Impact Analyses with methodological transparency.
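Discount-rate sensitivity is easy to make explicit with a present-value helper. This sketch uses an illustrative cost stream, not agency figures, and shows the same nominal costs shrinking as the rate rises:

```python
def present_value(cash_flows, discount_rate):
    """Discount a stream of future costs (incurred in years 1, 2, ...) back to today."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

costs = [100.0, 100.0, 100.0]  # $100 per year for three years
print(round(present_value(costs, 0.03), 2))  # 282.86 at a 3% rate
print(round(present_value(costs, 0.07), 2))  # 262.43 at a 7% rate
```

An entry that discloses the rate alongside the headline number lets readers see how much of a cost claim is assumption rather than measurement.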

Conclusion

Climate claims are noisy, political, and technical all at once. A useful archive cuts through that noise with precise scope, consistent baselines, clear units, and direct links to primary sources. Building that workflow means thinking like a data engineer and an editor at the same time.

If you are a reporter, educator, or developer, start by defining your schema, automating conversions, and enforcing source quality. Then connect each entry to scannable merch so the evidence rides along with the quote in the real world. The climate topic landing experience should make it effortless to test a statement against the public record.

Browse the climate claims archive, scan the receipts, and contribute improvements where the evidence can be sharpened. With careful structure and transparent sourcing, Lie Library turns a one-liner into a reproducible, citable record that anyone can verify.

FAQ

How do you decide which climate claims to include?

We prioritize statements that are widely repeated, materially affect public understanding of climate or energy policy, or come from high-profile speakers. Each entry must have a primary source such as a transcript or video. Scientific or official sources are then attached to evaluate the claim's accuracy.

What counts as a primary source for climate statements?

A primary source is the original record of the statement, for example a speech transcript, official social post, press release, or congressional record. For evidence, we add scientific or official datasets like NASA GISTEMP, EPA inventory data, or EIA energy statistics to support verification.

Can I integrate this archive into my CMS or app?

Yes. Use a REST or GraphQL interface to ingest claim objects and citations. Normalize units and baselines on ingestion. Expose fielded search so users can filter by verdict, topic, date range, and dataset used. For multi-subject workflows in civics education, see Crowd and Poll Claims Checklist for Civics Education.

How do QR codes on merch help with accuracy?

Each code resolves to the claim page with the exact quote and the underlying receipts. In classrooms or interviews, anyone can scan to check context and sources in seconds. This reduces screenshot cherry-picking and keeps the evidence front and center.

How does Lie Library handle updates or corrections?

When a speaker clarifies or corrects a statement, we add an update with date, link, and the new context. Verdicts can change when robust new evidence emerges. Users see version history so they can track how the record evolved across time with full transparency.

Keep reading the record.

Jump into the full Lie Library archive and search every catalogued claim.

Open the Archive