Election Claims: Fact-Checked Archive | Lie Library

A searchable, citation-backed archive of election claims: false claims about voter fraud, stolen elections, rigged voting machines, and mail-in ballots. Every entry links to primary sources.

Why election claims need a verifiable archive

Election seasons concentrate attention, emotion, and misinformation. False claims about voter fraud, stolen elections, rigged voting machines, and mail-in ballots spread quickly, and the half-life of a viral post often outruns official corrections. For researchers, developers, and newsrooms, a searchable, citation-backed archive that links every assertion to primary sources is critical. It turns scattered debunks into durable, queryable knowledge that can be cited and reused.

This topic guide shows how to design and use an evidence-first archive for election claims. You will find a practical data model, developer-focused patterns for building topic landing pages, query examples, and operational tips that keep the work maintainable. The aim is simple: make it easy to find what was said, when it was said, why it is false or misleading, and where the receipts live, so audiences can verify the truth themselves.

Fundamentals of election claims fact-checking

Define the unit of truth

Start by defining a claim as a discrete, quotable statement about an election that can be evaluated against evidence. Focus on reproducible facts: dates, numbers, processes, and verifiable procedures. Avoid bundling multiple assertions into one record.

  • Scope: voter fraud, ballot counting procedures, voting machine integrity, mail-in ballots, certification, and audits.
  • Granularity: each unique assertion gets its own record, even if repeated across rallies, social posts, and interviews.
  • Evidence: every claim must link to primary materials like court filings, election manuals, machine test logs, Secretary of State releases, and official canvass reports. Add secondary fact-checks as corroboration.
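The evidence requirement above can be enforced as a simple publish gate. A minimal sketch in JavaScript, assuming records shaped like the JSON schema in this guide (the `evidence.primary` array); the function name is illustrative:

```javascript
// Publish gate: a claim record may only be promoted to "published"
// when it links to at least one primary source.
function hasPrimaryEvidence(claim) {
  const primary = claim?.evidence?.primary;
  return Array.isArray(primary) && primary.length > 0;
}
```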

Use a normalized schema

A normalized model keeps the archive consistent, searchable, and easier to integrate into topic landing pages. Here is a minimal, developer-friendly JSON schema you can extend:

{
  "id": "claim_2020_az_000123",
  "topic": "election",
  "subtopics": ["voter_fraud", "mail_in_ballots"],
  "claimant": {
    "name": "Donald J. Trump",
    "role": "candidate",
    "party": "Republican"
  },
  "statement": "Hundreds of thousands of illegal votes were counted in Arizona.",
  "verdict": {
    "rating": "false",
    "scale": ["true", "misleading", "unsupported", "false"],
    "explanation": "Maricopa and statewide audits found no evidence of illicit bulk ballots."
  },
  "first_seen": "2020-11-05T22:10:00Z",
  "last_seen": "2021-10-15T13:20:00Z",
  "jurisdictions": ["US-AZ", "US-FED"],
  "evidence": {
    "primary": [
      {
        "type": "court_document",
        "title": "Arizona election lawsuit dismissal",
        "url": "https://example.gov/courts/az/2020/decision.pdf",
        "archival_hash": "sha256:...",
        "retrieved_at": "2021-01-02T10:00:00Z"
      },
      {
        "type": "official_report",
        "title": "Maricopa County audit summary",
        "url": "https://example.gov/elections/audits/maricopa-summary",
        "retrieved_at": "2021-09-24T08:16:00Z"
      }
    ],
    "secondary": [
      {
        "type": "fact_check",
        "outlet": "AP",
        "url": "https://apnews.com/article/arizona-audit-fact-check"
      }
    ]
  },
  "citations": [
    {
      "type": "source_claim",
      "context": "Rally in Phoenix",
      "url": "https://video.example.com/phoenix-rally-clip"
    },
    {
      "type": "social_post",
      "platform": "Twitter",
      "url": "https://twitter.com/example/status/12345"
    }
  ],
  "merch": {
    "slug": "az-illegal-votes",
    "qr_target": "https://yourdomain.org/claim/claim_2020_az_000123"
  },
  "tags": ["2020", "arizona", "audit", "court_ruling"],
  "status": "published"
}

Classification for search and discovery

Build a small, opinionated taxonomy so users and downstream systems can filter the archive quickly:

  • Subtopics: voter_fraud, rigged_machines, mail_in_ballots, certification, audits, recounts, litigation, poll_watchers.
  • Jurisdictions: ISO style codes like US-PA, US-AZ. Use a consistent format across topics.
  • Verdict scale: true, misleading, unsupported, false. Do not invent new ratings without a migration plan.
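One way to keep the verdict scale closed is to validate ratings at write time, so a new rating can only enter through a deliberate migration. A minimal sketch; the scale matches the one above, and the function name is illustrative:

```javascript
// The closed verdict scale; extending it is a migration, not an ad-hoc string.
const VERDICTS = ["true", "misleading", "unsupported", "false"];

// Normalize incoming ratings and reject anything outside the scale.
function normalizeVerdict(rating) {
  const v = String(rating).trim().toLowerCase();
  if (!VERDICTS.includes(v)) {
    throw new Error(`Unknown verdict: ${rating}`);
  }
  return v;
}
```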

This consistent tagging makes a topic landing for election claims fast, predictable, and indexable for search engines.

Build a topic landing and developer tooling for election claims

Design a compact, scannable topic landing

Election pages must load fast, surface the latest false claims, and route users to evidence. Use a two-pane layout: filters on the left, results on the right. Stick to a compact card that shows the statement, verdict, jurisdiction, and a quick link to primary sources. Put the QR link on detail pages and in merch modules so a printed tee or sticker can jump straight to the receipts.

Provide a simple JSON API for integration

Developers will want to build internal tools, dashboards, or CMS widgets. A stable read-only endpoint with predictable filters gets adopted quickly.

// Fetch false claims about mail-in ballots since October 2020
fetch("https://yourdomain.org/api/claims?topic=election&subtopic=mail_in_ballots&verdict=false&since=2020-10-01")
  .then(r => {
    if (!r.ok) throw new Error(`API error: ${r.status}`);
    return r.json();
  })
  .then(data => {
    // Render the top five results in your app
    const top = data.items.slice(0, 5);
    console.log(top.map(c => c.statement));
  })
  .catch(console.error);

Response shape should include pagination metadata, stable IDs, and evidence counts so clients can prefetch detail pages efficiently.

{
  "items": [/* array of claim objects, see schema above */],
  "page": 1,
  "page_size": 20,
  "total": 818,
  "next": "/api/claims?topic=election&page=2&page_size=20&subtopic=mail_in_ballots&verdict=false&since=2020-10-01"
}
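Clients can page through the full result set by following the `next` link until it is absent. A transport-agnostic sketch, with the page fetcher injected so it works equally with `fetch`, a cache layer, or a test mock:

```javascript
// Collect every claim across pages by walking the `next` links in the
// pagination metadata. `fetchPage` takes a path and resolves to a page
// object shaped like the response above ({ items, next }).
async function collectAllClaims(fetchPage, firstPath) {
  const items = [];
  let path = firstPath;
  while (path) {
    const page = await fetchPage(path);
    items.push(...page.items);
    path = page.next; // null or missing on the last page
  }
  return items;
}
```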

Composable filters that map to UX

Make filter keys mirror the schema so it is obvious how UI components connect to queries:

  • topic, subtopic, verdict, jurisdictions, year, claimant, q (full text)
  • since, until for time windows
  • sort=recent, sort=impact

// Example: Pennsylvania litigation claims in 2020, sorted by recency
GET /api/claims?topic=election&subtopic=litigation&jurisdictions=US-PA&year=2020&sort=recent
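A small helper keeps UI filter state and query strings in sync. A sketch, assuming the filter keys listed above; array values such as jurisdictions are joined with commas:

```javascript
// Build a /api/claims query string from a filter object whose keys
// mirror the schema (topic, subtopic, verdict, jurisdictions, ...).
// Empty or missing values are dropped so the URL stays clean.
function buildClaimsQuery(filters) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (value === undefined || value === null || value === "") continue;
    params.set(key, Array.isArray(value) ? value.join(",") : String(value));
  }
  return `/api/claims?${params.toString()}`;
}
```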

Clickable receipts and QR codes

Every claim detail page should prioritize the evidence. Primary sources first, secondary fact-checks second. For print or physical items, generate a QR that targets the canonical claim URL. Use a tiny helper so any merch slug resolves to the claim detail page.

// Node example using 'qrcode' to generate a QR for a claim detail URL
import QRCode from "qrcode";

const claimUrl = "https://yourdomain.org/claim/claim_2020_az_000123";
QRCode.toFile("qr-claim_2020_az_000123.png", claimUrl, {
  margin: 1,
  color: { dark: "#000000", light: "#FFFFFF" },
  width: 512
}).catch(err => console.error("QR generation failed:", err));

Cross-topic navigation

Users exploring election claims often want context across policy areas. Link to adjacent libraries, for example immigration narratives that spiked during campaign cycles. See also: Immigration Claims: Fact-Checked Archive | Lie Library.

Best practices and tips for a trustworthy archive

1) Evidence-first ordering

Always lead with primary sources. Put official reports, court documents, and machine test logs above news articles. Add a quick explainer that synthesizes the receipts into a short verdict. Avoid rhetorical language. Keep the tone clinical and precise.

2) Immutable claim IDs with friendly slugs

Make IDs opaque and stable, like claim_2020_az_000123. Use human-friendly slugs for URLs but never rely on them as keys. Support redirects when slugs change so external references and printed QR codes do not break.
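The slug-to-ID mapping can live in two small tables: a current index and a redirect map for retired slugs. A sketch with illustrative data; in production both maps would come from the database:

```javascript
// Current slugs point at stable claim IDs; retired slugs redirect to
// their replacement so printed QR codes keep resolving.
const slugIndex = new Map([["az-illegal-votes", "claim_2020_az_000123"]]);
const slugRedirects = new Map([["az-votes", "az-illegal-votes"]]);

function resolveSlug(slug) {
  const current = slugRedirects.get(slug) ?? slug;
  const claimId = slugIndex.get(current);
  return claimId ? `https://yourdomain.org/claim/${claimId}` : null;
}
```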

3) Versioning and redactions

Claims evolve, new documents appear, and courts rule. Version the record with a changelog that lists what changed and why. If you must redact a sensitive source, replace the link with a public archive surrogate and explain the decision briefly.
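Versioning can be as simple as an append-only changelog on the record. A sketch; the `changelog` field and its shape are illustrative, not part of the schema above:

```javascript
// Record a revision without mutating earlier entries; returns a new
// claim object with the changelog appended.
function recordRevision(claim, { changed, reason, evidenceUrl = null }) {
  const changelog = claim.changelog ?? [];
  const entry = {
    version: changelog.length + 1,
    changed,              // e.g. "verdict: false -> unsupported"
    reason,               // why the record changed
    evidence_url: evidenceUrl,
    at: new Date().toISOString()
  };
  return { ...claim, changelog: [...changelog, entry] };
}
```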

4) Normalize jurisdictions and time

Use a single standard for locations and time zones. Store UTC timestamps, then format per viewer locale. For U.S. states, stick to ISO codes like US-GA or US-MI. This makes it easy to query cross-topic trends.
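Storing UTC and formatting at display time takes one call with the built-in `Intl` API:

```javascript
// Store UTC ISO timestamps; convert to the viewer's locale and time
// zone only at render time.
function formatForViewer(utcIso, locale, timeZone) {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: "medium",
    timeStyle: "short",
    timeZone
  }).format(new Date(utcIso));
}
```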

5) Deduplication heuristics

High-profile false claims often repeat. Keep a single canonical record per unique assertion, then attach each occurrence as a citation. Use fingerprinting of tokenized statements to suggest duplicates, but always review manually before merge.

6) Accessibility and performance

Election pages receive heavy traffic spikes. Use static generation for the topic landing and cache API responses at the edge. Add keyboard navigation for filters, visible focus states, and descriptive link text such as "View primary evidence" so assistive technology can parse the page.

7) SEO that respects users

Keep titles concise and descriptive. Use verdicts in meta descriptions to set expectations, such as "False claim about rigged voting machines in Georgia, with links to audits and court orders." Avoid clickbait. Provide structured data with breadcrumbs and publish dates.

Common challenges and solutions when tracking false election claims

Ambiguous or compound assertions

Problem: a rally line mixes multiple allegations. Solution: split into separate claim records, one per testable proposition. Reference the shared event in each record's citations so readers can trace context. Keep explanations short, focused, and evidence driven.
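The split can be mechanical once the propositions are identified: one record per testable proposition, each carrying the shared event citation. A sketch with illustrative IDs and fields:

```javascript
// Split a compound rally line into one claim record per testable
// proposition; every record cites the same shared event for context.
function splitCompoundClaim(eventCitation, propositions) {
  return propositions.map((statement, i) => ({
    id: `${eventCitation.event_id}_part_${i + 1}`,
    statement,
    citations: [eventCitation]
  }));
}
```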

Link rot and transient sources

Problem: official pages change or disappear. Solution: mirror metadata and checksums. Store an archival hash, timestamp, and a backup at a trusted archive. Display both the live and archived URLs so readers can verify provenance.

Conflicting fact-checks

Problem: reputable outlets rate the same claim differently. Solution: prioritize primary sources, then summarize the conflict. If necessary, mark the verdict as unsupported while you expand evidence. Record each outlet's methodology and link it beneath the verdict.

Jurisdictional nuance

Problem: election procedures vary by state or county. Solution: attach statutes and official manuals for the exact jurisdiction. Use jurisdiction tags in bold near the statement so readers do not misapply a rule from one state to another.

Scaling ingestion and review

Problem: claims surge during peak news cycles. Solution: build a two-stage queue. An ingestion worker scrapes and normalizes candidate statements, then a reviewer promotes records to published after evidence is attached. Use lightweight automations to flag missing primary sources or inconsistent verdicts.

-- Example: flag published claims without primary evidence.
-- COALESCE catches records where the primary array is missing entirely,
-- since JSON_LENGTH returns NULL (not 0) for a missing path.
SELECT id, statement
FROM claims
WHERE topic = 'election'
  AND status = 'published'
  AND COALESCE(JSON_LENGTH(evidence->"$.primary"), 0) = 0;

Conclusion and next steps

A durable, citation-backed archive for election claims helps audiences cut through noise by centering verifiable evidence. Developers can accelerate this work with a clean schema, predictable filters, and a topic landing that highlights primary sources. Editorial teams gain repeatable processes for verdicts, while merch and QR codes extend the impact into the physical world.

Start by modeling a handful of high-impact false claims, wire up read-only endpoints, and ship a lightweight topic page. Expand category filters and jurisdictions as your repository grows. Cross-link to adjacent topics so readers can move across narratives, for example: Immigration Claims: Fact-Checked Archive | Lie Library.

When you are ready to scale, integrate your CMS with the API and add background jobs for link validation, archival checks, and periodic evidence refresh.

FAQs about the election claims topic landing

How do you decide when an election claim is false, misleading, or unsupported?

Each record is evaluated against primary sources like court rulings, official canvass reports, and election manuals. False means the assertion contradicts authoritative documents or verified data. Misleading means it uses true fragments to imply a false conclusion. Unsupported means the claim lacks evidence and cannot be verified despite investigation. Every verdict links to receipts so readers can audit the reasoning.

What happens if new evidence appears after publication?

Claims are versioned. A new audit or court order triggers a review. The changelog notes what changed, when, and why, and links to the new evidence. If a verdict needs to shift from false to unsupported or vice versa, the update is recorded and dated so external references remain transparent.

Can I embed election claims in my newsroom or SaaS app?

Yes. Use the JSON endpoints with filters like topic=election, verdict=false, jurisdictions=US-PA, or subtopic=rigged_machines. Cache responses at the edge, and prefetch detail pages for fast navigation. Give readers a "View evidence" button that jumps directly to primary sources, and include QR codes on any printed materials.

How do you handle repeated statements across rallies and social posts?

One canonical claim captures the unique assertion. Each occurrence is appended as a citation with date, format, and link. This keeps the archive tidy while giving users a full timeline of where and how the narrative spread.

Where can I compare election claims to other high-volume topics?

Explore adjacent topics to understand how narratives cross-pollinate. For example, the immigration section catalogs false statements about border enforcement and asylum processing: Immigration Claims: Fact-Checked Archive | Lie Library.

Built around verifiable evidence and developer-friendly patterns, this topic landing equips readers and builders to navigate high-stakes election cycles with clarity. When you need a reliable reference and integrations that just work, this page follows the same project-wide standards as the rest of Lie Library.

Keep reading the record.

Jump into the full Lie Library archive and search every catalogued claim.

Open the Archive