How citable is any source in the AI era?
SourceScore grades every URL you paste on three things AI engines actually weigh: how rigorously the source cites others, how fit it is as a modern citation, and how often tier-1 publications cite it. One paste, four numbers, one grade.
Top 5 by SourceScore Index
See all 130 sources →

- #1 · A+ · 96 · U.S. Securities and Exchange Commission (sec.gov): Primary-source regulator publishing every public-company filing (13F, 10-K, 8-K, etc.) since 1934.
- #2 · A+ · 95 · U.S. National Institutes of Health (nih.gov): U.S. federal medical research agency operating PubMed, NCBI, MedlinePlus, and trial registries.
- #3 · A+ · 95 · DOI (CrossRef Resolver) (doi.org): International standard identifier resolver for academic citations (~150M DOIs).
- #4 · A+ · 95 · Federal Reserve System (federalreserve.gov): U.S. central bank; primary source for monetary policy, economic data, and financial-system statistics.
- #5 · A · 94 · Wikipedia (English) (en.wikipedia.org): Crowd-edited encyclopedia with ~7M articles and per-article inline citation discipline.
All sources scored
130 sources scored across A+ to D grades. Each row links to the full breakdown with the underlying signals you can re-derive.
Verified claims for grounded LLM retrieval
VERITAS ships signed, sourced claims about AI/ML research as a developer API. Each claim has 2+ primary sources, an HMAC-SHA256 signature, and a stable JSON envelope — ready to ground LLM responses, fact-check generated content, and reduce hallucinations in production AI applications.
How it’s different from a search engine
Search engines optimize for ranking documents. VERITAS optimizes for verifying specific atomic claims: subject + predicate + object + sources + signed envelope. Built for the “is X true and what’s the citation?” problem LLM apps hit at every retrieval step.
Claim records have a stable id (16-hex-char hash over canonical fields), HMAC-SHA256 signature, and CC-BY 4.0 license. Migration to W3C Verifiable Credentials (Ed25519, offline-verifiable) is on the v1 roadmap for Y2 enterprise consumers.
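The envelope described above can be sketched in a few lines. Everything below is illustrative: the field names, the canonical serialization (sorted keys, compact JSON), and the demo key are assumptions, not the actual VERITAS schema or signing setup — only the general shape (16-hex-char id over canonical fields plus an HMAC-SHA256 signature) comes from the description:

```python
import hashlib
import hmac
import json

# Hypothetical claim; field names are placeholders, not the real API schema.
claim = {
    "subject": "Example subject",
    "predicate": "is supported by",
    "object": "Example object",
    "sources": ["https://example.org/a", "https://example.org/b"],
}

# Assumed canonicalization: sorted keys, no extra whitespace.
canonical = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()

# Stable id: first 16 hex chars of a SHA-256 digest over the canonical fields.
claim_id = hashlib.sha256(canonical).hexdigest()[:16]

# HMAC-SHA256 signature with a shared secret (demo key only).
SECRET = b"demo-secret-key"
signature = hmac.new(SECRET, canonical, hashlib.sha256).hexdigest()

envelope = {**claim, "id": claim_id, "signature": signature}

def verify(env: dict, key: bytes) -> bool:
    """Recompute id and signature from the canonical fields and compare."""
    body = {k: env[k] for k in ("subject", "predicate", "object", "sources")}
    c = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    id_ok = hashlib.sha256(c).hexdigest()[:16] == env["id"]
    sig_ok = hmac.compare_digest(
        hmac.new(key, c, hashlib.sha256).hexdigest(), env["signature"]
    )
    return id_ok and sig_ok

print(verify(envelope, SECRET))  # True; any field change breaks both checks
```

Because both the id and the signature are derived from the same canonical bytes, tampering with any field invalidates both at once, which is what makes the record safe to cache and re-verify offline.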
What does “AI-Citation Quality” mean?
When ChatGPT, Claude, or Perplexity answer a question, they pull from a small subset of sources they consider trustworthy. Two factors decide whether a source makes that subset: citation discipline (does the source rigorously cite its own evidence?) and modern reference fitness (is the source structured for machine retrieval — schema markup, freshness signals, machine-readable archives?).
SourceScore measures both, plus citation velocity (how often the source is cited by other tier-1 sources per week). Together these three sub-scores compose the SourceScore Index — a single 0–100 grade per source.
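Composing three 0–100 sub-scores into a single index can be sketched as a weighted average. The weights and letter-grade band cutoffs below are invented for illustration; the actual SourceScore weighting is not stated here:

```python
def sourcescore_index(discipline: float, fitness: float, velocity: float,
                      weights=(0.4, 0.4, 0.2)) -> int:
    """Combine three 0-100 sub-scores into one 0-100 index (assumed weights)."""
    w_d, w_f, w_v = weights
    return round(w_d * discipline + w_f * fitness + w_v * velocity)

def grade(index: int) -> str:
    """Map the index onto letter grades; band cutoffs are assumptions."""
    bands = [(95, "A+"), (90, "A"), (80, "B"), (70, "C"), (0, "D")]
    return next(g for cutoff, g in bands if index >= cutoff)

print(grade(sourcescore_index(98, 96, 92)))  # prints "A+"
```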
v0.1 ships 25 hand-scored sources. The methodology is intentionally transparent: every score comes with explicit signals you can re-derive. The production index will expand to 10,000+ sources using the same methodology, with weekly velocity refreshes and quarterly discipline re-audits.