SourceScore
New · v0.1 · VERITAS Claim Verification API — signed, sourced AI/ML claims for grounded LLM retrieval. Free tier · no auth.
Methodology v0.1 · 130 sources scored · 10k+ index in development

How citable is any source in the AI era?

SourceScore grades every URL you paste on three signals AI engines actually weigh: how rigorously the source cites its own evidence, how well it is structured as a modern citation, and how often tier-1 publications cite it. One paste, four numbers (three sub-scores plus the composite index), one grade.

All sources scored

130 sources scored across A+ to F grades. Each row links to the full breakdown with the underlying signals you can re-derive.

SourceIndex
U.S. Securities and Exchange Commission
sec.gov
A+·96
U.S. National Institutes of Health
nih.gov
A+·95
DOI (CrossRef Resolver)
doi.org
A+·95
Federal Reserve System
federalreserve.gov
A+·95
Wikipedia (English)
en.wikipedia.org
A·94
PubMed
pubmed.ncbi.nlm.nih.gov
A·94
U.S. Census Bureau
census.gov
A·94
U.S. Bureau of Labor Statistics
bls.gov
A·94
U.S. Food and Drug Administration
fda.gov
A·94
U.S. Centers for Disease Control and Prevention
cdc.gov
A·94
U.S. National Oceanic and Atmospheric Administration
noaa.gov
A·93
European Central Bank
ecb.europa.eu
A·93
NASA
nasa.gov
A·93
MDN Web Docs
developer.mozilla.org
A·93
European Commission
ec.europa.eu
A·92
Bank of England
bankofengland.co.uk
A·92
U.S. Energy Information Administration
eia.gov
A·92
CERN
home.cern
A·92
PNAS
pnas.org
A·92
European Medicines Agency
ema.europa.eu
A·91
U.S. Patent and Trademark Office
uspto.gov
A·91
U.S. Department of Agriculture
usda.gov
A·91
U.S. Geological Survey
usgs.gov
A·91
OECD
oecd.org
A·91
FRED (Federal Reserve Economic Data)
fred.stlouisfed.org
A·91
IPCC
ipcc.ch
A·91
Reuters
reuters.com
A·89
arXiv
arxiv.org
A·89
World Health Organization
who.int
A·89
World Trade Organization
wto.org
A·89
Cell
cell.com
A·89
Stanford Encyclopedia of Philosophy
plato.stanford.edu
A·89
The New York Times
nytimes.com
A·88
World Bank
worldbank.org
A·88
Eurostat
ec.europa.eu/eurostat
A·88
Nature
nature.com
A·87
New England Journal of Medicine
nejm.org
A·87
Cochrane Library
cochranelibrary.com
A·87
ONS (UK)
ons.gov.uk
A·87
Mayo Clinic
mayoclinic.org
A·87
Associated Press
apnews.com
A·86
ProPublica
propublica.org
A·86
Science
science.org
A·86
The Lancet
thelancet.com
A·86
The Washington Post
washingtonpost.com
A·86
Our World in Data
ourworldindata.org
A·86
International Monetary Fund
imf.org
A·86
Journal of the American Medical Association
jamanetwork.com
A·86
UNESCO
en.unesco.org
A·86
European Space Agency
esa.int
A·86
BEA
bea.gov
A·86
The Guardian
theguardian.com
A·85
Encyclopædia Britannica
britannica.com
A·85
The Wall Street Journal
wsj.com
A·85
Pew Research Center
pewresearch.org
A·85
The BMJ (British Medical Journal)
bmj.com
A·85
National Bureau of Economic Research
nber.org
A·85
Quanta Magazine
quantamagazine.org
A·85
Association for Computing Machinery
dl.acm.org
A·85
Financial Times
ft.com
B·84
KFF (Kaiser Family Foundation)
kff.org
B·84
Bloomberg News
bloomberg.com
B·83
Foreign Affairs
foreignaffairs.com
B·83
Semantic Scholar
semanticscholar.org
B·83
eLife
elifesciences.org
B·83
The New York Times Magazine
nytimes.com/section/magazine
B·83
RAND Corporation
rand.org
B·83
Brookings Institution
brookings.edu
B·83
London Review of Books
lrb.co.uk
B·83
The New York Review of Books
nybooks.com
B·83
Google DeepMind Research
deepmind.google
B·83
Cleveland Clinic
my.clevelandclinic.org
B·83
BBC News
bbc.com
B·82
STAT News
statnews.com
B·82
The New Yorker
newyorker.com
B·82
GitHub
github.com
B·82
The Conversation
theconversation.com
B·82
Council on Foreign Relations
cfr.org
B·82
Anthropic Research
anthropic.com
B·82
MIT CSAIL
csail.mit.edu
B·82
JSTOR
jstor.org
B·82
Statistics Canada
statcan.gc.ca
B·82
MIT Technology Review
technologyreview.com
B·81
The Atlantic
theatlantic.com
B·81
Bloomberg Businessweek
bloomberg.com/businessweek
B·81
Hugging Face
huggingface.co
B·81
NPR
npr.org
B·80
Harvard Business Review
hbr.org
B·80
OpenAI Research
openai.com
B·80
BMJ Best Practice
bestpractice.bmj.com
B·79
National Geographic
nationalgeographic.com
B·79
The Economist
economist.com
B·78
Al Jazeera English
aljazeera.com
B·78
Politico
politico.com
B·78
404 Media
404media.co
B·78
PLOS ONE
journals.plos.org
B·78
Axios
axios.com
B·78
BBC Research & Development
bbc.co.uk/rd
B·78
The Information
theinformation.com
B·78
Aeon
aeon.co
B·78
LWN.net
lwn.net
B·78
Der Spiegel
spiegel.de
B·78
Smithsonian Magazine
smithsonianmag.com
B·78
Le Monde
lemonde.fr
B·77
Ars Technica
arstechnica.com
B·76
Wired
wired.com
B·76
Semafor
semafor.com
B·76
Axios Pro
axios.com/pro
B·76
South China Morning Post
scmp.com
B·75
McKinsey Insights
mckinsey.com
B·75
Stack Overflow
stackoverflow.com
B·74
The Times (UK)
thetimes.co.uk
B·74
The Globe and Mail
theglobeandmail.com
B·74
El País
elpais.com
B·74
Zillow Research
zillow.com/research
B·73
BCG Insights
bcg.com
B·73
Stratechery
stratechery.com
B·73
Asahi Shimbun
asahi.com
B·71
Gartner
gartner.com
C·69
AnandTech
anandtech.com
C·69
The Verge
theverge.com
C·66
Hacker News
news.ycombinator.com
C·66
TechCrunch
techcrunch.com
C·64
Statista
statista.com
C·64
HuffPost
huffpost.com
C·60
Medium
medium.com
C·58
Forbes
forbes.com
C·58
Fox News
foxnews.com
C·58
BuzzFeed
buzzfeed.com
D·42
Daily Mail
dailymail.co.uk
F·38
New · v0.1 · API in beta

Verified claims for grounded LLM retrieval

VERITAS ships signed, sourced claims about AI/ML research as a developer API. Each claim has 2+ primary sources, an HMAC-SHA256 signature, and a stable JSON envelope — ready to ground LLM responses, fact-check generated content, and reduce hallucinations in production AI applications.

How it’s different from a search engine

Search engines optimize for ranking documents. VERITAS optimizes for verifying specific atomic claims: subject + predicate + object + sources + signed envelope. Built for the “is X true and what’s the citation?” problem LLM apps hit at every retrieval step.
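
As a sketch of that atomic-claim shape (the field names and values here are illustrative, not the API's documented schema), a claim envelope might look like:

```python
import json

# Hypothetical claim envelope; field names and values are illustrative only.
claim = {
    "id": "9f2c4e7a1b3d5f60",       # 16-hex-char hash over the canonical fields
    "subject": "GPT-4",
    "predicate": "was released by",
    "object": "OpenAI in March 2023",
    "sources": [                    # 2+ primary sources per claim
        "https://openai.com/research/gpt-4",
        "https://arxiv.org/abs/2303.08774",
    ],
    "signature": "hmac-sha256:...", # computed over the canonical fields above
    "license": "CC-BY-4.0",
}

print(json.dumps(claim, indent=2))
```

The stable JSON envelope is what lets an LLM app treat each claim as a retrievable, verifiable unit rather than a ranked document.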

Claim records have a stable id (16-hex-char hash over canonical fields), HMAC-SHA256 signature, and CC-BY 4.0 license. Migration to W3C Verifiable Credentials (Ed25519, offline-verifiable) is on the v1 roadmap for Y2 enterprise consumers.
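
A minimal sketch of how such an id and signature could be derived and checked. The canonicalization scheme, the field set, and the key shown here are assumptions for illustration, not VERITAS's documented implementation:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key"  # placeholder; a real signing key stays server-side


def canonical(fields: dict) -> bytes:
    # Deterministic serialization: sorted keys, no extra whitespace.
    return json.dumps(fields, sort_keys=True, separators=(",", ":")).encode()


def claim_id(fields: dict) -> str:
    # 16-hex-char id: first 8 bytes of SHA-256 over the canonical fields.
    return hashlib.sha256(canonical(fields)).hexdigest()[:16]


def sign(fields: dict, key: bytes = SECRET_KEY) -> str:
    return hmac.new(key, canonical(fields), hashlib.sha256).hexdigest()


def verify(fields: dict, signature: str, key: bytes = SECRET_KEY) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(fields, key), signature)


fields = {"subject": "s", "predicate": "p", "object": "o"}
sig = sign(fields)
```

Because the id is a hash over canonical fields, any consumer holding the same canonicalization rule can re-derive it; HMAC verification, by contrast, requires the shared key, which is what the Ed25519 / Verifiable Credentials migration on the roadmap would relax (public-key signatures verify offline without a shared secret).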

What does “AI-Citation Quality” mean?

When ChatGPT, Claude, or Perplexity answer a question, they pull from a small subset of sources they consider trustworthy. Two factors decide whether a source makes that subset: citation discipline (does the source rigorously cite its own evidence?) and modern reference fitness (is the source structured for machine retrieval — schema markup, freshness signals, machine-readable archives?).

SourceScore measures both, plus citation velocity (how often the source is cited by other tier-1 sources per week). Together these three sub-scores compose the SourceScore Index — a single 0–100 grade per source.
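
As a rough sketch of how three 0–100 sub-scores could combine into a single index (the weights and grade cutpoints below are illustrative guesses, inferred loosely from the published table, not SourceScore's actual formula):

```python
def source_score(citation_discipline: float,
                 reference_fitness: float,
                 citation_velocity: float,
                 weights: tuple = (0.4, 0.4, 0.2)) -> float:
    """Weighted mean of three 0-100 sub-scores -> one 0-100 index."""
    subs = (citation_discipline, reference_fitness, citation_velocity)
    assert all(0 <= s <= 100 for s in subs), "sub-scores must be 0-100"
    return round(sum(w * s for w, s in zip(weights, subs)), 1)


def grade(score: float) -> str:
    # Approximate letter bands read off the published list (A+ >= 95, etc.).
    for cutoff, letter in [(95, "A+"), (85, "A"), (70, "B"), (55, "C"), (40, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

With equal-ish weighting, a source strong on discipline and fitness but rarely cited week-to-week still lands a high index, which matches the intuition that velocity is a narrower signal than the other two.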

v0.1 ships 130 hand-scored sources. The methodology is intentionally transparent: every score rests on explicit signals you can re-derive. The production index will expand to 10,000+ sources under the same methodology, with weekly velocity refreshes and quarterly discipline re-audits.