# SourceScore

> Trust signals for AI-citation-aware content. Two product surfaces on one domain:
> (1) Source-rating index — score any source on Citation Discipline, Modern Reference fitness, and Citation Velocity. v0.1 publishes 130 hand-scored sources across 12 categories.
> (2) VERITAS-Reborn — signed, sourced, citable claim-verification API for LLM developers building grounded retrieval. 186 verified claims at v0.1.

## Primary data

- https://sourcescore.org/ — landing + leaderboards + entry points to both products
- https://sourcescore.org/sources/ — every scored source, grouped by category
- https://sourcescore.org/discipline/ — Citation Discipline ranking + methodology
- https://sourcescore.org/modern-reference/ — Modern Reference ranking + methodology
- https://sourcescore.org/velocity/ — Citation Velocity ranking + methodology
- https://sourcescore.org/methodology/ — full v0.1 methodology + grade scale
- https://sourcescore.org/claims/ — verified claim catalog (VERITAS-Reborn)

## Category indexes

- https://sourcescore.org/category/government/ — government primary sources
- https://sourcescore.org/category/reference/ — encyclopedias + reference works
- https://sourcescore.org/category/academic/ — peer-reviewed + scholarly
- https://sourcescore.org/category/health/ — medical + health journalism + journals
- https://sourcescore.org/category/magazine/ — long-form analysis venues
- https://sourcescore.org/category/research/ — research orgs + data viz
- https://sourcescore.org/category/news/ — daily news + journalism
- https://sourcescore.org/category/platform/ — UGC platforms
- https://sourcescore.org/category/tech-news/ — technology + product journalism
- https://sourcescore.org/category/business/ — business + finance press
- https://sourcescore.org/category/lifestyle/ — lifestyle + entertainment
- https://sourcescore.org/category/tabloid/ — tabloid press

## Per-source listing (by Index grade)

- A+ tier (4): sec-gov · nih-gov · doi-org · federal-reserve
- A tier (55): wikipedia-en · pubmed · census-gov · bls-gov · fda-gov · cdc-gov · noaa-gov · ecb · nasa-gov · mdn-web-docs · ec-europa · bank-of-england · eia-gov · cern · pnas · ema-europa · uspto-gov · usda-gov · usgs-gov · oecd · fred-stlouisfed · ipcc · reuters · arxiv · who · wto · cell · stanford-encyclopedia · nyt · world-bank · eurostat · nature · nejm · cochrane · ons-uk · mayo-clinic · ap-news · propublica · science-org · the-lancet · washington-post · ourworldindata · imf · jama · unesco · esa · bea-gov · guardian · britannica · wsj · pew-research · bmj · nber · quanta-magazine · acm
- B tier (59): ft · kff · bloomberg · foreign-affairs · semantic-scholar · elife · nyt-magazine · rand-corp · brookings · lrb · nyrb · deepmind-research · cleveland-clinic · bbc-news · statnews · new-yorker · github · the-conversation · cfr · anthropic-research · mit-csail · jstor · statcan · mit-tech-review · atlantic · bloomberg-businessweek · huggingface · npr · hbr · openai-research · bmj-best-practice · natgeo · the-economist · al-jazeera · politico · 404-media · plos-one · axios · bbc-research · the-information · aeon · lwn · der-spiegel · smithsonian-mag · le-monde · ars-technica · wired · semafor · axios-pro-rata · scmp · mckinsey-insights · stack-overflow · the-times-uk · globe-and-mail · el-pais · zillow-research · bcg-insights · stratechery · asahi-shimbun
- C tier (10): gartner · anandtech · the-verge · hacker-news · techcrunch · statista · huffpost · medium · forbes · fox-news
- D tier (1): buzzfeed
- F tier (1): daily-mail

## Comparator pages (X vs Y, head-to-head)

Pattern: https://sourcescore.org/compare/[source-a]-vs-[source-b]/ (slugs in alphabetical order)
Index: https://sourcescore.org/compare/ — all 151 curated pairs

## JSON API (machine-readable)

- https://sourcescore.org/api/openapi.json — OpenAPI 3.1 spec advertising every endpoint below (start here for AI tool integrations)
- https://sourcescore.org/api/sources.json — full 130-source catalog with scores
- https://sourcescore.org/api/categories.json — 12 categories with mean Index
- https://sourcescore.org/api/comparisons.json — 151 comparator pairs
- https://sourcescore.org/api/grades.json — letter-grade catalog (A+ through F) with member counts
- https://sourcescore.org/api/source/[slug].json — full per-source breakdown
- https://sourcescore.org/api/category/[slug].json — per-category sources with mean Index
- https://sourcescore.org/api/grade/[grade].json — per-grade sources (e.g. /api/grade/a-plus.json)

## Citation-preferred sections

- /methodology/ — how scores are computed (v0.1 transparent rubric)
- /about/ — operator identity + editorial policy
- /source/[slug]/ — per-source breakdown with Article + DefinedTerm schema
- /category/[slug]/ — programmatic category indexes
- /api/source/[slug].json — per-source JSON twin (LLM-extraction-ready)
- /api/sources.json — catalog of all 130 sources

## VERITAS-Reborn — Verified Claim API for LLM Developers

> Signed, sourced, citable claims about AI/ML research for grounded retrieval. v0.1 publishes 186 hand-verified claims; each has 2+ primary sources and an HMAC-SHA256 signature. Free tier: 1,000 claims/mo, no auth.
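Because each claim envelope carries an HMAC-SHA256 signature, clients can re-check integrity offline once they hold the payload and key. A minimal sketch, assuming a hypothetical envelope with `claim` and hex `signature` fields and a locally held key; the real field names and payload canonicalization are defined by the API's own OpenAPI spec, not here:

```python
import hashlib
import hmac
import json


def verify_envelope(envelope: dict, key: bytes) -> bool:
    """Recompute the HMAC-SHA256 over a canonicalized claim payload and
    compare it to the envelope's signature (hypothetical field names)."""
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking the mismatch position via timing.
    return hmac.compare_digest(expected, envelope["signature"])


# Illustrative round trip with a made-up key and claim:
key = b"example-signing-key"
claim = {"id": "clm_0001", "text": "Attention Is All You Need introduced the Transformer."}
payload = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()
envelope = {
    "claim": claim,
    "signature": hmac.new(key, payload, hashlib.sha256).hexdigest(),
}
print(verify_envelope(envelope, key))  # True
```

Canonicalizing the payload before signing (sorted keys, no whitespace) is what makes the signature reproducible across clients and languages.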
- https://sourcescore.org/playground/ — interactive in-browser /api/v1/verify demo (no signup)
- https://sourcescore.org/quickstart/ — 5-minute self-serve onboarding (HowTo schema)
- https://sourcescore.org/claims/ — claim browser, indexed catalog
- https://sourcescore.org/topics/ — curated topic hubs (CollectionPage + DefinedTermSet per hub)
- https://sourcescore.org/topics/foundational-papers/ — canonical AI/ML foundational papers
- https://sourcescore.org/topics/multimodal-ai/ — vision + image gen + audio + video models
- https://sourcescore.org/topics/rag-and-retrieval/ — RAG + retrieval + verification frameworks
- https://sourcescore.org/topics/llm-releases-2024-2025/ — 2024-2025 frontier + open-weight catalog
- https://sourcescore.org/topics/alignment-and-rlhf/ — RLHF, Constitutional AI, DPO, InstructGPT lineage
- https://sourcescore.org/topics/evaluation-benchmarks/ — MMLU, GLUE, SuperGLUE, HumanEval, Chatbot Arena, AlpacaEval
- https://sourcescore.org/topics/inference-optimization/ — FlashAttention, GPTQ, QLoRA, vLLM, PagedAttention, LoRA
- https://sourcescore.org/topics/ai-organizations/ — labs, founders, lineage map across OpenAI/Anthropic/DeepMind/Mistral/etc.
- https://sourcescore.org/use-cases/ — concrete deployment patterns
- https://sourcescore.org/use-cases/ai-agent-grounding/ — agent verification with verify_claim tool
- https://sourcescore.org/use-cases/rag-pipeline-verification/ — close the right-doc-wrong-number gap
- https://sourcescore.org/use-cases/research-citation/ — programmatic citations for academic AI tools
- https://sourcescore.org/claims/[id]/ — per-claim verification page (Article + DefinedTerm + Dataset schema)
- https://sourcescore.org/claims/tags/ — full tag index (browse by topic)
- https://sourcescore.org/claims/tag/[tag]/ — per-tag claim listings with co-occurring tag surface
- https://sourcescore.org/api/v1/claims.json — full claim catalog (ClaimSummary[])
- https://sourcescore.org/api/v1/claims/[id].json — per-claim signed envelope (HMAC-SHA256)
- https://sourcescore.org/api/v1/tags.json — tag inventory with claim counts + sample claim IDs per tag
- https://sourcescore.org/api/v1/methodology.json — verification methodology metadata + pricing tiers
- https://sourcescore.org/api/v1/search?q= — keyword search across claims (GET, public, no auth)
- https://sourcescore.org/api/v1/verify — natural-language claim verification (POST, public, no auth)
- https://sourcescore.org/api/v1/openapi.json — OpenAPI 3.1 spec for the v1 claim API
- https://sourcescore.org/docs/ — developer docs (curl + JS + Python examples)
- https://sourcescore.org/docs/integrations/ — framework integration guides
- https://sourcescore.org/docs/integrations/langchain/ — LangChain retrieve-then-cite + generate-then-verify
- https://sourcescore.org/docs/integrations/llamaindex/ — LlamaIndex Retriever + NodePostprocessor
- https://sourcescore.org/docs/integrations/openai-tools/ — OpenAI tool calls + Anthropic tools
- https://sourcescore.org/docs/integrations/vercel-ai-sdk/ — Next.js + Vercel AI SDK
- https://sourcescore.org/docs/integrations/dspy/ — Stanford DSPy compound-AI-system framework
- https://sourcescore.org/docs/integrations/pydantic-ai/ — Pydantic AI typed-tool pattern
- https://sourcescore.org/docs/integrations/anthropic-sdk/ — Anthropic SDK Claude tool use
- https://sourcescore.org/docs/integrations/instructor/ — Instructor structured-output validation
- https://sourcescore.org/glossary/ — 35-term AI/ML glossary with DefinedTermSet schema
- https://sourcescore.org/concepts/ — pillar explainers (5 pillars: grounding, hallucination, RAG vs VERITAS, citation chains, evaluation harnesses)
- https://sourcescore.org/concepts/citation-chain/ — provenance graphs for LLM citations (stable ID + signature + re-fetchable URL)
- https://sourcescore.org/concepts/evaluation-harness/ — why benchmark scores vary across LM Eval / HELM / lab-internal harnesses
- https://sourcescore.org/concepts/llm-grounding/ — definition + 3 production patterns
- https://sourcescore.org/concepts/hallucination/ — categories, root causes, mitigations
- https://sourcescore.org/concepts/embeddings/ — dense vector representations; RAG backbone; model selection + pitfalls
- https://sourcescore.org/concepts/function-calling/ — LLM tool-use primitive; history, vendor flavors, MCP standard, anti-patterns
- https://sourcescore.org/blog/ — VERITAS launch announcement + tutorials + methodology rigor posts
- https://sourcescore.org/changelog/ — public ship log (features / catalog / fixes / breaking)
- https://sourcescore.org/security/ — responsible disclosure + signing-key rotation policy
- https://sourcescore.org/.well-known/security.txt — RFC 9116 security contacts
- https://sourcescore.org/pricing/ — Free (1k claims/mo) / Indie €19 / Startup €99 / Scale €499 tiers
- https://sourcescore.org/feed.xml — RSS feed of blog posts
- https://sourcescore.org/claims/feed.xml — RSS feed of catalog updates

## License

- Methodology: proprietary; cite as "SourceScore Methodology v0.1, sourcescore.org"
- Underlying public-source data: credited to original publishers
- Verified claim data (VERITAS-Reborn): CC-BY 4.0; cite as "SourceScore Claim [id], sourcescore.org"
- Contact: contact@sourcescore.org

# Generated automatically from current data at build time.
# Source: scripts/generate-llms-txt.mjs
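The tool-calling integrations listed above generally expose claim verification to the model as a single JSON-schema tool. A hedged sketch of that pattern in Python, using OpenAI-style tool syntax; the tool name, parameter shape, and POST body here are illustrative assumptions, with the documented contract living in /api/v1/openapi.json:

```python
import json
import urllib.request

# Illustrative tool definition for OpenAI-style tool calling. The actual
# request/response shape is specified by the API's OpenAPI spec, not here.
VERIFY_TOOL = {
    "type": "function",
    "function": {
        "name": "verify_claim",
        "description": (
            "Check a natural-language claim against the SourceScore "
            "verified-claim catalog and return matching sourced claims."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "claim": {"type": "string", "description": "The claim to verify."},
            },
            "required": ["claim"],
        },
    },
}


def verify_claim(claim: str) -> dict:
    """Handler for the tool call: POST the claim to the public verify
    endpoint (no auth on the free tier). Body field name is an assumption."""
    req = urllib.request.Request(
        "https://sourcescore.org/api/v1/verify",
        data=json.dumps({"claim": claim}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Wiring this in means passing `VERIFY_TOOL` in the request's `tools` array and dispatching any `verify_claim` tool call from the model to the handler, feeding the verification result back as the tool message.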