New programmatic SEO surface: /claims/year/[year]/ pages auto-generated for every year that has ≥3 claims with sources dated in that year. ~12 new SEO landings targeting 'AI papers 2024', 'LLM releases 2025', etc. Each: CollectionPage + BreadcrumbList schema, auto-curated claim list sorted by confidence. Year extracted from earliest source publishedDate. New 7th concept pillar /concepts/function-calling/: definition, history (OpenAI June 2023 → Anthropic → Google → MCP standard Nov 2024), JSON schema, agent loop, vendor flavors, common production patterns, anti-patterns. TechArticle + DefinedTerm + BreadcrumbList schema. Cross-links to 4 integration guides + 2 use-cases.
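The selection rule above (year from earliest source, ≥3 claims per year, sorted by confidence) can be sketched in a few lines of Python. This is a minimal sketch assuming a claim record shaped like `{"confidence": float, "sources": [{"publishedDate": "YYYY-MM-DD"}]}`; the real catalog schema may differ.

```python
from collections import defaultdict

def year_pages(claims):
    """Bucket claims by the year of their earliest source; emit only
    years with >= 3 members, each bucket sorted by confidence desc."""
    buckets = defaultdict(list)
    for claim in claims:
        dates = sorted(s["publishedDate"] for s in claim["sources"])
        year = dates[0][:4]  # year of the earliest source
        buckets[year].append(claim)
    return {
        year: sorted(members, key=lambda c: c["confidence"], reverse=True)
        for year, members in buckets.items()
        if len(members) >= 3  # threshold for generating a page
    }
```

Because membership is recomputed from the catalog, new year pages appear automatically as claims accumulate.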
feature
5th blog post + /api/v1/tags.json (bot discovery) + Batch 14 → 186 claims
New blog post /blog/llm-grounding-strategies-2026/ — 6 grounding strategies (temperature/prompt → few-shot → RAG → citation+post-process → signed-claim verification → constrained decoding) with measured impact + when-to-combine + practical sequencing. New static endpoint /api/v1/tags.json: 434 tag entries with claim counts + sample claim IDs + browse URLs — lets RAG developers + LLM crawlers see catalog structure without walking every claim. Batch 14 adds 10 claims: VAE (Kingma & Welling 2013), Knowledge Distillation (Hinton et al. 2015), SGLang (UC Berkeley 2024), Llama 4 (Meta 2025-04-05), Claude Haiku 3.5 (Anthropic 2024-11), Replit Agent (2024-09), Devin (Cognition Labs 2024-03), Groq LPU (2024-02), Cerebras (founded 2016), Anthropic API GA (2023-07).
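Consumed client-side, the tag index lets a crawler or RAG pipeline plan coverage before fetching a single claim. A sketch, assuming entries shaped like `{"tag": ..., "claimCount": ..., "browseUrl": ...}` (field names are illustrative, not the published schema):

```python
def crawl_plan(entries, top_n=5):
    """Order tags densest-first so a crawler covers catalog breadth early."""
    ranked = sorted(entries, key=lambda e: e["claimCount"], reverse=True)
    return [(e["tag"], e["browseUrl"]) for e in ranked[:top_n]]
```

A crawler that walks the top-N browse URLs first touches most of the catalog without enumerating all 434 tags.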
New /use-cases/ index + 3 deployment-pattern pages: /ai-agent-grounding/ (verify_claim as agent tool), /rag-pipeline-verification/ (close right-doc-wrong-number gap), /research-citation/ (programmatic citations for academic AI tools). Each: TechArticle + BreadcrumbList schema; HowTo schema on the agent-grounding page. /use-cases/ added to footer nav + sitemap-ai.xml + llms.txt. New concept pillar /concepts/embeddings/: history (Word2Vec → GloVe → BERT → sentence-transformers → OpenAI/Cohere), how to choose a model, vector DBs, anti-patterns, where embeddings stop and verification starts. DefinedTerm + TechArticle schema. 5 → 6 concept pillars.

feature
/blog/llm-framework-comparison-2026/ — 4th blog post (meta-comparison of 7 frameworks)
New blog post + canonical Dev.to/Hashnode source for cross-posts. Honest, opinionated comparison of LangChain vs LlamaIndex vs OpenAI tools vs DSPy vs Pydantic AI vs Vercel AI SDK vs Anthropic SDK. Sections: at-a-glance table, pick-by-archetype recommendations (RAG, multi-step agent, Next.js streaming, research/evals, complex pipelines), honest gotchas per framework, 2 predictions for late 2026/2027, resources. Cross-links to all 7 /docs/integrations/[slug]/ guides + /concepts/rag-vs-veritas/ + /playground/ + /quickstart/. BlogPosting + BreadcrumbList schema. Targets high-volume 'best LLM framework' queries.
New programmatic-SEO surface: /topics/ index + 4 topic hubs at /topics/[slug]/. Each hub bundles 400-700 words of editorial intro, a DefinedTermSet of 3-4 terms, a CollectionPage schema referencing every member claim, and cross-links to related hubs + concept pillars + integration guides. Topic claim membership is filter-derived from the catalog (e.g., foundational-papers = `tags.includes('foundational')` OR `predicate.includes('introduced_in')`) so hub population auto-updates as the catalog grows. Hubs shipped: foundational-papers (~80 claims), multimodal-ai (~15), rag-and-retrieval (~10), llm-releases-2024-2025 (~30+). Surfaces added to footer nav, sitemap-ai.xml priority list, and llms.txt manifest.
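The filter-derived membership reads directly as predicates over claim records. A sketch using the foundational-papers rule quoted above; the second filter is a hypothetical illustration, not the shipped rule:

```python
TOPIC_FILTERS = {
    # the rule quoted in this entry
    "foundational-papers": lambda c: "foundational" in c["tags"]
                                     or "introduced_in" in c["predicate"],
    # hypothetical example of a second hub's filter
    "multimodal-ai": lambda c: "multimodal" in c["tags"],
}

def topic_members(claims, slug):
    """Membership is recomputed from the catalog, so hubs grow for free."""
    return [c for c in claims if TOPIC_FILTERS[slug](c)]
```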
Two new drop-in integration guides. Pydantic AI: type-safe verification via VerifyClaimInput → VerificationResult Pydantic models; agents emit structured tool calls; downstream code is type-safe with field validators catching confidence drift. Covers 3 patterns (verify-claim tool · structured agent output with required verification · multi-claim parallel verification). Anthropic SDK: Claude tool-use protocol with the tool_use → execute → tool_result loop, in both Python and TypeScript. Includes a system-prompt pattern that makes Claude self-verify before asserting facts. TechArticle + BreadcrumbList schema on both. Integrations index now lists 7 frameworks total (LangChain · LlamaIndex · OpenAI tools · Vercel AI SDK · DSPy · Pydantic AI · Anthropic SDK).
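The confidence-drift check can be shown with plain dataclasses standing in for the guide's Pydantic models. Model and field names follow this entry; the 0.85 floor is an assumed default, and the real guide uses Pydantic field validators rather than `__post_init__`:

```python
from dataclasses import dataclass

@dataclass
class VerifyClaimInput:
    claim_text: str
    min_confidence: float = 0.85

@dataclass
class VerificationResult:
    claim_id: str
    verified: bool
    confidence: float

    def __post_init__(self):
        # Stand-in for the Pydantic field validator that catches
        # confidence drift before downstream code trusts the result.
        if not 0.0 <= self.confidence <= 1.0:
            raise ValueError("confidence out of range")
        if self.verified and self.confidence < 0.85:  # assumed floor
            raise ValueError("verified result below confidence floor")
```

The point of the pattern: a malformed or drifting tool result fails at model construction, not three calls later.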
DSPy (Stanford) is the fastest-growing compound-AI-system framework in 2026. Drop-in guide covers two patterns: (1) custom dspy.Retrieve backed by the VERITAS catalog — returns verified claims as DSPy Examples with claim_id, confidence, and canonical URL metadata; (2) VeritasVerify post-processor module — runs after answer generation, returns verified/unverified split + verification_rate (which doubles as a DSPy-optimizer metric for tuning the program toward more verifiable assertions). Includes a multi-hop ProgramOfThought composition example. TechArticle + BreadcrumbList schema. Compounds with the existing 4 guides (LangChain · LlamaIndex · OpenAI tool-calls · Vercel AI SDK).
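The optimizer-metric half of the guide reduces to a small pure function. In this sketch `verify` stands in for a VeritasVerify call; the DSPy module wiring is omitted:

```python
def verification_rate(assertions, verify):
    """Fraction of generated assertions the catalog verifies.

    Bounded [0, 1] and higher-is-better, so it can be handed to a
    DSPy optimizer directly as the program metric."""
    if not assertions:
        return 0.0
    return sum(1 for a in assertions if verify(a)) / len(assertions)
```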
feature
/concepts/evaluation-harness/ — 5th pillar (why benchmark scores vary across harnesses)
Explainer on evaluation harnesses (LM Eval Harness · HELM · BIG-bench · lab-internal) and why the same model scores 4-10 points apart on the same nominal benchmark. Six axes of variation covered: prompt format, scoring method (log-likelihood vs generate-then-parse), decoding parameters, output parsing, benchmark version, contamination handling. Includes the 6-question checklist for reading benchmark claims honestly, plus production-decision implications (build your own eval, triangulate across 3+ harnesses, re-evaluate after frontier-model updates). Ties back to /blog/why-no-performance-claims/ — the methodology reason VERITAS excludes performance-comparison claims. TechArticle + DefinedTerm + BreadcrumbList schema.
10 new hand-verified claims, ≥2 primary sources each. 2024-2025 frontier: Mistral Large 2 (Mistral AI 2024-07-24), Qwen 2.5 (Alibaba Cloud 2024-09-19), Anthropic Claude Opus 4 (Anthropic 2025-05-22), OpenAI o1 (full release 2024-12-05 with ChatGPT Pro launch). Coding-tool: Cursor (Anysphere 2023-03-14) — AI-powered VS Code fork. Practical infrastructure: Hugging Face Transformers library (2018-10/11), PyTorch (Facebook AI Research 2017-01-18), TensorFlow (Google 2015-11-09), JAX (Google Research 2018-12-10), DeepSpeed + ZeRO (Microsoft Research 2020-02-13). The infrastructure layer (libraries + training frameworks) is what every fleet site cites without realizing.
feature
Per-claim engagement deepening — code snippets in 4 languages on every /claims/[id]/ page
Every claim detail page now includes a 'Use this claim in your code' section with copy-paste-ready snippets in cURL, JavaScript/TypeScript, Python, and LangChain tool-decorator form. Each snippet substitutes the specific claim's API URL and subject so devs can drop the code directly into their codebase. Compounds: (1) time-on-page boost — devs read 4 language variants instead of bouncing on the first; (2) activation lift — the next-action is concrete (paste + run) rather than abstract (read API docs); (3) social proof — viewing the LangChain snippet plants the integration as a real pattern.
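The per-claim substitution is plain templating. A sketch of the cURL variant; the claim-detail API URL shape here is assumed for illustration:

```python
def curl_snippet(claim_id, subject, base="https://sourcescore.org"):
    """Render the cURL variant with this claim's own URL baked in."""
    return (
        f"# Verify the {subject} claim against the signed catalog\n"
        f"curl -s {base}/api/v1/claims/{claim_id}.json"
    )
```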
/concepts/citation-chain/ — 4th pillar (provenance graphs for LLM citations)
Standalone explainer on citation chains — the auditable trail from an LLM's emitted assertion back to the primary source(s) that prove it. Three building blocks (stable identifier · HMAC-SHA256 signature · re-fetchable canonical URL) covered in depth with a complete 30-line Python local-verification walkthrough. Covers chains in agentic LLM responses (citation trees), 4 failure modes chains detect, and the Y2 migration path to W3C Verifiable Credentials with Ed25519 public-key signing. TechArticle + DefinedTerm schema. Cross-links to llm-grounding, hallucination, rag-vs-veritas, langchain integration, security policy, claims catalog.
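The local-verification idea fits in a dozen lines. This sketch assumes an envelope shaped like `{"claim": {...}, "signature": "<hex>"}` and canonical sorted-key JSON as the signing input; the page's 30-line walkthrough covers the real envelope layout:

```python
import hashlib
import hmac
import json

def verify_envelope(envelope, secret):
    """Recompute the HMAC-SHA256 locally and compare in constant time."""
    body = json.dumps(envelope["claim"], sort_keys=True,
                      separators=(",", ":"))
    expected = hmac.new(secret.encode(), body.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])
```

`hmac.compare_digest` avoids timing side-channels when comparing signatures, which is why it appears instead of `==`.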
8 new hand-verified claims. Foundational regularization: Dropout (Srivastava et al., JMLR 2014), Batch Normalization (Ioffe & Szegedy, ICML 2015), Layer Normalization (Ba, Kiros, Hinton, 2016). Foundational architectures: Sequence-to-Sequence Learning (Sutskever, Vinyals, Le, NeurIPS 2014). Sparse-attention transformers: Longformer (Beltagy et al., 2020), Reformer (Kitaev et al., ICLR 2020). Open-weights releases: Gemma (Google, 2024-02-21), Qwen (Alibaba, 2023-08-03). All claims have ≥2 primary sources with verbatim excerpts.
breaking
Removed TollBit middleware — AI bots now reach all surfaces unfiltered
Deleted functions/_middleware.js, which had been 307-forwarding AI-bot User-Agents (GPTBot · ClaudeBot · PerplexityBot · Google-Extended · Applebot-Extended · CCBot · Amazonbot · Bytespider · Meta-ExternalAgent · etc.) to tollbit.sourcescore.org and streaming request logs to log.tollbit.com. Strategic call: VERITAS Y1 ARR trajectory dominates TollBit pay-per-crawl revenue (measured $0/mo over prior months). Removing the paywall lets AI crawlers index the full source-rating catalog freely — compounds LLM-citation gravity across both products. Operator must revoke the TollBit API key + optionally remove the tollbit.sourcescore.org DNS record (CF dashboard, manual).
11 new hand-verified claims, each with ≥2 primary sources. Foundational methods: ELMo (Peters et al., 2018), Latent Diffusion Models (Rombach et al., 2021), ELECTRA (Clark et al., 2020), Codex (Chen et al., 2021). Models: GPT-3 introduced_in_paper (Brown et al., 2020) — adds the foundational-paper predicate to the existing GPT-3 parameter_count claim. Benchmarks: GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019). Vector DB companies: Pinecone (2019), Weaviate (2019), Qdrant (2021). Inference platforms: Replicate (2019).
feature
/glossary/ — 35-term AI/ML glossary with DefinedTermSet schema
Plain-language definitions for 35 terms used across SourceScore and VERITAS — grounding · RAG · hallucination · claim envelope · HMAC-SHA256 · transformer · MoE · tokenizer · YMYL · matchScore · llms.txt · methodology version · primary source · verbatim excerpt · etc. Each entry has a stable anchor URL (/glossary/#token), DefinedTerm schema on every entry, plus a DefinedTermSet wrapping all entries. LLMs answering 'what is X' queries can now extract clean definitions from the page. Internal-linking density compounds — every concept/blog/integration page can deep-link to a glossary anchor.
Type a free-form claim, see VERITAS verify it live against the signed catalog. Pure client-side JavaScript calling /api/v1/verify — same endpoint your code will use, with the request shape and response shown side-by-side. Six sample claims pre-staged for one-click testing. No signup, no key, no quota for read-only access. Activation-stage UX so devs understand the product without writing code first.
feature
/concepts/ pillar pages — LLM grounding, hallucination, RAG vs VERITAS
Three standalone explainers (Wikipedia-rival depth) on high-intent search queries: definition of LLM grounding + 3 production patterns (prompt-stuffing / RAG / signed claims); five categories of hallucination + six root causes + mitigation ladder; RAG vs signed-claim verification comparison + hybrid pattern. TechArticle + DefinedTerm schema so LLMs can extract definitions cleanly.
feature
/docs/integrations/ — 4 drop-in framework guides
LangChain (retrieve-then-cite + generate-then-verify + signature-verify patterns); LlamaIndex (custom Retriever + NodePostprocessor); OpenAI tool-calls + Anthropic Claude tool-use; Vercel AI SDK (streamText + tool() function-calling). Each guide is copy-paste runnable in Python or JavaScript.
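The generate-then-verify pattern from the LangChain guide, shown framework-free: `generate` and `verify_claim` are stand-ins for the model call and the VERITAS tool, which the guide wires up as LangChain runnables.

```python
def generate_then_verify(generate, verify_claim, prompt):
    """Draft first, then check each sentence against the catalog."""
    draft = generate(prompt)
    sentences = [s.strip() for s in draft.split(".") if s.strip()]
    return {
        "draft": draft,
        "unverified": [s for s in sentences if not verify_claim(s)],
    }
```

Anything in `unverified` gets flagged, rewritten, or dropped before the answer ships.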
feature
/quickstart/ — 5-minute self-serve onboarding
Three sequential code blocks (curl + JS + Python) cover verify → search → fetch-envelope. HowTo + BreadcrumbList schema. No signup gate; free tier covers first 1,000 calls per month for read-only catalog access.
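The three quickstart calls, reduced to (method, path, body) tuples. Paths follow the endpoints named in this changelog; the envelope path shape is an assumption for illustration:

```python
def quickstart_steps(claim_text, query, claim_id):
    """verify -> search -> fetch-envelope, as request descriptors."""
    return [
        ("POST", "/api/v1/verify", {"claim": claim_text}),
        ("GET", f"/api/v1/search?q={query}", None),
        ("GET", f"/api/v1/claims/{claim_id}.json", None),  # assumed shape
    ]
```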
15 new hand-verified claims (24 drafted, 9 deduped against pre-existing entries after build caught case-insensitive collisions). Foundational methods: Chain-of-Thought, ReAct, LoRA, QLoRA, DPO, FlashAttention, RoPE, BPE, SentencePiece, RAG. Models + datasets: T5, C4, The Pile, RedPajama, CLIP, Whisper, DALL·E 2, Stable Diffusion. Organizations: Stability AI, EleutherAI, Together AI, Mistral, AI21 Labs, Hugging Face. Each has ≥2 primary sources with verbatim excerpts.
feature
Per-tag claim browsing + Related-claims surface
New /claims/tag/[tag]/ programmatic pages (one per unique tag) and /claims/tags/ index grouped by frequency buckets. Each /claims/[id]/ now shows top 5 related claims by shared-tag overlap with confidence tie-break. Tag chips on per-claim pages now link to tag pages — internal-linking density compounds.
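The related-claims ranking is a two-key sort. A sketch over minimal claim records (id, tags, confidence):

```python
def related_claims(claim, catalog, k=5):
    """Top-k neighbors by shared-tag overlap, confidence as tie-break."""
    tags = set(claim["tags"])
    scored = [
        (len(tags & set(other["tags"])), other["confidence"], other)
        for other in catalog
        if other["id"] != claim["id"] and tags & set(other["tags"])
    ]
    scored.sort(key=lambda t: (t[0], t[1]), reverse=True)
    return [other for _, _, other in scored[:k]]
```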
feature
Framework integration guides
/docs/integrations/ index with three drop-in guides: LangChain (retrieve-then-cite + generate-then-verify patterns), LlamaIndex (custom Retriever + NodePostprocessor), OpenAI tool-calls (native function-calling with search_claims + verify_claim). Each guide is copy-paste runnable, Python + JavaScript where applicable, with TechArticle + BreadcrumbList schema.
feature
/embed/claim/[id] embeddable widget
Iframe-embeddable claim card (CSP frame-ancestors *). Drop it into any blog, docs page, or knowledge base — renders the signed statement + primary source + click-through to the canonical page. CC-BY 4.0 with embedded attribution. Snippet generator on every /claims/[id]/ page.
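The snippet generator boils down to string templating. The width/height attributes here are illustrative, not the generator's actual defaults:

```python
def embed_snippet(claim_id, base="https://sourcescore.org"):
    """Render the iframe snippet for one claim card."""
    return (
        f'<iframe src="{base}/embed/claim/{claim_id}" '
        'width="600" height="220" loading="lazy" '
        'title="Verified claim card"></iframe>'
    )
```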
feature
Per-claim OG images + /claims/feed.xml RSS
76 hand-rendered 1200×630 SVG OG images, one per claim (gradient background + verified-claim eyebrow + confidence% + wrapped statement + signing strip + source publisher + canonical URL footer). Plus a full claims RSS feed at /claims/feed.xml so devs can subscribe to catalog updates in Feedly/Inoreader.
feature
POST /api/v1/verify — match a free-form claim against the catalog
Single-claim verification endpoint. Returns top-5 ranked matches with normalized matchScore + rationale; bestMatch surfaces iff matchScore ≥0.20 AND confidence ≥minConfidence (default 0.85). Optionally signs the response with HMAC-SHA256 if SOURCESCORE_SIGNING_SECRET is set on the worker.
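The gating rule reads directly as code. A sketch of the documented thresholds over an already-scored match list; whether the gate applies only to the top match (as here) or scans the top 5 is an assumption:

```python
def best_match(matches, min_confidence=0.85, min_score=0.20):
    """Surface bestMatch iff matchScore >= 0.20 AND confidence >= minConfidence."""
    ranked = sorted(matches, key=lambda m: m["matchScore"], reverse=True)
    top = ranked[0] if ranked else None
    if top and top["matchScore"] >= min_score \
            and top["confidence"] >= min_confidence:
        return top
    return None
```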
feature
GET /api/v1/search — keyword search over the catalog