Blog
Posts on AI-citation quality, LLM grounding, and the SourceScore methodology.
2026-05-17 · grounding · hallucination · rag · verification · production · patterns
Six grounding strategies that actually reduce LLM hallucination (and the trade-offs)
Prompt engineering buys 10-30%. Retrieval-augmented generation buys another 20-40%. Signed-claim verification closes the long tail. Six strategies, their measured impact, and when to combine them.
2026-05-16 · framework · comparison · langchain · llamaindex · openai · anthropic · dspy · pydantic-ai · vercel-ai-sdk
LLM framework comparison 2026 — LangChain vs LlamaIndex vs OpenAI tools vs DSPy vs Pydantic AI vs Vercel AI SDK vs Anthropic SDK
Seven LLM frameworks own most of 2026 dev mindshare. They optimize for different things — orchestration, retrieval, type safety, vendor-native integration, deployment ergonomics. Pick by archetype + audience + commitment.
2026-05-16 · methodology · trust · benchmarks · veritas
Why VERITAS doesn't ship performance-comparison claims (and what we ship instead)
Benchmark numbers vary by prompt format, model version, shot count, and evaluation harness. Shipping them as 'verified claims' is the surest way to make the catalog wrong by Thursday. Here's the alternative.
2026-05-16 · tutorial · python · veritas · hallucination
Verifying AI-generated facts in 5 lines of Python
Drop SourceScore VERITAS into your LLM pipeline as a post-generation check. Every claim the model emits gets a confidence score + canonical citation before the user sees it.
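The post-generation check described above can be sketched as a gate that only lets scored, cited claims reach the user. Everything below is illustrative, not the real VERITAS client: `verify_claim` is a stub standing in for the API call, and the `confidence`/`citation` response fields are assumed, not the actual schema.

```python
def verify_claim(claim: str) -> dict:
    # Stub standing in for a POST to a verification API such as VERITAS.
    # A real implementation would return the service's score and citation.
    known = {
        "Transformers use self-attention.": ("Vaswani et al., 2017", 0.97),
    }
    citation, confidence = known.get(claim, (None, 0.0))
    return {"claim": claim, "confidence": confidence, "citation": citation}

def gate_response(claims: list[str], threshold: float = 0.8) -> list[dict]:
    # Keep only claims the verifier scores at or above the threshold,
    # attaching the canonical citation to each survivor.
    results = [verify_claim(c) for c in claims]
    return [r for r in results if r["confidence"] >= threshold]

survivors = gate_response([
    "Transformers use self-attention.",
    "GPT-2 has 10 trillion parameters.",  # unverifiable, gets filtered
])
```

The shape of the gate is the point: verification runs after generation and before display, so unverified claims never ship.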
2026-05-16 · launch · veritas · api · llm-grounding
Stop hallucinating: a developer API for grounding LLM responses with signed, sourced claims
VERITAS is a free-tier-friendly API that returns hand-verified AI/ML claims with their primary sources, an HMAC-SHA256 signature, and a ready-to-paste citation.
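Client-side verification of an HMAC-SHA256 signature like the one above can be sketched with the Python standard library. The key handling and the assumption that the signature covers the raw claim text are illustrative; the real API's signing scheme may differ.

```python
import hmac
import hashlib

def verify_signature(claim_text: str, signature_hex: str, secret: bytes) -> bool:
    # Recompute the HMAC-SHA256 over the claim text and compare in
    # constant time to avoid timing side channels.
    expected = hmac.new(secret, claim_text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Illustrative key; a real shared secret would come from the API dashboard.
secret = b"demo-shared-secret"
claim = "Attention Is All You Need was published in 2017."
sig = hmac.new(secret, claim.encode("utf-8"), hashlib.sha256).hexdigest()

ok = verify_signature(claim, sig, secret)            # genuine claim passes
tampered = verify_signature(claim + "!", sig, secret)  # altered text fails
```

`hmac.compare_digest` rather than `==` is the one non-obvious choice: it keeps comparison time independent of how many leading characters match.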