Integrations
Drop-in guides for grounding LLM responses with signed, sourced claims. Each guide is copy-paste runnable and covers retrieve-then-cite + generate-then-verify patterns.
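As a taste of what the guides cover, here is a framework-agnostic sketch of the two patterns. `VeritasClient`, `Claim`, and the substring matching are illustrative stand-ins for the real SDK and catalog, not its actual API:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the real VERITAS SDK; names are illustrative.
@dataclass
class Claim:
    claim_id: str
    text: str
    source: str

class VeritasClient:
    """Toy in-memory client so both patterns are runnable offline."""

    def __init__(self, catalog):
        self.catalog = catalog  # {claim_id: Claim}

    def search(self, query):
        # Retrieve-then-cite: fetch signed claims *before* generation.
        return [c for c in self.catalog.values() if query.lower() in c.text.lower()]

    def verify(self, text):
        # Generate-then-verify: check a generated assertion *after* the fact.
        return any(text.lower() in c.text.lower() for c in self.catalog.values())

catalog = {"c1": Claim("c1", "Water boils at 100 C at sea level", "nist.gov")}
client = VeritasClient(catalog)

# Pattern 1: retrieve-then-cite — ground the prompt in retrieved claims.
claims = client.search("boils")
context = "\n".join(f"[{c.claim_id}] {c.text} ({c.source})" for c in claims)

# Pattern 2: generate-then-verify — check model output post hoc.
generated = "Water boils at 100 C at sea level"
verified = client.verify(generated)
```

Every guide below implements one or both patterns with its framework's native retriever or tool-calling machinery.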
LangChain
Retrieve-then-cite + generate-then-verify patterns. Drop-in Python + JS examples for grounding LangChain chain responses in signed VERITAS claims.
LlamaIndex
Custom retriever wrapping the VERITAS /search endpoint + post-process node verification. Compatible with QueryEngine + ChatEngine.
OpenAI Tool Calls
Expose VERITAS as native function calls in the OpenAI Chat Completions API. The model auto-invokes verify_claim() when uncertain.
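The tool definition might look like the following; the parameter names are illustrative assumptions, not the official VERITAS contract, and `verify_fn` stands in for the actual lookup:

```python
import json

# Hypothetical verify_claim schema in the Chat Completions `tools` format.
VERIFY_CLAIM_TOOL = {
    "type": "function",
    "function": {
        "name": "verify_claim",
        "description": "Check a factual assertion against the VERITAS catalog "
                       "and return its signed source, if any.",
        "parameters": {
            "type": "object",
            "properties": {
                "claim_text": {
                    "type": "string",
                    "description": "The assertion to verify.",
                }
            },
            "required": ["claim_text"],
        },
    },
}

def dispatch_tool_call(tool_call: dict, verify_fn) -> str:
    """Execute a model-issued verify_claim call; return a JSON result string."""
    args = json.loads(tool_call["function"]["arguments"])
    return json.dumps(verify_fn(args["claim_text"]))
```

Pass `tools=[VERIFY_CLAIM_TOOL]` to `chat.completions.create(...)`, run `dispatch_tool_call` for each entry in the response's `tool_calls`, and send each result back in a `role: "tool"` message.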
Vercel AI SDK
Wire VERITAS into Next.js + AI SDK chains. Two patterns: tool() function-calling via streamText, and post-stream verification for free-form completions. TypeScript-first.
DSPy
Stanford's compound-AI-system framework. Custom dspy.Retrieve backed by the VERITAS catalog + verify-and-flag post-processor module. Compatible with DSPy optimizers — they tune prompts around the retriever, not the catalog.
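The verify-and-flag step itself is framework-independent; here is a sketch of its core logic. The sentence splitting and `[UNVERIFIED]` marker are illustrative choices, and `verify_fn` stands in for the VERITAS lookup — the guide wraps this in a `dspy.Module`:

```python
import re

def verify_and_flag(answer: str, verify_fn) -> str:
    """Split a generated answer into sentences and flag unverified claims."""
    out = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        if not sentence:
            continue
        # Keep verified sentences as-is; prefix the rest with a marker.
        out.append(sentence if verify_fn(sentence) else f"[UNVERIFIED] {sentence}")
    return " ".join(out)
```

Because the optimizer only sees the retriever and the module signatures, tuned prompts stay valid as the catalog grows.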
Pydantic AI
Type-safe claim verification as a Pydantic AI tool. The model calls verify_claim() with a structured input, gets back a typed VerificationResult envelope. Validators catch errors early; downstream code is type-safe.
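The shape of that envelope, sketched with stdlib dataclasses so it runs without Pydantic installed; the field names are assumptions, and the guide's actual Pydantic model may differ:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the Pydantic model in the guide; field names are illustrative.
@dataclass(frozen=True)
class VerificationResult:
    claim_text: str
    verified: bool
    source: Optional[str] = None
    signature: Optional[str] = None

def verify_claim(claim_text: str, catalog: dict) -> VerificationResult:
    """Look up a claim and return a typed envelope either way."""
    hit = catalog.get(claim_text)
    if hit is None:
        return VerificationResult(claim_text, False)
    return VerificationResult(claim_text, True, hit["source"], hit["signature"])
```

Downstream code branches on `result.verified` and can trust that `source` and `signature` are present whenever it is true.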
Anthropic SDK
Expose VERITAS as a Claude tool via the Anthropic SDK. tool_use → execute → tool_result loop. Python + TypeScript examples. Pairs with a system prompt that instructs Claude to self-verify before asserting.
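One turn of that loop, sketched with plain dicts in the Messages API content-block shapes so it stays runnable offline. The SDK call itself is omitted, the `toolu_01` id is illustrative, and `verify_claim` is a local stub standing in for the VERITAS lookup:

```python
# Local stub standing in for the VERITAS lookup.
def verify_claim(claim_text):
    return {"verified": claim_text == "Water boils at 100 C at sea level"}

# What an assistant turn requesting the tool looks like (id is illustrative).
assistant_turn = {
    "role": "assistant",
    "content": [
        {"type": "tool_use", "id": "toolu_01", "name": "verify_claim",
         "input": {"claim_text": "Water boils at 100 C at sea level"}},
    ],
}

# Execute each tool_use block and answer with a matching tool_result block.
results = []
for block in assistant_turn["content"]:
    if block["type"] == "tool_use" and block["name"] == "verify_claim":
        results.append({
            "type": "tool_result",
            "tool_use_id": block["id"],
            "content": str(verify_claim(**block["input"])),
        })

tool_result_turn = {"role": "user", "content": results}
```

Appending `assistant_turn` and `tool_result_turn` to the message list and calling the API again closes the loop.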
Instructor
Jason Liu's structured-output library. Pydantic models with model_validator hooks that look up claims via VERITAS at parse-time; failed verification triggers Instructor's automatic retry. Type-safe verified-claim outputs end-to-end.
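The retry behavior can be sketched in plain Python, without Instructor installed: a parse-time validator that raises on unverified claims, plus a retry loop like the one Instructor runs for you. `lookup` stands in for the VERITAS call, and walking a list of candidate outputs stands in for re-prompting the model:

```python
class UnverifiedClaim(ValueError):
    """Raised at parse time when a claim fails VERITAS verification."""

def parse_verified(raw: str, lookup) -> dict:
    claim = raw.strip()
    if not lookup(claim):
        raise UnverifiedClaim(claim)
    return {"claim": claim, "verified": True}

def with_retries(candidates, lookup, max_retries=2):
    # Instructor re-prompts the model on validation failure; here we just
    # walk a list of candidate outputs to keep the sketch offline.
    last = None
    for raw in candidates[: max_retries + 1]:
        try:
            return parse_verified(raw, lookup)
        except UnverifiedClaim as exc:
            last = exc
    raise last
```

The first candidate that verifies is returned as a structured output; exhausting the budget surfaces the last validation error.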
Need a framework that isn't listed? Tell us — the next guide is whichever framework gets the most requests this month.