VERITAS vs LLM search-grounding (Perplexity, ChatGPT search)
LLM-with-search grounds via live web retrieval. VERITAS grounds via signed verified-claim envelopes. Latency, reliability, citation quality, signature integrity — trade-offs explained.
Two different grounding philosophies
LLM-with-search (Perplexity, ChatGPT search, Bing, You.com, Brave Search AI): the model issues a search query at response time, pulls live web results, and generates an answer citing the fetched pages.
SourceScore VERITAS: the model (or your application code) queries a hand-curated verified-claim API. Returns a signed atomic claim envelope.
Both are valid grounding patterns. They optimize for different properties: search-grounding for recency + breadth; VERITAS for citation integrity + signature + atomic shape.
At a glance
| | LLM-with-search | SourceScore VERITAS |
|---|---|---|
| Source | Live web search results | Hand-curated catalog |
| Coverage | Whatever's on the web | AI/ML (v0); narrow but deep |
| Latency | ~2-10s (search + read + generate) | ~80ms (single envelope fetch) |
| Recency | Up-to-the-minute | Weekly + on-event refresh |
| Citation quality | Variable (depends on retrieved pages) | Always ≥2 primary sources |
| Signature | None | HMAC-SHA256 on every envelope |
| Reliability | Variable (search index quality matters) | Deterministic (catalog hand-verified) |
| Cost | $5-20/1000 queries | Free 1k/mo, then €19+ |
| Atomic-claim shape | Free-text response with citations | Subject + predicate + object envelope |
The recency trade-off
Search-grounding wins for recent events: today's news, this week's product release, breaking research papers. VERITAS's catalog refreshes weekly + on major events, but isn't real-time.
However: for the AI/ML niche where VERITAS specializes, search-grounding routinely surfaces stale or unreliable sources. We've seen search-grounded LLMs cite a 2023 blog post about Claude 3 capabilities (incorrect by 2025) or a fan-wiki page about Llama 3.1 with the wrong context window. The web is noisy; hand verification beats search ranking for established facts.
The signature trade-off
Search-grounding has no signature. If you log a search-grounded response for an audit trail, you log free text plus a list of URLs. Those URLs can rot, change, or be tampered with. The audit trail degrades over time.
VERITAS's HMAC-SHA256 signature gives you a cryptographically anchored response: log the envelope, log the signature, log the public DID (did:web:sourcescore.org). Three years later you can verify the response was genuine and unmodified.
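The verification step can be sketched in a few lines. This is a minimal illustration, not the documented VERITAS schema: the envelope field names, the canonical-JSON convention, and the shared key are all assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical envelope shape -- field names are assumptions,
# not the documented VERITAS schema.
envelope = {
    "subject": "Llama 3.1",
    "predicate": "released_on",
    "object": "2024-07-23",
    "issuer": "did:web:sourcescore.org",
}

def sign(env: dict, key: bytes) -> str:
    """HMAC-SHA256 over a canonical JSON serialization of the claim."""
    canonical = json.dumps(env, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

def verify(env: dict, signature: str, key: bytes) -> bool:
    """Constant-time comparison; True iff the envelope is unmodified."""
    return hmac.compare_digest(sign(env, key), signature)

key = b"shared-secret"  # illustrative only; real key distribution differs
sig = sign(envelope, key)
assert verify(envelope, sig, key)

# Any mutation breaks the signature, so a logged envelope + signature
# pair can be re-checked years later.
tampered = dict(envelope, object="2024-01-01")
assert not verify(tampered, sig, key)
```

Note that HMAC is a symmetric scheme, so verification requires access to the signing key; the logged DID identifies the issuer whose key material applies.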
For regulated industries (finance, legal, healthcare-research, academic citation), signed envelopes matter. For general consumer apps, they don't.
The atomic-claim shape trade-off
Search-grounding emits free-text responses with citations interleaved. Your downstream code must parse the response to extract structured information.
VERITAS emits structured atomic claims: `{subject: "Llama 3.1", predicate: "released_on", object: "2024-07-23"}`. Your downstream code consumes the structure directly. Useful for fact-only displays, citation databases, and structured report generation.
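Direct structured consumption means no free-text parsing step. A minimal sketch, assuming an envelope dict with the subject/predicate/object keys shown above:

```python
from typing import NamedTuple

class Claim(NamedTuple):
    subject: str
    predicate: str
    object: str

def parse_claim(envelope: dict) -> Claim:
    # Pull the atomic triple out of the envelope; key names mirror
    # the subject/predicate/object shape described in the text.
    return Claim(envelope["subject"], envelope["predicate"], envelope["object"])

def render_fact(claim: Claim) -> str:
    # Structured-to-display rendering without parsing free text.
    return f"{claim.subject} {claim.predicate.replace('_', ' ')} {claim.object}"

claim = parse_claim({"subject": "Llama 3.1",
                     "predicate": "released_on",
                     "object": "2024-07-23"})
print(render_fact(claim))  # Llama 3.1 released on 2024-07-23
```

The same triple feeds a citation database row or a report template without any extraction heuristics.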
Honest verdict per use case
Use search-grounding when:
- You need recent events (today's news, this month's product launches)
- You need broad knowledge coverage
- You're fine with ~2-10s latency
- You don't need cryptographic signatures
- Your application generates free-text responses (chat, search)
Use VERITAS when:
- Your domain is AI/ML (or future Y2 verticals)
- You need sub-100ms latency
- You need cryptographic signatures for audit trails
- You need atomic claim shape
- You need deterministic responses (same query → same answer)
- You're building generate-then-verify pipelines
Use both:
Sophisticated agents do both: search-grounding for breaking news and breadth, VERITAS for AI/ML facts that need signature and structure. The agent decides which tool to invoke based on query shape.
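The routing decision can be as simple as a keyword heuristic. This is a toy policy for illustration only; the cue lists and tool names are assumptions, and a production agent would use the model itself or a classifier to route:

```python
# Toy query router: recency-flavored queries go to search-grounding,
# established AI/ML facts go to the verified-claim lookup.
RECENCY_CUES = ("today", "this week", "latest", "breaking", "news")
AIML_CUES = ("context window", "released", "parameters", "llama", "claude")

def pick_tool(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in RECENCY_CUES):
        return "search"       # breadth + up-to-the-minute recency
    if any(cue in q for cue in AIML_CUES):
        return "veritas"      # signed, structured, deterministic
    return "search"           # default to breadth

assert pick_tool("What happened in AI news today?") == "search"
assert pick_tool("What is Llama 3.1's context window?") == "veritas"
```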
What we're not
VERITAS isn't a web search engine. We can't retrieve fresh news. We don't crawl. Our catalog is refreshed weekly at best.
We're not trying to replace Perplexity or ChatGPT search for general-knowledge queries. We're a precision tool for the verification pass that happens *after* retrieval — or the deterministic-answer pass for established facts that don't need re-searching every time.
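That post-retrieval verification pass can be sketched as a lookup against a verified-claim store. The local dict here stands in for the VERITAS API; the lookup shape and status labels are assumptions for illustration:

```python
# Generate-then-verify: the LLM drafts a claim, then the claim is
# checked against a verified-claim store before display.
VERIFIED = {
    ("Llama 3.1", "released_on"): "2024-07-23",
}

def verify_claim(subject: str, predicate: str, generated_object: str):
    """Return (status, trusted_value) for a model-generated claim."""
    trusted = VERIFIED.get((subject, predicate))
    if trusted is None:
        return "unverified", generated_object  # fall back to search, or flag
    if trusted == generated_object:
        return "verified", trusted
    return "corrected", trusted                # replace the hallucinated value

assert verify_claim("Llama 3.1", "released_on", "2024-07-23") == ("verified", "2024-07-23")
assert verify_claim("Llama 3.1", "released_on", "2023-01-01") == ("corrected", "2024-07-23")
```

Claims outside the catalog come back "unverified", which is exactly where a search-grounding fallback slots in.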