SourceScore

Integration guide

Vercel AI SDK + VERITAS

Add signed-claim verification to a Next.js + AI SDK chat or completion flow. Two patterns: tool function-calling for model-initiated lookup, and post-stream verification for free-form completions.

Install

pnpm add ai @ai-sdk/openai zod

Pattern 1 — tool() function-calling

Define two tools and let the model invoke them. The AI SDK handles the call/result loop transparently.

// app/api/chat/route.ts
import { streamText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const VERITAS = "https://sourcescore.org/api/v1";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o-mini"),
    system:
      "Use search_claims or verify_claim to ground every AI/ML factual " +
      "assertion before answering. Cite [claim_id] inline with every grounded fact.",
    messages,
    tools: {
      search_claims: tool({
        description: "Search the SourceScore VERITAS catalog of verified AI/ML claims.",
        parameters: z.object({
          query: z.string(),
          limit: z.number().int().min(1).max(20).default(5),
        }),
        execute: async ({ query, limit }) => {
          const r = await fetch(`${VERITAS}/search?q=${encodeURIComponent(query)}&limit=${limit}`);
          return await r.json();
        },
      }),
      verify_claim: tool({
        description: "Verify a specific assertion against the VERITAS catalog. Returns confidence + canonical citation if matched.",
        parameters: z.object({
          statement: z.string(),
          min_confidence: z.number().min(0).max(1).default(0.85),
        }),
        execute: async ({ statement, min_confidence }) => {
          const r = await fetch(`${VERITAS}/verify`, {
            method: "POST",
            headers: { "content-type": "application/json" },
            body: JSON.stringify({ claim: statement, minConfidence: min_confidence }),
          });
          return await r.json();
        },
      }),
    },
    maxSteps: 4, // allow up to 4 tool-call iterations
  });

  return result.toDataStreamResponse();
}

The front end uses useChat() as normal. Tool calls and results stream alongside the text; the AI SDK's data protocol surfaces them to the UI for citation badges.
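Because the system prompt asks the model to cite [claim_id] inline, the UI needs to pull those markers back out of the assistant text to render badges. A minimal sketch (the exact ID format is an assumption here; tighten the regex to match real catalog IDs):

```typescript
// lib/citations.ts
// Extract inline [claim_id] citations from assistant text so the UI can
// render one badge per grounded fact. The bracket convention matches the
// system prompt above; the permissive ID pattern is an assumption.
export function extractClaimIds(text: string): string[] {
  const ids = new Set<string>(); // dedupe repeated citations
  for (const match of text.matchAll(/\[([A-Za-z0-9_-]+)\]/g)) {
    ids.add(match[1]);
  }
  return [...ids];
}
```

Deduping via a Set keeps one badge per claim even when the model cites the same fact several times in an answer.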

Pattern 2 — Post-stream verification

When you want free-form generation plus a confidence layer, run verification after the stream completes, then render unverified assertions with a warning chip in your UI.

// lib/verify.ts
export async function verifyLines(text: string) {
  const lines = text.split("\n").map(s => s.trim()).filter(Boolean);
  const VERITAS = "https://sourcescore.org/api/v1";

  return Promise.all(lines.map(async line => {
    const r = await fetch(`${VERITAS}/verify`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ claim: line, minConfidence: 0.85 }),
    });
    const { bestMatch } = await r.json();
    return {
      statement: line,
      verified: !!bestMatch,
      confidence: bestMatch?.confidence ?? 0,
      claimId: bestMatch?.id ?? null,
      url: bestMatch ? `https://sourcescore.org/claims/${bestMatch.id}/` : null,
    };
  }));
}

// app/page.tsx (client component)
"use client";
import { useState } from "react";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { verifyLines } from "@/lib/verify";

// Demo only: generateText is called inline for brevity. In a real app, run
// it server-side (route handler or server action), since the OpenAI API key
// is not available in the browser.

export default function Page() {
  const [out, setOut] = useState<Array<Awaited<ReturnType<typeof verifyLines>>[number]>>([]);

  async function ask(question: string) {
    const { text } = await generateText({
      model: openai("gpt-4o-mini"),
      prompt: `Answer with one fact per line:\n${question}`,
      temperature: 0,
    });
    setOut(await verifyLines(text));
  }

  return (
    <div>
      <button onClick={() => ask("When was the Transformer introduced?")}>Ask</button>
      <ul>
        {out.map((r, i) => (
          <li key={i}>
            {r.statement}{" "}
            {r.verified
              ? <a href={r.url!}>✅ [{r.claimId}] ({r.confidence.toFixed(2)})</a>
              : <span className="text-amber-600">⚠️ unverified</span>}
          </li>
        ))}
      </ul>
    </div>
  );
}

Edge runtime considerations

The fetch-based VERITAS client works in both Node and Edge runtimes — no native dependencies. For Vercel Edge Functions:

  • Set export const runtime = "edge" in your route.
  • VERITAS p95 latency ~80ms — comfortable within Edge function timeouts.
  • No SDK import needed — VERITAS is plain HTTP.
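Opting in is a one-line change to the Pattern 1 route; everything else (streamText, the fetch-based VERITAS calls) stays as-is, since nothing in it touches Node-only APIs:

```typescript
// app/api/chat/route.ts — run this route on the Edge runtime.
export const runtime = "edge";
```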

UI patterns

Render verified claims with a clickable badge that opens the canonical SourceScore page in a new tab. Render unverified claims with an amber chip and a tooltip explaining the catalog scope. Two badge patterns to ship in your design system:

<span className="verified-badge">
  ✓ [{claimId}] {confidence.toFixed(2)}
</span>

<span className="unverified-chip" title="Outside VERITAS catalog scope">
  ⚠ unverified
</span>

The verified badge should be a link to https://sourcescore.org/claims/<id>/ for full provenance.

Next steps