SourceScore

Integration guide

OpenAI tool calls + VERITAS

Expose VERITAS as two function-call tools and let the model decide when to ground itself. No retrieval prompt engineering is required; the model invokes search_claims or verify_claim automatically when it is uncertain.

Tool definitions

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_claims",
            "description": (
                "Search the SourceScore VERITAS catalog of verified AI/ML claims. "
                "Returns top-K matching claims with statement, confidence, "
                "and source URLs. Use this when you need a grounded fact about "
                "model releases, architectures, foundational research, or AI/ML organizations."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Natural-language query"},
                    "limit": {"type": "integer", "default": 5, "minimum": 1, "maximum": 20},
                },
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "verify_claim",
            "description": (
                "Verify a specific assertion against the VERITAS catalog. "
                "Returns a confidence score and the matching claim id if found. "
                "Use this when you have a specific statement to check before asserting it."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "statement": {"type": "string"},
                    "min_confidence": {"type": "number", "default": 0.85},
                },
                "required": ["statement"],
            },
        },
    },
]
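The schema above declares defaults and bounds for limit, but the API does not enforce them client-side and models occasionally send out-of-range values. A minimal sketch of a normalizer (the helper name is ours, not part of any SDK) that applies the defaults before dispatch:

```python
def normalize_args(name: str, args: dict) -> dict:
    """Apply schema defaults and clamp bounds before calling VERITAS."""
    out = dict(args)
    if name == "search_claims":
        # Schema declares default 5, minimum 1, maximum 20.
        out["limit"] = max(1, min(20, int(out.get("limit", 5))))
    elif name == "verify_claim":
        out.setdefault("min_confidence", 0.85)
    return out
```

Call it on the parsed arguments right before call_tool; a limit of 99 comes back clamped to 20.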

Tool-call loop (Python)

import json, requests
from openai import OpenAI

client = OpenAI()
VERITAS = "https://sourcescore.org/api/v1"

def call_tool(name: str, args: dict) -> dict:
    if name == "search_claims":
        r = requests.get(
            f"{VERITAS}/search",
            params={"q": args["query"], "limit": args.get("limit", 5)},
            timeout=10,
        )
        r.raise_for_status()
        return r.json()
    if name == "verify_claim":
        r = requests.post(
            f"{VERITAS}/verify",
            json={"claim": args["statement"], "minConfidence": args.get("min_confidence", 0.85)},
            timeout=10,
        )
        r.raise_for_status()
        return r.json()
    return {"error": f"unknown tool {name}"}

messages = [
    {"role": "system", "content": (
        "Use search_claims or verify_claim to ground any AI/ML factual claim "
        "before asserting it. Cite the returned claim_id with every grounded "
        "fact in the final answer."
    )},
    {"role": "user", "content": "When was the Transformer architecture introduced and by whom?"},
]

while True:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=tools,
        temperature=0,
    )
    msg = resp.choices[0].message
    messages.append(msg.model_dump(exclude_none=True))

    if not msg.tool_calls:
        print(msg.content)
        break

    for tc in msg.tool_calls:
        args = json.loads(tc.function.arguments)
        result = call_tool(tc.function.name, args)
        messages.append({
            "role": "tool",
            "tool_call_id": tc.id,
            "content": json.dumps(result),
        })

Same loop in JavaScript

import OpenAI from "openai";
const client = new OpenAI();
const VERITAS = "https://sourcescore.org/api/v1";

async function callTool(name, args) {
  if (name === "search_claims") {
    const q = new URLSearchParams({ q: args.query, limit: args.limit ?? 5 });
    return (await fetch(`${VERITAS}/search?${q}`)).json();
  }
  if (name === "verify_claim") {
    return (await fetch(`${VERITAS}/verify`, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ claim: args.statement, minConfidence: args.min_confidence ?? 0.85 }),
    })).json();
  }
  return { error: `unknown tool ${name}` };
}

const messages = [
  { role: "system", content: "Use search_claims or verify_claim to ground any AI/ML factual claim. Cite claim_id." },
  { role: "user", content: "When was the Transformer architecture introduced?" },
];

while (true) {
  const resp = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages,
    tools, // same shape as Python example above
    temperature: 0,
  });
  const msg = resp.choices[0].message;
  messages.push(msg);
  if (!msg.tool_calls?.length) { console.log(msg.content); break; }

  for (const tc of msg.tool_calls) {
    const result = await callTool(tc.function.name, JSON.parse(tc.function.arguments));
    messages.push({ role: "tool", tool_call_id: tc.id, content: JSON.stringify(result) });
  }
}

Why this pattern

  • Zero prompt engineering — the model invokes tools by signature alone. No "you must always search before answering" boilerplate.
  • Conditional retrieval — the model skips the tools on trivial questions, so cost stays low and latency is unaffected on questions VERITAS can't help with.
  • Composable — VERITAS lives alongside your other tools (calendar lookup, internal search, web search, calculator). The model decides which to chain.

Anthropic Claude tool use

Same pattern, slightly different shape. Translate the OpenAI tools above to the Anthropic tools parameter — the field names and payloads transfer cleanly:

tools_anthropic = [
    {
        "name": "search_claims",
        "description": "Search the SourceScore VERITAS catalog of verified AI/ML claims.",
        "input_schema": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "limit": {"type": "integer", "default": 5},
            },
            "required": ["query"],
        },
    },
    # ... same for verify_claim
]
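Rather than maintaining two copies by hand, the OpenAI definitions can be translated mechanically: Anthropic nests nothing under a "function" key and calls the JSON Schema input_schema instead of parameters. A sketch of a converter (helper name ours), assuming the OpenAI-shaped tools list defined earlier:

```python
def to_anthropic_tools(openai_tools: list[dict]) -> list[dict]:
    """Convert OpenAI function-call tool definitions to Anthropic's shape.

    The JSON Schema under `parameters` carries over unchanged as
    `input_schema`; only the wrapper differs.
    """
    return [
        {
            "name": t["function"]["name"],
            "description": t["function"]["description"],
            "input_schema": t["function"]["parameters"],
        }
        for t in openai_tools
    ]
```

Feeding the tools list from the OpenAI example through this produces the tools_anthropic shape above, verify_claim included.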

Next steps