SourceScore

Verified claim · AI-ML · 100% confidence

BLEU score introduced in paper: BLEU: a Method for Automatic Evaluation of Machine Translation (Papineni et al., 2002).

Last verified 2026-05-16 · Methodology veritas-v0.1 · bf5bdd9756278449

Structured fields

Subject
BLEU score
Predicate
introduced_in_paper
Object
BLEU: a Method for Automatic Evaluation of Machine Translation (Papineni et al., 2002)
Confidence
100%
Tags
bleu · evaluation-metric · machine-translation · foundational · 2002 · acl · ibm

Sources (2)

  1. [1] peer reviewed · ACL Anthology (Papineni, Roukos, Ward, Zhu) · 2002-07-07

    BLEU: a Method for Automatic Evaluation of Machine Translation
    We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run.
  2. [2] docs · Wikipedia

    BLEU — Wikipedia

Cite this claim

Ready-to-paste citation (Markdown / plain text):

BLEU score introduced in paper: BLEU: a Method for Automatic Evaluation of Machine Translation (Papineni et al., 2002). — SourceScore Claim bf5bdd9756278449 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/bf5bdd9756278449.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim, the primary source, and a click-through link to this canonical page. Licensed CC-BY 4.0; attribution is included.

<iframe src="https://sourcescore.org/embed/claim/bf5bdd9756278449/" width="100%" height="360" frameborder="0" loading="lazy" title="BLEU score introduced in paper: BLEU: a Method for Automatic Evaluation of Machine Translation (Papineni et al., 2002)."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/bf5bdd9756278449.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/bf5bdd9756278449.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "BLEU score introduced in paper: BLEU: a Method for Automatic Evaluation of Machine Translation (Papineni et al., 2002)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/bf5bdd9756278449.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "BLEU score introduced in paper: BLEU: a Method for Automatic Evaluation of Machine Translation (Papineni et al., 2002)."

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_bleu_score_fact() -> dict:
    """Fetch the verified SourceScore claim for BLEU score."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/bf5bdd9756278449.json")
    return r.json()
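Verify the signature locally (sketch)

The envelope's HMAC-SHA256 signature can be checked offline. The sketch below is illustrative only: the field names (`claim`, `signature`), the canonicalisation step (sorted-key compact JSON), and the shared key are assumptions, not the documented SourceScore envelope layout — consult the API docs for the real schema.

import hashlib
import hmac
import json

def verify_envelope(envelope: dict, shared_key: bytes) -> bool:
    """Recompute the HMAC over the canonical claim JSON and compare.

    Assumes the envelope carries the claim under "claim" and a hex
    HMAC-SHA256 digest under "signature" (hypothetical field names).
    """
    claimed_sig = envelope["signature"]
    # Canonicalise: sorted keys, no whitespace -- an assumed convention.
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side-channels on the comparison
    return hmac.compare_digest(expected, claimed_sig)

# Demo with a locally signed envelope (stand-in for a fetched one):
key = b"demo-key"
claim = {"statement": "BLEU score introduced in paper: ..."}
payload = json.dumps(claim, sort_keys=True, separators=(",", ":")).encode()
envelope = {
    "claim": claim,
    "signature": hmac.new(key, payload, hashlib.sha256).hexdigest(),
}
print(verify_envelope(envelope, key))  # True

Because the comparison uses `hmac.compare_digest`, a tampered claim or a wrong key fails verification without leaking timing information.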