SourceScore

Verified claim · AI-ML · 100% confidence

GLUE benchmark introduced in paper: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018).

Last verified 2026-05-16 · Methodology veritas-v0.1 · aa113b5e61d5c214

Structured fields

Subject
GLUE benchmark
Predicate
introduced_in_paper
Object
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018)
Confidence
100%
Tags
glue · benchmark · evaluation · foundational · 2018

Sources (2)

  [1] preprint · arXiv (Wang, Singh, Michael, Hill, Levy, Bowman) · 2018-04-20
    GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
    We introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks.
  [2] official blog · NYU
    GLUE — official site

Cite this claim

Ready-to-paste citation (Markdown / plain text):

GLUE benchmark introduced in paper: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018). — SourceScore Claim aa113b5e61d5c214 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/aa113b5e61d5c214.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim, its primary source, and a click-through to this canonical page. Licensed CC BY 4.0; attribution is included.

<iframe src="https://sourcescore.org/embed/claim/aa113b5e61d5c214/" width="100%" height="360" frameborder="0" loading="lazy" title="GLUE benchmark introduced in paper: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018)."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/aa113b5e61d5c214.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/aa113b5e61d5c214.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "GLUE benchmark introduced in paper: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/aa113b5e61d5c214.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "GLUE benchmark introduced in paper: GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding (Wang et al., 2018)."
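
Verify the signature locally (sketch)

The envelope's exact layout isn't shown on this page, so treat the following as a minimal sketch of the local verification mentioned above: it assumes the claim payload lives under "claim", the hex-encoded MAC under "signature", that the MAC is computed over a canonical JSON serialization of the payload, and that you hold the shared key. Check the API reference for the actual field names and signing scheme before relying on this.

import hashlib
import hmac
import json

import httpx

SHARED_KEY = b"your-sourcescore-hmac-key"  # placeholder; a real key would come with your account

r = httpx.get("https://sourcescore.org/api/v1/claims/aa113b5e61d5c214.json")
envelope = r.json()

# Recompute the MAC over a canonical JSON form of the claim payload
# (sorted keys, no extra whitespace), assuming that is the signed input.
payload = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":")).encode()
expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

# Constant-time comparison avoids leaking the signature via timing.
if hmac.compare_digest(expected, envelope["signature"]):
    print("signature OK:", envelope["claim"]["statement"])
else:
    print("signature mismatch; do not trust this envelope")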

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_glue_benchmark_fact() -> dict:
    """Fetch the verified SourceScore claim for the GLUE benchmark."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/aa113b5e61d5c214.json")
    return r.json()
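
Once defined, the tool can be smoke-tested directly before binding it to a model; calling .invoke({}) on a zero-argument tool is standard langchain_core behavior (the "claim"/"statement" field names are assumed from the examples above):

# Direct invocation for a quick smoke test, no agent required.
envelope = get_glue_benchmark_fact.invoke({})
print(envelope["claim"]["statement"])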