SourceScore

Verified claim · AI-ML · 100% confidence

SuperGLUE benchmark introduced in paper: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (Wang et al., 2019).

Last verified 2026-05-16 · Methodology veritas-v0.1 · 1a1e87145608c91a

Structured fields

Subject
SuperGLUE benchmark
Predicate
introduced_in_paper
Object
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (Wang et al., 2019)
Confidence
100%
Tags
superglue · benchmark · evaluation · foundational · 2019

Sources (2)

  1. [1] preprint · arXiv (Wang, Pruksachatkun, Nangia, Singh, Michael, Hill, Levy, Bowman) · 2019-05-02

    SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
    We present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard.
  2. [2] official blog · NYU/Facebook AI/DeepMind

    SuperGLUE — official site

Cite this claim

Ready-to-paste citation (Markdown / plain text):

SuperGLUE benchmark introduced in paper: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (Wang et al., 2019). — SourceScore Claim 1a1e87145608c91a (verified 2026-05-16). https://sourcescore.org/api/v1/claims/1a1e87145608c91a.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim, the primary source, and a click-through to this canonical page. Licensed CC-BY 4.0; attribution is included.

<iframe src="https://sourcescore.org/embed/claim/1a1e87145608c91a/" width="100%" height="360" frameborder="0" loading="lazy" title="SuperGLUE benchmark introduced in paper: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (Wang et al., 2019)."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/1a1e87145608c91a.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/1a1e87145608c91a.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "SuperGLUE benchmark introduced in paper: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (Wang et al., 2019)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/1a1e87145608c91a.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "SuperGLUE benchmark introduced in paper: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems (Wang et al., 2019)."
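
Verify the signature locally (Python)

To use the signature for an audit trail, recompute the HMAC-SHA256 locally and compare it to the value shipped in the envelope. The sketch below is a hedged illustration rather than the canonical verification routine: the envelope field names (claim, signature), the canonical-JSON serialization, and the SHARED_SECRET placeholder are assumptions; the exact signing scheme and key distribution come from the SourceScore documentation.

import hashlib
import hmac
import json

import httpx

# Hypothetical shared secret, provided out of band by SourceScore.
SHARED_SECRET = b"your-sourcescore-signing-key"

r = httpx.get("https://sourcescore.org/api/v1/claims/1a1e87145608c91a.json")
envelope = r.json()

# Recompute the HMAC-SHA256 over a canonical JSON serialization of the claim body
# (sorted keys, no extra whitespace) -- assumed to match the server-side signing input.
payload = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":")).encode()
expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

# Constant-time comparison against the signature shipped in the envelope.
if hmac.compare_digest(expected, envelope["signature"]):
    print("signature verified")
else:
    print("signature mismatch; do not trust this envelope")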

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_superglue_benchmark_fact() -> dict:
    """Fetch the verified SourceScore claim for SuperGLUE benchmark."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/1a1e87145608c91a.json")
    return r.json()
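
Before handing the tool to an agent, you can sanity-check it with a direct call; tools created with the @tool decorator expose .invoke(). The envelope field path below mirrors the Python example above and is an assumption about the response shape.

# Direct invocation, outside of any agent loop.
fact = get_superglue_benchmark_fact.invoke({})
print(fact["claim"]["statement"])

# To expose it to a tool-calling chat model (hypothetical `llm` object):
# llm_with_tools = llm.bind_tools([get_superglue_benchmark_fact])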