SourceScore

Verified claim · AI-ML · 100% confidence

MTEB benchmark introduced in: Muennighoff et al. 2022 — Massive Text Embedding Benchmark.

Last verified 2026-05-16 · Methodology veritas-v0.1 · cccd161dd058a31e

Structured fields

Subject
MTEB benchmark
Predicate
introduced_in
Object
Muennighoff et al. 2022 — Massive Text Embedding Benchmark
Confidence
100%
Tags
mteb · benchmark · embeddings · huggingface · evaluation · 2022 · introduced_in

Sources (2)

  1. [1] preprint · arXiv (Muennighoff, Tazi, Magne, Reimers / Hugging Face + ContextualAI) · 2022-10-13

    MTEB: Massive Text Embedding Benchmark
    Text embeddings are commonly evaluated on a small set of datasets from a single task not covering their possible applications to other tasks. It is unclear whether state-of-the-art embeddings on semantic textual similarity (STS) can be equally well applied to other tasks like clustering or reranking.
  2. [2] official blog · Hugging Face / MTEB community · 2022-10-13

    MTEB Leaderboard — live ranking on Hugging Face

Cite this claim

Ready-to-paste citation (Markdown / plain text):

MTEB benchmark introduced in: Muennighoff et al. 2022 — Massive Text Embedding Benchmark. — SourceScore Claim cccd161dd058a31e (verified 2026-05-16). https://sourcescore.org/api/v1/claims/cccd161dd058a31e.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.

<iframe src="https://sourcescore.org/embed/claim/cccd161dd058a31e/" width="100%" height="360" frameborder="0" loading="lazy" title="MTEB benchmark introduced in: Muennighoff et al. 2022 — Massive Text Embedding Benchmark."></iframe>

Preview: open in new tab

Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/cccd161dd058a31e.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/cccd161dd058a31e.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "MTEB benchmark introduced in: Muennighoff et al. 2022 — Massive Text Embedding Benchmark."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/cccd161dd058a31e.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "MTEB benchmark introduced in: Muennighoff et al. 2022 — Massive Text Embedding Benchmark."

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_mteb_benchmark_fact() -> dict:
    """Fetch the verified SourceScore claim for MTEB benchmark."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/cccd161dd058a31e.json")
    return r.json()
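
Verifying the signature locally

The envelope's HMAC-SHA256 signature can be checked offline. The sketch below is a minimal example using only the Python standard library; the field names ("claim", "signature"), the canonicalization (compact, sorted-key JSON), and the shared-key scheme are assumptions for illustration — consult the veritas-v0.1 methodology for the exact signing convention.

```python
import hashlib
import hmac
import json

def verify_envelope(envelope: dict, shared_key: bytes) -> bool:
    """Recompute HMAC-SHA256 over the claim body and compare signatures.

    Assumes the envelope carries the claim under "claim" and a hex digest
    under "signature", signed over compact sorted-key JSON. These details
    are illustrative, not the documented veritas-v0.1 scheme.
    """
    body = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(shared_key, body.encode("utf-8"), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information during the comparison
    return hmac.compare_digest(expected, envelope["signature"])
```

A rejected verification means either the envelope was modified in transit or the canonicalization here does not match the server's; treat a failure as a prompt to re-fetch, not as proof of tampering.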