Verified claim · AI-ML · 100% confidence
RoBERTa introduced in: Liu et al. 2019 — A Robustly Optimized BERT Pretraining Approach.
Last verified 2026-05-16 · Methodology veritas-v0.1 · d4fecb26a4c9cdca
Structured fields
- Subject
- RoBERTa
- Predicate
- introduced_in
- Object
- Liu et al. 2019 — A Robustly Optimized BERT Pretraining Approach
- Confidence
- 100%
- Tags
- roberta · bert · facebook-ai · pretraining · foundational · 2019 · introduced_in
Sources (2)
[1] preprint · arXiv (Liu, Ott, Goyal, Du, Joshi, Chen, Levy, Lewis, Zettlemoyer, Stoyanov / Facebook AI) · 2019-07-26
RoBERTa: A Robustly Optimized BERT Pretraining Approach
“We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it.”
[2] official blog · Hugging Face · 2019-07-26
RoBERTa — Hugging Face Transformers documentation
Cite this claim
Ready-to-paste citation (Markdown / plain text):
RoBERTa introduced in: Liu et al. 2019 — A Robustly Optimized BERT Pretraining Approach. — SourceScore Claim d4fecb26a4c9cdca (verified 2026-05-16). https://sourcescore.org/api/v1/claims/d4fecb26a4c9cdca.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/d4fecb26a4c9cdca/" width="100%" height="360" frameborder="0" loading="lazy" title="RoBERTa introduced in: Liu et al. 2019 — A Robustly Optimized BERT Pretraining Approach."></iframe>
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
DistilBERT introduced in: Sanh et al. 2019 — a smaller, faster, cheaper BERT via knowledge distillation.
245af747a3d21061 · 100% confidence · shares 4 tags (bert, foundational, 2019…)
BART introduced in: Lewis et al. 2019 — denoising sequence-to-sequence pretraining.
f5b422e3255fd7c0 · 100% confidence · shares 4 tags (facebook-ai, foundational, 2019…)
BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018).
4c1ee70007dc89c1 · 100% confidence · shares 2 tags (bert, foundational)
GPT-2 introduced in paper: Language Models are Unsupervised Multitask Learners (Radford et al., 2019).
859551dc078c46f8 · 100% confidence · shares 2 tags (foundational, 2019)
T5 (Text-to-Text Transfer Transformer) introduced in paper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel et al., 2019).
ef28341c3b308737 · 100% confidence · shares 2 tags (foundational, 2019)
Use this claim in your code
Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails; a signature-verification sketch follows the examples below.
cURL
curl https://sourcescore.org/api/v1/claims/d4fecb26a4c9cdca.json
JavaScript / TypeScript
const r = await fetch("https://sourcescore.org/api/v1/claims/d4fecb26a4c9cdca.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "RoBERTa introduced in: Liu et al. 2019 — A Robustly Optimized BERT Pretraining Approach."
Python
import httpx
r = httpx.get("https://sourcescore.org/api/v1/claims/d4fecb26a4c9cdca.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "RoBERTa introduced in: Liu et al. 2019 — A Robustly Optimized BERT Pretraining Approach."
LangChain (retrieve-then-cite)
from langchain_core.tools import tool
import httpx

@tool
def get_roberta_fact() -> dict:
    """Fetch the verified SourceScore claim for RoBERTa."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/d4fecb26a4c9cdca.json")
    return r.json()
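Since the decorated function is a standard langchain_core tool, you can exercise it directly with get_roberta_fact.invoke({}) before wiring it into an agent.

For audit trails you can also check the envelope's HMAC-SHA256 signature locally. The sketch below is a minimal illustration, not the canonical verification routine: it assumes the signed payload sits under a "claim" key, the hex digest under a "signature" key, and that you hold a shared secret obtained out of band. Those field names and the exact canonicalization are assumptions; confirm them against the envelope you actually receive.

import hashlib
import hmac
import json

import httpx

# Assumed shared secret obtained out of band; not part of the public response.
SHARED_SECRET = b"replace-with-your-sourcescore-secret"

r = httpx.get("https://sourcescore.org/api/v1/claims/d4fecb26a4c9cdca.json")
envelope = r.json()

# Hypothetical layout: "claim" is the signed payload, "signature" is a hex digest.
# Canonical JSON (sorted keys, no extra whitespace) is one common convention; adjust if the API differs.
payload = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":")).encode()
expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

# Constant-time comparison avoids leaking information through timing.
if hmac.compare_digest(expected, envelope["signature"]):
    print("Signature OK:", envelope["claim"]["statement"])
else:
    print("Signature mismatch; do not trust this envelope.")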