SourceScore

Verified claim · AI-ML · 100% confidence

BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018).

Last verified 2026-05-16 · Methodology veritas-v0.1 · 4c1ee70007dc89c1

Structured fields

Subject
BERT (Bidirectional Encoder Representations from Transformers)
Predicate
introduced_in_paper
Object
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018)
Confidence
100%
Tags
bert · foundational · devlin · 2018 · google · nlp

Sources (2)

[1] Preprint · arXiv (Devlin, Chang, Lee, Toutanova) · 2018-10-11

    BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers."

[2] Peer-reviewed · Association for Computational Linguistics · 2019-06-02

    BERT (NAACL 2019 proceedings)

Cite this claim

Ready-to-paste citation (Markdown / plain text):

BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018). — SourceScore Claim 4c1ee70007dc89c1 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/4c1ee70007dc89c1.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim, the primary source, and a click-through to this canonical page. Licensed CC-BY 4.0; attribution is included.

<iframe src="https://sourcescore.org/embed/claim/4c1ee70007dc89c1/" width="100%" height="360" frameborder="0" loading="lazy" title="BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018)."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
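
This page documents only the single-claim URL, so the sketch below is purely hypothetical: it assumes a claims index at /api/v1/claims.json that accepts a tag query parameter, which may not exist in this form.

import httpx

# Hypothetical endpoint and fields: this page only documents
# https://sourcescore.org/api/v1/claims/<id>.json for a single claim.
r = httpx.get("https://sourcescore.org/api/v1/claims.json", params={"tag": "bert"})
for claim in r.json().get("claims", []):
    print(claim["id"], claim["statement"])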

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/4c1ee70007dc89c1.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/4c1ee70007dc89c1.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/4c1ee70007dc89c1.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "BERT (Bidirectional Encoder Representations from Transformers) introduced in paper: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018)."
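
Verify the signature locally (sketch)

The envelope's exact field layout and signing scheme aren't documented on this page, so the following is a minimal sketch. It assumes the envelope carries a hex HMAC-SHA256 signature in a top-level "signature" field, computed over the canonical JSON of the "claim" object with a shared secret obtained out of band; all of those names are assumptions.

import hashlib
import hmac
import json

import httpx

SHARED_SECRET = b"your-signing-key"  # assumption: issued to you out of band

r = httpx.get("https://sourcescore.org/api/v1/claims/4c1ee70007dc89c1.json")
envelope = r.json()

# Assumed canonicalization: sorted keys, compact separators.
canonical = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":")).encode()
expected = hmac.new(SHARED_SECRET, canonical, hashlib.sha256).hexdigest()

# Constant-time comparison avoids leaking timing information.
if hmac.compare_digest(expected, envelope.get("signature", "")):
    print("signature OK:", envelope["claim"]["statement"])
else:
    print("signature mismatch -- do not trust this envelope")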

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx


@tool
def get_bert_bidirectional_encoder_representations_from_transformers_fact() -> dict:
    """Fetch the verified SourceScore claim for BERT (Bidirectional Encoder Representations from Transformers)."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/4c1ee70007dc89c1.json")
    return r.json()
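
A function decorated with @tool becomes a LangChain StructuredTool, so you can exercise it directly before wiring it into an agent. This usage sketch assumes the tool definition above is in scope:

# The tool takes no arguments, so invoke it with an empty input dict.
envelope = get_bert_bidirectional_encoder_representations_from_transformers_fact.invoke({})
print(envelope["claim"]["statement"])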