SourceScore

Verified claim · AI-ML · 100% confidence

CLIP introduced in paper: Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021).

Last verified 2026-05-16 · Methodology veritas-v0.1 · bcdef949cc6d3644

Structured fields

Subject
CLIP
Predicate
introduced_in_paper
Object
Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021)
Confidence
100%
Tags
clip · vision-language · multimodal · foundational · 2021 · openai

Sources (2)

  1. [1] preprint · arXiv (Radford, Kim, Hallacy, Ramesh, Goh, Agarwal, Sastry, Askell, Mishkin, Clark, Krueger, Sutskever) · 2021-02-26

    Learning Transferable Visual Models From Natural Language Supervision
    We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
  2. [2] official blog · OpenAI · 2021-01-05

    CLIP: Connecting Text and Images

Cite this claim

Ready-to-paste citation (Markdown / plain text):

CLIP introduced in paper: Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021). — SourceScore Claim bcdef949cc6d3644 (verified 2026-05-16). https://sourcescore.org/api/v1/claims/bcdef949cc6d3644.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.

<iframe src="https://sourcescore.org/embed/claim/bcdef949cc6d3644/" width="100%" height="360" frameborder="0" loading="lazy" title="CLIP introduced in paper: Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021)."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from within your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/bcdef949cc6d3644.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/bcdef949cc6d3644.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "CLIP introduced in paper: Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/bcdef949cc6d3644.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "CLIP introduced in paper: Learning Transferable Visual Models From Natural Language Supervision (Radford et al., 2021)."

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx

@tool
def get_clip_fact() -> dict:
    """Fetch the verified SourceScore claim for CLIP."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/bcdef949cc6d3644.json")
    return r.json()
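
Verify the signature locally (sketch)

The snippet below is a minimal sketch of local HMAC-SHA256 verification. The envelope field names (`claim`, `signature`) and the canonicalization scheme (compact JSON with sorted keys) are assumptions for illustration, not documented API; check the actual envelope layout and key-distribution docs before relying on this.

```python
import hashlib
import hmac
import json


def verify_envelope(envelope: dict, shared_key: bytes) -> bool:
    """Recompute HMAC-SHA256 over the claim payload and compare it
    to the envelope's signature in constant time.

    NOTE: the "claim" and "signature" field names and the JSON
    canonicalization below are assumptions about the envelope format.
    """
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode("utf-8")
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information
    return hmac.compare_digest(expected, envelope["signature"])
```

A verifier like this lets you keep an audit trail: store the fetched envelope alongside the verification result, and re-run the check whenever the claim is cited.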