SourceScore

Verified claim · AI-ML · 100% confidence

InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022).

Last verified 2026-05-16 · Methodology veritas-v0.1 · 5da8f8dffc038b8e

Structured fields

Subject
InstructGPT methodology
Predicate
introduced_in_paper
Object
Training language models to follow instructions with human feedback (Ouyang et al., 2022)
Confidence
100%
Tags
instructgpt · alignment · openai · 2022 · ouyang · rlhf

Sources (2)

  1. [1] preprint · arXiv (Ouyang et al., OpenAI) · 2022-03-04

    Training language models to follow instructions with human feedback
    We show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. … The resulting InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets.
  2. [2] official blog · OpenAI · 2022-01-27

    Aligning language models to follow instructions

Cite this claim

Ready-to-paste citation (Markdown / plain text):

InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022). — SourceScore Claim 5da8f8dffc038b8e (verified 2026-05-16). https://sourcescore.org/api/v1/claims/5da8f8dffc038b8e.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim, its primary source, and a click-through link to this canonical page. Licensed CC-BY 4.0; attribution is included.

<iframe src="https://sourcescore.org/embed/claim/5da8f8dffc038b8e/" width="100%" height="360" frameborder="0" loading="lazy" title="InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022)."></iframe>

Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.

cURL

curl https://sourcescore.org/api/v1/claims/5da8f8dffc038b8e.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/5da8f8dffc038b8e.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/5da8f8dffc038b8e.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022)."
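
Verify the signature (sketch)

The envelope ships with an HMAC-SHA256 signature you can check locally, as noted above. The sketch below shows one way such a check could look; the top-level "signature" field name, the canonical-JSON signing scheme, and the shared key are assumptions, so consult the API reference for the actual envelope layout before relying on it.

import hashlib
import hmac
import json

import httpx

# Hypothetical shared verification key issued by SourceScore.
SHARED_KEY = b"replace-with-your-key"

envelope = httpx.get("https://sourcescore.org/api/v1/claims/5da8f8dffc038b8e.json").json()

# Assumed layout: the claim object is signed as canonical JSON and the
# hex digest is stored in a top-level "signature" field.
payload = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":")).encode()
expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

if not hmac.compare_digest(envelope["signature"], expected):
    raise ValueError("signature mismatch: do not trust this envelope")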

LangChain (retrieve-then-cite)

from langchain_core.tools import tool
import httpx


@tool
def get_instructgpt_methodology_fact() -> dict:
    """Fetch the verified SourceScore claim for InstructGPT methodology."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/5da8f8dffc038b8e.json")
    return r.json()
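
A tool defined with the @tool decorator can also be invoked directly, which is a quick way to sanity-check the claim payload before wiring the tool into an agent. The field path below mirrors the Python example above.

# Direct invocation for a quick check (no agent required);
# the ["claim"]["statement"] path matches the Python example above.
envelope = get_instructgpt_methodology_fact.invoke({})
print(envelope["claim"]["statement"])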