Verified claim · AI-ML · 100% confidence
InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT.
Last verified 2026-05-16 · Methodology veritas-v0.1 · 590b9de765b8126e
Structured fields
- Subject
- InstructGPT
- Predicate
- introduced_in
- Object
- Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT
- Confidence
- 100%
- Tags
- instructgpt · openai · rlhf · alignment · foundational · 2022 · introduced_in
Sources (2)
[1] preprint · arXiv (Ouyang, Wu, Jiang, Almeida, Wainwright, Mishkin, Zhang, Agarwal, et al. / OpenAI) · 2022-03-04
Training language models to follow instructions with human feedback
“Making language models bigger does not inherently make them better at following a user's intent. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback.”
[2] official blog · OpenAI · 2022-01-27
Aligning language models to follow instructions
Cite this claim
Ready-to-paste citation (Markdown / plain text):
InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT. — SourceScore Claim 590b9de765b8126e (verified 2026-05-16). https://sourcescore.org/api/v1/claims/590b9de765b8126e.json
Embed this claim
Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim + primary source + click-through to this canonical page. CC-BY 4.0; attribution included.
<iframe src="https://sourcescore.org/embed/claim/590b9de765b8126e/" width="100%" height="360" frameborder="0" loading="lazy" title="InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT."></iframe>Preview: open in new tab
Related claims
Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.
InstructGPT methodology introduced in paper: Training language models to follow instructions with human feedback (Ouyang et al., 2022).
5da8f8dffc038b8e · 100% confidence · shares 5 tags (instructgpt, alignment, openai…)
Anthropic Constitutional AI Harmlessness introduced in paper: Bai et al. 2022 — training a helpful and harmless assistant.
6fa575eb9df5ac32 · 100% confidence · shares 4 tags (alignment, foundational, 2022…)
Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017).
67866330cd60e54d · 100% confidence · shares 3 tags (rlhf, alignment, foundational)
Proximal Policy Optimization (PPO) introduced in paper: Proximal Policy Optimization Algorithms (Schulman et al., 2017).
00f224e1ccc158ef · 100% confidence · shares 3 tags (foundational, openai, rlhf)
Speculative decoding introduced in: Leviathan, Kalman, Matias 2023 — Google Research.
6cdc7730bf41bb3d · 100% confidence · shares 3 tags (foundational, 2022, introduced_in)
Use this claim in your code
Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails.
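Local verification might look like the following Python sketch. Treat it as a sketch under assumptions: the envelope field names ("signature", "claim"), the hex digest encoding, and the payload canonicalization (compact, key-sorted JSON of the claim body) are guesses here, not the documented scheme; confirm them against the envelope the API actually returns.

import hashlib
import hmac
import json

def verify_envelope(envelope: dict, shared_secret: bytes) -> bool:
    # Assumption: the HMAC covers the canonical (compact, key-sorted) JSON
    # of the claim body, and the hex digest lives in envelope["signature"].
    payload = json.dumps(
        envelope["claim"], sort_keys=True, separators=(",", ":")
    ).encode()
    expected = hmac.new(shared_secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(expected, envelope["signature"])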
cURL
curl https://sourcescore.org/api/v1/claims/590b9de765b8126e.json
JavaScript / TypeScript
const r = await fetch("https://sourcescore.org/api/v1/claims/590b9de765b8126e.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT."Python
import httpx
r = httpx.get("https://sourcescore.org/api/v1/claims/590b9de765b8126e.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "InstructGPT introduced in: Ouyang et al. 2022 — RLHF-tuned GPT-3, direct ancestor of ChatGPT."LangChain (retrieve-then-cite)
from langchain_core.tools import tool
import httpx

@tool
def get_instructgpt_fact() -> dict:
    """Fetch the verified SourceScore claim for InstructGPT."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/590b9de765b8126e.json")
    return r.json()
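A quick way to smoke-test the tool before wiring it into an agent is to invoke it directly; .invoke is the standard LangChain tool entry point, and the empty dict matches this tool's zero-argument schema:

# Direct invocation, no agent loop required.
envelope = get_instructgpt_fact.invoke({})
print(envelope["claim"]["statement"])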