SourceScore

Verified claim · AI-ML · 100% confidence

Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017).

Last verified 2026-05-16 · Methodology veritas-v0.1 · 67866330cd60e54d

Structured fields

Subject
Reinforcement Learning from Human Feedback (RLHF)
Predicate
introduced_in_paper
Object
Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017)
Confidence
100%
Tags
rlhf · alignment · foundational · christiano · 2017 · nips

Sources (3)

  1. [1] preprint · arXiv (Christiano, Leike, Brown, Martic, Legg, Amodei) · 2017-06-12

    Deep Reinforcement Learning from Human Preferences
    For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. … We explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments.
  2. [2] peer reviewed · NeurIPS Foundation · 2017-12-04

Deep Reinforcement Learning from Human Preferences (NIPS 2017 proceedings)
  3. [3] official blog · OpenAI · 2017-06-13

    Learning from human preferences

Cite this claim

Ready-to-paste citation (Markdown / plain text):

Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017). — SourceScore Claim 67866330cd60e54d (verified 2026-05-16). https://sourcescore.org/api/v1/claims/67866330cd60e54d.json

Embed this claim

Drop this iframe into any blog post, docs page, or knowledge base. The widget renders the signed claim, the primary source, and a click-through link to this canonical page. Licensed CC-BY 4.0; attribution is included.

<iframe src="https://sourcescore.org/embed/claim/67866330cd60e54d/" width="100%" height="360" frameborder="0" loading="lazy" title="Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017)."></iframe>


Related claims

Other verified claims sharing tags with this one — useful for LLM retrieval graphs and citation discovery.

Use this claim in your code

Fetch this signed envelope from your application. The response includes the verbatim excerpt, primary source URLs, and an HMAC-SHA256 signature you can verify locally for audit trails; a verification sketch follows the Python example below.

cURL

curl https://sourcescore.org/api/v1/claims/67866330cd60e54d.json

JavaScript / TypeScript

const r = await fetch("https://sourcescore.org/api/v1/claims/67866330cd60e54d.json");
const envelope = await r.json();
console.log(envelope.claim.statement);
// "Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017)."

Python

import httpx

r = httpx.get("https://sourcescore.org/api/v1/claims/67866330cd60e54d.json")
envelope = r.json()
print(envelope["claim"]["statement"])
# "Reinforcement Learning from Human Feedback (RLHF) introduced in paper: Deep Reinforcement Learning from Human Preferences (Christiano et al., 2017)."
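
Verify the signature locally (Python)

This page says the envelope carries an HMAC-SHA256 signature but does not document the field name, canonicalization, or key distribution, so the sketch below is illustrative only. It assumes the digest lives in envelope["signature"] as hex, is computed over the key-sorted compact JSON of envelope["claim"], and is keyed with a shared secret obtained out of band from SourceScore; only envelope["claim"]["statement"] is confirmed by the examples on this page.

import hashlib
import hmac
import json

import httpx

# Hypothetical: the verification key is not part of the public API response.
SHARED_SECRET = b"obtain-this-out-of-band"

r = httpx.get("https://sourcescore.org/api/v1/claims/67866330cd60e54d.json")
envelope = r.json()

# Assumed canonicalization: compact, key-sorted JSON of the claim object.
canonical = json.dumps(envelope["claim"], sort_keys=True, separators=(",", ":")).encode()
expected = hmac.new(SHARED_SECRET, canonical, hashlib.sha256).hexdigest()

# Constant-time comparison against the assumed signature field.
if hmac.compare_digest(expected, envelope.get("signature", "")):
    print("signature OK:", envelope["claim"]["statement"])
else:
    print("signature mismatch: check the secret and canonicalization rules")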

LangChain (retrieve-then-cite)

import httpx
from langchain_core.tools import tool


@tool
def get_reinforcement_learning_from_human_feedback_rlhf_fact() -> dict:
    """Fetch the verified SourceScore claim for Reinforcement Learning from Human Feedback (RLHF)."""
    r = httpx.get("https://sourcescore.org/api/v1/claims/67866330cd60e54d.json")
    return r.json()
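
Once defined, the decorated function is a LangChain tool object, so it can be invoked directly or added to an agent's tool list. The call below is a hypothetical usage sketch, not part of the SourceScore API.

# Direct invocation of the zero-argument tool; agents would call it the same way.
envelope = get_reinforcement_learning_from_human_feedback_rlhf_fact.invoke({})
print(envelope["claim"]["statement"])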