Year hub · 15 claims
AI/ML claims from 2020
Hand-verified AI/ML research claims from 2020. Each claim cites at least two primary sources and carries an HMAC-SHA256 signature.
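The signatures below can be reproduced with Python's standard `hmac` module. The canonical claim string, the signing key, and the 16-hex-character truncation are assumptions inferred from the signature fields on this page, not a published scheme; the key shown is a placeholder.

```python
import hashlib
import hmac

def sign_claim(claim_text: str, key: bytes) -> str:
    """Return a truncated hex HMAC-SHA256 tag for a claim string.

    The 16-hex-digit truncation matches the signature fields shown
    on this page; the canonical claim format is an assumption.
    """
    digest = hmac.new(key, claim_text.encode("utf-8"), hashlib.sha256).hexdigest()
    return digest[:16]

# Placeholder key for illustration only; the hub's real key is not published.
tag = sign_claim(
    "GPT-3 introduced in paper: Language Models are Few-Shot Learners "
    "(Brown et al., 2020).",
    b"placeholder-key",
)
print(tag)  # 16 lowercase hex characters, deterministic for a given key
```

Verification is the same computation: recompute the tag over the claim text and compare it against the stored signature with `hmac.compare_digest`.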
AlphaFold 1 introduced in: Senior et al. 2020 — DeepMind protein structure prediction.
a77a8dd48941a53d · 2 sources · 100% confidence
AlphaFold 2 announced at CASP14 in November 2020; published in paper: Highly accurate protein structure prediction with AlphaFold (Jumper et al., 2021).
477bc19212addf9a · 2 sources · 100% confidence
DeepSpeed publicly released on: 2020-02-13 by Microsoft Research.
53cc193ef08fc5c0 · 2 sources · 100% confidence
Denoising Diffusion Probabilistic Models (DDPM) introduced in paper: Denoising Diffusion Probabilistic Models (Ho, Jain, Abbeel, 2020).
e700f81fff6f38c7 · 2 sources · 100% confidence
ELECTRA introduced in paper: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (Clark et al., 2020).
2f9c79357e9d4da9 · 2 sources · 100% confidence
GPT-3 parameter count: 175 billion (175,000,000,000).
1ca2cc2864dfb376 · 2 sources · 100% confidence
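The 175B figure can be roughly rederived from the architecture reported in Brown et al., 2020 (96 layers, model dimension 12288, 50257-token vocabulary). The 12·L·d² estimate for transformer-block weights is a standard back-of-envelope approximation, not a formula from this page:

```python
# Back-of-envelope parameter count for GPT-3 175B,
# using architecture figures from Brown et al. (2020).
n_layers = 96
d_model = 12288
vocab_size = 50257

# Per transformer block: ~4*d^2 for attention projections (Q, K, V, output)
# plus ~8*d^2 for the two MLP matrices (d -> 4d -> d), i.e. ~12*d^2 weights.
block_params = 12 * d_model ** 2
embedding_params = vocab_size * d_model  # token embedding matrix

total = n_layers * block_params + embedding_params
print(f"{total / 1e9:.1f}B")  # prints 174.6B, close to the reported 175B
```

The remaining gap to 175B is covered by terms the estimate ignores (positional embeddings, biases, layer norms).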
GPT-3 introduced in paper: Language Models are Few-Shot Learners (Brown et al., 2020).
7d3e6a39b1656571 · 2 sources · 100% confidence
Haystack publicly released on: 2020-04 by deepset GmbH.
6218cda83723327f · 2 sources · 100% confidence
Hugging Face Hub publicly released on: 2020-09 — model + dataset sharing platform.
9ef57203f2dd34ec · 2 sources · 100% confidence
Longformer introduced in paper: Longformer: The Long-Document Transformer (Beltagy, Peters, Cohan, 2020).
c3d2ec81d9faf837 · 2 sources · 100% confidence
MMLU benchmark introduced in paper: Measuring Massive Multitask Language Understanding (Hendrycks et al., 2020).
428d754e7c651be6 · 2 sources · 100% confidence
Reformer introduced in paper: Reformer: The Efficient Transformer (Kitaev, Kaiser, Levskaya, 2020).
76f7f00e79bc18c8 · 2 sources · 100% confidence
Retrieval-Augmented Generation (RAG) introduced in paper: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Lewis et al., 2020).
d15057ced937a103 · 2 sources · 100% confidence
The Pile dataset released on: 2020-12-31.
4aef1422b96df26c · 2 sources · 100% confidence
Vision Transformer (ViT) introduced in paper: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Dosovitskiy et al., 2020).
d3681b0981e0b700 · 2 sources · 100% confidence