SourceScore

Topic hub · 19 claims

Agent frameworks — orchestration libraries for LLM apps

Frameworks that orchestrate LLMs in multi-step agent pipelines. Each picks different defaults for tool-use, memory, retrieval, and observability.

Why frameworks emerged

By mid-2022 the agent loop pattern (model emits a tool call, the runtime executes it, the model receives the result, repeat) was clearly the production shape. Writing it from scratch for each project produced inconsistent error handling, retries, and observability. Frameworks like LangChain (October 2022) and LlamaIndex (November 2022) emerged within weeks of each other to standardize it.

The current landscape

As of 2026, LangChain (orchestration breadth) and LlamaIndex (retrieval-first RAG) dominate Python. DSPy (Stanford) offers programs-not-prompts. Pydantic AI brings type safety. The OpenAI Agents SDK and Anthropic SDK are vendor-native. The Vercel AI SDK owns Next.js. Each has a different mental model; pick by archetype, audience, and commitment level.

The cross-vendor convergence

Anthropic's Model Context Protocol (November 2024) is the cross-vendor standard for tool exposure, adopted by OpenAI and most major frameworks within ~6 months of release. The framework count may eventually drop as MCP absorbs per-vendor SDKs, but as of 2026 the seven-framework landscape is what production developers face.
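MCP runs over JSON-RPC 2.0, with tool discovery and tool invocation as distinct methods. A rough sketch of the two message shapes, where the method names come from the MCP spec but the `get_weather` tool and its arguments are hypothetical:

```python
import json

# Illustrative MCP-style JSON-RPC 2.0 messages. The method names
# "tools/list" and "tools/call" are defined by the MCP spec; the
# "get_weather" tool and its arguments are invented for illustration.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

print(json.dumps(call_request, indent=2))
```

Because the wire format is the same regardless of vendor, a tool server written once can be discovered and called by any MCP-speaking client.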

Defined terms (3)

Agent framework
A library that orchestrates LLM tool-use loops, retrieval, memory, and observability. Examples: LangChain, LlamaIndex, DSPy.
Tool-use loop
The multi-turn pattern: model emits tool call, runtime executes tool, model receives result, model decides next step or final answer.
Programs-not-prompts
DSPy's paradigm: write structured programs (modules + signatures) whose prompts and few-shot examples are optimized automatically, rather than hand-writing prompts.
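The tool-use loop defined above can be sketched in framework-free Python. The model here is a hard-coded stub (an assumption, not any real API), but the control flow matches the definition: emit tool call, execute, feed result back, decide next step or final answer.

```python
# Minimal tool-use loop sketch. `call_model` is a stub standing in for a
# real LLM API; tools are plain Python functions in a registry.

def add(a, b):
    return a + b

TOOLS = {"add": add}

def call_model(messages):
    # Stub: a real model would decide whether to call a tool or answer.
    # Here we hard-code one tool call, then a final answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "add", "args": {"a": 2, "b": 3}}
    result = next(m for m in messages if m["role"] == "tool")["content"]
    return {"type": "final", "content": f"The sum is {result}"}

def agent_loop(user_input, max_turns=5):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = call_model(messages)
        if reply["type"] == "final":      # model decides it is done
            return reply["content"]
        tool = TOOLS[reply["name"]]       # model emitted a tool call
        result = tool(**reply["args"])    # runtime executes the tool
        messages.append({"role": "tool", "content": result})  # model sees result
    raise RuntimeError("agent did not finish within max_turns")

print(agent_loop("What is 2 + 3?"))  # -> The sum is 5
```

Frameworks wrap exactly this loop with the error handling, retries, and observability that ad-hoc versions leave out.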

All claims in this topic (19)

Related

Framework integrations