Building Effective Agents

post

Anthropic · December 2024

  • Argues the most effective agent architectures are augmented LLMs with simple tool loops, not multi-agent frameworks
  • Distinguishes “workflows” (predetermined tool orchestration) from “agents” (model-directed tool use) — both reduce to tool loops at different autonomy levels
  • Recommends starting with the simplest implementation and adding complexity only when measurably needed
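The "augmented LLM with a simple tool loop" the post recommends can be sketched in a few lines. Everything here (`call_model`, the tool registry, the message shapes) is a hypothetical stand-in, not any vendor's API:

```python
# Minimal sketch of a single-agent tool loop: call the model, execute any
# requested tool, feed the result back, repeat until the model stops.
# `call_model` is a toy stand-in for a real LLM call.

def call_model(messages):
    # Stand-in policy: request one tool call, then answer once a tool
    # result is present in the conversation.
    if not any(m["role"] == "tool" for m in messages):
        return {"stop": False, "tool": "add", "args": {"a": 2, "b": 3}}
    return {"stop": True, "text": "The sum is 5."}

TOOLS = {"add": lambda a, b: a + b}  # the "augmentation": plain functions

def agent_loop(user_prompt, max_steps=5):
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["stop"]:
            return reply["text"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "max steps reached"
```

The post's workflow/agent distinction maps onto who chooses the next tool: a workflow hard-codes the sequence, an agent lets `call_model` decide each iteration.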

Model Context Protocol (MCP)

protocol

Anthropic · November 2024

  • Open protocol for connecting AI assistants to external data sources and tools through a standardized JSON-RPC interface
  • Servers expose tools, resources, and prompts; clients (LLMs) discover and invoke them — the AI equivalent of USB-C for context
  • Keeps tool integration composable: each server is a single-purpose process, orchestrated by the model’s own tool loop

ReAct: Synergizing Reasoning and Acting in Language Models

paper

arXiv · October 2022

  • Interleaves chain-of-thought reasoning traces with concrete actions in an observe-think-act loop
  • Outperforms pure reasoning (chain-of-thought) and pure acting (action-only) on knowledge-intensive tasks by grounding thoughts in tool outputs
  • Foundational pattern behind most modern agent frameworks — the shell-like read-eval-print loop applied to LLMs
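The observe-think-act loop can be sketched as a toy interpreter. The thought policy and `lookup` tool below are hypothetical stand-ins for the model and a real search API:

```python
# Toy ReAct loop: alternate Thought / Action, grounding each step in an
# Observation returned by a tool.

def lookup(entity):
    # Stand-in search tool with a one-entry knowledge base.
    facts = {"Colorado orogeny": "mountain-building in Colorado and nearby areas"}
    return facts.get(entity, "no result")

def react(question, max_steps=3):
    trace = []
    for _ in range(max_steps):
        trace.append(f"Thought: I should look up '{question}'.")
        trace.append(f"Action: lookup[{question}]")
        observation = lookup(question)
        trace.append(f"Observation: {observation}")
        if observation != "no result":
            trace.append(f"Answer: {observation}")
            return trace
    trace.append("Answer: unknown")
    return trace
```

The grounding claim in the paper corresponds to the `Observation` lines: each subsequent thought conditions on real tool output rather than on the model's unverified recall.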

Toolformer: Language Models Can Teach Themselves to Use Tools

paper

arXiv · February 2023

  • Demonstrates that language models can learn when and how to call external tools (calculator, search, calendar) through self-supervised training
  • The model inserts API calls into its own text generation when doing so reduces perplexity — tool use emerges from utility, not instruction
  • Shows that tool augmentation is a natural extension of next-token prediction, not a bolted-on capability
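The paper's filtering criterion can be sketched as: keep an inserted API call only if conditioning on its result lowers the loss on the continuation by at least a threshold. The `loss` function here is a toy proxy, not a real LM loss:

```python
# Sketch of Toolformer-style filtering. A candidate API call survives only
# if its result makes the following text easier to predict.

def loss(context, continuation):
    # Toy proxy for LM loss: lower when continuation tokens appear in context.
    hits = sum(1 for tok in continuation.split() if tok in context)
    return 1.0 / (1 + hits)

def keep_api_call(prefix, api_call, api_result, continuation, tau=0.1):
    base = loss(prefix, continuation)
    augmented = loss(prefix + f" [{api_call} -> {api_result}]", continuation)
    return (base - augmented) >= tau  # keep only if the call helps enough
```

This is the sense in which "tool use emerges from utility": the training signal is purely whether the call reduced prediction error, with no instruction to use tools.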

LangChain

framework

GitHub · October 2022

  • Framework for composing LLM calls with tools, memory, and retrieval into multi-step chains and agents
  • Popularized the “chain” abstraction — sequential LLM calls where each step’s output feeds the next — and the “agent” pattern with dynamic tool selection
  • Useful as a reference for the complexity that emerges when tool loops scale; supports the shell thesis by showing what happens without simplicity constraints
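The chain abstraction in miniature: each step maps text to text, and steps compose sequentially. This is a generic sketch of the idea, not LangChain's actual API; the lambda steps stand in for LLM calls:

```python
# The "chain" pattern: sequential composition where each step's output
# feeds the next step's input.

def chain(*steps):
    def run(text):
        for step in steps:
            text = step(text)  # output of one step becomes input to the next
        return text
    return run

summarize = lambda t: t.split(".")[0] + "."  # stand-in for an LLM call
shout = lambda t: t.upper()                  # stand-in for another LLM call
pipeline = chain(summarize, shout)
```

The agent pattern replaces the fixed `steps` tuple with a model that picks the next step at runtime, which is where the framework's complexity accumulates.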

Anthropic Tool Use Documentation

docs

Anthropic Docs · 2024

  • Reference for Claude’s native tool-use interface: define tools as JSON schemas, the model emits structured tool_use blocks, you execute and return results
  • The interaction pattern is a synchronous tool loop — exactly the shell paradigm of prompt → command → output → prompt
  • Supports forced tool use, parallel tool calls, and streaming, showing how the simple loop extends without changing its fundamental shape
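The shape of one loop iteration, simplified (see the docs for the full schemas): a tool defined as a JSON Schema, a `tool_use` block the model emits, and the `tool_result` you send back. The tool, its output, and the block id here are illustrative:

```python
# One turn of Claude's tool loop, with simplified payload shapes.

weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Hypothetical model output: a content block requesting a tool call.
tool_use = {"type": "tool_use", "id": "toolu_01", "name": "get_weather",
            "input": {"city": "Paris"}}

def execute(block):
    # Run the requested tool and wrap its output as a tool_result,
    # echoing the block id so the model can match result to request.
    result = f"Weather in {block['input']['city']}: 18C"  # stand-in execution
    return {"type": "tool_result", "tool_use_id": block["id"], "content": result}
```

Parallel tool calls just mean several `tool_use` blocks in one reply, each answered with its own `tool_result` — the loop's shape is unchanged.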

Taste Is Not a Moat

post

sshh.io · 2026

  • Argues that taste is “alpha” (a decaying edge) not a “moat” — as AI baselines improve every few months, individual judgment only matters relative to what the tools do by default
  • Reframes the human role as “taste extractor”: articulating tacit preferences so tool loops can operationalize them, which is exactly the shell pattern of encoding intent into composable commands
  • Proposes concrete extraction techniques (A/B interviews, ghost writing, external reviews) that all reduce to the same structure — a human-in-the-loop refining outputs through iterative feedback cycles