Competitive Agentic Forking

Competitive Agentic Forking is a software development workflow pattern where specific tasks are “forked” to multiple independent AI agents or models. Instead of relying on a single agent, the system spawns parallel competitors that attempt the same task. Their outputs are then evaluated, compared, and either selected or merged. This approach brings the competitive evaluation model popularized by lmarena.ai/leaderboard directly into development workflows—not as external benchmarking, but as native process integration. ...
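The fork-evaluate-select step can be sketched as follows. This is a minimal illustration, not a real agent API: `run_agent` and `score_output` are hypothetical stand-ins for actual model calls and an actual evaluation rubric.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(agent_name: str, task: str) -> str:
    # Hypothetical: a real implementation would call a model or agent here.
    return f"{agent_name} solution for: {task}"

def score_output(output: str) -> float:
    # Placeholder rubric: a real evaluator would compare quality, not length.
    return float(len(output))

def fork_task(task: str, agents: list[str]) -> str:
    """Fork one task to several competing agents, then select the winner."""
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda a: run_agent(a, task), agents))
    return max(outputs, key=score_output)

winner = fork_task("add retry logic", ["agent-a", "agent-bb"])
```

A merge step, where parts of several outputs are combined rather than one selected outright, would replace `max` with a synthesis pass over all outputs.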

Compounding Engineering Loop

Dan Shipper (Every) describes compounding engineering as an AI-native way of building software where each shipped feature makes the next feature easier, not harder.

The loop: Plan → Delegate → Assess → Codify. (Every’s write-up also phrases it as Plan → Work → Review → Compound; “Codify” is the “Compound” step.)

Core Idea
Treat process knowledge as a first-class artifact. The code is not the only output: your prompts, templates, checks, and conventions should improve with every iteration. ...

Editorial Scoring as Code

Encode editorial judgment as an explicit scoring rubric so prioritization is consistent, inspectable, and tunable.

Geoffrey Huntley

Australian software engineer and writer; known for the “Ralph” loop framing and for public arguments about agentic coding, context hygiene, and identity in software engineering.

Interview-Driven Specification

A two-phase approach for building large features: specification through structured Q&A, followed by execution in a separate session.

Pattern
Phase 1: Spec refinement. Start with a minimal spec or prompt, then ask the AI to interview you using interactive questioning to expand and refine the spec.
Phase 2: Execution. Create a new session and implement the refined spec.

Why separate phases
- Clarity of intent: specification mode focuses on decisions, not code.
- Deeper exploration: 40+ questions can surface edge cases, UX details, and trade-offs.
- Control: you shape the spec through answers, not by reviewing generated code.
- Clean context: the execution session starts with complete requirements, not discovery artifacts.

Example prompt (Phase 1)
read this @SPEC.md and interview me in detail using the AskUserQuestionTool about literally anything: technical implementation, UI & UX, concerns, tradeoffs, etc. but make sure the questions are not obvious be very in-depth and continue interviewing me continually until it's complete, then write the spec to the file

When to use
- Building large features or new projects
- When requirements are incomplete but you know the rough shape
- When you want systematic exploration of edge cases and trade-offs
- When you prefer decision-making over code review as the primary feedback loop

Trade-offs
- Overhead: more upfront time before code is written
- Question fatigue: 40+ questions require sustained attention
- Assumes good questions: the AI must ask non-obvious, useful questions
- Two sessions: context doesn’t flow automatically from spec to execution

Reference
Thariq Shihipar (@trq212), December 28, 2024: ...
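The two phases above can be sketched as two separate functions, mirroring two separate sessions. This is only an illustration of the shape of the pattern: `refine_spec`, `execute_spec`, the canned question list, and `answer_fn` are all hypothetical, since the real interview is driven by an agent generating non-obvious questions.

```python
def refine_spec(initial_spec: str, answer_fn) -> str:
    """Phase 1: expand a minimal spec through structured Q&A."""
    # Hypothetical questions; a real agent generates them interactively.
    questions = [
        "What happens on network failure?",
        "Who are the primary users?",
    ]
    decisions = [f"{q} -> {answer_fn(q)}" for q in questions]
    return initial_spec + "\n" + "\n".join(decisions)

def execute_spec(refined_spec: str) -> str:
    """Phase 2: a fresh session receives only the refined spec."""
    return f"Implementing:\n{refined_spec}"

spec = refine_spec("Build a sync service", lambda q: "retry with backoff")
result = execute_spec(spec)
```

Note that `execute_spec` takes only the refined spec as input: the discovery artifacts from Phase 1 (the raw Q&A) deliberately do not flow into the execution session.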

Landing the Plane

Landing the Plane is a session termination protocol for AI agents, formalized by Steve Yegge (creator of the Beads framework). It addresses the “amnesia” problem where agents lose context between sessions and leave “crap” (temporary artifacts) behind.

The Problem
Without a disciplined shutdown process:
- Context loss: the next session starts fresh (“50 First Dates”), leading to repetitive re-explanation.
- Repo pollution: abandoned git branches, stashes, and debugging artifacts clutter the workspace.
- State drift: the issue tracker (or mental model) falls out of sync with the actual code state.

The Protocol
When a user says “Land the plane,” the agent executes a scripted checklist to safely close the session: ...
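The source elides the exact checklist, but the three failure modes it names suggest its shape. A minimal sketch, with entirely hypothetical keys and actions (not the Beads schema), that maps each failure mode to a cleanup action:

```python
def land_the_plane(session: dict) -> list[str]:
    """Build a shutdown checklist from a session-state snapshot.

    Keys and actions are illustrative, not an actual protocol spec.
    """
    actions = []
    if session.get("uncommitted_changes"):
        actions.append("commit or stash work-in-progress")   # repo pollution
    for branch in session.get("stale_branches", []):
        actions.append(f"delete abandoned branch {branch}")  # repo pollution
    if session.get("tracker_out_of_sync"):
        actions.append("sync issue tracker with code state") # state drift
    actions.append("write handoff notes for the next session")  # context loss
    return actions

todo = land_the_plane({
    "uncommitted_changes": True,
    "stale_branches": ["debug-tmp"],
    "tracker_out_of_sync": True,
})
```

The handoff note is unconditional: it is the artifact the next session reads to avoid the “50 First Dates” restart.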

Preflight the Plane

Preflight is a session initiation checklist for AI-assisted / agentic work. It is the counterpart to Landing the Plane (session shutdown): where landing focuses on cleanup and handoff, preflight focuses on scoping, context selection, and verification setup so the session doesn’t drift.

When to run it
Run preflight when:
- you start a new chat (especially after a reset)
- you resume after time away
- you’re about to start an autonomous “loop” (background run / repeated iterations)
- you notice context pollution and decide to restart cleanly

The Problem
Without a preflight: ...
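The three concerns named above (scoping, context selection, verification setup) can be sketched as a session brief assembled before any work starts. The field names and checks are illustrative assumptions; the source elides the actual checklist.

```python
def preflight(goal: str, context_files: list[str], verify_cmd: str) -> dict:
    """Assemble a session brief and flag gaps before work begins.

    Fields are hypothetical, mirroring scoping / context / verification.
    """
    problems = []
    if not goal:
        problems.append("no scoped goal: session will drift")
    if not context_files:
        problems.append("no context selected: agent starts blind")
    if not verify_cmd:
        problems.append("no verification step: errors accumulate silently")
    return {"goal": goal, "context": context_files,
            "verify": verify_cmd, "problems": problems}

brief = preflight("fix flaky auth test", ["auth/test_login.py"], "pytest -x")
```

An empty `problems` list is the signal to proceed; anything else means the session should not start until the gap is filled.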

Separate Discovery from Delivery

Keep exploration and decision-making separate from execution to reduce scope creep and improve quality.

Skill Extraction Loop

The skill extraction loop is the practice of turning a repeatedly used workflow into a reusable, delegatable skill (instructions + tooling + verification) so agents can run it with less supervision and less variance. You’ll also see this described as a skill-capture loop or “solve once → codify → reuse”. The goal is not “more automation”. The goal is more reliable delegation: fewer reminders, fewer one-off explanations, and tighter feedback loops. ...
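The “solve once → codify → reuse” cycle, and the skill triple of instructions + tooling + verification, can be sketched as a small registry. The `Skill` structure and the `codify`/`delegate` names are hypothetical, not a specific framework’s schema.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """A captured workflow: instructions + tooling + verification."""
    name: str
    instructions: str
    run: Callable[[str], str]       # tooling: executes the workflow
    verify: Callable[[str], bool]   # verification: checks the result

REGISTRY: dict[str, Skill] = {}

def codify(skill: Skill) -> None:
    """Solve once -> codify: store the skill for later reuse."""
    REGISTRY[skill.name] = skill

def delegate(name: str, task: str) -> str:
    """Reuse: run a codified skill, gated by its own verification."""
    skill = REGISTRY[name]
    result = skill.run(task)
    if not skill.verify(result):
        raise ValueError(f"skill {name!r} failed verification")
    return result

codify(Skill(
    name="changelog",
    instructions="Summarize merged changes into CHANGELOG format.",
    run=lambda task: f"## Changes\n- {task}",
    verify=lambda out: out.startswith("## Changes"),
))
```

Bundling verification with the skill is what reduces variance: every delegated run is checked the same way, without the operator re-explaining the acceptance criteria.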

Spec Branch Flow

A git branching strategy using a parallel ‘spec’ branch to iterate on messy specifications without polluting the main codebase, enabling seamless context injection for AI agents.