Competitive Agentic Forking

Competitive Agentic Forking is a software development workflow pattern where specific tasks are “forked” to multiple independent AI agents or models. Instead of relying on a single agent, the system spawns parallel competitors that attempt the same task. Their outputs are then evaluated, compared, and either selected or merged. This approach brings the competitive evaluation model popularized by lmarena.ai/leaderboard directly into development workflows—not as external benchmarking, but as native process integration. ...
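The spawn-compare-select loop can be sketched as a small harness. This is a minimal illustration, not a prescribed implementation: the `agents`, the scoring function, and `competitive_fork` itself are all hypothetical stand-ins for real model or harness invocations.

```python
import concurrent.futures

# Hypothetical agent callables; in practice each would wrap a different
# model or harness attempting the same task prompt.
def run_fork(agent, task):
    """Run one competitor; returns (agent_name, output)."""
    return agent["name"], agent["solve"](task)

def competitive_fork(agents, task, score):
    """Fan the task out to all agents in parallel, then select the
    highest-scoring output (merging competitors is another option)."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda a: run_fork(a, task), agents))
    return max(results, key=lambda r: score(r[1]))

# Toy example: "agents" are plain functions; the scorer prefers shorter output.
agents = [
    {"name": "agent-a", "solve": lambda t: t.upper()},
    {"name": "agent-b", "solve": lambda t: t[:3]},
]
winner = competitive_fork(agents, "refactor", score=lambda out: -len(out))
```

In a real workflow, `score` would be where the lmarena-style evaluation lives: an automated judge, a test suite, or a human pairwise comparison.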

Discovery Rituals for AI Tools

In the rapidly shifting landscape of AI-assisted development, your current stack is always decaying. To remain at the frontier, you must transition from “using tools” to “maintaining a discovery ritual.” This ritual is the engine behind the Tooling Chaos Monkey: it provides the candidates for your regular tool swaps.

The Strategy: Models vs. Harnesses

To do effective discovery, you must distinguish between the Model (the brain) and the Harness (the body/agent/IDE).

Harnesses: Tools like Claude Code, Cline, Roo Code, OpenHands, or Droid. They provide the tool-use logic, environment access, and UI.
Models: The underlying LLMs (Claude 3.5, GPT-4o, Gemini 1.5).

A breakthrough can happen in either. A great model can fail in a poor harness, and a clever harness can make a mediocre model shine. ...
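Because a breakthrough can come from either axis, a discovery ritual effectively walks a model-by-harness grid. A minimal sketch of that grid, using the names from the text (the pairing logic itself is an illustrative assumption):

```python
import itertools

# Harnesses and models named in the text; any pairing below is
# a candidate to trial, since either axis can hide a breakthrough.
harnesses = ["Claude Code", "Cline", "Roo Code", "OpenHands", "Droid"]
models = ["Claude 3.5", "GPT-4o", "Gemini 1.5"]

# Every (harness, model) pair is a distinct candidate for a trial run.
candidates = list(itertools.product(harnesses, models))
```

Rotating through a few cells of this grid per cycle keeps the discovery ritual bounded while still covering both axes over time.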

Interview-Driven Specification

A two-phase approach for building large features: specification through structured Q&A, followed by execution in a separate session.

Pattern

Phase 1: Spec refinement. Start with a minimal spec or prompt. Ask the AI to interview you, using interactive questioning to expand and refine the spec.
Phase 2: Execution. Create a new session and implement the refined spec.

Why separate phases

Clarity of intent: Specification mode focuses on decisions, not code.
Deeper exploration: 40+ questions can surface edge cases, UX details, and trade-offs.
Control: You shape the spec through answers, not by reviewing generated code.
Clean context: The execution session starts with complete requirements, not discovery artifacts.

Example prompt (Phase 1)

read this @SPEC.md and interview me in detail using the AskUserQuestionTool about literally anything: technical implementation, UI & UX, concerns, tradeoffs, etc. but make sure the questions are not obvious be very in-depth and continue interviewing me continually until it's complete, then write the spec to the file

When to use

Building large features or new projects
When requirements are incomplete but you know the rough shape
When you want systematic exploration of edge cases and trade-offs
When you prefer decision-making over code review as the primary feedback loop

Trade-offs

Overhead: More upfront time before code is written.
Question fatigue: 40+ questions require sustained attention.
Assumes good questions: The AI must ask non-obvious, useful questions.
Two sessions: Context doesn’t flow automatically from spec to execution.

Reference

Thariq Shihipar (@trq212), December 28, 2024: ...
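The Phase 1 loop above can be sketched as a simple refine-until-done cycle. `ask_model` and `ask_user` are hypothetical callables standing in for the LLM (e.g., via an interactive questioning tool) and the human; the spec format is an assumption.

```python
def interview_refine(spec, ask_model, ask_user, max_rounds=40):
    """Phase 1 sketch: keep asking until the model has no more questions,
    folding each answer back into the spec."""
    for _ in range(max_rounds):
        question = ask_model(spec)   # hypothetical LLM call
        if question is None:         # model considers the spec complete
            break
        answer = ask_user(question)  # human decision, not code review
        spec += f"\n- {question} -> {answer}"
    return spec  # write to SPEC.md, then implement in a fresh session

# Toy run with canned questions and answers:
pending = iter(["Offline support?", "Auth provider?"])
refined = interview_refine(
    "Feature: sync notes",
    ask_model=lambda s: next(pending, None),
    ask_user=lambda q: "yes" if "Offline" in q else "OAuth",
)
```

The key property is that the loop's output is a document, not code: the execution session starts from `refined` alone, which is what keeps its context clean.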

Semantic Zoom

Treating text as an elastic medium by explicitly directing an AI to “zoom out” (summarize/condense) to reduce context noise, or “zoom in” to detail specific areas.
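In practice this is a pair of prompt templates. A minimal sketch, where the two zoom directions and their exact wording are illustrative assumptions:

```python
# Illustrative zoom prompts; the levels and wording are assumptions,
# not a standard API.
ZOOM_PROMPTS = {
    "out": "Summarize the following text in 3 sentences, keeping only "
           "the decisions and open questions:\n\n{text}",
    "in": "Expand the section below with concrete details, edge cases, "
          "and examples, without changing its meaning:\n\n{text}",
}

def zoom(direction, text):
    """Build a zoom-out (condense) or zoom-in (elaborate) prompt."""
    return ZOOM_PROMPTS[direction].format(text=text)
```

Zooming out before handing context to an agent reduces noise; zooming in afterwards recovers detail only where it is needed.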

Tooling Chaos Monkey

The Concept

Applying the principles of Chaos Engineering to the developer toolchain. Just as Netflix’s Chaos Monkey randomly terminates instances to ensure system resilience, a Tooling Chaos Monkey involves intentionally and regularly disrupting your own development environment, disabling or swapping out key tools, to ensure workflow independence and adaptability.

Why Do It?

Prevent Lock-in: Deep reliance on a specific tool (e.g., a specific AI coding assistant’s proprietary features) creates a vulnerability. If that tool changes pricing, disappears, or degrades, your productivity shouldn’t crash.
Accelerate Learning: New tools often introduce new paradigms. Regularly switching (e.g., from VS Code to Zed, or Copilot to Claude Sonnet) forces you to learn new capabilities and keeps your “tool plasticity” high.
Validate Workflow Robustness: If your workflow breaks because you can’t use a specific button in a specific IDE, your workflow is fragile. True productivity comes from patterns that transcend tools.

Implementation: The “Break One Tool” Drill

Regularly (e.g., once a sprint or simply “on Tuesdays”), simulate a failure of a primary tool: ...
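The drill can be made mechanical with a few lines of scripting. This is a sketch under assumptions: the tool inventory is hypothetical, and the seeding exists only to make a given cycle's pick repeatable.

```python
import random

# Hypothetical toolchain inventory; adjust to your own stack.
TOOLS = ["AI assistant", "IDE", "debugger", "terminal multiplexer"]

def pick_drill(tools, seed=None):
    """Choose one tool to 'break' this cycle; work without it for a day
    and note every point where the workflow stalls."""
    rng = random.Random(seed)
    return rng.choice(tools)

victim = pick_drill(TOOLS, seed=42)  # seed makes this cycle's pick reproducible
```

Running this at the start of each sprint (or each Tuesday) removes the temptation to always "break" the tool you depend on least.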