Weekly YouTube Picks (2026-W09)

This week’s picks are about making coding agents operational: pick the right harness boundaries, then raise verification frequency so high-velocity diffs don’t turn into variance. The recurring move is to treat context and guardrails as first-class artifacts you can validate, version, and iterate on. Emerging Patterns for Coding with Generative AI — DevCon Fall 2025 (Lada Kesseler) — The durable shift is from “better prompting” to context management: capture decisions into reloadable knowledge docs, keep instructions tight to avoid context rot, and use specialist agents when focus matters. Two tactics worth stealing immediately: Semantic Zoom (zoom out, then drill in) and the “feedback flip” where you force a reviewer pass before you trust a diff — a concrete way to operationalize a checker. ...

February 28, 2026

Weekly YouTube Picks (2026-W08)

This week’s picks converge on a single constraint: code is cheap, but variance isn’t — so the work shifts to specs, feedback loops, and grounded evidence (in the repo and in production). AI-written code — agents as “infinite interns” (Armin Ronacher) — A useful reframing: agents make ecosystem friction measurable. When there are five competing ways to do packaging, typing, or routing, the agent doesn’t just get slower — it gets inconsistent. The practical response is boring but real: standardize the tool loop, bias toward lower abstraction when debugging cost matters, and invest in conventions that reduce Comprehension Debt. ...

February 22, 2026

Weekly YouTube Picks (2026-W07)

This week’s picks share a throughline: the shift from “vibe-driven” AI use toward deliberate engineering discipline — guardrailed loops, skills-as-docs, evals as acceptance criteria, and context loaded with intent. AI Updates Weekly — February 16, 2026 (Lev Selector) — “Skills-in-the-middle” reframes app development: move middle-layer logic out of hard-coded decision trees and into auditable Markdown skill files. The catch is governance: skills become a supply-chain problem — enable least-privilege, review what any skill can touch, and anticipate needing a “skills security officer” before long. The tagline says it all: “You are becoming a skills developer, not a software developer.” ...

February 15, 2026

Weekly YouTube Picks (2026-W06)

Fundamental Knowledge SWE’s in 2026 Must Have (Hiring Bar) — Geoffrey Huntley demystifies agents as just a “~300-line loop” wrapping basic primitives like read, write, and bash. The real skill in 2026 will be Context Hygiene: maintaining “one task per context window” to prevent degradation and showing restraint with tools, since every registered function consumes tokens and attention. Pi: Minimal Multi-Model Coding Agent CLI — Radical minimalism for coding agents. “Pi” eschews complex MCP protocols for a Tool by README pattern: enabling tools simply by pointing the agent at a script’s documentation. It adopts an explicit “YOLO mode” stance—relying on containers for safety rather than internal guardrails—and prioritizes durable markdown artifacts (plans, checklists) over ephemeral “plan mode” UIs. ...
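Huntley’s “~300-line loop” framing can be illustrated with a minimal sketch. This is not Pi’s or any real harness’s code; the model is stubbed out, the tool names (read, write, bash) follow the primitives named above, and everything else is hypothetical:

```python
# Illustrative sketch only: an "agent" as a plain loop over three primitive
# tools, with each tool result fed back into the conversation history.
# A real harness would call an LLM API where `model` is invoked.
import subprocess
from pathlib import Path

def tool_read(path: str) -> str:
    return Path(path).read_text()

def tool_write(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def tool_bash(cmd: str) -> str:
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

TOOLS = {"read": tool_read, "write": tool_write, "bash": tool_bash}

def agent_loop(model, task: str, max_steps: int = 10) -> str:
    """model(history) returns ("tool", name, args) or ("done", answer)."""
    history = [("user", task)]
    for _ in range(max_steps):
        action = model(history)
        if action[0] == "done":
            return action[1]
        _, name, args = action
        history.append(("tool", TOOLS[name](*args)))  # feed result back
    return "step budget exhausted"
```

The point of the sketch is how little machinery there is: the loop, the tool table, and the history are the whole harness, which is why the talk argues the scarce skill is context hygiene rather than harness internals.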

February 8, 2026

Weekly YouTube Picks (2026-W05)

10 tips to level up your ai-assisted coding — Aleksander Stensby at NDC Manchester 2025 provides a practical framework for “Compounding Engineering,” where you treat your AI assistant as a junior colleague. Key takeaways include active context management, maintaining repo-specific rule files, and using MCP to connect agents to real-world tools. The Developer Skills That Will Actually Survive AI — A high-level discussion with former GitHub CEO Thomas Dohmke and Guy Podjarny. They argue that as syntax becomes a commodity, the core engineering skill shifts to problem decomposition and system architecture. The conversation highlights the move toward agent-to-agent collaboration and why developers must remain the architects of value. ...

February 1, 2026

Context Hygiene

Context Hygiene is the disciplined practice of managing an LLM’s context window to maintain agent performance, accuracy, and efficiency. As context windows grow, the temptation is to “stuff” them, but practical experience shows that “context pollution” degrades reasoning capabilities. Core Principles 1. One Task Per Context Window Geoffrey Huntley emphasizes that developers should treat a context window like a small, precious resource (analogous to RAM in early computing). The rule is “one task per context window” (also formulated as Context is a Per-Feature Budget). ...
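The “restraint with tools” point (every registered function consumes tokens and attention) can be made concrete with a rough estimate. The schemas and the 4-chars-per-token heuristic below are illustrative assumptions, not measurements from any particular model:

```python
# Illustrative only: estimate how much context the registered tool schemas
# consume on every turn. 4 characters per token is a crude heuristic.
import json

def schema_token_cost(tool_schemas: list[dict]) -> int:
    return sum(len(json.dumps(s)) // 4 for s in tool_schemas)

# Hypothetical tool registrations, matching the read/write/bash primitives.
tools = [
    {"name": "read", "params": {"path": "string"}},
    {"name": "write", "params": {"path": "string", "content": "string"}},
    {"name": "bash", "params": {"cmd": "string"}},
]
# Since this cost is paid on every turn, register only the tools the
# current task actually needs.
```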

Context is a Per-Feature Budget

Context is a Per-Feature Budget is the operational principle that treats an LLM’s context window not as a persistent workspace, but as a finite, depleting resource allocated to a single unit of work (e.g., a feature, a bug fix, or a distinct task). The Core Concept Long-running chat sessions suffer from Context Decay. As a conversation grows, the model’s adherence to initial system instructions, quality bars, and specific constraints degrades. The model becomes “forgetful” or “lazy,” often silently dropping requirements (measured at ~40% drop rate in some experiments). ...
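One way to operationalize the principle is to model a session as a depleting budget and start a fresh one (carrying only a distilled summary forward) when it runs out. This is a hedged sketch; the class, the default budget, and the 4-chars-per-token estimate are all hypothetical:

```python
# Illustrative sketch: the context window as a finite per-feature budget.
# When the budget is spent, start a new session rather than letting one
# long chat decay.
class FeatureSession:
    def __init__(self, feature: str, token_budget: int = 50_000):
        self.feature = feature
        self.budget = token_budget
        self.messages: list[str] = []

    def add(self, text: str) -> bool:
        cost = len(text) // 4            # crude token estimate
        if cost > self.budget:
            return False                 # budget spent: time for a reset
        self.budget -= cost
        self.messages.append(text)
        return True

def fresh_session(old: FeatureSession, summary: str) -> FeatureSession:
    """Carry only a distilled summary forward, not the full history."""
    s = FeatureSession(old.feature)
    s.add(summary)
    return s
```

The design choice worth noting is that `fresh_session` takes an explicit summary: the distillation step is where decisions and constraints survive the reset, which is what the “reloadable knowledge docs” tactic above automates.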

Maker-Checker Pattern

A workflow where an AI “maker” proposes changes and a human “checker” verifies them with tests, review, and explicit approval.
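The pattern can be sketched as a loop with two gates: an automated one (tests must pass) and a human one (explicit approval). All function names here are hypothetical placeholders for your repo’s actual propose/test/commit machinery:

```python
# Hedged sketch of a maker-checker loop. The AI "maker" edits the working
# tree; the "checker" gate is the test suite plus explicit human sign-off.
import subprocess

def maker_checker(propose, test_cmd, approve, commit, revert,
                  max_rounds: int = 3) -> bool:
    for _ in range(max_rounds):
        propose()                        # maker: AI proposes a change
        tests_green = subprocess.run(test_cmd).returncode == 0
        if tests_green and approve():    # checker: tests AND human approval
            commit()
            return True
        revert()                         # reject the diff, try again
    return False
```

The key property is that neither gate alone suffices: a green test run without approval, or approval without a green run, still triggers `revert`.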

Semantic Zoom

Treating text as an elastic medium by explicitly directing an AI to “zoom out” (summarize/condense) to reduce context noise, or “zoom in” to detail specific areas.
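In practice this amounts to two prompt templates. The wording below is illustrative, not taken from the talk:

```python
# Illustrative Semantic Zoom prompts: "zoom out" compresses context into a
# summary; "zoom in" expands one named area of that summary on demand.
def zoom_out(text: str, max_bullets: int = 5) -> str:
    return (f"Summarize the following into at most {max_bullets} bullet "
            f"points, keeping decisions and open questions:\n\n{text}")

def zoom_in(summary: str, area: str) -> str:
    return (f"Given this summary:\n\n{summary}\n\n"
            f"Expand only the part about '{area}' with full detail.")
```

Used together, zoom_out keeps the working context small, and zoom_in restores detail only where the current task needs it.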