Weekly YouTube Picks (2026-W09)

This week’s picks are about making coding agents operational: pick the right harness boundaries, then raise verification frequency so high-velocity diffs don’t turn into variance. The recurring move is to treat context and guardrails as first-class artifacts you can validate, version, and iterate on.

Emerging Patterns for Coding with Generative AI — DevCon Fall 2025 (Lada Kesseler)

The durable shift is from “better prompting” to context management: capture decisions into reloadable knowledge docs, keep instructions tight to avoid context rot, and use specialist agents when focus matters. Two tactics worth stealing immediately: Semantic Zoom (zoom out, then drill in) and the “feedback flip,” where you force a reviewer pass before you trust a diff — a concrete way to operationalize a checker. ...

February 28, 2026

Agent Skills

Agent Skills is an open standard for packaging reusable agent capabilities (instructions + resources) in a folder that tools can discover and load on demand. The format was originally developed by Anthropic and released publicly as an open specification in October 2025 (announcement, engineering blog). Treat the spec as actively evolving: implementations and best practices may change as more clients adopt it.

Canonical definition: A simple, open format for giving agents new capabilities and expertise. ...
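To make "a folder that tools can discover and load on demand" concrete, here is a minimal Python sketch of skill discovery. It assumes the spec's basic shape (a `SKILL.md` file per skill folder, with YAML frontmatter carrying `name` and `description`); the loader itself, `discover_skills`, and the example skill are illustrative, not a reference implementation.

```python
import os
import tempfile

# Hypothetical example skill: a folder's SKILL.md with YAML frontmatter.
SKILL_MD = """---
name: commit-helper
description: Drafts commit messages from a staged diff.
---
# Commit Helper

Instructions the agent loads on demand go here.
"""

def parse_frontmatter(text):
    """Extract simple key: value pairs from a YAML frontmatter block."""
    if not text.startswith("---"):
        return {}
    _, block, _body = text.split("---", 2)
    meta = {}
    for line in block.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

def discover_skills(root):
    """Find every folder under `root` containing a SKILL.md; return metadata."""
    skills = []
    for dirpath, _dirs, files in os.walk(root):
        if "SKILL.md" in files:
            with open(os.path.join(dirpath, "SKILL.md")) as f:
                skills.append(parse_frontmatter(f.read()))
    return skills

# Demo: write one skill folder into a temp dir and discover it.
with tempfile.TemporaryDirectory() as root:
    skill_dir = os.path.join(root, "commit-helper")
    os.makedirs(skill_dir)
    with open(os.path.join(skill_dir, "SKILL.md"), "w") as f:
        f.write(SKILL_MD)
    found = discover_skills(root)
    print(found[0]["name"])  # -> commit-helper
```

The point of the folder convention is exactly this: discovery is a filesystem walk, and the frontmatter is enough for a client to decide whether to load the full instructions into context.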

Agentic Harness

An agentic harness is the wrapper around an LLM that turns it into an agent by running it in a loop, giving it tools, and managing state.

Working Definition

Agentic harness = controller loop + tool runtime + context/state management + guardrails + (optional) evaluation.

In practice, the harness is the part that:

- Calls the model repeatedly (plan → act → observe → update → repeat)
- Executes tools (shell, files, web, APIs, repo ops) on the model’s behalf
- Manages context (what to include, summarize, persist, retrieve)
- Enforces constraints (step limits, timeouts, budgets, sandboxing, policy checks)
- Optionally adds evaluation (tests, graders, benchmarks, self-checks)

Two Common Meanings

- Runtime harness - used to build/operate real agents (coding agents, research agents, ops agents).
- Evaluation harness - used to run task suites and measure performance across models/variants.

Why It Matters

Most “agent capability” comes from the harness: ...
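The controller loop above can be sketched in a few lines. This is a toy, assuming a scripted stand-in for the model (`scripted_model`) and a two-tool runtime; the step limit is the only guardrail shown, but timeouts, budgets, and sandboxing would slot into the same loop.

```python
# Minimal runtime-harness sketch: controller loop + tool runtime + a step-limit
# guardrail. The "model" is a stub that emits tool calls; in a real harness it
# would be an LLM API call. All names here are illustrative.

TOOLS = {
    "add": lambda a, b: a + b,
    "done": lambda answer: answer,
}

def scripted_model(history):
    """Stand-in for the LLM: plan the next action from the transcript."""
    if not history:
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    last = history[-1]["observation"]
    return {"tool": "done", "args": {"answer": last}}

def run_harness(model, max_steps=8):
    """plan -> act -> observe -> update, under a hard step budget."""
    history = []  # the context/state the harness manages between model calls
    for _ in range(max_steps):
        action = model(history)             # plan
        tool = TOOLS[action["tool"]]
        result = tool(**action["args"])     # act (tool runtime)
        history.append({"action": action, "observation": result})  # observe/update
        if action["tool"] == "done":
            return result
    raise RuntimeError("step budget exhausted")  # guardrail trips

answer = run_harness(scripted_model)
print(answer)  # -> 5
```

Swapping `scripted_model` for a real model call, and `TOOLS` for shell/file/repo tools, gives the runtime-harness meaning; wrapping `run_harness` in a task suite with graders gives the evaluation-harness meaning.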