AI coding loops have a recurring failure mode. The tool edits code, guesses which verifier to run, gets a noisy failure, overcorrects, and guesses again. There is no stable controller, so it thrashes.
vary var is the controller. Run it after every edit, not just at the end. It is not a replacement for vary check and it is not a final gate you hit once when the code "feels done". It is the thing an agent should keep polling while it works.
Run vary var --json, read the current stage, blockers, recommendation, and nextCommand, make the smallest change that addresses the reported problem, and run vary var --json again. Keep going until the outcome is complete.
This is what VAR's session model is for. It remembers what already passed, resumes from the first incomplete stage when the tree is unchanged, and resets when new edits change the fingerprint. The agent does not have to track the validation plan itself.
AI tools are good at local edits. They are bad at deciding, on every turn and from scratch, whether the next move is a structural fix, a narrow test repair, a broader test run, a mutation pass, or review closeout. vary var makes that decision deterministic.
Rather than teaching every agent a custom verification ladder and a pile of repo-specific heuristics, give it one protocol:
vary var --json
The command reports what changed, what stage the repo is in, what is blocked, what exact command should run next, and whether the loop is making progress.
Use JSON mode. The human report is for humans; the JSON is the machine contract.
vary var --json
The fields that matter:
| Field | Why the agent should care |
|---|---|
| stage | Current pipeline position |
| outcome | Current terminal status |
| blockers | What must be fixed next |
| recommendation | Short next-step summary |
| nextCommand | Exact command to run |
| noProgress | Whether the loop is stuck |
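For orientation, a parsed report might look roughly like the sketch below. Only the field names come from the table above; every value is an illustrative placeholder, not VAR's actual vocabulary for stages or outcomes.

```python
# Illustrative only: a parsed `vary var --json` report using the documented
# field names. The values are placeholders, not VAR's real stage/outcome terms.
report = {
    "stage": "<current pipeline position>",
    "outcome": "<terminal status; 'complete' once the loop is done>",
    "blockers": ["<what must be fixed next>"],
    "recommendation": "<short next-step summary>",
    "nextCommand": "<exact command to run>",
    "noProgress": False,  # True when the loop has stopped making headway
}
```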
Keep the policy narrow. If outcome is complete, stop. If there are blockers, fix the blockers before trying to advance. If nextCommand is set, run it instead of guessing. After edits, re-run vary var --json. If noProgress flips to true, change strategy; do not brute-force more of the same repair.
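That policy is small enough to encode directly. The sketch below is one way an agent harness might express it, assuming the report has already been parsed into a dict with the documented fields; the action labels are this sketch's own convention, not part of VAR.

```python
def next_action(report: dict) -> tuple[str, object]:
    """Map a parsed `vary var --json` report to the agent's next move.

    The returned action labels ("stop", "change_strategy", "fix", "run",
    "repoll") are this sketch's own convention, not VAR output.
    """
    if report.get("outcome") == "complete":
        return ("stop", None)                  # done: exit the loop
    if report.get("noProgress"):
        # Checked early so a stuck loop changes strategy instead of
        # brute-forcing more of the same repair.
        return ("change_strategy", None)
    if report.get("blockers"):
        return ("fix", report["blockers"])     # fix blockers before advancing
    if report.get("nextCommand"):
        return ("run", report["nextCommand"])  # run exactly what VAR asks for
    return ("repoll", None)                    # otherwise just re-run vary var --json
```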
The steady-state call is still:
vary var --json
VAR will sometimes recommend something more specific, like a mutation pass or review closeout. Run that when it says to, then come back to vary var. Do not maintain a separate internal idea of whether the tree is "done enough for tests" or "ready for mutation". VAR already decides that.
The shape looks like this:
edit code
-> vary var --json
-> fix what VAR says is wrong
-> vary var --json
-> fix what VAR says is wrong
-> vary var --json
-> complete
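As a harness, that shape is only a few lines. The sketch below assumes the documented field names and shells out to the real command; `repair` is a stand-in for whatever edit step the agent performs, and the iteration cap is just a safety net for the example.

```python
import json
import subprocess

def var_report() -> dict:
    """Run `vary var --json` and parse the machine-readable report."""
    out = subprocess.run(
        ["vary", "var", "--json"],
        capture_output=True, text=True, check=False,
    )
    return json.loads(out.stdout)

def validation_loop(repair, max_iters: int = 20) -> bool:
    """Poll vary var after every edit until the outcome is complete.

    `repair(report)` is a placeholder for the agent's edit step: it reads
    blockers, recommendation, nextCommand, and noProgress from the report
    and makes the smallest change that addresses them.
    """
    for _ in range(max_iters):
        report = var_report()
        if report.get("outcome") == "complete":
            return True        # validation finished; stop
        repair(report)         # fix what VAR says is wrong, then re-poll
    return False               # no completion within the budget; hand back to a human
```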
vary var does not rewrite your source code for you. It is an orchestrator and a decision-maker.
Its job is to answer what validation work is left, what should run next, what failed, and whether the last edit helped. The agent's job is to read those answers, change code or tests, and re-run VAR. That split is the whole point: the command stays deterministic and auditable, and the model stays responsible for the repair.
| Good loop | Bad loop |
|---|---|
| VAR decides. The agent repairs. VAR re-evaluates. | The agent guesses. The verifier emits noise. The agent guesses again. |
An agent that polls vary var stops improvising the verification sequence. It gets the same answer to "what now?" every iteration, and that answer is grounded in the state of the tree rather than the model's memory of what it did last turn.
| Page | Topic |
|---|---|
| Verification ladder | Escalation order |
| Confidence at scale | Improving generated code |
| VAR overview | Session-based loop |
| VAR output modes | JSON protocol |