How AI tools should use vary var

AI coding loops have a recurring failure mode. The tool edits code, guesses which verifier to run, gets a noisy failure, overcorrects, and guesses again. There is no stable controller, so it thrashes.

vary var is the controller. Run it after every edit, not just at the end. It is not a replacement for vary check and it is not a final gate you hit once when the code "feels done". It is the thing an agent should keep polling while it works.

The loop

Run vary var --json; read the current stage, blockers, recommendation, and nextCommand; make the smallest change that addresses the reported problem; then run vary var --json again. Keep going until the outcome is complete.

This is what VAR's session model is for. It remembers what already passed, resumes from the first incomplete stage when the tree is unchanged, and resets when new edits change the fingerprint. The agent does not have to track the validation plan itself.
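The loop above can be sketched as a small driver. This is a sketch, not VAR's official client: it assumes a vary binary on PATH that prints a single JSON object on stdout, and the apply_fix callback stands in for whatever edit the agent makes.

```python
import json
import subprocess

def poll_var():
    """Run `vary var --json` and parse its report.

    Assumes the command prints one JSON object on stdout
    (an assumption of this sketch, not documented behavior).
    """
    out = subprocess.run(["vary", "var", "--json"],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def drive(apply_fix, poll=poll_var, max_iters=20):
    """Poll after every edit until VAR reports a complete outcome."""
    for _ in range(max_iters):
        report = poll()
        if report.get("outcome") == "complete":
            return report
        # Make the smallest change that addresses the reported
        # problem, then fall through and poll again.
        apply_fix(report.get("blockers", []), report.get("recommendation"))
    raise RuntimeError("validation loop did not converge")
```

The agent never tracks the plan itself: every iteration starts from whatever the fresh report says, which is exactly the session model doing the bookkeeping.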

Why polling beats guessing

AI tools are good at local edits. They are bad at deciding, on every turn and from scratch, whether the next move is a structural fix, a narrow test repair, a broader test run, a mutation pass, or review closeout. vary var makes that decision deterministic.

Rather than teaching every agent a custom verification ladder and a pile of repo-specific heuristics, give it one protocol:

vary var --json

The command reports what changed, what stage the repo is in, what is blocked, what exact command should run next, and whether the loop is making progress.

What an agent should actually do

Use JSON mode. The human report is for humans; the JSON is the machine contract.

vary var --json

The fields that matter:

Field           Why the agent should care
stage           Current pipeline position
outcome         Current terminal status
blockers        What must be fixed next
recommendation  Short next-step summary
nextCommand     Exact command to run
noProgress      Whether the loop is stuck

Keep the policy narrow. If outcome is complete, stop. If there are blockers, fix the blockers before trying to advance. If nextCommand is set, run it instead of guessing. After edits, re-run vary var --json. If noProgress flips to true, change strategy, do not brute-force more of the same repair.
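That policy is small enough to write down directly. A sketch using the field names above; the returned action strings are this sketch's own vocabulary, and putting the noProgress check ahead of blockers is this sketch's judgment call, not something VAR mandates:

```python
def next_action(report):
    """Map a `vary var --json` report to the agent's next move."""
    if report.get("outcome") == "complete":
        return ("stop", None)
    # A stuck loop overrides repeating the same repair.
    if report.get("noProgress"):
        return ("change_strategy", None)
    # Blockers come before advancing.
    if report.get("blockers"):
        return ("fix", report["blockers"])
    # Run the exact command VAR names instead of guessing.
    if report.get("nextCommand"):
        return ("run", report["nextCommand"])
    return ("repoll", None)
```

Everything after the action (editing code, changing strategy) is the model's job; the dispatch itself stays deterministic.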

Keep returning to the same command

The steady-state call is still:

vary var --json

VAR will sometimes recommend something more specific, like a mutation pass or review closeout. Run that when it says to, then come back to vary var. Do not maintain a separate internal idea of whether the tree is "done enough for tests" or "ready for mutation". VAR already decides that.
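Running the recommended command verbatim and then returning to the poll can look like this. A sketch that assumes nextCommand is a plain argv-style string with no shell features, which is not guaranteed by anything above:

```python
import shlex
import subprocess

def run_next_command(report, runner=subprocess.run):
    """Execute VAR's recommended command verbatim.

    The caller then returns to `vary var --json` rather than
    keeping its own idea of where the pipeline stands.
    """
    cmd = report.get("nextCommand")
    if not cmd:
        return None
    # Split the string into argv; assumes no shell quoting tricks.
    return runner(shlex.split(cmd))
```

The `runner` parameter exists only so the dispatch can be exercised without a real vary binary installed.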

The shape looks like this:

edit code
-> vary var --json
-> fix what VAR says is wrong
-> vary var --json
-> fix what VAR says is wrong
-> vary var --json
-> complete

What vary var is not

It does not rewrite your source code for you. It is an orchestrator and a decision-maker.

Its job is to answer what validation work is left, what should run next, what failed, and whether the last edit helped. The agent's job is to read those answers, change code or tests, and re-run VAR. That split is the whole point: the command stays deterministic and auditable, and the model stays responsible for the repair.

Good loop vs bad loop

Good loop: VAR decides. The agent repairs. VAR re-evaluates.
Bad loop: The agent guesses. The verifier emits noise. The agent guesses again.

An agent that polls vary var stops improvising the verification sequence. It gets the same answer to "what now?" every iteration, and that answer is grounded in the state of the tree rather than the model's memory of what it did last turn.

Related reading

Page                 Topic
Verification ladder  Escalation order
Confidence at scale  Improving generated code
VAR overview         Session-based loop
VAR output modes     JSON protocol