Escape the Dungeon · GAD
escape-the-dungeon/v5 · pre-gate requirements · 2026-04-07

Composite score: 0.812

Human review: 0.00

Process metrics diverged from reality

This run scored 0.812 on the composite formula (requirement coverage, planning quality, commit discipline, skill accuracy, time efficiency), but a human reviewer rated the actual artifact 0.00 out of 1.0. This is exactly the failure mode that prompted gad-29 ("process metrics do not guarantee output quality") and the move to weight human review at 30%.

Human review note

Blank screen, no UI renders, no main menu, no playable content. Systems may exist in code but nothing visible. Eval must require vertical slice with playable demo.

Reviewed by human · 2026-04-08

Dimension scores

Where the composite came from

Each dimension is scored 0.0 – 1.0 and combined using the weights in evals/escape-the-dungeon/gad.json. Human review dominates on purpose — process metrics alone can't rescue a broken run.

Dimension               Score
Human review            0.000
Requirement coverage    1.000
Planning quality        1.000
Per-task discipline     0.830
Skill accuracy          1.000
Time efficiency         0.963

Composite formula

How 0.812 was calculated

The composite score is a weighted sum of the dimensions above. Weights come from evals/escape-the-dungeon/gad.json. Contribution = score × weight; dimensions sorted by contribution so you can see what actually moved the needle.

Dimension              Weight   Score   Contribution
requirement_coverage   0.15     1.000   0.1500 (26%)
planning_quality       0.15     1.000   0.1500 (26%)
per_task_discipline    0.15     0.830   0.1245 (22%)
skill_accuracy         0.10     1.000   0.1000 (17%)
time_efficiency        0.05     0.963   0.0481 (8%)
human_review           0.30     0.000   0.0000 (0%)
Weighted sum           0.90             0.5726

Note: The weighted sum above (0.5726) doesn't exactly match the stored composite (0.8123). The difference is usually the v3 low-score cap (composite < 0.20 → 0.40, composite < 0.10 → 0.25) or a run with an older scoring pass.
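The weighted-sum arithmetic above can be sketched in a few lines of Python. This is a minimal illustration, not the actual scorer: the weights are copied from the table (assumed to match evals/escape-the-dungeon/gad.json), and the v3 low-score cap is reproduced from the note above, so the thresholds may differ from the real implementation.

```python
# Assumed weights, copied from the table above (per gad.json).
WEIGHTS = {
    "requirement_coverage": 0.15,
    "planning_quality": 0.15,
    "per_task_discipline": 0.15,
    "skill_accuracy": 0.10,
    "time_efficiency": 0.05,
    "human_review": 0.30,
}

def composite(scores: dict[str, float]) -> float:
    # Contribution = score × weight, summed over all dimensions.
    total = sum(WEIGHTS[dim] * score for dim, score in scores.items())
    # v3 low-score cap, as described in the note above (assumed thresholds):
    # very low composites are floored rather than reported raw.
    if total < 0.10:
        return 0.25
    if total < 0.20:
        return 0.40
    return total

# Dimension scores for this run, from the table above.
run = {
    "requirement_coverage": 1.000,
    "planning_quality": 1.000,
    "per_task_discipline": 0.830,
    "skill_accuracy": 1.000,
    "time_efficiency": 0.963,
    "human_review": 0.000,
}

print(f"composite = {composite(run):.4f}")
```

Running this reproduces the weighted sum in the table (≈0.5726), not the stored 0.8123 — consistent with the note that the stored value came from an older scoring pass or a capped composite.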

Skill accuracy breakdown

Did the agent invoke the right skills at the right moments?

For each expected trigger we recorded whether the agent invoked the skill at the right point in the loop. Accuracy = fired / expected = 1.00.

Skill                    Expected when             Outcome
/gad:plan-phase          before implementation     fired
/gad:execute-phase       per phase                 fired
/gad:task-checkpoint     between tasks             fired
/gad:auto-conventions    after first code phase    fired
/gad:verify-work         after phase completion    fired
/gad:check-todos         session start             fired
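The accuracy metric above is just the fraction of expected triggers that fired. A minimal sketch, assuming one expected trigger per skill (the real eval's data model is not known):

```python
# Outcomes copied from the table above; each skill has one expected trigger.
outcomes = {
    "/gad:plan-phase": "fired",
    "/gad:execute-phase": "fired",
    "/gad:task-checkpoint": "fired",
    "/gad:auto-conventions": "fired",
    "/gad:verify-work": "fired",
    "/gad:check-todos": "fired",
}

fired = sum(1 for outcome in outcomes.values() if outcome == "fired")
accuracy = fired / len(outcomes)  # accuracy = fired / expected
print(f"{accuracy:.2f}")  # → 1.00
```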

Gate report

Requirement coverage

Total criteria: 12
Fully met: 12
Partially met: 0
Not met: 0

Process metrics

How the agent actually worked

Primary runtime: unknown (older runs may not carry runtime attribution yet)
Agent lanes: 0 (0 root · 0 subagent · source missing)
Observed depth: 0 traced event(s) with agent lineage
Wall clock: 18m (12 phases · 12 tasks)
Started: Apr 7, 7:00 AM (run start captured in TRACE timing metadata)
Ended: Apr 7, 7:18 AM
Tool uses: 110 (92,278 tokens)
Commits: 11 (10 with task id · 1 batch)
Planning docs: 0 (decisions captured · 12 phases planned)