Escape the Dungeon · GAD
escape-the-dungeon/v5
Composite score: 0.812
Human review: 0.00
Process metrics diverged from reality
This run scored 0.812 on the composite formula (requirement coverage, planning quality, commit discipline, skill accuracy, time efficiency), but a human reviewer rated the actual artifact 0.00 out of 1.0. This is exactly the failure mode that prompted gad-29 ("process metrics do not guarantee output quality") and the move to weight human review at 30%.
Human review note
Blank screen: no UI renders, no main menu, no playable content. Systems may exist in code, but nothing is visible. The eval must require a vertical slice with a playable demo.
Reviewed by human · 2026-04-08
Dimension scores
Where the composite came from
Each dimension is scored 0.0 – 1.0 and combined using the weights in evals/escape-the-dungeon/gad.json. Human review dominates on purpose — process metrics alone can't rescue a broken run.
| Dimension | Score |
|---|---|
| Human review | 0.000 |
| Requirement coverage | 1.000 |
| Planning quality | 1.000 |
| Per-task discipline | 0.830 |
| Skill accuracy | 1.000 |
| Time efficiency | 0.963 |
Composite formula
How 0.812 was calculated
The composite score is a weighted sum of the dimensions above. Weights come from evals/escape-the-dungeon/gad.json. Contribution = score × weight; dimensions sorted by contribution so you can see what actually moved the needle.
| Dimension | Weight | Score | Contribution |
|---|---|---|---|
| requirement_coverage | 0.15 | 1.000 | 0.1500 (26%) |
| planning_quality | 0.15 | 1.000 | 0.1500 (26%) |
| per_task_discipline | 0.15 | 0.830 | 0.1245 (22%) |
| skill_accuracy | 0.10 | 1.000 | 0.1000 (17%) |
| time_efficiency | 0.05 | 0.963 | 0.0481 (8%) |
| human_review | 0.30 | 0.000 | 0.0000 (0%) |
| Weighted sum | 0.90 | | 0.5726 |
Note: The weighted sum above (0.5726) doesn't exactly match the stored composite (0.8123). The difference usually comes from the v3 low-score cap (composite < 0.20 is floored to 0.40; composite < 0.10 to 0.25) or from a run scored by an older scoring pass.
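The weighted sum and the v3 low-score cap described above can be sketched as follows. This is a minimal reconstruction from the table, not the actual scoring code: the real weights live in evals/escape-the-dungeon/gad.json, and the function name and dict layout here are illustrative assumptions.

```python
# Sketch of the composite calculation. Weights mirror the table above;
# the authoritative values live in evals/escape-the-dungeon/gad.json.
WEIGHTS = {
    "requirement_coverage": 0.15,
    "planning_quality": 0.15,
    "per_task_discipline": 0.15,
    "skill_accuracy": 0.10,
    "time_efficiency": 0.05,
    "human_review": 0.30,
}

def composite(scores: dict[str, float]) -> float:
    """Weighted sum of dimension scores, then the v3 low-score cap."""
    raw = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    # v3 cap, as stated in the note above: very low composites are
    # floored rather than reported raw.
    if raw < 0.10:
        return 0.25
    if raw < 0.20:
        return 0.40
    return raw

scores = {
    "requirement_coverage": 1.000,
    "planning_quality": 1.000,
    "per_task_discipline": 0.830,
    "skill_accuracy": 1.000,
    "time_efficiency": 0.963,
    "human_review": 0.000,
}
print(f"{composite(scores):.4f}")  # ≈ 0.5726, the weighted-sum row above
```

Running this reproduces the weighted-sum row (≈ 0.5726), not the stored 0.8123, which is consistent with the note: the gap must come from the cap logic or an older scoring pass, not from the weights themselves.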
Skill accuracy breakdown
Did the agent invoke the right skills at the right moments?
For each expected trigger, we recorded whether the agent invoked the skill at the right point in the loop. Accuracy = fired / expected = 1.00.
| Skill | Expected when | Outcome |
|---|---|---|
| /gad:plan-phase | before implementation | fired |
| /gad:execute-phase | per phase | fired |
| /gad:task-checkpoint | between tasks | fired |
| /gad:auto-conventions | after first code phase | fired |
| /gad:verify-work | after phase completion | fired |
| /gad:check-todos | session start | fired |
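The accuracy figure above reduces to a simple ratio. A minimal sketch, assuming the recorded outcomes are stored as booleans per trigger (the dict layout is an assumption; the skill names come from the table):

```python
# Skill-accuracy metric: fired / expected, per the definition above.
# Keys are the expected triggers from the table; True means "fired".
expected_triggers = {
    "/gad:plan-phase": True,
    "/gad:execute-phase": True,
    "/gad:task-checkpoint": True,
    "/gad:auto-conventions": True,
    "/gad:verify-work": True,
    "/gad:check-todos": True,
}

fired = sum(expected_triggers.values())
accuracy = fired / len(expected_triggers)
print(f"{accuracy:.2f}")  # 1.00 for this run: every expected skill fired
```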
Gate report
Requirement coverage
Process metrics