Escape the Dungeon · GAD
escape-the-dungeon/v2
Composite score
0.285
Dimension scores
Where the composite came from
Each dimension is scored 0.0 – 1.0 and combined using the weights in evals/escape-the-dungeon/gad.json. Human review dominates on purpose — process metrics alone can't rescue a broken run.
| Dimension | Score |
|---|---|
| Planning quality | 0.000 |
| Skill accuracy | 0.600 |
| Time efficiency | 0.637 |
Composite formula
How 0.285 was calculated
The composite score is a weighted sum of the dimensions above. Weights come from evals/escape-the-dungeon/gad.json. Contribution = score × weight; dimensions sorted by contribution so you can see what actually moved the needle.
| Dimension | Weight | Score | Contribution |
|---|---|---|---|
| skill_accuracy | 0.10 | 0.600 | 0.0600 (65%) |
| time_efficiency | 0.05 | 0.637 | 0.0319 (35%) |
| requirement_coverage | 0.15 | 0.000 | 0.0000 (0%) |
| planning_quality | 0.15 | 0.000 | 0.0000 (0%) |
| per_task_discipline | 0.15 | 0.000 | 0.0000 (0%) |
| human_review | 0.30 | 0.000 | 0.0000 (0%) |
| Weighted sum | 0.90 | | 0.0919 |
Note: The weighted sum above (0.0919) doesn't exactly match the stored composite (0.2849). The gap usually comes from the v3 low-score floor (composite < 0.20 → 0.40, composite < 0.10 → 0.25) or from a run scored by an older scoring pass.
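The arithmetic above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual scoring code: the real weights live in evals/escape-the-dungeon/gad.json, the dictionary values below are copied from the table, and the low-score thresholds follow the note.

```python
# Weights and per-dimension scores, copied from the contribution table above.
weights = {
    "skill_accuracy": 0.10,
    "time_efficiency": 0.05,
    "requirement_coverage": 0.15,
    "planning_quality": 0.15,
    "per_task_discipline": 0.15,
    "human_review": 0.30,
}
scores = {
    "skill_accuracy": 0.600,
    "time_efficiency": 0.637,
    "requirement_coverage": 0.000,
    "planning_quality": 0.000,
    "per_task_discipline": 0.000,
    "human_review": 0.000,
}

# Weighted sum: contribution of each dimension is score x weight.
raw = sum(scores[d] * weights[d] for d in weights)  # ~0.0919, matching the table

# v3 low-score adjustment, as described in the note (assumed behavior).
if raw < 0.10:
    composite = 0.25
elif raw < 0.20:
    composite = 0.40
else:
    composite = raw
```

With these inputs the adjustment fires (raw ≈ 0.0919 < 0.10), which is why the stored composite sits near 0.25 rather than at the raw weighted sum.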
Skill accuracy breakdown
Did the agent invoke the right skills at the right moments?
For each expected trigger we recorded whether the agent invoked the skill at the right point in the loop. Accuracy = fired / expected = 3 / 5 = 0.60.
| Skill | Expected when | Outcome |
|---|---|---|
| /gad:plan-phase | before implementation | fired |
| /gad:execute-phase | per phase | fired |
| /gad:task-checkpoint | between tasks | fired |
| /gad:auto-conventions | after first code phase | missed |
| /gad:verify-work | after phase completion | missed |
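The accuracy figure reduces to a simple ratio. A minimal sketch, with the outcomes mirrored from the table above:

```python
# Outcome per expected skill trigger, copied from the breakdown table.
outcomes = {
    "/gad:plan-phase": "fired",
    "/gad:execute-phase": "fired",
    "/gad:task-checkpoint": "fired",
    "/gad:auto-conventions": "missed",
    "/gad:verify-work": "missed",
}

# Accuracy = fired / expected.
fired = sum(1 for o in outcomes.values() if o == "fired")
accuracy = fired / len(outcomes)
print(accuracy)  # 0.6
```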
Process metrics