GAD · v9 · requirements v4 · 2026-04-09 · Gate failed · rate limited — not comparable

Escape the Dungeon · GAD

escape-the-dungeon/v9

Composite score

0.000

Human review

0.05

Rate-limited run · excluded from cross-round comparisons

This run hit an account-level rate limit before completing. The data captured here reflects partial progress only — it is not included in the freedom-hypothesis scatter, the Results card grid, or any aggregate comparison on the site (decision gad-63). Preserved here as a data point for planning-differential analysis and honest documentation.

Details: Hit an account-level rate limit at tool_uses=81 (highest of the three conditions). Planned 7 phases + 23 tasks before the limit hit. Completed phase 01 fully + task 02-01.

Full round 4 partial-results finding →

Human review note

Rate-limited GAD v9. Start screen loads but nothing beyond it — regressed vs earlier GAD builds. Preserved as data point but excluded from cross-round quality per gad-63. User playtest 2026-04-09.

Reviewed by human · 2026-04-09

Dimension scores

Where the composite came from

Each dimension is scored 0.0 – 1.0 and combined using the weights in evals/escape-the-dungeon/gad.json. Human review dominates on purpose — process metrics alone can't rescue a broken run.

Dimension · Score
Human review · 0.050
Requirement coverage · 0.208
Planning quality · 0.850
Per-task discipline · 0.000
Time efficiency · 0.533

Composite formula

How 0.000 was calculated

The composite score is a weighted sum of the dimensions above. Weights come from evals/escape-the-dungeon/gad.json. Contribution = score × weight; dimensions are sorted by contribution so you can see what actually moved the needle.

Dimension · Weight · Score · Contribution
planning_quality · 0.15 · 0.850 · 0.1275 (64%)
requirement_coverage · 0.15 · 0.208 · 0.0312 (16%)
time_efficiency · 0.05 · 0.533 · 0.0267 (13%)
human_review · 0.30 · 0.050 · 0.0150 (7%)
per_task_discipline · 0.15 · 0.000 · 0.0000 (0%)
skill_accuracy · 0.10 · 0.000 · 0.0000 (0%)
Weighted sum · 0.90 · 0.2004

Note: The weighted sum above (0.2004) doesn't match the stored composite (0.0000). The difference is usually the v3 low-score cap (composite < 0.20 → 0.40, composite < 0.10 → 0.25) or a run with an older scoring pass; here the stored 0.0000 most likely reflects the rate-limit exclusion (decision gad-63) rather than either cap.
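The contribution table above can be reproduced with a short script. A minimal sketch: the weight values mirror the table, but the exact JSON layout of evals/escape-the-dungeon/gad.json is an assumption, so the weights are inlined here rather than loaded from the file.

```python
# Weighted composite as described above: contribution = score * weight,
# dimensions ranked by contribution. Weights mirror the table; the JSON
# layout of evals/escape-the-dungeon/gad.json is not shown, so they are
# hard-coded for illustration.
WEIGHTS = {
    "human_review": 0.30,
    "requirement_coverage": 0.15,
    "planning_quality": 0.15,
    "per_task_discipline": 0.15,
    "skill_accuracy": 0.10,
    "time_efficiency": 0.05,
}

def composite(scores: dict[str, float]) -> tuple[float, list[tuple[str, float]]]:
    """Return the weighted sum and per-dimension contributions, largest first."""
    contribs = {dim: scores.get(dim, 0.0) * w for dim, w in WEIGHTS.items()}
    total = sum(contribs.values())
    ranked = sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)
    return total, ranked

# This run's dimension scores, from the table above.
scores = {
    "human_review": 0.050,
    "requirement_coverage": 0.208,
    "planning_quality": 0.850,
    "per_task_discipline": 0.000,
    "skill_accuracy": 0.000,
    "time_efficiency": 0.533,
}
total, ranked = composite(scores)
print(f"{total:.4f}")  # weighted sum, ~0.2004 up to per-row rounding
print(ranked[0][0])    # planning_quality, the dominant contributor
```

Note that the stored composite still differs from this raw weighted sum, since post-processing (caps, exclusions) is applied after the sum.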

Skill accuracy breakdown

Did the agent invoke the right skills at the right moments?

Skill accuracy data isn't relevant for this run (no expected trigger set).

What the agent built for itself

Emergent workflow artifacts

Bare and emergent runs don't have a framework giving them structure — they author their own methodology on the fly. These are the files the agent wrote into its own game/.planning/ directory during this run. When a file appears here that isn't in the inherited bootstrap set, the agent invented it.

Planning notes (2)

  • VERIFICATION.md · 154 B
  • source-STAT-AND-BEHAVIOUR-TAXONOMY.md · 4.5 KB

Gate report

Requirement coverage

Total criteria
12
Fully met
1
Partially met
3
Not met
8
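The 0.208 requirement-coverage score is consistent with giving partially met criteria half credit. A sketch of that formula — the half-credit rule is an inference from the numbers on this page, not documented behavior:

```python
# Coverage = (fully met + 0.5 * partially met) / total criteria.
# The 0.5 credit for partial gates is an assumption that happens to
# reproduce the 0.208 dimension score from this run's gate counts.
def coverage(fully: int, partial: int, total: int) -> float:
    return (fully + 0.5 * partial) / total

score = coverage(fully=1, partial=3, total=12)
print(round(score, 3))  # 0.208
```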

Reviewer notes on gates

G1 PARTIAL — title screen scene renders, scene transition to New Game NOT IMPLEMENTED (task 02-02 was next). No room navigation. G2 NOT MET. G3 PARTIAL — title screen styled. G4 NOT MET. RATE LIMITED at task 02-02. Phase 01 scaffold passed its own verification with working build.

Process metrics

How the agent actually worked

Primary runtime
older runs may not carry runtime attribution yet
Agent lanes
0
0 root · 0 subagent · source missing
Observed depth
0 traced event(s) with agent lineage
Wall clock
14m
1 phase · 4 tasks
Started
Apr 9, 4:26 AM
Run start captured in TRACE timing metadata
Ended
Apr 9, 4:40 AM
Tool uses
81
3,238 tokens · Highest tool_uses of the three conditions — most planning overhead upfront.
Commits
0
0 with task id · 0 batch
Planning docs
0
decisions captured · 7 phases planned
Client debug · NEXT_PUBLIC_CLIENT_DEBUG=1
0 lines
