Emergent · v2 · requirements v3 · 2026-04-08 · Gate passed

Escape the Dungeon · Emergent

escape-the-dungeon-emergent/v2

Composite score

0.478

Human review

0.50

Human review note

Solid crafting system — the forge is functional, with more authored content than other runs. Playable, but no floor progression after the boss and no way to train or test crafted spells. Maintained 2 phase commits even under rate-limit pressure (the only condition that did so). UI is medium quality — better than GAD v8, worse than bare v3. Score 0.50: the most disciplined run under pressure, with functional crafting.

Reviewed by human · 2026-04-08

Dimension scores

Where the composite came from

Each dimension is scored 0.0 – 1.0 and combined using the weights in evals/escape-the-dungeon-emergent/gad.json. Human review dominates on purpose — process metrics alone can't rescue a broken run.

Dimension · Score
Human review · 0.500
Requirement coverage · 0.667
Implementation quality · 0.600
Iteration evidence · 0.500
Time efficiency · 0.969

Composite formula

How 0.478 was calculated

The composite score is a weighted sum of the dimensions above. Weights come from evals/escape-the-dungeon-emergent/gad.json. Contribution = score × weight; dimensions are sorted by contribution so you can see what actually moved the needle.

Dimension · Weight · Score · Contribution
human_review · 0.30 · 0.500 · 0.1500 (26%)
requirement_coverage · 0.20 · 0.667 · 0.1334 (24%)
implementation_quality · 0.15 · 0.600 · 0.0900 (16%)
workflow_quality · 0.10 · 0.750 · 0.0750 (13%)
time_efficiency · 0.05 · 0.969 · 0.0485 (9%)
skill_reuse · 0.15 · 0.300 · 0.0450 (8%)
iteration_evidence · 0.05 · 0.500 · 0.0250 (4%)
Weighted sum · 1.00 · — · 0.5668

Note: the weighted sum above (0.5668) doesn't exactly match the stored composite (0.4780). The difference usually comes from the v3 low-score cap (composite < 0.20 → 0.40; composite < 0.10 → 0.25) or from a run scored with an older scoring pass.
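The weighted-sum-plus-floor computation described above can be sketched as follows. The weights mirror the table in this section (nominally from evals/escape-the-dungeon-emergent/gad.json), and the function name and structure are illustrative assumptions, not the actual scoring code:

```python
# Illustrative sketch of the composite scoring described above.
# Weights are copied from the contribution table; the function is a
# hypothetical reconstruction, not the real scorer.

WEIGHTS = {
    "human_review": 0.30,
    "requirement_coverage": 0.20,
    "implementation_quality": 0.15,
    "skill_reuse": 0.15,
    "workflow_quality": 0.10,
    "time_efficiency": 0.05,
    "iteration_evidence": 0.05,
}

def composite(scores: dict) -> float:
    """Weighted sum of dimension scores, then the v3 low-score cap."""
    total = sum(WEIGHTS[dim] * score for dim, score in scores.items())
    # v3 cap from the note above: very low composites are bumped up.
    if total < 0.10:
        return 0.25
    if total < 0.20:
        return 0.40
    return total

scores = {
    "human_review": 0.500,
    "requirement_coverage": 0.667,
    "implementation_quality": 0.600,
    "workflow_quality": 0.750,
    "time_efficiency": 0.969,
    "skill_reuse": 0.300,
    "iteration_evidence": 0.500,
}
print(composite(scores))  # ≈ 0.5668, not the stored 0.4780
```

Running this reproduces the 0.5668 weighted sum shown in the table; it does not explain the stored 0.4780, which is consistent with the note's "older scoring pass" case.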

Skill accuracy breakdown

Did the agent invoke the right skills at the right moments?

Skill accuracy data isn't relevant for this run (no expected trigger set).

What the agent built for itself

Emergent workflow artifacts

Bare and emergent runs don't have a framework giving them structure — they author their own methodology on the fly. These are the files the agent wrote into its own game/.planning/ directory during this run. When a file appears here that isn't in the inherited bootstrap set, the agent invented it.

Workflow docs(1)

  • WORKFLOW.md · 1.1 KB

Gate report

Requirement coverage

Total criteria
12
Fully met
6
Partially met
4
Not met
2
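The 0.667 requirement_coverage score is consistent with counting partially met criteria at half weight. This formula is a guess from the counts above, not something the report documents:

```python
# Hypothetical coverage formula: partials count half. It reproduces the
# 0.667 requirement_coverage score from the gate counts above, but the
# actual scoring rule is an assumption.
def coverage(fully_met: int, partially_met: int, total: int) -> float:
    return (fully_met + 0.5 * partially_met) / total

print(round(coverage(6, 4, 12), 3))  # → 0.667
```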

Reviewer notes on gates

G1 (game-loop) partial — playable but no floor progression. G2 (spell-crafting) MET — forge added and functional, has authored content. G3 (ui-quality) medium — better than GAD, worse than bare v3.

Process metrics

How the agent actually worked

Primary runtime
older runs may not carry runtime attribution yet
Agent lanes
0
0 root · 0 subagent · source missing
Observed depth
0 traced event(s) with agent lineage
Wall clock
15m
2 phases · 0 tasks
Started
Run start captured in TRACE timing metadata
Ended
Missing end time usually means the run was scaffolded but never finalized
Tool uses
73
1,609 tokens · rate-limited — but maintained phase commits
Commits
2
0 with task id · 0 batch
Client debug · NEXT_PUBLIC_CLIENT_DEBUG=1
0 lines

No events yet. Window errors, unhandled rejections, and React render errors appear here. Set NEXT_PUBLIC_CLIENT_DEBUG_CONSOLE=1 to mirror console.error / console.warn.