Bare
v4
requirements v4
2026-04-09
Gate failed
rate limited — not comparable

Escape the Dungeon · Bare

escape-the-dungeon-bare/v4

Composite score

0.000

Rate-limited run · excluded from cross-round comparisons

This run hit an account-level rate limit before completing. The data captured here reflects partial progress only — it is not included in the freedom-hypothesis scatter, the Results card grid, or any aggregate comparison on the site (decision gad-63). Preserved here as a data point for planning-differential analysis and honest documentation.

Details: Hit account-level rate limit at tool_uses=45. Shared rate bucket with parallel GAD + Emergent runs.

Full round 4 partial-results finding →

Human review note

RATE LIMITED before completion. 6 source files written, vite build succeeds manually (54 KB bundle). worklog.md shows 10-step plan covering all 4 gates. Implementation depth: step 1 of 10 complete. DO NOT include in cross-round comparisons against completed runs.

Dimension scores

Where the composite came from

Each dimension is scored 0.0 – 1.0 and combined using the weights in evals/escape-the-dungeon-bare/gad.json. Human review dominates on purpose — process metrics alone can't rescue a broken run.

Dimension              Score
Requirement coverage   0.125
Planning quality       0.100
Per-task discipline    0.000
Time efficiency        0.533

Composite formula

How 0.000 was calculated

The composite score is a weighted sum of the dimensions above. Weights come from evals/escape-the-dungeon-bare/gad.json. Contribution = score × weight; dimensions are sorted by contribution so you can see what actually moved the needle.

Dimension                Weight   Score    Contribution
time_efficiency          0.05     0.533    0.0267 (52%)
requirement_coverage     0.20     0.125    0.0250 (48%)
implementation_quality   0.20     0.000    0.0000 (0%)
workflow_emergence       0.15     0.000    0.0000 (0%)
iteration_evidence       0.10     0.000    0.0000 (0%)
human_review             0.30     0.000    0.0000 (0%)
Weighted sum             1.00              0.0517

Note: The weighted sum above (0.0517) doesn't match the stored composite (0.0000). The difference usually comes from the v3 low-score cap (composite < 0.20 → 0.40; composite < 0.10 → 0.25) or from a run scored by an older scoring pass.
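The weighted-sum step can be sketched as follows (a minimal reconstruction from the contribution table above; the weight values are copied from this page, the real source of truth is evals/escape-the-dungeon-bare/gad.json, and the v3 low-score cap mentioned in the note is a separate post-processing step not shown here):

```python
# Weights as shown in the contribution table; authoritative values
# live in evals/escape-the-dungeon-bare/gad.json.
WEIGHTS = {
    "time_efficiency": 0.05,
    "requirement_coverage": 0.20,
    "implementation_quality": 0.20,
    "workflow_emergence": 0.15,
    "iteration_evidence": 0.10,
    "human_review": 0.30,
}

# This run's dimension scores (from the table above).
scores = {
    "time_efficiency": 0.533,
    "requirement_coverage": 0.125,
    "implementation_quality": 0.0,
    "workflow_emergence": 0.0,
    "iteration_evidence": 0.0,
    "human_review": 0.0,
}

# Contribution = score x weight, sorted descending so the biggest
# mover appears first, as in the rendered table.
contributions = sorted(
    ((dim, scores[dim] * weight) for dim, weight in WEIGHTS.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
weighted_sum = sum(value for _, value in contributions)  # ~0.0517
```

With this run's scores, time_efficiency tops the sorted list despite its small 0.05 weight, because every heavily weighted dimension scored zero.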

Skill accuracy breakdown

Did the agent invoke the right skills at the right moments?

Skill accuracy data isn't relevant for this run (no expected trigger set).

What the agent built for itself

Emergent workflow artifacts

Bare and emergent runs don't have a framework giving them structure — they author their own methodology on the fly. These are the files the agent wrote into its own game/.planning/ directory during this run. When a file appears here that isn't in the inherited bootstrap set, the agent invented it.

Skills written (2)

  • create-skill.md · 4.9 KB
  • find-sprites.md · 5.9 KB

Planning notes (1)

  • worklog.md · 745 B

Gate report

Requirement coverage

Total criteria: 12
Fully met: 0
Partially met: 3
Not met: 9
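These counts are consistent with a simple partial-credit scheme — full credit for each fully met criterion and half credit for each partially met one. The half-credit weight is an assumption (this page doesn't state the exact rule), but it reproduces the 0.125 requirement-coverage score shown above:

```python
# Hypothetical coverage scoring: fully met = 1.0, partially met = 0.5
# (assumed partial credit), not met = 0. With this run's gate counts
# it matches the requirement-coverage dimension score of 0.125.
def coverage_score(fully_met: int, partially_met: int, total: int,
                   partial_credit: float = 0.5) -> float:
    return (fully_met + partial_credit * partially_met) / total

score = coverage_score(fully_met=0, partially_met=3, total=12)  # 0.125
```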

Reviewer notes on gates

G1 partial (scaffold only). G2/G3/G4 NOT MET. RATE LIMITED before any gate could be verified. Vite build succeeds manually (54 KB, 18 modules) but game loop is not wired end-to-end.

Process metrics

How the agent actually worked

Primary runtime: not captured (older runs may not carry runtime attribution yet)
Agent lanes: 0 (0 root · 0 subagent · source missing)
Observed depth: 0 traced event(s) with agent lineage
Wall clock: 14m (1 phase · 1 task)
Started: Apr 9, 4:26 AM (run start captured in TRACE timing metadata)
Ended: Apr 9, 4:40 AM
Tool uses: 45 (1,682 tokens · rate-limited — partial implementation)
Commits: 0 (0 with task id · 0 batch)
Planning docs: 0 (decisions captured · 10 phases planned)
Client debug (NEXT_PUBLIC_CLIENT_DEBUG=1): 0 lines

No events yet. Window errors, unhandled rejections, and React render errors appear here. Set NEXT_PUBLIC_CLIENT_DEBUG_CONSOLE=1 to mirror console.error / console.warn.