reverse-engineer-eval
reverse-engineer-eval/v1
Composite score
0.000
Dimension scores
Where the composite came from
Each dimension is scored from 0.0 to 1.0 and combined using the weights in evals/reverse-engineer-eval/gad.json. Human review carries the largest weight by design: process metrics alone can't rescue a broken run.
| Dimension | Score | Bar |
|---|---|---|
| requirements_completeness | 0.000 | |
| requirements_accuracy | 0.000 | |
| build_success | 0.000 | |
| functional_fidelity | 0.000 | |
| human_review | 0.000 | |
Composite formula
How 0.000 was calculated
The composite score is a weighted sum of the dimension scores above. Weights come from evals/reverse-engineer-eval/gad.json. Each dimension's contribution is score × weight, and dimensions are sorted by contribution so you can see what actually moved the needle.
| Dimension | Weight | Score | Contribution |
|---|---|---|---|
| requirements_completeness | 0.25 | 0.000 | 0.0000 (0%) |
| requirements_accuracy | 0.20 | 0.000 | 0.0000 (0%) |
| build_success | 0.15 | 0.000 | 0.0000 (0%) |
| functional_fidelity | 0.10 | 0.000 | 0.0000 (0%) |
| human_review | 0.30 | 0.000 | 0.0000 (0%) |
| Weighted sum | 1.00 | | 0.0000 |
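The calculation above can be sketched in a few lines. This is a minimal illustration, not the eval harness's actual code: the weight and score values are taken from the table, and the structure of evals/reverse-engineer-eval/gad.json is assumed rather than read here.

```python
# Composite = sum of (score × weight) over all dimensions.
# Weights mirror the table above (assumed to come from gad.json).
weights = {
    "requirements_completeness": 0.25,
    "requirements_accuracy": 0.20,
    "build_success": 0.15,
    "functional_fidelity": 0.10,
    "human_review": 0.30,
}
scores = {dim: 0.000 for dim in weights}  # this run scored 0.0 on every dimension

# Per-dimension contribution, then the weighted sum.
contributions = {dim: scores[dim] * weights[dim] for dim in weights}
composite = sum(contributions.values())

# Sort by contribution, largest first, as the table does.
ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)

print(f"{composite:.4f}")  # → 0.0000, since every score is zero
```

Because every dimension scored 0.000, every contribution is zero and the weighted sum collapses to 0.0000 regardless of the weights.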
Skill accuracy breakdown
Did the agent invoke the right skills at the right moments?
Skill accuracy data isn't relevant for this run (no expected trigger set).
Process metrics