portfolio-bare
portfolio-bare/v4
Composite score
0.996
Dimension scores
Where the composite came from
Each dimension is scored 0.0–1.0 and combined using the weights in evals/portfolio-bare/gad.json. Human review dominates by design: process metrics alone can't rescue a broken run.
| Dimension | Score |
|---|---|
| Requirement coverage | 1.000 |
| Planning quality | 1.000 |
| Per-task discipline | 1.000 |
| Skill accuracy | 1.000 |
| Time efficiency | 0.975 |
Composite formula
How 0.996 was calculated
The composite score is a weighted sum of the dimensions above. Weights come from evals/portfolio-bare/gad.json. Contribution = score × weight; dimensions are sorted by contribution so you can see what actually moved the needle.
| Dimension | Weight | Score | Contribution |
|---|---|---|---|
| requirement_coverage | 0.40 | 1.000 | 0.4000 (100%) |
| task_alignment | 0.25 | 0.000 | 0.0000 (0%) |
| state_hygiene | 0.20 | 0.000 | 0.0000 (0%) |
| decision_coverage | 0.15 | 0.000 | 0.0000 (0%) |
| Weighted sum | 1.00 | | 0.4000 |
Note: The weighted sum above (0.4000) doesn't exactly match the stored composite (0.9960). The difference is usually the v3 low-score cap (composite < 0.20 → 0.40, composite < 0.10 → 0.25) or a run scored by an older scoring pass.
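The weighted-sum-plus-cap logic described above can be sketched as follows. This is an illustrative reconstruction, not the actual scorer: the function name, signature, and hard-coded weights (taken from the table) are assumptions.

```python
# Sketch of the composite calculation: weighted sum of dimension scores,
# then the v3 low-score cap (a floor applied to very low composites).
# Not the real scorer's API; weights mirror the table above.
def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    total = sum(w * scores.get(dim, 0.0) for dim, w in weights.items())
    if total < 0.10:      # v3 cap: composite < 0.10 -> 0.25
        return 0.25
    if total < 0.20:      # v3 cap: composite < 0.20 -> 0.40
        return 0.40
    return total

weights = {"requirement_coverage": 0.40, "task_alignment": 0.25,
           "state_hygiene": 0.20, "decision_coverage": 0.15}
scores = {"requirement_coverage": 1.000}
print(composite(scores, weights))  # 0.4 -- matches the weighted sum row, not the stored 0.996
```

Note that the cap only raises low composites, so on these numbers it leaves the 0.4000 weighted sum untouched.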
Skill accuracy breakdown
Did the agent invoke the right skills at the right moments?
For each expected trigger we recorded whether the agent invoked the skill at the right point in the loop. Accuracy = fired / expected = 1.00.
| Skill | Expected when | Outcome |
|---|---|---|
| /gad:plan-phase | before implementation | fired |
| /gad:execute-phase | per phase | fired |
| /gad:task-checkpoint | between tasks | fired |
| /gad:auto-conventions | after first code phase | fired |
| /gad:verify-work | after phase completion | fired |
| /gad:check-todos | session start | fired |
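The accuracy = fired / expected tally above can be reproduced with a few lines. The dict below is a hypothetical representation of the recorded outcomes, not the evaluator's actual data structure.

```python
# Tally skill-trigger accuracy from the outcomes in the table above.
# Structure is illustrative; the real recorder may store richer events.
outcomes = {
    "/gad:plan-phase": "fired",
    "/gad:execute-phase": "fired",
    "/gad:task-checkpoint": "fired",
    "/gad:auto-conventions": "fired",
    "/gad:verify-work": "fired",
    "/gad:check-todos": "fired",
}
fired = sum(1 for o in outcomes.values() if o == "fired")
accuracy = fired / len(outcomes)  # expected = number of recorded triggers
print(f"{accuracy:.2f}")  # 1.00
```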
Gate report
Requirement coverage
Process metrics