Benchmark suite for smart-contract fuzzers.
- Maintain a current view of common fuzzers under a shared, realistic workload.
- Focus on benchmark quality with real projects, real bug-finding tasks, long timeouts, and repeated runs.
- Publish transparent metrics and artifacts for independent review.
- Help fuzzer/tool builders identify bottlenecks and improve their tools.
A fuzzer is currently considered in-scope when it is:
- Open source.
- Able to detect assertion failures.
- Able to check global invariants.
Fuzzers currently in scope:
- Foundry
- Echidna
- Medusa
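To illustrate the two detection modes required above, a minimal target contract might expose both an assertion-failure bug and an Echidna-style global invariant. This is a hypothetical sketch, not a contract from this repository; the names `Vault` and `echidna_no_overdraw` are illustrative:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical benchmark target with a deliberate accounting bug.
contract Vault {
    uint256 public deposits;
    uint256 public withdrawn;

    function deposit(uint256 amount) external {
        deposits += amount;
    }

    function withdraw(uint256 amount) external {
        // Bug: no check that withdrawals stay within deposits.
        withdrawn += amount;
        // Assertion-failure target: trips when the bug is triggered.
        assert(withdrawn <= deposits);
    }

    // Global invariant (Echidna-style boolean property):
    // must return true after every call sequence.
    function echidna_no_overdraw() external view returns (bool) {
        return withdrawn <= deposits;
    }
}
```

Foundry and Medusa express the same invariant through their own invariant-test harnesses rather than an `echidna_`-prefixed property, but the underlying bug-finding task is identical.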
Use the target onboarding skill for new targets:
- skills/README.md
- skills/target-onboarding/SKILL.md
For all technical/operational details, use the docs site pages:
- Introduction: docs/introduction.md
- Start benchmark request: docs/start.md
- Methodology: docs/methodology.md
- Operations guide (Terraform, running, reruns, analysis, CI workflows): docs/operations.md
- Target onboarding skill (machine-oriented): skills/target-onboarding/SKILL.md
Rendered docs navigation and run/benchmark pages are available under docs/.