diff --git a/docs/integrators/README.md b/docs/integrators/README.md index b5eadb2..a2a88c9 100644 --- a/docs/integrators/README.md +++ b/docs/integrators/README.md @@ -13,17 +13,19 @@ The guides are complementary to: ## Index -| Guide | Mechanism | Mechanism type | Status | -|---|---|---|---| -| [m001-enh.md](m001-enh.md) | Credit Class Approval Voting | Scoring (0-1000 composite, 3-way recommendation) | ✅ Written | -| [m012.md](m012.md) | Fixed Cap Dynamic Supply | Supply dynamics (BigInt arithmetic, phase-gated multipliers) | ✅ Written | -| [m014.md](m014.md) | Authority Validator Governance | Validator performance (re-normalized weighted score) | ✅ Written | -| m008.md | Data Attestation Bonding | Scoring | ⏳ TODO — follow m001-enh template | -| m009.md | Service Provision Escrow | Dual-guard scoring (score AND confidence) | ⏳ TODO — follow m001-enh template | -| m010.md | Reputation Signal | Stake-weighted endorsement + challenge lifecycle | ⏳ TODO | -| m011.md | Marketplace Curation | 7-factor quality scoring + collections | ⏳ TODO | -| m013.md | Value-Based Fee Routing | Fee computation + pool distribution | ⏳ TODO — follow m012 template | -| m015.md | Contribution-Weighted Rewards | Stability + activity tiers | ⏳ TODO | +| Guide | Mechanism | Mechanism type | +|---|---|---| +| [m001-enh.md](m001-enh.md) | Credit Class Approval Voting | Scoring (0-1000 composite, 3-way recommendation) | +| [m008.md](m008.md) | Data Attestation Bonding | Scoring (4-factor, no recommendation) | +| [m009.md](m009.md) | Service Provision Escrow | Dual-guard scoring (score AND confidence) | +| [m010.md](m010.md) | Reputation Signal | Event-driven decay-weighted average | +| [m011.md](m011.md) | Marketplace Curation | 7-factor quality scoring + collections | +| [m012.md](m012.md) | Fixed Cap Dynamic Supply | Supply dynamics (BigInt arithmetic, phase-gated multipliers) | +| [m013.md](m013.md) | Value-Based Fee Routing | Fee computation + pool distribution with Fee Conservation | +| 
[m014.md](m014.md) | Authority Validator Governance | Validator performance (re-normalized weighted score) | +| [m015.md](m015.md) | Contribution-Weighted Rewards | Activity score + stability tier allocation | + +**All 9 mechanism guides are written.** Future additions (new mechanisms, upgraded versions of existing mechanisms) should follow the same five-section template. ## Guide structure diff --git a/docs/integrators/m008.md b/docs/integrators/m008.md new file mode 100644 index 0000000..ba47f0d --- /dev/null +++ b/docs/integrators/m008.md @@ -0,0 +1,132 @@ +# Integrator guide: m008 Data Attestation Bonding + +**Mechanism:** Data Attestation Bonding +**Canonical spec:** [`mechanisms/m008-attestation-bonding/SPEC.md`](../../mechanisms/m008-attestation-bonding/SPEC.md) +**Reference implementation:** [`mechanisms/m008-attestation-bonding/reference-impl/m008_score.js`](../../mechanisms/m008-attestation-bonding/reference-impl/m008_score.js) + +## 1. What this mechanism does + +m008 scores bonded attestations on a 0-1000 composite. An attestation is a signed claim about ecological data (methodology validation, credit issuance, baseline measurement, project boundary) backed by a REGEN bond. The score reflects how trustworthy the attestation is *as evidence* — it combines the adequacy of the bond, the attester's reputation, the completeness of the evidence document, and the inherent risk of the attestation type. An integrator uses m008 to decide whether to accept an attestation into a registry, flag it for challenge, or reject it outright. 
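As a hedged sketch of that composite (weights and defaults as documented in this guide; `compositeSketch` is an illustrative name, not the reference API — use `computeM008Score` in production):

```js
// Illustrative m008 composite: four clamped 0..1000 factors, fixed weights.
// compositeSketch is hypothetical; the real entry point is computeM008Score.
const clamp = (v) => Math.min(1000, Math.max(0, v));

function compositeSketch({ bond_adequacy = 0, attester_reputation = 300, evidence_completeness = 0, type_risk = 0 } = {}) {
  return Math.round(
    0.30 * clamp(bond_adequacy) +          // bond relative to the type minimum
    0.30 * clamp(attester_reputation) +    // M010 reputation, cautious 300 default
    0.25 * clamp(evidence_completeness) +  // evidence document completeness
    0.15 * clamp(type_risk)                // fixed per-type risk factor
  );
}

// Matches the worked example later in this guide: 225 + 204 + 250 + 90 = 769
console.log(compositeSketch({ bond_adequacy: 750, attester_reputation: 680, evidence_completeness: 1000, type_risk: 600 }));
```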
+ +Four weighted factors drive the composite: + +- `bond_adequacy` × 0.30 — bond amount relative to the minimum for this attestation type +- `attester_reputation` × 0.30 — attester's M010 reputation (default 300 when no history, matching the cautious default in `mechanisms/m008-attestation-bonding/SPEC.md` §5.2) +- `evidence_completeness` × 0.25 — completeness of the supporting evidence document +- `type_risk` × 0.15 — risk factor assigned to the attestation type (higher = more risky) + +Unlike m001-enh, m008 does NOT return a recommendation — it's a pure score. The integrator decides the threshold. + +## 2. What you give it + +```js +import { computeM008Score } from "./m008_score.js"; + +const result = computeM008Score({ + attestation: { + attestation_id: "att-001", + attestation_type: "BaselineMeasurement", // ProjectBoundary / BaselineMeasurement / CreditIssuanceClaim / MethodologyValidation + attestation_iri: "koi://attestation/soil-baseline-plot-42", + bond: { amount: "1500", denom: "uregen" }, + }, + factors: { + // Scoring inputs — each 0..1000, clamped. Defaults: + // bond_adequacy: 0, attester_reputation: 300, evidence: 0, type_risk: 0 + bond_adequacy: 750, + attester_reputation: 680, + evidence_completeness: 1000, + type_risk: 600, + + // Confidence flags. Note: iri_resolvable and type_recognized default to + // TRUE when unset (the check is `!== false`, not === true), while + // reputation_available and has_prior_attestations default to FALSE + // (the check is truthy). + reputation_available: true, + iri_resolvable: true, + has_prior_attestations: true, + type_recognized: true, + }, +}); +``` + +**Schemas:** [`m008_attestation.schema.json`](../../mechanisms/m008-attestation-bonding/schemas/m008_attestation.schema.json) and [`m008_quality_score.schema.json`](../../mechanisms/m008-attestation-bonding/schemas/m008_quality_score.schema.json). + +## 3. 
What you get back + +```js +{ + score: 769, // 0..1000 composite, clamped (225 + 204 + 250 + 90) + confidence: 1000, // 0..1000, derived from availability flags + factors: { // clamped factor values echoed back + bond_adequacy: 750, + attester_reputation: 680, + evidence_completeness: 1000, + type_risk: 600, + }, +} +``` + +There is **no `recommendation` field**. m008 is advisory-only at the scoring layer — the integrator decides what to do at each score tier. The SPEC suggests the following rule of thumb: + +| Score | Interpretation | +|---|---| +| `>= 700` | Strong evidence — acceptable without further review | +| `400..699` | Moderate — flag for human review | +| `< 400` | Weak — consider challenge or reject | + +This is a guideline, not enforcement. Downstream tooling can apply its own threshold. + +## 4. Common error modes + +### 4a. Missing attester history + +If the attester has no prior attestations, `attester_reputation` defaults to **300** (below neutral, reflecting that the agent has no basis to trust the attester). Set `reputation_available: false` so confidence drops from 1000 to 750 (3/4 flags). If the evidence is strong despite the unknown attester, the composite can still clear the 700 threshold: maxed `bond_adequacy` and `evidence_completeness` contribute 550 combined, the 300-point reputation default adds 90, and any recognized attestation type (`type_risk >= 400`) closes the remaining 60-point gap. + +### 4b. Unresolvable IRI + +Set `iri_resolvable: false`. This is one of only two confidence flags where only an explicit `false` removes the contribution: the guard is `!== false`, so leaving it unset counts as available. This means: + +- `iri_resolvable: true` → counts (confidence gains 250) +- `iri_resolvable` undefined → counts (default is "available") +- `iri_resolvable: false` → does NOT count + +The same asymmetry applies to `type_recognized`. Pinned by [`vector_v0_empty_factors_defaults`](../../mechanisms/m008-attestation-bonding/reference-impl/test_vectors/vector_v0_empty_factors_defaults.input.json). + +### 4c. 
Bond-heavy attack vector + +A well-funded attacker might try to "buy" a high score by pouring REGEN into a bonded attestation with no real evidence. With `bond_adequacy = 1000` and `type_risk = 1000` (MethodologyValidation) but zero reputation and zero evidence, the composite is `0.30×1000 + 0×0.30 + 0×0.25 + 0.15×1000 = 450` — well below the 700 threshold. Bond alone cannot carry the composite into the "strong evidence" tier. Pinned by [`vector_v0_bond_heavy_evidence_zero`](../../mechanisms/m008-attestation-bonding/reference-impl/test_vectors/vector_v0_bond_heavy_evidence_zero.input.json). + +### 4d. Explicit `0` vs missing + +An explicit `attester_reputation: 0` is preserved — the nullish-coalescing default of 300 only fires when the key is `null` or `undefined`, not when it's `0`. This is different from the `||` operator's behavior. If you're migrating from a data source that returns 0 ambiguously, normalize upstream. + +### 4e. Type risk is a known-type lookup, not a computed value + +`type_risk` in the input is the final 0..1000 factor score, not the raw attestation type. The SPEC table maps types to fixed values: + +| Type | type_risk | +|---|---| +| `MethodologyValidation` | 1000 | +| `CreditIssuanceClaim` | 800 | +| `BaselineMeasurement` | 600 | +| `ProjectBoundary` | 400 | + +The reference impl exports `getTypeRiskFactor(type)` and `getMinBond(type)` helpers for this — use them when you're computing factors upstream. + +## 5. Runnable example + +The reference implementation ships with 7 test vectors (1 sample + 6 edge cases covering the zero floor, maximum ceiling, overflow clamping, empty-factors defaults, type-contribution isolation, and the bond-heavy attack vector). 
The self-test discovers every vector in `test_vectors/` automatically: + +```bash +node mechanisms/m008-attestation-bonding/reference-impl/m008_score.js +``` + +Expected output: + +``` +m008_score self-test: PASS (11 attestations across 7 vectors) +``` + +--- + +Canonical spec: [`mechanisms/m008-attestation-bonding/SPEC.md`](../../mechanisms/m008-attestation-bonding/SPEC.md) §5. diff --git a/docs/integrators/m009.md b/docs/integrators/m009.md new file mode 100644 index 0000000..78a8b07 --- /dev/null +++ b/docs/integrators/m009.md @@ -0,0 +1,120 @@ +# Integrator guide: m009 Service Provision Escrow + +**Mechanism:** Service Provision Escrow +**Canonical spec:** [`mechanisms/m009-service-escrow/SPEC.md`](../../mechanisms/m009-service-escrow/SPEC.md) +**Reference implementation:** [`mechanisms/m009-service-escrow/reference-impl/m009_score.js`](../../mechanisms/m009-service-escrow/reference-impl/m009_score.js) + +## 1. What this mechanism does + +m009 reviews milestone deliverables inside a service escrow agreement and returns one of three recommendations: **APPROVE**, **NEEDS_REVISION**, or **FLAG_FOR_CLIENT**. A service escrow agreement locks client funds against a set of milestones; each milestone must be reviewed before the escrowed funds are released. An integrator uses m009 to provide a consistent first-pass review of a milestone submission, letting a human arbiter focus attention on the cases the agent can't auto-approve. + +Unlike m001-enh (which has a single-guard REJECT branch), m009 uses a **dual-guard** recommendation model: APPROVE requires BOTH a high score AND high confidence, while FLAG_FOR_CLIENT fires when EITHER score is low OR confidence is low. This asymmetry protects both sides of the deal — the client never has a low-confidence submission auto-approved, and the provider never has a thin-evidence review auto-rejected. 
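The dual-guard branching can be sketched in a few lines (thresholds as documented in §3 of this guide; `recommendSketch` is an illustrative stand-in for the logic inside `computeM009Score`):

```js
// Illustrative m009 dual-guard recommendation (recommendSketch is hypothetical;
// the canonical branching lives in computeM009Score).
function recommendSketch(score, confidence) {
  if (score >= 700 && confidence >= 750) return "APPROVE";        // conjunction: BOTH must hold
  if (score < 400 || confidence < 250) return "FLAG_FOR_CLIENT";  // disjunction: EITHER fires it
  return "NEEDS_REVISION";                                        // middle state: human review
}

console.log(recommendSketch(795, 1000)); // APPROVE
console.log(recommendSketch(1000, 500)); // NEEDS_REVISION: high score alone never auto-approves
console.log(recommendSketch(800, 0));    // FLAG_FOR_CLIENT: unverifiable evidence goes to the client
```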
+ +Four weighted factors drive the composite: + +- `deliverable_quality` × 0.40 — methodology compliance and technical quality of the submission +- `evidence_completeness` × 0.25 — completeness of the supporting evidence package +- `milestone_consistency` × 0.20 — consistency with prior milestones in the same agreement +- `provider_reputation` × 0.15 — provider's M010 reputation (default 300 when no history) + +## 2. What you give it + +```js +import { computeM009Score } from "./m009_score.js"; + +const result = computeM009Score({ + milestone: { + milestone_id: "ms-001", + escrow_id: "escrow-001", + provider: "regen1provider001", + amount: { amount: "1000", denom: "uregen" }, + }, + factors: { + // Scoring inputs — each 0..1000, clamped. Defaults: + // quality: 0, evidence: 0, consistency: 0, provider_reputation: 300 + deliverable_quality: 850, + evidence_completeness: 800, + milestone_consistency: 750, + provider_reputation: 700, + + // Confidence flags — all use strict `=== true` (unlike m008 which + // uses `!== false` for two of its flags). This means UNSET counts + // as FALSE for m009. Set every flag explicitly if you want it to + // count. + reputation_available: true, + iri_resolvable: true, + has_prior_milestones: true, + spec_available: true, + }, +}); +``` + +**Schemas:** [`m009_milestone_review.schema.json`](../../mechanisms/m009-service-escrow/schemas/m009_milestone_review.schema.json) and the agreement lifecycle schema in the same directory. + +## 3. 
What you get back + +```js +{ + score: 795, // 0..1000 composite, clamped (340 + 200 + 150 + 105) + confidence: 1000, // 0..1000 (count of `=== true` flags / 4) + recommendation: "APPROVE", // APPROVE | NEEDS_REVISION | FLAG_FOR_CLIENT + factors: { // clamped factor values echoed back + deliverable_quality: 850, + evidence_completeness: 800, + milestone_consistency: 750, + provider_reputation: 700, + }, +} +``` + +**Recommendation rules — read carefully, the branches are NOT symmetric:** + +| Condition | Recommendation | +|---|---| +| `score >= 700` AND `confidence >= 750` | `APPROVE` | +| `score < 400` OR `confidence < 250` | `FLAG_FOR_CLIENT` | +| otherwise | `NEEDS_REVISION` | + +- APPROVE requires BOTH conditions (a conjunction). A perfect score with low confidence does NOT auto-approve — it falls through to NEEDS_REVISION for human review. +- FLAG_FOR_CLIENT fires when EITHER side of the OR is true. A perfect 1000 score with zero confidence flags set still fires FLAG_FOR_CLIENT: the client must review it manually, and the provider does not get auto-approved on evidence the system cannot verify. +- Everything else is NEEDS_REVISION — the middle state where neither guard wants to auto-decide. + +## 4. Common error modes + +### 4a. High score, low confidence → NEEDS_REVISION, not APPROVE + +A submission with a perfect 1000 score but only 2 of 4 flags true (confidence 500) does NOT auto-approve. The APPROVE branch requires confidence `>= 750`. The submission falls through to NEEDS_REVISION — human review required. Pinned by [`vector_v0_high_score_low_confidence_revision`](../../mechanisms/m009-service-escrow/reference-impl/test_vectors/vector_v0_high_score_low_confidence_revision.input.json). + +### 4b. High score, zero confidence → FLAG_FOR_CLIENT + +A submission with score 800 but zero confidence flags (confidence 0) does NOT approve — it fires FLAG_FOR_CLIENT because the confidence guard on the FLAG branch is `< 250`. 
This protects the client from an agent approving a "high quality" submission the system cannot verify at all. Pinned by [`vector_v0_confidence_floor_forces_flag`](../../mechanisms/m009-service-escrow/reference-impl/test_vectors/vector_v0_confidence_floor_forces_flag.input.json). + +### 4c. Score exactly at 400 is NEEDS_REVISION, not FLAG + +The FLAG_FOR_CLIENT predicate is `score < 400` — strict inequality. A score of exactly 400 does NOT fire the flag. It falls through to NEEDS_REVISION. A future refactor that changes `<` to `<=` would silently flip this boundary, which is why [`vector_v0_boundary_revision_exact_400`](../../mechanisms/m009-service-escrow/reference-impl/test_vectors/vector_v0_boundary_revision_exact_400.input.json) pins it. + +### 4d. Unlike m008, the `=== true` guard is strict + +All four m009 confidence flags use strict `=== true`. Unset flags count as FALSE, not TRUE. If you're porting from m008 (which uses `!== false` for two of its flags), re-audit your flag-setting code — `iri_resolvable: undefined` counts in m008 but NOT in m009. + +### 4e. First-time provider gets the reputation default + +`provider_reputation` defaults to **300** when unset, not a neutral 500. This matches m008's attester default: the SPEC treats first-time providers as higher risk than a neutral baseline, especially in escrow agreements. Set `reputation_available: false` to also drop the confidence contribution. + +## 5. Runnable example + +The reference implementation ships with 7 test vectors covering the happy path (sample), the dual-guard APPROVE boundary at (700, 750), the score-high/confidence-low fallback, the exact 399 FLAG boundary, the exact 400 REVISION boundary, the confidence-floor-forces-flag case, and the overflow clamp. 
The self-test discovers every vector in `test_vectors/` automatically: + +```bash +node mechanisms/m009-service-escrow/reference-impl/m009_score.js +``` + +Expected output: + +``` +m009_score self-test: PASS (11 milestones across 7 vectors) +``` + +--- + +Canonical spec: [`mechanisms/m009-service-escrow/SPEC.md`](../../mechanisms/m009-service-escrow/SPEC.md) §5. diff --git a/docs/integrators/m010.md b/docs/integrators/m010.md new file mode 100644 index 0000000..d4998b6 --- /dev/null +++ b/docs/integrators/m010.md @@ -0,0 +1,122 @@ +# Integrator guide: m010 Reputation Signal + +**Mechanism:** Reputation Signal +**Canonical spec:** [`mechanisms/m010-reputation-signal/SPEC.md`](../../mechanisms/m010-reputation-signal/SPEC.md) +**Reference implementation:** [`mechanisms/m010-reputation-signal/reference-impl/m010_score.js`](../../mechanisms/m010-reputation-signal/reference-impl/m010_score.js) + +## 1. What this mechanism does + +m010 computes a reputation score for a subject (credit class, project, verifier, methodology, or address) by aggregating endorsement signals from signalers, with exponential time decay so old signals fade out. An integrator uses m010 to power an "is this subject trustworthy?" check — the output feeds M001-ENH credit class approval, M011 marketplace curation, and any other downstream mechanism that needs a legitimacy signal for a subject. + +The v0 implementation is an **advisory, decay-weighted average**: each endorsement level (1-5) is normalized to 0-1 and weighted by `exp(-ln(2) × age_hours / halfLifeHours)`. Stake weighting is reserved for v1 when on-chain stake data is available. The v0 output is normalized to `[0, 1]`; the v1 target is 0-1000. + +Unlike the other scoring mechanisms in this suite, m010 is **event-driven** rather than factor-driven — you pass it a list of historical signals plus a scoring point-in-time, and it computes the score from the events. No factors to tune, no defaults to remember. + +## 2. 
What you give it + +```js +import { computeM010Score } from "./m010_score.js"; + +const result = computeM010Score({ + as_of: "2026-02-18T12:00:00Z", + events: [ + { + signal_id: "sig-001", + timestamp: "2026-02-01T00:00:00Z", // when the signal was submitted + endorsement_level: 5, // integer 1..5, higher = more positive + signaler: "regen1signaler001", + status: "active", // active | resolved_valid | withdrawn | challenged | escalated | resolved_invalid | invalidated + }, + { + signal_id: "sig-002", + timestamp: "2025-12-01T00:00:00Z", // older → larger decay + endorsement_level: 4, + signaler: "regen1signaler002", + status: "active", + }, + { + signal_id: "sig-003", + timestamp: "2026-02-10T00:00:00Z", + endorsement_level: 1, // a strong negative endorsement + signaler: "regen1signaler003", + status: "resolved_valid", // challenged but resolved in the signal's favor — still counts + }, + ], + halfLifeHours: 336, // optional; default 14 days (14*24 = 336) + useStakeWeighting: false, // reserved for v1 +}); +``` + +**Contributing statuses:** only `active` and `resolved_valid` signals contribute to the score. Signals with status `withdrawn`, `challenged`, `escalated`, `resolved_invalid`, or `invalidated` are excluded entirely (not decayed-to-zero — **not counted at all**). + +**Schemas:** [`m010_signal.schema.json`](../../mechanisms/m010-reputation-signal/schemas/m010_signal.schema.json) for the event shape, [`m010_challenge.schema.json`](../../mechanisms/m010-reputation-signal/schemas/m010_challenge.schema.json) for the challenge lifecycle state machine. + +## 3. 
What you get back + +```js +{ + reputation_score_0_1: 0.7543, // v0: 0..1 (four decimal places) +} +``` + +**v0 formula:** + +``` +decay(t) = exp(-ln(2) × (as_of - t) / halfLifeHours) +contribution(e) = (endorsement_level(e) / 5) × decay(e.timestamp) +score = sum(contribution(e)) / sum(decay(e.timestamp)) +``` + +The normalization is a weighted average, not a weighted sum — a subject with 100 unanimous 5-star endorsements gets the same score as a subject with 1 recent 5-star endorsement (both land at 1.0). **The score measures signal quality, not signal volume.** If you need a volume signal, aggregate the event count separately. + +**v1 target formula** (not yet implemented): + +``` +score_1000 = sum(stake × decay × level / 5) / sum(stake × decay) × 1000 +``` + +The interface already accepts a `useStakeWeighting` flag and an optional `stake` field on each event for forward compatibility. Today the flag is ignored and the score is un-weighted. + +## 4. Common error modes + +### 4a. Empty event list + +`events: []` returns `{ reputation_score_0_1: 0.0 }` — not an error, just "no signal". Downstream should distinguish this from a score of 0.0 achieved through actual negative endorsements. The SPEC suggests using a `has_signal` flag upstream to carry the distinction. + +### 4b. Signal in the future + +If an event's `timestamp` is after `as_of`, the computed `age` would be negative. The reference impl clamps age at 0 via `Math.max(0, ageH)` — future-dated signals are treated as fresh (decay = 1.0) rather than producing an `exp()` blow-up. In practice this only happens when you're scoring "as of now" against a clock-skewed signal; always prefer a deterministic `as_of` that's strictly in the past. + +### 4c. Unsupported status + +If a signal has a status the mechanism doesn't recognize, the reference impl's guard is `!contributingStatuses.has(status)` — unknown statuses are excluded. 
If you're reading signals from a data source that might surface newer status values than this version of m010 knows about, the signal gets quietly dropped. Always normalize status upstream rather than assuming the mechanism will handle new values. + +### 4d. Half-life mismatch between consumers + +The default half-life is 336 hours (14 days). If you change this in one consumer, every other consumer reading the same signal set will produce a different score. Keep the half-life consistent across downstream consumers — it's a governance parameter, not a per-call tuning knob. Any proposal to change the value should go through the Tokenomics Working Group. + +### 4e. Stake weighting is a no-op in v0 + +Passing `useStakeWeighting: true` in v0 does nothing. The `stake` field on events is also ignored. The interface is forward-compatible with v1, but today's output will be identical whether you pass the flag or not. + +## 5. Runnable example + +The reference implementation ships with 5 test vectors covering the sample, the challenge lifecycle (challenge → escalated, challenge → edge timing, challenge → resolved). The repository also registers m010 in the root `scripts/verify.mjs` with **dedicated verification scripts** (`scripts/verify-m010-reference-impl.mjs` and `scripts/verify-m010-datasets.mjs`) that go deeper than the other mechanisms' self-tests. + +To run the mechanism's own self-test: + +```bash +node mechanisms/m010-reputation-signal/reference-impl/m010_score.js +``` + +To run the full per-mechanism verification: + +```bash +node scripts/verify.mjs +``` + +The v0_sample fixture yields `reputation_score_0_1: 0.5488` — use that as a sanity check for your own integration. + +--- + +Canonical spec: [`mechanisms/m010-reputation-signal/SPEC.md`](../../mechanisms/m010-reputation-signal/SPEC.md) §5. 
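For integrators sanity-checking their wiring, the §3 formula can be reproduced end-to-end in a few lines (an illustrative sketch only; `m010Sketch` is a hypothetical name, and `computeM010Score` from the reference impl remains canonical — status filtering follows §2, age clamping follows §4b):

```js
// Illustrative v0 decay-weighted average (m010Sketch is hypothetical; use
// computeM010Score from the reference impl in real integrations).
function m010Sketch({ as_of, events, halfLifeHours = 336 }) {
  const contributing = new Set(["active", "resolved_valid"]); // all other statuses are dropped
  const asOfMs = Date.parse(as_of);
  let num = 0;
  let den = 0;
  for (const e of events) {
    if (!contributing.has(e.status)) continue;
    const ageH = Math.max(0, (asOfMs - Date.parse(e.timestamp)) / 3_600_000); // clamp future-dated signals
    const decay = Math.exp((-Math.LN2 * ageH) / halfLifeHours);
    num += (e.endorsement_level / 5) * decay;
    den += decay;
  }
  return den === 0 ? 0 : Number((num / den).toFixed(4)); // empty list yields 0, "no signal"
}

// Two same-age signals at levels 5 and 3 average to (1.0 + 0.6) / 2 = 0.8;
// the withdrawn signal is excluded entirely.
console.log(m010Sketch({
  as_of: "2026-02-18T12:00:00Z",
  events: [
    { timestamp: "2026-02-18T12:00:00Z", endorsement_level: 5, status: "active" },
    { timestamp: "2026-02-18T12:00:00Z", endorsement_level: 3, status: "active" },
    { timestamp: "2026-02-18T12:00:00Z", endorsement_level: 1, status: "withdrawn" },
  ],
})); // 0.8
```

Because the score is a weighted average, the decay terms cancel for a single contributing signal: one lone 5-star endorsement scores 1.0 at any age, which is exactly the quality-not-volume caveat from §3.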
diff --git a/docs/integrators/m011.md b/docs/integrators/m011.md new file mode 100644 index 0000000..47cf6ce --- /dev/null +++ b/docs/integrators/m011.md @@ -0,0 +1,141 @@ +# Integrator guide: m011 Marketplace Curation + +**Mechanism:** Marketplace Curation & Quality Signals +**Canonical spec:** [`mechanisms/m011-marketplace-curation/SPEC.md`](../../mechanisms/m011-marketplace-curation/SPEC.md) +**Reference implementation:** [`mechanisms/m011-marketplace-curation/reference-impl/m011_score.js`](../../mechanisms/m011-marketplace-curation/reference-impl/m011_score.js) + +## 1. What this mechanism does + +m011 scores ecological credit batches on a 0-1000 composite that reflects "how curation-worthy is this batch?" — an input for marketplace display priority, automated collection membership, and the curation-quality dashboard. An integrator uses m011 to filter a raw batch list into a curated set, to rank batches in a marketplace view, or to flag batches whose quality has dropped below a collection's floor. + +Seven weighted factors drive the composite — the most of any mechanism in the framework: + +- `project_reputation` × 0.25 — M010 reputation for the project +- `class_reputation` × 0.20 — M010 reputation for the credit class +- `vintage_freshness` × 0.15 — linear decay over 10 years since batch start date +- `verification_recency` × 0.15 — linear decay over 3 years since last verification +- `seller_reputation` × 0.10 — M010 reputation for the seller +- `price_fairness` × 0.10 — deviation from class median price +- `additionality_confidence` × 0.05 — methodology additionality assessment + +The weights sum to exactly 1.0. The two reputation weights (project + class) dominate at 0.45 combined. The smallest factor (`additionality_confidence`) is at 0.05 — a factor cap that stops the methodology layer from single-handedly moving the composite. + +## 2. 
What you give it + +```js +import { computeM011Score } from "./m011_score.js"; + +const result = computeM011Score({ + batch: { + batch_denom: "C01-001-20240101-20241231-001", + project_id: "P-regen-42", + class_id: "C", + issuer: "regen1issuer042", + seller: "regen1seller042", + }, + factors: { + // Every scoring factor is 0..1000, clamped. All default to 0. + project_reputation: 920, + class_reputation: 850, + vintage_freshness: 900, // see computeVintageFreshness helper + verification_recency: 850, // see computeVerificationRecency helper + seller_reputation: 780, + price_fairness: 800, // see computePriceFairness helper + additionality_confidence: 700, + + // Confidence flags. NOTE: vintage_known uses `!== false` semantics + // (unset counts as "available"). All others are strict truthy checks. + project_reputation_available: true, + class_reputation_available: true, + seller_reputation_available: true, + vintage_known: true, + verification_date_known: true, + price_data_available: true, + methodology_available: true, + }, +}); +``` + +**Helpers exported by the reference impl** for computing the three derived factors upstream: + +- `computeVintageFreshness(ageYears)` → `1000` at issuance, `0` at 10 years, linear in between +- `computeVerificationRecency(ageYears)` → `1000` at last verification, `0` at 3 years, linear in between +- `computePriceFairness(listingPrice, medianPrice)` → `1000` at median, `0` at ±50% deviation, linear in between + +Use the helpers so you don't drift from the canonical formulas. If the SPEC adjusts the decay curves in a future version, your factor values automatically follow. + +**Schemas:** [`m011_quality_score.schema.json`](../../mechanisms/m011-marketplace-curation/schemas/m011_quality_score.schema.json) and [`m011_collection.schema.json`](../../mechanisms/m011-marketplace-curation/schemas/m011_collection.schema.json). + +## 3. 
What you get back + +```js +{ + score: 856, // 0..1000 composite, clamped (Math.round(855.5) = 856) + confidence: 1000, // 0..1000 (count_of_true_flags / 7 * 1000) + factors: { // clamped factor values echoed back + project_reputation: 920, + class_reputation: 850, + vintage_freshness: 900, + verification_recency: 850, + seller_reputation: 780, + price_fairness: 800, + additionality_confidence: 700, + }, +} +``` + +No `recommendation` field — m011 is a pure score. The SPEC suggests the following interpretation for marketplace tiers: + +| Score | Collection tier | +|---|---| +| `>= 800` | Premium — high-confidence collection floor | +| `500..799` | Standard — default marketplace display | +| `< 500` | Restricted — flagged for review, challenge-eligible | + +Downstream tooling applies its own threshold. Collections can set a `min_quality_score` at instantiate time and reject batches below it. + +## 4. Common error modes + +### 4a. The `vintage_known` default asymmetry + +Six of the seven confidence flags use strict truthy checks — unset counts as FALSE. But `vintage_known` uses `!== false`, so: + +- `vintage_known: true` → counts (confidence gains 143) +- `vintage_known` undefined → counts (default is "available") +- `vintage_known: false` → does NOT count + +This mirrors the asymmetry in m008 but applies to a different flag. If you're porting confidence-flag code between mechanisms, re-audit every flag. Pinned by [`vector_v0_empty_factors_default_vintage_flag`](../../mechanisms/m011-marketplace-curation/reference-impl/test_vectors/vector_v0_empty_factors_default_vintage_flag.input.json). + +### 4b. Weight-swap attack + +A refactor that accidentally swaps `additionality_confidence` (0.05) with `price_fairness` (0.10) would change every batch's score by 50 points at most — hard to notice in aggregate but meaningfully distortionary for batches near a tier boundary. 
[`vector_v0_additionality_only`](../../mechanisms/m011-marketplace-curation/reference-impl/test_vectors/vector_v0_additionality_only.input.json) isolates the 0.05 weight by setting all other factors to 0 so the contribution is exactly `50` — any refactor that accidentally promotes additionality breaks this vector immediately. + +### 4c. Reputation dominates the composite + +The two reputation weights sum to 0.45 — nearly half the total. A batch with perfect reputation but zero everything else scores 450, nearly at the "Standard" tier already. Conversely, a batch with zero reputation but perfect everything else scores 550. This is intentional — the SPEC treats reputation as the single most important curation signal. If you're building a display layer that needs to surface *non-reputation* quality signals to users, compute them from the factor `breakdown` side (not covered here — see the SPEC §5.3 for the breakdown API). + +### 4d. Vintage and verification decay curves are fixed + +The helpers assume fixed decay windows: vintage decays to 0 at 10 years, verification decays to 0 at 3 years. These are governance parameters in the SPEC but baked into the reference impl today. If your integration needs different windows, pass pre-computed factor values rather than calling the helpers. + +### 4e. Clamp behavior on out-of-range input + +Every factor is clamped to `[0, 1000]` before weighting. Pass `project_reputation: 1500` and you get the same score as `1000`. Pass `price_fairness: -200` and you get the same score as `0`. The clamp is pinned by [`vector_v0_overflow_clamp`](../../mechanisms/m011-marketplace-curation/reference-impl/test_vectors/vector_v0_overflow_clamp.input.json) — a refactor that silently drops the clamp would break the vector. + +## 5. 
Runnable example + +The reference implementation ships with 7 test vectors (1 sample + 6 edge cases covering zero floor, maximum ceiling, overflow clamp, empty-factors defaults, reputation-only isolation, and additionality-only isolation). The self-test discovers every vector automatically: + +```bash +node mechanisms/m011-marketplace-curation/reference-impl/m011_score.js +``` + +Expected output: + +``` +m011_score self-test: PASS (11 batches across 7 vectors) +``` + +--- + +Canonical spec: [`mechanisms/m011-marketplace-curation/SPEC.md`](../../mechanisms/m011-marketplace-curation/SPEC.md) §5. diff --git a/docs/integrators/m013.md b/docs/integrators/m013.md new file mode 100644 index 0000000..043c073 --- /dev/null +++ b/docs/integrators/m013.md @@ -0,0 +1,102 @@ +# Integrator guide: m013 Value-Based Fee Routing + +**Mechanism:** Value-Based Fee Routing +**Canonical spec:** [`mechanisms/m013-value-based-fee-routing/SPEC.md`](../../mechanisms/m013-value-based-fee-routing/SPEC.md) +**Reference implementation:** [`mechanisms/m013-value-based-fee-routing/reference-impl/m013_fee.js`](../../mechanisms/m013-value-based-fee-routing/reference-impl/m013_fee.js) + +## 1. What this mechanism does + +m013 computes the fee on a credit transaction and splits it across four purpose-specific pools: `burn`, `validator`, `community`, and `agent`. An integrator uses m013 to quote a fee before a user submits a transaction (so the UI can display it), to reconcile post-transaction pool balances, or to audit whether the fee router distributed funds correctly. + +The fee is a value-proportional rate on the transaction amount, floored at a configurable minimum. Four tx types have independent rates. The distribution across pools is configurable but must always sum to exactly the fee amount — this is the **Fee Conservation invariant**, and it's enforced by computing the `validator` share as the arithmetic remainder after flooring the other three. + +## 2. 
What you give it + +```js +import { computeFee } from "./m013_fee.js"; + +const result = computeFee({ + tx_type: "CreditIssuance", // CreditIssuance / CreditTransfer / CreditRetirement / MarketplaceTrade + value: 5_000_000_000, // uregen (number, not string — values fit in safe range) + fee_config: { + fee_rates: { + CreditIssuance: 0.02, // 2% + CreditTransfer: 0.001, // 0.1% + CreditRetirement: 0.005, // 0.5% + MarketplaceTrade: 0.01, // 1% + }, + distribution_shares: { + burn: 0.30, + validator: 0.40, + community: 0.25, + agent: 0.05, + }, + min_fee_uregen: 1_000_000, // 1 REGEN floor + }, +}); +``` + +**`fee_config` is optional.** If omitted, the reference impl uses `DEFAULT_FEE_RATES` and `DEFAULT_DISTRIBUTION_SHARES` which match the SPEC Model A baseline (30/40/25/5). Integrators building against the Economic Reboot governance proposal should override with Proposal A values (15/30/50/5) — see `docs/governance/needs-governance-proposals.md`. + +**Unknown tx_type throws.** There is no soft failure path: the call throws `Error: Unknown tx_type: `. The caller must validate upstream or wrap in a try/catch. There is no "default" tx type. + +**Schemas:** [`m013_fee_event.schema.json`](../../mechanisms/m013-value-based-fee-routing/schemas/m013_fee_event.schema.json) and [`m013_fee_config.schema.json`](../../mechanisms/m013-value-based-fee-routing/schemas/m013_fee_config.schema.json). + +## 3. What you get back + +```js +{ + fee_amount: 100_000_000, // uregen + min_fee_applied: false, // true if the floor kicked in + distribution: { + burn: 30_000_000, // uregen + validator: 40_000_000, // uregen — computed as remainder + community: 25_000_000, // uregen + agent: 5_000_000, // uregen + }, +} +``` + +**Fee Conservation invariant:** `burn + validator + community + agent === fee_amount`. This is enforced by flooring `burn`, `community`, and `agent` independently and computing `validator` as `fee_amount - burn - community - agent`. 
Any rounding remainder is absorbed into the validator pool. A refactor that accidentally floors the validator share instead would leak dust — [`vector_v0_rounding_remainder_validator_absorbs`](../../mechanisms/m013-value-based-fee-routing/reference-impl/test_vectors/vector_v0_rounding_remainder_validator_absorbs.input.json) pins this. + +## 4. Common error modes + +### 4a. Unknown `tx_type` throws + +If you pass a tx_type that isn't in the `fee_rates` map, `computeFee` throws `Error: Unknown tx_type: `. The caller must validate upstream or catch the exception. There is no "default" type — pick one explicitly. + +### 4b. Min-fee boundary is strict + +The floor predicate is `rawFee < minFee`, not `<=`. A rawFee that lands **exactly** on `min_fee_uregen` does NOT fire `min_fee_applied: true` — it passes through as-is. A refactor that changes `<` to `<=` would flip this boundary silently, which is why [`vector_v0_min_fee_boundary_exact_equal`](../../mechanisms/m013-value-based-fee-routing/reference-impl/test_vectors/vector_v0_min_fee_boundary_exact_equal.input.json) pins it. + +### 4c. Zero value still pays the minimum + +A degenerate `value: 0` transaction produces `rawFee: 0`, which is strictly less than any positive `min_fee_uregen`. The floor fires, the transaction pays the minimum fee, and the distribution is split across all four pools as normal. Pinned by [`vector_v0_zero_value_fee_falls_to_min`](../../mechanisms/m013-value-based-fee-routing/reference-impl/test_vectors/vector_v0_zero_value_fee_falls_to_min.input.json). + +### 4d. Custom distribution shares override the defaults + +Pass a `distribution_shares` object in `fee_config` and the reference impl uses it without falling back to `DEFAULT_DISTRIBUTION_SHARES`. 
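The remainder pattern holds for any shares you pass. As a standalone sketch of the arithmetic (an illustration only, not the reference impl's code — `splitFee` is a hypothetical name):

```js
// Sketch of the Fee Conservation arithmetic: floor three shares,
// give the validator pool whatever remains (hypothetical helper).
function splitFee(feeAmount, shares) {
  const burn = Math.floor(feeAmount * shares.burn);
  const community = Math.floor(feeAmount * shares.community);
  const agent = Math.floor(feeAmount * shares.agent);
  // validator absorbs the rounding remainder, so the four pools
  // always sum back to feeAmount exactly
  const validator = feeAmount - burn - community - agent;
  return { burn, validator, community, agent };
}

// 100_000_001 uregen doesn't divide evenly — the spare 1 uregen of
// dust lands in the validator pool instead of leaking
const split = splitFee(100_000_001, { burn: 0.30, validator: 0.40, community: 0.25, agent: 0.05 });
console.log(split.validator); // 40000001
console.log(split.burn + split.validator + split.community + split.agent === 100_000_001); // true
```

Note that `shares.validator` never enters the arithmetic at all; the validator pool is defined purely by subtraction, which is what makes the invariant unbreakable for well-formed shares.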
The Economic Reboot Proposal A distribution (15/30/50/5) is a common override — [`vector_v0_custom_distribution_shares`](../../mechanisms/m013-value-based-fee-routing/reference-impl/test_vectors/vector_v0_custom_distribution_shares.input.json) verifies this path. If your consumer needs to track which distribution is in effect, read `fee_config.distribution_shares` from the on-chain governance parameter rather than hardcoding the values. + +### 4e. Shares must sum to exactly 1.0 + +The reference impl does NOT validate that `distribution_shares` sums to 1.0 — it just trusts the input and computes `validator` as the remainder. If you pass shares that sum to 0.8, the validator pool gets 20% extra. If you pass shares summing to 1.2, the validator pool goes negative. Validate upstream. Real governance parameters are enforced on-chain, but the pure JS reference impl is permissive. + +## 5. Runnable example + +The reference implementation ships with 7 test vectors covering the sample, zero-value-forces-min, min-fee boundary one-below, min-fee boundary exact-equal, rounding-remainder-validator-absorbs, all-four-tx-types (verifying rate mapping), and custom-distribution-shares (Economic Reboot Proposal A values). The self-test discovers every vector automatically: + +```bash +node mechanisms/m013-value-based-fee-routing/reference-impl/m013_fee.js +``` + +Expected output: + +``` +m013_fee self-test: PASS (14 events across 7 vectors) +``` + +The mechanism is registered in `scripts/verify.mjs`, so `node scripts/verify.mjs` runs the same self-test as part of CI. + +--- + +Canonical spec: [`mechanisms/m013-value-based-fee-routing/SPEC.md`](../../mechanisms/m013-value-based-fee-routing/SPEC.md) §5. 
diff --git a/docs/integrators/m015.md b/docs/integrators/m015.md new file mode 100644 index 0000000..19a0c0e --- /dev/null +++ b/docs/integrators/m015.md @@ -0,0 +1,170 @@ +# Integrator guide: m015 Contribution-Weighted Rewards + +**Mechanism:** Contribution-Weighted Rewards +**Canonical spec:** [`mechanisms/m015-contribution-weighted-rewards/SPEC.md`](../../mechanisms/m015-contribution-weighted-rewards/SPEC.md) +**Reference implementation:** [`mechanisms/m015-contribution-weighted-rewards/reference-impl/m015_score.js`](../../mechanisms/m015-contribution-weighted-rewards/reference-impl/m015_score.js) + +## 1. What this mechanism does + +m015 computes **activity scores** for individual participants and the **stability tier allocation** for the community pool each period. The activity score drives reward distribution — a participant who purchases credits, retires credits, facilitates trades, votes on governance, and submits proposals earns a weighted contribution score. The stability allocation locks a portion of the community pool inflow into a passive-return tier for long-term holders, capped at a configurable share of the pool. + +An integrator uses m015 in two distinct shapes: + +1. **Per-participant activity scoring** — call `computeActivityScore` to rank contributors, power a rewards dashboard, or feed the activity-pool distribution math +2. **Per-period stability allocation** — call `computeStabilityAllocation` to split the community pool inflow between the stability tier (passive returns to committed holders) and the activity pool (rewards to active contributors) + +Five weighted activities drive the per-participant score: + +- `credit_purchase` × 0.30 +- `credit_retirement` × 0.30 +- `platform_facilitation` × 0.20 +- `governance_voting` × 0.10 +- `proposal_submission` × 0.10 + +The weights sum to exactly 1.0. 
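Restated as a standalone sketch (the helper name and flat input shape are illustrative, not the reference impl's exported API):

```js
// Sketch of the five-factor weighted sum; weights from the list above.
const ACTIVITY_WEIGHTS = {
  credit_purchase: 0.30,
  credit_retirement: 0.30,
  platform_facilitation: 0.20,
  governance_voting: 0.10,
  proposal_submission: 0.10,
};

// raw maps each dimension to its raw value (uregen amounts, a vote
// count, and a proposal credit mixed together — see §3 on units)
function weightedActivitySum(raw) {
  return Object.entries(ACTIVITY_WEIGHTS)
    .reduce((sum, [dim, w]) => sum + (raw[dim] ?? 0) * w, 0);
}

console.log(weightedActivitySum({
  credit_purchase: 500_000,
  credit_retirement: 250_000,
  platform_facilitation: 100_000,
  governance_voting: 3,
  proposal_submission: 1.5,
})); // ~245000.45, the total_score shown in §3 (up to float rounding)
```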
The two credit-flow weights (purchase + retirement) dominate at 0.60 combined, reflecting the SPEC's view that actual credit economic activity matters more than governance participation for the contribution rewards tier. + +## 2. What you give it + +### Per-participant activity score + +```js +import { computeActivityScore } from "./m015_score.js"; + +const result = computeActivityScore({ + activities: { + credit_purchase_value: 500_000, // uregen + credit_retirement_value: 250_000, // uregen + platform_facilitation_value: 100_000, // uregen + governance_votes_cast: 3, // count + + // Proposal submissions with anti-gaming credit rules. See below. + proposals: [ + { passed: true, reached_quorum: true }, // 1.0 credit + { passed: false, reached_quorum: true }, // 0.5 credit (reached quorum but failed) + { passed: false, reached_quorum: false }, // 0.0 credit (no quorum = no credit) + ], + }, +}); +``` + +**Proposal anti-gaming rules** (in `computeProposalCredit`): + +| Proposal outcome | Credit toward activity score | +|---|---| +| Reached quorum AND passed | 1.0 | +| Reached quorum AND failed | 0.5 | +| Did not reach quorum | 0.0 | + +This protects against "proposal spam" — submitting lots of low-quality proposals that never reach quorum does NOT inflate the contributor's score. Passing proposals earn full credit; serious-but-failing proposals earn half credit; no-quorum proposals earn nothing. 
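The quorum filter is simple enough to restate as code. This is a re-implementation of the table above for illustration, not the reference impl's source:

```js
// Anti-gaming credit: no quorum earns nothing, regardless of outcome.
function computeProposalCredit(p) {
  if (!p.reached_quorum) return 0.0; // proposal-spam filter fires first
  return p.passed ? 1.0 : 0.5;       // full credit if passed, half if serious-but-failed
}

// The three proposals from the example above:
const credits = [
  { passed: true, reached_quorum: true },   // 1.0
  { passed: false, reached_quorum: true },  // 0.5
  { passed: false, reached_quorum: false }, // 0.0
].map(computeProposalCredit);

console.log(credits.reduce((s, c) => s + c, 0)); // 1.5
```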
+ +### Per-period stability allocation + +```js +import { computeStabilityAllocation } from "./m015_score.js"; + +const result = computeStabilityAllocation({ + community_pool_inflow: 10_000_000_000, // uregen for this period + stability_commitments: [ // active commitments + { committed_amount_uregen: 5_000_000_000 }, + { committed_amount_uregen: 2_500_000_000 }, + { committed_amount_uregen: 1_000_000_000 }, + ], + periods_per_year: 52, // optional; default 52 (weekly) + max_stability_share: 0.30, // optional; default 30% cap + annual_return: 0.06, // optional; default 6% annual +}); +``` + +**Schemas:** [`m015_activity_score.schema.json`](../../mechanisms/m015-contribution-weighted-rewards/schemas/m015_activity_score.schema.json) and [`m015_stability_commitment.schema.json`](../../mechanisms/m015-contribution-weighted-rewards/schemas/m015_stability_commitment.schema.json). + +## 3. What you get back + +### Activity score + +```js +{ + total_score: 245_000.45, // weighted sum (150_000 + 75_000 + 20_000 + 0.30 + 0.15) + breakdown: { + credit_purchase: { + raw_value: 500_000, + weight: 0.30, + weighted_value: 150_000, + }, + credit_retirement: { + raw_value: 250_000, + weight: 0.30, + weighted_value: 75_000, + }, + platform_facilitation: { + raw_value: 100_000, + weight: 0.20, + weighted_value: 20_000, + }, + governance_voting: { + raw_value: 3, // note: raw count, not uregen + weight: 0.10, + weighted_value: 0.30, // count × weight + }, + proposal_submission: { + raw_value: 1.5, // effective credit (passed + half-credit) + weight: 0.10, + weighted_value: 0.15, + }, + }, +} +``` + +**Note the unit mismatch:** the first three activities are in uregen, while `governance_voting` is a count and `proposal_submission` is a unit-less credit. The `total_score` is a dimensionless sum — useful for ranking participants, but not directly a uregen value. 
Downstream distribution logic divides by `total_activity_score` (the sum across all participants for the period) to compute each participant's share of the activity pool. + +### Stability allocation + +```js +{ + stability_allocation: 9_807_692, // uregen locked for stability tier this period + activity_pool: 9_990_192_308, // uregen remaining for activity rewards +} +``` + +**Cap enforcement:** `stability_allocation` cannot exceed `community_pool_inflow × max_stability_share` (default 30%). If the computed allocation (sum of commitments × periodic return rate) exceeds the cap, it's clamped to the cap and the activity pool retains the remainder. This is the **30% cap** referenced throughout the SPEC. + +## 4. Common error modes + +### 4a. Proposal spam doesn't pay + +A participant who submits 100 proposals that never reach quorum gets **0.0 proposal credit**, not 10.0. The `computeProposalCredit` function filters on `reached_quorum` before awarding any credit. If you see a participant with a high raw proposal count but zero proposal contribution, the quorum filter is working correctly. + +### 4b. Stability tier cap vs uncapped + +When the stability commitments × annual return (pro-rated to the period) would exceed `max_stability_share × community_pool_inflow`, the allocation is clamped to the cap. The participants get less than their contractual return for that period. The SPEC treats this as an acceptable degradation — the cap exists to prevent the stability tier from draining the activity pool entirely during a low-inflow period. Pinned by [`vector_v0_cap_overflow`](../../mechanisms/m015-contribution-weighted-rewards/reference-impl/test_vectors/vector_v0_cap_overflow.input.json). + +### 4c. Zero activity participant + +A participant with zero activity across all five dimensions scores `total_score: 0`. They earn no distribution from the activity pool. 
The distribution logic must not divide by zero — check `total_activity_score > 0` before computing shares, or the reward math produces `Infinity`. Pinned by [`vector_v0_zero_activity`](../../mechanisms/m015-contribution-weighted-rewards/reference-impl/test_vectors/vector_v0_zero_activity.input.json). + +### 4d. Early exit commitment + +Stability commitments that exit before their lock expires suffer a **50% early exit penalty** (per the SPEC). The reference impl doesn't handle the exit transaction itself — that's a contract-side concern — but the activity pool computation uses the **remaining committed amount** after any early exits in the period. If your consumer integrates with the `contribution-rewards` contract, read the post-exit state rather than the pre-exit state. Pinned by [`vector_v0_early_exit`](../../mechanisms/m015-contribution-weighted-rewards/reference-impl/test_vectors/vector_v0_early_exit.input.json). + +### 4e. Unit confusion between activity dimensions + +Three of the five activity dimensions are in uregen (`credit_purchase`, `credit_retirement`, `platform_facilitation`), and two are counts (`governance_votes_cast`, effective proposal credit). The `total_score` mixes them via weights. Do NOT interpret `total_score` as a uregen amount — it's a ranking number. Use the breakdown if you need to surface per-dimension values in a UI with proper units. + +## 5. Runnable example + +The reference implementation ships with 8 test vectors, the most of any mechanism in the framework, covering the sample, zero-activity, early-exit, and cap-overflow cases, among others. The self-test runs all vectors and validates both `computeActivityScore` and `computeStabilityAllocation`: + +```bash +node mechanisms/m015-contribution-weighted-rewards/reference-impl/m015_score.js +``` + +Expected output: + +``` +m015_score self-test: ... passed +``` + +The mechanism is registered in `scripts/verify.mjs`, so `node scripts/verify.mjs` runs the same self-test as part of CI. 
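For completeness, the stability-tier arithmetic from §3 can be reproduced standalone. This sketch assumes the periodic return is truncated to whole uregen (consistent with the integer outputs shown in §3); the function and field names are illustrative, not the reference impl's API:

```js
// Sketch of the per-period stability allocation with the 30% cap
// described in §3 (standalone illustration; the reference impl is
// m015_score.js). Assumes integer truncation of the periodic return.
function stabilityAllocation({ inflow, commitments, periodsPerYear = 52,
                               maxShare = 0.30, annualReturn = 0.06 }) {
  const committed = commitments.reduce((s, c) => s + c, 0);
  const uncapped = Math.floor(committed * (annualReturn / periodsPerYear));
  const cap = Math.floor(inflow * maxShare);
  const stability = Math.min(uncapped, cap); // cap enforcement (see 4b)
  return { stability, activityPool: inflow - stability };
}

// The §2 example: 8.5e9 uregen committed, 10e9 inflow, weekly periods
const r = stabilityAllocation({
  inflow: 10_000_000_000,
  commitments: [5_000_000_000, 2_500_000_000, 1_000_000_000],
});
console.log(r.stability, r.activityPool); // 9807692 9990192308
```

The two outputs match the §3 example, and `stability + activityPool` always equals the inflow, so the split never creates or destroys uregen.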
+ +--- + +Canonical spec: [`mechanisms/m015-contribution-weighted-rewards/SPEC.md`](../../mechanisms/m015-contribution-weighted-rewards/SPEC.md) §5.