Conversation
- Add `_setStoredBlobHash` helper and `BATCH_BLOB_VERSIONED_HASHES_SLOT` in L1MessageBaseTest
- Preset `batchBlobVersionedHashes` in tests so commitBatch succeeds without a blob tx
- test_commitAndFinalizeWithL1Messages_succeeds: set stored blob hash for batches 1 and 2
- test_commitBatches_succeeds: set stored blob hash for batch 1
- test_revertBatch_succeeds: set stored blob hash for batches 1 and 2
- Remove duplicate `ZERO_VERSIONED_HASH` from RollupCommitBatchWithProofTest

Made-with: Cursor
Co-authored-by: FletcherMan <fanciture@163.com>
Co-authored-by: fletcher.fan <fletcher.fan@bitget.com>
No actionable comments were generated in the recent review. 🎉
📝 Walkthrough

Adds a new commitState pathway to recommit batch state using a stored blob hash, updates Rollup storage and flows, updates bindings/bytecode, and adjusts tx-submitter to detect and switch between commitBatch and commitState at runtime.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Submitter as Submitter (tx-submitter)
    participant Rollup as Rollup Contract
    participant L1 as L1 Chain / Blob Source
    participant Pending as Pending TX Pool
    Note over Submitter: Decide commit method
    Submitter->>Rollup: call BatchBlobVersionedHashes(nextBatchIndex)
    Rollup-->>Submitter: storedHash? (bytes32)
    alt Stored hash exists
        Submitter->>Submitter: choose commitState
        Submitter->>Submitter: build TX without blob data
    else No stored hash
        Submitter->>L1: read blobhash(0) / derive blobVersionedHash
        L1-->>Submitter: blobVersionedHash
        Submitter->>Submitter: choose commitBatch / commitBatchWithProof (include blob)
    end
    Submitter->>Pending: submit & track TX
    Submitter->>Rollup: execute commit TX
    Rollup-->>Submitter: TX confirmed / revert (already committed)
    alt already committed with stored hash
        Submitter->>Submitter: tryRebuildRollupCommitTx -> switch to commitState and retry
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tx-submitter/services/rollup.go (2)
Lines 386-405: ⚠️ Potential issue | 🔴 Critical

Nil/uninitialized rotator handling is still inconsistent.
These lines now treat `r.rotator == nil` or a missing `startTime`/`epoch` as a safe skip, but `rollup()` still unconditionally calls `r.rotator.CurrentSubmitter(...)` and dereferences `r.rotator.startTime`/`epoch`. With the same configuration the background rollup loop will panic instead of skipping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tx-submitter/services/rollup.go` around lines 386 - 405, The rollup() logic calls r.rotator.CurrentSubmitter and later dereferences r.rotator.startTime/epoch even when r.rotator is nil or uninitialized, risking a panic; update rollup() to first check r.cfg.PriorityRollup and r.rotator != nil before calling r.rotator.CurrentSubmitter, and also guard access to r.rotator.startTime and r.rotator.epoch (returning errNotMyTurn when rotator is nil or its state fields are nil) so all uses of r.rotator are protected; specifically adjust the conditional around r.cfg.PriorityRollup and r.rotator, move the CurrentSubmitter call after that guard, and ensure any early returns use consistent error values.
Lines 602-688: ⚠️ Potential issue | 🟠 Major

Reverted batches can leave doomed `commitBatch` txs pending until timeout.

This doomed-commit check only looks at `batchIndex <= lastCommitted`. After `revertBatch`, `lastCommitted` drops back below the batch again, but `Rollup.commitBatch` now reverts while `batchBlobVersionedHashes[batchIndex]` is set. A recovered/pending `commitBatch` therefore sits until `TxTimeout` before `ReSubmitTx` can rebuild it to `commitState`, which can stall recommit or burn a nonce on-chain.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tx-submitter/services/rollup.go` around lines 602 - 688, handlePendingTx can leave doomed commitBatch txs pending when a revertBatch leaves lastCommitted < batchIndex but batchBlobVersionedHashes[batchIndex] remains set; update the commit-like branch in handlePendingTx to also detect this reverted-but-has-blob state and cancel immediately: after computing batchIndex and lastCommitted (using utils.ParseParentBatchIndex and r.Rollup.LastCommittedBatchIndex), add a check against the rollup's batch blob map (batchBlobVersionedHashes or an existing accessor) and treat a present/non-empty blob entry for batchIndex as a reason to cancel (same CancelTx flow used when batchIndex <= lastCommitted); reference handlePendingTx, CancelTx, commitBatch, revertBatch, and batchBlobVersionedHashes when making the change.

tx-submitter/services/pendingtx.go (1)
Lines 242-299: ⚠️ Potential issue | 🟠 Major

Recover `pnonce` from the max nonce, not the last journal entry.

Line 294 assumes `ParseAllTxs()` returns nonce order, but journal append order can contain replacements with a lower nonce after a higher one. After restart that leaves `pnonce` too low and the next submission can reuse an already-pending nonce.

💡 Suggested fix

```diff
-	var maxCommitBatchIndex, maxFinalizeBatchIndex uint64
+	var maxCommitBatchIndex, maxFinalizeBatchIndex, maxNonce uint64
 	for _, tx := range txs {
+		if tx.Nonce() > maxNonce {
+			maxNonce = tx.Nonce()
+		}
+
 		// Get method name
 		method := utils.ParseMethod(tx, abi)
@@
-	pt.SetNonce(txs[len(txs)-1].Nonce())
+	pt.SetNonce(maxNonce)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tx-submitter/services/pendingtx.go` around lines 242 - 299, The recovery currently sets pnonce from the last journal entry via pt.SetNonce(txs[len(txs)-1].Nonce()), which is unsafe; instead compute the maximum nonce seen while iterating txs and call pt.SetNonce(maxNonce) after the loop. Update the loop that processes txs to track a maxNonce (compare using tx.Nonce()), handle the empty-txs case appropriately, and replace the pt.SetNonce(...) call with pt.SetNonce(maxNonce) so pnonce reflects the highest pending nonce rather than the last array element.
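A minimal sketch of the max-nonce recovery the fix describes, with bare uint64 nonces standing in for the recovered transactions:

```go
package main

import "fmt"

// maxPendingNonce scans recovered journal nonces and returns the maximum.
// Journal append order may place a lower-nonce replacement after a higher
// nonce, so taking the last entry is unsafe; the maximum is what the next
// submission must build on.
func maxPendingNonce(nonces []uint64) (uint64, bool) {
	if len(nonces) == 0 {
		return 0, false // empty journal: leave pnonce untouched
	}
	highest := nonces[0]
	for _, n := range nonces[1:] {
		if n > highest {
			highest = n
		}
	}
	return highest, true
}

func main() {
	// A replacement with nonce 4 was journaled after nonce 6;
	// the last-entry strategy would wrongly recover 4.
	n, ok := maxPendingNonce([]uint64{3, 5, 6, 4})
	fmt.Println(n, ok) // 6 true
}
```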
🧹 Nitpick comments (2)
contracts/contracts/test/base/L1MessageBase.t.sol (1)
Lines 50-54: Storage slot constants are fragile — consider using Foundry's storage inspection.

These hardcoded slot numbers (172, 173) will silently break if the Rollup contract's storage layout changes. Consider adding a comment noting how these were derived (e.g., `forge inspect Rollup storage-layout`) or adding a sanity-check assertion in `setUp()` that verifies the slots are still correct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/contracts/test/base/L1MessageBase.t.sol` around lines 50 - 54, The hardcoded storage slot constants BATCH_BLOB_VERSIONED_HASHES_SLOT and ROLLUP_DELAY_PERIOD_SLOT (and ZERO_VERSIONED_HASH) are fragile; run and record the exact command used to derive them (e.g., `forge inspect Rollup storage-layout`) in a comment next to the constants, and add a sanity-check assertion in setUp() that uses vm.load against the deployed Rollup contract to verify those slots contain the expected sentinel values (compare the loaded bytes32 to ZERO_VERSIONED_HASH or known defaults) so tests fail fast if the Rollup storage layout changes.

tx-submitter/mock/rollup.go (1)
Lines 68-71: Mock always returns empty hash — `commitState` path untestable.

The mock always returns `[32]byte{}`, which means `useCommitState` (checked via `storedBlobHash != [32]byte{}` in rollup.go:1163) will always be `false`. This prevents testing the `commitState` code path.

Consider adding a setter method to control the return value:
Proposed addition

```diff
 type MockRollup struct {
 	lastCommittedBatchIndex *big.Int
 	lastFinalizedBatchIndex *big.Int
 	insideChallengeWindow   bool
 	batchExists             bool
 	batchTx                 *types.Transaction
 	finalizeTx              *types.Transaction
+	blobVersionedHash       [32]byte
 }

 // BatchBlobVersionedHashes implements IRollup (no stored hash by default)
 func (m *MockRollup) BatchBlobVersionedHashes(opts *bind.CallOpts, batchIndex *big.Int) ([32]byte, error) {
-	return [32]byte{}, nil
+	return m.blobVersionedHash, nil
 }

+// SetBlobVersionedHash sets the mock value for BatchBlobVersionedHashes
+func (m *MockRollup) SetBlobVersionedHash(hash [32]byte) {
+	m.blobVersionedHash = hash
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tx-submitter/mock/rollup.go` around lines 68 - 71, The mock BatchBlobVersionedHashes currently always returns an empty [32]byte, which prevents the rollup code path that checks storedBlobHash != [32]byte{} (useCommitState) from being exercised; add a field to MockRollup (e.g., batchBlobVersionedHashesRet [32]byte) and a setter method (e.g., SetBatchBlobVersionedHashesHash(hash [32]byte)) that sets that field, then update BatchBlobVersionedHashes to return the stored field value instead of always returning [32]byte{} so tests can toggle useCommitState by setting a non-zero hash.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@contracts/contracts/test/Rollup.t.sol`:
- Around line 1396-1406: The test test_commitState_reverts_when_carrying_blob
currently sends a non-blob batch and only asserts success, so it doesn't
exercise the "blobhash(0) != 0" revert; change the test to explicitly assert the
revert path by constructing a BatchDataInput where blobhash(0) is non-zero
(e.g., bytes32(uint256(1))) and wrap the commit call with
hevm.expectRevert("commitState must not carry blob") (or vm.expectRevert if your
test harness uses vm), then hevm.prank(alice); call
rollup.commitState(batchDataInput, batchSignatureInput) to ensure the revert is
triggered; optionally keep the existing non-blob assert as a separate test or
split into two cases so both success and revert paths are covered (referencing
test_commitState_reverts_when_carrying_blob, batchDataInput,
rollup.commitState).
In `@tx-submitter/services/pendingtx.go`:
- Around line 279-299: The recovery rewrite is not crash-safe because pt.dump()
calls journal.Clean() before re-appending entries from pt.txinfos, risking
truncation if a crash or append fails; fix by making dump() atomic: write the
deduplicated entries to a temporary journal (or append to a new file) and fsync
the temp, then swap/rename it over the live journal (or only call
journal.Clean() after successful writes), ensuring pt.dump() (and any calls to
journal.Clean()/journal.Append()) only truncates the original after the new data
is durably persisted; update references in pendingtx.go to use the
atomic-temp-and-rename approach inside pt.dump() so pt.txinfos remains the
authoritative source until the swap completes.
In `@tx-submitter/services/reorg.go`:
- Around line 31-37: The inline comment and the value for maxHistory in
NewReorgDetector disagree (comment says "Track last 50 blocks" but maxHistory is
5); fix by making them consistent: either set maxHistory to 50 to match the
comment in NewReorgDetector or change the comment to reflect tracking 5 blocks.
Update the unique symbol NewReorgDetector and its maxHistory field so the value
and comment match.
---
Outside diff comments:
In `@tx-submitter/services/pendingtx.go`:
- Around line 242-299: The recovery currently sets pnonce from the last journal
entry via pt.SetNonce(txs[len(txs)-1].Nonce()), which is unsafe; instead compute
the maximum nonce seen while iterating txs and call pt.SetNonce(maxNonce) after
the loop. Update the loop that processes txs to track a maxNonce (compare using
tx.Nonce()), handle the empty-txs case appropriately, and replace the
pt.SetNonce(...) call with pt.SetNonce(maxNonce) so pnonce reflects the highest
pending nonce rather than the last array element.
In `@tx-submitter/services/rollup.go`:
- Around line 386-405: The rollup() logic calls r.rotator.CurrentSubmitter and
later dereferences r.rotator.startTime/epoch even when r.rotator is nil or
uninitialized, risking a panic; update rollup() to first check
r.cfg.PriorityRollup and r.rotator != nil before calling
r.rotator.CurrentSubmitter, and also guard access to r.rotator.startTime and
r.rotator.epoch (returning errNotMyTurn when rotator is nil or its state fields
are nil) so all uses of r.rotator are protected; specifically adjust the
conditional around r.cfg.PriorityRollup and r.rotator, move the CurrentSubmitter
call after that guard, and ensure any early returns use consistent error values.
- Around line 602-688: handlePendingTx can leave doomed commitBatch txs pending
when a revertBatch leaves lastCommitted < batchIndex but
batchBlobVersionedHashes[batchIndex] remains set; update the commit-like branch
in handlePendingTx to also detect this reverted-but-has-blob state and cancel
immediately: after computing batchIndex and lastCommitted (using
utils.ParseParentBatchIndex and r.Rollup.LastCommittedBatchIndex), add a check
against the rollup's batch blob map (batchBlobVersionedHashes or an existing
accessor) and treat a present/non-empty blob entry for batchIndex as a reason to
cancel (same CancelTx flow used when batchIndex <= lastCommitted); reference
handlePendingTx, CancelTx, commitBatch, revertBatch, and
batchBlobVersionedHashes when making the change.
---
Nitpick comments:
In `@contracts/contracts/test/base/L1MessageBase.t.sol`:
- Around line 50-54: The hardcoded storage slot constants
BATCH_BLOB_VERSIONED_HASHES_SLOT and ROLLUP_DELAY_PERIOD_SLOT (and
ZERO_VERSIONED_HASH) are fragile; run and record the exact command used to
derive them (e.g., `forge inspect Rollup storage-layout`) in a comment next to
the constants, and add a sanity-check assertion in setUp() that uses vm.load
against the deployed Rollup contract to verify those slots contain the expected
sentinel values (compare the loaded bytes32 to ZERO_VERSIONED_HASH or known
defaults) so tests fail fast if the Rollup storage layout changes.
In `@tx-submitter/mock/rollup.go`:
- Around line 68-71: The mock BatchBlobVersionedHashes currently always returns
an empty [32]byte, which prevents the rollup code path that checks
storedBlobHash != [32]byte{} (useCommitState) from being exercised; add a field
to MockRollup (e.g., batchBlobVersionedHashesRet [32]byte) and a setter method
(e.g., SetBatchBlobVersionedHashesHash(hash [32]byte)) that sets that field,
then update BatchBlobVersionedHashes to return the stored field value instead of
always returning [32]byte{} so tests can toggle useCommitState by setting a
non-zero hash.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4c9cf0df-78cd-4d89-bd4f-b5ec573f6fb2
📒 Files selected for processing (19)
- bindings/bin/rollup_deployed.hex
- bindings/bin/zkevmverifierv1_deployed.hex
- bindings/bindings/rollup.go
- bindings/bindings/rollup_more.go
- bindings/bindings/zkevmverifierv1.go
- bindings/bindings/zkevmverifierv1_more.go
- contracts/contracts/l1/rollup/IRollup.sol
- contracts/contracts/l1/rollup/Rollup.sol
- contracts/contracts/test/Rollup.t.sol
- contracts/contracts/test/base/L1MessageBase.t.sol
- tx-submitter/batch/batch_restart_test.go
- tx-submitter/constants/methods.go
- tx-submitter/iface/rollup.go
- tx-submitter/mock/rollup.go
- tx-submitter/services/pendingtx.go
- tx-submitter/services/reorg.go
- tx-submitter/services/rollup.go
- tx-submitter/services/rollup_handle_test.go
- tx-submitter/utils/utils.go
panos-xyz left a comment
Adversarial Review — Critical & High Severity Findings
CRITICAL-1: ClearConfirmedTxs() called but not defined in this diff
In rollup.go, handleReorg now calls:
```go
if r.pendingTxs != nil {
	r.pendingTxs.ClearConfirmedTxs()
}
```

However, the diff removes ClearPendingTxs() and MarkUnconfirmed() but does not show any implementation of ClearConfirmedTxs(). If this method does not exist, the code will not compile. If it exists in a pre-existing file not touched by this PR, its implementation needs verification — does it hold the mutex? Does it sync the journal?
Risk: Compilation failure, or undefined behavior after reorg if the method exists but is incorrect.
CRITICAL-2: Removing pendingTxs.Add() calls relies on a dangerous implicit assumption
The PR removes pendingTxs.Add(tx) in 5+ locations with comments like "SendTx already adds signedTx to the pool":
- rollup.go finalize path
- rollup.go rollup path
- pendingtx.go handlePendingTx cancel (commit & finalize)
- pendingtx.go replaceTimedOutTx
Core concern: If SendTx fails after broadcasting to the L1 mempool but before its internal Add() call (e.g., network timeout on confirmation, signing error on the tracking side), the transaction occupies a nonce on L1 but is not tracked in the pending pool. Consequences:
- The nonce is consumed but the system is unaware
- Subsequent transactions using the same nonce will be rejected by L1
- The submitter can become permanently stuck
Question: Why not keep the Add() calls as a defensive fallback? The code savings are minimal, but the risk introduced is significant. At minimum, SendTx should guarantee atomic add-or-fail semantics, and that guarantee should be documented and tested.
CRITICAL-3: TOCTOU race in tryRebuildRollupCommitTx
```go
stored, err := r.Rollup.BatchBlobVersionedHashes(nil, big.NewInt(int64(batchIndex)))
// ... series of operations (cache lookup, signature rebuild, tx construction) ...
newTx, err = r.createDynamicFeeTx(...) // or createRollupTx
```

There is a time window between reading the stored value from L1 and actually building + signing + submitting the new tx. If L1 reorgs during this window:
- Read stored hash → build `commitState` → hash cleared on L1 by the time the tx lands → tx reverts
- Read no stored hash → build `commitBatch` + blob → hash gets set on L1 → wasted blob fee
When blob fees are elevated, this race condition can cause significant fund waste. Consider adding a post-send verification or at minimum documenting this as a known limitation.
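One shape a pre-sign verification could take, under the assumption that the stored hash can simply be re-read right before committing to a mode; `recheckCommitMode` and `readStored` are illustrative names, not the existing binding API:

```go
package main

import "fmt"

// recheckCommitMode re-reads the stored blob hash immediately before signing
// and reports whether the value still matches the first read; a mismatch
// means the TOCTOU window was hit and the rebuild should restart its
// decision. readStored is a hypothetical closure over the Rollup binding's
// BatchBlobVersionedHashes call.
func recheckCommitMode(first [32]byte, readStored func() ([32]byte, error)) (bool, error) {
	current, err := readStored()
	if err != nil {
		return false, err
	}
	return current == first, nil
}

func main() {
	zero := [32]byte{}
	set := [32]byte{1}
	// The hash was cleared by a reorg between the two reads: abort and re-decide.
	ok, _ := recheckCommitMode(set, func() ([32]byte, error) { return zero, nil })
	fmt.Println(ok) // false
}
```

This narrows the race rather than eliminating it; a post-send receipt check would still be needed for full coverage.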
HIGH-1: handleConfirmedTx silently swallows nil receipt
```go
func (r *Rollup) handleConfirmedTx(..., status *txStatus, currentBlock uint64) error {
	if status == nil || status.receipt == nil {
		return nil // silently returns, tx stays in pending pool
	}
```

If getTxStatus returns txStatusConfirmed but the receipt disappears due to a reorg happening between the status check and this call, processSingleTx routes to the confirmed branch, and handleConfirmedTx silently returns nil. The transaction will never be processed or removed — it becomes a zombie in the pending pool.
Suggestion: At minimum log.Warn this condition. Ideally, return the tx to pending status so the next processing cycle re-evaluates it.
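A sketch of the suggested handling, with a stand-in receipt type; the real handleConfirmedTx signature and types differ:

```go
package main

import (
	"fmt"
	"log"
)

// receipt stands in for *types.Receipt; only its presence matters here.
type receipt struct{ blockNumber uint64 }

// classifyConfirmed warns when a "confirmed" tx has no receipt (e.g. a reorg
// between the status check and this call) and asks the caller to requeue it,
// instead of silently leaving a zombie in the pending pool.
func classifyConfirmed(rcpt *receipt) (requeue bool) {
	if rcpt == nil {
		log.Println("warn: confirmed tx has nil receipt; returning to pending for re-evaluation")
		return true
	}
	return false
}

func main() {
	fmt.Println(classifyConfirmed(nil))                       // true: requeue
	fmt.Println(classifyConfirmed(&receipt{blockNumber: 42})) // false: process normally
}
```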
HIGH-2: ParseParentBatchIndex falls back to commitBatch ABI for unknown selectors
```go
} else {
	// Unknown selector: keep legacy behavior (unpack as commitBatch)
	method, ok = rollupAbi.Methods["commitBatch"]
}
```

If a finalizeBatch transaction's calldata is accidentally passed to this function (shouldn't happen, but defensive programming should account for it), it will unpack using commitBatch's ABI layout and produce a garbage batch index. This garbage index could then be used in critical state decisions.
Suggestion: Return 0 for truly unknown selectors instead of guessing.
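The suggestion can be sketched as a selector table that refuses to guess. The 4-byte selector values below are made up for illustration; real ones come from the rollup ABI:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// Hypothetical selectors; in the real code these would be derived from
// rollupAbi.Methods[...].ID.
var knownBatchMethods = map[string]string{
	"aabbccdd": "commitBatch",
	"11223344": "commitState",
}

// methodForCalldata dispatches on the calldata selector and returns ok=false
// for unknown methods instead of unpacking foreign calldata with the
// commitBatch layout and producing a garbage batch index.
func methodForCalldata(calldata []byte) (string, bool) {
	if len(calldata) < 4 {
		return "", false
	}
	sel := hex.EncodeToString(calldata[:4])
	method, ok := knownBatchMethods[sel]
	if !ok {
		// Unknown selector: refuse to guess.
		return "", false
	}
	return method, true
}

func main() {
	fmt.Println(methodForCalldata([]byte{0xaa, 0xbb, 0xcc, 0xdd, 0x00}))
	fmt.Println(methodForCalldata([]byte{0xde, 0xad, 0xbe, 0xef}))
}
```

Callers then treat ok=false as "index 0 / not a commit tx" explicitly, rather than receiving a silently wrong value.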
HIGH-3: Recover() journal rewrite breaks consistency on non-crash failure
```go
// Adds to in-memory map without journal
pt.mu.Lock()
pt.txinfos[tx.Hash()] = &types.TxRecord{...}
pt.mu.Unlock()
// ... later ...
if err := pt.dump(); err != nil {
	return fmt.Errorf("failed to rewrite journal after recovery: %w", err)
}
```

The comment states "a crash here is safe — the next restart will re-read the original entries". However, dump() failure ≠ crash. If dump() returns an error:
- The in-memory map has already been modified (new txs added)
- The journal file retains its old state
- The process does not crash — it continues running with inconsistent in-memory vs. journal state
- Subsequent `Add()`/`Remove()` calls will append to the stale journal
Suggestion: If dump() fails, either crash (as the comment implies is safe) or rollback the in-memory state to match the journal.
HIGH-4: Reorg handling failure is silently swallowed
```go
if hasReorg {
	if err := r.handleReorg(depth); err != nil {
		log.Warn("Post-reorg handling failed", "error", err) // only warns!
	}
}
```

If handleReorg fails (e.g., ClearConfirmedTxs errors out), the reorg state cleanup is incomplete, but the submitter continues processing transactions. It may make decisions based on pre-reorg confirmed state, potentially submitting duplicate or conflicting transactions.
Suggestion: A failed reorg cleanup should either retry or halt processing for the current cycle (return error from ProcessTx), not silently continue.