Kartik/debug trace speedups v6.3 experimental #3099
Draft
Kbhat1 wants to merge 8 commits into release/v6.3 from
Conversation
Keep the profiled block-trace path on the default tracer only, so explicit tracers like flatCallTracer continue using the legacy implementation and preserve per-transaction failure semantics. Made-with: Cursor
…ncement

The parallel state-advancement path was skipping PrepareTx (which runs the tracer ante handler: address association, signature verification, context setup) before ApplyMessage. This meant snapshots given to worker N+1 could be missing ante-handler side effects from tx N. Add PrepareTx to advanceState so the main thread's state matches what profiledTraceTx produces on each worker. The TracerAnteHandler is lightweight (no fee charging), so the overhead is minimal and parallelism is preserved.

Also change failure handling: instead of aborting the entire RPC with a top-level error on the first state-advancement failure, return partial per-tx results with error entries for unreached txs, matching the sequential path's semantics.
The DoesNotAbortOnFailedTx test asserted that both txs have non-empty "result" fields. With PrepareTx in the parallel state advancement, a failed tx's worker may receive a nonce error from the shared store flush, producing an "error" entry instead of a "result" entry, and the second tx (not dispatched because of the break) gets an error fill entry. Relax the assertion: verify that both txs have per-tx entries (either result or error) and that no top-level abort occurred, which is the test's actual intent.
Avoid a missing helper on the release backport branches by making the default tracer regression test self-contained.
Codecov Report
❌ Patch coverage is
Additional details and impacted files:

@@           Coverage Diff            @@
##       release/v6.3    #3099  +/- ##
================================================
- Coverage     46.02%   43.43%  -2.60%
================================================
  Files          1199     1867    +668
  Lines        104568   155838  +51270
================================================
+ Hits          48130    67681  +19551
- Misses        52200    82090  +29890
- Partials       4238     6067   +1829

Flags with carried forward coverage won't be shown.
"encoding/json"
"errors"
"fmt"
"runtime"
Check notice (Code scanning / CodeQL): Sensitive package import
Comment on lines +189 to +212
go func() {
	defer pend.Done()
	for task := range jobs {
		tx := txs[task.index]
		msg, _ := core.TransactionToMessage(tx, signer, block.BaseFee())
		txctx := &tracers.Context{
			BlockHash:   blockHash,
			BlockNumber: block.Number(),
			TxIndex:     task.index,
			TxHash:      tx.Hash(),
		}
		blockCtx, err := api.backend.GetBlockContext(ctx, block, task.statedb, api.backend)
		if err != nil {
			results[task.index] = &tracers.TxTraceResult{TxHash: tx.Hash(), Error: err.Error()}
			continue
		}
		res, err := api.profiledTraceTx(ctx, tx, msg, txctx, blockCtx, task.statedb, config, nil)
		if err != nil {
			results[task.index] = &tracers.TxTraceResult{TxHash: tx.Hash(), Error: err.Error()}
		} else {
			results[task.index] = &tracers.TxTraceResult{TxHash: tx.Hash(), Result: res}
		}
	}
}()
Check notice (Code scanning / CodeQL): Spawning a Go routine
Comment on lines +368 to +376
go func() {
	<-deadlineCtx.Done()
	if errors.Is(deadlineCtx.Err(), context.DeadlineExceeded) {
		tracerMtx.Lock()
		tracer.Stop(errors.New("execution timeout"))
		tracerMtx.Unlock()
		evm.Cancel()
	}
}()
Check notice (Code scanning / CodeQL): Spawning a Go routine
…shes

The parallel block-trace path had a data race: worker goroutines read from statedb copies whose CacheMultiStore chains cascaded to the original's CacheMultiStore, while the main goroutine called CleanupForTracer, which flushed (Write()) those shared CacheMultiStore layers concurrently.

Fix by introducing ResetForTracer(), which resets in-memory state (tempState, journal) and creates a new Snapshot layer without flushing the CMS hierarchy. The parallel path uses PrepareTxNoFlush (which calls ResetForTracer) instead of PrepareTx (which calls CleanupForTracer). This ensures no goroutine calls Write() on any CMS layer that another goroutine may be reading from.

Also fix Copy() to allocate a fresh backing array for snapshottedCtxs, preventing the original and copy from aliasing the same slice memory.
Describe your changes and provide context
Testing performed to validate your change