Conversation
The latest Buf updates on your PR. Results from workflow Buf / buf (pull_request).
Codecov Report ❌
Patch coverage is …
Additional details and impacted files:
@@ Coverage Diff @@
## main #3027 +/- ##
==========================================
- Coverage 58.71% 58.46% -0.25%
==========================================
Files 2094 2093 -1
Lines 172970 172886 -84
==========================================
- Hits 101555 101082 -473
- Misses 62430 62826 +396
+ Partials 8985 8978 -7
```diff
 	}
-	return store
+	return store, nil
```
Instead of changing the interface to return an error, wouldn't it be easier to just panic inside this constructor? Is there any benefit to returning the error if we fail to create the store?
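For context, the two options being weighed can be sketched as below. All names (`newStore`, `mustNewStore`) are hypothetical stand-ins, not the PR's actual identifiers: returning an error lets the caller decide how to react, while a `must`-style wrapper recovers the old panic behavior for startup paths.

```go
package main

import (
	"errors"
	"fmt"
)

// store is a stand-in for the real store type; names here are hypothetical.
type store struct{ dir string }

// newStore returns an error so the caller decides how to handle failure
// (retry, fall back, or exit with context).
func newStore(dir string) (*store, error) {
	if dir == "" {
		return nil, errors.New("data dir is required")
	}
	return &store{dir: dir}, nil
}

// mustNewStore panics instead; acceptable only during process startup
// where there is no meaningful recovery.
func mustNewStore(dir string) *store {
	s, err := newStore(dir)
	if err != nil {
		panic(err)
	}
	return s
}

func main() {
	s, err := newStore("/tmp/flatkv")
	fmt.Println(s != nil, err)
}
```

The error-returning form is the more flexible of the two, since a `must` wrapper can always be layered on top, but not the other way around.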
```diff
 	// FlatKVConfig is the configuration for the FlatKV (EVM) backend
-	FlatKVConfig flatkv.Config
+	FlatKVConfig *flatkv.Config
```
Why do we need to make this a pointer?
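One common reason for such a change, sketched below with hypothetical names (`Config`, `cacheSize`): a nil pointer can signal "not configured / use defaults", which a value type cannot distinguish from a zero-valued config. Whether that is the motivation here is a question for the PR author.

```go
package main

import "fmt"

// Config is a hypothetical stand-in for flatkv.Config.
type Config struct{ CacheSize int }

// A nil *Config can mean "use defaults", which a value type cannot
// distinguish from a zero-valued Config.
func cacheSize(c *Config) int {
	if c == nil {
		return 1024 // hypothetical default
	}
	return c.CacheSize
}

func main() {
	fmt.Println(cacheSize(nil), cacheSize(&Config{CacheSize: 4096}))
}
```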
```diff
 	if err := scStore.CleanupCrashArtifacts(); err != nil {
 		panic(err)
 	}
+	if err := scStore.CleanupCrashArtifacts(); err != nil {
```
This duplicates the CleanupCrashArtifacts call — should we remove the one above?
```diff
 		return fmt.Errorf("failed to batch read old values: %w", err)
 	}

+	s.phaseTimer.SetPhase("apply_change_sets_prepare")
```
Duplicate code: `s.phaseTimer.SetPhase("apply_change_sets_prepare")` is already called elsewhere.
```diff
 )

 // Configuration for the PebbleDB database.
 type PebbleDBConfig struct {
```
These configs are not specific to PebbleDB; they are specific to flatkv. I think we should move them to flatkv instead of putting them here. We could change the db engine for flatkv in the future, but these configs are universal to all db engines.
```diff
 	// The number of shards in the key-value cache. Must be a power of two and greater than 0.
 	CacheShardCount uint64
 	// The size of pebbleDB's internal block cache, in bytes.
 	BlockCacheSize int
```
I would recommend removing BlockCacheSize from the config; we can set a tuned default value for now and add it back in the future if we are 100% sure we need a different specific value for each DB.
In any case, we are not going to rely on PebbleDB's internal cache for read optimization, so I would recommend just setting a default number.
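The suggested alternative could look like the sketch below; the constant name and the 64 MiB value are hypothetical placeholders, not values from the PR.

```go
package main

import "fmt"

// defaultBlockCacheSize is a hypothetical tuned default (64 MiB) used
// instead of exposing BlockCacheSize as a user-facing config knob.
const defaultBlockCacheSize = 64 << 20

func main() {
	fmt.Println(defaultBlockCacheSize) // 67108864
}
```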
```diff
 	opts types.OpenOptions,
 	enableMetrics bool,
 	config *PebbleDBConfig,
+	comparer *pebble.Comparer,
```
Do we have to pass the comparer in? Under what scenario will we pass in a custom comparer?
```diff
 		return fmt.Errorf("account db config is invalid: %w", c.AccountDBConfig.Validate())
 	}
 	if c.CodeDBConfig.Validate() != nil {
 		return fmt.Errorf("code db config is invalid: %w", c.CodeDBConfig.Validate())
```
Nit: We shouldn't need to call Validate() twice here.
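The single-call pattern being suggested can be sketched as below; `dbConfig` and `validateAll` are hypothetical stand-ins for the real types. Each `Validate()` result is captured once and wrapped, rather than being computed once for the check and again for the message.

```go
package main

import (
	"errors"
	"fmt"
)

// dbConfig is a hypothetical stand-in for the per-database config type.
type dbConfig struct{ dir string }

func (c dbConfig) Validate() error {
	if c.dir == "" {
		return errors.New("data dir is required")
	}
	return nil
}

// validateAll calls Validate exactly once per sub-config and wraps the
// returned error, avoiding the duplicated call in the original.
func validateAll(account, code dbConfig) error {
	if err := account.Validate(); err != nil {
		return fmt.Errorf("account db config is invalid: %w", err)
	}
	if err := code.Validate(); err != nil {
		return fmt.Errorf("code db config is invalid: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(validateAll(dbConfig{dir: "a"}, dbConfig{}))
}
```

Besides avoiding the double work, this also removes the risk of the two calls disagreeing if `Validate` is ever non-deterministic or has side effects.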
```diff
 		return fmt.Errorf("metadata db config is invalid: %w", c.MetadataDBConfig.Validate())
 	}

 	if c.ReaderThreadsPerCore < 0 {
```
If ReaderThreadsPerCore == 0 and ReaderConstantThreadCount == 0, this will create a pool with 0 workers, causing a deadlock?
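A guard along these lines would catch the zero-worker case at validation time; `readerWorkers` and the way the two fields combine are assumptions for illustration, not the PR's actual sizing formula.

```go
package main

import (
	"fmt"
	"runtime"
)

// readerWorkers derives the pool size from two hypothetical config fields.
// Rejecting a non-positive total prevents building a pool that can never
// serve requests (readers would block forever waiting on zero workers).
func readerWorkers(threadsPerCore, constantCount int) (int, error) {
	n := threadsPerCore*runtime.NumCPU() + constantCount
	if n <= 0 {
		return 0, fmt.Errorf("reader pool would have %d workers; need at least 1", n)
	}
	return n, nil
}

func main() {
	n, err := readerWorkers(0, 0)
	fmt.Println(n, err)
}
```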
```diff
 	if c.DataDir == "" {
 		return fmt.Errorf("data dir is required")
 	}
 	if c.CacheSize > 0 && (c.CacheShardCount&(c.CacheShardCount-1)) != 0 {
```
Shall we also validate and make sure CacheShardCount > 0?
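The concern is real: the bit trick `n&(n-1) == 0` also accepts 0, so the positivity check has to be explicit. A minimal sketch of the combined predicate (the function name is hypothetical):

```go
package main

import "fmt"

// validShardCount reports whether n is a positive power of two.
// The n > 0 check matters: n&(n-1) == 0 alone also accepts 0,
// which is exactly the gap the review comment points out.
func validShardCount(n uint64) bool {
	return n > 0 && n&(n-1) == 0
}

func main() {
	fmt.Println(validShardCount(0), validShardCount(8), validShardCount(12))
	// → false true false
}
```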
|
One blocker comment: I think we should decouple the cache from the db_engine as much as possible, which would allow FlatKV to switch db_engines easily without reimplementing the cache for each engine in the future.
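The decoupling being asked for can be sketched as a cache that depends only on a minimal engine interface; all names here (`kvEngine`, `cachedStore`, `memEngine`) are hypothetical, and the real cache would of course need sharding, eviction, and concurrency control.

```go
package main

import "fmt"

// kvEngine is the minimal surface a backend must expose; the cache
// depends only on this interface, never on PebbleDB directly.
type kvEngine interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// memEngine is a trivial in-memory engine used to demonstrate swapping.
type memEngine map[string]string

func (m memEngine) Get(k string) (string, bool) { v, ok := m[k]; return v, ok }
func (m memEngine) Set(k, v string)             { m[k] = v }

// cachedStore layers a read cache over any kvEngine, so changing the
// engine does not require reimplementing the cache.
type cachedStore struct {
	engine kvEngine
	cache  map[string]string
}

func newCachedStore(e kvEngine) *cachedStore {
	return &cachedStore{engine: e, cache: make(map[string]string)}
}

func (s *cachedStore) Get(k string) (string, bool) {
	if v, ok := s.cache[k]; ok { // cache hit: skip the engine entirely
		return v, true
	}
	v, ok := s.engine.Get(k)
	if ok {
		s.cache[k] = v // populate cache on miss
	}
	return v, ok
}

func main() {
	s := newCachedStore(memEngine{"a": "1"})
	v, ok := s.Get("a")
	fmt.Println(v, ok)
}
```

With this shape, a PebbleDB-backed engine and any future replacement would both satisfy `kvEngine`, and the cache code stays untouched across the swap.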
Describe your changes and provide context
Add a caching layer to FlatKV, more than doubling performance in cryptosim benchmarks.
Testing performed to validate your change
Unit tests; ran benchmarks over several days.