MOD-9881 Track shared SVS thread pool memory & expose it through public API #967
meiravgri wants to merge 5 commits
Conversation
introduce VecSim_GlobalStatsInfo
🛡️ Jit Security Scan Results: ✅ No security findings were detected in this PR (security scan by Jit).
```cpp
// Iterators passed here are produced by the C++ debugInfoIterator() method, not the
// VecSimIndex_DebugInfoIterator C API, so the top-level GLOBAL_MEMORY field is
// never appended at this level. SVS-tiered does append SHARED_SVS_THREADPOOL_MEMORY
// from its own debugInfoIterator() override.
```
Tiered iterator missing GLOBAL_MEMORY handler causes test failure
Medium Severity
`compareTieredIndexInfoToIterator` asserts the field count as `DebugInfoIteratorFieldCount::TIERED_SVS` (18) and has no handler for `GLOBAL_MEMORY_STRING` in its while loop — any unrecognized field hits the `else { FAIL(); }` branch. If this function is ever called with an iterator produced by the C API `VecSimIndex_DebugInfoIterator` (which unconditionally appends `GLOBAL_MEMORY`), the assertion on the number of fields will fail (19 != 18) and the field itself will trigger `FAIL()`. The comment at line 449 acknowledges this limitation, but the function provides no `expect_global_memory` parameter like the other comparators do, leaving it fragile and inconsistent with `compareFlatIndexInfoToIterator`, `compareHNSWIndexInfoToIterator`, and `compareSVSIndexInfoToIterator`, which all gained that parameter.
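The comparator pattern the finding asks for can be sketched as follows. This is a simplified, hedged model — `Field`, `FieldList`, and `makeFields` are illustrative stand-ins, not the real test types; only the `expect_global_memory` switch mirrors the actual comparators:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Simplified stand-ins for the debug-info iterator output used in the tests.
struct Field { std::string name; };
using FieldList = std::vector<Field>;

// Build a dummy field list: `n` algorithm fields, optionally followed by the
// GLOBAL_MEMORY field that only the C API iterator appends at the top level.
FieldList makeFields(size_t n, bool with_global_memory) {
    FieldList fields(n, Field{"SOME_FIELD"});
    if (with_global_memory)
        fields.push_back({"GLOBAL_MEMORY"});
    return fields;
}

// One comparator serves both call sites: the C API iterator (default,
// GLOBAL_MEMORY present) and the nested tiered-backend iterator (absent).
void compareIndexInfoToIterator(const FieldList &fields, size_t methodFieldCount,
                                bool expect_global_memory = true) {
    assert(fields.size() == methodFieldCount + (expect_global_memory ? 1 : 0));
    size_t globals = 0;
    for (const Field &f : fields)
        if (f.name == "GLOBAL_MEMORY")
            ++globals;
    assert(globals == (expect_global_memory ? 1u : 0u));
}
```

Giving `compareTieredIndexInfoToIterator` the same defaulted parameter would let it handle both iterator sources without the `FAIL()` fallthrough.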
Reviewed by Cursor Bugbot for commit 94b40f8.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
Reviewed by Cursor Bugbot for commit 0ce994f.
```cpp
  VecSimIndexDebugInfo info = this->debugInfo();
  // For readability. Update this number when needed.
- size_t numberOfInfoFields = 23;
+ size_t numberOfInfoFields = 24;
```
SVS numberOfInfoFields reserve hint is incorrect
Low Severity
`numberOfInfoFields` is set to 24, but the method actually adds 26 fields (1 ALGORITHM + 9 common via `addCommonInfoToIterator` + 1 BLOCK_SIZE + 14 SVS-specific fields + 1 SHARED_SVS_THREADPOOL_MEMORY). The test constant `DebugInfoIteratorFieldCount::SVS = 26` confirms the correct count. This value is used for `reserve()`, so it causes an unnecessary vector reallocation rather than a crash, but the comment explicitly says "Update this number when needed" — and the PR added a field while setting it to an incorrect value.
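The finding's arithmetic can be pinned down at compile time. A minimal sketch (the constant name is illustrative; the real code uses a local `numberOfInfoFields` variable):

```cpp
#include <cstddef>

// Field count for the SVS debugInfoIterator(), per the finding's breakdown.
constexpr size_t kSvsNumberOfInfoFields =
    1    /* ALGORITHM */
    + 9  /* common fields via addCommonInfoToIterator */
    + 1  /* BLOCK_SIZE */
    + 14 /* SVS-specific fields */
    + 1; /* SHARED_SVS_THREADPOOL_MEMORY (new in this PR) */

static_assert(kSvsNumberOfInfoFields == 26,
              "reserve hint must match DebugInfoIteratorFieldCount::SVS");
```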


The shared `VecSimSVSThreadPool` singleton was previously created via raw `new` with the default allocator, so its slot vector and per-slot `ThreadSlot` objects bypassed `VecSimAllocator` and were invisible to any memory accounting downstream (FT.INFO, INFO MODULES, etc.). This PR:
- Routes the pool's allocations through `VecSimAllocator` (using `VecsimSTLAllocator` for the slot vector and `std::allocate_shared` for thread objects).
- Adds `size_t VecSim_GetGlobalMemory(void)`, returning the total bytes of process-wide VecSim allocations not tied to any single index — today equal to the shared SVS thread pool's tracked allocation size.
- Exposes this in `VECSIM_INFO` via two new fields:
  - `GLOBAL_MEMORY` — appended at the top level of every algorithm's debug response by the C API wrapper `VecSimIndex_DebugInfoIterator`. Always present (value may be 0).
  - `SHARED_SVS_THREADPOOL_MEMORY` — appended at the end of any SVS algorithm section by `SVSIndex::debugInfoIterator()`. Present at the top level of a non-tiered SVS response, or inside `BACKEND_INDEX` of a tiered SVS response.

Public API change
Before

```c
// (no API to query process-wide VecSim memory)
```

After

```c
size_t VecSim_GetGlobalMemory(void);
```
Callers (e.g. RediSearch) can fold this into per-spec or process-wide memory metrics without depending on which algorithm contributes.
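A minimal sketch of that folding. `VecSim_GetGlobalMemory` is stubbed with a fixed value here so the example is self-contained; `processWideVecSimMemory` is a hypothetical caller-side helper, not part of the VecSim API:

```cpp
#include <cstddef>
#include <vector>

// Stub for the real C API function; the real one returns the bytes tracked by
// the shared (index-agnostic) VecSim allocator.
size_t VecSim_GetGlobalMemory(void) { return 4096; }

// Hypothetical caller-side metric: shared bytes plus each index's own
// getAllocationSize(), without caring which algorithm contributes the shared part.
size_t processWideVecSimMemory(const std::vector<size_t> &perIndexAllocationSizes) {
    size_t total = VecSim_GetGlobalMemory();  // e.g. the shared SVS thread pool
    for (size_t bytes : perIndexAllocationSizes)
        total += bytes;                       // per-index tracked allocations
    return total;
}
```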
VECSIM_INFO (FT.DEBUG) output change

Common header (every algorithm, unchanged)
FLAT — 11 → 12 fields
<common header × 10> BLOCK_SIZE + GLOBAL_MEMORY

HNSW — 18 → 19 fields
<common header × 10> BLOCK_SIZE, M, EF_CONSTRUCTION, EF_RUNTIME, MAX_LEVEL, ENTRYPOINT, EPSILON, NUMBER_OF_MARKED_DELETED + GLOBAL_MEMORY

SVS (non-tiered) — 25 → 27 fields
Tiered HNSW — 16 → 17 fields
<common header × 10> (ALGORITHM = "TIERED") MANAGEMENT_LAYER_MEMORY, BACKGROUND_INDEXING, TIERED_BUFFER_LIMIT, FRONTEND_INDEX = nested [<FLAT fields, 11>] (no GLOBAL_MEMORY in nested), BACKEND_INDEX = nested [<HNSW fields, 18>] (no GLOBAL_MEMORY in nested), TIERED_HNSW_SWAP_JOBS_THRESHOLD + GLOBAL_MEMORY

Tiered SVS — 17 → 18 fields
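The placement pattern running through these listings can be modeled in a few lines. This is a toy sketch, not the real iterator types — `Field`, `svsSectionFields`, and `cApiTopLevelFields` are illustrative names:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy model of a debug-info section as an ordered field list.
struct Field { std::string name; size_t value; };
using FieldList = std::vector<Field>;

// An SVS section's own iterator ends with SHARED_SVS_THREADPOOL_MEMORY, so the
// field travels with the section (top level, or inside BACKEND_INDEX when tiered).
FieldList svsSectionFields(size_t poolBytes) {
    return {{"ALGORITHM", 0}, {"BLOCK_SIZE", 1024},
            {"SHARED_SVS_THREADPOOL_MEMORY", poolBytes}};
}

// Only the C API wrapper appends GLOBAL_MEMORY, and nested
// FRONTEND_INDEX/BACKEND_INDEX sections never pass through the wrapper,
// which is why GLOBAL_MEMORY appears exactly once, at the outermost level.
FieldList cApiTopLevelFields(FieldList inner, size_t globalBytes) {
    inner.push_back({"GLOBAL_MEMORY", globalBytes});
    return inner;
}
```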
Two emission rules
- `GLOBAL_MEMORY` is appended exactly once, at the outermost iterator level, by the C API wrapper. It never appears inside a nested `FRONTEND_INDEX` / `BACKEND_INDEX`.
- `SHARED_SVS_THREADPOOL_MEMORY` is appended at the end of any SVS algorithm section by `SVSIndex::debugInfoIterator()`, so it shows up at the top level of a non-tiered SVS response, or inside `BACKEND_INDEX` of a tiered SVS response — never duplicated.

Stats / API output change
`VecSim_GetGlobalMemory()`

Before: the API did not exist. The pool's slot vector and per-slot `ThreadSlot` objects went through the default allocator and were not tracked anywhere.

After:
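The allocation-routing change behind this figure can be sketched with a minimal counting allocator. Everything here is a simplified stand-in for `VecSimAllocator` / `VecsimSTLAllocator` / `ThreadSlot`, showing only the mechanism: once the slot vector and the slot objects go through a tracking allocator, their bytes become visible to accounting:

```cpp
#include <atomic>
#include <cstddef>
#include <memory>
#include <vector>

// Shared byte counter standing in for VecSimAllocator's tracked total.
static std::atomic<size_t> g_trackedBytes{0};

// Minimal STL-compatible allocator that updates the counter on every
// allocation and deallocation (the role VecsimSTLAllocator plays).
template <typename T>
struct TrackingAllocator {
    using value_type = T;
    TrackingAllocator() = default;
    template <typename U> TrackingAllocator(const TrackingAllocator<U> &) {}
    T *allocate(size_t n) {
        g_trackedBytes += n * sizeof(T);
        return static_cast<T *>(::operator new(n * sizeof(T)));
    }
    void deallocate(T *p, size_t n) {
        g_trackedBytes -= n * sizeof(T);
        ::operator delete(p);
    }
};
template <typename T, typename U>
bool operator==(const TrackingAllocator<T> &, const TrackingAllocator<U> &) { return true; }
template <typename T, typename U>
bool operator!=(const TrackingAllocator<T> &, const TrackingAllocator<U> &) { return false; }

struct ThreadSlot { int state = 0; };  // placeholder for the real slot object

// Bytes that become visible while a pool of `nSlots` slots is alive: the slot
// vector goes through the tracking allocator, and each slot (plus its
// shared_ptr control block) is created via std::allocate_shared on it.
size_t bytesTrackedForPool(size_t nSlots) {
    size_t before = g_trackedBytes.load();
    std::vector<std::shared_ptr<ThreadSlot>,
                TrackingAllocator<std::shared_ptr<ThreadSlot>>> slots;
    for (size_t i = 0; i < nSlots; ++i)
        slots.push_back(std::allocate_shared<ThreadSlot>(TrackingAllocator<ThreadSlot>{}));
    return g_trackedBytes.load() - before;
}
```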
Per-index `getAllocationSize()` (SVS only)

Before: did not include any per-index portion of the parallelism slot, since the pool was untracked entirely.

After: each SVS index's per-index allocator now tracks its own `parallelism` slot (a small fixed-size structure inside the index, allocated through the index's `VecSimAllocator`). The shared pool itself remains process-wide and reported via `VecSim_GetGlobalMemory()`.

Cross-field invariant
Since the SVS thread pool is currently the only contributor to global memory:
This invariant is enforced by the new gtest `SVSTest.debugInfoGlobalMemoryEqualsSharedSVSThreadPoolMemory`. If a future contributor is added to `VecSim_GetGlobalMemory()` without updating the breakdowns, the test will catch the drift.

Tests
- `SVSTest.debugInfoGlobalMemoryEqualsSharedSVSThreadPoolMemory` — asserts both new fields exist exactly once in a non-tiered SVS response and report the same bytes as `VecSim_GetGlobalMemory()`.
- `compareFlatIndexInfoToIterator`, `compareHNSWIndexInfoToIterator`, and `compareSVSIndexInfoToIterator` now take an `expect_global_memory` parameter (default `true`) — needed because these comparators are called both with the C API iterator (top level, has GLOBAL_MEMORY) and as nested-backend comparators inside `compareTieredIndexInfoToIterator` (no GLOBAL_MEMORY).
- `SVS` field-count constant updated `25 → 26`; `TIERED_SVS` updated accordingly.
- `FLAT` / `HNSW` / `TIERED_HNSW` field-count constants represent the C++ method count (no GLOBAL_MEMORY); the comparators add `+1` when called with the C API iterator.
- `getSVSFields()` / `getTieredSVSFields()` updated to reflect the new field positions.

Compatibility
- The new fields shift positions in `VECSIM_INFO`. Existing consumers parsing by field name are unaffected; consumers indexing by position must shift their expectations accordingly (covered above).
- `VecSim_GetGlobalMemory()` is purely additive.
Note
Medium Risk
Adds a new public C API and changes `VECSIM_INFO` / debug iterator field counts and order across all algorithms, which can break consumers that rely on positional parsing; also adjusts SVS thread-pool allocation paths affecting memory accounting.

Overview
Adds process-wide VecSim memory accounting by routing the shared SVS thread pool singleton allocations through a dedicated tracked `VecSimAllocator`, and by exposing the pool's tracked bytes via the new public API `VecSim_GetGlobalMemory()`.

Extends debug/info output: the C API `VecSimIndex_DebugInfoIterator` now appends a new top-level `GLOBAL_MEMORY` field for every index, while `SVSIndex::debugInfoIterator()` adds an SVS-specific `SHARED_SVS_THREADPOOL_MEMORY` field (including in tiered SVS via the backend iterator). Tests and debug-iterator field-count/order expectations are updated accordingly, and SVS thread-pool wrappers now require an allocator so per-index parallelism state is tracked.