Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough

Refactors the workspace model to root-path storage, consolidates multiple binaries into a single executable with alias symlinks, adds a Scan command and a Requirements setup step with async checks and org discovery, replaces ProviderEntry/AuthMethod with WorkspaceProvider, and updates the CLI/TUI, workspace store, tests, and CI workflows.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant CLI as CLI
    participant Requirements as RequirementsChecker
    participant Provider as ProviderFactory
    participant Store as WorkspaceStore
    User->>CLI: start setup wizard / run scan
    CLI->>Requirements: maybe_start_requirements_checks()
    Note right of Requirements: async checks spawn
    Requirements->>CLI: SetupCheckResults (back via backend)
    CLI->>User: render results & prompt
    User->>CLI: provide auth token
    CLI->>Provider: create_provider(WorkspaceProvider, token)
    Provider-->>CLI: provider client
    User->>CLI: provide workspace path / confirm
    CLI->>Store: expand/validate root and write <root>/.git-same/config.toml
    Store-->>CLI: save & register workspace in global registry
    CLI->>User: setup complete / scan summary
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Pull request overview
This PR refactors workspace configuration to be stored inside each workspace’s root (<root>/.git-same/) and updates the CLI/TUI/setup flows accordingly, while also adding a new scan command and reworking binary alias handling to use symlinks instead of multiple compiled binaries.
Changes:
- Move workspace config/cache/history into workspace-scoped `.git-same/` storage and update resolution/default-workspace behavior to be path-based.
- Rework the setup wizard and TUI navigation (remove SystemCheck/welcome screens, add requirements checks + org discovery wiring).
- Introduce a `scan` command plus CI/release + packaging updates for alias symlinks and drift checking.
Reviewed changes
Copilot reviewed 96 out of 97 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| toolkit/packaging/binary-aliases.txt | Defines primary + alias names |
| toolkit/conductor/setup.sh | Updates setup instructions/paths |
| toolkit/conductor/run.sh | Installs primary + creates symlinks |
| toolkit/conductor/archive.sh | Uninstall/cleanup using alias manifest |
| tests/integration_test.rs | Integration coverage for new workspace behavior |
| src/workflows/sync_workspace_tests.rs | Updates tests for root-path workspaces |
| src/workflows/sync_workspace.rs | Cache/auth now keyed by workspace root |
| src/workflows/status_scan_tests.rs | Updates tests for new workspace constructor |
| src/tui/ui.rs | Removes SystemCheck routing |
| src/tui/screens/workspaces_tests.rs | Updates tests for root-path defaults |
| src/tui/screens/workspaces.rs | Default workspace now stored as path |
| src/tui/screens/system_check_tests.rs | Removes SystemCheck tests |
| src/tui/screens/system_check.rs | Removes SystemCheck screen |
| src/tui/screens/sync_tests.rs | Updates TUI sync tests for new App ctor |
| src/tui/screens/sync.rs | Removes banner animation from sync popup |
| src/tui/screens/settings_tests.rs | Updates settings tests for new App ctor |
| src/tui/screens/mod.rs | Removes SystemCheck module export |
| src/tui/screens/dashboard_tests.rs | Adds tests for banner animation phase logic |
| src/tui/screens/dashboard.rs | Animates banner only during sync op |
| src/tui/mod.rs | Adds config_was_created into run_tui |
| src/tui/handler_tests.rs | Updates handler tests for new setup flow |
| src/tui/handler.rs | Adds requirements + org discovery backend wiring |
| src/tui/event_tests.rs | Updates backend message variants tests |
| src/tui/event.rs | Adds Setup* backend messages |
| src/tui/backend_tests.rs | Updates backend tests for new App ctor |
| src/tui/backend.rs | Spawns setup org discovery task |
| src/tui/app_tests.rs | Updates app selection/default-workspace tests |
| src/tui/app.rs | Workspaces now keyed by root path |
| src/setup/ui_tests.rs | Adds rendering coverage for banner visibility |
| src/setup/ui.rs | Setup wizard now starts at Requirements |
| src/setup/state_tests.rs | Updates state tests for requirements step |
| src/setup/state.rs | Adds Requirements step + new state fields |
| src/setup/screens/welcome_tests.rs | Removes Welcome screen tests |
| src/setup/screens/welcome.rs | Removes Welcome screen |
| src/setup/screens/requirements_tests.rs | New tests for requirements screen |
| src/setup/screens/requirements.rs | New requirements step renderer |
| src/setup/screens/path_tests.rs | Updates path screen tests for new UX |
| src/setup/screens/path.rs | Folder navigator-focused path selection |
| src/setup/screens/mod.rs | Replaces welcome with requirements module |
| src/setup/screens/confirm_tests.rs | Updates confirm tests (no workspace name) |
| src/setup/screens/confirm.rs | Updates confirm summary + save location text |
| src/setup/screens/complete_tests.rs | Updates completion tests (no workspace name) |
| src/setup/screens/complete.rs | Updates completion UI for new model |
| src/setup/mod_tests.rs | Adds requirements-step unit tests |
| src/setup/mod.rs | Runs requirements checks automatically in wizard |
| src/setup/handler_tests.rs | Adds tests for new path/org behaviors |
| src/setup/handler.rs | Saves workspace by root path; org discovery refactor |
| src/provider/mod_tests.rs | Updates provider factory tests for WorkspaceProvider |
| src/provider/mod.rs | create_provider now takes WorkspaceProvider |
| src/provider/github/pagination.rs | Minor retry handling adjustment |
| src/main.rs | Moves banner printing into run_command; passes quiet |
| src/lib_tests.rs | Updates prelude re-export expectations |
| src/lib.rs | Prelude now re-exports Workspace* types |
| src/config/workspace_tests.rs | Updates workspace config tests for root_path |
| src/config/workspace_store_tests.rs | Adds workspace-store tests for dot-dir storage |
| src/config/workspace_store.rs | Implements .git-same/-based persistence + registry |
| src/config/workspace_policy_tests.rs | Adds selector resolution + cwd detection tests |
| src/config/workspace_policy.rs | Implements selector/path/cwd/default resolution |
| src/config/workspace_manager_tests.rs | Updates manager tests for dot-dir paths |
| src/config/workspace_manager.rs | Facade now operates on root paths |
| src/config/workspace.rs | WorkspaceConfig now stores root_path and provider |
| src/config/provider_config_tests.rs | Replaces ProviderEntry tests with WorkspaceProvider |
| src/config/provider_config.rs | Deprecates legacy provider config module content |
| src/config/parser_tests.rs | Removes provider config tests; adds registry tests |
| src/config/parser.rs | Global config now stores workspace registry list |
| src/config/mod.rs | Updates exports + example config |
| src/commands/workspace.rs | Default workspace is saved as tilde-collapsed path |
| src/commands/sync_cmd_tests.rs | Adjusts error expectations for new resolution |
| src/commands/sync_cmd.rs | Uses immutable ensure_base_path + new resolution |
| src/commands/support/workspace_tests.rs | Updates ensure_base_path tests for root-path |
| src/commands/support/workspace.rs | ensure_base_path no longer mutates workspace |
| src/commands/status_tests.rs | Adjusts error expectations for new resolution |
| src/commands/status.rs | Uses immutable ensure_base_path + new resolution |
| src/commands/scan_tests.rs | Adds scan discovery unit tests |
| src/commands/scan.rs | New scan + optional register command |
| src/commands/reset_tests.rs | Updates reset tests for dot-dir workspaces |
| src/commands/reset.rs | Reset now removes .git-same/ and unregisters |
| src/commands/mod.rs | Adds scan command + centralized banner printing |
| src/cli.rs | Adds scan args + updates selector help text |
| src/cache/sync_history.rs | Sync history now stored in workspace dot-dir |
| src/cache/discovery.rs | Discovery cache now stored in workspace dot-dir |
| src/banner_tests.rs | Adds CLI banner rendering assertions |
| src/banner.rs | CLI banner now uses truecolor gradient rendering |
| src/auth/mod.rs | Auth now accepts WorkspaceProvider |
| docs/plans/remove-global-providers-config.md | Removes obsolete planning doc |
| docs/plans/remove-global-providers-config-opus.md | Removes obsolete planning doc |
| docs/plans/path-selector-ux.md | Removes obsolete planning doc |
| docs/plans/optimize-binary-aliases.md | Removes obsolete planning doc |
| docs/plans/move-config-to-sync-folder.md | Removes obsolete planning doc |
| docs/README.md | Documents new alias + selector behavior |
| conductor.json | Updates toolkit paths |
| Cargo.toml | Bumps version; removes extra bin targets |
| Cargo.lock | Updates lockfile for version bump |
| .github/workflows/S4-Publish-Crates.yml | Uses explicit toolchain for publish |
| .github/workflows/S3-Publish-Homebrew.yml | Adds alias symlinks + token rename |
| .github/workflows/S2-Release-GitHub.yml | Uses explicit toolchain for CI/release steps |
| .github/workflows/S1-Test-CI.yml | Uses explicit toolchain + adds alias drift check |
Comments suppressed due to low confidence (3)
toolkit/conductor/run.sh:14
`PRIMARY_BIN=$(head -n1 "$ALIAS_FILE")` assumes the alias manifest exists and is readable; if it’s missing, the script will fail with a non-obvious error due to `set -e`. Consider adding an explicit `-f` check with a clear error (or falling back to a hardcoded primary) like `archive.sh` does.
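As a standalone sketch, the suggested guard could look like this (the `ALIAS_FILE` default and the fallback name `git-same` are illustrative assumptions, not taken from the actual script):

```shell
# Sketch of the suggested manifest guard. ALIAS_FILE's default location and
# the fallback binary name "git-same" are assumptions for illustration.
set -e

ALIAS_FILE="${ALIAS_FILE:-binary-aliases.txt}"

if [ -f "$ALIAS_FILE" ] && [ -r "$ALIAS_FILE" ]; then
  PRIMARY_BIN=$(head -n1 "$ALIAS_FILE")
else
  # Warn and fall back instead of dying with an opaque set -e failure.
  echo "warning: alias manifest '$ALIAS_FILE' not found; using fallback" >&2
  PRIMARY_BIN="git-same"
fi

if [ -z "$PRIMARY_BIN" ]; then
  echo "error: first line of '$ALIAS_FILE' is empty" >&2
  exit 1
fi

echo "primary binary: $PRIMARY_BIN"
```

Run without the manifest present, this warns on stderr and proceeds with the fallback name rather than aborting mid-install.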
src/setup/screens/path.rs:2
- The module header says “Step 4”, but `SetupStep::SelectPath` is now step 5 after adding the Requirements step. Update this comment to avoid drifting from the wizard step numbering used elsewhere.
toolkit/conductor/run.sh:18
- This script later prints example commands using `$GS_COMMAND` (e.g., `--workspace github`, `workspace default my-ws`). Since workspace selection has shifted toward “path or folder name”, consider updating those examples to reflect the current selector semantics so users don’t copy/paste outdated values.
Actionable comments posted: 10
🧹 Nitpick comments (11)
src/provider/github/pagination.rs (1)
113-118: Consider logging transient errors before retrying.

Binding the error to `_` makes it clear the value is intentionally unused, which is fine. However, logging transient network failures (even at `debug` or `trace` level) could help diagnose intermittent connectivity issues in production without impacting retry behavior.

♻️ Optional: Add trace-level logging for transient errors

```diff
- Err(_) if retry_count < MAX_RETRIES => {
+ Err(e) if retry_count < MAX_RETRIES => {
+     tracing::debug!(
+         error = %e,
+         retry = retry_count + 1,
+         backoff_ms,
+         "Transient network error, retrying"
+     );
      retry_count += 1;
      tokio::time::sleep(Duration::from_millis(backoff_ms)).await;
      backoff_ms *= 2;
      continue;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/provider/github/pagination.rs` around lines 113 - 118, The retry branch that currently matches Err(_) and increments retry_count (in pagination.rs within the code handling retries using retry_count, MAX_RETRIES, and backoff_ms) should log the transient error before sleeping and retrying; change the match arm to capture the error (e.g., Err(e) if retry_count < MAX_RETRIES) and call the logger at debug/trace level with a concise message including e and retry_count, then proceed to increment retry_count, sleep, double backoff_ms, and continue as before so behavior is unchanged but transient failures are recorded.

src/tui/screens/sync.rs (1)
972-994: Consider using `filtered_log_entries` to reduce code duplication.

The filter logic here duplicates the implementation in `filtered_log_entries` (lines 186-210). You could simplify this by calling the existing function:

```rust
let entries = filtered_log_entries(app);
```

This would keep the filter logic in one place and make future filter changes easier to maintain.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/screens/sync.rs` around lines 972 - 994, The block building `entries` duplicates filtering logic already implemented in `filtered_log_entries`; replace the match and iterator code with a single call to `filtered_log_entries(app)` (use that result for the `entries` binding currently declared as `let entries: Vec<&crate::tui::app::SyncLogEntry>`), ensuring you keep the same variable name (`entries`) and that any expected type (borrowed slice/Vec of `&SyncLogEntry`) is preserved or adjusted to match the existing call-site usage; remove the manual match over `app.log_filter` and rely on `filtered_log_entries` to centralize the filter logic.

src/commands/reset.rs (1)
308-311: Consider using a sentinel value or restructuring the enum.

Using `PathBuf::new()` as a placeholder for the "specific workspace" menu option works but is semantically unclear. The code immediately calls `prompt_workspace` to replace it, so there's no bug, but this pattern could confuse future maintainers.

An alternative would be using `Option<PathBuf>` or a separate enum variant like `WorkspacePrompt`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/reset.rs` around lines 308 - 311, The code uses ResetScope::Workspace(PathBuf::new()) as a placeholder which is confusing; change ResetScope to express the intent explicitly (either make the Workspace variant hold Option<PathBuf> or add a new WorkspacePrompt variant), update where options is populated to push the new sentinel (e.g. ResetScope::WorkspacePrompt or ResetScope::Workspace(None)), and modify match/handlers that call prompt_workspace (and any places that construct Workspace) to handle the new variant by running prompt_workspace and replacing/setting the PathBuf accordingly; ensure all pattern matches on ResetScope (e.g. in the reset command handler) are updated to handle both the concrete Workspace(PathBuf) and the prompt sentinel.

src/banner_tests.rs (1)
25-27: Optional: make color-sequence assertions less brittle.

Lines 25-27 hardcode RGB values; deriving expected prefixes from `GRADIENT_STOPS` would keep the test aligned with intentional stop updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/banner_tests.rs` around lines 25 - 27, The test currently hardcodes RGB escape sequences; change it to derive expected color prefixes from the GRADIENT_STOPS constant instead: iterate the first few stops in GRADIENT_STOPS, format each stop into the ANSI prefix string used by the banner renderer (e.g., "\u{1b}[38;2;{r};{g};{b}") and assert that rendered.contains(...) for those generated prefixes (or use starts_with for expected positions). Update the assertions that reference the literal sequences to use the generated values so the test follows updates to GRADIENT_STOPS (refer to GRADIENT_STOPS and the rendered variable in the test).

src/commands/status_tests.rs (1)
23-25: Tighten this assertion to avoid false-positive passes.

Line 25 allows a very generic `"Configuration error"` match, which can hide regressions unrelated to workspace resolution.

Suggested test tightening

```diff
-    assert!(
-        err.to_string().contains("not found")
-            || err.to_string().contains("No workspace config found")
-            || err.to_string().contains("Configuration error"),
-        "unexpected error: {}",
-        err
-    );
+    let msg = err.to_string();
+    assert!(
+        msg.contains("workspace")
+            && (msg.contains("not found") || msg.contains("No workspace config found")),
+        "unexpected error: {}",
+        msg
+    );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/status_tests.rs` around lines 23 - 25, The current predicate uses a generic "Configuration error" substring and can produce false positives; update the assertion that checks err.to_string().contains(...) so it matches a specific workspace-related message (e.g., "workspace config" or "workspace resolution") or a precise error enum/variant instead of the broad "Configuration error"; locate the predicate expression using err.to_string().contains(...) in the test and replace the generic check with a tighter string or pattern (or assert on the concrete error type/variant when possible) to ensure only workspace-resolution failures pass the test.

src/setup/screens/confirm_tests.rs (1)
32-33: Add an assertion that the rendered base path is visible.

Line 32 and Line 55 now drive `base_path`, but the tests don’t assert it appears in output, so regressions in path rendering could slip through.

Also applies to: 55-56
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/confirm_tests.rs` around lines 32 - 33, Add an assertion in the confirm_tests.rs test(s) to verify that the rendered base path string is present in the rendered output: after setting state.base_path = "~/Git-Same/GitHub".to_string() and after rendering the UI (where the test currently exercises the confirm screen), assert that the output contains the same base_path value (use the state.base_path variable or the literal string) so regressions in path rendering are caught; add the same check for the second occurrence where state.base_path is set near state.orgs to ensure both code paths are covered.

src/commands/sync_cmd_tests.rs (1)
63-69: Consider narrowing accepted error text here as well.

Line 66’s generic `"Configuration error"` branch can let unrelated failures pass this test.
Verify each finding against the current code and only fix it if needed. In `@src/commands/sync_cmd_tests.rs` around lines 63 - 69, The test assertion over-broadly accepts "Configuration error", which can mask unrelated failures; update the assertion that checks err (the error produced in the test) to be narrowed by removing the generic "Configuration error" branch and instead assert only the expected workspace-related messages ("No workspace configured" or "No workspace config found") or, better, match the concrete error variant/type returned by the code under test (use downcast_ref or pattern-match the error enum) so the assertion targets the specific workspace configuration error rather than any generic configuration error.

src/setup/mod_tests.rs (1)
65-79: Move this end-to-end check to the integration test suite.

Line 65 (`standalone_requirements_step_auto_runs_checks_end_to_end`) executes the real requirements flow and is better scoped for `tests/integration_test.rs` to keep unit tests deterministic and fast.

Based on learnings: Keep integration tests in `tests/integration_test.rs`, separate from unit tests in `_tests.rs` files.
Verify each finding against the current code and only fix it if needed. In `@src/setup/mod_tests.rs` around lines 65 - 79, The test standalone_requirements_step_auto_runs_checks_end_to_end in src/setup/mod_tests.rs should be moved to the integration test suite: cut the entire async #[tokio::test] function and paste it into tests/integration_test.rs (or the project’s integration test file), keep the async body but change the attribute to #[tokio::test] in the new file so it runs as an integration test, and remove the original function from mod_tests.rs to keep unit tests fast and deterministic; ensure imports used by SetupState, SetupStep, maybe_start_requirements_checks, and run_requirements_checks are added to the integration file.

src/commands/support/workspace_tests.rs (1)
27-33: Use a tempdir-derived missing path for deterministic cross-platform behavior.

Line 28 hardcodes `/nonexistent/path/xyz`; deriving a non-existent child path from a tempdir is more robust on all runners.

Proposed test hardening

```diff
 fn ensure_base_path_errors_on_missing_path() {
-    let workspace = WorkspaceConfig::new_from_root(std::path::Path::new("/nonexistent/path/xyz"));
+    let temp = tempfile::tempdir().unwrap();
+    let missing = temp.path().join("definitely-missing");
+    let workspace = WorkspaceConfig::new_from_root(&missing);
     let output = Output::new(Verbosity::Quiet, false);
     let err = ensure_base_path(&workspace, &output).unwrap_err();
     assert!(err.to_string().contains("does not exist"));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/support/workspace_tests.rs` around lines 27 - 33, Replace the hardcoded absolute path in the test ensure_base_path_errors_on_missing_path with a deterministic tempdir-derived missing path: create a tempdir (e.g., via tempfile::tempdir()), construct a child path inside it that does not exist, pass that child path to WorkspaceConfig::new_from_root, then call ensure_base_path(&workspace, &output) and assert the error contains "does not exist"; keep using Output and Verbosity as before so symbols WorkspaceConfig::new_from_root, ensure_base_path, Output, and Verbosity are unchanged.

src/tui/screens/dashboard.rs (1)
62-70: Consider consolidating duplicate key bindings.

Both `'e'` (line 63) and `'i'` (line 69) now navigate to `Screen::Settings`. If this is intentional (e.g., providing multiple shortcuts), consider adding a comment to clarify. Otherwise, one of these bindings may be redundant or the `'i'` binding might be a leftover from previous `SystemCheck` navigation.
Verify each finding against the current code and only fix it if needed. In `@src/tui/screens/dashboard.rs` around lines 62 - 70, There are duplicate key bindings: both KeyCode::Char('e') and KeyCode::Char('i') call app.navigate_to(Screen::Settings); decide whether both shortcuts are intentional and either remove the redundant branch or add a clarifying comment; specifically update the match arms in dashboard.rs so only the intended binding(s) call app.navigate_to(Screen::Settings) (or change KeyCode::Char('i') to navigate to Screen::SystemCheck if that was the original intent), and include a brief comment if you keep multiple shortcuts to document why both exist.

src/config/workspace_policy.rs (1)
50-54: Consider typed error variant instead of string matching.

Checking `selector_err.to_string().contains("ambiguous")` is fragile and could break if the error message changes. Consider adding an `AppError::AmbiguousSelector` variant or a helper method on `AppError` to check the error type.

💡 Suggested approach

Add a method to check error type:

```rust
impl AppError {
    pub fn is_ambiguous_selector(&self) -> bool {
        matches!(self, AppError::Config(msg) if msg.contains("ambiguous"))
    }
}
```

Or better, introduce a dedicated error variant for ambiguous selectors.
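For illustration, the dedicated-variant alternative might look like this (a minimal stand-in; the crate's real `AppError` has other variants and a different `Display` implementation):

```rust
use std::fmt;

// Minimal stand-in for the crate's AppError, showing a dedicated variant
// for ambiguous selectors instead of string matching on messages.
#[derive(Debug)]
enum AppError {
    Config(String),
    AmbiguousSelector(String),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::Config(msg) => write!(f, "Configuration error: {msg}"),
            AppError::AmbiguousSelector(sel) => {
                write!(f, "workspace selector '{sel}' is ambiguous")
            }
        }
    }
}

// Callers match on the variant, so rewording the message cannot break them.
fn is_ambiguous(err: &AppError) -> bool {
    matches!(err, AppError::AmbiguousSelector(_))
}
```

With this shape, the resolution code in `workspace_policy.rs` would test the variant rather than scanning `to_string()` output.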
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/workspace_policy.rs` around lines 50 - 54, Replace fragile string matching on selector_err.to_string().contains("ambiguous") with a typed check: add a dedicated AppError::AmbiguousSelector variant (or implement a method like AppError::is_ambiguous_selector(&self) -> bool) and use that to detect ambiguity; then update the branch in workspace_policy.rs (the code that currently binds selector_err) to call the new variant/method (e.g., if selector_err.is_ambiguous_selector() { return Err(selector_err); }) so the logic relies on structured error types instead of message text.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/cache/discovery.rs`:
- Around line 113-116: The for_workspace function in DiscoveryCache is currently
resolving the cache path via WorkspaceStore::cache_path(root) which places
discovery cache under workspace-local storage; change it to compute and use a
TTL-managed global cache directory (e.g. ~/.cache/git-same/) instead so
DiscoveryCache entries follow the project's TTL policy. Update the for_workspace
constructor (and any helpers in DiscoveryCache) to derive cache_path from a
global cache root (not WorkspaceStore::cache_path), create per-workspace
subpaths under that global root, and ensure the cache files are read/validated
with the existing TTL logic used by DiscoveryCache; keep the function signature
(for_workspace(root: &Path) -> Result<Self>) and preserve any existing struct
fields but replace the stored path value to point into the ~/.cache/git-same/
TTL-managed directory.
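A sketch of deriving a per-workspace file under a single global cache root, as described above (the `git-same` directory name follows the comment; the hash-based file naming is an assumption for illustration):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::path::{Path, PathBuf};

// Derive a stable per-workspace cache file under one global cache root
// (e.g. ~/.cache/git-same/), so TTL cleanup can manage a single directory.
fn global_cache_path(cache_root: &Path, workspace_root: &Path) -> PathBuf {
    // Hashing the workspace root gives distinct files per workspace without
    // mirroring the workspace's directory layout under the cache.
    let mut hasher = DefaultHasher::new();
    workspace_root.hash(&mut hasher);
    cache_root
        .join("git-same")
        .join(format!("discovery-{:016x}.json", hasher.finish()))
}
```

The constructor keeps its `for_workspace(root: &Path)` signature; only the stored path changes to point into the global, TTL-managed directory.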
In `@src/config/parser.rs`:
- Around line 369-385: The code silently no-ops when the "workspaces" key exists
but is not an array; change the logic in the block that handles
table.entry("workspaces") so that if the existing workspaces value is present
and not an Array you return an Err (or propagate a typed error) instead of
proceeding to rewrite the file; specifically check the result of
table.get("workspaces") or inspect the Value returned by entry("workspaces")—if
it is not a toml::Value::Array return an error mentioning "workspaces must be an
array" (use the same error type your parser uses) and only proceed to
push/retain on arr when you have successfully obtained a mutable array (the
variables to update are workspaces/arr/add/remove in this scope).
- Around line 105-107: The Config struct change replacing the required
[[providers]] with pub workspaces: Vec<String> breaks the schema contract and
modify_registry_at silently returns when the workspaces field exists but is not
an array; update the code to either (A) restore/support the [[providers]]
section and keep workspaces as a separate, optional field, or (B) accept the new
schema explicitly and add defensive validation with diagnostics: in parser.rs
locate the Config struct and the modify_registry_at function and (1) if
preserving compatibility, ensure parsing still reads [[providers]] and only
updates workspaces without removing providers, or (2) if changing schema
intentionally, update the default template to include the new workspaces
structure and replace the early return in modify_registry_at with explicit error
logging/Result return when workspaces is not an array (e.g., check
workspaces.as_array_mut(), emit a descriptive error and fail instead of silently
skipping) so malformed workspaces is diagnosed.
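The validation both items above call for can be sketched with a stand-in value type (the real code presumably operates on `toml`/`toml_edit` values, so the types and messages here are illustrative only):

```rust
// Stand-in for a TOML value, enough to show the suggested behavior:
// fail loudly when `workspaces` exists but is not an array, rather than
// silently skipping the registry update.
#[derive(Debug)]
enum TomlValue {
    Array(Vec<String>),
    Str(String),
}

fn registry_array(value: &mut TomlValue) -> Result<&mut Vec<String>, String> {
    match value {
        TomlValue::Array(arr) => Ok(arr),
        _ => Err("workspaces must be an array".to_string()),
    }
}

fn add_workspace(value: &mut TomlValue, root: &str) -> Result<(), String> {
    // Propagate a descriptive error instead of a silent no-op.
    let arr = registry_array(value)?;
    if !arr.iter().any(|w| w == root) {
        arr.push(root.to_string());
    }
    Ok(())
}
```

The key point is that `add_workspace` returns an error for a malformed `workspaces` key, so the caller can surface the diagnosis instead of quietly rewriting the file.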
In `@src/config/workspace.rs`:
- Around line 176-183: In tilde_collapse_path, avoid using string starts_with
and instead treat HOME/USERPROFILE as Paths and use Path::strip_prefix to ensure
true path-prefix semantics (so `/home/al` won't match `/home/alice`); implement
a Windows fallback by checking USERPROFILE when HOME is not set, attempt
path.strip_prefix(&home_path) and on Ok(suffix) return format!("~{}",
suffix.to_string_lossy()), otherwise return the original path string; reference
the function name tilde_collapse_path and use std::path::Path::strip_prefix for
the change.
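A sketch of the `strip_prefix`-based approach (the real `tilde_collapse_path` reads `HOME`/`USERPROFILE` from the environment; here the home directory is a parameter to keep the example deterministic):

```rust
use std::path::Path;

// Collapse a home-directory prefix to "~" using component-wise matching,
// so "/home/al" is never treated as a prefix of "/home/alice".
fn tilde_collapse(path: &Path, home: &Path) -> String {
    match path.strip_prefix(home) {
        // The path IS the home directory: collapse to a bare "~".
        Ok(suffix) if suffix.as_os_str().is_empty() => "~".to_string(),
        Ok(suffix) => format!("~/{}", suffix.to_string_lossy()),
        // Not under home: return the path unchanged.
        Err(_) => path.to_string_lossy().into_owned(),
    }
}
```

Because `Path::strip_prefix` matches whole path components, the `/home/al` vs `/home/alice` false positive that string `starts_with` allows cannot occur.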
In `@src/setup/handler.rs`:
- Around line 549-567: The key handler in this path input mode is swallowing
editable input and hijacking 'b'; remove the special-case KeyCode::Char('b')
branch and instead handle character and edit keys so they modify the path
buffer: call the existing path-confirm and browse helpers only for Enter
(confirm_path) and a dedicated browse key if you still want one, but implement
KeyCode::Char(c) to append c to the current path buffer (e.g. via a path-edit
method on state) and KeyCode::Backspace | KeyCode::Delete to delete from the
path buffer (via a backspace/delete method on state); keep Left/Esc calling
state.prev_step() and leave Up/Down/Right/Tab/Home/End as no-ops. Use the
symbols open_path_browse_mode, confirm_path, and state.prev_step to locate the
logic to change.
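The intended dispatch can be sketched with a local stand-in for the key type (the real handler uses crossterm's `KeyCode` and the wizard's `SetupState`; both are simplified here):

```rust
// Simplified stand-in for the path-input key handling described above:
// printable characters (including 'b') edit the buffer, Backspace deletes,
// Enter confirms, Esc goes back.
enum Key {
    Char(char),
    Backspace,
    Enter,
    Esc,
}

#[derive(Default)]
struct PathInput {
    buffer: String,
    confirmed: bool,
    went_back: bool,
}

impl PathInput {
    fn handle(&mut self, key: Key) {
        match key {
            // No special case for 'b': it is ordinary input now.
            Key::Char(c) => self.buffer.push(c),
            Key::Backspace => {
                self.buffer.pop();
            }
            Key::Enter => self.confirmed = true, // confirm_path() in the real code
            Key::Esc => self.went_back = true,   // state.prev_step() in the real code
        }
    }
}
```

The design point is simply that character keys always reach the buffer; any browse shortcut would need a non-printable binding so it cannot swallow user input.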
In `@src/setup/screens/mod.rs`:
- Line 9: Add the companion test-module reference for this module by inserting a
test-only module declaration: add #[cfg(test)] #[path = "screens_tests.rs"] mod
tests; (place it in src/setup/screens/mod.rs near the existing pub mod
requirements; line) so the screens module has its required companion test file
reference.
In `@src/tui/app.rs`:
- Around line 368-370: The code always calls
SetupState::with_first_setup(&default_path, true) which hardcodes is_first_setup
and can misclassify flows; change the call in App::new to pass the actual
config_was_created flag (or a derived boolean) instead of true so
SetupState::with_first_setup(&default_path, config_was_created) (and ensure
setup.config_was_created is still set) so the SetupState reflects the real
first-setup state.
In `@src/tui/handler.rs`:
- Around line 70-77: The conversion from CheckResult to CheckEntry currently
drops the remediation guidance by not mapping the suggestion field: update the
mapping that builds CheckEntry from results (the closure using r -> CheckEntry {
name: r.name, passed: r.passed, message: r.message, critical: r.critical }) to
also include suggestion: r.suggestion (or clone/into as appropriate), and
likewise update the reconstruction site that builds CheckResult/CheckEntry
around line ~610 (where suggestion: None is used) to preserve and round-trip the
suggestion value instead of setting it to None; reference the CheckResult,
CheckEntry, results variable and the closure that constructs CheckEntry to
locate both places to change.
In `@toolkit/conductor/archive.sh`:
- Around line 14-22: The current code reads ALIAS_FILE into the BINARIES array
but doesn't handle the case where the file exists yet yields no usable entries;
update the logic after the while-read loop to check if BINARIES is empty (e.g.,
test -z "${BINARIES[*]}") and if so either populate BINARIES with the hardcoded
fallback list ("git-same" "gitsame" "gitsa" "gisa") and emit a warning, or exit
with an error—modify the block using the ALIAS_FILE, BINARIES, and the while
IFS= read -r line loop to perform this emptiness check and take the
fallback/warning action so cleanup will not silently skip all aliases.
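As a standalone sketch, the emptiness guard could look like this (the `ALIAS_FILE` default and the warning wording are assumptions; the fallback list mirrors the names in the comment):

```shell
# Sketch of the suggested fallback when the alias manifest yields no entries.
# ALIAS_FILE's default and the fallback names mirror the review comment.
set -e

ALIAS_FILE="${ALIAS_FILE:-binary-aliases.txt}"
BINARIES=()

if [ -f "$ALIAS_FILE" ]; then
  while IFS= read -r line; do
    # Skip blank lines so they don't count as usable entries.
    if [ -n "$line" ]; then
      BINARIES+=("$line")
    fi
  done < "$ALIAS_FILE"
fi

# Missing file or no usable entries: warn and fall back instead of
# silently skipping all alias cleanup.
if [ "${#BINARIES[@]}" -eq 0 ]; then
  echo "warning: no aliases found in '$ALIAS_FILE'; using built-in list" >&2
  BINARIES=("git-same" "gitsame" "gitsa" "gisa")
fi

printf '%s\n' "${BINARIES[@]}"
```

Exiting with an error instead of falling back is equally valid; the essential change is that an empty manifest is no longer indistinguishable from a successful cleanup.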
In `@toolkit/conductor/run.sh`:
- Around line 12-15: The script currently derives PRIMARY_BIN using head -n1 on
ALIAS_FILE without validating the manifest, which fails if the file is missing
or the first line is empty; update the logic around ALIAS_FILE and
PRIMARY_BIN/GS_COMMAND to first verify ALIAS_FILE exists and is readable (check
-f/readable), then extract the first non-empty, non-comment alias (e.g., skip
blank lines and comments) and ensure the result is non-empty before assigning
PRIMARY_BIN and GS_COMMAND; on validation failure emit a clear error and exit
non-zero. Apply the same validation and error handling to the other
alias-related blocks referenced (the sections that set PRIMARY_BIN/GS_COMMAND at
lines ~21-23 and ~27-35) so every place that reads ALIAS_FILE or derives
PRIMARY_BIN validates the file contents first.
---
Nitpick comments:
In `@src/banner_tests.rs`:
- Around line 25-27: The test currently hardcodes RGB escape sequences; change
it to derive expected color prefixes from the GRADIENT_STOPS constant instead:
iterate the first few stops in GRADIENT_STOPS, format each stop into the ANSI
prefix string used by the banner renderer (e.g., "\u{1b}[38;2;{r};{g};{b}") and
assert that rendered.contains(...) for those generated prefixes (or use
starts_with for expected positions). Update the assertions that reference the
literal sequences to use the generated values so the test follows updates to
GRADIENT_STOPS (refer to GRADIENT_STOPS and the rendered variable in the test).
In `@src/commands/reset.rs`:
- Around line 308-311: The code uses ResetScope::Workspace(PathBuf::new()) as a
placeholder which is confusing; change ResetScope to express the intent
explicitly (either make the Workspace variant hold Option<PathBuf> or add a new
WorkspacePrompt variant), update where options is populated to push the new
sentinel (e.g. ResetScope::WorkspacePrompt or ResetScope::Workspace(None)), and
modify match/handlers that call prompt_workspace (and any places that construct
Workspace) to handle the new variant by running prompt_workspace and
replacing/setting the PathBuf accordingly; ensure all pattern matches on
ResetScope (e.g. in the reset command handler) are updated to handle both the
concrete Workspace(PathBuf) and the prompt sentinel.
In `@src/commands/status_tests.rs`:
- Around line 23-25: The current predicate uses a generic "Configuration error"
substring and can produce false positives; update the assertion that checks
err.to_string().contains(...) so it matches a specific workspace-related message
(e.g., "workspace config" or "workspace resolution") or a precise error
enum/variant instead of the broad "Configuration error"; locate the predicate
expression using err.to_string().contains(...) in the test and replace the
generic check with a tighter string or pattern (or assert on the concrete error
type/variant when possible) to ensure only workspace-resolution failures pass
the test.
In `@src/commands/support/workspace_tests.rs`:
- Around line 27-33: Replace the hardcoded absolute path in the test
ensure_base_path_errors_on_missing_path with a deterministic tempdir-derived
missing path: create a tempdir (e.g., via tempfile::tempdir()), construct a
child path inside it that does not exist, pass that child path to
WorkspaceConfig::new_from_root, then call ensure_base_path(&workspace, &output)
and assert the error contains "does not exist"; keep using Output and Verbosity
as before so symbols WorkspaceConfig::new_from_root, ensure_base_path, Output,
and Verbosity are unchanged.
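The deterministic-missing-path idea can be sketched with only the standard library (the real test would use `tempfile::tempdir()`; the helper below is a hypothetical substitute):

```rust
use std::fs;
use std::path::PathBuf;

// Sketch: a uniquely named temp dir stands in for tempfile::tempdir(),
// and a never-created child is guaranteed to be missing.
fn missing_child_path() -> PathBuf {
    let dir = std::env::temp_dir().join(format!("ws-test-{}", std::process::id()));
    fs::create_dir_all(&dir).expect("create temp dir");
    dir.join("definitely-missing")
}

fn main() {
    let missing = missing_child_path();
    // Deterministic on every machine, unlike a hardcoded absolute path.
    assert!(!missing.exists());
    // The real test would pass `missing` to WorkspaceConfig::new_from_root
    // and assert the ensure_base_path error contains "does not exist".
    let err = format!("base path {} does not exist", missing.display());
    assert!(err.contains("does not exist"));
}
```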
In `@src/commands/sync_cmd_tests.rs`:
- Around line 63-69: The test assertion over-broadly accepts "Configuration
error", which can mask unrelated failures; update the assertion that checks err
(the error produced in the test) to be narrowed by removing the generic
"Configuration error" branch and instead assert only the expected
workspace-related messages ("No workspace configured" or "No workspace config
found") or, better, match the concrete error variant/type returned by the code
under test (use downcast_ref or pattern-match the error enum) so the assertion
targets the specific workspace configuration error rather than any generic
configuration error.
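Matching the concrete variant rather than a substring could look like this sketch; `AppError` here is a hypothetical stand-in for the crate's actual error enum.

```rust
use std::fmt;

// Hypothetical stand-in for the crate's error type.
#[derive(Debug)]
enum AppError {
    NoWorkspaceConfigured,
    Other(String),
}

impl fmt::Display for AppError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AppError::NoWorkspaceConfigured => write!(f, "No workspace configured"),
            AppError::Other(msg) => write!(f, "Configuration error: {msg}"),
        }
    }
}

fn main() {
    let err = AppError::NoWorkspaceConfigured;
    // Variant match: only the workspace error passes.
    assert!(matches!(err, AppError::NoWorkspaceConfigured));

    // An unrelated error would have passed the broad substring check...
    let unrelated = AppError::Other("bad token".into());
    assert!(unrelated.to_string().contains("Configuration error"));
    // ...but fails the variant match, which is the point.
    assert!(!matches!(unrelated, AppError::NoWorkspaceConfigured));
}
```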
In `@src/config/workspace_policy.rs`:
- Around line 50-54: Replace fragile string matching on
selector_err.to_string().contains("ambiguous") with a typed check: add a
dedicated AppError::AmbiguousSelector variant (or implement a method like
AppError::is_ambiguous_selector(&self) -> bool) and use that to detect
ambiguity; then update the branch in workspace_policy.rs (the code that
currently binds selector_err) to call the new variant/method (e.g., if
selector_err.is_ambiguous_selector() { return Err(selector_err); }) so the logic
relies on structured error types instead of message text.
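The typed-check shape could be as small as the sketch below; again, `AppError` and its variants are illustrative assumptions, not the real enum.

```rust
// Hypothetical error enum with a dedicated ambiguity variant.
#[derive(Debug)]
enum AppError {
    AmbiguousSelector(String),
    NotFound(String),
}

impl AppError {
    /// Structured replacement for `to_string().contains("ambiguous")`.
    fn is_ambiguous_selector(&self) -> bool {
        matches!(self, AppError::AmbiguousSelector(_))
    }
}

fn main() {
    let err = AppError::AmbiguousSelector("work".into());
    assert!(err.is_ambiguous_selector());
    assert!(!AppError::NotFound("x".into()).is_ambiguous_selector());
}
```

The helper survives message-wording changes, which is exactly what the string match does not.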
In `@src/provider/github/pagination.rs`:
- Around line 113-118: The retry branch that currently matches Err(_) and
increments retry_count (in pagination.rs within the code handling retries using
retry_count, MAX_RETRIES, and backoff_ms) should log the transient error before
sleeping and retrying; change the match arm to capture the error (e.g., Err(e)
if retry_count < MAX_RETRIES) and call the logger at debug/trace level with a
concise message including e and retry_count, then proceed to increment
retry_count, sleep, double backoff_ms, and continue as before so behavior is
unchanged but transient failures are recorded.
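The capture-and-log retry arm could look like this sketch. `fetch_page`, `MAX_RETRIES`, and the `eprintln!` logging are assumptions standing in for the real pagination code and its logger.

```rust
use std::time::Duration;

const MAX_RETRIES: u32 = 3;

// Hypothetical page fetch that fails transiently on the first two attempts.
fn fetch_page(attempt: u32) -> Result<&'static str, String> {
    if attempt < 2 { Err(format!("transient failure on attempt {attempt}")) } else { Ok("page") }
}

fn fetch_with_retry() -> Result<&'static str, String> {
    let mut retry_count = 0;
    let mut backoff_ms = 100u64;
    loop {
        match fetch_page(retry_count) {
            Ok(page) => return Ok(page),
            // Capture `e` instead of discarding it with Err(_).
            Err(e) if retry_count < MAX_RETRIES => {
                eprintln!("debug: retry {retry_count} after transient error: {e}");
                retry_count += 1;
                // Capped at 1ms here so the sketch runs fast; real code
                // would sleep the full backoff_ms.
                std::thread::sleep(Duration::from_millis(backoff_ms.min(1)));
                backoff_ms *= 2; // exponential backoff, behavior unchanged
            }
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    assert_eq!(fetch_with_retry(), Ok("page"));
}
```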
In `@src/setup/mod_tests.rs`:
- Around line 65-79: The test
standalone_requirements_step_auto_runs_checks_end_to_end in
src/setup/mod_tests.rs should be moved to the integration test suite: cut the
entire async #[tokio::test] function and paste it into tests/integration_test.rs
(or the project’s integration test file), keep the async body but change the
attribute to #[tokio::test] in the new file so it runs as an integration test,
and remove the original function from mod_tests.rs to keep unit tests fast and
deterministic; ensure imports used by SetupState, SetupStep,
maybe_start_requirements_checks, and run_requirements_checks are added to the
integration file.
In `@src/setup/screens/confirm_tests.rs`:
- Around line 32-33: Add an assertion in the confirm_tests.rs test(s) to verify
that the rendered base path string is present in the rendered output: after
setting state.base_path = "~/Git-Same/GitHub".to_string() and after rendering
the UI (where the test currently exercises the confirm screen), assert that the
output contains the same base_path value (use the state.base_path variable or
the literal string) so regressions in path rendering are caught; add the same
check for the second occurrence where state.base_path is set near state.orgs to
ensure both code paths are covered.
In `@src/tui/screens/dashboard.rs`:
- Around line 62-70: There are duplicate key bindings: both KeyCode::Char('e')
and KeyCode::Char('i') call app.navigate_to(Screen::Settings); decide whether
both shortcuts are intentional and either remove the redundant branch or add a
clarifying comment; specifically update the match arms in dashboard.rs so only
the intended binding(s) call app.navigate_to(Screen::Settings) (or change
KeyCode::Char('i') to navigate to Screen::SystemCheck if that was the original
intent), and include a brief comment if you keep multiple shortcuts to document
why both exist.
In `@src/tui/screens/sync.rs`:
- Around line 972-994: The block building `entries` duplicates filtering logic
already implemented in `filtered_log_entries`; replace the match and iterator
code with a single call to `filtered_log_entries(app)` (use that result for the
`entries` binding currently declared as `let entries:
Vec<&crate::tui::app::SyncLogEntry>`), ensuring you keep the same variable name
(`entries`) and that any expected type (borrowed slice/Vec of `&SyncLogEntry`)
is preserved or adjusted to match the existing call-site usage; remove the
manual match over `app.log_filter` and rely on `filtered_log_entries` to
centralize the filter logic.
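The centralization suggested above can be sketched with simplified stand-ins for `App`, `SyncLogEntry`, and the filter enum (the real types live in `src/tui`):

```rust
// Simplified stand-ins for the TUI types.
#[derive(Debug, PartialEq)]
enum LogFilter { All, Errors }

struct SyncLogEntry { message: String, is_error: bool }

struct App { log: Vec<SyncLogEntry>, log_filter: LogFilter }

/// Single source of truth for log filtering; every render path calls this.
fn filtered_log_entries(app: &App) -> Vec<&SyncLogEntry> {
    app.log
        .iter()
        .filter(|e| app.log_filter == LogFilter::All || e.is_error)
        .collect()
}

fn main() {
    let app = App {
        log: vec![
            SyncLogEntry { message: "ok".into(), is_error: false },
            SyncLogEntry { message: "boom".into(), is_error: true },
        ],
        log_filter: LogFilter::Errors,
    };
    // The call site keeps the same `entries` binding but no longer
    // re-implements the match over app.log_filter.
    let entries: Vec<&SyncLogEntry> = filtered_log_entries(&app);
    assert_eq!(entries.len(), 1);
    assert_eq!(entries[0].message, "boom");
}
```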
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (96)
- .github/workflows/S1-Test-CI.yml
- .github/workflows/S2-Release-GitHub.yml
- .github/workflows/S3-Publish-Homebrew.yml
- .github/workflows/S4-Publish-Crates.yml
- Cargo.toml
- conductor.json
- docs/README.md
- docs/plans/move-config-to-sync-folder.md
- docs/plans/optimize-binary-aliases.md
- docs/plans/path-selector-ux.md
- docs/plans/remove-global-providers-config-opus.md
- docs/plans/remove-global-providers-config.md
- src/auth/mod.rs
- src/banner.rs
- src/banner_tests.rs
- src/cache/discovery.rs
- src/cache/sync_history.rs
- src/cli.rs
- src/commands/mod.rs
- src/commands/reset.rs
- src/commands/reset_tests.rs
- src/commands/scan.rs
- src/commands/scan_tests.rs
- src/commands/status.rs
- src/commands/status_tests.rs
- src/commands/support/workspace.rs
- src/commands/support/workspace_tests.rs
- src/commands/sync_cmd.rs
- src/commands/sync_cmd_tests.rs
- src/commands/workspace.rs
- src/config/mod.rs
- src/config/parser.rs
- src/config/parser_tests.rs
- src/config/provider_config.rs
- src/config/provider_config_tests.rs
- src/config/workspace.rs
- src/config/workspace_manager.rs
- src/config/workspace_manager_tests.rs
- src/config/workspace_policy.rs
- src/config/workspace_policy_tests.rs
- src/config/workspace_store.rs
- src/config/workspace_store_tests.rs
- src/config/workspace_tests.rs
- src/lib.rs
- src/lib_tests.rs
- src/main.rs
- src/provider/github/pagination.rs
- src/provider/mod.rs
- src/provider/mod_tests.rs
- src/setup/handler.rs
- src/setup/handler_tests.rs
- src/setup/mod.rs
- src/setup/mod_tests.rs
- src/setup/screens/complete.rs
- src/setup/screens/complete_tests.rs
- src/setup/screens/confirm.rs
- src/setup/screens/confirm_tests.rs
- src/setup/screens/mod.rs
- src/setup/screens/path.rs
- src/setup/screens/path_tests.rs
- src/setup/screens/requirements.rs
- src/setup/screens/requirements_tests.rs
- src/setup/screens/welcome.rs
- src/setup/screens/welcome_tests.rs
- src/setup/state.rs
- src/setup/state_tests.rs
- src/setup/ui.rs
- src/setup/ui_tests.rs
- src/tui/app.rs
- src/tui/app_tests.rs
- src/tui/backend.rs
- src/tui/backend_tests.rs
- src/tui/event.rs
- src/tui/event_tests.rs
- src/tui/handler.rs
- src/tui/handler_tests.rs
- src/tui/mod.rs
- src/tui/screens/dashboard.rs
- src/tui/screens/dashboard_tests.rs
- src/tui/screens/mod.rs
- src/tui/screens/settings_tests.rs
- src/tui/screens/sync.rs
- src/tui/screens/sync_tests.rs
- src/tui/screens/system_check.rs
- src/tui/screens/system_check_tests.rs
- src/tui/screens/workspaces.rs
- src/tui/screens/workspaces_tests.rs
- src/tui/ui.rs
- src/workflows/status_scan_tests.rs
- src/workflows/sync_workspace.rs
- src/workflows/sync_workspace_tests.rs
- tests/integration_test.rs
- toolkit/conductor/archive.sh
- toolkit/conductor/run.sh
- toolkit/conductor/setup.sh
- toolkit/packaging/binary-aliases.txt

💤 Files with no reviewable changes (12)
- src/tui/screens/mod.rs
- src/tui/ui.rs
- src/setup/screens/welcome.rs
- docs/plans/optimize-binary-aliases.md
- src/tui/screens/system_check_tests.rs
- docs/plans/remove-global-providers-config.md
- src/setup/screens/welcome_tests.rs
- docs/plans/move-config-to-sync-folder.md
- src/setup/screens/complete_tests.rs
- docs/plans/path-selector-ux.md
- src/tui/screens/system_check.rs
- docs/plans/remove-global-providers-config-opus.md
```rust
/// Create a cache manager for a specific workspace root path.
pub fn for_workspace(root: &Path) -> Result<Self> {
    let cache_path = crate::config::WorkspaceStore::cache_path(root);
    Ok(Self {
```
Discovery cache location no longer matches repository cache policy.
Line 115 resolves cache under workspace-local storage via WorkspaceStore::cache_path(root), but this module is expected to keep DiscoveryCache under a TTL-managed cache directory at ~/.cache/git-same/.
As per coding guidelines: src/cache/**/*.rs: Implement DiscoveryCache with TTL-based validity at ~/.cache/git-same/.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cache/discovery.rs` around lines 113 - 116, The for_workspace function in
DiscoveryCache is currently resolving the cache path via
WorkspaceStore::cache_path(root) which places discovery cache under
workspace-local storage; change it to compute and use a TTL-managed global cache
directory (e.g. ~/.cache/git-same/) instead so DiscoveryCache entries follow the
project's TTL policy. Update the for_workspace constructor (and any helpers in
DiscoveryCache) to derive cache_path from a global cache root (not
WorkspaceStore::cache_path), create per-workspace subpaths under that global
root, and ensure the cache files are read/validated with the existing TTL logic
used by DiscoveryCache; keep the function signature (for_workspace(root: &Path)
-> Result<Self>) and preserve any existing struct fields but replace the stored
path value to point into the ~/.cache/git-same/ TTL-managed directory.
```rust
/// Registry of known workspace root paths (tilde-collapsed).
#[serde(default)]
pub workspaces: Vec<String>,
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check current references to providers and workspaces in the codebase
rg -n --type=md --type=rs --type=toml '\[\[providers\]\]|workspaces\s*=|default_workspace' . --max-count=20
```

Repository: ZAAI-com/git-same
Length of output: 87
🏁 Script executed:
```bash
#!/bin/bash
# Examine the full Config struct in parser.rs to understand the schema
fd 'parser.rs' src/config | head -5 | xargs -I {} sh -c 'echo "=== {} ===" && wc -l {} && echo "---" && cat -n {}'
```

Repository: ZAAI-com/git-same
Length of output: 16335
🏁 Script executed:
```bash
#!/bin/bash
# Look for the modify_registry_at function to verify the no-op behavior claim
rg -n 'modify_registry_at|fn modify' src/config/parser.rs -A 15
```

Repository: ZAAI-com/git-same
Length of output: 1486
Schema contract changed without compatibility strategy; silent failure on malformed workspaces field.
The Config struct replaces the required [[providers]] section with workspaces: Vec<String> (line 107), violating the coding guideline specifying sections [clone], [filters], [[providers]]. The default template (lines 206–244) lacks any [[providers]] definition.
Additionally, modify_registry_at (line 358) silently skips operations if the workspaces field exists but is not an array. Line 374's if let Some(arr) = workspaces.as_array_mut() exits early without error when the field is malformed, leaving the config unchanged with no diagnostic.
Either preserve the [[providers]] structure for backward compatibility and update workspaces separately, or confirm this schema change is intentional and update the coding guidelines accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/config/parser.rs` around lines 105 - 107, The Config struct change
replacing the required [[providers]] with pub workspaces: Vec<String> breaks the
schema contract and modify_registry_at silently returns when the workspaces
field exists but is not an array; update the code to either (A) restore/support
the [[providers]] section and keep workspaces as a separate, optional field, or
(B) accept the new schema explicitly and add defensive validation with
diagnostics: in parser.rs locate the Config struct and the modify_registry_at
function and (1) if preserving compatibility, ensure parsing still reads
[[providers]] and only updates workspaces without removing providers, or (2) if
changing schema intentionally, update the default template to include the new
workspaces structure and replace the early return in modify_registry_at with
explicit error logging/Result return when workspaces is not an array (e.g.,
check workspaces.as_array_mut(), emit a descriptive error and fail instead of
silently skipping) so malformed workspaces is diagnosed.
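Option (B) above, diagnosing instead of silently skipping, can be sketched as follows. `Value` is a tiny stand-in for a TOML value, not the real `toml`/`toml_edit` API.

```rust
// Tiny stand-in for a parsed TOML value.
#[derive(Debug)]
enum Value {
    Array(Vec<String>),
    Str(String),
}

/// Append an entry to the workspaces registry, failing loudly if the
/// field has the wrong shape instead of silently doing nothing.
fn modify_registry(workspaces: &mut Value, entry: &str) -> Result<(), String> {
    match workspaces {
        Value::Array(arr) => {
            arr.push(entry.to_string());
            Ok(())
        }
        // Previously this case fell through silently; now it reports.
        other => Err(format!("`workspaces` must be an array, found: {other:?}")),
    }
}

fn main() {
    let mut good = Value::Array(vec![]);
    assert!(modify_registry(&mut good, "~/ws").is_ok());

    let mut bad = Value::Str("oops".into());
    let err = modify_registry(&mut bad, "~/ws").unwrap_err();
    assert!(err.contains("must be an array"));
}
```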
```rust
match key.code {
    KeyCode::Tab => {
        apply_tab_completion(state);
    }
    KeyCode::Left => {
        state.prev_step();
    }
    KeyCode::Enter => {
        confirm_path(state);
    }
    KeyCode::Char('b') => {
        open_path_browse_mode(state);
    }
    KeyCode::Esc => {
        state.prev_step();
    }
    KeyCode::Backspace => {
        if state.path_cursor > 0 {
            state.path_cursor -= 1;
            state.base_path.remove(state.path_cursor);
            state.path_completions = compute_completions(&state.base_path);
            state.path_completion_index = 0;
        }
    }
    KeyCode::Delete => {
        if state.path_cursor < state.base_path.len() {
            state.base_path.remove(state.path_cursor);
            state.path_completions = compute_completions(&state.base_path);
            state.path_completion_index = 0;
        }
    }
    KeyCode::Left => {
        if state.path_cursor > 0 {
            state.path_cursor -= 1;
        }
    }
    KeyCode::Right => {
        if state.path_cursor < state.base_path.len() {
            state.path_cursor += 1;
        }
    }
    KeyCode::Home => {
        state.path_cursor = 0;
    }
    KeyCode::End => {
        state.path_cursor = state.base_path.len();
    }
    KeyCode::Char(c) => {
        state.base_path.insert(state.path_cursor, c);
        state.path_cursor += 1;
        state.path_completions = compute_completions(&state.base_path);
        state.path_completion_index = 0;
    }
    KeyCode::Up | KeyCode::Down | KeyCode::Right => {}
    KeyCode::Tab => {}
    KeyCode::Backspace | KeyCode::Delete => {}
    KeyCode::Home | KeyCode::End => {}
    KeyCode::Char(_) => {}
    _ => {}
```
Path input mode cannot edit text (and hijacks `b`).
Line 556 consumes `b` as a command, and lines 562-567 ignore character/backspace/delete input. In practice, users cannot type or edit a custom path in input mode.
💡 Proposed fix
```diff
     match key.code {
         KeyCode::Left => {
             state.prev_step();
         }
         KeyCode::Enter => {
             confirm_path(state);
         }
-        KeyCode::Char('b') => {
-            open_path_browse_mode(state);
-        }
+        KeyCode::Tab => {
+            open_path_browse_mode(state);
+        }
         KeyCode::Esc => {
             state.prev_step();
         }
-        KeyCode::Up | KeyCode::Down | KeyCode::Right => {}
-        KeyCode::Tab => {}
-        KeyCode::Backspace | KeyCode::Delete => {}
-        KeyCode::Home | KeyCode::End => {}
-        KeyCode::Char(_) => {}
+        KeyCode::Backspace => {
+            if state.path_cursor > 0 && state.path_cursor <= state.base_path.len() {
+                state.path_cursor -= 1;
+                state.base_path.remove(state.path_cursor);
+            }
+        }
+        KeyCode::Delete => {
+            if state.path_cursor < state.base_path.len() {
+                state.base_path.remove(state.path_cursor);
+            }
+        }
+        KeyCode::Char(c) => {
+            state.base_path.insert(state.path_cursor, c);
+            state.path_cursor += 1;
+        }
+        KeyCode::Up | KeyCode::Down | KeyCode::Right => {}
+        KeyCode::Home | KeyCode::End => {}
         _ => {}
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/setup/handler.rs` around lines 549 - 567, The key handler in this path
input mode is swallowing editable input and hijacking 'b'; remove the
special-case KeyCode::Char('b') branch and instead handle character and edit
keys so they modify the path buffer: call the existing path-confirm and browse
helpers only for Enter (confirm_path) and a dedicated browse key if you still
want one, but implement KeyCode::Char(c) to append c to the current path buffer
(e.g. via a path-edit method on state) and KeyCode::Backspace | KeyCode::Delete
to delete from the path buffer (via a backspace/delete method on state); keep
Left/Esc calling state.prev_step() and leave Up/Down/Right/Tab/Home/End as
no-ops. Use the symbols open_path_browse_mode, confirm_path, and state.prev_step
to locate the logic to change.
Actionable comments posted: 7
♻️ Duplicate comments (1)
src/cache/discovery.rs (1)
113-118: ⚠️ Potential issue | 🟠 Major

Keep discovery cache in the global TTL cache directory, not workspace-local metadata.
`for_workspace` now stores discovery cache under `<workspace>/.git-same/cache.json`, which breaks the global `DiscoveryCache` location policy and conflates workspace metadata with TTL-managed cache data.

💡 Suggested direction

```diff
 pub fn for_workspace(root: &Path) -> Result<Self> {
-    let cache_path = crate::config::WorkspaceStore::cache_path(root);
+    let cache_root = std::env::var("XDG_CACHE_HOME")
+        .map(PathBuf::from)
+        .or_else(|_| {
+            std::env::var("HOME")
+                .or_else(|_| std::env::var("USERPROFILE"))
+                .map(|home| PathBuf::from(home).join(".cache"))
+        })
+        .context("Could not determine cache directory")?
+        .join("git-same")
+        .join("discovery");
+
+    let workspace_key = root
+        .to_string_lossy()
+        .replace(std::path::MAIN_SEPARATOR, "_");
+    let cache_path = cache_root.join(format!("{workspace_key}.json"));
     Ok(Self {
         cache_path,
         ttl: DEFAULT_CACHE_TTL,
     })
 }
```

As per coding guidelines: `src/cache/**/*.rs`: Implement `DiscoveryCache` with TTL-based validity at `~/.cache/git-same/`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache/discovery.rs` around lines 113 - 118, The for_workspace function in DiscoveryCache is using crate::config::WorkspaceStore::cache_path(root) which writes discovery cache into workspace-local metadata; change it to use the global TTL cache directory helper (e.g. crate::config::CacheStore::discovery_cache_path() or similarly named Global/TTL cache accessor) so DiscoveryCache files live under the shared TTL cache (~/.cache/git-same/) rather than <workspace>/.git-same; update the call inside for_workspace (and its uses) to obtain the discovery cache path from the global cache helper instead of WorkspaceStore::cache_path(root) and remove the workspace-local dependency.
🧹 Nitpick comments (4)
.github/workflows/S2-Release-GitHub.yml (1)
89-118: Consider centralizing workflow secret-safety checks to avoid drift. This block is effectively duplicated logic across workflows. Moving it to a reusable workflow or composite action will reduce maintenance divergence over time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S2-Release-GitHub.yml around lines 89 - 118, Extract the shell-check block currently inside the job named "workflow-secret-safety" and the step titled "Block known secret-leak patterns in workflows" into a single reusable workflow or composite action, preserving the three grep checks (credential-in-URL, direct echo of secrets, and shell xtrace patterns) and their failure behavior; then replace this job/block with a call to that reusable workflow/composite from this workflow so all repositories can invoke the same implementation and avoid duplicated logic drift.

.github/workflows/S1-Test-CI.yml (1)
144-146: Consider using AWK for more resilient TOML parsing in the alias drift check.

The current `grep -A1` approach on line 145 works for the current Cargo.toml layout (`name` directly after `[[bin]]`), but is fragile to field reordering. Prefer AWK or a proper TOML parser for defensive parsing.

More resilient approach

```diff
-          CARGO_BIN=$(grep -A1 '^\[\[bin\]\]' Cargo.toml | grep 'name' | sed 's/.*= *"\(.*\)"/\1/')
+          CARGO_BIN=$(awk '
+            /^\[\[bin\]\]/ { in_bin=1; next }
+            in_bin && /^\[/ { exit }
+            in_bin && $1 ~ /^name$/ {
+              match($0, /"([^"]+)"/, m)
+              if (m[1] != "") { print m[1]; exit }
+            }
+          ' Cargo.toml)
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S1-Test-CI.yml around lines 144 - 146, The script that sets CARGO_BIN by grepping the next line after '[[bin]]' is brittle to field reordering; replace the grep -A1 + sed approach with an AWK-based extraction that scans Cargo.toml for the next section header '[[bin]]' and then searches subsequent lines until it finds a name = "..." entry, assigning that value to CARGO_BIN; keep the existing MANIFEST_PRIMARY comparison logic intact so the if [ "$MANIFEST_PRIMARY" != "$CARGO_BIN" ] check still works.

src/config/parser.rs (1)
398-402: Note: `toml::to_string_pretty` will reformat the config file.

Unlike `save_default_workspace_to`, which preserves comments and formatting through targeted text replacement, `modify_registry_at` uses `toml::to_string_pretty`, which will strip comments and reorder keys. This is likely acceptable for registry operations but worth being aware of.
Verify each finding against the current code and only fix it if needed. In `@src/config/parser.rs` around lines 398 - 402, The code in modify_registry_at uses toml::to_string_pretty(&doc) which will reformat the file and drop comments/ordering; instead serialize the existing toml_edit::Document back to its preserved representation (e.g., use doc.to_string() or the Document API that preserves formatting) and then write that string to config_path so comments and original ordering are retained; replace the toml::to_string_pretty(&doc) call with doc.to_string() (or equivalent toml_edit serialization) and keep the std::fs::write(config_path, ...) logic.

src/config/workspace_store_tests.rs (1)
43-67: Prefer root-derived expected paths over hard-coded `/tmp` literals.

This keeps the test portable and focused on derivation logic rather than Unix-specific path formatting.

💡 Suggested portable test rewrite

```diff
 fn dot_dir_cache_and_config_paths_are_derived_from_root() {
-    let root = Path::new("/tmp/my-workspace");
+    let temp = tempfile::tempdir().unwrap();
+    let root = temp.path().join("my-workspace");

-    let dot_dir = WorkspaceStore::dot_dir(root);
-    let config = WorkspaceStore::config_path(root);
-    let cache = WorkspaceStore::cache_path(root);
-    let history = WorkspaceStore::sync_history_path(root);
+    let dot_dir = WorkspaceStore::dot_dir(&root);
+    let config = WorkspaceStore::config_path(&root);
+    let cache = WorkspaceStore::cache_path(&root);
+    let history = WorkspaceStore::sync_history_path(&root);

-    assert_eq!(
-        dot_dir,
-        std::path::PathBuf::from("/tmp/my-workspace/.git-same")
-    );
-    assert_eq!(
-        config,
-        std::path::PathBuf::from("/tmp/my-workspace/.git-same/config.toml")
-    );
-    assert_eq!(
-        cache,
-        std::path::PathBuf::from("/tmp/my-workspace/.git-same/cache.json")
-    );
-    assert_eq!(
-        history,
-        std::path::PathBuf::from("/tmp/my-workspace/.git-same/sync-history.json")
-    );
+    assert_eq!(dot_dir, root.join(".git-same"));
+    assert_eq!(config, root.join(".git-same").join("config.toml"));
+    assert_eq!(cache, root.join(".git-same").join("cache.json"));
+    assert_eq!(history, root.join(".git-same").join("sync-history.json"));
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/config/workspace_store_tests.rs` around lines 43 - 67, Test currently asserts against hard-coded "/tmp" literals; replace those with paths derived from the test `root` variable so the expectations are portable. Update the assertions to build expected PathBufs from `root` (e.g., join `root` with ".git-same", then with "config.toml", "cache.json", "sync-history.json") and compare those to the results of WorkspaceStore::dot_dir, WorkspaceStore::config_path, WorkspaceStore::cache_path, and WorkspaceStore::sync_history_path respectively so the test uses root-derived expected values instead of Unix-specific strings.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/S2-Release-GitHub.yml:
- Around line 119-122: The release job build-release-assets currently gates on
[test, coverage, audit, workflow-secret-safety] but omits the alias drift
validation; update the job's needs array to include the alias-drift check (e.g.,
add "alias-drift" or the repo's alias drift job name) so build-release-assets
(matrix.target) cannot run until alias-drift completes successfully.
In `@docs/plans/apple-code-signing.md`:
- Line 21: The document's step count is wrong: update the phrasing that
currently says “Add 3 new steps to the `build-release-assets` job” to match the
actual list (which defines Steps 1–4) so readers aren't confused; edit the
sentence near the "Rename binary" and "Upload artifact" references in
docs/plans/apple-code-signing.md to state the correct number of steps (e.g.,
“Add 1 new step” or “Add 4 steps” as appropriate) and ensure the conditional
macOS insertion is described consistently with the enumerated steps in the
`build-release-assets` job.
In `@docs/plans/github-settings-links.md`:
- Around line 221-223: Update Phase 3 step 2 to reflect the repository's actual
workflow model: state that S2 must be started via manual workflow_dispatch
(manual trigger) and that S2's jobs run the S1 gate jobs to validate those gates
before any artifact build, i.e., explicitly require manually dispatching the S2
workflow and verifying that all S1 gate jobs complete successfully prior to
building release artifacts; reference S2 and S1 gates and the .github/workflows
manual trigger model in the text so the runbook is accurate and executable.
In `@src/commands/scan_tests.rs`:
- Line 80: Replace the string suffix check that uses
cfg.workspaces[0].ends_with("/scan-root/team/project") with a path-aware
assertion: convert the workspace string to a std::path::Path and use
Path::ends_with with a Path built from components (for example
Path::new("scan-root").join("team").join("project")) so the assertion is
separator-agnostic; update the assertion in scan_tests.rs to use
Path::new(&cfg.workspaces[0]).ends_with(<joined Path>) referencing
cfg.workspaces to locate the change.
In `@src/commands/scan.rs`:
- Around line 83-92: The reported summary uses new_count (pre-registration) so
successful registrations still show as "unregistered"; update the count after
each successful save/register by decrementing new_count (or incrementing the
registered count) or recompute registered vs unregistered from the updated found
collection after performing registration when args.register is true; locate the
code that attempts to save/register each workspace (the loop that calls the
save/register function) and adjust new_count on success (or recalc found.len() -
new_count from the post-save state) so the final "Found {} workspace(s): {}
registered, {} unregistered{}" message reflects the actual outcome.
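The count-after-registration fix could take the shape sketched below; `try_register` and `summarize` are hypothetical stand-ins for the real save/register loop in `src/commands/scan.rs`.

```rust
// Hypothetical registration that fails only for an obviously bad root.
fn try_register(root: &str) -> Result<(), String> {
    if root.is_empty() { Err("bad root".into()) } else { Ok(()) }
}

/// Returns (registered, unregistered), decrementing the "new" count as
/// registrations succeed so the summary reflects actual outcomes.
fn summarize(found: &[&str], register: bool) -> (usize, usize) {
    let mut new_count = found.len();
    if register {
        for root in found {
            if try_register(root).is_ok() {
                new_count -= 1; // this one is no longer unregistered
            }
        }
    }
    (found.len() - new_count, new_count)
}

fn main() {
    let found = ["/ws/a", "/ws/b", ""];
    let (registered, unregistered) = summarize(&found, true);
    // Two registrations succeeded, one failed: the summary now says so.
    assert_eq!((registered, unregistered), (2, 1));
}
```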
In `@src/config/workspace_store.rs`:
- Around line 163-187: The delete function currently removes the workspace
directory with std::fs::remove_dir_all then calls Config::remove_from_registry,
which can leave a stale registry entry if registry removal fails; change the
operation order in WorkspaceStore::delete so you call
Config::remove_from_registry(&tilde_collapse_path(root)) before calling
std::fs::remove_dir_all(&dot_dir) and return the registry error without deleting
files if it fails; alternatively (if you prefer keeping deletion guaranteed)
keep the current deletion order but catch and log a warning on
Config::remove_from_registry failure instead of returning an error so the stale
entry is tolerated (refer to delete, dot_dir, tilde_collapse_path,
remove_dir_all, Config::remove_from_registry and list()).
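The registry-first ordering could be sketched like this; `remove_from_registry` below is a hypothetical stand-in for `Config::remove_from_registry`.

```rust
use std::fs;
use std::path::Path;

// Hypothetical registry removal that fails for an empty entry.
fn remove_from_registry(entry: &str) -> Result<(), String> {
    if entry.is_empty() { Err("empty entry".into()) } else { Ok(()) }
}

fn delete_workspace(root: &Path, registry_entry: &str) -> Result<(), String> {
    // 1. Registry first: bail out before touching the filesystem on failure,
    //    so a stale entry can never point at already-deleted files.
    remove_from_registry(registry_entry)?;
    // 2. Only then delete the on-disk metadata directory.
    let dot_dir = root.join(".git-same");
    if dot_dir.exists() {
        fs::remove_dir_all(&dot_dir).map_err(|e| e.to_string())?;
    }
    Ok(())
}

fn main() {
    let root = std::env::temp_dir().join(format!("del-test-{}", std::process::id()));
    fs::create_dir_all(root.join(".git-same")).unwrap();
    assert!(delete_workspace(&root, "~/ws").is_ok());
    assert!(!root.join(".git-same").exists());

    // Registry failure leaves the files untouched.
    fs::create_dir_all(root.join(".git-same")).unwrap();
    assert!(delete_workspace(&root, "").is_err());
    assert!(root.join(".git-same").exists());
}
```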
In `@tests/integration_test.rs`:
- Around line 438-441: The assertion uses a Unix-style string suffix
(default_workspace.ends_with("/repos/team-a/work")) which is platform-dependent;
change it to use Path-based suffix checking by converting default_workspace to a
std::path::Path and calling Path::ends_with with a Path constructed from the
expected path components (e.g., Path::new("repos").join("team-a").join("work")
or Path::new("team-a").join("work")) so the comparison works by path components
across platforms; update the similar occurrence that compares the same suffix at
the other test location as well.
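The component-wise comparison referred to above works because `Path::ends_with` matches whole path components rather than raw characters; a minimal sketch (with illustrative paths):

```rust
use std::path::Path;

fn main() {
    // Build the expected suffix from components, never from "/"-joined text.
    let expected = Path::new("repos").join("team-a").join("work");
    let default_workspace = Path::new("/home/user").join(&expected);

    // Component-wise suffix match: portable across platform separators.
    assert!(default_workspace.ends_with(&expected));

    // A partial final component does not match: "a/work" != "team-a/work",
    // which str::ends_with on a joined string would wrongly accept.
    assert!(!default_workspace.ends_with(Path::new("a").join("work")));
}
```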
---
Duplicate comments:
In `@src/cache/discovery.rs`:
- Around line 113-118: The for_workspace function in DiscoveryCache is using
crate::config::WorkspaceStore::cache_path(root) which writes discovery cache
into workspace-local metadata; change it to use the global TTL cache directory
helper (e.g. crate::config::CacheStore::discovery_cache_path() or similarly
named Global/TTL cache accessor) so DiscoveryCache files live under the shared
TTL cache (~/.cache/git-same/) rather than <workspace>/.git-same; update the
call inside for_workspace (and its uses) to obtain the discovery cache path from
the global cache helper instead of WorkspaceStore::cache_path(root) and remove
the workspace-local dependency.
---
Nitpick comments:
In @.github/workflows/S1-Test-CI.yml:
- Around line 144-146: The script that sets CARGO_BIN by grepping the next line
after '[[bin]]' is brittle to field reordering; replace the grep -A1 + sed
approach with an AWK-based extraction that scans Cargo.toml for the next section
header '[[bin]]' and then searches subsequent lines until it finds a name =
"..." entry, assigning that value to CARGO_BIN; keep the existing
MANIFEST_PRIMARY comparison logic intact so the if [ "$MANIFEST_PRIMARY" !=
"$CARGO_BIN" ] check still works.
In @.github/workflows/S2-Release-GitHub.yml:
- Around line 89-118: Extract the shell-check block currently inside the job
named "workflow-secret-safety" and the step titled "Block known secret-leak
patterns in workflows" into a single reusable workflow or composite action,
preserving the three grep checks (credential-in-URL, direct echo of secrets, and
shell xtrace patterns) and their failure behavior; then replace this job/block
with a call to that reusable workflow/composite from this workflow so all
repositories can invoke the same implementation and avoid duplicated logic
drift.
In `@src/config/parser.rs`:
- Around line 398-402: The code in modify_registry_at uses
toml::to_string_pretty(&doc) which will reformat the file and drop
comments/ordering; instead serialize the existing toml_edit::Document back to
its preserved representation (e.g., use doc.to_string() or the Document API that
preserves formatting) and then write that string to config_path so comments and
original ordering are retained; replace the toml::to_string_pretty(&doc) call
with doc.to_string() (or equivalent toml_edit serialization) and keep the
std::fs::write(config_path, ...) logic.
In `@src/config/workspace_store_tests.rs`:
- Around line 43-67: Test currently asserts against hard-coded "/tmp" literals;
replace those with paths derived from the test `root` variable so the
expectations are portable. Update the assertions to build expected PathBufs from
`root` (e.g., join `root` with ".git-same", then with "config.toml",
"cache.json", "sync-history.json") and compare those to the results of
WorkspaceStore::dot_dir, WorkspaceStore::config_path,
WorkspaceStore::cache_path, and WorkspaceStore::sync_history_path respectively
so the test uses root-derived expected values instead of Unix-specific strings.
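The root-derived expectation style described above can be sketched in shell (the `.git-same` layout is taken from the comment; the variable names are stand-ins for the `WorkspaceStore` helpers):

```shell
# Derive every expected path from $root instead of hard-coding "/tmp/...".
root="$(mktemp -d)"
dot_dir="$root/.git-same"
config_path="$dot_dir/config.toml"
cache_path="$dot_dir/cache.json"
history_path="$dot_dir/sync-history.json"
printf '%s\n' "$dot_dir" "$config_path" "$cache_path" "$history_path"
```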
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (32)
- .claude/CLAUDE.md
- .github/dependabot.yml
- .github/workflows/S1-Test-CI.yml
- .github/workflows/S2-Release-GitHub.yml
- .github/workflows/S3-Publish-Homebrew.yml
- .github/workflows/S4-Publish-Crates.yml
- AGENTS.md
- docs/README.md
- docs/plans/apple-code-signing.md
- docs/plans/github-settings-links.md
- src/cache/discovery.rs
- src/commands/mod.rs
- src/commands/scan.rs
- src/commands/scan_tests.rs
- src/config/parser.rs
- src/config/parser_tests.rs
- src/config/workspace.rs
- src/config/workspace_store.rs
- src/config/workspace_store_tests.rs
- src/config/workspace_tests.rs
- src/setup/screens/mod.rs
- src/setup/screens/mod_tests.rs
- src/setup/state.rs
- src/tui/app.rs
- src/tui/app_tests.rs
- src/tui/event_tests.rs
- src/tui/handler.rs
- src/tui/handler_tests.rs
- src/tui/screens/settings.rs
- tests/integration_test.rs
- toolkit/conductor/archive.sh
- toolkit/conductor/run.sh
🚧 Files skipped from review as they are similar to previous changes (4)
- src/tui/app_tests.rs
- .github/workflows/S4-Publish-Crates.yml
- src/tui/app.rs
- src/setup/screens/mod.rs
Actionable comments posted: 3
🧹 Nitpick comments (5)
docs/plans/github-settings-links.md (1)
221-223: Optional wording polish to reduce repetitive sentence starts. Lines 221–223 all begin with “Trigger …”; a small rewrite would improve readability.
✏️ Suggested wording tweak
-2. Trigger S2 via `workflow_dispatch` and confirm all S1 gate jobs pass before artifact build starts.
-3. Trigger S3 via `workflow_dispatch` and verify environment approval for Homebrew publish.
-4. Trigger S4 via `workflow_dispatch` and verify environment approval for crates publish.
+2. Run S2 via `workflow_dispatch` and confirm all S1 gate jobs pass before artifact build starts.
+3. Run S3 via `workflow_dispatch` and verify environment approval for Homebrew publish.
+4. Run S4 via `workflow_dispatch` and verify environment approval for crates publish.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/plans/github-settings-links.md` around lines 221 - 223, Rephrase the three sequential bullets that all start with "Trigger ..." (steps S2, S3, S4 referencing workflow_dispatch) to vary sentence starts and improve flow; for example, start one with "Use workflow_dispatch to trigger S2 and confirm all S1 gate jobs pass before artifact build begins," another with "Run S3 via workflow_dispatch and verify environment approval for Homebrew publish," and the third with "Invoke S4 via workflow_dispatch and verify environment approval for crates publish," ensuring each retains the same checks (S1 gate verification and environment approvals) and the workflow_dispatch mention.
tests/integration_test.rs (1)
34-44: Prefer using production workspace path helpers to avoid test drift. Line 35 and Line 38 rebuild `.git-same/config.toml` inline. Since src/config/workspace_store.rs already exposes `dot_dir` and `config_path` (lines 26-33), reusing those helpers in integration tests would reduce schema/path drift risk over time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration_test.rs` around lines 34 - 44, The test helper write_workspace_config currently hardcodes ".git-same" and the config path; change it to call the production helpers from src/config/workspace_store.rs (use the exported dot_dir and config_path helpers) to compute the workspace metadata dir and config file path, then create the directory and write the same TOML to that computed config_path; update the function signature or add an import so write_workspace_config(root: &Path) delegates path construction to those helpers instead of building ".git-same" inline.
.github/workflows/S2-Release-GitHub.yml (2)
111-113: Use fixed-string matching for alias checks. Line 111 uses regex mode (`grep -q "$alias"`), so alias characters can be interpreted as regex and yield unreliable matches.
Proposed tweak
- if ! grep -q "$alias" .github/workflows/S3-Publish-Homebrew.yml; then
+ if ! grep -Fq -- "$alias" .github/workflows/S3-Publish-Homebrew.yml; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S2-Release-GitHub.yml around lines 111 - 113, The grep invocation using regex mode (grep -q "$alias") can misinterpret special characters in the "$alias" variable; change it to use fixed-string matching (use grep --fixed-strings / -F with -q and pass the pattern as an argument) when checking for "$alias" in S3-Publish-Homebrew.yml so the check treats the alias literally and cannot be tripped by regex metacharacters.
103-106: Make Cargo bin-name extraction robust to field ordering. Line 103 currently assumes `name` is immediately after `[[bin]]` (grep -A1), which can fail on valid Cargo.toml layouts.
Proposed hardening
- CARGO_BIN=$(grep -A1 '^\[\[bin\]\]' Cargo.toml | grep 'name' | sed 's/.*= *"\(.*\)"/\1/')
+ CARGO_BIN=$(awk '
+   /^\[\[bin\]\]/ { in_bin=1; next }
+   /^\[/ { if (in_bin) exit }
+   in_bin && $1=="name" {
+     match($0, /"([^"]+)"/, m); print m[1]; exit
+   }
+ ' Cargo.toml)
+ [ -n "$CARGO_BIN" ] || { echo "ERROR: Could not resolve bin name from Cargo.toml"; exit 1; }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S2-Release-GitHub.yml around lines 103 - 106, The current CARGO_BIN extraction (the grep -A1 ... | grep 'name' pipeline) assumes the name field is immediately after the [[bin]] header and fails if fields are reordered; update the extraction so it finds the first [[bin]] block and then searches subsequent lines until the next section header to extract the name key robustly (e.g., use awk/perl/sed to locate the line matching '^\[\[bin\]\]' and then scan forward for a line like 'name = "..."' and pull the value), keeping the MANIFEST_PRIMARY comparison logic the same so the check comparing CARGO_BIN and MANIFEST_PRIMARY remains valid.
src/config/workspace_store.rs (1)
49-49: Use the shared `Result` type alias consistently. These signatures use `Result<_, AppError>` directly; switching to the crate-level `Result<T>` keeps error typing consistent across modules.
As per coding guidelines: "Use `thiserror` for typed errors and `anyhow` for error propagation; define custom `Result` type alias in `errors/` module".
Also applies to: 65-65, 74-74, 133-133, 166-166, 194-194
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/workspace_store.rs` at line 49, Several functions in workspace_store.rs (including the public load function and the other methods returning Result<_, AppError>) use Result<_, AppError> directly; replace those signatures to use the crate-level Result<T> alias instead. Update the function signatures (e.g., pub fn load(root: &Path) -> Result<WorkspaceConfig>) and any other methods that currently return Result<..., AppError> to return Result<T>, add the appropriate use/import for the crate-level Result alias (e.g., use crate::errors::Result;) at the top of the file, and ensure all return points still produce the same AppError types so the alias compiles.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/plans/apple-code-signing.md`:
- Line 81: The doc is ambiguous about APPLE_SIGNING_IDENTITY format and can
cause double-prefixing with the "Developer ID Application:" prefix; update the
description for APPLE_SIGNING_IDENTITY to explicitly state it should be the name
+ team ID only (e.g., "Your Name (TEAMID)") without the "Developer ID
Application:" prefix, or alternatively change the example/command to show the
full identity including the "Developer ID Application: Your Name (TEAMID)" form
so readers know which exact value to use; reference APPLE_SIGNING_IDENTITY and
the existing "Developer ID Application:" prefix in your edit to make the
intended format unambiguous.
In `@src/commands/scan.rs`:
- Around line 31-35: The code currently masks Config::load_from / Config::load
errors by calling unwrap_or_default() when building `global`; change this to
fail fast: call the appropriate load function, propagate or handle the Result
instead of defaulting (e.g., return Err or exit with a clear error message), so
parsing/load failures for `global` are reported and stop execution; locate the
match that assigns `global` (uses Config::load_from, Config::load, and
unwrap_or_default) and replace the unwrap_or_default logic with proper error
handling that surfaces the underlying error.
In `@src/config/workspace_store.rs`:
- Around line 166-178: Normalize the incoming root path at the start of
WorkspaceStore::delete so the registry removal uses the same canonicalized key
as storage; instead of calling tilde_collapse_path(root) after checking dot_dir,
first produce a normalized/canonical path (e.g., expand tilde and canonicalize)
from the provided root and use that normalized path when calling
Config::remove_from_registry(&tilde_path) (and when computing dot_dir via
Self::dot_dir if necessary) so relative or non-canonical inputs match stored
registry entries.
---
Nitpick comments:
In @.github/workflows/S2-Release-GitHub.yml:
- Around line 111-113: The grep invocation using regex mode (grep -q "$alias")
can misinterpret special characters in the "$alias" variable; change it to use
fixed-string matching (use grep --fixed-strings / -F with -q and pass the
pattern as an argument) when checking for "$alias" in S3-Publish-Homebrew.yml so
the check treats the alias literally and cannot be tripped by regex
metacharacters.
- Around line 103-106: The current CARGO_BIN extraction (the grep -A1 ... | grep
'name' pipeline) assumes the name field is immediately after the [[bin]] header
and fails if fields are reordered; update the extraction so it finds the first
[[bin]] block and then searches subsequent lines until the next section header
to extract the name key robustly (e.g., use awk/perl/sed to locate the line
matching '^\[\[bin\]\]' and then scan forward for a line like 'name = "..."' and
pull the value), keeping the MANIFEST_PRIMARY comparison logic the same so the
check comparing CARGO_BIN and MANIFEST_PRIMARY remains valid.
In `@docs/plans/github-settings-links.md`:
- Around line 221-223: Rephrase the three sequential bullets that all start with
"Trigger ..." (steps S2, S3, S4 referencing workflow_dispatch) to vary sentence
starts and improve flow; for example, start one with "Use workflow_dispatch to
trigger S2 and confirm all S1 gate jobs pass before artifact build begins,"
another with "Run S3 via workflow_dispatch and verify environment approval for
Homebrew publish," and the third with "Invoke S4 via workflow_dispatch and
verify environment approval for crates publish," ensuring each retains the same
checks (S1 gate verification and environment approvals) and the
workflow_dispatch mention.
In `@src/config/workspace_store.rs`:
- Line 49: Several functions in workspace_store.rs (including the public load
function and the other methods returning Result<_, AppError>) use Result<_,
AppError> directly; replace those signatures to use the crate-level Result<T>
alias instead. Update the function signatures (e.g., pub fn load(root: &Path) ->
Result<WorkspaceConfig>) and any other methods that currently return Result<...,
AppError> to return Result<T>, add the appropriate use/import for the
crate-level Result alias (e.g., use crate::errors::Result;) at the top of the
file, and ensure all return points still produce the same AppError types so the
alias compiles.
In `@tests/integration_test.rs`:
- Around line 34-44: The test helper write_workspace_config currently hardcodes
".git-same" and the config path; change it to call the production helpers from
src/config/workspace_store.rs (use the exported dot_dir and config_path helpers)
to compute the workspace metadata dir and config file path, then create the
directory and write the same TOML to that computed config_path; update the
function signature or add an import so write_workspace_config(root: &Path)
delegates path construction to those helpers instead of building ".git-same"
inline.
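The AWK-based CARGO_BIN extraction suggested above can be exercised against a sample manifest. This sketch uses POSIX RSTART/RLENGTH instead of gawk's three-argument match(); the temp path and manifest contents are illustrative:

```shell
# A manifest where `name` is NOT the line right after [[bin]].
cat > /tmp/sample-cargo.toml <<'EOF'
[package]
name = "not-the-bin"

[[bin]]
path = "src/main.rs"
name = "git-same"
EOF

CARGO_BIN=$(awk '
  /^\[\[bin\]\]/ { in_bin = 1; next }     # enter the first [[bin]] block
  in_bin && /^\[/ { exit }                # stop at the next section header
  in_bin && $1 == "name" {
    if (match($0, /"[^"]+"/)) {           # POSIX match sets RSTART/RLENGTH
      print substr($0, RSTART + 1, RLENGTH - 2)
      exit
    }
  }
' /tmp/sample-cargo.toml)
echo "$CARGO_BIN"
```

A `grep -A1` pipeline would come up empty on this manifest because `path` precedes `name`; the block scan is order-independent.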
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- .github/workflows/S2-Release-GitHub.yml
- docs/plans/apple-code-signing.md
- docs/plans/github-settings-links.md
- src/commands/scan.rs
- src/commands/scan_tests.rs
- src/config/workspace_store.rs
- src/config/workspace_store_tests.rs
- tests/integration_test.rs
Actionable comments posted: 1
🧹 Nitpick comments (4)
.github/workflows/S2-Release-GitHub.yml (1)
78-135: Consider extracting duplicated jobs into reusable workflows. The `alias-drift-check` and `workflow-secret-safety` jobs are identical to those in S1-Test-CI.yml. This duplication increases maintenance burden: any fix or enhancement must be applied to both files. GitHub Actions supports reusable workflows that would allow S1 and S2 to call a shared workflow for these checks.
♻️ Example: Extract to reusable workflow
Create .github/workflows/reusable-validation-checks.yml:
name: Validation Checks
on:
  workflow_call:
jobs:
  alias-drift-check:
    # ... existing job definition ...
  workflow-secret-safety:
    # ... existing job definition ...
Then in S1 and S2:
validation-checks:
  uses: ./.github/workflows/reusable-validation-checks.yml
Also applies to: 147-175
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S2-Release-GitHub.yml around lines 78 - 135, Extract the duplicated jobs into a reusable workflow: create a new workflow (e.g. reusable-validation-checks.yml) with on: workflow_call and move the jobs alias-drift-check and workflow-secret-safety into it, keeping their step definitions intact; then replace the inline job definitions in both S2-Release-GitHub.yml and S1-Test-CI.yml with a single job that uses: ./.github/workflows/reusable-validation-checks.yml (invoking any required inputs/secrets if needed) so both files call the shared workflow instead of duplicating the logic..github/workflows/S1-Test-CI.yml (1)
145-152: The awk script may fail silently if bin name uses single quotes. The regex `/"([^"]+)"/` only matches double-quoted strings. If Cargo.toml uses single quotes (valid TOML), CARGO_BIN will be empty and the check will fail with a generic error rather than identifying the actual issue.
This is unlikely to occur in practice since Cargo.toml typically uses double quotes, but worth noting.
♻️ Optional: Handle both quote styles
  CARGO_BIN=$(awk '
    /^\[\[bin\]\]/ { in_bin=1; next }
    in_bin && /^\[/ { exit }
    in_bin && $1 == "name" {
-     match($0, /"([^"]+)"/, m)
-     if (m[1] != "") { print m[1]; exit }
+     # Handle both double and single quotes
+     if (match($0, /"([^"]+)"/, m) || match($0, /'"'"'([^'"'"']+)'"'"'/, m)) {
+       if (m[1] != "") { print m[1]; exit }
+     }
    }
  ' Cargo.toml)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S1-Test-CI.yml around lines 145 - 152, The awk snippet that sets CARGO_BIN uses the regex /"([^"]+)"/ which only matches double-quoted names so it can return empty if Cargo.toml uses single quotes; update the awk pattern to accept both single and double quotes (e.g. match either " or ' and capture the inner text) and ensure the match extraction uses that same group (replace /"([^"]+)"/ with a regex like /["']([^"']+)["']/ in the CARGO_BIN assignment) so single-quoted bin names are handled correctly.
src/config/workspace_store.rs (1)
219-259: Rollback helper ignores `remove_dir` failures for non-empty directories. If the rollback removes the config file but other files were added to `.git-same/` concurrently, `remove_dir` will fail with DirectoryNotEmpty. The warning is only logged for unexpected errors, but DirectoryNotEmpty would be logged as a warning when it might actually be expected in certain edge cases.
💡 Optional: Also ignore DirectoryNotEmpty
  if remove_dot_dir {
      if let Err(e) = std::fs::remove_dir(dot_dir) {
-         if e.kind() != std::io::ErrorKind::NotFound {
+         if !matches!(e.kind(), std::io::ErrorKind::NotFound | std::io::ErrorKind::DirectoryNotEmpty) {
              tracing::warn!(
                  path = %dot_dir.display(),
                  error = %e,
                  "Failed to remove workspace directory during rollback"
              );
          }
      }
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/workspace_store.rs` around lines 219 - 259, In rollback_workspace_write change the error handling for std::fs::remove_dir(dot_dir) so it also treats std::io::ErrorKind::DirectoryNotEmpty as benign (like NotFound) instead of logging a warning; update the if condition that currently checks e.kind() != std::io::ErrorKind::NotFound to check both NotFound and DirectoryNotEmpty and only emit tracing::warn! when the error kind is neither, keeping the rest of the function (and variable names config_path, previous_config_content, dot_dir, remove_dot_dir) unchanged.
src/commands/scan.rs (1)
10-16: Consider handling `canonicalize` failure more explicitly. Line 16 silently falls back to the original path if canonicalization fails. While this is reasonable for non-existent paths, it may hide permission errors or other filesystem issues during scanning.
💡 Optional: Log when canonicalization fails
- let root = std::fs::canonicalize(&root).unwrap_or(root);
+ let root = std::fs::canonicalize(&root).unwrap_or_else(|e| {
+     tracing::debug!(path = %root.display(), error = %e, "Could not canonicalize scan root");
+     root
+ });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/scan.rs` around lines 10 - 16, The current run function silently falls back to the original root when std::fs::canonicalize(&root) fails; change this to handle errors explicitly in the run function: call std::fs::canonicalize(&root) and match on the Result, keeping the original path only for NotFound (or other benign cases) but for other IO errors return a proper Err or emit a clear log via the provided Output (or include the io::Error in the returned error) so permission/IO problems aren't swallowed; update the code around the canonicalize call in run (referencing the root variable, args.path and the canonicalize call) to perform this conditional handling and surface non-benign errors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/commands/scan.rs`:
- Around line 119-150: scan_recursive is vulnerable to symlink cycles because it
follows directory symlinks via entry.path(); change the directory check to use
the DirEntry's file_type() (e.g., entry.file_type()?.is_dir()) so you only treat
true directories (not symlinked targets) as recursion candidates, and
additionally track visited inodes (device+inode via metadata().dev()/ino() or
canonicalized paths) in a HashSet to skip revisiting the same directory; update
the loop in scan_recursive (and any early returns) to use
WorkspaceStore::config_path(dir) as before but only push canonicalized paths
after confirming they are real directories and not previously seen to prevent
cycles.
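The cycle risk is easy to reproduce with a symlink loop. This shell sketch (paths illustrative) shows the analogous defense: `find` with its default `-P` behavior classifies the symlink as a link rather than following it, much as checking `DirEntry::file_type()` before recursing would, so the walk terminates:

```shell
BASE="${TMPDIR:-/tmp}/scan-cycle-demo"
rm -rf "$BASE"
mkdir -p "$BASE/ws1/.git-same"
touch "$BASE/ws1/.git-same/config.toml"
# Create a cycle: ws1/loop points back at the scan root.
ln -s "$BASE" "$BASE/ws1/loop"
# Under the default -P, -type d never matches the symlink, so no infinite descent.
COUNT=$(find "$BASE" -maxdepth 3 -type d -name .git-same | wc -l)
echo "$COUNT"
```

In the Rust code the visited-inode set is the belt-and-braces addition on top of the file-type check, covering hard-link and bind-mount aliases that a type check alone would miss.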
---
Nitpick comments:
In @.github/workflows/S1-Test-CI.yml:
- Around line 145-152: The awk snippet that sets CARGO_BIN uses the regex
/"([^"]+)"/ which only matches double-quoted names so it can return empty if
Cargo.toml uses single quotes; update the awk pattern to accept both single and
double quotes (e.g. match either " or ' and capture the inner text) and ensure
the match extraction uses that same group (replace /"([^"]+)"/ with a regex like
/["']([^"']+)["']/ in the CARGO_BIN assignment) so single-quoted bin names are
handled correctly.
In @.github/workflows/S2-Release-GitHub.yml:
- Around line 78-135: Extract the duplicated jobs into a reusable workflow:
create a new workflow (e.g. reusable-validation-checks.yml) with on:
workflow_call and move the jobs alias-drift-check and workflow-secret-safety
into it, keeping their step definitions intact; then replace the inline job
definitions in both S2-Release-GitHub.yml and S1-Test-CI.yml with a single job
that uses: ./.github/workflows/reusable-validation-checks.yml (invoking any
required inputs/secrets if needed) so both files call the shared workflow
instead of duplicating the logic.
In `@src/commands/scan.rs`:
- Around line 10-16: The current run function silently falls back to the
original root when std::fs::canonicalize(&root) fails; change this to handle
errors explicitly in the run function: call std::fs::canonicalize(&root) and
match on the Result, keeping the original path only for NotFound (or other
benign cases) but for other IO errors return a proper Err or emit a clear log
via the provided Output (or include the io::Error in the returned error) so
permission/IO problems aren't swallowed; update the code around the canonicalize
call in run (referencing the root variable, args.path and the canonicalize call)
to perform this conditional handling and surface non-benign errors.
In `@src/config/workspace_store.rs`:
- Around line 219-259: In rollback_workspace_write change the error handling for
std::fs::remove_dir(dot_dir) so it also treats
std::io::ErrorKind::DirectoryNotEmpty as benign (like NotFound) instead of
logging a warning; update the if condition that currently checks e.kind() !=
std::io::ErrorKind::NotFound to check both NotFound and DirectoryNotEmpty and
only emit tracing::warn! when the error kind is neither, keeping the rest of the
function (and variable names config_path, previous_config_content, dot_dir,
remove_dot_dir) unchanged.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- .github/workflows/S1-Test-CI.yml
- .github/workflows/S2-Release-GitHub.yml
- docs/plans/apple-code-signing.md
- src/commands/scan.rs
- src/commands/scan_tests.rs
- src/config/workspace_store.rs
- src/config/workspace_store_tests.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/plans/apple-code-signing.md
- src/commands/scan_tests.rs
Actionable comments posted: 3
🧹 Nitpick comments (1)
.github/workflows/S1-Test-CI.yml (1)
193-196: Generalize credential-in-URL detection beyond github.com. The current rule only catches credentials embedded for GitHub URLs. Expanding to any URL userinfo format would improve coverage.
Suggested refactor
- if grep -RInE 'https://[^/@[:space:]]+@github[.]com' .github/workflows; then
+ if grep -RInE 'https?://[^/@[:space:]]+@[^[:space:]/]+' .github/workflows; then
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S1-Test-CI.yml around lines 193 - 196, The current grep only matches credential-in-URL for GitHub; update the pattern to detect any URL containing userinfo (credentials) by replacing 'https://[^/@[:space:]]+@github[.]com' with a broader regex such as 'https?://[^/@[:space:]]+@' (or '://[^/@[:space:]]+@' if you want to catch non-HTTP schemes) in the grep command so the workflow step flags any URL with embedded credentials in .github/workflows.
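The broadened userinfo pattern from the suggestion above can be spot-checked against sample lines (the file path and URLs are illustrative):

```shell
printf '%s\n' \
  'url: https://user:token@github.com/org/repo.git' \
  'url: https://deploy:pw@gitlab.example.com/repo.git' \
  'url: https://github.com/org/repo.git' > /tmp/url-samples.txt

# Flags any scheme://userinfo@host form, not just github.com.
MATCHES=$(grep -cE 'https?://[^/@[:space:]]+@[^[:space:]/]+' /tmp/url-samples.txt)
echo "$MATCHES"
```

The credential-free URL on the last line does not match, so the check stays quiet for normal remotes.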
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/S1-Test-CI.yml:
- Around line 130-140: Change how BIN_COUNT is computed so the grep invocation
is resilient to set -e and matches leading whitespace before [[bin]];
specifically update the BIN_COUNT assignment that uses grep -c (the current
pattern '^\[\[bin\]\]') to use a regex that permits leading whitespace (e.g.,
'^[[:space:]]*\[\[bin\]\]') and ensure the grep command cannot cause the script
to exit under set -e by allowing a non-zero grep exit to be ignored (for example
via "|| true").
In @.github/workflows/S2-Release-GitHub.yml:
- Around line 174-177: The current grep check (the grep -RInE line) only detects
`-x` as a separate token and misses combined flag groups like `set -ex` or `bash
-eux`; update the regex to match `-x` occurring anywhere inside a flag group
(e.g. change the pattern to detect `set` or `bash` followed by `-` and any
letters containing `x`, such as using `[[:alpha:]]*x[[:alpha:]]*` in the grep
-RInE expression) so commands like `set -ex`, `set -xeu` or `bash -euxo` are
flagged and the script exits as intended.
In `@src/commands/scan.rs`:
- Around line 12-29: The code currently swallows errors from
std::fs::canonicalize(&root) causing scan_for_workspaces to run on an invalid
path and yield a misleading "No workspaces found." message; change the behavior
in the block that sets root so that if std::fs::canonicalize(&root) returns Err
you return an error (or call output.error and return Err) instead of falling
back to the original root — update the logic around args.path,
std::fs::canonicalize, and the subsequent call to scan_for_workspaces to
validate and fail fast on unreadable/invalid scan roots and include the
canonicalization error in the returned/printed message for debugging.
---
Nitpick comments:
In @.github/workflows/S1-Test-CI.yml:
- Around line 193-196: The current grep only matches credential-in-URL for
GitHub; update the pattern to detect any URL containing userinfo (credentials)
by replacing 'https://[^/@[:space:]]+@github[.]com' with a broader regex such as
'https?://[^/@[:space:]]+@' (or '://[^/@[:space:]]+@' if you want to catch
non-HTTP schemes) in the grep command so the workflow step flags any URL with
embedded credentials in .github/workflows.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .github/workflows/S1-Test-CI.yml
- .github/workflows/S2-Release-GitHub.yml
- src/commands/scan.rs
- src/commands/scan_tests.rs
- src/discovery.rs
- src/discovery_tests.rs
set -e

# Manifest exists
if [ ! -f toolkit/packaging/binary-aliases.txt ]; then
  echo "ERROR: toolkit/packaging/binary-aliases.txt not found"; exit 1
fi

# Only 1 [[bin]] in Cargo.toml
BIN_COUNT=$(grep -c '^\[\[bin\]\]' Cargo.toml)
if [ "$BIN_COUNT" -ne 1 ]; then
  echo "ERROR: Cargo.toml has $BIN_COUNT [[bin]] entries, expected 1"; exit 1
Make the [[bin]] count check resilient under set -e.
With set -e, grep -c can exit early in the zero-match case before your custom error message is emitted, and the pattern misses leading whitespace before [[bin]].
Suggested fix
- BIN_COUNT=$(grep -c '^\[\[bin\]\]' Cargo.toml)
+ BIN_COUNT=$(grep -c '^[[:space:]]*\[\[bin\]\]' Cargo.toml || true)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/S1-Test-CI.yml around lines 130 - 140, Change how
BIN_COUNT is computed so the grep invocation is resilient to set -e and matches
leading whitespace before [[bin]]; specifically update the BIN_COUNT assignment
that uses grep -c (the current pattern '^\[\[bin\]\]') to use a regex that
permits leading whitespace (e.g., '^[[:space:]]*\[\[bin\]\]') and ensure the
grep command cannot cause the script to exit under set -e by allowing a non-zero
grep exit to be ignored (for example via "|| true").
if grep -RInE '(^|[[:space:];])set[[:space:]]+-x([[:space:];]|$)|bash[[:space:]]+-x' .github/workflows; then
  echo "ERROR: Shell xtrace detected in workflow files."
  exit 1
fi
Xtrace detection misses combined flag patterns.
The regex requires -x as a standalone flag but misses common patterns like set -ex, set -xe, set -xeu, or set -euxo pipefail where -x is combined with other flags. These would bypass the check while still enabling xtrace, potentially leaking secrets in CI logs.
🔧 Proposed fix to catch combined flags
# Prevent shell xtrace in workflow scripts.
- if grep -RInE '(^|[[:space:];])set[[:space:]]+-x([[:space:];]|$)|bash[[:space:]]+-x' .github/workflows; then
+ if grep -RInE '(^|[[:space:];])set[[:space:]]+-[a-wyzA-WYZ]*x|bash[[:space:]]+-[a-wyzA-WYZ]*x' .github/workflows; then
echo "ERROR: Shell xtrace detected in workflow files."
exit 1
fi
This pattern matches -x appearing anywhere in a flag group (e.g., -ex, -xeu, -euxo).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/S2-Release-GitHub.yml around lines 174 - 177, The current
grep check (the grep -RInE line) only detects `-x` as a separate token and
misses combined flag groups like `set -ex` or `bash -eux`; update the regex to
match `-x` occurring anywhere inside a flag group (e.g. change the pattern to
detect `set` or `bash` followed by `-` and any letters containing `x`, such as
using `[[:alpha:]]*x[[:alpha:]]*` in the grep -RInE expression) so commands like
`set -ex`, `set -xeu` or `bash -euxo` are flagged and the script exits as
intended.
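A quick spot-check of a flag-group variant of the widened pattern (sample file and lines are illustrative):

```shell
printf '%s\n' 'set -ex' 'set -x' 'bash -euxo pipefail' 'set -eu' > /tmp/xtrace-samples.txt

# -x is caught anywhere inside a combined flag group.
PATTERN='(^|[[:space:];])(set|bash)[[:space:]]+-[[:alpha:]]*x'
COUNT=$(grep -cE "$PATTERN" /tmp/xtrace-samples.txt)
echo "$COUNT"
```

`set -eu` on the last line is the only sample that should pass the check, and it does.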
let root = match &args.path {
    Some(p) => p.clone(),
    None => std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
};

let root = std::fs::canonicalize(&root).unwrap_or(root);
output.info(&format!(
    "Scanning {} (depth {})",
    root.display(),
    args.depth
));

let found = scan_for_workspaces(&root, args.depth);

if found.is_empty() {
    output.info("No .git-same/ workspaces found.");
    return Ok(());
}
Fail fast on invalid/unreadable scan root instead of reporting “No workspaces found.”
At Line 17, canonicalization failure falls back silently, and later paths can produce a false “no workspaces” success path. This masks bad user input and permission issues.
💡 Proposed fix
pub fn run(args: &ScanArgs, config_path: Option<&Path>, output: &Output) -> Result<()> {
let root = match &args.path {
Some(p) => p.clone(),
- None => std::env::current_dir().unwrap_or_else(|_| PathBuf::from(".")),
+ None => std::env::current_dir().map_err(|e| {
+ AppError::config(format!("Failed to resolve current directory: {}", e))
+ })?,
};
- let root = std::fs::canonicalize(&root).unwrap_or(root);
+ let root = std::fs::canonicalize(&root).map_err(|e| {
+ AppError::config(format!("Failed to access scan root {}: {}", root.display(), e))
+ })?;
+ if !root.is_dir() {
+ return Err(AppError::config(format!(
+ "Scan root is not a directory: {}",
+ root.display()
+ )));
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/commands/scan.rs` around lines 12 - 29, The code currently swallows
errors from std::fs::canonicalize(&root) causing scan_for_workspaces to run on
an invalid path and yield a misleading "No workspaces found." message; change
the behavior in the block that sets root so that if std::fs::canonicalize(&root)
returns Err you return an error (or call output.error and return Err) instead of
falling back to the original root — update the logic around args.path,
std::fs::canonicalize, and the subsequent call to scan_for_workspaces to
validate and fail fast on unreadable/invalid scan roots and include the
canonicalization error in the returned/printed message for debugging.
This PR moves workspace and provider configuration into local workspace-scoped storage and updates reset/workspace command behavior accordingly.
It reworks setup and TUI initialization by removing the old welcome/system-check screens, adding requirements checks, and refining workspace/path interactions.
It introduces a new scan command with related workflow/status wiring and expands test coverage across config, setup, TUI, and integration paths.
It also updates CI/release workflows, renames toolkit scripts to toolkit/conductor, and cleans up docs/plans while adding binary alias packaging metadata.
Summary by CodeRabbit
New Features
Improvements
Documentation
Version