Caution: Review failed — the pull request is closed. Recent review info — configuration used: Organization UI, review profile: CHILL, plan: Pro. 📒 Files selected for processing: 31
Walkthrough

Adds a full Rust CLI/TUI application "git-same" implementing authentication, configuration, provider discovery, shell-backed Git operations, parallel clone/sync managers, a TUI and setup wizard, caching, extensive error types, progress reporting, tests, docs, and CI/release pipelines.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
🚥 Pre-merge checks: 2 passed ✅, 1 failed ❌ (inconclusive)
Pull request overview
This pull request adds comprehensive caching, SSH key detection, and clone CLI options to the git-same tool. It's a major enhancement that introduces discovery caching to avoid API rate limits, improves GitHub pagination with retry logic, and expands authentication options.
Changes:
- Added discovery caching with TTL-based validation to reduce API calls
- Implemented SSH key detection and authentication flow improvements
- Enhanced pagination with exponential backoff and rate limit handling
- Added comprehensive CLI options for clone, fetch, pull, and status commands
Reviewed changes
Copilot reviewed 42 out of 45 changed files in this pull request and generated 5 comments.
Summary per file:
| File | Description |
|---|---|
| tests/integration_test.rs | New integration tests for CLI commands with dry-run and help verification |
| src/types/*.rs | Core type definitions for repositories, organizations, and operation results |
| src/provider/*.rs | Provider trait and GitHub API implementation with pagination |
| src/git/*.rs | Git operations abstraction with shell-based implementation |
| src/errors/*.rs | Comprehensive error handling with user-friendly suggestions |
| src/config/*.rs | TOML-based configuration with validation |
| src/auth/*.rs | Multi-method authentication (gh CLI, env vars, SSH detection) |
| src/cache/mod.rs | JSON-based discovery cache with TTL |
| src/output/*.rs | Progress bars and formatted output using indicatif |
| src/discovery/mod.rs | Repository discovery orchestration and planning |
| src/main.rs | Main CLI entry point with command routing |
| Cargo.toml | Dependencies and binary configurations |
| README.md | Comprehensive documentation with examples |
| CHANGELOG.md | Release notes for v0.2.0 |
src/errors/provider.rs (Outdated)

```rust
pub fn suggested_action(&self) -> &'static str {
    match self {
        ProviderError::Authentication(_) => {
            "Run 'gh auth login' to re-authenticate, or check your GITHUB_TOKEN"
```
The error message uses hardcoded suggestion for GitHub CLI that won't work for other providers. The suggested action should be context-aware based on which provider failed authentication.
```diff
- "Run 'gh auth login' to re-authenticate, or check your GITHUB_TOKEN"
+ "Re-authenticate with your Git provider or verify that your access token or credentials are configured correctly"
```
src/config/parser.rs (Outdated)

```rust
if self.concurrency == 0 || self.concurrency > 32 {
    return Err(AppError::config("concurrency must be between 1 and 32"));
```
The validation rejects concurrency values greater than 32, but there's no technical justification for this limit. Modern systems can handle more concurrent operations. Consider increasing this limit or making it configurable, or at minimum document why 32 is the maximum.
src/auth/ssh.rs (Outdated)

```rust
pub fn has_github_ssh_access() -> bool {
    // Try to test SSH connection to GitHub
    let output = Command::new("ssh")
        .args(["-T", "git@github.com", "-o", "StrictHostKeyChecking=no"])
```
The SSH command includes -o StrictHostKeyChecking=no which disables host key verification. This is a security risk as it makes the connection vulnerable to man-in-the-middle attacks. Instead, accept the default behavior or use StrictHostKeyChecking=accept-new to only accept new hosts.
```diff
- .args(["-T", "git@github.com", "-o", "StrictHostKeyChecking=no"])
+ .args(["-T", "-o", "StrictHostKeyChecking=accept-new", "git@github.com"])
```
```rust
    Err(AppError::auth(format!(
        "No GitHub authentication found.\n\n\
        Please authenticate using one of these methods:\n\n\
        1. GitHub CLI (recommended):\n  \
        gh auth login\n\n\
        2. Environment variable:\n  \
        export GITHUB_TOKEN=ghp_xxxx\n\
        {}\n\
        For more info: https://cli.github.com/manual/gh_auth_login",
        ssh_note
    )))
}
```
The authentication error message includes instructions for GitHub CLI but doesn't account for GitHub Enterprise users who might need different authentication setup. The message should be more generic or conditionally include provider-specific instructions.
src/cache/mod.rs (Outdated)

```rust
    .unwrap()
    .as_secs();

let age = now.saturating_sub(self.last_discovery);
```
The cache age calculation uses saturating_sub which is good for preventing underflow, but the comparison age < ttl.as_secs() could still have issues if system time moves backward. Consider adding additional validation or using monotonic time sources for cache validity checks.
```diff
- let age = now.saturating_sub(self.last_discovery);
+ // If the system clock moved backwards, treat the cache as invalid.
+ if now < self.last_discovery {
+     return false;
+ }
+ let age = now - self.last_discovery;
```
Actionable comments posted: 19
🤖 Fix all issues with AI agents
In `@CHANGELOG.md`:
- Line 83: The changelog entry header "## [0.1.0] - 2024-01-15" uses the
same/outdated date and should be verified and corrected; update the date in the
"## [0.1.0] - 2024-01-15" header to the intended release date (or document that
the date is intentionally the same) so the CHANGELOG reflects accurate release
dates and remove any accidental duplication.
- Line 8: The release entry header "## [0.2.0] - 2024-01-20" in the changelog
has an incorrect/past date; update the date portion of that header to the
correct release date (or verify and leave as-is if intentional) by editing the
"## [0.2.0] - 2024-01-20" line to the accurate ISO date for this release.
In `@README.md`:
- Line 11: Update the "Multi-Provider Support" claim in the README so it matches
the roadmap: change or qualify the line that currently reads "**Multi-Provider
Support**: Works with GitHub, GitHub Enterprise, GitLab, and Bitbucket" to
accurately reflect that GitLab and Bitbucket are pending (e.g., list only
supported providers or append "GitLab and Bitbucket support coming soon");
locate the string "Multi-Provider Support" in README.md to make the edit and
ensure the roadmap section remains consistent.
In `@src/cli.rs`:
- Around line 71-74: The CLI help text for the path argument is inconsistent
with Config::default_path; update the docstring on the path field in src/cli.rs
(the #[arg] comment above pub path: Option<PathBuf>) to match the actual default
returned by Config::default_path (i.e., change "~/.config/gisa/gisa.config.toml"
to the real default "~/.config/git-same/config.toml"), or alternatively update
Config::default_path to return the path string used in the help—ensure the text
in the path field and the behavior of Config::default_path are identical.
- Around line 157-159: The skip_dirty boolean flag currently uses #[arg(long,
default_value = "true")] which produces a no-value SetTrue flag and prevents
users from passing false; change the argument to use ArgAction::Set with an
explicit default_value_t so it accepts boolean values (e.g., --skip-dirty=false)
and import ArgAction if necessary; update the attribute on the skip_dirty field
(named skip_dirty) to use action = ArgAction::Set and a boolean default_value_t
= true (and add value parsing if your clap version requires it).
In `@src/clone/parallel.rs`:
- Around line 160-166: The code currently constructs a Semaphore with
self.options.concurrency and then awaits acquire_owned() in parallel cloning,
which deadlocks when concurrency is 0; clamp or validate
CloneManagerOptions.concurrency to be at least 1 before creating the semaphore
(e.g., compute let concurrency = self.options.concurrency.max(1) or return an
error on invalid value) and use that value when calling Semaphore::new(...) so
acquire_owned() cannot block forever; update the semaphore creation site in the
function that iterates repos (the semaphore = Arc::new(Semaphore::new(...))
line) and ensure any external validation logic for CloneManagerOptions enforces
a nonzero concurrency.
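The clamping pattern described above can be sketched in plain Rust. The struct and field names mirror the PR's `CloneManagerOptions` but are assumptions here, and the real fix feeds the clamped value into `tokio::sync::Semaphore::new`:

```rust
// Hypothetical mirror of the PR's CloneManagerOptions; only the
// concurrency field matters for this sketch.
struct CloneManagerOptions {
    concurrency: usize,
}

// Clamp to at least 1 so Semaphore::new can never be handed zero
// permits, which would make acquire_owned() block forever.
fn effective_concurrency(opts: &CloneManagerOptions) -> usize {
    opts.concurrency.max(1)
}

fn main() {
    let zero = CloneManagerOptions { concurrency: 0 };
    let eight = CloneManagerOptions { concurrency: 8 };
    assert_eq!(effective_concurrency(&zero), 1);
    assert_eq!(effective_concurrency(&eight), 8);
    println!("ok");
}
```

The same guard applies to the sync manager's semaphore construction, which has the identical zero-permit hazard.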
In `@src/config/parser.rs`:
- Around line 1-3: Module docs, the Config::default_path implementation, and the
CLI help text disagree on the config filename/path; pick a single canonical name
(suggest "gisa.config.toml" under "~/.config/gisa/") and make them consistent:
update the module doc comment at the top of parser.rs to mention
"gisa.config.toml", change the Config::default_path function to construct
"~/.config/gisa/gisa.config.toml" (use the same directory and filename logic
already present), and update the CLI help text to reference the exact same path
and filename; ensure you only change the textual references and
path-construction logic in Config::default_path and the CLI help string so all
three places match.
In `@src/config/provider_config.rs`:
- Around line 118-124: The code is inconsistent: validate() requires token_env
for AuthMethod::Env but effective_token_env() currently falls back to
"GITHUB_TOKEN"; remove the implicit fallback to match validation. Update
effective_token_env() so for AuthMethod::Env it returns
self.token_env.as_deref() (no .or(Some("GITHUB_TOKEN"))), leaving validation to
enforce presence; reference symbols: effective_token_env(), validate(),
token_env, AuthMethod::Env. Also run/update any tests that assumed the fallback.
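A minimal sketch of the suggested behavior — no implicit fallback, leaving `validate()` as the sole enforcement point. The type and field shapes are assumptions based on the description above:

```rust
enum AuthMethod {
    GhCli,
    Env,
}

struct ProviderConfig {
    auth: AuthMethod,
    token_env: Option<String>,
}

impl ProviderConfig {
    // No implicit "GITHUB_TOKEN" fallback: for AuthMethod::Env this
    // simply reflects token_env; validate() remains the single place
    // that enforces its presence.
    fn effective_token_env(&self) -> Option<&str> {
        match self.auth {
            AuthMethod::Env => self.token_env.as_deref(),
            _ => None,
        }
    }
}

fn main() {
    let cfg = ProviderConfig { auth: AuthMethod::Env, token_env: None };
    // Previously this would have fallen back to Some("GITHUB_TOKEN").
    assert_eq!(cfg.effective_token_env(), None);
    println!("ok");
}
```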
In `@src/discovery/mod.rs`:
- Around line 176-191: The strip_prefix call uses the local `path` instead of
the repository root, so `rel_path` only contains the repo name; change the logic
in the scan function (scan_dir) to use `entry_path.strip_prefix(base_path)`
(replace the current use of `path`) when computing `rel_path`, and ensure
`base_path` is threaded into `scan_dir` (add a parameter if missing) so
`entry_path`, `rel_path`, and the org/repo extraction code use the repository
root to produce at least two components.
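A sketch of the corrected relative-path computation. The function shape is hypothetical; the point is stripping the scan root (`base_path`) rather than the per-entry parent, so at least two components survive:

```rust
use std::path::Path;

// base_path is the root passed to the scan; entry_path is a discovered
// repository directory. Stripping base_path (not the local parent)
// leaves enough components to recover both org and repo.
fn org_and_repo(base_path: &Path, entry_path: &Path) -> Option<(String, String)> {
    let rel_path = entry_path.strip_prefix(base_path).ok()?;
    let mut parts = rel_path
        .components()
        .map(|c| c.as_os_str().to_string_lossy().into_owned());
    let org = parts.next()?;
    let repo = parts.next()?;
    Some((org, repo))
}

fn main() {
    let got = org_and_repo(Path::new("/src"), Path::new("/src/acme/widgets"));
    assert_eq!(got, Some(("acme".to_string(), "widgets".to_string())));
    // Stripping the wrong prefix would leave a single component and fail here.
    assert_eq!(org_and_repo(Path::new("/src/acme"), Path::new("/src/acme/widgets")), None);
    println!("ok");
}
```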
In `@src/errors/git.rs`:
- Around line 151-153: The SshAuthFailed arm in suggested_action currently
hardcodes "git@github.com" while the GitError::SshAuthFailed variant contains a
host field; update suggested_action to produce a host-aware message by either
changing its return type from &'static str to an owned String (or Cow<'static,
str>) and interpolating the host into the message, or keep a generic instruction
such as "Check your SSH key configuration with your git host (e.g. ssh -T
git@your-host)" if you must remain &'static; modify the match arm for
GitError::SshAuthFailed accordingly in the suggested_action function to use the
chosen approach and reference the variant's host field.
- Around line 130-133: The is_retryable implementation currently treats all
GitError::CommandFailed(_) as retryable; change this by distinguishing transient
vs permanent command failures: either (A) add a new enum variant
GitError::TransientCommandFailed(String) and mark only
GitError::TransientCommandFailed and GitError::Timeout as retryable (update all
places that construct CommandFailed to produce TransientCommandFailed when you
detect network/timeouts), or (B) keep CommandFailed but implement message-based
parsing inside is_retryable (match GitError::CommandFailed(msg) and return true
only when msg contains transient indicators like "network", "timed out",
"connection refused", etc.); update all call sites that create CommandFailed to
either construct the new variant or ensure messages can be parsed, and update
tests to reflect the new retry behavior.
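Option (B) above — message-based parsing — might look like the following. The variant set is trimmed to what the description mentions, and the indicator strings are illustrative:

```rust
enum GitError {
    CommandFailed(String),
    Timeout,
}

impl GitError {
    fn is_retryable(&self) -> bool {
        match self {
            GitError::Timeout => true,
            GitError::CommandFailed(msg) => {
                let msg = msg.to_lowercase();
                // Only transient-looking failures are worth retrying.
                ["network", "timed out", "connection refused", "could not resolve host"]
                    .iter()
                    .any(|needle| msg.contains(needle))
            }
        }
    }
}

fn main() {
    assert!(GitError::CommandFailed("fatal: Connection refused".into()).is_retryable());
    assert!(!GitError::CommandFailed("fatal: not a git repository".into()).is_retryable());
    assert!(GitError::Timeout.is_retryable());
    println!("ok");
}
```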
In `@src/errors/provider.rs`:
- Around line 85-87: The match arm for ProviderError::Api { status: 403, .. } in
src/errors/provider.rs currently returns a hardcoded GitHub-specific scopes
string; change it to a generic, provider-aware message by either returning a
neutral instruction like "Check that your token has the required scopes or
permissions for the configured provider" or delegating to a provider-specific
helper (e.g., call a method on the Provider enum such as
Provider::authorization_help or Provider::required_scopes) so the message is not
GitHub-specific and reflects the actual provider in use.
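The delegation idea could be sketched like this. `authorization_help` is the hypothetical helper named above, and the scope strings are illustrative rather than taken from the PR:

```rust
enum Provider {
    GitHub,
    GitLab,
    Bitbucket,
}

impl Provider {
    // Hypothetical provider-specific helper the 403 match arm can
    // delegate to, instead of hardcoding GitHub scope names.
    fn authorization_help(&self) -> &'static str {
        match self {
            Provider::GitHub => "Check that your token has the 'repo' and 'read:org' scopes",
            Provider::GitLab => "Check that your token has the 'read_api' scope",
            Provider::Bitbucket => "Check your app password's repository read permission",
        }
    }
}

fn main() {
    assert!(Provider::GitLab.authorization_help().contains("read_api"));
    assert!(Provider::GitHub.authorization_help().contains("read:org"));
    println!("ok");
}
```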
In `@src/git/shell.rs`:
- Around line 15-31: The timeout_secs field on ShellGit (set via with_timeout)
is declared but never applied to command execution; update the command-running
methods on ShellGit (the functions that spawn/execute git commands) to honor
timeout_secs by using a timeout mechanism (e.g., the wait_timeout crate or spawn
+ tokio/standard thread + join with sleep) and kill/cleanup the child process if
the timeout elapses; ensure you reference ShellGit::timeout_secs when
constructing or awaiting the Command, and add proper error/timeout handling and
tests for both Some(timeout_secs) and None cases so the new behavior is covered.
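A stdlib-only sketch of the "spawn + poll" alternative (no `wait_timeout` crate). The function name is hypothetical and the demo assumes a Unix `sh` is available:

```rust
use std::io;
use std::process::{Child, Command, ExitStatus};
use std::thread;
use std::time::{Duration, Instant};

// Poll try_wait() until the child exits or the timeout elapses;
// on timeout, kill and reap the child and return Ok(None).
fn wait_with_timeout(child: &mut Child, timeout: Duration) -> io::Result<Option<ExitStatus>> {
    let start = Instant::now();
    loop {
        if let Some(status) = child.try_wait()? {
            return Ok(Some(status));
        }
        if start.elapsed() >= timeout {
            child.kill()?;
            child.wait()?; // reap the killed process
            return Ok(None);
        }
        thread::sleep(Duration::from_millis(25));
    }
}

fn main() -> io::Result<()> {
    // A fast command finishes within the timeout.
    let mut quick = Command::new("sh").args(["-c", "exit 0"]).spawn()?;
    assert!(wait_with_timeout(&mut quick, Duration::from_secs(5))?.is_some());

    // A slow command is killed at the deadline.
    let mut slow = Command::new("sh").args(["-c", "sleep 5"]).spawn()?;
    assert!(wait_with_timeout(&mut slow, Duration::from_millis(200))?.is_none());
    println!("ok");
    Ok(())
}
```

In `ShellGit` the timeout would come from `self.timeout_secs`, with `None` meaning wait indefinitely.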
- Around line 80-90: The loop over output.lines() slices &line[0..2] which will
panic for lines shorter than 2 bytes; update the loop in the function that
iterates output.lines() (the block referencing variable line and doing let code
= &line[0..2]) to defensively handle short lines by checking line.len() >= 2
before slicing (or using line.starts_with("??") / method-safe checks) and treat
any shorter/empty lines as non-dirty/untracked or skip them; ensure
has_untracked and is_dirty are set only after a safe check so no panic can occur
on malformed status output.
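A panic-free version of that loop, using `str::get` so short or malformed porcelain lines are skipped. The function name and exact flag semantics are assumptions:

```rust
// Parse `git status --porcelain` output without slicing panics:
// str::get returns None for lines shorter than two bytes, so the
// `&line[0..2]` panic path is eliminated.
fn classify_status(output: &str) -> (bool, bool) {
    let mut is_dirty = false;
    let mut has_untracked = false;
    for line in output.lines() {
        let Some(code) = line.get(0..2) else {
            continue; // too short to carry a status code; skip it
        };
        if code == "??" {
            has_untracked = true;
        } else if !code.trim().is_empty() {
            is_dirty = true;
        }
    }
    (is_dirty, has_untracked)
}

fn main() {
    let output = " M src/main.rs\n?? notes.txt\nx";
    assert_eq!(classify_status(output), (true, true));
    assert_eq!(classify_status(""), (false, false));
    println!("ok");
}
```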
In `@src/lib.rs`:
- Around line 7-13: The docs claim full GitLab/Bitbucket support while the
provider factory returns NotImplemented for those kinds; update the crate-level
doc comment list to either remove GitLab/Bitbucket from the “Multi-Provider
Support” bullet or append “(coming soon)” and make the text reflect actual
support. Also ensure the code paths in the provider factory (e.g.,
provider_factory / create_provider / ProviderFactory handling of
ProviderKind::GitLab and ProviderKind::Bitbucket) remain the single source of
truth—either gate the documentation text behind the same feature flags or update
the docstring to reference that those providers currently return NotImplemented
and are planned for future implementation.
In `@src/main.rs`:
- Around line 511-516: The doc comment for expand_path is wrong: it claims to
expand ~ and env vars but calls shellexpand::tilde which only expands the home
tilde; update the implementation to call shellexpand::full on the path string
(still using path.to_string_lossy() and converting the result to PathBuf) so
both ~ and environment variables like $VAR are expanded, and keep the function
name expand_path unchanged.
In `@src/provider/mock.rs`:
- Line 210: The call progress.on_org_complete(&org.login, repos.len()) is
passing the cumulative repos.len() instead of the per-org count; change the
second argument to the number of repositories for the current org (e.g., compute
the count of repos belonging to org.login or use the specific per-org collection
used to build repos for that org) so progress.on_org_complete receives only the
current org's repo count (match behavior in github/client.rs and progress.rs
expectations).
In `@src/sync/manager.rs`:
- Around line 175-177: SyncManagerOptions.concurrency can be zero which creates
a Semaphore with zero permits and deadlocks; clamp the concurrency to at least 1
before building the semaphore by computing a local concurrency value (e.g., via
std::cmp::max(self.options.concurrency, 1)) and use that when calling
Semaphore::new, replacing direct use of self.options.concurrency when creating
the semaphore in the sync manager (variables: semaphore, total,
self.options.concurrency).
In `@tests/integration_test.rs`:
- Around line 8-12: The function git_same_binary builds a hardcoded path to
target/debug/git-same which breaks for release builds; change git_same_binary to
return the path from the Cargo-provided environment variable
CARGO_BIN_EXE_git-same (e.g., read env!("CARGO_BIN_EXE_git-same") or
std::env::var at runtime and convert to PathBuf) so tests use the correct binary
regardless of build profile; update any callers expecting PathBuf accordingly.
♻️ Duplicate comments (3)
src/auth/ssh.rs (1)

11-25: Security concern: `StrictHostKeyChecking=no` disables host key verification.

This makes the SSH connection vulnerable to man-in-the-middle attacks. Consider using `StrictHostKeyChecking=accept-new`, which only accepts new hosts while still verifying known hosts. Additionally, the `-o` option should come before the host argument.

🔒 Suggested fix

```diff
 pub fn has_github_ssh_access() -> bool {
     // Try to test SSH connection to GitHub
     let output = Command::new("ssh")
-        .args(["-T", "git@github.com", "-o", "StrictHostKeyChecking=no"])
+        .args(["-T", "-o", "StrictHostKeyChecking=accept-new", "-o", "BatchMode=yes", "git@github.com"])
         .output();
```

Adding `BatchMode=yes` prevents password prompts from hanging the process.

src/auth/mod.rs (1)

111-122: Error message is GitHub-specific but applies to all providers.

The error message references GitHub CLI and `GITHUB_TOKEN` specifically, but this function may be called for other providers. Consider making the message more generic or parameterizing it based on context.

src/cache/mod.rs (1)

63-71: Cache validity check doesn't handle system clock moving backwards.

While `saturating_sub` prevents underflow, if the system clock moves backwards (e.g., NTP correction), the cache could appear valid when it shouldn't be. Consider adding explicit validation.

♻️ Suggested fix per previous review

```diff
 pub fn is_valid(&self, ttl: Duration) -> bool {
     let now = SystemTime::now()
         .duration_since(UNIX_EPOCH)
         .unwrap()
         .as_secs();
+    // If the system clock moved backwards, treat the cache as invalid
+    if now < self.last_discovery {
+        return false;
+    }
+
     let age = now.saturating_sub(self.last_discovery);
     age < ttl.as_secs()
 }
```
🧹 Nitpick comments (25)
Cargo.toml (1)

37-38: Consider trimming tokio features for smaller binary size.

Using `features = ["full"]` includes all tokio features. For a CLI tool, you might only need specific features like `rt-multi-thread`, `macros`, `fs`, and `process`. This could reduce binary size, though the current release profile optimizations may mitigate this.

README.md (1)

1-7: Consider adding a Quick Context summary.

Based on learnings, consider adding a one-line summary at the very top of the README that provides immediate context about the project's purpose.

.gitignore (1)

6-7: Consider committing `Cargo.lock` for reproducible builds.

The Cargo book recommends committing `Cargo.lock` for binary applications to enable deterministic builds across environments. Since `git-same` is a CLI tool, tracking the lock file will improve reproducibility.

.github/workflows/ci.yml (2)
78-81: Consider adding the `audit` job as a release dependency.

The release job depends only on `test`, but the `audit` job runs independently. If a security vulnerability is detected, releases could still proceed.

♻️ Suggested fix

```diff
 release:
   name: Release (${{ matrix.target }})
   if: startsWith(github.ref, 'refs/tags/')
-  needs: [test]
+  needs: [test, audit]
   runs-on: ${{ matrix.os }}
```

29-33: Toolchain action reference conflicts with matrix variable.

Using `dtolnay/rust-toolchain@stable` as the action reference while also specifying `toolchain: ${{ matrix.rust }}` works, but the `@stable` suffix is misleading when `matrix.rust` can be `beta`. Consider using `@master` for clarity.

♻️ Suggested fix

```diff
 - name: Install Rust
-  uses: dtolnay/rust-toolchain@stable
+  uses: dtolnay/rust-toolchain@master
   with:
     toolchain: ${{ matrix.rust }}
     components: rustfmt, clippy
```

src/errors/provider.rs (1)
103-111: Consider handling HTTP 429 (Too Many Requests) in `from_status`.

The `RateLimited` variant exists but `from_status` doesn't map 429 to it. Rate limit responses typically include reset time in headers, but a basic mapping could be added.

♻️ Suggested enhancement

```diff
 pub fn from_status(status: u16, message: impl Into<String>) -> Self {
     let message = message.into();
     match status {
         401 => ProviderError::Authentication(message),
         403 => ProviderError::PermissionDenied(message),
         404 => ProviderError::NotFound(message),
+        429 => ProviderError::RateLimited {
+            reset_time: "unknown".to_string(),
+        },
         _ => ProviderError::Api { status, message },
     }
 }
```

src/config/mod.rs (1)
1-21: Config filename inconsistency.

The documentation here references `git-same.config.toml` (line 4), while the spec document `Docs/Specs/Gisa-S3-github-api-access.md` references `gisa.config.toml`. Consider aligning the config filename across all documentation to avoid user confusion.

Docs/Specs/Gisa-S3-github-api-access.md (1)

128-132: Adequate security warnings for config token storage.

The documentation appropriately labels storing tokens in `gisa.config.toml` as "not recommended for shared configs" and provides proper warnings in the security section. Based on learnings about never committing secrets, consider also adding a note to ensure `gisa.config.toml` is added to `.gitignore` if tokens are stored there, or reference a `.local` variant pattern for sensitive values.

src/provider/github/client.rs (2)
31-32: Consider deriving the User-Agent version from the package.

The version string is hardcoded as `"gisa-cli/0.1.0"`. This will require manual updates when the version changes.

♻️ Suggested improvement

```diff
- headers.insert(USER_AGENT, HeaderValue::from_static("gisa-cli/0.1.0"));
+ headers.insert(
+     USER_AGENT,
+     HeaderValue::from_str(&format!("gisa-cli/{}", env!("CARGO_PKG_VERSION")))
+         .unwrap_or_else(|_| HeaderValue::from_static("gisa-cli")),
+ );
```

60-80: Consider adding retry logic for consistency with paginated requests.

The `get()` method doesn't have retry logic for transient failures (5xx errors), while `fetch_all_pages` in the pagination helper does. For critical single-item requests like credential validation, transient failures will immediately error out rather than retrying.

src/auth/ssh.rs (1)
27-79: Consider extracting the duplicated key file list into a constant.

The `key_files` array is duplicated between `has_ssh_keys()` and `get_ssh_key_files()`. This creates maintenance overhead if new key types need to be added.

♻️ Suggested refactor

```diff
+const SSH_KEY_FILES: &[&str] = &[
+    "id_rsa",
+    "id_ed25519",
+    "id_ecdsa",
+    "id_dsa",
+    "github_rsa",
+    "github_ed25519",
+];
+
+fn get_ssh_dir() -> Option<PathBuf> {
+    std::env::var("HOME").ok().map(|h| PathBuf::from(h).join(".ssh"))
+}
+
 /// Detect if SSH keys exist in the standard locations.
 pub fn has_ssh_keys() -> bool {
-    let home = match std::env::var("HOME") {
-        Ok(h) => h,
-        Err(_) => return false,
-    };
-
-    let ssh_dir = PathBuf::from(home).join(".ssh");
-
-    // Check for common SSH key types
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        "id_ecdsa",
-        "id_dsa",
-        "github_rsa",
-        "github_ed25519",
-    ];
-
-    for key_file in &key_files {
-        let key_path = ssh_dir.join(key_file);
-        if key_path.exists() {
-            return true;
-        }
-    }
-
-    false
+    let Some(ssh_dir) = get_ssh_dir() else { return false };
+    SSH_KEY_FILES.iter().any(|f| ssh_dir.join(f).exists())
 }

 /// Get SSH key files that exist.
 pub fn get_ssh_key_files() -> Vec<PathBuf> {
-    let home = match std::env::var("HOME") {
-        Ok(h) => h,
-        Err(_) => return vec![],
-    };
-
-    let ssh_dir = PathBuf::from(home).join(".ssh");
-
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        "id_ecdsa",
-        "id_dsa",
-        "github_rsa",
-        "github_ed25519",
-    ];
-
-    key_files
-        .iter()
-        .map(|f| ssh_dir.join(f))
-        .filter(|p| p.exists())
-        .collect()
+    let Some(ssh_dir) = get_ssh_dir() else { return vec![] };
+    SSH_KEY_FILES.iter().map(|f| ssh_dir.join(f)).filter(|p| p.exists()).collect()
 }
```

src/auth/gh_cli.rs (1)
134-151: Consider adding the `github_pat_` prefix to the token validation.

The ignored integration test validates token prefixes but doesn't include `github_pat_` for fine-grained personal access tokens, which is validated in `src/auth/env_token.rs` (line 55).

♻️ Suggested fix

```diff
 assert!(
     token.starts_with("ghp_")
         || token.starts_with("gho_")
         || token.starts_with("ghu_")
         || token.starts_with("ghr_")
         || token.starts_with("ghs_")
+        || token.starts_with("github_pat_")
 );
```

src/provider/github/pagination.rs (1)
35-47: Potential panic on systems with incorrect clock.

`SystemTime::now().duration_since(UNIX_EPOCH).unwrap()` can panic if the system clock is set before the Unix epoch. While rare, consider handling this gracefully.

♻️ Suggested fix

```diff
 fn calculate_wait_time(reset_timestamp: &str) -> Option<Duration> {
     if let Ok(reset_secs) = reset_timestamp.parse::<u64>() {
         let now = SystemTime::now()
             .duration_since(UNIX_EPOCH)
-            .unwrap()
+            .ok()?
             .as_secs();
         if reset_secs > now {
             return Some(Duration::from_secs(reset_secs - now));
         }
     }
     None
 }
```

src/auth/env_token.rs (1)
96-117: Tests may have race conditions when run in parallel.

The tests `test_get_token_present` and `test_has_token_in_env_false` manipulate shared environment variables. If tests run in parallel (Rust's default), they could interfere with each other. Consider using the `serial_test` crate or unique variable names per test.

The current approach with unique variable names (`GISA_TEST_NONEXISTENT_VAR_12345`, `GISA_TEST_TOKEN_VAR`) mitigates this for most tests, but `test_has_token_in_env_false` modifies `DEFAULT_TOKEN_VARS`, which could affect other tests.

src/main.rs (4)
98-99: Verbosity is reset to a fixed value, ignoring the global CLI setting.

The verbosity is hardcoded to 0 or 1 based on JSON mode, ignoring the user's `--verbose`/`--quiet` flags passed via CLI. This affects progress bar behavior.

♻️ Suggested fix

```diff
 async fn cmd_clone(args: &CloneArgs, config: &Config, output: &Output) -> Result<()> {
-    let verbosity = Verbosity::from(if output.is_json() { 0 } else { 1 });
+    let verbosity = output.verbosity();
```

This requires `Output` to expose its verbosity level. If not available, pass the global verbosity from the CLI context.

101-107: Authentication happens before checking cache, which may be unnecessary.

When `use_cache` is true and a valid cache exists, authentication is still performed first. For cached operations that don't need API calls, this adds latency and could fail if auth is temporarily unavailable.

Consider deferring authentication until after confirming cache miss:

♻️ Suggested optimization

```diff
 async fn cmd_clone(args: &CloneArgs, config: &Config, output: &Output) -> Result<()> {
     let verbosity = Verbosity::from(if output.is_json() { 0 } else { 1 });

-    // Get authentication
-    output.info("Authenticating...");
-    let auth = get_auth(None)?;
-    output.verbose(&format!(
-        "Authenticated as {:?} via {}",
-        auth.username, auth.method
-    ));
-
     // Get first enabled provider from config
     let provider_entry = config
         .enabled_providers()
         .next()
         .ok_or_else(|| AppError::config("No enabled providers configured"))?;

-    // Create provider
-    let provider = create_provider(provider_entry, &auth.token)?;
-
+    // ... check cache first ...
+
+    // Get authentication (only if cache miss)
+    if repos.is_empty() {
+        output.info("Authenticating...");
+        let auth = get_auth(None)?;
+        // ... rest of discovery logic
+    }
```

139-153: Cache load errors are silently ignored.

Both `CacheManager::new()` and `cache_manager.load()` errors are silently swallowed. While this allows graceful fallback to API discovery, it might hide configuration issues. Consider logging at verbose level.

♻️ Suggested improvement

```diff
 if use_cache && !force_refresh {
-    if let Ok(cache_manager) = CacheManager::new() {
-        if let Ok(Some(cache)) = cache_manager.load() {
+    match CacheManager::new() {
+        Ok(cache_manager) => match cache_manager.load() {
+            Ok(Some(cache)) => {
                 output.verbose(&format!(
                     "Using cached discovery ({} repos, {} seconds old)",
                     cache.repo_count,
                     cache.age_secs()
                 ));
-                // Extract repos from cache
-                for provider_repos in cache.repos.values() {
-                    repos.extend(provider_repos.clone());
+                for provider_repos in cache.repos.values() {
+                    repos.extend(provider_repos.clone());
+                }
             }
+            Ok(None) => output.verbose("Cache expired or not found"),
+            Err(e) => output.verbose(&format!("Failed to load cache: {}", e)),
+        },
+        Err(e) => output.verbose(&format!("Failed to initialize cache: {}", e)),
     }
 }
```

275-276: Same verbosity issue as in `cmd_clone`.

The verbosity is reset here as well, ignoring global CLI settings.
src/provider/mock.rs (1)
101-115: Consider handling potential mutex poisoning gracefully.

The `unwrap()` calls on mutex locks will panic if the lock is poisoned (e.g., if a thread panicked while holding the lock). While this is acceptable for test code, you might want to document this behavior or use a recovery strategy.

For a mock used only in tests, this is likely fine, but if MockProvider is ever used in integration testing with multiple threads, consider:

♻️ Optional: Add panic-safe lock handling

```diff
 fn log_call(&self, method: &str) {
-    let mut log = self.call_log.lock().unwrap();
+    let mut log = self.call_log.lock().unwrap_or_else(|e| e.into_inner());
     log.push(method.to_string());
 }

 pub fn get_calls(&self) -> Vec<String> {
-    self.call_log.lock().unwrap().clone()
+    self.call_log.lock().unwrap_or_else(|e| e.into_inner()).clone()
 }

 pub fn clear_calls(&self) {
-    self.call_log.lock().unwrap().clear();
+    self.call_log.lock().unwrap_or_else(|e| e.into_inner()).clear();
 }
```

src/auth/mod.rs (2)
76-79: Consider logging the gh CLI error for debugging.

When `gh_cli::get_token()` fails after passing the install/auth checks, the error is silently discarded. This could make debugging difficult if users expect gh CLI auth to work but it silently falls back.

♻️ Optional: Add debug logging

```diff
 Err(_) => {
     // Fall through to next method
+    // Consider: log::debug!("gh CLI token retrieval failed, trying other methods");
 }
```

237-268: Environment variable manipulation in tests may cause flakiness.

This test modifies the `GITHUB_TOKEN`, `GH_TOKEN`, and `GISA_TOKEN` environment variables, which can cause race conditions when tests run in parallel. Consider using `#[serial]` from the `serial_test` crate or structuring tests to avoid env var mutation.

src/cache/mod.rs (1)
48-51: `unwrap()` on `duration_since(UNIX_EPOCH)` could panic.

While extremely rare, `SystemTime::now().duration_since(UNIX_EPOCH)` can fail if the system clock is set before January 1, 1970. Consider handling this edge case:

♻️ Suggested defensive handling

```diff
 let now = SystemTime::now()
     .duration_since(UNIX_EPOCH)
-    .unwrap()
+    .unwrap_or_default()
     .as_secs();
```
113-120: Consider logging when repos are skipped due to dirty state.

When `skip_dirty` is true and repos are skipped, users may not understand why certain repos weren't synced. Consider adding debug logging or ensuring the caller reports skipped repos.

160-163: Silently ignoring directory read errors may hide issues.

When `read_dir` fails (e.g., permission denied), the error is silently ignored. Consider at least logging this for debugging purposes, especially since users may not understand why certain directories aren't scanned.

src/provider/traits.rs (1)
65-135: Consider `HashSet` for org/exclusion filters (scales better).

If these lists grow, the current `Vec` lookup becomes O(n) per repo. A `HashSet` keeps this O(1) and simplifies `should_include_org`.

♻️ Possible refactor

```diff
+use std::collections::HashSet;
+
 #[derive(Debug, Clone, Default)]
 pub struct DiscoveryOptions {
     /// Include archived repositories
     pub include_archived: bool,
     /// Include forked repositories
     pub include_forks: bool,
     /// Filter to specific organizations (empty = all)
-    pub org_filter: Vec<String>,
+    pub org_filter: HashSet<String>,
     /// Exclude specific repos by full name
-    pub exclude_repos: Vec<String>,
+    pub exclude_repos: HashSet<String>,
 }

 impl DiscoveryOptions {
     /// Filter to specific organizations.
-    pub fn with_orgs(mut self, orgs: Vec<String>) -> Self {
-        self.org_filter = orgs;
+    pub fn with_orgs<I, S>(mut self, orgs: I) -> Self
+    where
+        I: IntoIterator<Item = S>,
+        S: Into<String>,
+    {
+        self.org_filter = orgs.into_iter().map(Into::into).collect();
         self
     }

     /// Exclude specific repositories.
-    pub fn with_exclusions(mut self, repos: Vec<String>) -> Self {
-        self.exclude_repos = repos;
+    pub fn with_exclusions<I, S>(mut self, repos: I) -> Self
+    where
+        I: IntoIterator<Item = S>,
+        S: Into<String>,
+    {
+        self.exclude_repos = repos.into_iter().map(Into::into).collect();
         self
     }

     /// Check if an org should be included based on filters.
     pub fn should_include_org(&self, org: &str) -> bool {
         if self.org_filter.is_empty() {
             return true;
         }
-        self.org_filter.iter().any(|o| o == org)
+        self.org_filter.contains(org)
     }
 }
```
docs/CHANGELOG.md (outdated)

```markdown
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.2.0] - 2024-01-20
```
Verify the release date is correct.
The date 2024-01-20 appears to be in the past (current date is January 2026). If this is a new release, consider updating to the correct date.
🤖 Prompt for AI Agents
In `@CHANGELOG.md` at line 8, The release entry header "## [0.2.0] - 2024-01-20"
in the changelog has an incorrect/past date; update the date portion of that
header to the correct release date (or verify and leave as-is if intentional) by
editing the "## [0.2.0] - 2024-01-20" line to the accurate ISO date for this
release.
docs/CHANGELOG.md (outdated)

```markdown
- Cross-platform CI/CD (Linux, macOS, Windows)
- Built with Rust 2021 edition

## [0.1.0] - 2024-01-15
```
Same date concern for version 0.1.0.
Date 2024-01-15 also appears outdated. Please verify these dates are intentional.
🤖 Prompt for AI Agents
In `@CHANGELOG.md` at line 83, The changelog entry header "## [0.1.0] -
2024-01-15" uses the same/outdated date and should be verified and corrected;
update the date in the "## [0.1.0] - 2024-01-15" header to the intended release
date (or document that the date is intentionally the same) so the CHANGELOG
reflects accurate release dates and remove any accidental duplication.
README.md (outdated)

```markdown
## Features

- **Multi-Provider Support**: Works with GitHub, GitHub Enterprise, GitLab, and Bitbucket
```
Feature claim is inconsistent with the roadmap.
Line 11 states "Works with GitHub, GitHub Enterprise, GitLab, and Bitbucket", but the roadmap at lines 352-353 shows GitLab and Bitbucket support are still pending (unchecked). Consider updating this line to accurately reflect current capabilities:
📝 Suggested fix

```diff
-- **Multi-Provider Support**: Works with GitHub, GitHub Enterprise, GitLab, and Bitbucket
+- **Multi-Provider Support**: Works with GitHub and GitHub Enterprise (GitLab and Bitbucket coming soon)
```
🤖 Prompt for AI Agents
In `@README.md` at line 11, Update the "Multi-Provider Support" claim in the
README so it matches the roadmap: change or qualify the line that currently
reads "**Multi-Provider Support**: Works with GitHub, GitHub Enterprise, GitLab,
and Bitbucket" to accurately reflect that GitLab and Bitbucket are pending
(e.g., list only supported providers or append "GitLab and Bitbucket support
coming soon"); locate the string "Multi-Provider Support" in README.md to make
the edit and ensure the roadmap section remains consistent.
src/cli.rs (outdated)

```rust
/// Skip repositories with uncommitted changes
#[arg(long, default_value = "true")]
pub skip_dirty: bool,
```
🧩 Analysis chain
🏁 Script executed:

```shell
cat -n src/cli.rs | sed -n '150,165p'
```

Repository: ZAAI-com/git-same

Length of output: 621
🌐 Web query:
clap rust bool flag default value true allow false override ArgAction
💡 Result:
In clap, a plain bool "flag" (`ArgAction::SetTrue`) defaults to false and can only be turned on (it does not accept `--flag=false`). If you want default = true but still allow users to override to false, make it an option that takes a boolean value and use `ArgAction::Set` with a bool parser. `BoolishValueParser` is the built-in parser for true/false-like strings. [1][2]
Derive (`#[derive(Parser)]`) form:

```rust
use clap::{ArgAction, Parser};
use clap::builder::BoolishValueParser;

#[derive(Parser, Debug)]
struct Args {
    #[arg(long,
          action = ArgAction::Set,
          default_value_t = true,
          value_parser = BoolishValueParser::new())]
    flag: bool,
}
```

Usage:

- default: `flag == true`
- disable: `--flag=false`
- enable explicitly: `--flag=true`
Builder form:

```rust
use clap::{Arg, ArgAction, Command};
use clap::builder::BoolishValueParser;

let cmd = Command::new("app").arg(
    Arg::new("flag")
        .long("flag")
        .action(ArgAction::Set)
        .default_value("true")
        .value_parser(BoolishValueParser::new()),
);
```

`ArgAction::SetTrue` / `SetFalse` are for "no-value" flags (present/absent), and their implicit defaults are the opposite when no `default_value` is provided. [1]
Sources:
[1] ArgAction::SetTrue / SetFalse semantics and defaults (rustdocs.ferronweb.org)
[2] BoolishValueParser (parses bool-like strings) (docs.rs)
Citations:
- 1: https://rustdocs.ferronweb.org/clap/enum.ArgAction.html?utm_source=openai
- 2: https://docs.rs/clap/latest/clap/builder/struct.BoolishValueParser.html?utm_source=openai
**`--skip-dirty` defaults to true and cannot be set to false with current clap settings.**

With the current `#[arg(long, default_value = "true")]` on a bool field, the flag defaults to `SetTrue` (no-value mode), which always evaluates to true. Users cannot override this to false. To allow `--skip-dirty=false`, use `ArgAction::Set` with a bool value parser and `default_value_t`:

🔧 Proposed fix

```diff
-    #[arg(long, default_value = "true")]
-    pub skip_dirty: bool,
+    #[arg(long, action = clap::ArgAction::Set, default_value_t = true, value_parser = clap::builder::BoolishValueParser::new())]
+    pub skip_dirty: bool,
```

This allows users to override with `--skip-dirty=false`, `--skip-dirty=true`, or other bool-like values.
🤖 Prompt for AI Agents
In `@src/cli.rs` around lines 157 - 159, The skip_dirty boolean flag currently
uses #[arg(long, default_value = "true")] which produces a no-value SetTrue flag
and prevents users from passing false; change the argument to use ArgAction::Set
with an explicit default_value_t so it accepts boolean values (e.g.,
--skip-dirty=false) and import ArgAction if necessary; update the attribute on
the skip_dirty field (named skip_dirty) to use action = ArgAction::Set and a
boolean default_value_t = true (and add value parsing if your clap version
requires it).
src/main.rs (outdated)

```rust
/// Expands ~ and environment variables in a path.
fn expand_path(path: &std::path::Path) -> PathBuf {
    let path_str = path.to_string_lossy();
    let expanded = shellexpand::tilde(&path_str);
    PathBuf::from(expanded.as_ref())
}
```
**Doc comment is misleading — only tilde expansion is performed.**

The function doc says "Expands ~ and environment variables" but only `shellexpand::tilde` is called, which doesn't expand environment variables like `$HOME` or `${VAR}`.
🐛 Fix documentation or implementation

Option 1: Fix the doc comment:

```diff
-/// Expands ~ and environment variables in a path.
+/// Expands ~ (tilde) in a path to the user's home directory.
```

Option 2: Actually expand environment variables:

```diff
 fn expand_path(path: &std::path::Path) -> PathBuf {
     let path_str = path.to_string_lossy();
-    let expanded = shellexpand::tilde(&path_str);
+    let expanded = shellexpand::full(&path_str).unwrap_or(std::borrow::Cow::Borrowed(&path_str));
     PathBuf::from(expanded.as_ref())
 }
```
🤖 Prompt for AI Agents
In `@src/main.rs` around lines 511 - 516, The doc comment for expand_path is
wrong: it claims to expand ~ and env vars but calls shellexpand::tilde which
only expands the home tilde; update the implementation to call shellexpand::full
on the path string (still using path.to_string_lossy() and converting the result
to PathBuf) so both ~ and environment variables like $VAR are expanded, and keep
the function name expand_path unchanged.
```rust
fn git_same_binary() -> PathBuf {
    let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
    path.push("target/debug/git-same");
    path
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check Cargo.toml for [[bin]] target
fd Cargo.toml
rg -n '\[\[bin\]\]' -A 5 Cargo.toml
```

Repository: ZAAI-com/git-same

Length of output: 333
🏁 Script executed:

```shell
#!/bin/bash
# Check the structure of integration_test.rs to understand test setup
head -20 tests/integration_test.rs
```

Repository: ZAAI-com/git-same

Length of output: 528
🏁 Script executed:

```shell
#!/bin/bash
# Verify how the tests are structured and if there are any build profile specifics
rg -n 'Command::new' tests/integration_test.rs | head -5
# Check if there's any CI/build configuration that runs tests with --release
fd '\.github' || fd 'tox.ini' || fd '.travis.yml' || fd 'build.yml'
```

Repository: ZAAI-com/git-same

Length of output: 320
🏁 Script executed:

```shell
#!/bin/bash
# Verify CARGO_BIN_EXE availability during integration tests
# Check Rust/Cargo documentation context about environment variables
# Let's also check if there are any existing comments or TODOs about the binary path
rg -n 'CARGO_' tests/integration_test.rs
rg -n 'env!' tests/integration_test.rs
```

Repository: ZAAI-com/git-same

Length of output: 188
🏁 Script executed:

```shell
#!/bin/bash
# Verify if CARGO_BIN_EXE_git-same is a compile-time or runtime variable
# Let's also check if the suggested fix would actually work with the env!() macro
python3 <<'EOF'
# The env!() macro in Rust is a compile-time macro that expands at compile time
# CARGO_BIN_EXE_<name> is a runtime environment variable set by Cargo
# This means the suggested fix would NOT work - env!() cannot access CARGO_BIN_EXE_*
# Let's verify what the correct approach would be
print("Analysis of the suggested fix:")
print("- env!() is a compile-time macro")
print("- CARGO_BIN_EXE_git-same is a runtime environment variable")
print("- Result: The suggested fix would fail to compile")
print()
print("Correct alternatives:")
print("1. Use std::env::var(\"CARGO_BIN_EXE_git-same\") at runtime")
print("2. Use conditional compilation for debug/release")
print("3. For integration tests, hardcoded debug path works but fails with --release")
EOF
```

Repository: ZAAI-com/git-same

Length of output: 441
🏁 Script executed:

```shell
#!/bin/bash
# Verify the correct approach for Rust integration tests
# Let's check what CARGO_BIN_EXE_* variables are available and how to use them
python3 <<'EOF'
# According to Cargo documentation:
# CARGO_BIN_EXE_<name> is set as a runtime env var during cargo test
# Since env!() is compile-time, we need std::env::var() at runtime
# However, there's another approach - using target/debug or target/release
# But that requires knowing the profile at runtime
print("Integration test binary path approaches:")
print()
print("Option 1: Use std::env::var at runtime (BEST)")
print("    PathBuf::from(env::var(\"CARGO_BIN_EXE_git-same\").expect(...))")
print()
print("Option 2: Hardcoded with profile awareness")
print("    - Check if running in release or debug")
print("    - Use cfg! macros")
print()
print("Option 3: Keep hardcoded debug path")
print("    - Works for 'cargo test' (default debug)")
print("    - Fails for 'cargo test --release'")
EOF

# Let's also check what Cargo version might be in use
rg -n 'edition' Cargo.toml | head -3
rg -n '\[package\]' -A 10 Cargo.toml | head -15
```

Repository: ZAAI-com/git-same

Length of output: 846
**Hardcoded debug binary path causes failures in release builds.**

The path `target/debug/git-same` is hardcoded and fails when tests are run with `--release`. Use Cargo's `CARGO_BIN_EXE_git-same` variable instead. Cargo sets this variable while compiling integration tests, so it is read with the compile-time `env!` macro:

♻️ Suggested improvement

```diff
 fn git_same_binary() -> PathBuf {
-    let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
-    path.push("target/debug/git-same");
-    path
+    PathBuf::from(env!("CARGO_BIN_EXE_git-same"))
 }
```

This uses the variable that Cargo provides when building integration tests, which points to the correct binary regardless of build profile.
🤖 Prompt for AI Agents
In `@tests/integration_test.rs` around lines 8 - 12, The function git_same_binary
builds a hardcoded path to target/debug/git-same which breaks for release
builds; change git_same_binary to return the path from the Cargo-provided
environment variable CARGO_BIN_EXE_git-same (e.g., read
env!("CARGO_BIN_EXE_git-same") or std::env::var at runtime and convert to
PathBuf) so tests use the correct binary regardless of build profile; update any
callers expecting PathBuf accordingly.
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@src/auth/ssh.rs`:
- Around line 11-21: The SSH probe in has_github_ssh_access can hang; modify the
Command::new("ssh") args (the array passed to .args([...]) used in
has_github_ssh_access) to include non-interactive and timeout options such as
"-o BatchMode=yes" and "-o ConnectTimeout=5" (in addition to the existing "-o
StrictHostKeyChecking=accept-new", "-T", and "git@github.com") so the probe
never prompts for passwords and returns quickly on network issues.
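As a sketch of what that prompt describes, a non-interactive, bounded probe might look like this; the flag list is taken from the prompt above and is an assumption, not verified against the crate's actual `has_github_ssh_access`:

```rust
use std::process::Command;

// Sketch of a non-interactive SSH probe. BatchMode=yes fails instead of
// prompting for a password, and ConnectTimeout=5 bounds the wait on a dead
// network; the exact flag set is an assumption based on the comment above.
fn ssh_probe_args() -> Vec<&'static str> {
    vec![
        "-o", "BatchMode=yes",
        "-o", "ConnectTimeout=5",
        "-o", "StrictHostKeyChecking=accept-new",
        "-T",
        "git@github.com",
    ]
}

fn main() {
    // GitHub closes the connection after its greeting with a non-zero status,
    // so success is detected from stderr text rather than the exit code.
    if let Ok(out) = Command::new("ssh").args(ssh_probe_args()).output() {
        let stderr = String::from_utf8_lossy(&out.stderr);
        println!("authenticated: {}", stderr.contains("successfully authenticated"));
    }
}
```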
In `@src/cache/mod.rs`:
- Around line 118-130: The fallback in default_cache_path() assumes HOME is set;
make it cross-platform by replacing the HOME env lookup with a platform-agnostic
home-directory resolution (e.g., use dirs::home_dir() or check USERPROFILE on
Windows) when ProjectDirs::from(...) returns None; update the code in the
default_cache_path function to obtain the home directory via that cross-platform
method, then join ".config"/"git-same" and "cache.json" as before so Windows
will resolve a valid fallback path.
In `@src/discovery/mod.rs`:
- Around line 136-142: The depth calculation currently toggles only on
"{provider}"; replace that with a derived depth from the structure template by
counting path segments: compute depth = self.structure.split('/').filter(|s|
!s.is_empty()).count() (convert to the expected numeric type) and pass that to
scan_dir instead of the hardcoded 2/3. Update the code around self.structure and
the call to scan_dir so any leading prefix like "code/" is accounted for;
optionally add validation to ensure the structure contains "{org}" and "{repo}"
placeholders before proceeding.
In `@src/errors/provider.rs`:
- Around line 63-73: The is_retryable() logic ignores HTTP 429 because
from_status(429, ...) currently produces ProviderError::Api { status: 429, .. }
which is outside the 500..=599 check; update either from_status or is_retryable:
Option A — change from_status to map status 429 to ProviderError::RateLimited
(populate a placeholder reset time or None) so RateLimited matches the existing
retryable branch; Option B — extend is_retryable to also match
ProviderError::Api { status: 429, .. } in addition to 500..=599; modify the
function(s) named from_status and/or is_retryable and ensure
ProviderError::RateLimited and ProviderError::Api variants are referenced
consistently.
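Option B from that prompt might look like this minimal sketch; the enum shape mirrors the names in the review but is an assumption, not the crate's real definition:

```rust
// Assumed, simplified mirror of the crate's ProviderError for illustration.
#[derive(Debug)]
enum ProviderError {
    RateLimited { reset_secs: Option<u64> },
    Api { status: u16, message: String },
}

impl ProviderError {
    /// Retry on explicit rate limiting, HTTP 429, and server errors (5xx).
    fn is_retryable(&self) -> bool {
        match self {
            ProviderError::RateLimited { .. } => true,
            ProviderError::Api { status, .. } => {
                *status == 429 || (500..=599).contains(status)
            }
        }
    }
}

fn main() {
    let too_many = ProviderError::Api { status: 429, message: "slow down".into() };
    println!("429 retryable: {}", too_many.is_retryable());
}
```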
In `@src/main.rs`:
- Around line 114-122: The code currently assumes a hard-coded provider string
("github") when building paths and cache keys; instead derive a provider_name
from the selected provider entry and reuse it everywhere you currently pass the
literal "github". Concretely: after obtaining provider_entry and creating
provider (symbols: provider_entry and create_provider), compute a provider_name
(e.g., from provider_entry.name or provider_entry.provider_id) and replace all
hard-coded "github" usages in planning/cloning and cache key construction (the
call-sites around plan/cloner/cache logic referenced in the review) to use this
provider_name so path templates and cache keys reflect the active provider
consistently.
- Around line 139-157: The cached repo list is being used as-is (via
cache_manager.load() and extending repos) so CLI/config filters (e.g., org,
include_archived, include_forks) aren't applied; update the cache handling
inside the successful load block so that either (A) you apply the same filtering
function used for live discovery (call that function—e.g., filter_repos or
apply_filters—on each provider_repos before repos.extend) or (B) store filter
metadata with the cache and invalidate/skip the cache when current args
(args.no_cache/args.refresh and filter-related args) differ; implement the
change immediately after reading cache.repos (inside the if let Ok(Some(cache))
block) so only filtered repos are appended to repos, and keep the output.verbose
message but reflect whether the cache was accepted or bypassed.
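Option (A) from that prompt could be sketched like this; `OwnedRepo`'s fields and the filter names are assumptions for illustration, not the crate's real types:

```rust
// Assumed repo/filter shapes, standing in for the crate's real types.
#[derive(Clone, Debug, PartialEq)]
struct OwnedRepo {
    org: String,
    archived: bool,
    fork: bool,
}

struct Filters {
    org: Option<String>,
    include_archived: bool,
    include_forks: bool,
}

/// Re-apply the same filters used for live discovery to cached repos,
/// so a warm cache cannot reintroduce excluded repos.
fn apply_filters(repos: Vec<OwnedRepo>, f: &Filters) -> Vec<OwnedRepo> {
    repos
        .into_iter()
        .filter(|r| f.org.as_deref().map_or(true, |o| r.org == o))
        .filter(|r| f.include_archived || !r.archived)
        .filter(|r| f.include_forks || !r.fork)
        .collect()
}

fn main() {
    let repos = vec![
        OwnedRepo { org: "acme".into(), archived: false, fork: false },
        OwnedRepo { org: "acme".into(), archived: true, fork: false },
        OwnedRepo { org: "other".into(), archived: false, fork: true },
    ];
    let f = Filters { org: Some("acme".into()), include_archived: false, include_forks: false };
    println!("kept: {}", apply_filters(repos, &f).len()); // prints "kept: 1"
}
```

Running the same function on both cached and freshly discovered repos keeps the two code paths consistent.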
♻️ Duplicate comments (8)
src/errors/provider.rs (1)
**85-87: Hardcoded GitHub-specific OAuth scopes.**

The suggested scopes (`repo`, `read:org`) are GitHub-specific. GitLab and Bitbucket use different permission models. Consider using a generic message similar to line 92.

♻️ Suggested fix

```diff
 ProviderError::Api { status: 403, .. } => {
-    "Check that your token has the required scopes (repo, read:org)"
+    "Check that your token has the required permissions for this operation"
 }
```

src/cache/mod.rs (1)

**62-75: Clock drift handling is properly implemented.**

The check at lines 69-71 correctly handles the case where system time may have moved backward, treating such cache entries as invalid. This is good defensive programming.
src/config/parser.rs (1)
**1-3: Config filename/path still inconsistent.**

The module docs mention `gisa.config.toml`, while `Config::default_path` points to `~/.config/git-same/config.toml`. Please align on a single canonical path to avoid user confusion.

Also applies to: 133-141
src/auth/mod.rs (1)
**111-119: Make auth guidance provider-agnostic.**

The message says "Git provider" but hard-codes the GitHub CLI and `GITHUB_TOKEN`, which is misleading for non-GitHub providers. Consider tailoring the guidance by provider or referencing the config docs.

src/cli.rs (2)
**71-73: Init help text should match the actual default path.**

The help string mentions `~/.config/gisa/gisa.config.toml`. Please ensure this matches `Config::default_path` to avoid user confusion.

```shell
#!/bin/bash
# Locate the default config path implementation
rg -n "default_path|config\.toml" -C2 src
```
**157-159: Allow `--skip-dirty=false` if the default is true.**

With a `bool` flag and `default_value_t = true`, clap may still treat this as a no-value flag, making it impossible to set `false`. If you want user-override to false, switch to `ArgAction::Set` with a bool parser (tests will need `--skip-dirty=true` or to rely on the default).

🔧 Suggested change

```diff
-    #[arg(long, default_value_t = true)]
+    #[arg(
+        long,
+        action = clap::ArgAction::Set,
+        default_value_t = true,
+        value_parser = clap::builder::BoolishValueParser::new()
+    )]
     pub skip_dirty: bool,
```

src/clone/parallel.rs (1)
**160-162: Guard against 0 concurrency at semaphore creation.**

`CloneManagerOptions` is public, so callers can set `concurrency = 0` and `Semaphore::new(0)` will block forever. Clamp or validate at the use site to prevent deadlocks even when the builder isn't used.

🔧 Suggested change

```diff
-        let semaphore = Arc::new(Semaphore::new(self.options.concurrency));
+        let concurrency = self.options.concurrency.max(1);
+        let semaphore = Arc::new(Semaphore::new(concurrency));
```

src/main.rs (1)
**520-524: `expand_path` docs don't match behavior.**

Line 520 says env vars are expanded, but the implementation only expands `~`. Either update the doc or use `shellexpand::full`.

🛠️ Suggested fix (expand env vars too)

```diff
 /// Expands ~ and environment variables in a path.
 fn expand_path(path: &std::path::Path) -> PathBuf {
     let path_str = path.to_string_lossy();
-    let expanded = shellexpand::tilde(&path_str);
+    let expanded =
+        shellexpand::full(&path_str).unwrap_or(std::borrow::Cow::Borrowed(&path_str));
     PathBuf::from(expanded.as_ref())
 }
```

shellexpand crate: confirm that `shellexpand::full` expands both `~` and environment variables in the version used by this repo.
🧹 Nitpick comments (5)
src/cache/mod.rs (4)
**1-4: Documentation references only GitHub but module supports multiple providers.**

The module doc states "Caches GitHub API discovery results" but based on the `repos: HashMap<String, Vec<OwnedRepo>>` structure keyed by provider name, this supports multiple providers. Consider updating the doc to reflect multi-provider support.

📝 Suggested documentation update

```diff
 //! Discovery cache module
 //!
-//! Caches GitHub API discovery results to avoid hitting rate limits
+//! Caches provider API discovery results to avoid hitting rate limits
 //! and speed up subsequent runs.
```
**39-44: Field `orgs` collects repository owners, which may include both users and organizations.**

The field is named `orgs` but collects all unique `owner` values from repositories. On GitHub, an owner can be either a user or an organization. If the intent is to only track organizations, additional filtering would be needed. If tracking all owners is intended, consider renaming to `owners` for clarity.
**152-167: Non-atomic write could leave corrupted cache on crash.**

If the process is interrupted during `fs::write()`, the cache file could be left in a corrupted state. While `load()` would return an error (not silently use bad data), an atomic write pattern (write to temp file, then rename) would be more robust.

♻️ Atomic write pattern

```diff
 pub fn save(&self, cache: &DiscoveryCache) -> Result<()> {
     // Ensure parent directory exists
     if let Some(parent) = self.cache_path.parent() {
         fs::create_dir_all(parent)
             .context("Failed to create cache directory")?;
     }

     let json = serde_json::to_string_pretty(cache)
         .context("Failed to serialize cache")?;

-    fs::write(&self.cache_path, json)
-        .context("Failed to write cache file")?;
+    // Write to temp file first, then rename for atomicity
+    let temp_path = self.cache_path.with_extension("json.tmp");
+    fs::write(&temp_path, json)
+        .context("Failed to write temporary cache file")?;
+    fs::rename(&temp_path, &self.cache_path)
+        .context("Failed to rename cache file")?;

     Ok(())
 }
```
**184-191: Silent fallback to temp directory may cause unexpected behavior.**

When `new()` fails, the `Default` implementation silently falls back to the system temp directory. This could lead to confusing behavior where the cache doesn't persist across reboots (temp directories are often cleared). Consider logging a warning when this fallback occurs, or documenting this behavior.

src/auth/ssh.rs (1)
**36-78: Deduplicate the SSH key filename list.**

The same key list appears in both `has_ssh_keys` and `get_ssh_key_files`. A shared constant reduces drift.

♻️ Suggested refactor

```diff
+const SSH_KEY_FILES: &[&str] = &[
+    "id_rsa",
+    "id_ed25519",
+    "id_ecdsa",
+    "id_dsa",
+    "github_rsa",
+    "github_ed25519",
+];
+
 pub fn has_ssh_keys() -> bool {
@@
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        "id_ecdsa",
-        "id_dsa",
-        "github_rsa",
-        "github_ed25519",
-    ];
-
-    for key_file in &key_files {
+    for key_file in SSH_KEY_FILES {
         let key_path = ssh_dir.join(key_file);
         if key_path.exists() {
             return true;
         }
     }
@@
 pub fn get_ssh_key_files() -> Vec<PathBuf> {
@@
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        "id_ecdsa",
-        "id_dsa",
-        "github_rsa",
-        "github_ed25519",
-    ];
-
-    key_files
+    SSH_KEY_FILES
         .iter()
         .map(|f| ssh_dir.join(f))
         .filter(|p| p.exists())
         .collect()
 }
```
src/cache.rs (outdated)

```rust
/// Get the default cache path (~/.config/git-same/cache.json)
pub fn default_cache_path() -> Result<PathBuf> {
    let config_dir = if let Some(dir) = directories::ProjectDirs::from("", "", "git-same") {
        dir.config_dir().to_path_buf()
    } else {
        // Fallback to ~/.config/git-same
        let home = std::env::var("HOME")
            .context("HOME environment variable not set")?;
        PathBuf::from(home).join(".config").join("git-same")
    };

    Ok(config_dir.join("cache.json"))
}
```
**HOME fallback may not work on Windows.**

The fallback path resolution uses the `HOME` environment variable, which is typically not set on Windows (Windows uses `USERPROFILE` instead). Since `directories::ProjectDirs` handles cross-platform paths, this fallback would primarily affect Windows systems where the crate fails to determine the project directory.
🛠️ Suggested cross-platform fallback

```diff
 let config_dir = if let Some(dir) = directories::ProjectDirs::from("", "", "git-same") {
     dir.config_dir().to_path_buf()
 } else {
-    // Fallback to ~/.config/git-same
-    let home = std::env::var("HOME")
-        .context("HOME environment variable not set")?;
-    PathBuf::from(home).join(".config").join("git-same")
+    // Fallback to home directory config
+    let home = std::env::var("HOME")
+        .or_else(|_| std::env::var("USERPROFILE"))
+        .context("Neither HOME nor USERPROFILE environment variable is set")?;
+    PathBuf::from(home).join(".config").join("git-same")
 };
```

🤖 Prompt for AI Agents
In `@src/cache/mod.rs` around lines 118 - 130, The fallback in
default_cache_path() assumes HOME is set; make it cross-platform by replacing
the HOME env lookup with a platform-agnostic home-directory resolution (e.g.,
use dirs::home_dir() or check USERPROFILE on Windows) when
ProjectDirs::from(...) returns None; update the code in the default_cache_path
function to obtain the home directory via that cross-platform method, then join
".config"/"git-same" and "cache.json" as before so Windows will resolve a valid
fallback path.
src/discovery/mod.rs

```rust
// Determine scan depth based on structure
// {org}/{repo} -> 2 levels
// {provider}/{org}/{repo} -> 3 levels
let has_provider = self.structure.contains("{provider}");
let depth = if has_provider { 3 } else { 2 };

self.scan_dir(base_path, base_path, git, &mut repos, 0, depth);
```
**Compute scan depth from the structure template.**

Lines 139-140 only toggle between depth 2 and 3 based on `{provider}`. If a user sets `structure = "code/{org}/{repo}"` (allowed by the docs), `scan_local` stops too early and misses repos, so status reports none. Derive depth by counting template components (after placeholder substitution), or validate supported templates.
🛠️ Suggested fix

```diff
-        let has_provider = self.structure.contains("{provider}");
-        let depth = if has_provider { 3 } else { 2 };
+        let depth = std::path::Path::new(
+            &self
+                .structure
+                .replace("{provider}", "provider")
+                .replace("{org}", "org")
+                .replace("{repo}", "repo"),
+        )
+        .components()
+        .filter(|c| matches!(c, std::path::Component::Normal(_)))
+        .count();
```
🤖 Prompt for AI Agents
In `@src/discovery/mod.rs` around lines 136 - 142, The depth calculation currently
toggles only on "{provider}"; replace that with a derived depth from the
structure template by counting path segments: compute depth =
self.structure.split('/').filter(|s| !s.is_empty()).count() (convert to the
expected numeric type) and pass that to scan_dir instead of the hardcoded 2/3.
Update the code around self.structure and the call to scan_dir so any leading
prefix like "code/" is accounted for; optionally add validation to ensure the
structure contains "{org}" and "{repo}" placeholders before proceeding.
src/main.rs (outdated)

```rust
// Get first enabled provider from config
let provider_entry = config
    .enabled_providers()
    .next()
    .ok_or_else(|| AppError::config("No enabled providers configured"))?;

// Create provider
let provider = create_provider(provider_entry, &auth.token)?;
```
Avoid hard‑coding provider name for planning and cloning.
Lines 203/262/342 pass `"github"` even when the active provider is GitHub Enterprise or a custom name. This breaks `{provider}` path templates and can misplace repos. Derive a `provider_name` from the selected provider entry and reuse it consistently (including cache keys).
🧩 Suggested fix

```diff
     let provider_entry = config
         .enabled_providers()
         .next()
         .ok_or_else(|| AppError::config("No enabled providers configured"))?;
+    let provider_name = provider_entry
+        .name
+        .clone()
+        .unwrap_or_else(|| provider_entry.kind.to_string());
@@
-    let plan = orchestrator.plan_clone(&base_path, repos, "github", &git);
+    let plan = orchestrator.plan_clone(&base_path, repos, &provider_name, &git);
@@
-    let (summary, _results) = manager
-        .clone_repos(&base_path, plan.to_clone, "github", Arc::clone(&progress))
+    let (summary, _results) = manager
+        .clone_repos(&base_path, plan.to_clone, &provider_name, Arc::clone(&progress))
         .await;
@@
-    let (to_sync, skipped) =
-        orchestrator.plan_sync(&base_path, repos, "github", &git, args.skip_dirty);
+    let (to_sync, skipped) =
+        orchestrator.plan_sync(&base_path, repos, &provider_name, &git, args.skip_dirty);
```

Also applies to: 201-204, 260-263, 339-343
🤖 Prompt for AI Agents
In `@src/main.rs` around lines 114 - 122, The code currently assumes a hard-coded
provider string ("github") when building paths and cache keys; instead derive a
provider_name from the selected provider entry and reuse it everywhere you
currently pass the literal "github". Concretely: after obtaining provider_entry
and creating provider (symbols: provider_entry and create_provider), compute a
provider_name (e.g., from provider_entry.name or provider_entry.provider_id) and
replace all hard-coded "github" usages in planning/cloning and cache key
construction (the call-sites around plan/cloner/cache logic referenced in the
review) to use this provider_name so path templates and cache keys reflect the
active provider consistently.
src/main.rs (Outdated)

```rust
// Check cache unless --no-cache or --refresh
let mut repos = Vec::new();
let use_cache = !args.no_cache;
let force_refresh = args.refresh;

if use_cache && !force_refresh {
    if let Ok(cache_manager) = CacheManager::new() {
        if let Ok(Some(cache)) = cache_manager.load() {
            output.verbose(&format!(
                "Using cached discovery ({} repos, {} seconds old)",
                cache.repo_count,
                cache.age_secs()
            ));
            // Extract repos from cache
            for provider_repos in cache.repos.values() {
                repos.extend(provider_repos.clone());
            }
        }
    }
```
Cache bypasses current filters.
Lines 139-157 load cached repos as-is, but the CLI/config filters set on lines 127-135 are not applied. A warm cache can therefore clone/sync repos outside `--org` or include archived/forks unexpectedly. Apply filters to cached repos or invalidate the cache when filters differ.
🧰 Suggested fix (filter cached repos)

```diff
 if use_cache && !force_refresh {
     if let Ok(cache_manager) = CacheManager::new() {
         if let Ok(Some(cache)) = cache_manager.load() {
             output.verbose(&format!(
                 "Using cached discovery ({} repos, {} seconds old)",
                 cache.repo_count,
                 cache.age_secs()
             ));
             // Extract repos from cache
             for provider_repos in cache.repos.values() {
                 repos.extend(provider_repos.clone());
             }
         }
     }
 }
+
+if use_cache && !force_refresh && !repos.is_empty() {
+    repos.retain(|r| {
+        (filters.include_archived || !r.repo.archived)
+            && (filters.include_forks || !r.repo.fork)
+            && (filters.orgs.is_empty() || filters.orgs.contains(&r.owner))
+            && !filters
+                .exclude_repos
+                .iter()
+                .any(|ex| ex == r.full_name())
+    });
+}
```
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// Check cache unless --no-cache or --refresh
let mut repos = Vec::new();
let use_cache = !args.no_cache;
let force_refresh = args.refresh;

if use_cache && !force_refresh {
    if let Ok(cache_manager) = CacheManager::new() {
        if let Ok(Some(cache)) = cache_manager.load() {
            output.verbose(&format!(
                "Using cached discovery ({} repos, {} seconds old)",
                cache.repo_count,
                cache.age_secs()
            ));
            // Extract repos from cache
            for provider_repos in cache.repos.values() {
                repos.extend(provider_repos.clone());
            }
        }
    }
}

if use_cache && !force_refresh && !repos.is_empty() {
    repos.retain(|r| {
        (filters.include_archived || !r.repo.archived)
            && (filters.include_forks || !r.repo.fork)
            && (filters.orgs.is_empty() || filters.orgs.contains(&r.owner))
            && !filters
                .exclude_repos
                .iter()
                .any(|ex| ex == r.full_name())
    });
}
```
🤖 Prompt for AI Agents
In `@src/main.rs` around lines 139 - 157, The cached repo list is being used as-is
(via cache_manager.load() and extending repos) so CLI/config filters (e.g., org,
include_archived, include_forks) aren't applied; update the cache handling
inside the successful load block so that either (A) you apply the same filtering
function used for live discovery (call that function—e.g., filter_repos or
apply_filters—on each provider_repos before repos.extend) or (B) store filter
metadata with the cache and invalidate/skip the cache when current args
(args.no_cache/args.refresh and filter-related args) differ; implement the
change immediately after reading cache.repos (inside the if let Ok(Some(cache))
block) so only filtered repos are appended to repos, and keep the output.verbose
message but reflect whether the cache was accepted or bypassed.
Actionable comments posted: 1
🧹 Nitpick comments (3)
.claude/CLAUDE.md (1)
10-10: Avoid hardcoding test counts in docs. The numbers (207/16/8) will drift as tests change, making the doc stale. Consider removing the counts or making them approximate.
♻️ Proposed tweak
```diff
-cargo test # Run all tests (207 unit + 16 integration + 8 doc)
+cargo test # Run all tests
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/CLAUDE.md at line 10, The README line showing "cargo test # Run all tests (207 unit + 16 integration + 8 doc)" hardcodes test counts which will drift; update the .claude/CLAUDE.md entry by removing the specific numeric counts or replacing them with a non-specific phrase like "Run all tests (unit, integration and doc tests)" or "Run all tests (counts may vary)" so the comment remains accurate over time — edit the line containing the cargo test example to remove or generalize the "(207 unit + 16 integration + 8 doc)" portion.src/auth/ssh.rs (1)
47-90: Consider extracting the key file list to a shared constant. The `key_files` array is duplicated in `has_ssh_keys()` (lines 48-55) and `get_ssh_key_files()` (lines 76-83). This could lead to maintenance issues if the list needs updating.

♻️ Suggested refactor

```diff
+const SSH_KEY_FILES: &[&str] = &[
+    "id_rsa",
+    "id_ed25519",
+    "id_ecdsa",
+    "id_dsa",
+    "github_rsa",
+    "github_ed25519",
+];
+
 /// Detect if SSH keys exist in the standard locations.
 pub fn has_ssh_keys() -> bool {
     let home = match std::env::var("HOME") {
         Ok(h) => h,
         Err(_) => return false,
     };

     let ssh_dir = PathBuf::from(home).join(".ssh");

-    // Check for common SSH key types
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        "id_ecdsa",
-        "id_dsa",
-        "github_rsa",
-        "github_ed25519",
-    ];
-
-    for key_file in &key_files {
+    for key_file in SSH_KEY_FILES {
         let key_path = ssh_dir.join(key_file);
         if key_path.exists() {
             return true;
         }
     }

     false
 }
```

Apply similar changes to
`get_ssh_key_files()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/ssh.rs` around lines 47 - 90, The key file list is duplicated between has_ssh_keys() and get_ssh_key_files(); extract it into a single shared constant (e.g., const KEY_FILES: [&str; 6] = [...]) and replace the local key_files arrays in both functions with references to that constant, then iterate KEY_FILES.iter() inside has_ssh_keys() and get_ssh_key_files() so updates are made in one place; ensure the constant has a 'static lifetime and use the same join/filter logic already in get_ssh_key_files().src/auth/mod.rs (1)
235-243: Add `url` as a direct dependency and use it for more robust URL parsing. The `extract_host` function uses simple string manipulation which works for common cases but may fail on edge cases (e.g., URLs with authentication credentials, unusual ports, or malformed input). The `url` crate should be added as a direct dependency in `Cargo.toml` (it's already available transitively via `reqwest`, but best practice is to declare direct usage explicitly).

♻️ Suggested improvement

In `Cargo.toml`, add:

```toml
url = "2"
```

Then update the function in `src/auth/mod.rs`:

```diff
 fn extract_host(url: &str) -> Option<String> {
-    // Simple extraction - could use url crate for more robust parsing
-    let url = url
-        .trim_start_matches("https://")
-        .trim_start_matches("http://");
-    let host = url.split('/').next()?;
-    Some(host.to_string())
+    url::Url::parse(url)
+        .ok()
+        .and_then(|u| u.host_str().map(|h| h.to_string()))
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/mod.rs` around lines 235 - 243, Add the `url = "2"` dependency to Cargo.toml and replace the simple string-based logic in the extract_host function with proper parsing using url::Url: parse the input with Url::parse (handle or map parse errors to return None), then extract the host via url.host_str() and return it as a String (including the port only if you intentionally want it; otherwise omit port by returning host_str()). Update the function named extract_host to use Url::parse and url.host_str() so edge cases like auth credentials, ports, or odd formatting are handled robustly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/auth/mod.rs`:
- Around line 278-309: The test test_get_auth_with_config_token mutates global
environment variables (GITHUB_TOKEN, GH_TOKEN, GISA_TOKEN) which can cause
flakiness in parallel runs; to fix, make the test run serially (e.g., add the
serial_test crate and annotate the test with #[serial] or equivalent) or avoid
touching global names by refactoring get_auth to accept injected env values for
testing and update test_get_auth_with_config_token to pass those instead; locate
the test function test_get_auth_with_config_token and the get_auth function to
apply the change and ensure the original env is still restored if you keep the
current approach.
---
Duplicate comments:
In `@README.md`:
- Line 11: The README's "Multi-Provider Support" bullet currently claims support
for "GitHub, GitHub Enterprise, GitLab, and Bitbucket" but the roadmap shows
GitLab and Bitbucket are pending; update that bullet (the "Multi-Provider
Support" line/string) to reflect current capabilities—e.g., list only "GitHub"
and "GitHub Enterprise" or append "GitLab and Bitbucket (coming soon)"—so the
README and roadmap are consistent.
In `@src/clone/parallel.rs`:
- Around line 56-70: The public concurrency field on CloneManagerOptions allows
callers to set 0 and bypass the with_concurrency clamp, leading to
Semaphore::new(0) deadlock; make the field non-public (remove pub from
concurrency in CloneManagerOptions), ensure the builder/constructor method
with_concurrency still clamps to a safe minimum (>=1), and add a public accessor
(e.g., fn concurrency(&self) -> usize) so external code reads the clamped value
rather than writing directly.
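A minimal sketch of the encapsulation this item asks for. The struct and method names mirror the review, but the real `CloneManagerOptions` has more fields, so treat this as an assumption-laden outline rather than the actual implementation.

```rust
/// Options with a private concurrency field so callers cannot set 0
/// directly and bypass the clamp (Semaphore::new(0) would deadlock).
pub struct CloneManagerOptions {
    concurrency: usize,
}

impl CloneManagerOptions {
    pub fn new() -> Self {
        Self { concurrency: 4 } // illustrative default
    }

    /// Builder method clamps to a safe minimum of 1.
    pub fn with_concurrency(mut self, n: usize) -> Self {
        self.concurrency = n.max(1);
        self
    }

    /// Public read-only accessor returning the clamped value.
    pub fn concurrency(&self) -> usize {
        self.concurrency
    }
}
```

With the field private, external code can only observe the clamped value through the accessor.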
In `@src/config/parser.rs`:
- Around line 1-3: The module docstring and Config::default_path disagree on the
config filename/path (module claims "gisa.config.toml" while
Config::default_path builds "~/.config/git-same/config.toml"); pick one
canonical path/name and make both match: either update the module doc comment to
reference "~/.config/git-same/config.toml" (or "config.toml" in
~/.config/git-same) or change Config::default_path to return the
"gisa.config.toml" filename/location used in the module doc; ensure the chosen
name is used consistently in the module-level comment and the
Config::default_path function and adjust any related constant names or
documentation strings accordingly.
In `@src/git/shell.rs`:
- Around line 71-81: The loop over output.lines() currently uses an unsafe slice
&line[0..2] which can panic on short lines; change it to a defensive check such
as using line.get(0..2) and match on the Option or use line.starts_with("??") to
detect untracked files, and otherwise mark is_dirty = true; update the block
around for line in output.lines() to skip empty lines as before and use the safe
check instead of the direct slicing so has_untracked and is_dirty remain correct
without risking a panic.
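The safe-slicing fix described above can be sketched as a free-standing helper; the tuple return and naming are illustrative, not the actual `src/git/shell.rs` shape.

```rust
/// Classify one `git status --porcelain` line without panicking on
/// short input: `line.get(0..2)` yields None where `&line[0..2]` panics.
fn classify_status_line(line: &str) -> (bool, bool) {
    // Returns (has_untracked, is_dirty) contributions for this line.
    match line.get(0..2) {
        Some("??") => (true, false), // untracked file
        Some(_) => (false, true),    // staged/modified/renamed/etc.
        None => (false, true),       // short or malformed: treat as dirty
    }
}
```

`str::get` also returns None when the range would split a multi-byte character, so this stays panic-free on unusual file names.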
In `@src/lib.rs`:
- Around line 7-10: Update the crate-level docs in src/lib.rs to stop claiming
full GitLab and Bitbucket support: edit the feature list lines that currently
state "Works with GitHub, GitHub Enterprise, GitLab, and Bitbucket" to either
remove GitLab/Bitbucket or mark them as "coming soon" / "experimental" so they
match the provider factory behavior; also ensure the docs reference that the
provider factory currently returns NotImplemented for those providers (the
provider factory code that returns NotImplemented should be mentioned by name in
the comment if applicable) so readers aren’t misled.
In `@src/main.rs`:
- Around line 385-388: The call to orchestrator.plan_sync currently hardcodes
the provider string "github"; extract a provider_name variable the same way
cmd_clone does (matching how cmd_clone computes provider_name from args/flags)
and pass provider_name into orchestrator.plan_sync instead of the literal
"github", ensuring all uses in this block use the consistent provider_name
variable.
- Around line 304-307: The clone_repos call is using a hardcoded "github"
provider string; replace that literal with the derived provider_name variable so
clone_repos(&base_path, plan.to_clone, provider_name, progress_dyn). Locate the
call to manager.clone_repos (near the creation of progress_dyn) and pass
provider_name (the same value used in plan_clone/path templates and cache keys)
instead of "github", ensuring provider_name is in scope where progress_dyn,
base_path, and plan.to_clone are used.
- Around line 241-244: The code passes the hardcoded string "github" into
orchestrator.plan_clone which breaks path templates that use {provider}; instead
use the computed provider_name (the variable set earlier around provider_name)
or derive the active provider's name and pass that into plan_clone (i.e.,
replace the literal "github" with provider_name so plan_clone receives the
actual provider identifier).
- Around line 569-573: The doc for expand_path is inaccurate: it claims to
expand both tilde and environment variables but currently only calls
shellexpand::tilde; update the implementation to use shellexpand::full (or
otherwise perform environment-variable expansion) and handle its Result safely,
returning a PathBuf, or alternatively change the doc comment to only mention
tilde expansion; reference the expand_path function and replace the
shellexpand::tilde call (or adjust the doc) accordingly.
- Around line 182-196: Cached repos are being added directly to repos when
loaded from CacheManager::new()/cache_manager.load(), which bypasses CLI/config
filters (e.g., --org, include_archived, include_forks); update the cache-loading
path to run the same filtering logic used for fresh discovery before extending
repos: either call the existing filter function (or refactor the discovery
filter into a reusable function) and apply it to each entry from
cache.repos.values() (or the provider_repos iterator) so only repos that satisfy
the current filters are added to the repos collection.
In `@src/sync/manager.rs`:
- Around line 106-115: The public SyncManagerOptions.concurrency allows callers
to bypass the builder's with_concurrency clamping and can produce
Semaphore::new(0) deadlock; fix by ensuring concurrency is clamped before use or
made private: either (A) make the field private and expose a builder/getter that
enforces the clamp in with_concurrency, or (B) clamp the value at the start of
sync_repos (e.g., let concurrency = self.options.concurrency.clamp(1, MAX);)
before calling Semaphore::new(concurrency) so Semaphore never receives 0; update
references to SyncManagerOptions.concurrency and with_concurrency accordingly.
In `@tests/integration_test.rs`:
- Around line 8-12: The test helper git_same_binary currently hardcodes
"target/debug/git-same", which breaks release builds; update git_same_binary to
compute the profile dynamically (e.g., let profile = if cfg!(debug_assertions) {
"debug" } else { "release" }) and then build PathBuf from
env!("CARGO_MANIFEST_DIR") joined with format!("target/{}/git-same", profile)
(or alternatively check both target/debug and target/release and return the
existing path) so the test uses the correct binary for both debug and release
runs.
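The profile-aware lookup in the last item can be sketched like this; the `git-same` binary name and `CARGO_MANIFEST_DIR` usage follow the prompt, while the current-directory fallback is an added assumption.

```rust
use std::path::PathBuf;

/// Locate the binary under test for both `cargo test` (debug) and
/// `cargo test --release` builds instead of hardcoding target/debug.
fn git_same_binary() -> PathBuf {
    // cfg!(debug_assertions) is true in debug builds, false with --release.
    let profile = if cfg!(debug_assertions) { "debug" } else { "release" };
    // CARGO_MANIFEST_DIR is set by cargo when running tests; fall back to
    // the current directory when the binary is invoked outside cargo.
    let root = std::env::var("CARGO_MANIFEST_DIR").unwrap_or_else(|_| ".".to_string());
    PathBuf::from(root)
        .join("target")
        .join(profile)
        .join("git-same")
}
```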
---
Nitpick comments:
In @.claude/CLAUDE.md:
- Line 10: The README line showing "cargo test # Run all
tests (207 unit + 16 integration + 8 doc)" hardcodes test counts which will
drift; update the .claude/CLAUDE.md entry by removing the specific numeric
counts or replacing them with a non-specific phrase like "Run all tests (unit,
integration and doc tests)" or "Run all tests (counts may vary)" so the comment
remains accurate over time — edit the line containing the cargo test example to
remove or generalize the "(207 unit + 16 integration + 8 doc)" portion.
In `@src/auth/mod.rs`:
- Around line 235-243: Add the `url = "2"` dependency to Cargo.toml and replace
the simple string-based logic in the extract_host function with proper parsing
using url::Url: parse the input with Url::parse (handle or map parse errors to
return None), then extract the host via url.host_str() and return it as a String
(including the port only if you intentionally want it; otherwise omit port by
returning host_str()). Update the function named extract_host to use Url::parse
and url.host_str() so edge cases like auth credentials, ports, or odd formatting
are handled robustly.
In `@src/auth/ssh.rs`:
- Around line 47-90: The key file list is duplicated between has_ssh_keys() and
get_ssh_key_files(); extract it into a single shared constant (e.g., const
KEY_FILES: [&str; 6] = [...]) and replace the local key_files arrays in both
functions with references to that constant, then iterate KEY_FILES.iter() inside
has_ssh_keys() and get_ssh_key_files() so updates are made in one place; ensure
the constant has a 'static lifetime and use the same join/filter logic already
in get_ssh_key_files().
Actionable comments posted: 7
🧹 Nitpick comments (8)
docs/specs/Gisa-S4-config-specification.md (1)
88-91: Add a warning about plaintext tokens in config. Storing `token` in a config file can leak secrets (git history, backups, logs). Recommend adding a short warning and steering users to `gh-cli` or `token_env`.

✍️ Suggested doc tweak

```diff
 | `auth` | string | `"gh-cli"` | `"gh-cli"`, `"env"`, `"token"` |
 | `token_env` | string | `""` | Env var name (required when `auth = "env"`) |
-| `token` | string | `""` | Token value (required when `auth = "token"`) |
+| `token` | string | `""` | Token value (required when `auth = "token"`) — avoid committing plaintext tokens; prefer `gh-cli` or `token_env` |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/specs/Gisa-S4-config-specification.md` around lines 88 - 91, Add a short warning under the config table about storing plaintext tokens: mention that the `token` field can expose secrets via git history, backups, or logs and recommend using `auth = "gh-cli"` or `token_env` instead; update the description cells for `token` and `token_env` (and optionally `auth`) to note the security recommendation and required alternatives when `auth = "env"` or `auth = "token"`.docs/specs/Gisa-S3-github-api-access.md (1)
221-236: Cache format update from YAML to JSON is appropriate. The JSON format and XDG-compliant location (`~/.config/git-same/cache.json`) follow good practices. The cache structure looks suitable for storing discovery results.

One optional clarity improvement: Line 233's constant definition `DEFAULT_CACHE_TTL = 3600` is shown in narrative text. Consider formatting it more clearly, perhaps in a code block or dedicated "Configuration Constants" section if this is meant to document an actual code constant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/specs/Gisa-S3-github-api-access.md` around lines 221 - 236, The documentation mentions the constant DEFAULT_CACHE_TTL = 3600 inline, which is unclear; move or duplicate this into a clearly formatted code block or a dedicated "Configuration Constants" section and reference DEFAULT_CACHE_TTL there (showing its value and meaning in seconds), so readers can clearly see it as an actual configurable constant and understand the default cache TTL behavior.src/sync/manager.rs (1)
179-182: Consider clamping concurrency at semaphore creation as a final safeguard. While `with_concurrency()` clamps the value, `SyncManagerOptions` has public fields allowing direct assignment of `concurrency = 0`. Clamping at semaphore creation provides defense-in-depth against semaphore deadlock.

🔧 Proposed fix

```diff
 pub async fn sync_repos(
     &self,
     repos: Vec<LocalRepo>,
     progress: Arc<dyn SyncProgress>,
 ) -> (OpSummary, Vec<SyncResult>) {
     let total = repos.len();
-    let semaphore = Arc::new(Semaphore::new(self.options.concurrency));
+    let concurrency = self.options.concurrency.max(1);
+    let semaphore = Arc::new(Semaphore::new(concurrency));
     let mut handles = Vec::with_capacity(total);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sync/manager.rs` around lines 179 - 182, Clamp the semaphore permit count when creating it to avoid a zero-capacity semaphore if SyncManagerOptions.concurrency was set to 0 directly; in the block that constructs `semaphore = Arc::new(Semaphore::new(self.options.concurrency))` replace the raw use of `self.options.concurrency` with a clamped value (e.g. max(1, ...)) so that `Semaphore::new` is always called with at least 1, preserving the existing `with_concurrency()` behavior as a primary clamp but adding a defense-in-depth check at semaphore creation.src/auth/ssh.rs (2)
50-57: Consider extracting duplicate key file list to a constant. The same list of SSH key file names appears in both `has_ssh_keys()` and `get_ssh_key_files()`. Extracting to a module-level constant would reduce duplication and ensure consistency.

♻️ Proposed refactor

```diff
+/// Common SSH key file names to check.
+const SSH_KEY_FILES: &[&str] = &[
+    "id_rsa",
+    "id_ed25519",
+    "id_ecdsa",
+    "id_dsa",
+    "github_rsa",
+    "github_ed25519",
+];
+
 /// Detect if SSH keys exist in the standard locations.
 pub fn has_ssh_keys() -> bool {
     let home = match std::env::var("HOME") {
         Ok(h) => h,
         Err(_) => return false,
     };

     let ssh_dir = PathBuf::from(home).join(".ssh");

-    // Check for common SSH key types
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        "id_ecdsa",
-        "id_dsa",
-        "github_rsa",
-        "github_ed25519",
-    ];
-
-    for key_file in &key_files {
+    for key_file in SSH_KEY_FILES {
         let key_path = ssh_dir.join(key_file);
         if key_path.exists() {
             return true;
         }
     }

     false
 }
```

Also applies to: 78-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/ssh.rs` around lines 50 - 57, Extract the duplicated array of SSH filenames into a single module-level constant (e.g., const SSH_KEY_FILES: &[&str] = &["id_rsa","id_ed25519","id_ecdsa","id_dsa","github_rsa","github_ed25519"]); then replace the local `key_files` arrays in both has_ssh_keys() and get_ssh_key_files() with references to SSH_KEY_FILES so both functions use the same source of truth and duplication is removed.
42-45: HOME environment variable fallback may not work on Windows. The `HOME` environment variable is typically not set on Windows (Windows uses `USERPROFILE` instead). Consider adding a cross-platform fallback if Windows support is desired for SSH key detection.

♻️ Proposed cross-platform fix

```diff
 pub fn has_ssh_keys() -> bool {
-    let home = match std::env::var("HOME") {
-        Ok(h) => h,
-        Err(_) => return false,
-    };
+    let home = std::env::var("HOME")
+        .or_else(|_| std::env::var("USERPROFILE"))
+        .ok();
+    let Some(home) = home else {
+        return false;
+    };
```

Also applies to: 71-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/ssh.rs` around lines 42 - 45, The code currently reads the HOME env var with std::env::var and returns false if missing, which fails on Windows where USERPROFILE is used; update the SSH key detection to use a cross-platform home lookup (try std::env::var("HOME") then std::env::var("USERPROFILE"), or better use a platform helper like dirs::home_dir()) to populate the local variable currently named `home` and apply the same change where `home` is obtained again (the other occurrence around the second block). Ensure the logic preserves the existing failure behavior only if no home directory can be resolved.docs/CHANGELOG.md (1)
28-28: Minor: Consider capitalizing "GitHub.com" for consistency. The official branding uses a capital "H" in GitHub.

```diff
-- GitHub support (github.com)
+- GitHub support (GitHub.com)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/CHANGELOG.md` at line 28, Update the changelog entry string "GitHub support (github.com)" to use the official branding by capitalizing the H — change it to "GitHub support (GitHub.com)" so the term "GitHub.com" matches the project's casing and branding conventions.docs/specs/Gisa-S5-architecture-overview.md (1)
24-54: Add language specifier to ASCII diagram code blocks for accessibility. The markdown linter flags fenced code blocks without language specifiers. For ASCII diagrams, you can use `text` or `plaintext` as the language identifier.

📝 Suggested fix

````diff
-```
+```text
 ┌─────────────────────────────────────────────────────────────────┐
 │                          Gisa CLI                               │
````

This applies to all ASCII diagram blocks in the document (lines 24, 97, 128, 209, 258, 299).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/specs/Gisa-S5-architecture-overview.md` around lines 24 - 54, The fenced ASCII diagram code blocks in the document are missing a language specifier; update each diagram's opening fence (the ``` line) to include a plain text language identifier (for example change ``` to ```text or ```plaintext) so the markdown linter recognizes them as text blocks—apply this change to every ASCII diagram block (e.g., the triple-backtick blocks containing the box-art diagrams) throughout the file.src/discovery/planning.rs (1)
100-157: Consider guarding against symlink loops in recursive directory scanning. The `scan_dir` function recursively traverses directories but doesn't protect against symlink cycles. Since `Path::is_dir()` follows symlinks, a symlinked directory can reintroduce a cycle, so consider adding explicit symlink detection for robustness.

♻️ Proposed guard

```diff
 for entry in entries.flatten() {
     let entry_path = entry.path();
     if !entry_path.is_dir() {
         continue;
     }

     // Skip hidden directories
     if entry.file_name().to_string_lossy().starts_with('.') {
         continue;
     }

+    // Skip symlinks to avoid potential cycles
+    if entry_path.is_symlink() {
+        continue;
+    }
+
     if current_depth + 1 == max_depth && git.is_repo(&entry_path) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/discovery/planning.rs` around lines 100 - 157, The recursive scanner scan_dir should skip symbolic links to avoid symlink cycles: before treating entry_path as a directory or recursing, call std::fs::symlink_metadata on the entry (use the existing entry or entry_path) and check file_type().is_symlink(), returning/continuing if true; keep the current is_dir() check and recursion into scan_dir (with current_depth/current_depth+1 and max_depth) but do not follow symlinks — this prevents loops while preserving the existing repo detection logic in scan_dir.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/specs/Gisa-S3-github-api-access.md`:
- Around line 276-279: Prominently surface the config.toml token warning by
moving it above other items or into its own "⚠️ WARNING: config.toml in project
directory" subsection in the Security Considerations, then add explicit
actionable guidance: instruct users to add config.toml to .gitignore (provide
example patterns such as ignoring config.toml and any env or secret files),
recommend using a user-global path like ~/.config/git-same/config.toml instead
of a project-local file, and add a runtime safeguard in the tool to detect and
emit a clear warning when tokens are found in a project-level config file
(implement token-detection where config parsing happens and log a
high-visibility warning).
- Around line 129-133: Replace the weak note "(not recommended for shared
configs)" next to the config.toml example with a clear, strong security warning:
explicitly state that storing tokens in config.toml (Project directory) risks
accidental commits to version control and credential leakage, instruct users to
never store secrets in repository-tracked files, recommend using environment
variables (real CI env vars, not .env defaults) or a secrets manager for the
[auth].token, and add a short remediation step telling users how to
rotate/delete tokens if accidentally committed.
- Line 255: The docs currently state that config.toml is stored in the "Project
directory", which raises accidental-commit risk; update the specification and UX
to either (1) change the default storage location from project directory to a
user-global path (e.g., ~/.config/git-same/ alongside the cache) so tokens are
not kept per-repo, (2) add a prominent, bold security note wherever
"config.toml" and "Project directory" are referenced instructing users to add
project-level config.toml to .gitignore immediately, and (3) add a runtime check
that scans for tokens in any project-directory config.toml and emits a clear
warning/error at startup; update any documentation strings and CLI/UX text that
mention "config.toml" or "Project directory" to reflect the new default and
warning behavior.
In `@src/app/run.rs`:
- Around line 12-28: Currently config is loaded unconditionally via
Config::load_from / Config::load before matching cli.command which causes Init
and Completions to error when config is missing; change the logic to only call
Config::load_from or Config::load inside the match arms that require it (e.g.,
before calling run_clone, run_sync/run_pull via run_sync, and run_status), while
leaving Command::Init (run_init) and Command::Completions
(crate::cli::generate_completions) to run without loading config; ensure you
pass &config only where created so run_clone, run_sync, and run_status receive
it.
In `@src/commands/clone.rs`:
- Around line 24-30: Replace the generic get_auth(None) call and the hardcoded
with_ssh(!args.https) usage with provider-aware logic: call
get_auth_for_provider(&provider_entry) to obtain auth for the current provider
(assign to auth) and determine SSH preference by consulting
provider_entry.prefer_ssh (use its value when Some, otherwise fall back to
!args.https), then pass that boolean into with_ssh(...); make the same change at
the other occurrence around line ~167 so provider-level auth and prefer_ssh take
precedence over the previous defaults.
In `@src/commands/status.rs`:
- Around line 34-118: The summary counters dirty_count and behind_count are
updated before applying the args.dirty/args.behind/args.org filters, causing the
final summary to include repos that were filtered out; move the increments so
they occur after the filter checks (i.e., after the if
args.dirty/args.behind/args.org continue checks inside the Ok(s) arm) or
alternatively introduce separate counters like total_dirty/total_behind and
displayed_dirty/displayed_behind and increment the displayed_* counters only
after passing filters (refer to dirty_count, behind_count, args.dirty,
args.behind, args.org, the loop over local_repos and the Ok(s) match arm to
locate where to change).
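The counter placement can be sketched like this. The `RepoStatus` shape and the filter flags are simplified assumptions, not the real `status.rs` types:

```rust
#[allow(dead_code)]
struct RepoStatus {
    name: String,
    dirty: bool,
    behind: bool,
    org: String,
}

/// Count dirty/behind only for repos that survive the active filters,
/// so the summary matches what was actually displayed.
fn summarize(repos: &[RepoStatus], only_dirty: bool, org_filter: Option<&str>) -> (usize, usize) {
    let (mut dirty, mut behind) = (0, 0);
    for s in repos {
        // Apply the filters first...
        if only_dirty && !s.dirty {
            continue;
        }
        if let Some(org) = org_filter {
            if s.org != org {
                continue;
            }
        }
        // ...then increment the displayed counters.
        if s.dirty {
            dirty += 1;
        }
        if s.behind {
            behind += 1;
        }
    }
    (dirty, behind)
}
```

The alternative the prompt mentions — separate `total_*` and `displayed_*` counters — would increment one pair before the `continue`s and the other pair after them.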
In `@src/discovery/orchestrator.rs`:
- Around line 33-41: compute_path currently injects provider, repo.owner and
repo.repo.name directly into the path template; add explicit sanitization to
prevent path traversal or path-separator characters by creating a helper (e.g.,
sanitize_component) that validates and/or normalizes a component (strip or
replace sequences like "..", remove path separators '/' and '\', trim
whitespace, and fallback to a safe placeholder if empty) and call it from
compute_path on provider, &repo.owner and &repo.repo.name before performing the
.replace calls on self.structure so the returned PathBuf cannot escape the
intended base_path.
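A runnable sketch of the `sanitize_component` helper described above. The exact normalization policy here — underscore replacement and a `_unnamed_` placeholder — is an assumption for illustration, not the project's actual rules:

```rust
/// Sanitize one path component so templated paths cannot escape base_path:
/// strip path separators, neutralize "..", trim whitespace, and fall back
/// to a safe placeholder if nothing remains.
fn sanitize_component(raw: &str) -> String {
    let cleaned: String = raw
        .trim()
        .chars()
        .map(|c| if c == '/' || c == '\\' { '_' } else { c })
        .collect();
    let cleaned = cleaned.replace("..", "__");
    if cleaned.is_empty() {
        "_unnamed_".to_string()
    } else {
        cleaned
    }
}
```

`compute_path` would then call this on `provider`, `&repo.owner`, and `&repo.repo.name` before performing the `.replace` calls on `self.structure`.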
---
Duplicate comments:
In `@docs/CHANGELOG.md`:
- Line 8: The release header for version "## [0.2.0] - 2024-01-20" in
docs/CHANGELOG.md is using an outdated date; verify the actual release date for
v0.2.0 and update that header to the correct date (replace "2024-01-20" with the
verified release date) so the changelog reflects the correct timestamp.
In `@src/cache/mod.rs`:
- Around line 141-152: The fallback in default_cache_path() relies on the HOME
env var which is not guaranteed on Windows; update the fallback to use
platform-appropriate config directory resolution (e.g., use dirs::config_dir()
or check USERPROFILE if HOME is missing) instead of only reading HOME. Locate
the default_cache_path function and replace the manual HOME-based fallback with
a call to a cross-platform helper (dirs::config_dir() or similar) or implement a
fallback that first checks std::env::var("HOME").or_else(|_|
std::env::var("USERPROFILE")) so the path resolves correctly on Windows and
Unix.
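A sketch of the cross-platform fallback, assuming a hypothetical `cache.json` filename; real code might simply call `dirs::config_dir()` instead of resolving the home directory by hand:

```rust
use std::env;
use std::path::PathBuf;

/// Resolve a home directory on both Unix (HOME) and Windows (USERPROFILE).
fn home_dir() -> Option<PathBuf> {
    env::var_os("HOME")
        .or_else(|| env::var_os("USERPROFILE"))
        .map(PathBuf::from)
}

/// Fallback cache location built from the resolved home directory.
/// The `.config/git-same/cache.json` layout is an assumption for this sketch.
fn default_cache_path() -> Option<PathBuf> {
    home_dir().map(|h| h.join(".config").join("git-same").join("cache.json"))
}
```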
In `@src/commands/clone.rs`:
- Around line 32-40: The code currently hard-codes "github" for provider
templates; instead derive a provider_name from the selected provider entry and
use it wherever templates or messages expect {provider} (replace any literal
"github" usages in the clone/planning flow). Concretely, after obtaining
provider_entry (the result of config.enabled_providers().next()), extract its
identifier into a provider_name variable and pass or inject that provider_name
into create_provider calls and any templating/formatting logic in the
cloning/planning functions (references: provider_entry, create_provider, the
clone/planning template usage), ensuring no literal "github" strings remain.
- Around line 57-105: Cached discovery currently merges all provider entries
(cache.repos.values()) and bypasses the active provider and user filters; change
the cache-loading path so it only reads the cached entry for the active provider
(use provider_entry.name or provider_entry.kind to pick
cache.repos[provider_name]) and then run the same filtering logic you apply to
freshly discovered repos (the same filter functions/conditions used after
orchestrator.discover in this flow) before assigning to repos. Also consider
including the active filter/signature when constructing/validating the cache
(CacheManager::save / DiscoveryCache) so cached results are invalidated if
filters/provider differ.
In `@src/commands/sync.rs`:
- Around line 36-79: The code hard-codes "github" when calling
orchestrator.plan_sync which breaks provider templates and non-GitHub providers;
change the call to use a provider_name derived from the selected provider_entry
(the same source used to create the provider via create_provider) — e.g.,
extract provider_name from provider_entry (provider_entry.name or
provider_entry.type/variant as appropriate) and pass that variable to
orchestrator.plan_sync(provider_name) instead of the literal "github"; update
the call site near create_provider, provider, and the orchestrator.plan_sync
invocation.
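The derivation both of these duplicate comments ask for can be sketched as follows — `ProviderKind` and `ProviderEntry` here are simplified stand-ins for the config types named in the comments:

```rust
#[derive(Clone, Copy)]
enum ProviderKind {
    GitHub,
    GitLab,
}

impl ProviderKind {
    fn as_str(self) -> &'static str {
        match self {
            ProviderKind::GitHub => "github",
            ProviderKind::GitLab => "gitlab",
        }
    }
}

struct ProviderEntry {
    name: Option<String>,
    kind: ProviderKind,
}

/// Derive the name used in {provider} templates from the entry itself,
/// falling back to the provider kind instead of a hard-coded "github".
fn provider_name(entry: &ProviderEntry) -> String {
    entry
        .name
        .clone()
        .unwrap_or_else(|| entry.kind.as_str().to_string())
}
```

The resulting string would be passed to `orchestrator.plan_sync(...)` / `plan_clone(...)` wherever the literal `"github"` appears today.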
---
Nitpick comments:
In `@docs/CHANGELOG.md`:
- Line 28: Update the changelog entry string "GitHub support (github.com)" to
use the official branding by capitalizing the H — change it to "GitHub support
(GitHub.com)" so the term "GitHub.com" matches the project's casing and branding
conventions.
In `@docs/specs/Gisa-S3-github-api-access.md`:
- Around line 221-236: The documentation mentions the constant DEFAULT_CACHE_TTL
= 3600 inline, which is unclear; move or duplicate this into a clearly formatted
code block or a dedicated "Configuration Constants" section and reference
DEFAULT_CACHE_TTL there (showing its value and meaning in seconds), so readers
can clearly see it as an actual configurable constant and understand the default
cache TTL behavior.
In `@docs/specs/Gisa-S4-config-specification.md`:
- Around line 88-91: Add a short warning under the config table about storing
plaintext tokens: mention that the `token` field can expose secrets via git
history, backups, or logs and recommend using `auth = "gh-cli"` or `token_env`
instead; update the description cells for `token` and `token_env` (and
optionally `auth`) to note the security recommendation and required alternatives
when `auth = "env"` or `auth = "token"`.
In `@docs/specs/Gisa-S5-architecture-overview.md`:
- Around line 24-54: The fenced ASCII diagram code blocks in the document are
missing a language specifier; update each diagram's opening fence (the ``` line)
to include a plain text language identifier (for example change ``` to ```text
or ```plaintext) so the markdown linter recognizes them as text blocks—apply
this change to every ASCII diagram block (e.g., the triple-backtick blocks
containing the box-art diagrams) throughout the file.
In `@src/auth/ssh.rs`:
- Around line 50-57: Extract the duplicated array of SSH filenames into a single
module-level constant (e.g., const SSH_KEY_FILES: &[&str] =
&["id_rsa","id_ed25519","id_ecdsa","id_dsa","github_rsa","github_ed25519"]);
then replace the local `key_files` arrays in both has_ssh_keys() and
get_ssh_key_files() with references to SSH_KEY_FILES so both functions use the
same source of truth and duplication is removed.
- Around line 42-45: The code currently reads the HOME env var with
std::env::var and returns false if missing, which fails on Windows where
USERPROFILE is used; update the SSH key detection to use a cross-platform home
lookup (try std::env::var("HOME") then std::env::var("USERPROFILE"), or better
use a platform helper like dirs::home_dir()) to populate the local variable
currently named `home` and apply the same change where `home` is obtained again
(the other occurrence around the second block). Ensure the logic preserves the
existing failure behavior only if no home directory can be resolved.
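The deduplication might look like this sketch. Taking `ssh_dir` as a parameter (rather than resolving the home directory inside each function) also sidesteps the cross-platform home lookup flagged in the second bullet above:

```rust
use std::path::{Path, PathBuf};

/// Single source of truth for the key filenames listed above, so
/// has_ssh_keys() and get_ssh_key_files() cannot drift apart.
const SSH_KEY_FILES: &[&str] = &[
    "id_rsa",
    "id_ed25519",
    "id_ecdsa",
    "id_dsa",
    "github_rsa",
    "github_ed25519",
];

/// Candidate private-key paths under a given ~/.ssh directory.
fn ssh_key_candidates(ssh_dir: &Path) -> Vec<PathBuf> {
    SSH_KEY_FILES.iter().map(|name| ssh_dir.join(name)).collect()
}

/// True if any of the well-known key files exists in ssh_dir.
fn has_ssh_keys(ssh_dir: &Path) -> bool {
    ssh_key_candidates(ssh_dir).iter().any(|p| p.exists())
}
```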
In `@src/discovery/planning.rs`:
- Around line 100-157: The recursive scanner scan_dir should skip symbolic links
to avoid symlink cycles: before treating entry_path as a directory or recursing,
call std::fs::symlink_metadata on the entry (use the existing entry or
entry_path) and check file_type().is_symlink(), returning/continuing if true;
keep the current is_dir() check and recursion into scan_dir (with
current_depth/current_depth+1 and max_depth) but do not follow symlinks — this
prevents loops while preserving the existing repo detection logic in scan_dir.
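A self-contained sketch of the symlink-safe recursion; the repo-detection logic is omitted and only the traversal skeleton is shown:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Recursively collect directories, never following symlinks.
/// symlink_metadata inspects the link itself (not its target), so a
/// symlinked directory is detected and skipped before recursing,
/// which prevents cycles.
fn scan_dir(dir: &Path, current_depth: usize, max_depth: usize, found: &mut Vec<PathBuf>) {
    if current_depth > max_depth {
        return;
    }
    let Ok(entries) = fs::read_dir(dir) else { return };
    for entry in entries.flatten() {
        let path = entry.path();
        if let Ok(meta) = fs::symlink_metadata(&path) {
            // Skip symlinks entirely; only recurse into real directories.
            if meta.file_type().is_symlink() {
                continue;
            }
            if meta.is_dir() {
                found.push(path.clone());
                scan_dir(&path, current_depth + 1, max_depth, found);
            }
        }
    }
}
```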
In `@src/sync/manager.rs`:
- Around line 179-182: Clamp the semaphore permit count when creating it to
avoid a zero-capacity semaphore if SyncManagerOptions.concurrency was set to 0
directly; in the block that constructs `semaphore =
Arc::new(Semaphore::new(self.options.concurrency))` replace the raw use of
`self.options.concurrency` with a clamped value (e.g. max(1, ...)) so that
`Semaphore::new` is always called with at least 1, preserving the existing
`with_concurrency()` behavior as a primary clamp but adding a defense-in-depth
check at semaphore creation.
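The defense-in-depth clamp can be sketched as below; the options struct is a simplified stand-in for the real `SyncManagerOptions`:

```rust
struct SyncManagerOptions {
    concurrency: usize,
}

impl SyncManagerOptions {
    /// Primary clamp, as with_concurrency() already does.
    fn with_concurrency(mut self, n: usize) -> Self {
        self.concurrency = n.max(1);
        self
    }

    /// Defense-in-depth: the permit count handed to Semaphore::new is
    /// clamped again, so a directly-set concurrency of 0 can never yield
    /// a zero-capacity semaphore that no task could ever acquire.
    fn semaphore_permits(&self) -> usize {
        self.concurrency.max(1)
    }
}
```

The manager would then construct `Arc::new(Semaphore::new(self.options.semaphore_permits()))` instead of using the raw field.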
```toml
# Or in config.toml (not recommended for shared configs)
[auth]
token = "ghp_xxxxxxxxxxxx"
```
Strengthen the security warning about token storage in config.toml.
The comment "(not recommended for shared configs)" understates the risk. Since line 255 indicates config.toml is stored in the "Project directory," there's a significant risk of accidentally committing tokens to version control. Based on learnings, this is a critical security concern.
📝 Suggested stronger warning
```diff
-# Or in config.toml (not recommended for shared configs)
+# Or in config.toml (not recommended - high risk of committing secrets to git)
 [auth]
 token = "ghp_xxxxxxxxxxxx"
+# IMPORTANT: Add config.toml to .gitignore if storing tokens here
```

Based on learnings: "Never commit secrets. Do not rely on .env defaults; use real env vars in CI."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/specs/Gisa-S3-github-api-access.md` around lines 129 - 133, Replace the
weak note "(not recommended for shared configs)" next to the config.toml example
with a clear, strong security warning: explicitly state that storing tokens in
config.toml (Project directory) risks accidental commits to version control and
credential leakage, instruct users to never store secrets in repository-tracked
files, recommend using environment variables (real CI env vars, not .env
defaults) or a secrets manager for the [auth].token, and add a short remediation
step telling users how to rotate/delete tokens if accidentally committed.
| Method | Storage | Managed by |
| --- | --- | --- |
| `gh` CLI (recommended) | OS keychain (macOS Keychain, Windows Credential Manager, Linux secret-service) | GitHub CLI |
| Environment variable | Shell session / CI secrets | User / CI system |
| `.gisarc` config | Project directory | User (not recommended) |
| `config.toml` | Project directory | User (not recommended) |
Project directory location for config.toml increases accidental commit risk.
Line 255 documents that config.toml is stored in the "Project directory," which significantly increases the risk of accidentally committing tokens to version control. This makes the security warnings throughout the document even more critical.
Consider either:
- Moving `config.toml` to `~/.config/git-same/` alongside the cache (user-global)
- Providing very prominent warnings in the documentation and UX that project-directory config files should be added to `.gitignore` immediately
- Implementing runtime warnings when tokens are detected in project-directory config files
Based on learnings: "Never commit secrets" is a critical security requirement that should be enforced through both documentation and tooling.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/specs/Gisa-S3-github-api-access.md` at line 255, The docs currently
state that config.toml is stored in the "Project directory", which raises
accidental-commit risk; update the specification and UX to either (1) change the
default storage location from project directory to a user-global path (e.g.,
~/.config/git-same/ alongside the cache) so tokens are not kept per-repo, (2)
add a prominent, bold security note wherever "config.toml" and "Project
directory" are referenced instructing users to add project-level config.toml to
.gitignore immediately, and (3) add a runtime check that scans for tokens in any
project-directory config.toml and emits a clear warning/error at startup; update
any documentation strings and CLI/UX text that mention "config.toml" or "Project
directory" to reflect the new default and warning behavior.
```diff
 1. **Never log tokens** — Mask in debug output
 2. **Prefer `gh` CLI** — It handles secure storage
-3. **Warn about `.gisarc` tokens** — Suggest `.gitignore`
+3. **Warn about `config.toml` tokens** — Suggest `.gitignore`
 4. **Minimal scopes** — Request only `repo` and `read:org`
```
🛠️ Refactor suggestion | 🟠 Major
Strengthen the security guidance about config.toml token storage.
The security considerations are appropriate but given that config.toml is stored in the project directory (line 255), the warning about token commits should be more prominent and actionable. Consider:
- Moving the config.toml warning higher in the list or to its own subsection
- Providing specific .gitignore patterns
- Recommending runtime warnings in the tool itself when tokens are detected in project configs
📋 Suggested security section enhancement
## Security Considerations
1. **Never log tokens** — Mask in debug output
2. **Prefer `gh` CLI** — It handles secure storage
3. **⚠️ WARNING: config.toml in project directory** — If storing tokens in config.toml:
- Add `config.toml` to `.gitignore` immediately
- Never commit the file to version control
- Consider using `~/.config/git-same/config.toml` instead for user-global settings
- The tool should warn at runtime if tokens are detected in project-level config
4. **Minimal scopes** — Request only `repo` and `read:org`
5. **Token rotation** — Support for short-lived tokens via `gh`

Based on learnings: "Never commit secrets" is a critical security requirement that requires prominent documentation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/specs/Gisa-S3-github-api-access.md` around lines 276 - 279, Prominently
surface the config.toml token warning by moving it above other items or into its
own "⚠️ WARNING: config.toml in project directory" subsection in the Security
Considerations, then add explicit actionable guidance: instruct users to add
config.toml to .gitignore (provide example patterns such as ignoring config.toml
and any env or secret files), recommend using a user-global path like
~/.config/git-same/config.toml instead of a project-local file, and add a
runtime safeguard in the tool to detect and emit a clear warning when tokens are
found in a project-level config file (implement token-detection where config
parsing happens and log a high-visibility warning).
src/discovery/orchestrator.rs
Outdated
```rust
/// Computes the local path for a repository.
pub fn compute_path(&self, base_path: &Path, repo: &OwnedRepo, provider: &str) -> PathBuf {
    let path_str = self
        .structure
        .replace("{provider}", provider)
        .replace("{org}", &repo.owner)
        .replace("{repo}", &repo.repo.name);

    base_path.join(path_str)
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, understand the file structure
git ls-files src/discovery/ src/provider/ src/types/ 2>/dev/null | head -20
```

Repository: ZAAI-com/git-same

🏁 Script executed:

```shell
# Search for compute_path usage and related sanitization
rg -n "compute_path|sanitize|escape" --type rust src/
```

🏁 Script executed:

```shell
# Examine OwnedRepo definition and related types
rg -n "struct OwnedRepo|struct Repo" --type rust src/
```

🏁 Script executed:

```shell
# Check where provider, org, and repo values come from
rg -n "compute_path\(" --type rust -B 2 -A 2 src/
```

🏁 Script executed:

```shell
# Examine OwnedRepo struct definition
cat -n src/types/repo.rs | head -120
```

🏁 Script executed:

```shell
# Check how OwnedRepo is created/populated (look for new instances)
rg -n "OwnedRepo\s*{" --type rust -A 3 src/ | head -40
```

🏁 Script executed:

```shell
# Examine tests to understand what data is injected
cat -n src/discovery/mod.rs | grep -A 15 "test_compute_path"
```

🏁 Script executed:

```shell
# Check the clone/parallel.rs compute_path as well to see if there's a pattern
sed -n '150,165p' src/clone/parallel.rs
```

🏁 Script executed:

```shell
# Check how repos are fetched from APIs - look at GitHub client
cat -n src/provider/github/client.rs | head -50
```

🏁 Script executed:

```shell
# Check for any validation on repo/owner fields during deserialization
rg -n "full_name|owner" --type rust src/types/repo.rs | head -20
```

🏁 Script executed:

```shell
# Now let's verify PathBuf::join behavior with a quick test
python3 << 'EOF'
# Rust's PathBuf::join behavior:
# When you use path.join("../escape"), it treats the string as relative path components
# However, ".." is a legitimate path component in Unix/Windows that means "parent directory"
# So base_path.join("../escape") would indeed resolve to the parent of base_path
# Let's understand the actual behavior:
from pathlib import Path

# Test case 1: Simple join
base = Path("/base")
result = base / "org/repo"
print(f"base.join('org/repo') = {result}")

# Test case 2: With path traversal
base = Path("/base")
result = base / "../escape"
print(f"base.join('../escape') = {result}")

# Test case 3: What actually happens - if user provides ".."
base = Path("/base")
# If the string contains "..", Path will interpret it as a path component
result = base / "org/../../../etc"
print(f"base.join('org/../../../etc') = {result}")

# The key question: Does join normalize the path?
result_resolved = result.resolve()
print(f"After resolve: {result_resolved}")

# In Rust, PathBuf::join does NOT normalize - it just appends
# So "/base".join("../escape") = "/base/../escape" (not resolved)
# However, when used with actual filesystem operations, the OS resolves it
EOF
```

🏁 Script executed:

```shell
# Check how the computed path is actually used (is it resolved?)
rg -n "compute_path\(" --type rust -A 5 src/ | head -60
```

🏁 Script executed:

```shell
# Check if there's any validation when creating repos from API responses
rg -n "deserialize|from_" --type rust src/provider/github/ | head -20
```

🏁 Script executed:

```shell
# Check GitHub API response parsing for repos
sed -n '100,200p' src/provider/github/client.rs
```

🏁 Script executed:

```shell
# Look for the actual repo fetch/parse code
rg -n "Repo\|OwnedRepo" --type rust -B 2 -A 2 src/provider/github/client.rs | head -50
```
Sanitize path components to prevent potential path traversal.
Provider, org, and repo names are injected into path strings without sanitization. While GitHub/GitLab APIs enforce naming rules that prevent .. and path separators, explicit validation would strengthen the code against future API changes or when reused with less-constrained providers.
Sanitization example:

```diff
+        let sanitize = |s: &str| s.replace(['/', '\\'], "_").replace("..", "__");
         let path_str = self
             .structure
-            .replace("{provider}", provider)
-            .replace("{org}", &repo.owner)
-            .replace("{repo}", &repo.repo.name);
+            .replace("{provider}", &sanitize(provider))
+            .replace("{org}", &sanitize(&repo.owner))
+            .replace("{repo}", &sanitize(&repo.repo.name));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/discovery/orchestrator.rs` around lines 33 - 41, compute_path currently
injects provider, repo.owner and repo.repo.name directly into the path template;
add explicit sanitization to prevent path traversal or path-separator characters
by creating a helper (e.g., sanitize_component) that validates and/or normalizes
a component (strip or replace sequences like "..", remove path separators '/'
and '\', trim whitespace, and fallback to a safe placeholder if empty) and call
it from compute_path on provider, &repo.owner and &repo.repo.name before
performing the .replace calls on self.structure so the returned PathBuf cannot
escape the intended base_path.
Actionable comments posted: 13
♻️ Duplicate comments (12)
src/auth/mod.rs (1)
304-335: Test environment variable manipulation may cause flakiness in parallel runs.

This concern was previously raised. The test modifies global environment variables which could interfere with other tests running in parallel.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/mod.rs` around lines 304 - 335, The test test_get_auth_with_config_token mutates global environment variables (GITHUB_TOKEN, GH_TOKEN, GISA_TOKEN) which makes it flaky when tests run in parallel; fix by avoiding global side-effects: either (a) wrap the test with a serial execution marker like #[serial] (from serial_test) to force exclusive execution, or (b) replace the manual env hacking with a scoped/env-guard helper (e.g. temp_env::with_vars or implement a RAII guard) so get_auth is invoked with temporary env values and the original variables are reliably restored in a drop/finally guard; update the test_get_auth_with_config_token to use one of these approaches and remove unsafe global removals and manual restores.src/commands/status.rs (1)
45-61: Summary counts ignore filters (previously flagged).

The `dirty_count` and `behind_count` are incremented before the filter checks, so the summary can report counts that include repositories that weren't displayed due to `--dirty`, `--behind`, or `--org` filters. This was noted in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/status.rs` around lines 45 - 61, The summary counters dirty_count and behind_count are incremented before the filter checks, so filtered-out repos are still counted; update the logic in the loop (the block using is_dirty, is_behind, args.dirty, args.behind, args.org and org) so that you apply the filters first and only increment dirty_count and behind_count for repositories that pass the filters (or move the increments to after the existing args.dirty/args.behind/args.org checks); ensure the conditions still use is_dirty/is_behind and args to decide whether to continue before updating the counts.src/commands/clone.rs (3)
123-124: ⚠️ Potential issue | 🟠 Major

Avoid hard-coding `"github"` for planning/cloning. This misplaces repos for GitHub Enterprise or custom provider names. Use a derived `provider_name` from the provider entry instead.

Also applies to: 186-187
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/clone.rs` around lines 123 - 124, The call to orchestrator.plan_clone currently hard-codes the provider string ("github"); change it to derive and pass the actual provider name from the provider entry (e.g., a provider_name variable taken from the provider struct you already have) so that plan_clone(&base_path, repos, provider_name, &git) uses the correct provider for GitHub Enterprise or custom providers; update the other occurrence likewise (the similar call around the later cloning logic) to use the same derived provider_name instead of the literal "github".
62-74: ⚠️ Potential issue | 🟠 Major

Cache bypasses current filters. Cached repos are used as-is, so `--org`, `include_archived`, and `include_forks` aren't applied. Filter cached repos (or invalidate cache when filters differ) before using them.

Also applies to: 78-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/clone.rs` around lines 62 - 74, The cached discovery is being used as-is in the block guarded by use_cache && !force_refresh (CacheManager::new / cache_manager.load) which bypasses current filters like --org, include_archived and include_forks; update the logic in clone.rs so that before extending repos from cache.repos into repos you apply the same filtering pipeline used for live discovery (filter by org, skip archived if include_archived is false, skip forks if include_forks is false), or alternatively store and compare cache metadata for those filter flags and invalidate/force-refresh the cache when they differ; ensure you reference and reuse the same filter functions/conditions used elsewhere in this file so cached entries are consistent with runtime flags.
24-30: ⚠️ Potential issue | 🟠 Major

Provider-specific auth/SSH preferences are ignored. `get_auth(None)` skips provider-configured tokens, and `with_ssh(!args.https)` ignores `prefer_ssh`. This breaks provider-specific auth and user preference resolution.

Also applies to: 174-179
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/clone.rs` around lines 24 - 30, The code calls get_auth(None) and uses with_ssh(!args.https) which ignores provider-configured tokens and the user's prefer_ssh preference; change the auth resolution to pass the resolved provider and preference into get_auth and to compute the SSH flag from provider/user preference (e.g., use get_auth(Some(provider_name)) or build an AuthOptions struct containing provider token and prefer_ssh, and then call get_auth(options)); replace with_ssh(!args.https) with logic that respects args.prefer_ssh (if present), otherwise falls back to the provider's prefer_ssh configuration and only then to !args.https so provider-specific credentials and prefer_ssh are honored (apply same change at both call sites around get_auth/with_ssh).src/tui/backend.rs (2)
298-299: ⚠️ Potential issue | 🟠 Major

Don't hard-code `"github"` when planning/cloning/syncing. This breaks GitHub Enterprise/custom provider names and `{provider}` path templates. Derive a `provider_name` from the selected provider entry and reuse it everywhere.

🛠️ Suggested fix

```diff
-        let plan = orchestrator.plan_clone(&base_path, repos, "github", &git);
+        let provider_name = provider_entry
+            .name
+            .clone()
+            .unwrap_or_else(|| provider_entry.kind.to_string());
+        let plan = orchestrator.plan_clone(&base_path, repos, &provider_name, &git);
@@
-        let (summary, _results) = manager
-            .clone_repos(&base_path, plan.to_clone, "github", progress)
+        let (summary, _results) = manager
+            .clone_repos(&base_path, plan.to_clone, &provider_name, progress)
             .await;
@@
-        let (to_sync, _skipped) = orchestrator.plan_sync(&base_path, repos, "github", &git, true);
+        let (to_sync, _skipped) = orchestrator.plan_sync(&base_path, repos, &provider_name, &git, true);
```

Also applies to: 340-341, 436-437
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/backend.rs` around lines 298 - 299, The call to orchestrator.plan_clone currently hard-codes "github" as the provider; instead, derive a provider_name from the selected provider entry (the structure used to choose providers) and pass that variable into orchestrator.plan_clone (and the matching clone/sync calls) in place of the string literal. Locate the places that call orchestrator.plan_clone and similar methods (the current call using base_path, repos, &git) and replace the literal "github" with provider_name obtained from the selected provider object, then reuse that provider_name for the other occurrences (the other plan/clone/sync calls) so custom/GHE provider names and {provider} templates work correctly.
233-236: ⚠️ Potential issue | 🟠 Major

TUI backend ignores provider-specific auth and SSH preference. `get_auth(None)` bypasses provider-configured tokens, and `with_ssh(true)` hard-codes SSH even when the provider or user prefers HTTPS. This can make TUI unusable for config-token-only setups or HTTPS-only environments.

Also applies to: 331-336, 372-376
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/backend.rs` around lines 233 - 236, The TUI currently calls get_auth(None) and forces with_ssh(true), which ignores provider-configured tokens and SSH/HTTPS preferences; update the auth acquisition to pass the provider/context (e.g., call get_auth(Some(provider_id) or pass the repository/provider config) so provider tokens are honored and ensure with_ssh is driven by the provider or user preference (e.g., provider.ssh_preference or repo_config.use_ssh) rather than hard-coded true, and add a fallback path that supports token-only authentication for HTTPS-only environments; change occurrences of get_auth(None) and with_ssh(true) (seen around the auth handling and repo clone/setup code) accordingly so provider tokens and SSH settings are respected.src/config/parser.rs (1)
1-3: Config filename/path docs are inconsistent.

The module doc says `gisa.config.toml`, while `Config::default_path` builds `~/.config/git-same/config.toml`. This was flagged in a previous review.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/parser.rs` around lines 1 - 3, The module doc and the code disagree on the config filename/path: the top-of-file documentation says "gisa.config.toml" while the code (Config::default_path) constructs "~/.config/git-same/config.toml"; update them to match by either changing the module doc to document "~/.config/git-same/config.toml" (preferred) or changing Config::default_path to return "gisa.config.toml" — ensure the module-level comment and the Config::default_path implementation use the same filename/path and mention the chosen path format (home/config directory vs project file) consistently.src/cache.rs (1)
148-154: HOME fallback may not work on Windows.

The fallback path resolution uses the `HOME` environment variable, which is typically not set on Windows (Windows uses `USERPROFILE` instead). This issue was flagged in a previous review and appears to still be present in the non-macOS branch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache.rs` around lines 148 - 154, The non-macOS fallback uses std::env::var("HOME") which fails on Windows; update the config_dir resolution in src/cache.rs (the block assigning config_dir using directories::ProjectDirs::from) to use a cross-platform home lookup instead (e.g., dirs::home_dir() or check USERPROFILE if HOME is missing) when ProjectDirs::from returns None, then build the ".config/git-same" path from that home path; ensure the code still returns a proper error if no home directory can be determined.src/operations/clone.rs (1)
83-160: ⚠️ Potential issue | 🟡 Minor

Direct field access can bypass concurrency clamping. Same issue as in `sync.rs`: the `concurrency` field is `pub`, allowing direct assignment that bypasses `with_concurrency` clamping. The past review flagged this for the semaphore creation site.

🛡️ Suggested fix: clamp at semaphore creation

```diff
     pub async fn clone_repos(
         &self,
         base_path: &Path,
         repos: Vec<OwnedRepo>,
         provider: &str,
         progress: Arc<dyn CloneProgress>,
     ) -> (OpSummary, Vec<CloneResult>) {
         let total = repos.len();
-        let semaphore = Arc::new(Semaphore::new(self.options.concurrency));
+        let concurrency = self.options.concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        let semaphore = Arc::new(Semaphore::new(concurrency));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/clone.rs` around lines 83 - 160, The public concurrency field on CloneManagerOptions allows callers to bypass with_concurrency clamping; make concurrency private (remove pub from the concurrency field) and/or enforce clamping at the point you create the semaphore (where you construct the semaphore for the clone manager) by using CloneManagerOptions::check_concurrency_cap or concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY) to derive the effective value; update any call sites that access .concurrency directly to use a public accessor or the with_concurrency builder so the semaphore is always created with a clamped value (refer to CloneManagerOptions, with_concurrency, check_concurrency_cap, concurrency, MIN_CONCURRENCY, MAX_CONCURRENCY and the semaphore creation site).src/cli.rs (1)
64-74: ⚠️ Potential issue | 🟡 Minor

Init help text path mismatch. The help text references `~/.config/gisa/gisa.config.toml`, which previously differed from `Config::default_path`. Please align these to avoid misleading users.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli.rs` around lines 64 - 74, The help text for InitArgs.path currently hardcodes "~/.config/gisa/gisa.config.toml" which can diverge from Config::default_path; update the docstring on the path field (in the InitArgs struct) to either reference Config::default_path or match its value exactly (e.g., replace the hardcoded path with wording like "Path for config file (default: same as Config::default_path)") so the help output and Config::default_path remain consistent; ensure the change is made in the InitArgs definition so CLI help shows the corrected default path text.

src/lib.rs (1)
7-10: ⚠️ Potential issue | 🟡 Minor

Docs overstate provider support.
The feature list claims GitLab/Bitbucket support, but provider creation currently returns “NotImplemented” for those kinds. Please align docs with actual support or mark as “coming soon.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib.rs` around lines 7 - 10, The crate docs under the top-level documentation block (the "//! ## Features" section) overclaim support for GitLab/Bitbucket while the provider creation path currently returns NotImplemented for those kinds; update the docs to reflect actual support by either removing GitLab/Bitbucket from the "Multi-Provider Support" bullet or changing it to "GitLab/Bitbucket coming soon", and ensure the doc change matches the implementation in the provider factory (the code path that returns NotImplemented for provider creation) so the public docs and the provider creation logic are consistent.
🧹 Nitpick comments (6)
src/tui/screens/command_picker.rs (1)
14-19: Consider exposing COMMANDS length for bounds checking.

The `COMMANDS` array is private. If the handler needs to wrap `picker_index`, it would need to know the length. Consider making this accessible or adding a helper.

💡 Optional: expose command count

```diff
 const COMMANDS: &[(&str, &str)] = &[
     ("Clone", "Clone all new repositories"),
     ("Fetch", "Fetch updates (safe, no working tree changes)"),
     ("Pull", "Pull updates (modifies working tree)"),
     ("Status", "Show repository status"),
 ];
+
+/// Number of available commands (for index bounds checking).
+pub const COMMAND_COUNT: usize = COMMANDS.len();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/screens/command_picker.rs` around lines 14 - 19, Make the COMMANDS length accessible so callers (e.g., wrapping logic using picker_index) can bounds-check and wrap safely: add a small public accessor such as a pub fn commands_len() -> usize that returns COMMANDS.len() or a pub const COMMANDS_LEN: usize (via an explicit constant) and use that from handlers instead of hardcoding lengths; reference the existing COMMANDS and picker_index when adding the helper so callers can replace magic numbers with the new accessor.

src/tui/app.rs (1)
72-137: Consider bounding `log_lines` to prevent unbounded memory growth.

The `log_lines: Vec<String>` field has no size limit. In long-running sessions with many operations, this could grow indefinitely. Consider using a bounded buffer (e.g., `VecDeque` with max capacity).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/app.rs` around lines 72 - 137, The App struct's log_lines field is unbounded and can grow indefinitely; change it from Vec<String> to a bounded buffer (e.g., std::collections::VecDeque<String>) and introduce a MAX_LOG_LINES constant, update any places that append logs (references to App::log_lines usage) to push_back new entries and pop_front when capacity is exceeded so the deque never grows beyond MAX_LOG_LINES, and update any code that iterates/collects log_lines to handle VecDeque (or provide a getter that returns a Vec snapshot) so external consumers keep the same semantics.

src/tui/event.rs (2)
63-83: Silent read errors may hide terminal issues.

When `event::read()` fails, the error is silently discarded with `if let Ok(ev)`. While this prevents crashes, it could mask persistent terminal issues. Consider logging read errors at debug level for troubleshooting.

♻️ Suggested improvement

```diff
         if event::poll(tick_rate).unwrap_or(false) {
-            if let Ok(ev) = event::read() {
+            match event::read() {
+                Ok(ev) => {
                     let app_event = match ev {
                         CtEvent::Key(key) => AppEvent::Terminal(key),
                         CtEvent::Resize(w, h) => AppEvent::Resize(w, h),
                         _ => continue,
                     };
                     if event_tx.send(app_event).is_err() {
                         break;
                     }
+                }
+                Err(_) => {
+                    // Terminal read error - continue to allow recovery
+                    continue;
+                }
             }
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/event.rs` around lines 63 - 83, The loop inside tokio::task::spawn_blocking silently ignores errors from event::read(); instead, detect the Err case and emit a debug-level log containing the error to aid troubleshooting. Update the closure around event::read() (the match that maps CtEvent -> AppEvent and uses event_tx) to handle Err(e) by calling your logger (e.g., debug! or processLogger.debug) with context like "terminal read error" and the error value, but continue the loop as before; keep existing behavior for successful reads, resize and tick handling, and the event_tx error checks.
85-85: Unconventional return order: (receiver, sender).

The typical Rust convention is to return `(sender, receiver)` from channel creation functions (matching `mpsc::channel()`). This reversed order may confuse callers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/event.rs` at line 85, The function currently returns the channel pair in reversed order as (rx, tx); change the return tuple to (tx, rx) so it follows Rust convention (sender, receiver). Update the function's signature/return documentation and all call sites that destructure or expect (rx, tx) to the new order (tx, rx), adjusting variable names (tx, rx) accordingly to avoid logic errors; run tests to ensure no remaining mismatches.

src/commands/mod.rs (1)
64-69: Non-UTF8 paths will be lossy-converted.

`to_string_lossy()` replaces invalid UTF-8 sequences with `�`, which could silently corrupt paths containing non-UTF8 characters on some systems. Consider using `OsStr`-aware expansion or documenting this limitation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/mod.rs` around lines 64 - 69, expand_path currently uses to_string_lossy which silently corrupts non-UTF8 paths; change expand_path to return Result<PathBuf, Error> (or Option<PathBuf>) and first call path.to_str().ok_or(...) to detect non-UTF8, then pass the &str to shellexpand::tilde and build the PathBuf from the expansion; alternatively, if you must keep a fallible signature, document the UTF-8-only limitation in expand_path's doc comment and avoid to_string_lossy.

src/operations/clone.rs (1)
177-187: Duplicate logic: `compute_path` exists in `src/discovery.rs`.

This method duplicates the path computation logic from `src/discovery.rs` (lines 47-55). Consider extracting to a shared utility to maintain consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/clone.rs` around lines 177 - 187, The compute_path implementation in clone.rs duplicates logic already in discovery.rs; extract the shared logic into a single helper (e.g., a function like repo_local_path/base_path_for_repo) that accepts base_path: &Path, repo: &OwnedRepo, provider: &str and the structure string (options.structure) and returns a PathBuf, then replace the compute_path bodies in both clone.rs (method compute_path) and discovery.rs (the existing compute_path) to call this new helper so both use the same implementation and avoid drift.
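A shared helper along these lines could replace both private copies. This is only a sketch: the `RepoRef` struct stands in for the project's `OwnedRepo` (its field names are assumptions), and the `{provider}/{org}/{repo}` template placeholders are assumed from the structure option mentioned above.

```rust
use std::path::{Path, PathBuf};

// Hypothetical stand-in for the project's OwnedRepo; field names are assumed.
struct RepoRef {
    org: String,
    name: String,
}

/// Single source of truth for where a repository lands on disk. Both
/// clone.rs and discovery.rs would call this instead of keeping private
/// compute_path copies. `structure` is assumed to be a template such as
/// "{provider}/{org}/{repo}".
fn repo_local_path(base: &Path, repo: &RepoRef, provider: &str, structure: &str) -> PathBuf {
    let rel = structure
        .replace("{provider}", provider)
        .replace("{org}", &repo.org)
        .replace("{repo}", &repo.name);
    base.join(rel)
}
```

With both call sites delegating to one function, any change to the layout template applies everywhere at once and the two code paths cannot drift.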
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Cargo.toml`:
- Around line 77-78: Update the Cargo.toml dependency for crossterm to ensure
compatibility with ratatui 0.29 by tightening the version requirement: change
the crossterm entry currently declared as `crossterm = { version = "0.28",
optional = true }` to require at least 0.28.1 (e.g., `crossterm = { version =
"0.28.1", optional = true }`) so ratatui (version 0.29) has the minimum
crossterm version it needs.
In `@src/cache.rs`:
- Around line 58-61: Replace the panicking unwrap on SystemTime with a
non-panicking fallback: change occurrences of
SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs() (and the same
patterns at the other noted sites) to handle the Err case, e.g.
SystemTime::now().duration_since(UNIX_EPOCH).map(|d| d.as_secs()).unwrap_or(0)
or SystemTime::now().duration_since(UNIX_EPOCH).unwrap_or_default().as_secs();
update each occurrence (the let now = ... instances) so the code uses the safe
alternative instead of unwrap().
In `@src/config/parser.rs`:
- Around line 138-153: The current default_path() uses
std::env::var("HOME").unwrap_or_else(|_| ".".to_string()) which can create
config files in the current working directory; replace that "." fallback with a
predictable behavior: use dirs::home_dir() (or
directories::BaseDirs::new().map(|b| b.home_dir().to_path_buf())) to obtain the
user's home directory and, if that returns None, explicitly fail with a clear
error (panic or return a Result if you prefer to change the signature) instead
of using "."; update the branches that set config_dir in default_path() to use
this new home-dir lookup so config_dir and the final config path are always
user-home-based or fail fast.
- Around line 179-203: The validator in validate() uses a local MAX_CONCURRENCY
(32) that diverges from the real cap used by CloneManagerOptions and
SyncManagerOptions (16); refactor by moving the true MAX_CONCURRENCY into a
shared constant (e.g., pub const MAX_CONCURRENCY) in a common module and import
that constant into parser.rs, then replace the local MAX_CONCURRENCY with the
shared MAX_CONCURRENCY in the validate() method; ensure CloneManagerOptions and
SyncManagerOptions and the warn_if_concurrency_capped() helper also import and
use the same shared constant so all paths enforce the same limit.
In `@src/discovery.rs`:
- Around line 88-126: In plan_sync, when skip_dirty is true and calling
git.status(&local_path) returns an Err, treat that as a skip instead of
proceeding to sync: catch the Err from GitOperations::status, push (repo,
format!("failed to get status: {}", err)) onto skipped (same tuple type used
elsewhere) and continue the loop; keep the existing behavior for Ok(status)
checks and still create LocalRepo::new(repo, local_path) only when status
succeeded or skip_dirty is false. Ensure you reference plan_sync,
GitOperations::status, LocalRepo::new and compute_path when locating the change.
In `@src/operations/sync.rs`:
- Around line 148-157: The pub concurrency field on SyncManagerOptions allows
bypassing with_concurrency's clamp and can create a zero-semaphore deadlock; to
fix, ensure you clamp the value when constructing the semaphore (e.g., in
SyncManager::new or wherever the semaphore is created) by using something like
max(1, options.concurrency) (or enforce a min/max range) before building the
semaphore, and optionally make SyncManagerOptions::concurrency private or only
settable via with_concurrency to prevent direct external mutation; update
references to SyncManagerOptions, with_concurrency, and SyncManager::new (or the
semaphore creation site) accordingly.
In `@src/provider/github/pagination.rs`:
- Around line 68-80: The calculate_wait_time function currently calls .unwrap()
on SystemTime::now().duration_since(UNIX_EPOCH), which can panic if the system
clock is before the epoch; change this to handle the Result safely (e.g., use
.ok() or match on duration_since to get now_secs or return None on Err) and then
compute the wait Duration only when both reset_timestamp.parse::<u64>() and the
duration_since result succeed (you can use now_secs.saturating_sub(reset_secs)
or ensure reset_secs > now_secs before creating Duration); update
calculate_wait_time to return None on any time-parsing or clock errors instead
of unwrapping.
In `@src/tui/backend.rs`:
- Around line 64-83: The on_start methods in TuiCloneProgress
(TuiCloneProgress::on_start) and TuiSyncProgress (TuiSyncProgress::on_start) are
emitting BackendMessage::RepoProgress which causes double-counting; remove the
RepoProgress send from both on_start implementations so only on_complete (and
error/skip paths) emit RepoProgress, or alternatively introduce a distinct
BackendMessage::RepoStarted variant and update the event handler if you need a
start notification—ensure the handler only increments completed on
BackendMessage::RepoProgress so counters are not inflated.
In `@src/tui/handler.rs`:
- Around line 256-335: The app never transitions to OperationState::Running so
RepoProgress updates are ignored; in handle_backend_message detect when a
RepoProgress arrives while app.operation_state is OperationState::Discovering
(or otherwise not Running) and switch app.operation_state into
OperationState::Running preserving the active operation and existing counters
(or add a current_operation field to App and set it from
OperationState::Discovering before switching), then apply the existing
RepoProgress logic; ensure OperationComplete uses that preserved operation
(refer to handle_backend_message, BackendMessage::RepoProgress,
BackendMessage::OperationComplete, OperationState::Discovering,
OperationState::Running, and the App.operation_state field).
- Around line 206-223: The start_operation function allows a new operation when
app.operation_state is Discovering, causing overlapping backend tasks; update
the initial guard in start_operation to treat both OperationState::Running and
OperationState::Discovering as busy (e.g., replace the
matches!(app.operation_state, OperationState::Running { .. }) check with a check
that returns an error when app.operation_state is either OperationState::Running
{ .. } or OperationState::Discovering { .. }), set app.error_message the same
way and return early so spawn_operation is never called while Discovering.
In `@src/tui/mod.rs`:
- Around line 63-73: The loop does not handle rx.recv().await returning None
(channel closed), which can cause an infinite loop; update the receive branch to
explicitly handle None by breaking the loop or setting app.should_quit = true
instead of proceeding to call handler::handle_event; replace the current if let
Some(event) = rx.recv().await { handler::handle_event(app, event,
backend_tx).await; } with a match or if/else that detects None and exits the
loop (or sets app.should_quit) and only calls handler::handle_event when
Some(event).
In `@src/tui/screens/progress.rs`:
- Around line 139-167: render_log currently always displays the last visible
window of app.log_lines so scrolling (updated in handle_progress_key via
app.scroll_offset) has no effect; change the start calculation to account for
app.scroll_offset by subtracting it from the computed start (e.g., compute start
= total.saturating_sub(visible_height).saturating_sub(app.scroll_offset) and
clamp to 0), then slice app.log_lines[start..] as before; ensure you use the
same visible_height and total variables and keep bounds-safe operations
(saturating_sub) to avoid panics.
- Around line 38-43: The title_text construction naively appends "ing" or other
suffixes to operation names (in the match on app.operation_state /
OperationState::{Running, Finished}), producing malformed words like "Cloneing";
change the match arms to use explicit, human-friendly labels instead of "{}ing"
concatenation — e.g., map the Running { operation } arm to a fixed string like
"Cloning Repositories" or use operation.display()/to_string() without adding
"ing", and change Finished { operation } to "Clone Complete" or similar; update
the title_text match arms (the OperationState::Running and
OperationState::Finished branches) to return those explicit strings.
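Several prompts above (for `src/cache.rs` and `src/provider/github/pagination.rs`) flag the same pattern: `.unwrap()` on `SystemTime::duration_since(UNIX_EPOCH)`. A panic-free version of `calculate_wait_time` might look like the following sketch; the exact signature and the assumption that the header carries Unix seconds are taken from the prompt text, not verified against the code.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Returns how long to wait until the rate limit resets, or None if the
/// reset header cannot be parsed, the system clock is before the epoch,
/// or the reset time is already in the past. No code path can panic.
fn calculate_wait_time(reset_timestamp: &str) -> Option<Duration> {
    let reset_secs: u64 = reset_timestamp.parse().ok()?;
    let now_secs = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
    // Only wait when the reset lies in the future.
    (reset_secs > now_secs).then(|| Duration::from_secs(reset_secs - now_secs))
}
```

The `Option` return lets the retry loop fall back to its default backoff whenever the header is missing or malformed, instead of crashing the whole pagination pass.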
---
Duplicate comments:
In `@src/auth/mod.rs`:
- Around line 304-335: The test test_get_auth_with_config_token mutates global
environment variables (GITHUB_TOKEN, GH_TOKEN, GISA_TOKEN) which makes it flaky
when tests run in parallel; fix by avoiding global side-effects: either (a) wrap
the test with a serial execution marker like #[serial] (from serial_test) to
force exclusive execution, or (b) replace the manual env hacking with a
scoped/env-guard helper (e.g. temp_env::with_vars or implement a RAII guard) so
get_auth is invoked with temporary env values and the original variables are
reliably restored in a drop/finally guard; update the
test_get_auth_with_config_token to use one of these approaches and remove unsafe
global removals and manual restores.
In `@src/cache.rs`:
- Around line 148-154: The non-macOS fallback uses std::env::var("HOME") which
fails on Windows; update the config_dir resolution in src/cache.rs (the block
assigning config_dir using directories::ProjectDirs::from) to use a
cross-platform home lookup instead (e.g., dirs::home_dir() or check USERPROFILE
if HOME is missing) when ProjectDirs::from returns None, then build the
".config/git-same" path from that home path; ensure the code still returns a
proper error if no home directory can be determined.
In `@src/cli.rs`:
- Around line 64-74: The help text for InitArgs.path currently hardcodes
"~/.config/gisa/gisa.config.toml" which can diverge from Config::default_path;
update the docstring on the path field (in the InitArgs struct) to either
reference Config::default_path or match its value exactly (e.g., replace the
hardcoded path with wording like "Path for config file (default: same as
Config::default_path)") so the help output and Config::default_path remain
consistent; ensure the change is made in the InitArgs definition so CLI help
shows the corrected default path text.
In `@src/commands/clone.rs`:
- Around line 123-124: The call to orchestrator.plan_clone currently hard-codes
the provider string ("github"); change it to derive and pass the actual provider
name from the provider entry (e.g., a provider_name variable taken from the
provider struct you already have) so that plan_clone(&base_path, repos,
provider_name, &git) uses the correct provider for GitHub Enterprise or custom
providers; update the other occurrence likewise (the similar call around the
later cloning logic) to use the same derived provider_name instead of the
literal "github".
- Around line 62-74: The cached discovery is being used as-is in the block
guarded by use_cache && !force_refresh (CacheManager::new / cache_manager.load)
which bypasses current filters like --org, include_archived and include_forks;
update the logic in clone.rs so that before extending repos from cache.repos
into repos you apply the same filtering pipeline used for live discovery (filter
by org, skip archived if include_archived is false, skip forks if include_forks
is false), or alternatively store and compare cache metadata for those filter
flags and invalidate/force-refresh the cache when they differ; ensure you
reference and reuse the same filter functions/conditions used elsewhere in this
file so cached entries are consistent with runtime flags.
- Around line 24-30: The code calls get_auth(None) and uses
with_ssh(!args.https) which ignores provider-configured tokens and the user's
prefer_ssh preference; change the auth resolution to pass the resolved provider
and preference into get_auth and to compute the SSH flag from provider/user
preference (e.g., use get_auth(Some(provider_name)) or build an AuthOptions
struct containing provider token and prefer_ssh, and then call
get_auth(options)); replace with_ssh(!args.https) with logic that respects
args.prefer_ssh (if present), otherwise falls back to the provider's prefer_ssh
configuration and only then to !args.https so provider-specific credentials and
prefer_ssh are honored (apply same change at both call sites around
get_auth/with_ssh).
In `@src/commands/status.rs`:
- Around line 45-61: The summary counters dirty_count and behind_count are
incremented before the filter checks, so filtered-out repos are still counted;
update the logic in the loop (the block using is_dirty, is_behind, args.dirty,
args.behind, args.org and org) so that you apply the filters first and only
increment dirty_count and behind_count for repositories that pass the filters
(or move the increments to after the existing args.dirty/args.behind/args.org
checks); ensure the conditions still use is_dirty/is_behind and args to decide
whether to continue before updating the counts.
In `@src/config/parser.rs`:
- Around line 1-3: The module doc and the code disagree on the config
filename/path: the top-of-file documentation says "gisa.config.toml" while the
code (Config::default_path) constructs "~/.config/git-same/config.toml"; update
them to match by either changing the module doc to document
"~/.config/git-same/config.toml" (preferred) or changing Config::default_path to
return "gisa.config.toml" — ensure the module-level comment and the
Config::default_path implementation use the same filename/path and mention the
chosen path format (home/config directory vs project file) consistently.
In `@src/lib.rs`:
- Around line 7-10: The crate docs under the top-level documentation block (the
"//! ## Features" section) overclaim support for GitLab/Bitbucket while the
provider creation path currently returns NotImplemented for those kinds; update
the docs to reflect actual support by either removing GitLab/Bitbucket from the
"Multi-Provider Support" bullet or changing it to "GitLab/Bitbucket coming
soon", and ensure the doc change matches the implementation in the provider
factory (the code path that returns NotImplemented for provider creation) so the
public docs and the provider creation logic are consistent.
In `@src/operations/clone.rs`:
- Around line 83-160: The public concurrency field on CloneManagerOptions allows
callers to bypass with_concurrency clamping; make concurrency private (remove
pub from the concurrency field) and/or enforce clamping at the point you create
the semaphore (where you construct the semaphore for the clone manager) by using
CloneManagerOptions::check_concurrency_cap or concurrency.clamp(MIN_CONCURRENCY,
MAX_CONCURRENCY) to derive the effective value; update any call sites that
access .concurrency directly to use a public accessor or the with_concurrency
builder so the semaphore is always created with a clamped value (refer to
CloneManagerOptions, with_concurrency, check_concurrency_cap, concurrency,
MIN_CONCURRENCY, MAX_CONCURRENCY and the semaphore creation site).
In `@src/tui/backend.rs`:
- Around line 298-299: The call to orchestrator.plan_clone currently hard-codes
"github" as the provider; instead, derive a provider_name from the selected
provider entry (the structure used to choose providers) and pass that variable
into orchestrator.plan_clone (and the matching clone/sync calls) in place of the
string literal. Locate the places that call orchestrator.plan_clone and similar
methods (the current call using base_path, repos, &git) and replace the literal
"github" with provider_name obtained from the selected provider object, then
reuse that provider_name for the other occurrences (the other plan/clone/sync
calls) so custom/GHE provider names and {provider} templates work correctly.
- Around line 233-236: The TUI currently calls get_auth(None) and forces
with_ssh(true), which ignores provider-configured tokens and SSH/HTTPS
preferences; update the auth acquisition to pass the provider/context (e.g.,
call get_auth(Some(provider_id) or pass the repository/provider config) so
provider tokens are honored and ensure with_ssh is driven by the provider or
user preference (e.g., provider.ssh_preference or repo_config.use_ssh) rather
than hard-coded true, and add a fallback path that supports token-only
authentication for HTTPS-only environments; change occurrences of get_auth(None)
and with_ssh(true) (seen around the auth handling and repo clone/setup code)
accordingly so provider tokens and SSH settings are respected.
---
Nitpick comments:
In `@src/commands/mod.rs`:
- Around line 64-69: expand_path currently uses to_string_lossy which silently
corrupts non-UTF8 paths; change expand_path to return Result<PathBuf, Error> (or
Option<PathBuf>) and first call path.to_str().ok_or(...) to detect non-UTF8,
then pass the &str to shellexpand::tilde and build the PathBuf from the
expansion; alternatively, if you must keep a fallible signature, document the
UTF-8-only limitation in expand_path's doc comment and avoid to_string_lossy.
In `@src/operations/clone.rs`:
- Around line 177-187: The compute_path implementation in clone.rs duplicates
logic already in discovery.rs; extract the shared logic into a single helper
(e.g., a function like repo_local_path/base_path_for_repo) that accepts
base_path: &Path, repo: &OwnedRepo, provider: &str and the structure string
(options.structure) and returns a PathBuf, then replace the compute_path bodies
in both clone.rs (method compute_path) and discovery.rs (the existing
compute_path) to call this new helper so both use the same implementation and
avoid drift.
In `@src/tui/app.rs`:
- Around line 72-137: The App struct's log_lines field is unbounded and can grow
indefinitely; change it from Vec<String> to a bounded buffer (e.g.,
std::collections::VecDeque<String>) and introduce a MAX_LOG_LINES constant,
update any places that append logs (references to App::log_lines usage) to
push_back new entries and pop_front when capacity is exceeded so the deque never
grows beyond MAX_LOG_LINES, and update any code that iterates/collects log_lines
to handle VecDeque (or provide a getter that returns a Vec snapshot) so external
consumers keep the same semantics.
In `@src/tui/event.rs`:
- Around line 63-83: The loop inside tokio::task::spawn_blocking silently
ignores errors from event::read(); instead, detect the Err case and emit a
debug-level log containing the error to aid troubleshooting. Update the closure
around event::read() (the match that maps CtEvent -> AppEvent and uses event_tx)
to handle Err(e) by calling your logger (e.g., debug! or processLogger.debug)
with context like "terminal read error" and the error value, but continue the
loop as before; keep existing behavior for successful reads, resize and tick
handling, and the event_tx error checks.
- Line 85: The function currently returns the channel pair in reversed order as
(rx, tx); change the return tuple to (tx, rx) so it follows Rust convention
(sender, receiver). Update the function's signature/return documentation and all
call sites that destructure or expect (rx, tx) to the new order (tx, rx),
adjusting variable names (tx, rx) accordingly to avoid logic errors; run tests
to ensure no remaining mismatches.
In `@src/tui/screens/command_picker.rs`:
- Around line 14-19: Make the COMMANDS length accessible so callers (e.g.,
wrapping logic using picker_index) can bounds-check and wrap safely: add a small
public accessor such as a pub fn commands_len() -> usize that returns
COMMANDS.len() or a pub const COMMANDS_LEN: usize (via an explicit constant) and
use that from handlers instead of hardcoding lengths; reference the existing
COMMANDS and picker_index when adding the helper so callers can replace magic
numbers with the new accessor.
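The `src/tui/app.rs` nitpick above asks for a bounded log buffer. A minimal sketch could look like this; the `MAX_LOG_LINES` value is an arbitrary assumption, and `LogBuffer` is a hypothetical wrapper rather than the app's actual field layout.

```rust
use std::collections::VecDeque;

// Assumed cap; the real value would be tuned for the TUI's needs.
const MAX_LOG_LINES: usize = 1000;

/// Bounded log buffer: new lines are pushed to the back and the oldest
/// line is evicted once the cap is reached, so memory use stays constant
/// in long-running sessions.
struct LogBuffer {
    lines: VecDeque<String>,
}

impl LogBuffer {
    fn new() -> Self {
        Self { lines: VecDeque::with_capacity(MAX_LOG_LINES) }
    }

    fn push(&mut self, line: String) {
        if self.lines.len() == MAX_LOG_LINES {
            self.lines.pop_front(); // drop the oldest entry
        }
        self.lines.push_back(line);
    }

    /// Snapshot for consumers that currently iterate a Vec<String>.
    fn snapshot(&self) -> Vec<String> {
        self.lines.iter().cloned().collect()
    }
}
```

Consumers that render the log would take a `snapshot()` (or iterate the deque directly), so external semantics stay the same while growth is capped.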
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (41)
- .github/workflows/ci.yml
- Cargo.toml
- conductor.json
- docs/specs/Gisa-S5-architecture-overview.md
- rustfmt.toml
- src/auth/mod.rs
- src/banner.rs
- src/cache.rs
- src/cli.rs
- src/commands/clone.rs
- src/commands/init.rs
- src/commands/mod.rs
- src/commands/status.rs
- src/commands/sync.rs
- src/config/parser.rs
- src/discovery.rs
- src/lib.rs
- src/main.rs
- src/operations/clone.rs
- src/operations/mod.rs
- src/operations/sync.rs
- src/output.rs
- src/provider/github/pagination.rs
- src/tui/app.rs
- src/tui/backend.rs
- src/tui/event.rs
- src/tui/handler.rs
- src/tui/mod.rs
- src/tui/screens/command_picker.rs
- src/tui/screens/dashboard.rs
- src/tui/screens/mod.rs
- src/tui/screens/org_browser.rs
- src/tui/screens/progress.rs
- src/tui/screens/repo_status.rs
- src/tui/ui.rs
- src/tui/widgets/mod.rs
- src/tui/widgets/repo_table.rs
- src/tui/widgets/status_bar.rs
- toolkit/Conductor/archive.sh
- toolkit/Conductor/run.sh
- toolkit/Conductor/setup.sh
🚧 Files skipped from review as they are similar to previous changes (6)
- src/commands/init.rs
- .github/workflows/ci.yml
- toolkit/Conductor/run.sh
- toolkit/Conductor/setup.sh
- rustfmt.toml
- src/commands/sync.rs
src/tui/handler.rs
Outdated
```rust
fn start_operation(app: &mut App, operation: Operation, backend_tx: &UnboundedSender<AppEvent>) {
    if matches!(app.operation_state, OperationState::Running { .. }) {
        app.error_message = Some("An operation is already running".to_string());
        return;
    }

    app.operation_state = OperationState::Discovering {
        message: format!("Starting {}...", operation),
    };
    app.log_lines.clear();
    app.scroll_offset = 0;

    if !matches!(app.screen, Screen::Progress | Screen::RepoStatus) {
        app.navigate_to(Screen::Progress);
    }

    super::backend::spawn_operation(operation, app, backend_tx.clone());
}
```
Block new operations while Discovering.
Right now only Running is blocked, so a second operation can start while the first is in Discovering, leading to overlapping backend tasks and corrupted UI state.
🛠️ Suggested fix
```diff
-    if matches!(app.operation_state, OperationState::Running { .. }) {
+    if matches!(
+        app.operation_state,
+        OperationState::Running { .. } | OperationState::Discovering { .. }
+    ) {
         app.error_message = Some("An operation is already running".to_string());
         return;
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
fn start_operation(app: &mut App, operation: Operation, backend_tx: &UnboundedSender<AppEvent>) {
    if matches!(
        app.operation_state,
        OperationState::Running { .. } | OperationState::Discovering { .. }
    ) {
        app.error_message = Some("An operation is already running".to_string());
        return;
    }

    app.operation_state = OperationState::Discovering {
        message: format!("Starting {}...", operation),
    };
    app.log_lines.clear();
    app.scroll_offset = 0;

    if !matches!(app.screen, Screen::Progress | Screen::RepoStatus) {
        app.navigate_to(Screen::Progress);
    }

    super::backend::spawn_operation(operation, app, backend_tx.clone());
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/handler.rs` around lines 206 - 223, The start_operation function
allows a new operation when app.operation_state is Discovering, causing
overlapping backend tasks; update the initial guard in start_operation to treat
both OperationState::Running and OperationState::Discovering as busy (e.g.,
replace the matches!(app.operation_state, OperationState::Running { .. }) check
with a check that returns an error when app.operation_state is either
OperationState::Running { .. } or OperationState::Discovering { .. }), set
app.error_message the same way and return early so spawn_operation is never
called while Discovering.
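The busy-state guard under discussion is easy to check in isolation. A minimal sketch, assuming a reduced `OperationState` (the real enum carries more data than shown here):

```rust
// Hypothetical reduction of the app's operation state to what the guard needs.
#[derive(Debug)]
enum OperationState {
    Idle,
    Discovering { message: String },
    Running { operation: String },
}

// Treat both Discovering and Running as busy, as the review suggests,
// so a second operation can never be spawned mid-discovery.
fn is_busy(state: &OperationState) -> bool {
    matches!(
        state,
        OperationState::Running { .. } | OperationState::Discovering { .. }
    )
}
```

With this predicate, `start_operation` would return early (setting `error_message`) whenever `is_busy` holds.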
loop {
    terminal.draw(|frame| ui::render(app, frame))?;

    if let Some(event) = rx.recv().await {
        handler::handle_event(app, event, backend_tx).await;
    }

    if app.should_quit {
        break;
    }
}
Handle channel closure to avoid infinite loop.
If rx.recv().await returns None (all senders dropped), the loop continues without breaking, potentially causing an infinite loop where should_quit is never set.
🛠️ Proposed fix
loop {
terminal.draw(|frame| ui::render(app, frame))?;
- if let Some(event) = rx.recv().await {
- handler::handle_event(app, event, backend_tx).await;
+ match rx.recv().await {
+ Some(event) => handler::handle_event(app, event, backend_tx).await,
+ None => break, // Channel closed, exit gracefully
}
if app.should_quit {
break;
}
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
loop {
    terminal.draw(|frame| ui::render(app, frame))?;
    if let Some(event) = rx.recv().await {
        handler::handle_event(app, event, backend_tx).await;
    }
    if app.should_quit {
        break;
    }
}
loop {
    terminal.draw(|frame| ui::render(app, frame))?;
    match rx.recv().await {
        Some(event) => handler::handle_event(app, event, backend_tx).await,
        None => break, // Channel closed, exit gracefully
    }
    if app.should_quit {
        break;
    }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/mod.rs` around lines 63 - 73, The loop does not handle
rx.recv().await returning None (channel closed), which can cause an infinite
loop; update the receive branch to explicitly handle None by breaking the loop
or setting app.should_quit = true instead of proceeding to call
handler::handle_event; replace the current if let Some(event) = rx.recv().await
{ handler::handle_event(app, event, backend_tx).await; } with a match or if/else
that detects None and exits the loop (or sets app.should_quit) and only calls
handler::handle_event when Some(event).
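The closed-channel case this fix guards against can be reproduced with the standard library's channel, where `recv()` returning `Err` corresponds to tokio's `recv().await` returning `None` (the tokio types themselves are not used in this sketch):

```rust
use std::sync::mpsc;

// Returns how many events were handled before the channel closed.
// Dropping every sender ends the loop instead of spinning forever.
fn drain_events(rx: mpsc::Receiver<u32>) -> usize {
    let mut handled = 0;
    loop {
        match rx.recv() {
            Ok(_event) => handled += 1,
            Err(_) => break, // all senders dropped: exit gracefully
        }
    }
    handled
}
```

The TUI loop needs the same explicit `None`/`Err` arm, otherwise nothing ever sets `should_quit` once the backend side hangs up.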
src/tui/screens/progress.rs (Outdated)
let title_text = match &app.operation_state {
    OperationState::Idle => "Idle".to_string(),
    OperationState::Discovering { message } => message.clone(),
    OperationState::Running { operation, .. } => format!("{}ing Repositories", operation),
    OperationState::Finished { operation, .. } => format!("{} Complete", operation),
};
Avoid naive {}ing concatenation for operation names.
This yields awkward strings like “Cloneing/Statusing.” Prefer explicit labels or rely on Display without concatenation.
✏️ Minimal fix to avoid malformed words
- OperationState::Running { operation, .. } => format!("{}ing Repositories", operation),
+ OperationState::Running { operation, .. } => format!("{} Repositories", operation),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/screens/progress.rs` around lines 38 - 43, The title_text
construction naively appends "ing" or other suffixes to operation names (in the
match on app.operation_state / OperationState::{Running, Finished}), producing
malformed words like "Cloneing"; change the match arms to use explicit,
human-friendly labels instead of "{}ing" concatenation — e.g., map the Running {
operation } arm to a fixed string like "Cloning Repositories" or use
operation.display()/to_string() without adding "ing", and change Finished {
operation } to "Clone Complete" or similar; update the title_text match arms
(the OperationState::Running and OperationState::Finished branches) to return
those explicit strings.
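The explicit-label approach can be sketched with a hypothetical reduced `Operation` enum (the variant names here are assumptions, not the crate's actual type):

```rust
// Hypothetical stand-in for the app's Operation type.
#[derive(Debug, Clone, Copy)]
enum Operation {
    Clone,
    Sync,
    Status,
}

// Explicit progressive labels avoid naive "{}ing" concatenation,
// which would produce "Cloneing" and "Statusing".
fn running_label(op: Operation) -> &'static str {
    match op {
        Operation::Clone => "Cloning Repositories",
        Operation::Sync => "Syncing Repositories",
        Operation::Status => "Checking Repository Status",
    }
}
```

A `match` like this also keeps the wording reviewable in one place, instead of scattering string formatting across screens.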
src/tui/screens/progress.rs (Outdated)
fn render_log(app: &App, frame: &mut Frame, area: ratatui::layout::Rect) {
    let visible_height = area.height.saturating_sub(2) as usize; // account for borders
    let total = app.log_lines.len();
    let start = total.saturating_sub(visible_height);

    let items: Vec<ListItem> = app.log_lines[start..]
        .iter()
        .map(|line| {
            let style = if line.starts_with("[ok]") {
                Style::default().fg(Color::Green)
            } else if line.starts_with("[!!]") {
                Style::default().fg(Color::Red)
            } else if line.starts_with("[--]") {
                Style::default().fg(Color::DarkGray)
            } else {
                Style::default()
            };
            ListItem::new(Line::from(Span::styled(format!(" {}", line), style)))
        })
        .collect();

    let log = List::new(items).block(
        Block::default()
            .title(" Log ")
            .borders(Borders::ALL)
            .border_style(Style::default().fg(Color::DarkGray)),
    );
    frame.render_widget(log, area);
}
Scroll offset is ignored, so j/k has no effect.
handle_progress_key updates app.scroll_offset, but render_log always shows the last window. This makes the scroll hint non-functional.
🧰 Suggested fix (apply scroll window)
- let visible_height = area.height.saturating_sub(2) as usize; // account for borders
- let total = app.log_lines.len();
- let start = total.saturating_sub(visible_height);
-
- let items: Vec<ListItem> = app.log_lines[start..]
- .iter()
- .map(|line| {
+ let visible_height = area.height.saturating_sub(2) as usize; // account for borders
+ let total = app.log_lines.len();
+ let items: Vec<ListItem> = if total == 0 || visible_height == 0 {
+ Vec::new()
+ } else {
+ let end = app.scroll_offset.min(total - 1);
+ let start = end.saturating_sub(visible_height - 1);
+ app.log_lines[start..=end]
+ .iter()
+ .map(|line| {
let style = if line.starts_with("[ok]") {
Style::default().fg(Color::Green)
} else if line.starts_with("[!!]") {
Style::default().fg(Color::Red)
} else if line.starts_with("[--]") {
Style::default().fg(Color::DarkGray)
} else {
Style::default()
};
ListItem::new(Line::from(Span::styled(format!(" {}", line), style)))
- })
- .collect();
+ })
+ .collect()
+ };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/screens/progress.rs` around lines 139 - 167, render_log currently
always displays the last visible window of app.log_lines so scrolling (updated
in handle_progress_key via app.scroll_offset) has no effect; change the start
calculation to account for app.scroll_offset by subtracting it from the computed
start (e.g., compute start =
total.saturating_sub(visible_height).saturating_sub(app.scroll_offset) and clamp
to 0), then slice app.log_lines[start..] as before; ensure you use the same
visible_height and total variables and keep bounds-safe operations
(saturating_sub) to avoid panics.
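The windowing arithmetic can be verified independently of ratatui. A sketch, assuming `offset` counts lines scrolled back from the tail (0 shows the newest lines), matching the interpretation in the prompt above:

```rust
// Compute the half-open range [start, end) of log lines to draw.
// `offset` is how far the user has scrolled back; saturating_sub keeps
// every step panic-free even for tiny areas or huge offsets.
fn visible_range(total: usize, visible_height: usize, offset: usize) -> (usize, usize) {
    if total == 0 || visible_height == 0 {
        return (0, 0);
    }
    let start = total
        .saturating_sub(visible_height)
        .saturating_sub(offset);
    let end = (start + visible_height).min(total);
    (start, end)
}
```

`render_log` would then slice `app.log_lines[start..end]` instead of always pinning to the tail, making `j`/`k` actually move the window.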
@copilot code review
[agent] Start review
@manuelgruber I've opened a new pull request, #2, to work on those changes. Once the pull request is ready, I'll request review from you.
Actionable comments posted: 18
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
♻️ Duplicate comments (9)
src/errors/git.rs (2)
151-153:⚠️ Potential issue | 🟡 Minor
`SshAuthFailed` suggestion hardcodes `github.com` despite having a `host` field. As flagged in a previous review, the suggested action should use the actual host from the error variant rather than hardcoding
`github.com`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/errors/git.rs` around lines 151 - 153, The message for GitError::SshAuthFailed currently hardcodes "github.com"; update the match arm for GitError::SshAuthFailed to interpolate and use the variant's host field (e.g., use the host value from SshAuthFailed { host, .. }) in the suggestion string so the suggested command becomes "ssh -T git@{host}" (or equivalent) and ensure any formatting handles ownership/borrowing appropriately.
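A minimal sketch of the host-aware suggestion, assuming a reduced error enum (the real `GitError` has more variants, and the exact message wording here is an assumption):

```rust
// Hypothetical reduction of the error type: only the variant under discussion.
enum GitError {
    SshAuthFailed { host: String },
}

// Build the suggested remedy from the variant's own `host` field
// instead of a hardcoded "github.com".
fn suggestion(err: &GitError) -> String {
    match err {
        GitError::SshAuthFailed { host } => {
            format!("Test your SSH access with: ssh -T git@{}", host)
        }
    }
}
```

This way GitHub Enterprise or GitLab hosts get a usable hint rather than one that only works for github.com.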
130-133:⚠️ Potential issue | 🟠 Major
`CommandFailed` being unconditionally retryable may cause unnecessary retries. As flagged in a previous review,
`CommandFailed` captures both transient (network timeout) and permanent (repository not found) failures. Marking all as retryable could waste resources retrying unrecoverable errors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/errors/git.rs` around lines 130 - 133, The is_retryable method currently treats GitError::CommandFailed(_) as always retryable; update GitError::is_retryable to inspect the CommandFailed payload instead of blanket-matching it: change matches!(..., GitError::CommandFailed(_)) to match GitError::CommandFailed(inner) and return true only for transient failure indicators (e.g., timeout/exit codes or stderr strings that indicate transient network/auth/timeouts) and false for permanent indications (e.g., "repository not found", 404/permission denied). Locate the is_retryable function and the GitError::CommandFailed variant and implement a conditional check on the inner error/exit status/message to distinguish transient vs permanent failures.
src/config/parser.rs (2)
189-196: ⚠️ Potential issue | 🟠 Major
Use the same concurrency bounds as runtime managers.
Validation currently allows `1..=32`, while clone/sync managers cap at `16`. This creates silent clamping after config validation.
🔧 Suggested fix
-use crate::operations::clone::DEFAULT_CONCURRENCY;
+use crate::operations::clone::{DEFAULT_CONCURRENCY, MAX_CONCURRENCY, MIN_CONCURRENCY};
 ...
-    const MAX_CONCURRENCY: usize = 32;
-    // Validate concurrency
-    if !(1..=MAX_CONCURRENCY).contains(&self.concurrency) {
+    if !(MIN_CONCURRENCY..=MAX_CONCURRENCY).contains(&self.concurrency) {
         return Err(AppError::config(format!(
             "concurrency must be between 1 and {}",
             MAX_CONCURRENCY
         )));
     }
 ...
-# Number of parallel clone/sync operations (1-32)
+# Number of parallel clone/sync operations (1-16)
Also applies to: 230-233
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/parser.rs` around lines 189 - 196, Validation uses MAX_CONCURRENCY = 32 which mismatches runtime clone/sync managers capped at 16, causing silent clamping; update the validation to use the same cap as the runtime managers (use 16 or reference the shared constant used by clone/sync managers) so that self.concurrency is validated against the actual runtime limit, and update the error message in AppError::config accordingly (change MAX_CONCURRENCY or replace with the shared constant used by the clone/sync managers referenced near the clone/sync implementation).
1-3: ⚠️ Potential issue | 🟡 Minor
Keep config path documentation consistent with `default_path()`.
Line 3 still references `gisa.config.toml`, but `Config::default_path()` returns `~/.config/git-same/config.toml`.
🔧 Suggested fix
-//! Handles loading and parsing of gisa.config.toml files.
+//! Handles loading and parsing of config.toml files (default: ~/.config/git-same/config.toml).
Also applies to: 143-160
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/parser.rs` around lines 1 - 3, Doc comments in src/config/parser.rs incorrectly mention "gisa.config.toml" instead of the path returned by Config::default_path(); update the module and any inline comments (including the region around lines 143-160) to consistently state "~/.config/git-same/config.toml" (or describe it as Config::default_path()) so the documentation matches the code; search for occurrences of "gisa.config.toml" in this file and replace them with the canonical path or a reference to Config::default_path() and ensure the comment text around the parser functions (e.g., the parser module and functions that load the config) reflects that change.
src/discovery.rs (1)
108-115: ⚠️ Potential issue | 🟠 Major
Treat `git.status` failures as skipped when `skip_uncommitted` is enabled.
With current logic, a status error silently falls through and the repo is still queued for sync. For `skip_uncommitted=true`, this should skip with a reason.
🔧 Suggested fix
-        if skip_uncommitted {
-            if let Ok(status) = git.status(&local_path) {
-                if status.is_uncommitted || status.has_untracked {
-                    skipped.push((repo, "uncommitted changes".to_string()));
-                    continue;
-                }
-            }
-        }
+        if skip_uncommitted {
+            match git.status(&local_path) {
+                Ok(status) => {
+                    if status.is_uncommitted || status.has_untracked {
+                        skipped.push((repo, "uncommitted changes".to_string()));
+                        continue;
+                    }
+                }
+                Err(err) => {
+                    skipped.push((repo, format!("failed to get status: {}", err)));
+                    continue;
+                }
+            }
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/discovery.rs` around lines 108 - 115, The current block calls git.status(&local_path) but ignores errors, causing repos to proceed when skip_uncommitted is true; update the logic in the section that checks skip_uncommitted (the code using git.status, local_path, skipped and repo) to treat Err(_) results as a skip: on Err, push (repo, "status error".to_string() or include the error message) into skipped and continue; on Ok, keep the existing is_uncommitted/has_untracked check and continue when true.
src/operations/sync.rs (1)
212-212: ⚠️ Potential issue | 🟡 Minor
Clamp semaphore concurrency at construction to avoid zero-permit deadlock.
Line 212 uses `self.options.concurrency` directly; because the field is public, `0` can still be passed and cause `acquire_owned()` to wait forever.
🔧 Suggested fix
-        let semaphore = Arc::new(Semaphore::new(self.options.concurrency));
+        let concurrency = self
+            .options
+            .concurrency
+            .clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        let semaphore = Arc::new(Semaphore::new(concurrency));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/sync.rs` at line 212, The semaphore is constructed with self.options.concurrency which can be zero and cause acquire_owned() to hang; change the construction of semaphore (the Arc::new(Semaphore::new(...)) call that assigns to semaphore) to clamp the permit count to at least 1 (e.g., compute a local permits = std::cmp::max(1, self.options.concurrency) and pass that) so you never create a Semaphore with 0 permits.
src/lib.rs (1)
9-10: ⚠️ Potential issue | 🟡 Minor
Align provider support docs with actual implementation.
Line 9 states GitLab and Bitbucket are supported, but current provider creation paths return `NotImplemented` for both. Please mark them as "coming soon" (or remove them from the supported list) to avoid misleading users.
🔧 Suggested doc update
-//! - **Multi-Provider Support**: Works with GitHub, GitHub Enterprise, GitLab, and Bitbucket
+//! - **Multi-Provider Support**: Works with GitHub and GitHub Enterprise (GitLab/Bitbucket coming soon)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib.rs` around lines 9 - 10, Update the crate-level documentation comment that currently lists "GitLab" and "Bitbucket" as supported providers to avoid claiming unsupported functionality: change those entries to "GitLab (coming soon)" and "Bitbucket (coming soon)" or remove them entirely, and ensure this matches the actual provider creation paths that return NotImplemented; verify and update any related doc comment block (the top-level //! comments) so the public docs align with the behavior of the provider creation code paths that return NotImplemented.
src/cli.rs (1)
67-69: ⚠️ Potential issue | 🟡 Minor
Fix init help text to match the actual default config path.
Line 67 references `~/.config/gisa/gisa.config.toml`, but the runtime default path is `~/.config/git-same/config.toml`.
🔧 Suggested fix
-    /// Path for config file (default: ~/.config/gisa/gisa.config.toml)
+    /// Path for config file (default: ~/.config/git-same/config.toml)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli.rs` around lines 67 - 69, The doc comment for the CLI field `path: Option<PathBuf>` incorrectly says the default is `~/.config/gisa/gisa.config.toml`; update that help text to the actual runtime default `~/.config/git-same/config.toml` (i.e., edit the triple-slash comment above the `path` field in src/cli.rs so the displayed help matches the real default).
src/operations/clone.rs (1)
207-207: ⚠️ Potential issue | 🟡 Minor
Clamp effective concurrency when building the semaphore.
Line 207 uses `self.options.concurrency` as-is; with public field mutation, `0` can still slip through and stall cloning.
🔧 Suggested fix
-        let semaphore = Arc::new(Semaphore::new(self.options.concurrency));
+        let concurrency = self
+            .options
+            .concurrency
+            .clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        let semaphore = Arc::new(Semaphore::new(concurrency));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/clone.rs` at line 207, The semaphore is initialized with self.options.concurrency directly, which can be 0 if the public field was mutated and will deadlock; change the initialization to clamp the effective concurrency to a minimum of 1 (e.g., use max(1, self.options.concurrency) or an equivalent clamp) before calling Semaphore::new so semaphore always has at least one permit; update the line that constructs semaphore (the variable named semaphore and the call to Semaphore::new) to use the clamped value.
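The clamp recurring in both semaphore findings is a one-liner. A sketch with assumed bound values (the crate's actual `MIN_CONCURRENCY`/`MAX_CONCURRENCY` constants may differ):

```rust
// Bounds mirroring the constants named in the suggested fixes (values assumed).
const MIN_CONCURRENCY: usize = 1;
const MAX_CONCURRENCY: usize = 16;

// Clamping at construction time means a mutated public field can never
// yield a zero-permit semaphore or exceed the manager's cap.
fn effective_concurrency(requested: usize) -> usize {
    requested.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY)
}
```

Passing `effective_concurrency(self.options.concurrency)` to `Semaphore::new` removes the zero-permit deadlock in both the clone and sync managers.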
🟡 Minor comments (12)
.github/workflows/S3-Publish-Homebrew.yml (1)
50-87: ⚠️ Potential issue | 🟡 Minor
Heredoc indentation will embed leading whitespace into formula.rb.
The heredoc content is indented to align with the YAML, but this whitespace becomes part of the generated file. Homebrew formulas are Ruby files and excessive indentation may cause linting warnings or style issues in the tap repository.
🛠️ Proposed fix: Remove heredoc indentation
 cat > formula.rb << 'FORMULA'
-          class GitSame < Formula
-            desc "Discover and mirror GitHub org/repo structures locally"
-            homepage "https://github.com/zaai-com/git-same"
-            version "VERSION_PLACEHOLDER"
-            license "MIT"
-
-            if OS.mac?
-              if Hardware::CPU.arm?
-                url "URL_PLACEHOLDER/git-same-macos-aarch64"
-                sha256 "SHA_MACOS_AARCH64_PLACEHOLDER"
-              else
-                url "URL_PLACEHOLDER/git-same-macos-x86_64"
-                sha256 "SHA_MACOS_X86_64_PLACEHOLDER"
-              end
-            elsif OS.linux?
-              if Hardware::CPU.arm?
-                url "URL_PLACEHOLDER/git-same-linux-aarch64"
-                sha256 "SHA_LINUX_AARCH64_PLACEHOLDER"
-              else
-                url "URL_PLACEHOLDER/git-same-linux-x86_64"
-                sha256 "SHA_LINUX_X86_64_PLACEHOLDER"
-              end
-            end
-
-            def install
-              if OS.mac?
-                bin.install "git-same-macos-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same"
-              elsif OS.linux?
-                bin.install "git-same-linux-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same"
-              end
-            end
-
-            test do
-              assert_match version.to_s, shell_output("#{bin}/git-same --version")
-            end
-          end
-          FORMULA
+class GitSame < Formula
+  desc "Discover and mirror GitHub org/repo structures locally"
+  homepage "https://github.com/zaai-com/git-same"
+  version "VERSION_PLACEHOLDER"
+  license "MIT"
+
+  if OS.mac?
+    if Hardware::CPU.arm?
+      url "URL_PLACEHOLDER/git-same-macos-aarch64"
+      sha256 "SHA_MACOS_AARCH64_PLACEHOLDER"
+    else
+      url "URL_PLACEHOLDER/git-same-macos-x86_64"
+      sha256 "SHA_MACOS_X86_64_PLACEHOLDER"
+    end
+  elsif OS.linux?
+    if Hardware::CPU.arm?
+      url "URL_PLACEHOLDER/git-same-linux-aarch64"
+      sha256 "SHA_LINUX_AARCH64_PLACEHOLDER"
+    else
+      url "URL_PLACEHOLDER/git-same-linux-x86_64"
+      sha256 "SHA_LINUX_X86_64_PLACEHOLDER"
+    end
+  end
+
+  def install
+    if OS.mac?
+      bin.install "git-same-macos-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same"
+    elsif OS.linux?
+      bin.install "git-same-linux-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same"
+    end
+  end
+
+  test do
+    assert_match version.to_s, shell_output("#{bin}/git-same --version")
+  end
+end
+FORMULA
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S3-Publish-Homebrew.yml around lines 50 - 87, The heredoc used to write formula.rb (the line starting with cat > formula.rb << 'FORMULA') is indented in the workflow so every line in the generated file gets leading spaces; fix this by left-aligning the heredoc: place the cat > formula.rb << 'FORMULA' marker at column 0 in the script section and unindent the entire heredoc body (the class GitSame ... end block) so the Ruby formula contents are written without leading whitespace, ensuring the install/test methods and metadata remain exactly as shown.
src/commands/support/concurrency.rs (1)
6-15: ⚠️ Potential issue | 🟡 Minor
Add explicit guard against zero concurrency for defensive consistency.
While upstream clamping in `sync_workspace.rs` currently prevents zero concurrency from reaching this function, `warn_if_concurrency_capped` has asymmetric bounds checking: it only guards the upper bound. The codebase already establishes the pattern of guarding both bounds (see `CloneManagerOptions.with_concurrency` at line 126 of `src/operations/clone.rs`). For defensive robustness and consistency, add an explicit check for zero:
🛠️ Suggested lower-bound guard
 pub(crate) fn warn_if_concurrency_capped(requested: usize, output: &Output) -> usize {
+    if requested == 0 {
+        output.warn("Requested concurrency 0 is invalid. Using 1 instead.");
+        1
+    } else if requested > MAX_CONCURRENCY {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/support/concurrency.rs` around lines 6 - 15, warn_if_concurrency_capped only enforces the upper bound; add a defensive lower-bound check to handle requested == 0 similarly to CloneManagerOptions.with_concurrency: if requested is 0, call output.warn with a clear message and return 1 (or the established minimum concurrency used in CloneManagerOptions) instead of 0; keep the existing MAX_CONCURRENCY branch and ensure the function returns the corrected non-zero value in all cases.
src/commands/workspace.rs (1)
50-55: ⚠️ Potential issue | 🟡 Minor
Default-workspace load failures are silently ignored.
If `default_workspace` is configured but no longer loadable, list output omits that fact. A warning here would make broken config state visible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/workspace.rs` around lines 50 - 55, The code silently ignores errors when loading the configured default workspace (the block that calls WorkspaceManager::load with default_name and prints default_ws.display_label()); change this so a failed load emits a visible warning instead of being dropped: catch the Err from WorkspaceManager::load(default_name) and call output.warn or similar with a clear message (including the default_name and the error) so users see when the configured default workspace cannot be loaded while keeping the existing successful-path behavior that prints output.info with default_ws.display_label().
docs/plans/move-config-to-sync-folder.md (1)
91-91: ⚠️ Potential issue | 🟡 Minor
Update stale file path in migration plan.
Line 91 references `src/cache.rs`, but cache code is under `src/cache/mod.rs` (with submodules). This can mislead implementation planning.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/plans/move-config-to-sync-folder.md` at line 91, The migration plan references a stale cache file path (`src/cache.rs`); update the plan to point to the actual cache module entry (`src/cache/mod.rs`) so the implementation guidance matches the repo layout and reflects that cache code lives under a mod.rs with submodules.
src/commands/workspace.rs (1)
124-131: ⚠️ Potential issue | 🟡 Minor
`test_list_empty` does not assert the result and can hide failures.
Line 130 discards `list`'s `Result`, so the test may pass even if behavior regresses.
Minimal test tightening
-    let _ = list(&config, &output);
+    assert!(list(&config, &output).is_ok());
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/workspace.rs` around lines 124 - 131, The test test_list_empty currently discards the Result returned by list and can hide failures; change it to assert the Result is Ok (or assert the expected Err) by calling list(&config, &output) and using assert!(... .is_ok()) or unwrap() as appropriate, so failures surface: update the test_list_empty function to evaluate the Result from list (referencing Config and quiet_output) and assert the expected outcome instead of ignoring it.
src/commands/init.rs (1)
51-51: ⚠️ Potential issue | 🟡 Minor
Use a consistent CLI command name in user guidance.
This message says `git-same setup`, while other command flows in this PR guide users with `gisa ...`, which is confusing.
🔧 Proposed fix
-    output.info("Run 'git-same setup' to configure a local folder as workspace.");
+    output.info("Run 'gisa setup' to configure a local folder as workspace.");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/init.rs` at line 51, The user-facing message in the call to output.info in init.rs uses the inconsistent command name "git-same setup"; update that string to the project-wide CLI name ("gisa") so guidance is consistent (e.g., change the message passed to output.info in the init command implementation to use "gisa setup"). Ensure you modify the string literal only in the output.info invocation so other logic remains unchanged.
src/setup/screens/path.rs (1)
85-88: ⚠️ Potential issue | 🟡 Minor
Clamp cursor placement to the visible input width.
Line 86 computes `cursor_x` from the full path length; long input can place the cursor outside the rendered input box.
🔧 Proposed fix
-    let cursor_x = chunks[1].x + 1 + 2 + cursor_pos as u16;
+    let inner_width = chunks[1].width.saturating_sub(2); // left/right borders
+    let max_visible_chars = inner_width.saturating_sub(2); // leading " "
+    let clamped_cursor = (cursor_pos as u16).min(max_visible_chars);
+    let cursor_x = chunks[1].x + 1 + 2 + clamped_cursor;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/path.rs` around lines 85 - 88, The cursor_x calculation uses the full input string length (cursor_pos) and can place the cursor outside the visible input box when the path is long; update the logic in the block guarded by state.path_suggestions_mode and state.path_browse_mode (where cursor_x is computed and frame.set_cursor_position is called) to clamp cursor_x to the rendered input area's visible width (compute the input inner width from chunks[1] minus borders/padding and use min(cursor_x, chunks_left + inner_width - 1) or equivalent), ensuring the final cursor_x passed to frame.set_cursor_position never exceeds the input box bounds.
src/banner.rs (1)
81-97: ⚠️ Potential issue | 🟡 Minor
Potential panic with single-element stops array.
If `interpolate_stops` is called with a single-element array, `segments - 1` underflows on Line 87. While current usage with 3+ stops is safe, this is a latent bug if the function is reused.
🛡️ Defensive fix
 #[cfg(feature = "tui")]
 pub(crate) fn interpolate_stops(stops: &[(u8, u8, u8)], t: f64) -> (u8, u8, u8) {
+    if stops.len() <= 1 {
+        return stops.first().copied().unwrap_or((0, 0, 0));
+    }
     let t = t.clamp(0.0, 1.0);
     let segments = stops.len() - 1;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/banner.rs` around lines 81 - 97, interpolate_stops can underflow when stops has only one element; add a defensive guard at the top of the function: if stops.is_empty() return a sensible default (e.g., (0,0,0)) and if stops.len() == 1 return stops[0]; then compute segments = stops.len() - 1 and keep the rest of the logic (this avoids segments - 1 underflow and out-of-bounds access when computing idx and stops[idx + 1]). Reference: interpolate_stops, stops, segments, idx, local_t.
src/tui/screens/dashboard.rs (1)
106-111: ⚠️ Potential issue | 🟡 Minor
Platform-specific command won't work cross-platform.
The `open` command is macOS-specific. This will fail silently on Linux (use `xdg-open`) and Windows (use `start` or `explorer`).
🛠️ Cross-platform fix
 KeyCode::Enter => {
     // Open the selected repo's folder
     if let Some(path) = selected_repo_path(app) {
-        let _ = std::process::Command::new("open").arg(&path).spawn();
+        #[cfg(target_os = "macos")]
+        let _ = std::process::Command::new("open").arg(&path).spawn();
+        #[cfg(target_os = "linux")]
+        let _ = std::process::Command::new("xdg-open").arg(&path).spawn();
+        #[cfg(target_os = "windows")]
+        let _ = std::process::Command::new("explorer").arg(&path).spawn();
     }
 }
Alternatively, consider using the `open` crate, which handles this cross-platform.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/screens/dashboard.rs` around lines 106 - 111, The current KeyCode::Enter handler uses the macOS-only command Command::new("open") in the block that calls selected_repo_path(app). Replace this with a cross-platform opener: either call the open crate (e.g., open::that(path)) or dispatch per-OS using cfg!(target_os = "windows")/cfg!(target_os = "macos")/else to run "start"/"open"/"xdg-open" respectively, and propagate or log errors from the spawn/open call; update the code inside the KeyCode::Enter arm where selected_repo_path(app) is used and remove the hardcoded "open" invocation.
src/git/shell.rs (1)
121-128: ⚠️ Potential issue | 🟡 Minor
Handle unborn-branch status format in branch parsing.
For status output like `## No commits yet on main`, the current fallback returns `No` as the branch name.
♻️ Suggested fix
-    let branch = line.split_whitespace().next().unwrap_or("HEAD");
+    let branch = if let Some(rest) = line.strip_prefix("No commits yet on ") {
+        rest.trim()
+    } else {
+        line.split_whitespace().next().unwrap_or("HEAD")
+    };
     (branch, None)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/git/shell.rs` around lines 121 - 128, The fallback branch parsing incorrectly takes the first whitespace token (e.g., "No" from "No commits yet on main"); update the logic in the block that currently uses line.split_whitespace().next().unwrap_or("HEAD") (the branch parsing branch in src/git/shell.rs where bracket_idx is handled) to detect "No commits yet on <branch>" (or similar unborn-branch phrases) and extract the actual branch name (the last token after the phrase) instead of the first token; simplest fix: check for the "No commits yet on " prefix (or use a small regex) and return the trailing branch name, otherwise keep the existing split_whitespace() fallback and HEAD default.
src/tui/app.rs (1)
402-410: ⚠️ Potential issue | 🟡 Minor
Update `workspace_index` when selecting a workspace.
On Line 402, state updates `active_workspace` but not `workspace_index`, which can desync selection state in the UI.
🔧 Suggested fix
 pub fn select_workspace(&mut self, index: usize) {
     if let Some(ws) = self.workspaces.get(index).cloned() {
+        self.workspace_index = index;
         self.base_path = Some(ws.expanded_base_path());
         // Load sync history for this workspace
         self.sync_history = crate::cache::SyncHistoryManager::for_workspace(&ws.name)
             .and_then(|m| m.load())
             .unwrap_or_default();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/app.rs` around lines 402 - 410, The select_workspace method updates active_workspace but fails to update the selection index, causing UI desync; update the workspace_index state inside select_workspace (e.g., set self.workspace_index = Some(index) or the appropriate index type) whenever a workspace is successfully selected (use symbols: select_workspace, workspace_index, active_workspace, workspaces) so the selection state remains consistent with the chosen workspace and ensure this assignment occurs alongside setting base_path, sync_history, and active_workspace.
src/setup/state.rs (1)
130-133: ⚠️ Potential issue | 🟡 Minor

Use a path-boundary check in `tilde_collapse`.

Current prefix logic can incorrectly collapse paths that only share a string prefix with HOME (e.g., `/home/alice2/...`).

🔧 Suggested fix

```diff
 pub fn tilde_collapse(path: &str) -> String {
     if let Ok(home) = std::env::var("HOME") {
-        if path.starts_with(&home) {
-            return format!("~{}", &path[home.len()..]);
+        let home = home.trim_end_matches('/');
+        if path == home {
+            return "~".to_string();
+        }
+        let home_prefix = format!("{}/", home);
+        if let Some(rest) = path.strip_prefix(&home_prefix) {
+            return format!("~/{}", rest);
         }
     }
     path.to_string()
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/state.rs` around lines 130 - 133, The tilde_collapse function incorrectly collapses any path that string-starts with HOME (using std::env::var("HOME") and path.starts_with(&home)), causing false matches like "/home/alice2/..."; change the check to only collapse when path == home or when path has HOME as a proper directory prefix (i.e., use strip_prefix(&home) and ensure the remaining suffix either is empty or begins with '/'), then return format!("~{}", suffix) accordingly so only true path-boundary matches are replaced.
🧹 Nitpick comments (16)
.gitignore (1)
15-16: Scope `config.toml` ignore rule to avoid masking tracked nested configs.

At Line 15, `config.toml` matches any path, which can accidentally hide intentionally tracked files like fixtures/examples under subdirectories. Consider anchoring it to repository root.

Suggested adjustment

```diff
-config.toml
+/config.toml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 15 - 16, The .gitignore entry "config.toml" is too broad and will ignore files in nested directories; update the rule to anchor it to the repository root by replacing the unanchored "config.toml" entry with "/config.toml" (leave other rules like "*.local.toml" intact) so only the top-level config is ignored and tracked nested configs/fixtures are not masked.

.github/workflows/S3-Publish-Homebrew.yml (2)
24-34: Consider adding error handling for missing release assets.

If any of the four platform binaries are missing from the release, the workflow will fail without a clear message. Adding validation or `|| exit 1` with descriptive errors would improve debuggability.

♻️ Proposed improvement: Validate downloaded assets

```diff
 - name: Download release assets
   env:
     GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
   run: |
     mkdir -p assets
     gh release download "${{ steps.version.outputs.tag }}" \
       --pattern "git-same-linux-x86_64" \
       --pattern "git-same-linux-aarch64" \
       --pattern "git-same-macos-x86_64" \
       --pattern "git-same-macos-aarch64" \
       --dir assets
+
+    # Verify all expected assets were downloaded
+    for asset in git-same-linux-x86_64 git-same-linux-aarch64 git-same-macos-x86_64 git-same-macos-aarch64; do
+      if [[ ! -f "assets/$asset" ]]; then
+        echo "::error::Missing release asset: $asset"
+        exit 1
+      fi
+    done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S3-Publish-Homebrew.yml around lines 24 - 34, The "Download release assets" step currently runs gh release download for four platform patterns but lacks validation; update that step to verify each expected asset was actually downloaded after the gh command (e.g., check for files matching "git-same-linux-x86_64", "git-same-linux-aarch64", "git-same-macos-x86_64", "git-same-macos-aarch64" inside the assets directory) and if any are missing emit a clear error message and exit non‑zero (use echo to print which pattern(s) are missing and exit 1) so the workflow fails with a descriptive message rather than a cryptic gh failure.
103-108: Git commit will fail if formula is unchanged (re-run scenario).

If the workflow is re-run for the same version, `git commit` will fail because there are no changes. Consider using `--allow-empty` or checking for changes first.

♻️ Proposed fix: Handle no-changes scenario

```diff
 cd tap
 git config user.name "github-actions[bot]"
 git config user.email "github-actions[bot]@users.noreply.github.com"
 git add "Formula/${FORMULA_NAME}.rb"
-git commit -m "Update ${FORMULA_NAME} to ${{ steps.version.outputs.version }}"
-git push
+if git diff --cached --quiet; then
+  echo "No changes to commit"
+else
+  git commit -m "Update ${FORMULA_NAME} to ${{ steps.version.outputs.version }}"
+  git push
+fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S3-Publish-Homebrew.yml around lines 103 - 108, The git commit step that runs 'git commit -m "Update ${FORMULA_NAME} to ${{ steps.version.outputs.version }}"' will fail when there are no changes (re-run same version); update the workflow to either use 'git commit --allow-empty' for that commit step or add a pre-check that inspects the repo (e.g., run 'git status --porcelain' or 'git diff --exit-code') and skip the commit/push when there are no staged changes for Formula/${FORMULA_NAME}.rb, ensuring the step that references FORMULA_NAME and steps.version.outputs.version handles the no-changes scenario gracefully.

.github/workflows/S4-Publish-Crates.yml (1)
9-24: Consider adding verification steps before publishing.

Publishing without running tests or a dry-run is risky; a broken crate could be released. Consider adding verification steps:

♻️ Proposed enhancement with verification and dry-run

```diff
 publish:
   name: Publish to crates.io
   runs-on: ubuntu-latest
   steps:
     - uses: actions/checkout@v6

     - name: Install Rust
       uses: dtolnay/rust-toolchain@stable

     - uses: Swatinem/rust-cache@v2

+    - name: Run tests
+      run: cargo test --locked
+
+    - name: Dry run
+      run: cargo publish --locked --dry-run
+
     - name: Publish
-      run: cargo publish
+      run: cargo publish --locked
       env:
         CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S4-Publish-Crates.yml around lines 9 - 24, The workflow's publish job (job name "publish" and the step named "Publish" that runs "cargo publish") lacks verification; add pre-publish checks by inserting steps to run tests (cargo test --workspace or appropriate test command) and a dry-run publish (cargo publish --dry-run) and optionally a package/verify step (cargo package or cargo check for the crate) before the actual cargo publish, failing the job if any of these steps fail.

docs/README.md (1)
1-382: Consider adding extension-oriented README sections.

A concise Directory Structure section and a Common Modifications section would improve contributor onboarding and maintenance.

Based on learnings: Applies to **/README.md : Include a Directory Structure section with file tree and descriptions in each README; Include a Common Modifications section explaining how to extend the feature in each README.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/README.md` around lines 1 - 382, Add two new sections to the README: a "Directory Structure" heading that shows a concise file tree (top-level bins, src, docs, config example, assets) with one-line descriptions for each entry to help new contributors locate code and docs, and a "Common Modifications" heading that lists typical extension tasks (adding a provider, adding CLI commands/aliases, adjusting config.toml fields like providers/base_path/filters, and updating TUI screens) with step-by-step bullets and pointers to the relevant symbols (e.g., config.toml keys: structure, providers, clone.depth; CLI verbs: init, setup, sync, workspace; TUI screens: Dashboard, Setup Wizard) so maintainers know where to change code and docs when extending features.

src/commands/support/mod.rs (1)
3-4: Tighten module visibility to crate scope.

These are internal support helpers; exposing modules as `pub` broadens the surface unnecessarily.

♻️ Suggested visibility tightening

```diff
-pub mod concurrency;
-pub mod workspace;
+pub(crate) mod concurrency;
+pub(crate) mod workspace;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/support/mod.rs` around lines 3 - 4, The modules concurrency and workspace are declared public but should be restricted to crate scope; change their declarations from `pub mod concurrency;` and `pub mod workspace;` to crate-visible (`pub(crate) mod concurrency;` and `pub(crate) mod workspace;`) so these support helpers are not exported from the crate API, updating any internal uses if necessary to match the tightened visibility.

src/infra/storage/mod.rs (1)
3-4: Consider replacing glob re-exports with explicit symbols.

Lines 3-4 expose broad, implicit API surface. Explicit re-exports make public contracts easier to maintain and review over time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/infra/storage/mod.rs` around lines 3 - 4, Replace the glob re-exports in mod.rs with explicit symbol lists: instead of `pub use crate::cache::*;` and `pub use crate::config::workspace_manager::*;` open the `crate::cache` and `crate::config::workspace_manager` modules, identify the concrete public types/functions you intend to expose (e.g. structs, enums, traits, functions like Cache, CacheError, init_cache, WorkspaceManager, WorkspaceConfig, etc.), and change the lines to `pub use crate::cache::{SymbolA, SymbolB, ...};` and `pub use crate::config::workspace_manager::{SymbolX, SymbolY, ...};` so the public API is explicit and reviewable.

src/tui/backend.rs (1)
259-279: Minor: Redundant clones at function entry.
`config` and `workspace` are cloned unconditionally at Lines 261-262, but then cloned again at Lines 272-273 for the Status path. The initial clones are only needed for the Sync path.

♻️ Suggested optimization

```diff
 pub fn spawn_operation(operation: Operation, app: &App, tx: UnboundedSender<AppEvent>) {
-    let config = app.config.clone();
-    let workspace = app.active_workspace.clone();
-    let sync_pull = app.sync_pull;
-
     match operation {
         Operation::Sync => {
+            let config = app.config.clone();
+            let workspace = app.active_workspace.clone();
+            let sync_pull = app.sync_pull;
             tokio::spawn(async move {
                 run_sync_operation(config, workspace, tx, sync_pull).await;
             });
         }
         Operation::Status => {
             let workspace = app.active_workspace.clone();
             let config = app.config.clone();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/backend.rs` around lines 259 - 279, In spawn_operation, avoid cloning config and workspace unconditionally at the top; only clone what each branch needs: for Operation::Sync clone app.config and app.active_workspace (and capture app.sync_pull) before spawning run_sync_operation, and for Operation::Status clone app.config and app.active_workspace inside the Status arm before spawning run_status_scan; update the function so initial clones at the top are removed and each match arm performs its own necessary clones to eliminate the redundant copies.

src/setup/mod.rs (1)
66-78: Consider a cleaner approach for triggering org discovery.

Using a synthetic `KeyCode::Null` event to trigger async discovery is unconventional and couples the handler to expect this special case. A more explicit approach would improve clarity.

♻️ Suggested refactor

Consider extracting the discovery trigger into a dedicated function or using a separate message/signal pattern rather than routing through the key handler:

```diff
 // If we're on the orgs step and loading, trigger discovery before waiting for input
 if state.step == SetupStep::SelectOrgs && state.org_loading {
     // Render loading state first, then do discovery
     terminal.draw(|frame| ui::render(state, frame))?;
-    handler::handle_key(
-        state,
-        crossterm::event::KeyEvent::new(
-            crossterm::event::KeyCode::Null,
-            crossterm::event::KeyModifiers::NONE,
-        ),
-    )
-    .await;
+    handler::trigger_org_discovery(state).await;
     continue;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/mod.rs` around lines 66 - 78, The code uses a synthetic KeyEvent (KeyCode::Null) passed to handler::handle_key to kick off org discovery when state.step == SetupStep::SelectOrgs and state.org_loading; replace this by extracting the discovery trigger out of the key handler into an explicit function or signal (e.g., a new handler::start_org_discovery or a message enum variant) and call that directly after rendering (instead of creating KeyEvent::new(...)); update handler::handle_key to remove any special-case logic for KeyCode::Null and ensure the new function reads state.org_loading and invokes the existing discovery logic/state transitions used by handler::handle_key.

src/setup/screens/orgs.rs (1)
121-150: Minor: Consider preventing integer overflow in bar/percentage calculations.

While unlikely in practice, extremely large repo counts could cause integer overflow in the multiplication before division. Using `saturating_mul` or casting to a wider type would be more defensive.

🛡️ Defensive fix

```diff
 // Proportional bar
 let filled = if max_repos > 0 {
-    (org.repo_count * bar_width) / max_repos
+    ((org.repo_count as u64 * bar_width as u64) / max_repos as u64) as usize
 } else {
     0
 }
 .max(if org.repo_count > 0 { 1 } else { 0 });

 // Percentage
 if total_repos > 0 {
-    let pct = (org.repo_count * 100) / total_repos;
+    let pct = ((org.repo_count as u64 * 100) / total_repos as u64) as usize;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/orgs.rs` around lines 121 - 150, The proportional bar and percentage math (the calculation assigning filled and pct using (org.repo_count * bar_width) / max_repos and (org.repo_count * 100) / total_repos) can overflow for very large repo counts; update these to do arithmetic in a wider/saturating integer first (e.g., cast org.repo_count, bar_width, max_repos, total_repos to u64 or use saturating_mul) before multiplying, then convert/clamp the result back to the target type for filled and pct and preserve the existing .max and zero-division guards in the filled and pct branches (refer to the filled calculation and the pct computation near org.repo_count, bar_width, max_repos, and total_repos).

src/domain/repo_path_template.rs (1)
53-59: `scan_depth` assumes well-formed templates.

The depth calculation only checks for `{provider}` presence, returning 2 or 3. Custom templates like `{org}/{org}/{repo}` or `custom/{org}/{repo}` would report incorrect depth. Consider counting actual path separators in the rendered output or documenting the assumption.

♻️ Alternative: count segments dynamically

```diff
 pub fn scan_depth(&self) -> usize {
-    if self.template.contains("{provider}") {
-        3
-    } else {
-        2
-    }
+    // Count the number of path segments in the template
+    self.template.matches('/').count() + 1
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/repo_path_template.rs` around lines 53 - 59, scan_depth currently assumes templates only vary by presence of "{provider}" and returns 2 or 3; update the scan_depth method to compute depth dynamically by splitting self.template on '/' (after trimming leading/trailing slashes), filtering out empty segments, and returning the resulting segment count so custom templates like "custom/{org}/{repo}" or repeated placeholders are handled correctly; reference the scan_depth method and the self.template field when making this change.

src/tui/handler.rs (1)
204-204: Silent error handling on workspace reload.
`unwrap_or_default()` silently ignores errors when reloading workspaces after setup completion. If the filesystem becomes inaccessible, the user won't know why their workspaces disappeared.

♻️ Consider logging the error

```diff
-    app.workspaces = WorkspaceManager::list().unwrap_or_default();
+    app.workspaces = WorkspaceManager::list().unwrap_or_else(|e| {
+        tracing::warn!(error = %e, "Failed to reload workspaces after setup");
+        Vec::new()
+    });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/handler.rs` at line 204, Replace the silent unwrap_or_default on WorkspaceManager::list() so failures are logged and users can be informed: call WorkspaceManager::list() and match or use if let Err(e) to capture the error, log it with a descriptive message (e.g., referencing "workspace reload" and include the error value) and then set app.workspaces to the default fallback; update the site where app.workspaces is assigned (the line using WorkspaceManager::list()) to perform this error handling and logging instead of unwrap_or_default().

src/main.rs (1)
67-77: Silent error handling during config auto-creation.

Errors during directory creation (line 72) and file writing (line 74) are silently ignored with `let _ = ...`. If the config directory is not writable, the TUI will fail later with a confusing error.

♻️ Consider logging creation failures

```diff
 if cli.config.is_none() {
     if let Ok(default_path) = Config::default_path() {
         if !default_path.exists() {
             if let Some(parent) = default_path.parent() {
-                let _ = std::fs::create_dir_all(parent);
+                if let Err(e) = std::fs::create_dir_all(parent) {
+                    debug!(error = %e, "Failed to create config directory");
+                }
             }
-            let _ = std::fs::write(&default_path, Config::default_toml());
+            if let Err(e) = std::fs::write(&default_path, Config::default_toml()) {
+                debug!(error = %e, "Failed to write default config");
+            }
         }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.rs` around lines 67 - 77, The auto-create block around Config::default_path() silently ignores errors from std::fs::create_dir_all and std::fs::write; change the logic in that block (the code referencing Config::default_path and Config::default_toml) to capture and handle those Result errors instead of using let _ = ..., log the failures (using the project's logger or eprintln!) with context about the path and operation, and propagate or exit with a clear error so the TUI won’t fail later with a confusing error.

src/git/traits.rs (3)
255-270: Shared `error_message` may cause unexpected behavior when configuring multiple failure modes.

The `fail_clones`, `fail_fetches`, and `fail_pulls` methods all write to the same `error_message` field. If a test configures multiple failure modes with different messages, only the last message will be used for all failures.

This is likely acceptable for simple test scenarios, but consider documenting this behavior or using separate error messages per operation type if more granular control is needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/git/traits.rs` around lines 255 - 270, The three setters fail_clones, fail_fetches, and fail_pulls all write to the same config.error_message which means configuring multiple failure modes with different messages will clobber earlier values; update the Config struct to hold per-operation messages (e.g., clone_error_message, fetch_error_message, pull_error_message) and change each method to set its corresponding field (and leave other *_succeeds flags unchanged), or alternatively document the single-shared-message behavior; search for references to config.error_message and adapt uses (and tests) to read the correct per-operation field when returning error text.
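The per-operation split the prompt describes can be sketched standalone. The builder names below mirror the review comment (`fail_clones`, `fail_fetches`, `fail_pulls`), but the struct shape is an assumption, not the actual `MockConfig` in src/git/traits.rs:

```rust
/// Hypothetical mock configuration with one error message per operation,
/// so configuring a second failure mode no longer clobbers the first.
#[derive(Default)]
struct MockConfig {
    clone_error: Option<String>,
    fetch_error: Option<String>,
    pull_error: Option<String>,
}

impl MockConfig {
    fn fail_clones(mut self, msg: &str) -> Self {
        self.clone_error = Some(msg.to_string());
        self
    }
    fn fail_fetches(mut self, msg: &str) -> Self {
        self.fetch_error = Some(msg.to_string());
        self
    }
    fn fail_pulls(mut self, msg: &str) -> Self {
        self.pull_error = Some(msg.to_string());
        self
    }
}

fn main() {
    let cfg = MockConfig::default()
        .fail_clones("clone boom")
        .fail_fetches("fetch boom");
    // Each failure mode keeps its own message; pull stays unconfigured.
    assert_eq!(cfg.clone_error.as_deref(), Some("clone boom"));
    assert_eq!(cfg.fetch_error.as_deref(), Some("fetch boom"));
    assert_eq!(cfg.pull_error, None);
}
```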
351-361: Mock `status()` always succeeds.

Unlike `clone_repo`, `fetch`, and `pull`, the `status` method has no failure path; it always returns `Ok`. If you need to test error handling for status checks, consider adding a `status_succeeds` flag to `MockConfig`.

This is fine if status failures aren't a test priority for now.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/git/traits.rs` around lines 351 - 361, The mock status() currently always returns Ok; add a boolean flag (e.g., status_succeeds) to MockConfig and use it in status(&self, repo_path: &Path) to optionally return an Err(GitError) when false (while still pushing the path into self.log.status_checks). Concretely: add status_succeeds: bool to MockConfig (default true), update any MockConfig constructors in tests, and modify the status method to check config.status_succeeds and return Err(GitError::... appropriate variant) when false, otherwise return the existing Ok(self.config.path_statuses.get(...).cloned().unwrap_or(self.config.default_status.clone())). Ensure you reference the RepoStatus and GitError types used by status so tests can control success/failure.
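A `status_succeeds` flag could look like the sketch below; the types are simplified stand-ins (a `String` status and a one-variant error) rather than the real `RepoStatus`/`GitError`:

```rust
/// Simplified stand-in for GitError; the real enum has more variants.
#[derive(Debug, PartialEq)]
enum GitError {
    StatusFailed(String),
}

/// Hypothetical mock config carrying the new failure flag.
struct MockConfig {
    status_succeeds: bool,
    default_status: String,
}

struct MockGit {
    config: MockConfig,
}

impl MockGit {
    /// status() now has an error path controlled by the flag.
    fn status(&self, repo_path: &str) -> Result<String, GitError> {
        if !self.config.status_succeeds {
            return Err(GitError::StatusFailed(format!("status failed for {repo_path}")));
        }
        Ok(self.config.default_status.clone())
    }
}

fn main() {
    let ok = MockGit {
        config: MockConfig { status_succeeds: true, default_status: "clean".into() },
    };
    assert_eq!(ok.status("/tmp/repo"), Ok("clean".to_string()));

    let bad = MockGit {
        config: MockConfig { status_succeeds: false, default_status: "clean".into() },
    };
    assert!(bad.status("/tmp/repo").is_err());
}
```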
45-76: Consider consistency between boolean flags and count fields.

The `RepoStatus` struct has both boolean flags (`is_uncommitted`, `has_untracked`) and count fields (`staged_count`, `unstaged_count`, `untracked_count`). The helpers `is_clean_and_synced()` and `can_fast_forward()` only check the boolean flags, ignoring the count fields.

This could lead to inconsistent states where:

- `has_untracked` is `false` but `untracked_count > 0`
- `is_uncommitted` is `false` but `staged_count > 0` or `unstaged_count > 0`

Consider either:

- Deriving the booleans from counts in the helper methods
- Adding invariant documentation clarifying the expected relationship
- Using computed properties instead of storing redundant state

♻️ Option: Derive booleans from counts in helpers

```diff
 impl RepoStatus {
     /// Returns true if the repo is clean and in sync with upstream.
     pub fn is_clean_and_synced(&self) -> bool {
-        !self.is_uncommitted && !self.has_untracked && self.ahead == 0 && self.behind == 0
+        self.staged_count == 0
+            && self.unstaged_count == 0
+            && self.untracked_count == 0
+            && self.ahead == 0
+            && self.behind == 0
     }

     /// Returns true if it's safe to do a fast-forward pull.
     pub fn can_fast_forward(&self) -> bool {
-        !self.is_uncommitted && self.ahead == 0 && self.behind > 0
+        self.staged_count == 0
+            && self.unstaged_count == 0
+            && self.ahead == 0
+            && self.behind > 0
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/git/traits.rs` around lines 45 - 76, RepoStatus currently stores both boolean flags (is_uncommitted, has_untracked) and counts (staged_count, unstaged_count, untracked_count) but is_clean_and_synced() and can_fast_forward() only check the booleans, causing possible inconsistencies; update these helpers (is_clean_and_synced and can_fast_forward) to derive their results from the count fields (treat repo as uncommitted if staged_count>0 || unstaged_count>0, and has_untracked if untracked_count>0) or to combine checks (e.g., (!is_uncommitted && staged_count==0 && unstaged_count==0)), and add a short doc comment on RepoStatus stating the invariant you enforce (or remove the redundant boolean fields and compute them on demand) so the struct cannot represent contradictory state.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (90)
`.claude/CLAUDE.md`, `.github/workflows/S1-Test-CI.yml`, `.github/workflows/S2-Release-GitHub.yml`, `.github/workflows/S3-Publish-Homebrew.yml`, `.github/workflows/S4-Publish-Crates.yml`, `.gitignore`, `AGENTS.md`, `Cargo.toml`, `conductor.json`, `docs/README.md`, `docs/plans/move-config-to-sync-folder.md`, `docs/plans/optimize-binary-aliases.md`, `docs/plans/path-selector-ux.md`, `rust-toolchain.toml`, `src/app/cli/mod.rs`, `src/app/mod.rs`, `src/app/tui/mod.rs`, `src/banner.rs`, `src/cache/discovery.rs`, `src/cache/mod.rs`, `src/cache/sync_history.rs`, `src/checks.rs`, `src/cli.rs`, `src/commands/init.rs`, `src/commands/mod.rs`, `src/commands/reset.rs`, `src/commands/setup.rs`, `src/commands/status.rs`, `src/commands/support/concurrency.rs`, `src/commands/support/mod.rs`, `src/commands/support/workspace.rs`, `src/commands/sync_cmd.rs`, `src/commands/workspace.rs`, `src/config/mod.rs`, `src/config/parser.rs`, `src/config/workspace.rs`, `src/config/workspace_manager.rs`, `src/config/workspace_policy.rs`, `src/config/workspace_store.rs`, `src/discovery.rs`, `src/domain/mod.rs`, `src/domain/repo_path_template.rs`, `src/errors/app.rs`, `src/errors/git.rs`, `src/git/shell.rs`, `src/git/traits.rs`, `src/infra/mod.rs`, `src/infra/storage/mod.rs`, `src/lib.rs`, `src/main.rs`, `src/operations/clone.rs`, `src/operations/sync.rs`, `src/output/mod.rs`, `src/output/printer.rs`, `src/output/progress/clone.rs`, `src/output/progress/discovery.rs`, `src/output/progress/mod.rs`, `src/output/progress/styles.rs`, `src/output/progress/sync.rs`, `src/setup/handler.rs`, `src/setup/mod.rs`, `src/setup/screens/auth.rs`, `src/setup/screens/complete.rs`, `src/setup/screens/confirm.rs`, `src/setup/screens/mod.rs`, `src/setup/screens/orgs.rs`, `src/setup/screens/path.rs`, `src/setup/screens/provider.rs`, `src/setup/screens/welcome.rs`, `src/setup/state.rs`, `src/setup/ui.rs`, `src/tui/app.rs`, `src/tui/backend.rs`, `src/tui/event.rs`, `src/tui/handler.rs`, `src/tui/mod.rs`, `src/tui/screens/dashboard.rs`, `src/tui/screens/mod.rs`, `src/tui/screens/settings.rs`, `src/tui/screens/sync.rs`, `src/tui/screens/system_check.rs`, `src/tui/screens/workspaces.rs`, `src/tui/ui.rs`, `src/types/repo.rs`, `src/workflows/mod.rs`, `src/workflows/status_scan.rs`, `src/workflows/sync_workspace.rs`, `tests/integration_test.rs`, `toolkit/Conductor/run.sh`, `toolkit/Conductor/setup.sh`
🚧 Files skipped from review as they are similar to previous changes (1)
- Cargo.toml
```yaml
on:
  workflow_dispatch:
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# List all workflow files
echo "=== GitHub Workflows ==="
find .github/workflows -type f -name "*.yml" -o -name "*.yaml" 2>/dev/null | sort

# Read the S1-Test-CI.yml file
echo -e "\n=== S1-Test-CI.yml content ==="
cat -n .github/workflows/S1-Test-CI.yml 2>/dev/null || echo "File not found"
```

Repository: ZAAI-com/git-same
Length of output: 3979
🏁 Script executed:
```shell
#!/bin/bash
# Read the other workflow files
echo "=== S2-Release-GitHub.yml (first 20 lines) ==="
head -20 .github/workflows/S2-Release-GitHub.yml 2>/dev/null

echo -e "\n=== S3-Publish-Homebrew.yml (first 20 lines) ==="
head -20 .github/workflows/S3-Publish-Homebrew.yml 2>/dev/null

echo -e "\n=== S4-Publish-Crates.yml (first 20 lines) ==="
head -20 .github/workflows/S4-Publish-Crates.yml 2>/dev/null

# Check for branch protection settings or required checks
echo -e "\n=== Looking for protection rules or config ==="
find . -maxdepth 2 -type f \( -name "*.json" -o -name "*.yml" -o -name "*.yaml" \) | xargs grep -l "required.*check\|branch.*protection" 2>/dev/null | head -5
```

Repository: ZAAI-com/git-same
Length of output: 1289
Add automatic CI triggers for pull requests and pushes to main.
S1-Test-CI only has workflow_dispatch, meaning comprehensive tests (formatting, clippy, multi-OS builds, code coverage, security audits) don't run automatically on PRs or pushes. This allows changes to merge without validation.
Suggested trigger update

```diff
 on:
+  pull_request:
+  push:
+    branches: [main]
   workflow_dispatch:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/S1-Test-CI.yml around lines 3 - 5, The workflow currently
only uses manual trigger (on: workflow_dispatch) so CI won't run automatically;
update the workflow's triggers to include automatic events by replacing or
extending the on: block to include pull_request (to run on PRs) and push with
branches: [main] (to run on pushes to main) so the S1-Test-CI job executes for
PRs and main pushes; locate the on: workflow_dispatch entry in
.github/workflows/S1-Test-CI.yml and add those events accordingly.
src/tui/event.rs (Outdated)
```rust
if event::poll(tick_rate).unwrap_or(false) {
    if let Ok(ev) = event::read() {
        let app_event = match ev {
            CtEvent::Key(key) => AppEvent::Terminal(key),
            CtEvent::Resize(w, h) => AppEvent::Resize(w, h),
            _ => continue,
        };
        if event_tx.send(app_event).is_err() {
            break;
        }
    }
} else {
    // Tick on timeout
    if event_tx.send(AppEvent::Tick).is_err() {
        break;
    }
```
Handle poll errors explicitly to prevent a hot loop.
Line 98 currently treats event::poll errors as false and immediately sends Tick, which can spin and flood the channel if polling keeps failing.
♻️ Suggested fix

```diff
-if event::poll(tick_rate).unwrap_or(false) {
+let has_event = match event::poll(tick_rate) {
+    Ok(v) => v,
+    Err(_) => break,
+};
+if has_event {
     if let Ok(ev) = event::read() {
         let app_event = match ev {
             CtEvent::Key(key) => AppEvent::Terminal(key),
             CtEvent::Resize(w, h) => AppEvent::Resize(w, h),
             _ => continue,
         };
         if event_tx.send(app_event).is_err() {
             break;
         }
     }
 } else {
     // Tick on timeout
     if event_tx.send(AppEvent::Tick).is_err() {
         break;
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/event.rs` around lines 98 - 113, event::poll currently treats Err as
false which causes immediate Tick sends and can spin; change the poll handling
to match on event::poll(tick_rate) and handle three cases: Ok(true) => read
event and map to AppEvent as before (using event::read and CtEvent branches),
Ok(false) => send AppEvent::Tick, Err(e) => handle the error explicitly (e.g.,
log via your logger, optionally send a dedicated AppEvent::Error or break the
loop, or sleep briefly) instead of treating it as a timeout; update references
in this block (event::poll, event::read, tick_rate, event_tx, AppEvent::Tick)
accordingly so polling failures no longer cause a hot loop.
```rust
enable_raw_mode()?;
let mut stdout = io::stdout();
execute!(stdout, EnterAlternateScreen)?;
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
```
Terminal state can be left broken if initialization fails early.
If `execute!(..., EnterAlternateScreen)?` or `Terminal::new(backend)?` errors, the function returns before the restore block (Lines 48-50), leaving raw mode / the alternate screen active.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/mod.rs` around lines 28 - 33, The initialization can leave the
terminal in raw/alternate-screen state if EnterAlternateScreen or Terminal::new
fails; ensure cleanup always runs by wrapping the steps that may fail
(enable_raw_mode, execute!(..., EnterAlternateScreen), CrosstermBackend::new,
Terminal::new) with a panic-safe/early-return-safe guard or explicit error-path
cleanup: after calling enable_raw_mode() and/or execute!(...,
EnterAlternateScreen)? ensure that on any subsequent error you call
execute!(stdout, LeaveAlternateScreen) and disable_raw_mode() (or install a
small RAII guard type that calls LeaveAlternateScreen and disable_raw_mode in
Drop) so the terminal is restored even when Terminal::new or later
initialization returns an error.
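The RAII-guard approach suggested above can be sketched dependency-free. In the real fix the guard's `Drop` would call crossterm's `disable_raw_mode()` and `execute!(stdout, LeaveAlternateScreen)`; here a closure stands in for those calls so the pattern itself is visible, and `init_terminal` is an illustrative function, not the project's.

```rust
// Guard that restores the terminal in Drop, so cleanup runs on normal
// return, early `?` return, and panic unwinding alike.
struct TerminalGuard<F: FnMut()> {
    restore: F,
}

impl<F: FnMut()> Drop for TerminalGuard<F> {
    fn drop(&mut self) {
        // In the real code: LeaveAlternateScreen + disable_raw_mode().
        (self.restore)();
    }
}

// `fail_early` simulates Terminal::new(backend) failing after raw mode
// was already enabled; `restored` records whether cleanup ran.
fn init_terminal(fail_early: bool, restored: &std::cell::Cell<bool>) -> Result<(), &'static str> {
    // enable_raw_mode()?; -- create the guard immediately after this succeeds
    let _guard = TerminalGuard { restore: || restored.set(true) };
    if fail_early {
        return Err("Terminal::new failed"); // guard still restores the terminal
    }
    Ok(())
}
```

The point of creating the guard immediately after `enable_raw_mode()` is that every subsequent fallible step is covered without duplicating cleanup on each error path.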
src/tui/mod.rs (outdated)

```rust
let mut terminal = Terminal::new(backend)?;

// Load workspaces
let workspaces = WorkspaceManager::list().unwrap_or_default();
```
Workspace load errors are swallowed and converted to an empty list.
WorkspaceManager::list().unwrap_or_default() hides configuration/storage failures and can send users into the wrong screen flow instead of surfacing a real error.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/mod.rs` at line 35, The call to WorkspaceManager::list() currently
swallows errors with unwrap_or_default(), so replace that with proper error
handling: call WorkspaceManager::list() and handle the Result instead of
defaulting to an empty vec — propagate the error up or convert it to a
user-facing error/diagnostic (e.g., return Err, map_err, or log and show a
message) so failures are surfaced; update the code around the workspaces
variable (the let workspaces = WorkspaceManager::list() usage) to either return
the Result from the function or explicitly handle and report the error to the
TUI flow instead of returning an empty list.
```rust
KeyCode::Char('c') => {
    // Open config directory in Finder / file manager
    if let Ok(path) = crate::config::Config::default_path() {
        if let Some(parent) = path.parent() {
            let _ = std::process::Command::new("open").arg(parent).spawn();
        }
```
Make the config-folder opener cross-platform.
Line 32 hardcodes `open`, so the `[c]` shortcut won't work on non-macOS systems.
🔧 Proposed fix

```diff
 KeyCode::Char('c') => {
     // Open config directory in Finder / file manager
     if let Ok(path) = crate::config::Config::default_path() {
         if let Some(parent) = path.parent() {
-            let _ = std::process::Command::new("open").arg(parent).spawn();
+            #[cfg(target_os = "macos")]
+            let mut cmd = std::process::Command::new("open");
+            #[cfg(target_os = "linux")]
+            let mut cmd = std::process::Command::new("xdg-open");
+            #[cfg(target_os = "windows")]
+            let mut cmd = {
+                let mut c = std::process::Command::new("cmd");
+                c.args(["/C", "start", ""]);
+                c
+            };
+            let _ = cmd.arg(parent).spawn();
         }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/screens/settings.rs` around lines 28 - 33, The current config-folder
opener uses the macOS-only "open" command; update the KeyCode::Char('c') handler
to pick the platform-specific opener: use "open" on macOS, "explorer" on
Windows, and "xdg-open" on Linux/other Unix, and then spawn that command with
the parent path as the argument; locate the code around Config::default_path()
and the KeyCode::Char('c') match arm in settings.rs and replace the hardcoded
"open" invocation with a small platform dispatch (cfg! or match on
std::env::consts::OS) that chooses the right command and spawns it, propagating
or logging any spawn errors as before.
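The platform dispatch can also be factored into a pure helper keyed on `std::env::consts::OS`, which keeps the match arm small and makes the selection testable. This is a sketch: `opener_for` is an illustrative name, and the command names are the conventional openers (`open`, `xdg-open`, `cmd /C start`), not something git-same currently defines.

```rust
// Given an OS name as reported by std::env::consts::OS, return the
// file-manager opener command and its leading arguments. The caller
// appends the directory path and spawns the command.
fn opener_for(os: &str) -> (&'static str, &'static [&'static str]) {
    match os {
        "macos" => ("open", &[]),
        "windows" => ("cmd", &["/C", "start", ""]),
        // xdg-open covers Linux and most other Unix desktops.
        _ => ("xdg-open", &[]),
    }
}
```

At the call site the handler would do roughly `let (cmd, args) = opener_for(std::env::consts::OS); std::process::Command::new(cmd).args(args).arg(parent).spawn()`, logging any spawn error as before.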
And Restructure Providers

@coderabbitai review

✅ Actions performed: review triggered.
Actionable comments posted: 16
♻️ Duplicate comments (2)
.github/workflows/S1-Test-CI.yml (1)
3-5: ⚠️ Potential issue | 🟠 Major

Enable automatic CI triggers for pull requests and `main` pushes. Lines 3-5 currently only allow manual execution, so required CI checks won’t run automatically during normal development flow.

Suggested trigger update

```diff
 on:
+  pull_request:
+  push:
+    branches: [main]
   workflow_dispatch:
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/S1-Test-CI.yml around lines 3 - 5, update the GitHub Actions triggers so the CI runs automatically by adding pull_request and push events to the workflow 'on' configuration: replace the current manual-only 'workflow_dispatch' with an 'on' block that includes 'pull_request' and 'push' (with branches set to 'main') alongside 'workflow_dispatch' so PRs and pushes to main trigger the workflow automatically.

src/checks.rs (1)
76-139: ⚠️ Potential issue | 🟠 Major

gh CLI checks remain marked as critical despite alternative auth methods.

Lines 96, 104, 117, 128, and 136 mark GitHub CLI checks as `critical: true`. However, per previous review feedback, the codebase supports alternative authentication via environment tokens. Users with valid `token_env` auth would see these as blocking failures even though they don't need gh CLI.

Consider either:

- Making these checks non-critical (`critical: false`) since they're informational when alternatives exist
- Adding an auth context parameter to conditionally set criticality based on the selected method

🛠️ Suggested fix: Make gh checks non-critical

```diff
 fn check_gh_installed() -> CheckResult {
     if gh_cli::is_installed() {
         // ... success case ...
         CheckResult {
             name: "GitHub CLI".to_string(),
             passed: true,
             message: version,
             suggestion: None,
-            critical: true,
+            critical: false, // Alternative auth methods available
         }
     } else {
         CheckResult {
             name: "GitHub CLI".to_string(),
             passed: false,
             message: "not found".to_string(),
             suggestion: Some("Install from https://cli.github.com/".to_string()),
-            critical: true,
+            critical: false, // Alternative auth methods available
         }
     }
 }
```

Apply similar changes to `check_gh_authenticated()`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/checks.rs` around lines 76 - 139, The GitHub CLI checks are currently marked critical; update both check_gh_installed and check_gh_authenticated so their returned CheckResult structs set critical: false in all branches (the installed branch and the "not found" branch in check_gh_installed, and the early-return when gh not installed plus the authenticated/not authenticated branches in check_gh_authenticated), keeping messages/suggestions unchanged; locate usages of gh_cli::is_installed, gh_cli::is_authenticated and gh_cli::get_username to find every returned CheckResult and flip critical: true to critical: false.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/S3-Publish-Homebrew.yml:
- Around line 118-120: The git commit will fail when there are no changes;
replace the unconditional git commit command with a guarded commit that only
runs if there are changes, or allow empty commits. For example, change the line
containing git commit -m "Update ${FORMULA_NAME} to ${{
steps.version.outputs.version }}" to a conditional such as using git diff
--quiet --exit-code || git commit -m "Update ${FORMULA_NAME} to ${{
steps.version.outputs.version }}" (or alternatively add --allow-empty to the git
commit command) so the step won't fail when the formula is identical.
In `@docs/plans/move-config-to-sync-folder.md`:
- Around line 18-25: The fenced code block in the markdown (the triple-backtick
block showing the repo tree) is missing a language tag which triggers
markdownlint MD040; update the opening fence from ``` to ```text (i.e., add the
language tag "text") so the block becomes ```text and the linter will accept
it—modify the fenced block that begins with the repo tree diagram in
docs/plans/move-config-to-sync-folder.md.
In `@docs/plans/remove-global-providers-config-opus.md`:
- Around line 51-57: The table entry is incorrect: do not remove AuthMethod and
ProviderEntry from the public exports in src/config/mod.rs — keep them in the
pub use provider_config::{AuthMethod, ProviderEntry, ...} export list to prevent
E0446; update the markdown table row to reflect that the mod.rs change is an
update to doc examples only (not removal of these symbols) so the table matches
Step 2's instruction to retain AuthMethod and ProviderEntry in the public
exports.
In `@src/auth/gh_cli_tests.rs`:
- Around line 26-33: The test in src/auth/gh_cli_tests.rs currently asserts that
the token variable starts with legacy GitHub prefixes (ghp_, gho_, ghu_, ghr_,
ghs_) but omits the fine-grained PAT prefix; update the assertion to also accept
the "github_pat_" prefix by adding a check token.starts_with("github_pat_")
alongside the existing starts_with checks so the GH CLI test accepts
fine-grained personal access tokens.
In `@src/commands/status_tests.rs`:
- Around line 20-21: The test currently only checks that run(&args, &config,
&output).await returned Err; instead assert the specific error for a nonexistent
workspace: unwrap the Err (e.g., let err = result.unwrap_err()) and assert its
variant/message/kind matches the expected one for a missing workspace (use the
concrete error enum/variant your code uses, or matches! with the expected
pattern). Update the assertion around the result variable in this test in
src/commands/status_tests.rs to compare against the specific error (referencing
run, args, config, output, and result) rather than using
assert!(result.is_err()).
In `@src/commands/status.rs`:
- Around line 95-97: The success summary is wrong because repos that hit Err(e)
in the git.status branch are excluded from counts; modify the Err(e) arm in the
status command (the branch that currently calls output.verbose(&format!(" {}/{}
- error: {}", org, name, e))) to increment a new error_count variable (e.g.,
error_count += 1) and keep logging the error as before, then update the final
summary logic (the code that prints the "everything clean/up-to-date" message)
to check error_count > 0 and suppress or replace the success message with a
summary that notes some repos failed to evaluate when any errors occurred.
Ensure the new error_count is initialized before iterating repos and used
alongside the existing success/fail counters when composing the final output.
In `@src/commands/workspace_tests.rs`:
- Around line 29-35: The test test_list_empty is swallowing errors by ignoring
list(&config, &output) — change it to assert the result so failures propagate
(e.g., assert!(list(&config, &output).is_ok()) or unwrap the Result) and keep
using Config::default() and quiet_output(); for a better fix, make the test
deterministic by creating an isolated temp workspace/environment before calling
list so the behavior is predictable. Ensure you update the invocation sites in
test_list_empty to check the return value from list rather than discarding it.
In `@src/commands/workspace.rs`:
- Around line 44-47: The println! calls that print the workspace lines (the
formatted string using marker, ws.base_path, provider_label, org_info,
last_synced) and the other stdout writes on lines 52–53 must be replaced to use
the command's output abstraction instead of raw println!; locate the code that
constructs the line with marker, ws.base_path, provider_label, org_info,
last_synced and replace the println! invocation with the command/output object's
print/println or write_fmt helper (e.g., output.println(...),
opts.output.write_fmt(...), or whatever the command struct exposes) so output
modes (quiet/json) are honored consistently.
In `@src/config/parser.rs`:
- Around line 287-289: The code builds new_line by interpolating workspace names
directly which can produce invalid TOML for names with
quotes/backslashes/newlines; update the match that sets new_line so the
workspace name is properly TOML-escaped instead of raw-interpolated (replace the
Some(name) arm with something like using toml::to_string(&name) or
toml::ser::to_string(&name) and then format!("default_workspace = {}",
escaped_name)) so the value is correctly quoted/escaped before writing; adjust
error handling for the serializer call as needed.
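The escaping the item above calls for can be sketched by hand, though in the real fix delegating to the `toml` crate's serializer is preferable. This minimal version covers only the characters the comment worries about (quotes, backslashes, newlines) plus other control characters, per TOML's basic-string escape rules; `toml_basic_string` is an illustrative name.

```rust
// Quote a workspace name as a TOML basic string, escaping characters
// that would otherwise break or change the document.
fn toml_basic_string(name: &str) -> String {
    let mut out = String::with_capacity(name.len() + 2);
    out.push('"');
    for c in name.chars() {
        match c {
            '"' => out.push_str("\\\""),
            '\\' => out.push_str("\\\\"),
            '\n' => out.push_str("\\n"),
            '\r' => out.push_str("\\r"),
            '\t' => out.push_str("\\t"),
            // Remaining control characters use TOML's \uXXXX escape.
            c if (c as u32) < 0x20 => out.push_str(&format!("\\u{:04X}", c as u32)),
            c => out.push(c),
        }
    }
    out.push('"');
    out
}
```

Usage would be along the lines of `format!("default_workspace = {}", toml_basic_string(name))`, so the written value is always a valid TOML string regardless of the workspace name.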
In `@src/config/workspace_store_tests.rs`:
- Around line 11-20: with_temp_home currently restores HOME only on normal
return; make it panic-safe by introducing a small guard type (e.g.,
TempHomeGuard) inside with_temp_home that captures original_home when you set
HOME and implements Drop to restore or remove HOME accordingly; create the guard
right after setting HOME and then call f() normally so the guard's Drop runs
during unwinding as well, ensuring HOME is restored even if f() panics (refer to
with_temp_home and the guard's Drop impl).
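The guard type the item above describes can be sketched as follows. `TempHomeGuard` is the name suggested in the prompt; the rest is a generic RAII pattern: capture the original `HOME` when setting the override, and restore (or remove) it in `Drop`, which runs during panic unwinding as well as on normal return.

```rust
use std::env;

// Restores HOME to its pre-test value when dropped, even if the
// test body panics mid-way.
struct TempHomeGuard {
    original: Option<String>,
}

impl TempHomeGuard {
    fn set(new_home: &str) -> Self {
        let original = env::var("HOME").ok();
        env::set_var("HOME", new_home);
        TempHomeGuard { original }
    }
}

impl Drop for TempHomeGuard {
    fn drop(&mut self) {
        match self.original.take() {
            Some(home) => env::set_var("HOME", home),
            None => env::remove_var("HOME"),
        }
    }
}
```

Inside `with_temp_home`, the guard would be created right after the temp directory is set as `HOME`, and `f()` called normally afterwards; no explicit restore code is needed on any exit path.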
In `@src/main.rs`:
- Around line 72-75: The code currently ignores errors from
std::fs::create_dir_all(parent) and std::fs::write(&default_path,
Config::default_toml()), so update the bootstrap logic around parent and
default_path to properly handle I/O errors: replace the discarded results with
error-aware calls (use the ? operator to propagate errors from create_dir_all
and write, or explicitly match their Results and return/log and exit on Err),
ensuring any failure to create the directory or write Config::default_toml() is
surfaced (or logged with context including default_path) instead of being
silently ignored.
In `@src/operations/clone.rs`:
- Around line 213-214: The spawned clone tasks currently lose per-repo failure
details when a task panics because only the summary is updated; to fix this wrap
each task or its join in panic-aware handling (use std::panic::catch_unwind for
thread tasks or match JoinHandle::await/handle.join() for async/blocking tasks)
and, on Err (panic), create and insert a corresponding CloneResult::Failure (or
equivalent variant) into the results vector for that repository and invoke the
repo-specific error callback so the repo's failure is recorded and callbacks
run; locate the code that builds handles (variable handles), the results Vec
(results) and the CloneResult type to add this join/error branch and ensure
every spawned task path updates results and calls the error callback.
In `@src/operations/sync.rs`:
- Around line 233-245: The code only checks path.exists() before proceeding to
status/fetch/pull, so non-git directories flow into failing operations; update
the pre-check to explicitly detect whether the path is a git repository (e.g.,
check for a .git directory or attempt a lightweight git open like
git2::Repository::open_bare/open) and if not return a SyncResult for
local_repo.repo with result: OpResult::Skipped("not a git
repository".to_string()), keeping the same other fields (path, had_updates:
false, status/fetch/pull_result: None); make the same change for the
corresponding async and sync branches (the other blocks referenced around lines
247-285 and 455-483) so all early exits uniformly skip non-repo directories
before calling status/fetch/pull.
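The pre-check the item above asks for can be as small as testing for a `.git` entry. This is a sketch (`is_git_repo` is an illustrative name); a stricter alternative, as the prompt notes, is attempting a lightweight repository open via the project's git backend.

```rust
use std::path::Path;

// A directory counts as a git repository if it contains a `.git` entry:
// a directory for normal clones, a file for worktrees and submodules,
// which is why this checks exists() rather than is_dir().
fn is_git_repo(path: &Path) -> bool {
    path.join(".git").exists()
}
```

With this in place, the sync branches can return `OpResult::Skipped("not a git repository")` for existing-but-plain directories before any status/fetch/pull is attempted.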
In `@src/output/progress/sync.rs`:
- Around line 96-100: The on_pull_complete handler (in
src/output/progress/sync.rs) incorrectly increments updates_count when
PullResult.success is true, causing counts to diverge from on_fetch_complete
which only increments when result.updated is true; update the logic so pulls
only increment updates_count when the pull actually changed the repo (e.g., add
a boolean field like PullResult.updated or PullResult.had_changes to the
PullResult struct and use that in on_pull_complete instead of success), or
alternatively change the increment condition to check an existing indicator of
applied changes (for example PullResult.fast_forward if that semantically
matches); adjust the handler in on_pull_complete to mirror on_fetch_complete so
updates_count.fetch_add(1, Ordering::SeqCst) only runs when real updates
occurred.
In `@src/provider/mock.rs`:
- Around line 218-229: The code passes the full self.user_repos.len() to
progress.on_personal_repos_complete even though you only add the
filtered/deduped set; fix by first collecting the filtered/deduped iterator into
a Vec (the existing personal_filtered variable), compute its length, push each
item into repos with OwnedRepo::new(&self.username, repo.clone()), and then call
progress.on_personal_repos_complete(personal_filtered.len()) so the reported
count matches what was actually added (use options.should_include and the
deduplication check when building personal_filtered).
In `@src/setup/state.rs`:
- Around line 266-281: The deduplication check uses the unexpanded candidate
string, causing duplicates when suggestions store expanded paths; update the
check to compare expanded paths instead—use the result of shellexpand::tilde
(expanded.as_ref()) or convert both sides to std::path::Path and compare
normalized/expanded paths so suggestions.iter().any(|s| ... ) tests against the
same expanded path representation; adjust the closure and any PathSuggestion
comparisons accordingly (reference: candidate, shellexpand::tilde, expanded,
suggestions, PathSuggestion).
---
Duplicate comments:
In @.github/workflows/S1-Test-CI.yml:
- Around line 3-5: Update the GitHub Actions triggers so the CI runs
automatically by adding pull_request and push events to the workflow 'on'
configuration: replace the current manual-only 'workflow_dispatch' with an 'on'
block that includes 'pull_request' and 'push' (with branches set to 'main')
alongside 'workflow_dispatch' so PRs and pushes to main trigger the workflow
automatically.
In `@src/checks.rs`:
- Around line 76-139: The GitHub CLI checks are currently marked critical;
update both check_gh_installed and check_gh_authenticated so their returned
CheckResult structs set critical: false in all branches (the installed branch
and the "not found" branch in check_gh_installed, and the early-return when gh
not installed plus the authenticated/not authenticated branches in
check_gh_authenticated), keeping messages/suggestions unchanged; locate usages
of gh_cli::is_installed, gh_cli::is_authenticated and gh_cli::get_username to
find every returned CheckResult and flip critical: true to critical: false.
---
Nitpick comments:
In @.claude/CLAUDE.md:
- Around line 1-112: CLAUDE.md largely duplicates AGENTS.md; update CLAUDE.md to
remove redundant sections and either (a) replace duplicated content with a short
pointer to AGENTS.md or (b) consolidate unique content into one canonical file
and have the other reference it; specifically edit the top-level headings and
the "Build & Test Commands", "Architecture", "TUI module", "Formatting",
"Testing", and "CI/CD Workflows" sections in CLAUDE.md so they contain only a
brief intro plus a reference link to AGENTS.md (or vice versa), keeping any
unique lines that do not exist in AGENTS.md and ensuring links reference
AGENTS.md by filename.
In @.github/workflows/S2-Release-GitHub.yml:
- Around line 76-137: The build-release-assets job is running for all triggers
while publish-release only runs on tag pushes; add a tag-only guard to the job
to avoid building unused assets. In the job definition for build-release-assets,
add an if condition such as startsWith(github.ref, 'refs/tags/') (or
github.ref_type == 'tag') so the job only executes on tag pushes; reference the
job name build-release-assets to locate where to add this check. Ensure the
condition is added at the job level (next to name/needs/runs-on) so the matrix
build, cargo build step, and artifact upload are skipped unless the workflow was
triggered by a tag.
In @.github/workflows/S3-Publish-Homebrew.yml:
- Around line 62-99: The heredoc written by cat > formula.rb << 'FORMULA' embeds
leading spaces into formula.rb; change the heredoc to a dedented form by
replacing << 'FORMULA' with a dedenting heredoc (e.g., <<-FORMULA) and adjust
the indentation to use tabs (so the shell strips them), or alternatively pipe
the heredoc through a whitespace-stripper (sed/awk) to remove the leading
indentation before writing; update the heredoc start (cat > formula.rb
<<-FORMULA) and/or add a dedent step so the generated formula.rb has no leading
indentation.
In @.github/workflows/S4-Publish-Crates.yml:
- Around line 21-24: Add a pre-check step before the existing "Publish" step
that runs cargo publish --dry-run and verifies it succeeds before running the
actual cargo publish; specifically, introduce a step (referencing the existing
"Publish" step that currently runs cargo publish and uses CARGO_REGISTRY_TOKEN)
which executes cargo publish --dry-run and fails the job if the dry-run
indicates the crate/version is already published or other errors, only then
proceed to the real publish step using the same CARGO_REGISTRY_TOKEN env
variable.
In `@docs/plans/path-selector-ux.md`:
- Line 93: Replace the vague phrase "Very small change" with a clearer, more
specific wording such as "Small implementation change" in the trade-off sentence
that currently reads "Very small change, but less flexible than full navigator
mode" so the tone is crisper and more precise when describing the magnitude of
the change.
In `@docs/plans/remove-global-providers-config.md`:
- Around line 30-35: Collapse the repetitive "Remove …" bullets into a single
grouped block by adding a short lead-in sentence (e.g., "Remove global providers
configuration and related helpers/entries:") followed by a compact list of the
specific items to remove — reference the symbols to keep it precise: remove
providers: Vec<ProviderEntry> from Config, default_providers(),
enabled_providers(), provider-specific validation (empty-check and per-provider
loop), the [[providers]] block in Config::default_toml(), and the unused
ProviderEntry import; this keeps the plan concise while still naming the exact
functions/types to change.
In `@docs/README.md`:
- Line 208: The flag name --no-skip-uncommitted is a confusing double-negative;
rename the flag to a positive, clearer name (e.g., --include-uncommitted or
--sync-uncommitted) throughout the codebase and docs by replacing occurrences of
"--no-skip-uncommitted" in CLI parsing logic (the flag definition/usage), help
text in docs/README.md, and any references in functions or tests that check this
option so behavior and messaging remain consistent.
- Around line 1-7: Add a labeled "Quick Context" one-line summary by placing a
small labeled paragraph below the "# Git-Same" heading that uses the existing
one-liner "Mirror GitHub structure /orgs/repos/ to local file system" as the
content, and add a new "Common Modifications" section later in the README that
briefly documents how to extend Git-Same (examples: adding new providers,
implementing custom filters, changing sync targets) and links to relevant code
entry points or configuration patterns; update the README title block around
"Git-Same" and the existing one-liner to include the label and append the new
"Common Modifications" section explaining steps and typical files to edit.
In `@src/auth/ssh_tests.rs`:
- Around line 4-16: Replace the weak length-only assertion with a consistency
check between has_ssh_keys() and get_ssh_key_files(): call has_ssh_keys() (from
test_has_ssh_keys_detection or inline in test_get_ssh_key_files), call
get_ssh_key_files(), then assert that has_ssh_keys() equals !keys.is_empty()
(i.e. if has_ssh_keys() is true there must be at least one entry in keys, and if
false keys must be empty) so the tests fail on regressions where detection and
file enumeration diverge.
In `@src/auth/ssh.rs`:
- Around line 40-67: The duplicated list of SSH key filenames should be
extracted into a module-level constant (e.g., SSH_KEY_FILENAMES) and both
has_ssh_keys() and get_ssh_key_files() should reference that constant instead of
declaring their own key_files arrays; locate the key_files usage in
has_ssh_keys() and the same list in get_ssh_key_files(), create a public or
private static slice/array constant at top of the module, replace the local
key_files declarations with that constant, and update any types/usages
accordingly.
- Around line 42-45: The code currently retrieves the home directory via
std::env::var("HOME") (variable home) which is unset on Windows; update both
occurrences (the HOME lookup around where "home" is defined and the similar
block at lines ~71-74) to first try std::env::var("HOME"), then fall back to
std::env::var("USERPROFILE"), and if still missing use a portable resolver such
as dirs::home_dir() (or std::env::var_os combined with PathBuf) to obtain a
PathBuf; ensure the function signatures that use "home" accept a PathBuf/string
and handle the case where no home is found by returning false or an appropriate
error as before.
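The fallback chain described above can be sketched with the standard library alone. `home_dir_portable` is an illustrative name; the further fallback to `dirs::home_dir()` mentioned in the prompt is assumed, not shown, since it requires an external crate.

```rust
use std::env;
use std::path::PathBuf;

// Try HOME first (Unix), then USERPROFILE (Windows); None if neither
// yields a non-empty value. var_os avoids failing on non-UTF-8 paths.
fn home_dir_portable() -> Option<PathBuf> {
    env::var_os("HOME")
        .or_else(|| env::var_os("USERPROFILE"))
        .filter(|v| !v.is_empty())
        .map(PathBuf::from)
}
```

Callers like `has_ssh_keys()` would then return `false` (or a suitable error) when `home_dir_portable()` is `None`, as they do today for a missing `HOME`.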
In `@src/cache/discovery_tests.rs`:
- Around line 61-80: Increase the timing margins in the flaky tests: in
test_cache_validity and test_cache_age where DiscoveryCache::new is created,
replace the short 100ms/50ms sleeps and checks with larger gaps (for example
sleep 200ms and check is_valid with a 100ms TTL, and use sleep 200ms before
calling age_secs) to make CI more stable; update the assertions accordingly
(keep is_valid(false) expectation and allow age_secs to accept 0 or 1) and
reference the same functions/methods DiscoveryCache::new, is_valid, and age_secs
when applying the change.
In `@src/cache/sync_history.rs`:
- Around line 54-79: The save method currently uses fs::write(&self.path, &json)
which can corrupt the file mid-write; change SyncHistory::save to perform an
atomic write by creating a temporary file in the same directory (e.g.
self.path.with_extension("tmp") or use a NamedTempFile in the same parent),
write the serialized json to that temp file, flush and sync it, then atomically
rename (std::fs::rename) the temp file to self.path; keep the existing parent
dir creation, continue using SyncHistoryFile and MAX_HISTORY_ENTRIES logic, and
ensure errors are propagated as before.
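The write-temp-then-rename scheme the item above describes can be sketched with std alone (`atomic_write` is an illustrative name, not the project's `SyncHistory::save`). Renaming within the same directory is atomic on POSIX filesystems, so readers never observe a half-written file.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write contents to a sibling temp file, sync it to disk, then rename
// it over the target path in one atomic step.
fn atomic_write(path: &Path, contents: &str) -> std::io::Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
    }
    let tmp = path.with_extension("tmp");
    {
        let mut file = fs::File::create(&tmp)?;
        file.write_all(contents.as_bytes())?;
        file.sync_all()?; // flush OS buffers before the rename
    }
    fs::rename(&tmp, path)
}
```

In `SyncHistory::save`, the serialized `SyncHistoryFile` JSON would be passed here instead of going through `fs::write(&self.path, &json)` directly, with errors propagated exactly as before.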
In `@src/checks_tests.rs`:
- Around line 41-46: The test in checks_tests.rs currently asserts exact length
and ordering of the results vector (results and results[i].name), which is
brittle; change it to assert that the required check names (e.g., "Git", "GitHub
CLI", "GitHub Auth", "SSH Keys", "SSH GitHub") are present in the results
regardless of order or extra items—collect results.iter().map(|r|
r.name.clone()) into a set or vec and assert that each required name is
contained (and optionally that at least N required names exist) instead of
asserting results.len() and exact index positions.
In `@src/commands/init_tests.rs`:
- Around line 26-42: The two tests test_init_creates_config_dir and
test_init_creates_parent_dirs are overlapping—either consolidate them into a
single parameterized test or make their intent explicit by using clearly
distinct path depths and assertions; for example, keep
test_init_creates_config_dir using a single-level target (e.g.,
"git-same/config.toml") and rename/test_init_creates_parent_dirs to target a
deeper nested path (e.g., "a/b/c/git-same/config.toml"), or merge both into one
table-driven test that calls run(&args, &output).await for multiple path cases
and asserts that each parent directory exists and is a directory; update test
names and comments accordingly to remove redundancy (refer to the test functions
test_init_creates_config_dir and test_init_creates_parent_dirs and the
run/InitArgs usage to locate where to change).
In `@src/commands/reset.rs`:
- Around line 342-356: The prompt_number function currently loops indefinitely
and can block if stdin is closed/EOF; modify prompt_number (the loop reading
from stdin) to break out and return an Err when EOF is encountered or after a
configurable max_retry count (e.g., add a max_retries parameter or constant)
instead of looping forever; detect EOF by checking read_line result (zero bytes)
or stream error and map that to a meaningful error variant returned by Result,
and decrement/track retries on invalid input so the function returns an Err
after exceeding retries rather than infinite looping.
In `@src/commands/sync_cmd_tests.rs`:
- Around line 18-39: The test run_returns_error_when_no_workspace_is_configured
currently restores the HOME env after assertions which may never run on panic;
add a RAII guard struct (e.g., HomeGuard with an original: Option<String> field
and a Drop impl that restores or removes HOME) and instantiate it at the start
of the test (alongside acquiring HOME_LOCK) so that HOME is always restored even
if run(...) or the assertions panic; update the test to create the tempdir and
set HOME after creating the HomeGuard and leave the rest of the assertions using
run(...) unchanged.
- Around line 41-64: The test run_returns_error_for_unknown_workspace_name
manually restores HOME and can leak on panic; replace the manual
original_home/temp restoration with an RAII guard so HOME is always restored:
create or reuse a HomeEnvGuard (or TempEnv helper) that captures the original
HOME (original_home) in its constructor, sets HOME to temp.path(), and restores
or removes HOME in Drop, then in the test instantiate that guard (instead of the
current manual set_var/remove_var block) so the cleanup happens automatically
even on panic; update only this test to use HomeEnvGuard and remove the explicit
restoration code.
In `@src/config/provider_config.rs`:
- Around line 94-98: The validate method on ProviderConfig currently returns
Ok(()) unconditionally; update it to perform real validation or mark it
explicitly as TODO—either implement checks (e.g., inside fn validate(&self)
ensure required fields like api_url are present when self.provider ==
Provider::GitHubEnterprise, verify token/non-empty strings, and return Err with
descriptive messages on failure) or add a clear TODO comment and panic or remove
the method until needed; locate the validate function in provider_config.rs and
modify the method body accordingly, referencing the validate method and the
Provider enum variants (e.g., GitHubEnterprise) when adding checks or comments.
In `@src/config/workspace_manager_tests.rs`:
- Around line 3-8: Add additional unit tests for
WorkspaceManager::name_from_path to cover edge conditions: include paths with
trailing slashes (e.g., "/home/user/github/"), paths containing spaces or
special characters, different ProviderKind variants (e.g.,
ProviderKind::GitLab), root path "/" and relative paths (e.g., "./repo"), and
assert expected outputs for each; update or add tests in
workspace_manager_tests.rs using the same test harness pattern to ensure these
cases are validated alongside the existing test.
In `@src/config/workspace_policy.rs`:
- Around line 67-78: The code in WorkspaceConfig::resolve swallows the original
error from WorkspaceStore::load by using or_else(|_| ...); change that to
capture the first error (e.g., let err1 = e) and then attempt
WorkspaceStore::load_by_path; if load_by_path succeeds return it, otherwise
return a combined error or rewrap err1 (so the original "workspace not found"
message is preserved) — replace the or_else(|_|
WorkspaceStore::load_by_path(value)) with a closure that captures the first
error and either returns the second result or returns/chain-wrapps err1 (or
both) so callers can see the original load error; reference functions: resolve,
WorkspaceStore::load, WorkspaceStore::load_by_path, and the or_else usage.
In `@src/config/workspace_store.rs`:
- Around line 162-200: validate_workspace_name currently allows a single "."
because it only checks for ParentDir but not CurDir; update the validation in
validate_workspace_name to reject "." (or any path component that is
Component::CurDir) by adding Component::CurDir to the components() check or
explicitly returning an AppError::config when name == "." so that the function
treats "." as invalid and prevents creating a workspace at the config root.
In `@src/config/workspace.rs`:
- Around line 118-120: Change the last_synced field from Option<String> to a
proper timestamp type (either Option<i64> for Unix seconds or
Option<chrono::DateTime<Utc>>) and update serde handling accordingly: replace
pub last_synced: Option<String> with pub last_synced: Option<i64> (or
Option<DateTime<Utc>>) and add appropriate serde helpers (e.g., serde(with =
"ts_seconds_option") for chrono or default serialization for i64) so
serialization/deserialization remains compatible; then update any code that
reads/writes Workspace::last_synced (display formatting, comparisons, and tests)
to convert between the timestamp type and human-readable strings where needed.
In `@src/discovery_tests.rs`:
- Line 47: The test uses a hard-coded absolute path "/nonexistent" when calling
orchestrator.plan_clone(Path::new("/nonexistent"), repos, "github", &git); which
can be flaky; replace that with a deterministic temp-based path (create a
TempDir via tempfile::tempdir() or use std::env::temp_dir() and join a
non-existent child) and pass the joined child path (which you do not create)
into plan_clone so the path is guaranteed to be absent but environment-agnostic;
update both occurrences that call plan_clone with "/nonexistent" to use the same
temp-derived non-existent child path.
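A small sketch of the deterministic path construction described above, using only the standard library. The joined child is never created, and including the process id keeps parallel test runs from colliding (the name prefix is an illustrative choice):

```rust
use std::env;
use std::path::PathBuf;

// Build a path that is guaranteed absent without hard-coding "/nonexistent":
// join a child onto the platform temp dir and never create it.
fn nonexistent_path() -> PathBuf {
    env::temp_dir().join(format!("git-same-test-absent-{}", std::process::id()))
}

fn main() {
    let path = nonexistent_path();
    assert!(!path.exists());
}
```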
In `@src/discovery.rs`:
- Around line 49-51: compute_path and scan_local recreate RepoPathTemplate from
self.structure on every call; add a cached compiled template to
DiscoveryOrchestrator (e.g., a field such as compiled_template: RepoPathTemplate
or a lazy wrapper like OnceCell<RepoPathTemplate>) that is initialized once (in
DiscoveryOrchestrator::new or lazily on first use) and then use that cached
template inside compute_path and scan_local instead of calling
RepoPathTemplate::new(self.structure.clone()) each time, keeping existing method
signatures (compute_path, scan_local) but replacing their template construction
with the cached compiled_template.
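The lazy-caching pattern can be sketched with std's `OnceLock` (the stable analogue of `once_cell::sync::OnceCell`). The `RepoPathTemplate` and orchestrator below are simplified stand-ins for the real types:

```rust
use std::sync::OnceLock;

struct RepoPathTemplate {
    structure: String,
}

impl RepoPathTemplate {
    fn new(structure: String) -> Self {
        // Imagine the expensive template compilation happening here;
        // with OnceLock it runs at most once.
        RepoPathTemplate { structure }
    }
}

struct DiscoveryOrchestrator {
    structure: String,
    compiled_template: OnceLock<RepoPathTemplate>,
}

impl DiscoveryOrchestrator {
    fn new(structure: String) -> Self {
        DiscoveryOrchestrator { structure, compiled_template: OnceLock::new() }
    }

    // compute_path / scan_local would call this instead of
    // RepoPathTemplate::new(self.structure.clone()) on every invocation.
    fn template(&self) -> &RepoPathTemplate {
        self.compiled_template
            .get_or_init(|| RepoPathTemplate::new(self.structure.clone()))
    }
}

fn main() {
    let orch = DiscoveryOrchestrator::new("{org}/{repo}".to_string());
    let first = orch.template() as *const RepoPathTemplate;
    let second = orch.template() as *const RepoPathTemplate;
    assert_eq!(first, second); // compiled once, same instance reused
    assert_eq!(orch.template().structure, "{org}/{repo}");
}
```

Because `get_or_init` takes `&self`, the existing `compute_path`/`scan_local` signatures can stay unchanged.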
In `@src/domain/repo_path_template.rs`:
- Around line 72-89: The sanitize_component function currently only replaces
slashes and ".." but should defensively remove or normalize other
filesystem-problematic characters: update sanitize_component to strip NUL bytes,
remove or replace colons (':'), remove control characters (<= 0x1F and 0x7F),
trim trailing dots and spaces (Windows), collapse repeated underscores, and
ensure the result is not empty (fall back to "_"); keep existing behavior of
replacing path separators and ".." and apply these additional normalizations
after trimming the input in sanitize_component to produce a safe single path
component.
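A std-only sketch of the hardened sanitizer the prompt outlines. This is one possible reading of the normalization list (note that collapsing underscore runs also collapses the `__` produced from `..`, which the real implementation may want to avoid):

```rust
fn sanitize_component(value: &str) -> String {
    let trimmed = value.trim();
    let mut out: String = trimmed
        .replace(['/', '\\'], "_")
        .replace("..", "__")
        .chars()
        // Drop NUL, colons, and control characters (<= 0x1F and 0x7F).
        .filter(|c| *c != ':' && !c.is_control())
        .collect();
    // Windows rejects trailing dots and spaces in path components.
    while out.ends_with('.') || out.ends_with(' ') {
        out.pop();
    }
    // Collapse runs of underscores introduced by the replacements above.
    while out.contains("__") {
        out = out.replace("__", "_");
    }
    if out.is_empty() {
        out = "_".to_string();
    }
    out
}

fn main() {
    assert_eq!(sanitize_component("a/b"), "a_b");
    assert_eq!(sanitize_component("name."), "name");
    assert_eq!(sanitize_component(""), "_");
    assert_eq!(sanitize_component(".."), "_");
}
```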
In `@src/git/mod_tests.rs`:
- Line 16: The test uses a platform-specific absolute path when calling
MockGit::status; update the invocation to use a platform-neutral path by
replacing Path::new("/tmp/nonexistent") with a relative/path-neutral call such
as Path::new("nonexistent") (or PathBuf::from("nonexistent")) so the mock test
does not assume Unix-style paths and works cross-platform.
In `@src/git/shell.rs`:
- Around line 314-323: The method status runs git twice even though "git status
-b --porcelain" already contains file and branch info; change status() to call
run_git_output only once with ["status","-b","--porcelain"] (keep the resulting
variable, e.g., branch_output or rename to status_output_with_branch) and pass
that same output into parse_status_output (use
parse_status_output(&status_output_with_branch, &status_output_with_branch) or
equivalent), removing the redundant run_git_output call that used
["status","--porcelain"].
In `@src/output/progress/discovery.rs`:
- Around line 62-71: In on_org_complete, avoid the extra atomic load by using
the return value of repo_count.fetch_add(repo_count, Ordering::SeqCst) (which
yields the previous value) and compute the new total as previous + repo_count;
then use that computed total in the main_bar.set_message call instead of calling
repo_count.load(...). This replaces the two-step fetch_add + load with a single
atomic operation while keeping the same Ordering and message construction.
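The single-atomic-op pattern is easy to show in isolation: `fetch_add` returns the *previous* value, so the new total is `previous + delta` with no second load. The `total_repos` counter below stands in for the progress reporter's shared atomic:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Returns the total an immediately-following load would observe,
// computed from fetch_add's return value instead of a second load.
fn record_org_complete(total_repos: &AtomicUsize, repo_count: usize) -> usize {
    let previous = total_repos.fetch_add(repo_count, Ordering::SeqCst);
    previous + repo_count
}

fn main() {
    let total = AtomicUsize::new(0);
    assert_eq!(record_org_complete(&total, 5), 5);
    assert_eq!(record_org_complete(&total, 3), 8);
    assert_eq!(total.load(Ordering::SeqCst), 8);
}
```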
In `@src/provider/github/client.rs`:
- Around line 44-53: Replace the hardcoded USER_AGENT header value in the
headers construction with a package-version-derived string so it stays in sync
with Cargo.toml; use env!("CARGO_PKG_VERSION") (or a build-time constant) to
construct the user-agent (e.g. format!("gisa-cli/{}",
env!("CARGO_PKG_VERSION"))) and create the HeaderValue via HeaderValue::from_str
with a fallback to HeaderValue::from_static("gisa-cli") on error; update the
code around the headers.insert(USER_AGENT, ...) call and keep using HeaderValue
to avoid unwrap panics.
In `@src/provider/mock.rs`:
- Around line 101-115: Replace the bare unwrap() calls on the call_log mutex
with expect() including a short descriptive message so panics from a poisoned
mutex show context; update log_call, get_calls, and clear_calls to call
self.call_log.lock().expect("failed to lock call_log in <function_name>") (use
the respective function name in the message) so each lock failure indicates
which method encountered the error and aids debugging.
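The per-method `expect` pattern looks like this in miniature; the `CallLog` struct is a simplified stand-in for the mock provider's `call_log` field:

```rust
use std::sync::Mutex;

struct CallLog {
    call_log: Mutex<Vec<String>>,
}

impl CallLog {
    fn log_call(&self, name: &str) {
        self.call_log
            .lock()
            // A poisoned-mutex panic now names the method that hit it.
            .expect("failed to lock call_log in log_call")
            .push(name.to_string());
    }

    fn get_calls(&self) -> Vec<String> {
        self.call_log
            .lock()
            .expect("failed to lock call_log in get_calls")
            .clone()
    }
}

fn main() {
    let log = CallLog { call_log: Mutex::new(Vec::new()) };
    log.log_call("get_org_repos");
    assert_eq!(log.get_calls(), vec!["get_org_repos".to_string()]);
}
```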
In `@src/provider/mod_tests.rs`:
- Around line 25-33: Replace the brittle substring assertion with a direct
pattern match against the error variant: when calling create_provider(&entry,
"token") ensure the Err value is
AppError::Provider(ProviderError::NotImplemented(_)) (use matches! or an if
let/case to assert the variant), and if desired keep an additional optional
assertion on the error message text; reference the create_provider function and
the AppError::Provider(ProviderError::NotImplemented(_)) pattern when updating
the test.
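The variant-match assertion can be sketched with simplified stand-ins for `AppError` and `ProviderError` (the real enums have more variants; the `create_provider` stub here always fails for illustration):

```rust
#[derive(Debug)]
enum ProviderError {
    NotImplemented(String),
}

#[derive(Debug)]
enum AppError {
    Provider(ProviderError),
}

fn create_provider(kind: &str) -> Result<(), AppError> {
    Err(AppError::Provider(ProviderError::NotImplemented(kind.to_string())))
}

fn main() {
    let err = create_provider("gitlab").unwrap_err();
    // Variant match instead of a brittle substring check on the message.
    assert!(matches!(
        err,
        AppError::Provider(ProviderError::NotImplemented(_))
    ));
}
```

Unlike a substring assertion, this keeps passing if the error's `Display` text is reworded, and fails loudly if the variant changes.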
In `@src/setup/handler.rs`:
- Around line 772-783: The loop that calls provider.get_org_repos(&org.login)
for each org sequentially (iterating over orgs and pushing OrgEntry into
org_entries) causes slow setup for users with many organizations; change this to
fetch repo counts in parallel (e.g., map orgs into futures calling
provider.get_org_repos, use futures::future::join_all or a FuturesUnordered
stream to await all results concurrently), then construct OrgEntry items with
the resulting lengths and extend org_entries with them, preserving selected:
true and name: org.login.clone().
In `@src/setup/mod.rs`:
- Around line 65-78: Replace the synthetic KeyCode::Null event hack by
extracting the org-discovery logic out of handler::handle_key into a new async
function (e.g., discover_organizations or handler::discover_orgs) and call that
directly from the setup loop when state.step == SetupStep::SelectOrgs &&
state.org_loading; update handler::handle_key to invoke the new discover
function where it previously handled the fake event, and remove the
terminal.draw + synthetic KeyEvent sequence so the code calls terminal.draw(...)
then await discover_organizations(state). This keeps the discovery behavior tied
to a clear function, referenced by handler::discover_orgs (or whichever name you
choose), and avoids coupling with crossterm::event::KeyCode::Null.
In `@src/setup/screens/auth_tests.rs`:
- Around line 6-26: Duplicate render_output helper exists; consolidate by
extracting a shared test utility (e.g., a single render_output function or
render_helper) that accepts terminal dimensions (width, height) and a
&SetupState, replace the local render_output in auth_tests.rs and the one in
provider_tests.rs to call the new shared helper; locate usages of render_output
and the TestBackend::new(100, 22) / TestBackend::new(100, 24) calls and change
them to call the parameterized helper so both tests reuse the same
implementation (keep the call to render(state, frame, area) inside the shared
helper).
In `@src/setup/screens/confirm_tests.rs`:
- Around line 6-26: Extract the duplicated test helper by moving the
render_output function into a shared test helpers module (e.g., tests::helpers
or a new crate::test_helpers mod compiled under cfg(test)), so tests like
confirm_tests can call a single utility instead of duplicating logic.
Specifically, create a helper function that accepts &SetupState (or a generic
state param), uses TestBackend and Terminal as in the original render_output,
calls render(state, frame, area), and returns the rendered String; then replace
the local render_output with a use/import of that helper and ensure visibility
(pub(crate) or pub(super)) and any necessary feature attributes (cfg(test)) are
set so existing tests compile.
In `@src/setup/screens/path.rs`:
- Around line 343-358: In render_completions replace the manual index limit by
applying .take(6) to the enumerated iterator: change the loop over
state.path_completions.iter().enumerate() to
state.path_completions.iter().enumerate().take(6) and remove the if i >= 6 {
break; } block; keep the rest of the logic (style selection using
state.path_completion_index and pushing Lines) unchanged so behavior is
identical but more idiomatic.
In `@src/setup/screens/provider_tests.rs`:
- Around line 7-27: The render_output helper is duplicated; extract the function
(render_output(state: &SetupState) -> String) into a shared test utilities
module and reuse it from both provider_tests.rs and auth_tests.rs — move the
logic that creates TestBackend, Terminal, calls render(state, frame, area) and
builds the buffer string into the new module, export it (pub or pub(crate)) so
tests can import it, update both test files to call the shared render_output,
and ensure you import SetupState, TestBackend, Terminal, and render where
needed.
In `@src/setup/screens/welcome_tests.rs`:
- Around line 8-15: Replace the two bare unwrap() calls with expect(...) to
provide clearer test failure messages: change the
Terminal::new(backend).unwrap() call (constructor for Terminal in this test) to
Terminal::new(backend).expect("failed to create Terminal in welcome_tests"), and
change the terminal.draw(...).unwrap() call (the draw invocation on variable
terminal) to terminal.draw(...).expect("failed to draw UI in welcome_tests");
keep the same closure and arguments, only replace unwrap() with expect(...)
messages that describe the operation and test context.
In `@src/tui/app.rs`:
- Around line 46-96: OperationState::Running has many fields; extract them into
a new RunningState struct (e.g., pub struct RunningState { operation: Operation,
total: usize, completed: usize, failed: usize, skipped: usize, current_repo:
String, with_updates: usize, cloned: usize, synced: usize, to_clone: usize,
to_sync: usize, total_new_commits: u32, started_at: Instant, active_repos:
Vec<String>, throughput_samples: Vec<u64>, last_sample_completed: usize }) and
derive Debug/Clone as needed, then change the enum variant to
Running(RunningState). Update all creators, pattern matches, and field accesses
that previously used OperationState::Running { ... } to construct or destructure
a RunningState (e.g., OperationState::Running(running) and use
running.completed, running.started_at, etc.), and adjust any helpers/updates
that referenced fields like active_repos, throughput_samples,
last_sample_completed to use the new struct field access.
- Around line 335-342: The current chain that builds sync_history swallows
errors from crate::cache::SyncHistoryManager::for_workspace(&ws.name) and
m.load() via .ok() and unwrap_or_default(); change it to capture the Result/Err
and log a warning when either step fails (include the workspace name and error
details) before falling back to the default. Concretely, replace the
.and_then(...).ok() pattern used when constructing sync_history (and the similar
block at the other occurrence) with a match or .map_err(|e| { log a warning
(e.g. tracing::warn! or log::warn!) with ws.name and e; e }).ok() or otherwise
call .unwrap_or_else(|e| { log warning; Default::default() }) so failures are
logged while preserving graceful degradation.
In `@src/tui/event.rs`:
- Around line 107-117: The code silently ignores errors from event::read()
inside the has_event branch; update the block where you call event::read() (the
match on CtEvent -> AppEvent creation inside the if has_event) to handle the Err
case by logging the error before continuing or breaking; use the existing
logging/tracing facility in this module (or add one) to log the error returned
by event::read() along with context (e.g., "failed to read terminal event") and
then continue the loop if appropriate, keeping the existing flow that sends
AppEvent via event_tx.send(app_event).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (172)
✅ Files skipped from review due to trivial changes (1)
- docs/Sync-Screen.md
🚧 Files skipped from review as they are similar to previous changes (6)
- src/provider/mod.rs
- src/errors/provider.rs
- src/commands/init.rs
- Cargo.toml
- src/config/mod.rs
- src/provider/traits.rs
Actionable comments posted: 17
♻️ Duplicate comments (7)
src/commands/status_tests.rs (1)
20-21: ⚠️ Potential issue | 🟡 Minor — Assert the expected missing-workspace error, not only `is_err()`.

`assert!(result.is_err())` is too permissive and can pass on unrelated failures. Please assert the specific error kind/message for the nonexistent workspace case.

💡 Suggested assertion tightening

```diff
- let result = run(&args, &config, &output).await;
- assert!(result.is_err());
+ let err = run(&args, &config, &output).await.unwrap_err();
+ let msg = err.to_string();
+ assert!(msg.contains("workspace"));
+ assert!(msg.contains("nonexistent"));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/status_tests.rs` around lines 20 - 21, Replace the weak assert!(result.is_err()) with a precise check that the error from run(&args, &config, &output).await is the expected "missing workspace" error: unwrap the Err (or use result.expect_err(...)), then assert on the error's kind or message (e.g., assert_eq!(err.kind(), ErrorKind::MissingWorkspace) or assert!(err.to_string().contains("workspace not found"))); reference the run function result variable `result` and the error type's accessor (kind() or Display) to verify the specific missing-workspace condition rather than any error..github/workflows/S1-Test-CI.yml (1)
3-5: ⚠️ Potential issue | 🟠 Major — Enable automatic CI triggers for PRs and main pushes.

Lines 3-5 keep the pipeline manual-only, so test/build/audit checks can be skipped unintentionally.

Suggested trigger update

```diff
 on:
+  pull_request:
+  push:
+    branches: [main]
   workflow_dispatch:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S1-Test-CI.yml around lines 3 - 5, The workflow currently only has the 'on: workflow_dispatch' trigger which makes CI manual; update the 'on' block to include automatic triggers by adding 'pull_request' and 'push' (with branches: [main]) so PRs and main pushes run the pipeline; edit the 'on' mapping in .github workflow (preserve 'workflow_dispatch' so manual runs still work) and ensure YAML keys 'pull_request' and 'push' are added alongside 'workflow_dispatch'.docs/plans/move-config-to-sync-folder.md (1)
18-25: ⚠️ Potential issue | 🟡 Minor — Add a language tag to the fenced code block.

Line 18 still uses an unlabeled fence; markdownlint MD040 will continue to fail until it becomes something like `` ```text ``.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/plans/move-config-to-sync-folder.md` around lines 18 - 25, The fenced code block that contains the repository tree (starts with ``` and the ASCII tree lines showing ~/.config/git-same/ and config.toml) is missing a language tag; replace the opening fence from ``` to a labeled fence such as ```text (or ```bash) so the block is language-tagged and markdownlint MD040 is satisfied.src/commands/workspace.rs (1)
44-47: ⚠️ Potential issue | 🟠 Major — Use `Output` for list rendering instead of raw `println!`.

Direct stdout writes bypass configured output behavior (e.g., quiet/json mode) and can desync command UX.

🧩 Proposed fix

```diff
-    println!(
-        "  {} {} ({}, {}, last synced: {})",
-        marker, ws.base_path, provider_label, org_info, last_synced
-    );
+    output.info(&format!(
+        "  {} {} ({}, {}, last synced: {})",
+        marker, ws.base_path, provider_label, org_info, last_synced
+    ));
@@
-    println!();
+    output.info("");
     output.info(&format!("Default: {}", default_ws.display_label()));
```

Also applies to: 52-53
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/workspace.rs` around lines 44 - 47, The code is printing workspace lines with println! (using marker, ws.base_path, provider_label, org_info, last_synced) which bypasses the configured Output handling; replace these println! calls with the project's Output API (e.g., accept or use the existing Output object and call its method to write a line such as output.write_line or output.plain/print with the same formatted string) so output modes (quiet/json) are respected; apply the same replacement for the other raw prints at the nearby occurrences (the ones using marker/ws.base_path/provider_label/org_info/last_synced and the two prints at lines 52–53). Ensure the function signature or call site obtains the Output instance rather than writing directly to stdout.docs/plans/remove-global-providers-config-opus.md (1)
54-54: ⚠️ Potential issue | 🟡 Minor — Files Summary contradicts Step 2 export guidance.

Line 54 says to remove the `AuthMethod`/`ProviderEntry` exports, but Step 2 explicitly says to keep them publicly exported from `src/config/mod.rs`.
Verify each finding against the current code and only fix it if needed. In `@docs/plans/remove-global-providers-config-opus.md` at line 54, The docs contain a contradiction: the Files Summary line instructs removing public exports for AuthMethod and ProviderEntry from src/config/mod.rs, while Step 2 says to keep them public; reconcile by making the documentation consistent—either update the Files Summary to state that AuthMethod and ProviderEntry remain publicly exported from src/config/mod.rs (matching Step 2) or change Step 2 to instruct removal (matching the Files Summary); reference the symbols AuthMethod and ProviderEntry and ensure the doc example and any export list in src/config/mod.rs match the chosen behavior.src/operations/clone.rs (1)
292-297: ⚠️ Potential issue | 🟡 Minor — Per-repo context lost on task panic.

When a task panics, the `CloneResult` for that repo isn't added to `results`, and the progress callback isn't invoked with repo-specific information. While rare, this makes debugging harder.

🔧 Suggested approach: store repo/path alongside handle

```diff
-    let mut handles = Vec::with_capacity(total);
+    let mut handles: Vec<(OwnedRepo, PathBuf, _)> = Vec::with_capacity(total);
     for (index, repo) in repos.into_iter().enumerate() {
         // ... existing code ...
+        let panic_repo = repo.clone();
+        let panic_path = target_path.clone();
         let handle = tokio::spawn(async move {
             // ... existing task code ...
         });
-        handles.push(handle);
+        handles.push((panic_repo, panic_path, handle));
     }

     // In collection loop:
-    for (index, handle) in handles.into_iter().enumerate() {
+    for (index, (panic_repo, panic_path, handle)) in handles.into_iter().enumerate() {
         match handle.await {
             // ... existing Ok case ...
             Err(e) => {
-                summary.record(&OpResult::Failed(format!("Task panicked: {}", e)));
+                let err = format!("Task panicked: {}", e);
+                progress.on_error(&panic_repo, &err, index, total);
+                let failed = CloneResult {
+                    repo: panic_repo,
+                    path: panic_path,
+                    result: OpResult::Failed(err),
+                };
+                summary.record(&failed.result);
+                results.push(failed);
             }
         }
     }
```
Verify each finding against the current code and only fix it if needed. In `@src/operations/clone.rs` around lines 292 - 297, When a spawned task panics the code in the Err(e) branch only records a generic failure via summary.record and loses per-repo context; change the task spawning logic so the collection of task handles stores the repo identifier/path alongside the JoinHandle (e.g., store tuples of (repo.clone(), path.clone(), handle)), then in the Err(e) branch create and push a CloneResult for that specific repo (using the repo/path values), invoke the same progress callback you use for other outcomes with the repo-specific CloneResult, and still call summary.record(OpResult::Failed(...)) — this ensures the repo is added to results and progress is reported when a task panics.src/config/parser.rs (1)
287-289: ⚠️ Potential issue | 🟡 Minor — Escape the `default_workspace` value before writing TOML.

Workspace names containing `"`, `\`, or newlines will produce invalid TOML, breaking config parsing on next load.

🔧 Suggested fix using TOML-safe escaping

```diff
 let new_line = match workspace {
-    Some(name) => format!("default_workspace = \"{}\"", name),
+    Some(name) => {
+        // Properly escape the value for TOML
+        let escaped = name
+            .replace('\\', "\\\\")
+            .replace('"', "\\\"")
+            .replace('\n', "\\n")
+            .replace('\r', "\\r")
+            .replace('\t', "\\t");
+        format!("default_workspace = \"{}\"", escaped)
+    }
     None => String::new(),
 };
```
Verify each finding against the current code and only fix it if needed. In `@src/config/parser.rs` around lines 287 - 289, The code builds new_line using workspace and inserts raw names into TOML, which breaks when names contain quotes, backslashes, or newlines; update the logic in the parser where new_line is created (the match on workspace producing default_workspace) to serialize or escape the value properly instead of interpolating raw strings — e.g., use toml serialization (toml::to_string or building a toml::Value/map) to produce a valid TOML key/value for "default_workspace" or explicitly escape quotes/backslashes/newlines before formatting so the resulting TOML remains valid when parsed back.
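The escaping logic in the suggested fix can be isolated into a small helper. This is a minimal basic-string escaper covering only the characters the finding calls out (a full implementation would serialize via the `toml` crate instead):

```rust
// Escape the characters that would break a TOML basic string.
// Backslash must be replaced first so later escapes aren't doubled.
fn toml_escape(value: &str) -> String {
    value
        .replace('\\', "\\\\")
        .replace('"', "\\\"")
        .replace('\n', "\\n")
        .replace('\r', "\\r")
        .replace('\t', "\\t")
}

fn render_default_workspace(name: Option<&str>) -> String {
    match name {
        Some(n) => format!("default_workspace = \"{}\"", toml_escape(n)),
        None => String::new(),
    }
}

fn main() {
    assert_eq!(
        render_default_workspace(Some("my \"dev\" ws")),
        "default_workspace = \"my \\\"dev\\\" ws\""
    );
    assert_eq!(render_default_workspace(None), "");
}
```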
🧹 Nitpick comments (34)
.github/workflows/S2-Release-GitHub.yml (1)
144-154: Use `download-artifact` merge mode instead of shell-based file collection.

Lines 149-154 can be simplified and made less error-prone by letting `actions/download-artifact@v4` merge artifacts directly.

Suggested refactor

```diff
 - name: Download built artifacts
   uses: actions/download-artifact@v4
   with:
-    path: artifacts
-
-- name: Collect release assets
-  shell: bash
-  run: |
-    mkdir -p release-assets
-    find artifacts -type f -exec cp {} release-assets/ \;
+    pattern: git-same-*
+    path: release-assets
+    merge-multiple: true
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/S2-Release-GitHub.yml around lines 144 - 154, Replace the two-step pattern ("Download built artifacts" using actions/download-artifact@v4 into artifacts, then "Collect release assets" shell copying from artifacts to release-assets) with a single actions/download-artifact@v4 invocation that downloads/merges artifacts straight into release-assets (set path: release-assets and enable the action's merge mode/merge option so multiple artifacts are combined into that directory), updating the step name accordingly and removing the shell-based find/cp step.src/auth/ssh_tests.rs (1)
11-16: Strengthen this test to validate the contract, not just the count.

`assert!(keys.len() <= 6)` is implementation-coupled and won't catch wrong file names/paths. Prefer asserting that returned entries map to allowed key names and actually exist.

💡 Suggested test hardening

```diff
 fn test_get_ssh_key_files() {
     // This test just checks that the function runs without panicking
     let keys = get_ssh_key_files();
-    // Can't assert specific results as it depends on test environment
-    assert!(keys.len() <= 6); // At most 6 key types
+    let allowed = [
+        "id_rsa",
+        "id_ed25519",
+        "id_ecdsa",
+        "id_dsa",
+        "github_rsa",
+        "github_ed25519",
+    ];
+
+    for key in &keys {
+        let name = key.file_name().and_then(|s| s.to_str()).unwrap_or("");
+        assert!(allowed.contains(&name));
+        assert!(key.exists());
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/ssh_tests.rs` around lines 11 - 16, Update the test_get_ssh_key_files test to validate the contract by checking that every returned entry from get_ssh_key_files() has an expected key filename and points to an existing file: call get_ssh_key_files(), assert that every entry's filename (or basename) is a member of the allowed key name set (e.g., id_rsa, id_ed25519, id_ecdsa, id_dsa, id_rsa.pub, etc.), and assert that Path::new(entry).exists() is true for each entry; keep the test tolerant of missing keys by allowing a subset of the allowed set but fail if any returned name is not in the allowed set or points to a non-existent path.docs/README.md (1)
1-384: Add the missing README standard sections (Directory Structure, Common Modifications).

This README is detailed, but adding those two sections will make extension/maintenance workflows faster for contributors.
Based on learnings: Applies to **/README.md : Include a Common Modifications section explaining how to extend the feature in each README; Applies to **/README.md : Include a Directory Structure section with file tree and descriptions in each README.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/README.md` around lines 1 - 384, Add two new top-level sections to README.md: "Directory Structure" (after Installation) should include a concise file tree of the repository (e.g., crates/binaries, src/, docs/, tests/, assets/, config/), brief descriptions for each entry and notes about where to find binaries (target/release) and README per crate; and "Common Modifications" (after Configuration or Commands) should document the typical change workflows such as where to add providers (reference the [[providers]] toml example and structure = "{org}/{repo}"), how to add CLI subcommands (binaries aliases like git-same/gisa), how to update TUI screens (mention Dashboard/Setup Wizard/Progress), and testing/linting steps to validate changes (cargo test, cargo clippy, cargo fmt). Ensure headings match existing style and link or reference existing sections like "Configuration", "Available Commands", and the toml snippets to guide contributors.src/setup/screens/welcome.rs (1)
73-73: Consider standardized key-hint formatting for readability.

Line 73 would be easier to scan as bracketed hints (e.g., `[Enter] Start  [Esc] Cancel`) to match common TUI conventions.
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/welcome.rs` at line 73, The paragraph text uses unbracketed key hints which hurt scanability; update the Paragraph::new(...) call that currently contains "Enter Start Esc Cancel" to use standardized bracketed hints like "[Enter] Start [Esc] Cancel" (keeping the existing Style::default().fg(Color::DarkGray) styling and placement) so the UI shows consistent, conventional key-hint formatting.src/cache/sync_history.rs (2)
70-72: Use atomic file replacement for history writes.

Line 72 writes directly to the target path; interruption can leave a truncated JSON file. Write to a temp file and rename atomically.

Suggested write pattern

```diff
-    fs::write(&self.path, &json).context("Failed to write sync history")?;
+    let tmp_path = self.path.with_extension("json.tmp");
+    fs::write(&tmp_path, &json).context("Failed to write sync history temp file")?;
+    fs::rename(&tmp_path, &self.path).context("Failed to replace sync history atomically")?;
```
Verify each finding against the current code and only fix it if needed. In `@src/cache/sync_history.rs` around lines 70 - 72, The current save path writes the pretty JSON directly to self.path which can be truncated on interruption; instead serialize with serde_json::to_string_pretty(&file) as you already do, then write the bytes to a temporary file in the same directory (e.g., using tempfile::NamedTempFile or create a file at self.path.with_extension("tmp")), fs::write or temp_file.write_all the JSON, flush and sync if available, then atomically replace the target using std::fs::rename (or tempfile.persist) to self.path; ensure errors from writing/renaming are propagated and any temp file cleaned up on failure—apply this change inside the same method that currently calls serde_json::to_string_pretty and fs::write on self.path.
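The write-temp-then-rename pattern generalizes beyond this file; here is a std-only sketch. Note the rename is atomic on POSIX filesystems when the temp file lives in the same directory; on Windows, `fs::rename` fails if the destination exists, so real code may need a fallback there:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Write to a sibling temp file, then rename over the target, so an
// interrupted save never leaves a truncated file at the final path.
fn write_atomically(path: &Path, contents: &str) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, contents)?;
    fs::rename(&tmp, path)
}

fn main() -> io::Result<()> {
    let target =
        std::env::temp_dir().join(format!("sync-history-{}.json", std::process::id()));
    write_atomically(&target, "{\"entries\":[]}")?;
    assert_eq!(fs::read_to_string(&target)?, "{\"entries\":[]}");
    fs::remove_file(&target)?;
    Ok(())
}
```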
7-7: Consider moving `SyncHistoryEntry` to a shared types module for better layering.

Line 7 couples `src/cache/sync_history.rs` to `crate::tui::app::SyncHistoryEntry`. While both modules are feature-gated together (removing compile-time risk), this creates unnecessary architectural coupling between cache and TUI layers. Moving `SyncHistoryEntry` to `src/types`, or keeping it in cache, allows the persistence layer to be cleanly separated from UI concerns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache/sync_history.rs` at line 7, The cache module imports SyncHistoryEntry from the TUI layer, creating unwanted coupling; refactor by relocating the SyncHistoryEntry type into a shared types module (e.g., src/types) or into the cache module itself so persistence doesn’t depend on UI. Update all references that currently use crate::tui::app::SyncHistoryEntry to import from the new shared location (SyncHistoryEntry) and adjust feature-gates if needed so the type remains available where required (e.g., in sync_history.rs and the TUI code).src/domain/repo_path_template.rs (1)
72-89: Consider handling additional path traversal patterns.

The `sanitize_component` function replaces `..` with `__`, but this happens after the slash replacement. A malicious input like `..%2f` (URL-encoded) or `....` could potentially bypass the sanitization in edge cases.

However, since the input is expected to come from Git provider APIs (owner/repo names), this is likely low-risk. The current implementation is reasonable for the expected use case.
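A stricter variant can be sketched in plain Rust. This is illustrative only: the percent-escape list and the dot-run collapsing rule below are assumptions, not the crate's actual behavior:

```rust
// Illustrative stricter sanitizer: decode the two separator-relevant percent
// escapes first so "..%2f" cannot smuggle a path separator past the
// replacement, collapse any run of 2+ dots to "__", and map separators to '_'.
fn sanitize_component(value: &str) -> String {
    let decoded = value
        .replace("%2F", "/")
        .replace("%2f", "/")
        .replace("%5C", "\\")
        .replace("%5c", "\\");
    let mut out = String::new();
    let mut run = 0usize; // length of the current run of consecutive dots
    for ch in decoded.trim().chars() {
        if ch == '.' {
            run += 1;
            if run == 1 {
                out.push('.');
            } else if run == 2 {
                out.pop(); // drop the single dot we just emitted
                out.push_str("__"); // collapse the whole run to "__"
            } // run > 2: the run is already collapsed, emit nothing
        } else {
            run = 0;
            out.push(if ch == '/' || ch == '\\' { '_' } else { ch });
        }
    }
    if out.is_empty() {
        "_".to_string()
    } else {
        out
    }
}
```

With this ordering, `..%2fetc` decodes to `../etc` and sanitizes to `___etc`, so no traversal sequence survives.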
♻️ Optional: More robust sanitization
```diff
 fn sanitize_component(value: &str) -> String {
     let trimmed = value.trim();
     if trimmed.is_empty() {
         return "_".to_string();
     }
     let mut sanitized = trimmed
         .replace(['/', '\\'], "_")
-        .replace("..", "__")
+        .replace("..", "__");
+
+    // Ensure we don't end up with just dots
+    while sanitized.starts_with('.') {
+        sanitized = format!("_{}", &sanitized[1..]);
+    }
+
+    sanitized = sanitized
         .trim()
         .to_string();
     if sanitized.is_empty() {
         sanitized = "_".to_string();
     }
     sanitized
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/repo_path_template.rs` around lines 72 - 89, sanitize_component currently replaces slashes then ".." which can miss patterns like percent-encoded slashes or multiple-dot sequences; update sanitize_component to first decode any percent-encoding (or at least strip/replace '%' and common encodings like "%2f" / "%5c"), normalize repeated dot runs by collapsing sequences of '.' into a single '_' or replacing any two-or-more consecutive dots with "__", and then replace both '/' and '\' (and any decoded variants) with '_' and trim; ensure the function still returns "_" for empty results and reference sanitize_component when making these changes.src/banner.rs (1)
99-119: Minor inefficiency: String allocation per character in gradient rendering.
`gradient_line` creates a new `String` via `ch.to_string()` for each character (line 111). For a banner with ~60 characters per line × 6 lines, this creates ~360 small allocations per render. In TUI mode with animations, this could add up.

This is a minor optimization opportunity and acceptable for a banner that renders infrequently.
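The zero-copy idea can be sketched with `Cow`. The function `style_fragment` and its `highlight` flag below are illustrative stand-ins for the per-span styling decision, not the crate's actual gradient code:

```rust
use std::borrow::Cow;

// Borrow fragments unchanged; allocate only when a fragment must be rewritten.
fn style_fragment<'a>(fragment: &'a str, highlight: bool) -> Cow<'a, str> {
    if highlight {
        Cow::Owned(format!("[{}]", fragment)) // allocation only on this path
    } else {
        Cow::Borrowed(fragment) // zero-allocation pass-through
    }
}
```

Applied to a gradient renderer, the common case (a fragment rendered as-is with only a color change) would stay on the `Borrowed` path and avoid per-character allocations entirely.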
♻️ Optional: Reduce allocations using Cow or owned strings from split
Since `split_inclusive` already returns borrowed `&str` slices, you could use `Cow<'a, str>`, or simply accept the allocations given the low render frequency of banners.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/banner.rs` around lines 99 - 119, gradient_line currently calls ch.to_string() for every character causing many small allocations; change the Span::styled call to use the borrowed &str directly (or wrap as a Cow<'a, str>) instead of allocating—i.e., replace ch.to_string() with ch (or Cow::Borrowed(ch)) inside the Span::styled call in gradient_line so the function (and related spans created with Span::styled and styling via Style::default().fg(...).add_modifier(...)) uses the existing borrowed slice and avoids per-character heap allocations when rendering the gradient.src/output/printer_tests.rs (1)
34-48: Tests verify string presence but not formatting correctness.

The format function tests check that output contains the input strings, but don't verify the styling or emoji prefixes are present. This is acceptable since ANSI escape codes make exact matching fragile, but consider adding assertions for the emoji characters if terminal compatibility is important.
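An ANSI-stripping helper of the kind the agent prompt mentions could be sketched with the standard library alone. This is an assumed minimal CSI-sequence stripper, not the crate's code:

```rust
// Remove ANSI CSI escape sequences (ESC '[' params... final-byte) so tests
// can assert on plain text. A minimal sketch, not a full ANSI parser.
fn strip_ansi(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\u{1b}' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // Skip parameter/intermediate bytes until a final byte (0x40-0x7e).
            for esc in chars.by_ref() {
                if ('\u{40}'..='\u{7e}').contains(&esc) {
                    break;
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}
```

With this, a test can strip color codes first and then assert exact prefixes like `✓` without being fragile to styling differences.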
♻️ Optional: More thorough format verification
```diff
 #[test]
 fn test_format_functions() {
     let count = format_count(42, "repos");
     assert!(count.contains("42"));
     assert!(count.contains("repos"));

     let success = format_success("done");
     assert!(success.contains("done"));
+    assert!(success.contains("✓"));

     let error = format_error("failed");
     assert!(error.contains("failed"));
+    assert!(error.contains("✗"));

     let warning = format_warning("caution");
     assert!(warning.contains("caution"));
+    assert!(warning.contains("⚠"));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/output/printer_tests.rs` around lines 34 - 48, Tests currently only assert the input strings are present; update the test_format_functions in printer_tests.rs to also verify the emoji/styling prefixes by first removing ANSI escape codes (use an existing strip function or add a small helper to strip ANSI) and then assert the cleaned outputs contain or start with the expected emoji/prefix for each formatter (referencing format_count, format_success, format_error, format_warning) so the tests ensure the emoji prefixes are present while remaining robust to ANSI differences.src/auth/mod_tests.rs (1)
9-47: Add coverage for URL userinfo edge cases.

Current cases are good, but adding userinfo input (e.g., `https://user:pass@host/api`) would better protect `extract_host` behavior from auth-routing regressions.

💡 Suggested additional test
```diff
+#[test]
+fn test_extract_host_with_userinfo() {
+    assert_eq!(
+        extract_host("https://user:pass@github.example.com/api/v3"),
+        Some("github.example.com".to_string())
+    );
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/mod_tests.rs` around lines 9 - 47, Add tests covering URLs that include userinfo so extract_host correctly strips credentials; create cases like test_extract_host_with_userinfo, asserting extract_host("https://user@host/api") == Some("host".to_string()), extract_host("https://user:pass@host/api") == Some("host".to_string()), and a percent-encoded userinfo case (e.g., "https://user%40me:pass@host/api") also returning Some("host".to_string()); follow existing test patterns and names (e.g., test_extract_host_with_userinfo) and assert the host retains any port if present (e.g., "https://user:pass@host:8443/api" -> Some("host:8443".to_string())) to prevent auth-routing regressions in extract_host.src/setup/ui_tests.rs (1)
6-7: Strengthen assertions to validate actual layout output.

Line 7 and lines 12-13 only check width/containment. Exact-string assertions would catch subtle regressions in centering and dashed/solid connector rendering.
💡 Suggested assertions
```diff
 #[test]
 fn center_cell_matches_width() {
     let out = center_cell("Auth", 10);
     assert_eq!(out.chars().count(), 10);
-    assert!(out.contains("Auth"));
+    assert_eq!(out, "   Auth   ");
 }

 #[test]
 fn connector_cell_matches_width() {
-    assert_eq!(connector_cell(7, true).chars().count(), 7);
-    assert_eq!(connector_cell(7, false).chars().count(), 7);
+    assert_eq!(connector_cell(7, true), "━━━━━━━");
+    assert_eq!(connector_cell(7, false), "─ ─ ─ ─");
 }
```

Also applies to: 12-13
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/ui_tests.rs` around lines 6 - 7, The test currently only checks width and substring presence via assert_eq!(out.chars().count(), 10) and assert!(out.contains("Auth")); strengthen these by asserting the full rendered layout string to catch centering and connector style regressions: locate the test in src/setup/ui_tests.rs that builds the `out` string and replace or augment the existing assertions with exact-string assertions (e.g., assert_eq!(out, expected_full_output) or assert_eq!(out.trim_end(), expected_full_output)) and do the same for the other related assertions around lines 12-13 so the test verifies full line contents (including spaces, dashes, and connectors) rather than just length or substring.src/cache/discovery_tests.rs (1)
6-23: Prefer reusing `Repo::test` to avoid test-fixture drift.

Lines 6-23 manually rebuild `Repo` fields that already exist in `crate::types::Repo::test`, which increases maintenance overhead when repo defaults evolve.

♻️ Suggested simplification
```diff
 fn create_test_repo(id: u64, name: &str, owner: &str) -> OwnedRepo {
-    OwnedRepo {
-        owner: owner.to_string(),
-        repo: Repo {
-            id,
-            name: name.to_string(),
-            full_name: format!("{}/{}", owner, name),
-            ssh_url: format!("git@github.com:{}/{}.git", owner, name),
-            clone_url: format!("https://github.com/{}/{}.git", owner, name),
-            default_branch: "main".to_string(),
-            private: false,
-            archived: false,
-            fork: false,
-            pushed_at: None,
-            description: None,
-        },
-    }
+    let mut repo = Repo::test(name, owner);
+    repo.id = id;
+    OwnedRepo::new(owner, repo)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cache/discovery_tests.rs` around lines 6 - 23, create_test_repo currently manually constructs a Repo which duplicates crate::types::Repo::test; update create_test_repo to call Repo::test (or Repo::test(...) if it accepts owner/name) and then wrap that result in OwnedRepo, setting/overriding only the unique fields like id and owner string as needed. Specifically, replace the inline Repo { ... } with let mut repo = Repo::test(...); repo.id = id; repo.full_name = format!("{}/{}", owner, name); then return OwnedRepo { owner: owner.to_string(), repo } so tests inherit the canonical defaults from Repo::test.src/git/mod_tests.rs (1)
16-16: Use a platform-neutral mock path literal.

On line 16, `"/tmp/nonexistent"` is Unix-flavored and not needed for this mock behavior. A neutral relative path keeps the intent clearer across platforms.

♻️ Proposed tweak
```diff
-    let status = mock.status(Path::new("/tmp/nonexistent")).unwrap();
+    let status = mock.status(Path::new("nonexistent-repo")).unwrap();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/git/mod_tests.rs` at line 16, Replace the platform-specific absolute Unix path used in the test with a platform-neutral relative path: change the Path::new("/tmp/nonexistent") used in the mock.status(...) call in mod_tests.rs to a simple relative path literal (e.g., "nonexistent" or "./nonexistent") so mock.status(Path::new(...)).unwrap() works identically across OSes while preserving the test intent.src/infra/storage/mod.rs (1)
3-4: Consider explicit re-exports for better API control.

Glob re-exports (`pub use crate::cache::*`) make the public API implicit. If naming conflicts arise between `cache` and `workspace_manager`, or if either module adds new public items, this can cause unexpected breakage or ambiguity for consumers.

For internal facade modules this is often acceptable, but explicit re-exports would provide clearer API boundaries.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/infra/storage/mod.rs` around lines 3 - 4, The current module exposes everything with glob re-exports (`pub use crate::cache::*` and `pub use crate::config::workspace_manager::*`), which makes the public API implicit and risks name collisions; replace these globs with explicit re-exports listing only the intended public symbols (e.g., `pub use crate::cache::{CacheType, CacheError, ...}` and `pub use crate::config::workspace_manager::{WorkspaceManager, WorkspaceConfig, ...}`) so API surface is clear and collisions are avoided — update the export lists to include the exact types/functions you want to expose from the `cache` and `workspace_manager` modules.src/setup/screens/provider_tests.rs (1)
7-27: Consider extracting the `render_output` helper to reduce duplication.

This helper is nearly identical to the one in `welcome_tests.rs`. Consider extracting a shared test utility for rendering setup screens to avoid duplication across test modules.
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/provider_tests.rs` around lines 7 - 27, Duplicate render_output logic in provider_tests.rs should be extracted into a shared test utility; create a single helper (e.g., render_output or render_state_output) that accepts &SetupState and performs the TestBackend/Terminal draw, calls render(state, frame, area), and returns the rendered String, then replace the local render_output in provider_tests.rs and the one in welcome_tests.rs to call this shared helper; reference the existing function name render_output, the render function, the SetupState type, and usage of TestBackend/Terminal to locate and consolidate the code.src/setup/screens/welcome_tests.rs (1)
6-26: Duplicated `render_output` helper across test modules.

This helper is nearly identical to the one in `provider_tests.rs` (only the terminal height differs: 20 vs 24). Consider extracting a shared test utility that accepts dimensions as parameters.

♻️ Example shared helper
Create a test utilities module (e.g., `src/setup/screens/test_utils.rs`):

```rust
pub fn render_to_string<F>(width: u16, height: u16, render_fn: F) -> String
where
    F: FnOnce(&mut Frame, Rect),
{
    let backend = TestBackend::new(width, height);
    let mut terminal = Terminal::new(backend).unwrap();
    terminal
        .draw(|frame| {
            let area = frame.area();
            render_fn(frame, area);
        })
        .unwrap();
    let buffer = terminal.backend().buffer();
    let mut text = String::new();
    for y in 0..buffer.area.height {
        for x in 0..buffer.area.width {
            text.push_str(buffer[(x, y)].symbol());
        }
        text.push('\n');
    }
    text
}
```
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/welcome_tests.rs` around lines 6 - 26, The render_output helper in this test duplicates code from provider_tests; extract a shared test utility (e.g., render_to_string) that accepts width: u16, height: u16 and a render closure (FnOnce(&mut Frame, Rect)) and move it to a test-only module (e.g., src/setup/screens/test_utils.rs or a mod tests::utils). Replace render_output in this file with a call to that helper (pass 100 and 20 and a closure that calls render(state, frame, area)), and update provider_tests to call the same helper with its height (24) so both tests reuse the single implementation.src/setup/screens/confirm.rs (1)
30-40: Minor UX inconsistency in organization truncation.

When `selected_orgs.len() == 3`, all three are displayed. When `selected_orgs.len() == 4`, only the first two are shown with `+2 more`. This creates a visual cliff where 4 organizations show fewer names than 3.

Consider adjusting the threshold to show the first 3 items before truncating:

Optional refinement
Optional refinement
```diff
 let orgs_display = if selected_orgs.is_empty() {
     "all organizations".to_string()
 } else if selected_orgs.len() <= 3 {
     selected_orgs.join(", ")
 } else {
     format!(
         "{}, ... +{} more",
-        selected_orgs[..2].join(", "),
-        selected_orgs.len() - 2
+        selected_orgs[..3].join(", "),
+        selected_orgs.len() - 3
     )
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/confirm.rs` around lines 30 - 40, The current orgs_display logic in src/setup/screens/confirm.rs causes a cliff between 3 and 4 orgs; update the truncation threshold and slice so lists of up to 4 show full names and longer lists show the first 3 plus a "+N more" suffix: change the conditional from selected_orgs.len() <= 3 to selected_orgs.len() <= 4, and in the else branch use selected_orgs[..3].join(", ") and selected_orgs.len() - 3 to compute the remaining count (keeping the variable name orgs_display and the same format string).src/setup/screens/auth_tests.rs (1)
6-26: Consider extracting a shared test helper.

The `render_output` helper is duplicated between `auth_tests.rs` and `orgs_tests.rs`. For maintainability, consider extracting it to a shared test utility module in a future iteration.
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/auth_tests.rs` around lines 6 - 26, The helper function render_output is duplicated; extract it into a shared test utility module (e.g., tests::util or crate::test_utils) as a single public function signature pub fn render_output(state: &SetupState) -> String and move the implementation that constructs TestBackend, Terminal, calls render(state, frame, area), reads terminal.backend().buffer(), and builds the text; then update auth_tests.rs and orgs_tests.rs to call the shared render_output and remove the duplicated implementations; keep references to SetupState, render, TestBackend, and Terminal to locate the code and ensure visibility/imports are adjusted (use crate::test_utils::render_output or appropriate path).src/commands/sync_cmd.rs (1)
48-51: Use display formatting for username in verbose output.

Using `{:?}` for `prepared.auth.username` will output debug formatting like `Some("octocat")` instead of a clean `octocat`. Consider unwrapping or using a cleaner format.

♻️ Suggested improvement
```diff
 output.verbose(&format!(
-    "Authenticated as {:?} via {}",
-    prepared.auth.username, prepared.auth.method
+    "Authenticated as {} via {}",
+    prepared.auth.username.as_deref().unwrap_or("unknown"),
+    prepared.auth.method
 ));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/sync_cmd.rs` around lines 48 - 51, The verbose message currently uses debug formatting for the username (prepared.auth.username) which prints Option/debug info; change the output in output.verbose to use display formatting for prepared.auth.username and handle the Option safely (e.g., convert to a &str with as_deref()/map()/unwrap_or or similar) so the message shows a clean username (or a sensible fallback like "<unknown>") while keeping prepared.auth.method as before.src/setup/screens/complete.rs (1)
41-49: Avoid allocating selected org names just to compute a count.

This path does two passes and clones org names even though only counts are needed. A single fold over `state.orgs` is simpler and cheaper.

♻️ Proposed refactor
```diff
-    let selected_orgs = state.selected_orgs();
-    let total_repos: usize = state
-        .orgs
-        .iter()
-        .filter(|o| o.selected)
-        .map(|o| o.repo_count)
-        .sum();
-    let org_count = selected_orgs.len();
+    let (org_count, total_repos) = state
+        .orgs
+        .iter()
+        .filter(|o| o.selected)
+        .fold((0usize, 0usize), |(count, repos), o| {
+            (count + 1, repos + o.repo_count)
+        });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/complete.rs` around lines 41 - 49, The code currently calls state.selected_orgs() and then iterates state.orgs twice (once via selected_orgs() which clones names, and once to sum repo_count), causing unnecessary allocations; replace these with a single pass over state.orgs (e.g., a fold) to compute both org_count and total_repos without cloning names—update references to selected_orgs(), org_count and total_repos so they come from the single iterator/fold over state.orgs (look for selected_orgs, state.selected_orgs(), state.orgs, org_count, total_repos) and remove the extra allocation.src/checks.rs (1)
76-91: Avoid runninggh --versiontwice in the same check.
`check_gh_installed` first calls `gh_cli::is_installed()` and then spawns another `gh --version`. You can simplify to one subprocess call and parse the version from that output.

♻️ Possible simplification
```diff
 fn check_gh_installed() -> CheckResult {
-    if gh_cli::is_installed() {
-        let version = Command::new("gh")
-            .arg("--version")
-            .output()
-            .ok()
-            .map(|o| {
-                String::from_utf8_lossy(&o.stdout)
-                    .lines()
-                    .next()
-                    .unwrap_or("")
-                    .trim()
-                    .to_string()
-            })
-            .unwrap_or_else(|| "installed".to_string());
+    match Command::new("gh").arg("--version").output() {
+        Ok(output) if output.status.success() => {
+            let version = String::from_utf8_lossy(&output.stdout)
+                .lines()
+                .next()
+                .unwrap_or("installed")
+                .trim()
+                .to_string();
             CheckResult {
                 name: "GitHub CLI".to_string(),
                 passed: true,
                 message: version,
                 suggestion: None,
                 critical: true,
             }
-    } else {
+        }
+        _ => {
             CheckResult {
                 name: "GitHub CLI".to_string(),
                 passed: false,
                 message: "not found".to_string(),
                 suggestion: Some("Install from https://cli.github.com/".to_string()),
                 critical: true,
             }
+        }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/checks.rs` around lines 76 - 91, The check_gh_installed function currently calls gh_cli::is_installed() and then runs Command::new("gh").arg("--version") again; replace this with a single subprocess invocation: run Command::new("gh").arg("--version") once, inspect its Result/Output to determine whether gh is present (handle Err or non-zero exit as not installed) and parse stdout to extract the version string for the CheckResult. Update logic that previously relied on gh_cli::is_installed() to use the outcome of the single Command call and return the appropriate CheckResult (installed with parsed version or not installed).src/commands/init_tests.rs (1)
27-42: Strengthen the config-dir test with a file assertion.
`test_init_creates_config_dir` currently passes if only the folder is created. Add an assertion for `config.toml` existence so this test also validates file creation.

📌 Suggested test hardening
```diff
 let config_dir = temp.path().join("git-same");
 assert!(config_dir.exists());
 assert!(config_dir.is_dir());
+assert!(config_path.exists());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/init_tests.rs` around lines 27 - 42, In test_init_creates_config_dir, after asserting the config directory exists, also assert that the expected config file (config.toml) was created: use the same config_path (or build temp.path().join("git-same/config.toml")) to check exists() and is_file(); update the test function test_init_creates_config_dir to include these assertions so the test verifies both directory and file creation (references: test_init_creates_config_dir, InitArgs, config_path, run).src/setup/screens/auth.rs (1)
32-35: Avoid hardcoded provider text in detection method.

Consider deriving this line from `provider.display_name()` as well, to keep the auth screen copy consistent with provider selection.

💡 Possible tweak
```diff
 lines.push(Line::from(Span::styled(
-    "Detection method: GitHub CLI (gh)",
+    format!("Detection method: {} CLI", provider.display_name()),
     Style::default().fg(Color::DarkGray),
 )));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/screens/auth.rs` around lines 32 - 35, The line constructing the detection method text uses a hardcoded "GitHub CLI (gh)"; change it to derive from the selected provider by using provider.display_name() when building the styled Span (the code creating Line::from(Span::styled(...)) / Span::styled(...) should format the string with provider.display_name() so the detection method text matches the provider selection and remains consistent across providers).src/auth/ssh.rs (1)
50-57: Consider extracting the duplicated key files list.

The same `key_files` array is defined in both `has_ssh_keys` and `get_ssh_key_files`.

♻️ Optional: Extract to a constant
```diff
+/// Common SSH key filenames to check.
+const SSH_KEY_FILES: &[&str] = &[
+    "id_rsa",
+    "id_ed25519",
+    "id_ecdsa",
+    "id_dsa",
+    "github_rsa",
+    "github_ed25519",
+];
+
 pub fn has_ssh_keys() -> bool {
     // ...
-    let key_files = [
-        "id_rsa",
-        "id_ed25519",
-        // ...
-    ];
-
-    for key_file in &key_files {
+    for key_file in SSH_KEY_FILES {
```

This reduces duplication and ensures both functions check the same files.
Also applies to: 78-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/ssh.rs` around lines 50 - 57, Both has_ssh_keys and get_ssh_key_files duplicate the same key_files array; extract that list into a shared constant (e.g., KEY_FILES or SSH_KEY_FILES) and replace the local arrays in both functions (has_ssh_keys and get_ssh_key_files) to reference the new constant so they always use the same source of truth; ensure the constant's type matches usage (slice or array of &str) and update any imports/visibility if it needs to be accessed from other modules.src/provider/github/client.rs (1)
29-66: Well-structured provider construction with sensible defaults.

The client is configured with appropriate headers for the GitHub API (including the API version header), and the timeout is configurable. Error handling for client creation is appropriate.
💡 Optional: Consider using a version constant from Cargo.toml
The User-Agent version is hardcoded as `"gisa-cli/0.1.0"` at line 45. Consider using `env!("CARGO_PKG_VERSION")` to keep it in sync:

```diff
- headers.insert(USER_AGENT, HeaderValue::from_static("gisa-cli/0.1.0"));
+ let user_agent = format!("gisa-cli/{}", env!("CARGO_PKG_VERSION"));
+ headers.insert(USER_AGENT, HeaderValue::from_str(&user_agent)
+     .unwrap_or_else(|_| HeaderValue::from_static("gisa-cli")));
```

This ensures the User-Agent stays accurate across version bumps.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/provider/github/client.rs` around lines 29 - 66, Replace the hardcoded User-Agent value used in GitHubProvider::with_timeout (currently HeaderValue::from_static("gisa-cli/0.1.0")) with a build-time package version so it stays in sync; construct the header value using the crate version (e.g. via env!("CARGO_PKG_VERSION") combined with "gisa-cli/") and create a HeaderValue from that string (handle conversion errors if necessary) when inserting into the USER_AGENT header.src/operations/sync_tests.rs (1)
154-206: Well-designed progress tracking helper for tests.
`CountingSyncProgress` correctly uses `AtomicUsize` for thread-safe counting. The `SeqCst` ordering is safe but conservative.

💡 Optional: Consider using Relaxed ordering for test counters
Since these counters are only read after all threads complete (via `join`), `Relaxed` ordering would be sufficient and slightly more efficient:

```diff
-        self.started.fetch_add(1, Ordering::SeqCst);
+        self.started.fetch_add(1, Ordering::Relaxed);
```

This is a minor optimization and the current `SeqCst` is perfectly correct.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/sync_tests.rs` around lines 154 - 206, Summary: Relax atomic ordering for test-only counters to Relaxed to avoid unnecessary synchronization costs. In CountingSyncProgress replace Ordering::SeqCst with Ordering::Relaxed in all fetch_add calls used by the trait methods (on_start, on_fetch_complete, on_pull_complete, on_error, on_skip) so the counters use relaxed ordering since they are only read after threads join; no other logic changes required.src/config/workspace_policy.rs (1)
52-63: `unique_name` has a hard 100-collision ceiling.

If many similarly named workspaces exist, lines 52-63 can fail even when `-101` or higher is available. Consider unbounded probing (or a configurable cap) to avoid false "no unique name" errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/workspace_policy.rs` around lines 52 - 63, The current unique_name logic in workspace_policy.rs stops after suffix 100 and returns an error, causing false failures; update the probing loop in the function that generates the unique workspace name (the block using WorkspaceStore::workspace_dir and candidate = format!("{}-{}", base, suffix)) to probe unboundedly (e.g., use an open-ended loop starting at suffix = 2 and increment until a non-existent candidate_dir is found) or make the max suffix a configurable parameter instead of hardcoding 100, then return the first available candidate as before.src/setup/handler.rs (1)
772-777: Repo-count discovery is currently N+1 per organization.

Lines 773-777 perform one API call per org for counts. On accounts with many orgs, setup will be slow and can hit rate limits quickly. Consider deferring counts or fetching with bounded concurrency.
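The agent prompt mentions `buffer_unordered` from the futures crate; for illustration, here is a stdlib-only worker-pool sketch of the same bounded-concurrency idea. Names, the `fetch` closure, and the overall structure are assumptions, not the PR's code:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Run `fetch` over all items with at most `limit` concurrent workers
// (limit must be >= 1). `fetch` stands in for the per-org API call.
// Results are returned in input order.
fn bounded_map<T, R, F>(items: Vec<T>, limit: usize, fetch: F) -> Vec<R>
where
    T: Send + 'static,
    R: Send + 'static,
    F: Fn(T) -> R + Send + Sync + 'static,
{
    let fetch = Arc::new(fetch);
    // Pre-load all jobs, then drop the sender so workers see a closed queue.
    let (tx, rx) = mpsc::channel::<(usize, T)>();
    for (i, item) in items.into_iter().enumerate() {
        tx.send((i, item)).unwrap();
    }
    drop(tx);
    let rx = Arc::new(Mutex::new(rx));

    let (out_tx, out_rx) = mpsc::channel::<(usize, R)>();
    let mut handles = Vec::new();
    for _ in 0..limit {
        let rx = Arc::clone(&rx);
        let out_tx = out_tx.clone();
        let fetch = Arc::clone(&fetch);
        handles.push(thread::spawn(move || loop {
            // The lock guard is dropped before `fetch` runs, so workers
            // execute their fetches concurrently.
            let job = rx.lock().unwrap().recv();
            match job {
                Ok((i, item)) => out_tx.send((i, fetch(item))).unwrap(),
                Err(_) => break, // queue drained and closed
            }
        }));
    }
    drop(out_tx);

    let mut results: Vec<(usize, R)> = out_rx.into_iter().collect();
    for h in handles {
        h.join().unwrap();
    }
    results.sort_by_key(|(i, _)| *i);
    results.into_iter().map(|(_, r)| r).collect()
}
```

In an async codebase the futures-based `stream::iter(orgs).map(...).buffer_unordered(n)` form would be more idiomatic; the point of either version is the fixed upper bound on in-flight requests.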
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/setup/handler.rs` around lines 772 - 777, The loop over orgs currently calls provider.get_org_repos(&org.login) for each organization (N+1 behavior), which is slow and can hit rate limits; update the logic in the function handling org discovery (the block iterating over orgs and using provider.get_org_repos) to avoid sequential per-org calls by either deferring repo-count calculation until needed or performing the calls with bounded concurrency (e.g., use a futures stream like stream::iter(orgs).map(...).buffer_unordered(N) or FuturesUnordered with a semaphore) to limit simultaneous API requests and collect repo counts in parallel; ensure you still handle errors with unwrap_or(0) semantics or propagate errors as appropriate.src/auth/mod.rs (1)
109-123: Silent fallthrough on GitHub Enterprise authentication failure may mask configuration issues.

When `get_token_for_host` fails for an enterprise host (line 113), the code silently falls through to default authentication. This could mask legitimate enterprise configuration problems: users might not realize their enterprise auth isn't working and are instead using their default GitHub.com credentials.

Consider logging a debug/warning message when enterprise-specific auth fails before falling back.
💡 Suggested improvement
```diff
 if host != "api.github.com" {
     debug!(host, "Attempting GitHub Enterprise authentication");
-    if let Ok(token) = gh_cli::get_token_for_host(&host) {
-        debug!(host, "Authenticated via gh CLI for enterprise host");
-        return Ok(AuthResult {
-            token,
-            method: ResolvedAuthMethod::GhCli,
-            username: None,
-        });
+    match gh_cli::get_token_for_host(&host) {
+        Ok(token) => {
+            debug!(host, "Authenticated via gh CLI for enterprise host");
+            return Ok(AuthResult {
+                token,
+                method: ResolvedAuthMethod::GhCli,
+                username: None,
+            });
+        }
+        Err(e) => {
+            debug!(host, error = %e, "Enterprise auth failed, falling back to default");
+        }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/auth/mod.rs` around lines 109 - 123, The enterprise auth branch currently swallows failures from gh_cli::get_token_for_host and falls back silently; update the block that checks provider.api_url -> extract_host(...) to detect Err from gh_cli::get_token_for_host and emit a debug or warn log (including the host and error) before continuing to fallback behavior, ensuring the log distinguishes enterprise token retrieval failure from normal GitHub.com auth; keep the existing return on Ok(token) that constructs AuthResult with ResolvedAuthMethod::GhCli and username None.src/operations/sync.rs (1)
233-245: Non-git directories are handled via error paths, but explicit detection would improve UX.

The code checks whether the path exists but doesn't explicitly verify it's a git repository. Non-git directories will fail at `git.status()` and be handled by error paths, which works but produces less clear messages like "failed to get status: ..." instead of "not a git repository".

This is a minor UX improvement rather than a correctness issue; the current code is functionally safe.

💡 Optional: Add explicit git repo check
💡 Optional: Add explicit git repo check
```diff
 // Check if path exists and is a repo
 if !path.exists() {
     drop(permit);
     return SyncResult {
         repo: local_repo.repo,
         path,
         result: OpResult::Skipped("path does not exist".to_string()),
         had_updates: false,
         status: None,
         fetch_result: None,
         pull_result: None,
     };
 }
+
+// Check if it's actually a git repository
+if !path.join(".git").exists() {
+    drop(permit);
+    return SyncResult {
+        repo: local_repo.repo,
+        path,
+        result: OpResult::Skipped("not a git repository".to_string()),
+        had_updates: false,
+        status: None,
+        fetch_result: None,
+        pull_result: None,
+    };
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/sync.rs` around lines 233 - 245: the current check only verifies path.exists() but not that it is a git repository, so add an explicit repo detection before proceeding (e.g., check for a .git directory at path.join(".git") or attempt to open the repo using the git library) and if it is not a git repo return the same SyncResult structure used here (with repo: local_repo.repo, path, had_updates: false, status: None, fetch_result: None, pull_result: None) but set result to OpResult::Skipped("not a git repository".to_string()). Ensure this check occurs before calling git.status() or other git operations so the code path and message are clearer.

src/config/workspace.rs (1)
32-34: Consider deduplicating the `default_true` helper. This helper function is also defined in src/config/provider_config.rs. While minor, you could extract it to a shared location to avoid duplication.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config/workspace.rs` around lines 32 - 34: the small helper function default_true() is duplicated (also in provider_config.rs); extract it into a single shared location (e.g., a config-level helper module or a common utils module) and export it for reuse, then replace the local definitions in workspace.rs and provider_config.rs with a use/import of the shared default_true so both modules call the same function; ensure the function signature and visibility (pub(crate) or pub) match the callers.

src/tui/app.rs (2)
143-157: Type mismatch: `ahead`/`behind` use `usize` here but `u32` in `RepoStatus`. RepoEntry uses usize for ahead/behind (lines 152-153) while RepoStatus in src/git/traits.rs uses u32. This requires casting when converting between types and could cause truncation issues on 32-bit platforms (though unlikely in practice). Consider aligning the types for consistency.
♻️ Suggested alignment
pub struct RepoEntry {
    pub owner: String,
    pub name: String,
    pub full_name: String,
    pub path: PathBuf,
    pub branch: Option<String>,
    pub is_uncommitted: bool,
-   pub ahead: usize,
-   pub behind: usize,
+   pub ahead: u32,
+   pub behind: u32,
    pub staged_count: usize,
    pub unstaged_count: usize,
    pub untracked_count: usize,
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/app.rs` around lines 143 - 157, RepoEntry's ahead and behind fields are usize but RepoStatus uses u32, causing mismatches and casts; change the types of RepoEntry::ahead and RepoEntry::behind from usize to u32 to match RepoStatus (and update any code that constructs or reads RepoEntry to stop casting between usize and u32), ensuring conversions are only applied where interfacing with APIs that require usize (e.g., indexing) and documenting any places that now need explicit casts.
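Where the two types must remain different at some boundary, a checked conversion avoids silent truncation. A sketch, where `to_u32_checked` is a hypothetical helper name:

```rust
// Hypothetical helper: bridge a usize count (RepoEntry-style field) to a
// u32 (RepoStatus-style field) without silently truncating large values.
pub fn to_u32_checked(n: usize) -> u32 {
    u32::try_from(n).unwrap_or(u32::MAX) // saturate rather than wrap
}
```

Saturating keeps the value meaningful for display ("ahead by a lot") instead of wrapping to a small number via `as u32`.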
168-308: Consider grouping related App fields into sub-structs for maintainability. The `App` struct has ~50 fields covering UI state, operation state, sync history, filtering, etc. Grouping related fields (e.g., `SyncState`, `FilterState`, `NavigationState`) would improve readability and make partial updates cleaner.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/app.rs` around lines 168 - 308: the App struct is too large and should be split into cohesive sub-structs to improve readability and make partial updates safer: define compact types like NavigationState (screen, screen_stack, should_quit, workspace_index, repo_index, scroll_offset, stat_index, dashboard_table_state, settings_index), FilterState (filter_text, filter_active, log_filter, sync_log_index), SyncState (sync_pull, sync_log_entries, sync_history, show_sync_history, expanded_repo, repo_commits, changelog_commits, changelog_total, changelog_loaded, changelog_scroll), WorkspaceState (workspaces, active_workspace, workspace_pane, base_path, repos_by_org, all_repos, orgs, local_repos, workspace_detail_scroll, settings_config_expanded), OperationStateContainer (operation_state, log_lines, status_loading, last_status_scan, check_results, checks_loading, tick_count, sync-related flags), and ConfigState (config, config_created, config_path_display, setup_state, error_message); then replace the moved fields in App with those sub-struct fields (e.g., pub navigation: NavigationState, pub filter: FilterState, etc.), implement Default/From for each sub-struct and update any code that constructed or accessed App fields to use the new nested paths (e.g., app.repo_index -> app.navigation.repo_index), and run the compiler to fix/adjust all call-sites and tests accordingly.

src/operations/clone.rs (1)
215-216: Consider acquiring the semaphore permit inside the spawned task for better parallelism. Currently, `acquire_owned().await` happens before `tokio::spawn`, which serializes task creation. Moving the acquire inside the task allows all tasks to be spawned immediately, with the semaphore naturally throttling actual execution.

♻️ Suggested refactor
for (index, repo) in repos.into_iter().enumerate() {
-   let permit = semaphore.clone().acquire_owned().await.unwrap();
+   let sem = semaphore.clone();
    let git = self.git.clone();
    // ... other clones ...
    let handle = tokio::spawn(async move {
+       let _permit = sem.acquire_owned().await.unwrap();
        // Notify progress - clone starting
        progress.on_start(&repo, index, total);
        // ... rest of task ...
-       drop(permit); // Release semaphore
+       // permit drops automatically at end of scope
        CloneResult { /* ... */ }
    });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/operations/clone.rs` around lines 215 - 216, The loop currently calls semaphore.clone().acquire_owned().await before tokio::spawn which serializes task creation; move the acquire inside the spawned async task so you call tokio::spawn(async move { let permit = semaphore.clone().acquire_owned().await.unwrap(); ... }) ensuring the permit variable is owned by the task and dropped when the work completes; update the closure to async move and keep references (e.g., repo, index) moved in so task body uses them without holding the permit across spawn.
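The "acquire inside the task" pattern can be sketched without tokio, using a std-only counting semaphore in its place; all names here are illustrative, not taken from the crate. The point it demonstrates: every task is spawned up front, and the semaphore bounds how many run at once.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

// Std-only counting semaphore standing in for tokio::sync::Semaphore.
pub struct Semaphore {
    state: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    pub fn new(permits: usize) -> Self {
        Semaphore { state: Mutex::new(permits), cv: Condvar::new() }
    }
    pub fn acquire(&self) {
        let mut n = self.state.lock().unwrap();
        while *n == 0 {
            n = self.cv.wait(n).unwrap();
        }
        *n -= 1;
    }
    pub fn release(&self) {
        *self.state.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

// Spawn all tasks immediately; each acquires its permit *inside* the task,
// so the semaphore limits concurrent execution, not task creation.
// Returns the peak number of tasks observed running simultaneously.
pub fn run_demo(limit: usize, tasks: usize) -> usize {
    let sem = Arc::new(Semaphore::new(limit));
    let gauge = Arc::new(Mutex::new((0usize, 0usize))); // (current, peak)
    let handles: Vec<_> = (0..tasks)
        .map(|_| {
            let sem = Arc::clone(&sem);
            let gauge = Arc::clone(&gauge);
            thread::spawn(move || {
                sem.acquire();
                {
                    let mut g = gauge.lock().unwrap();
                    g.0 += 1;
                    g.1 = g.1.max(g.0);
                }
                thread::sleep(Duration::from_millis(10)); // simulated clone
                gauge.lock().unwrap().0 -= 1;
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let peak = gauge.lock().unwrap().1;
    peak
}
```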
on:
  workflow_dispatch:
Release publishing won’t run automatically on tag pushes.
Line 3 only enables workflow_dispatch, while Line 142 requires a tag ref. A normal git push --tags won’t start this workflow, so release automation is skipped.
Suggested fix
on:
  workflow_dispatch:
+  push:
+    tags:
+      - "v*"

Also applies to: 142-143
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/S2-Release-GitHub.yml around lines 3 - 5, The workflow
currently only defines the manual trigger symbol workflow_dispatch, so tag
pushes won't start the release job; update the top-level on: triggers to include
a push trigger for tag refs (add a push: tags: entry) so that git push --tags
will invoke the workflow and satisfy the tag-based ref checks used later in the
release job; ensure the tag pattern in the push: tags matches the tag ref logic
referenced around the release job (the existing tag/ref checks) so the workflow
runs automatically on tag creation.
cat > formula.rb << 'FORMULA'
class GitSame < Formula
  desc "Discover and mirror GitHub org/repo structures locally"
  homepage "https://github.com/zaai-com/git-same"
  version "VERSION_PLACEHOLDER"
  license "MIT"

  if OS.mac?
    if Hardware::CPU.arm?
      url "URL_PLACEHOLDER/git-same-macos-aarch64"
      sha256 "SHA_MACOS_AARCH64_PLACEHOLDER"
    else
      url "URL_PLACEHOLDER/git-same-macos-x86_64"
      sha256 "SHA_MACOS_X86_64_PLACEHOLDER"
    end
  elsif OS.linux?
    if Hardware::CPU.arm?
      url "URL_PLACEHOLDER/git-same-linux-aarch64"
      sha256 "SHA_LINUX_AARCH64_PLACEHOLDER"
    else
      url "URL_PLACEHOLDER/git-same-linux-x86_64"
      sha256 "SHA_LINUX_X86_64_PLACEHOLDER"
    end
  end

  def install
    if OS.mac?
      bin.install "git-same-macos-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same"
    elsif OS.linux?
      bin.install "git-same-linux-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same"
    end
  end

  test do
    assert_match version.to_s, shell_output("#{bin}/git-same --version")
  end
end
FORMULA
Heredoc includes leading whitespace that will be embedded in the formula.
The cat > formula.rb << 'FORMULA' heredoc preserves the YAML indentation, resulting in each line of the formula having extra leading spaces. This may cause Homebrew to reject or misparse the formula.
🐛 Proposed fix: Use `<<-` or remove indentation
Either remove the indentation from the heredoc content, or use a shell trick to strip leading whitespace:
- cat > formula.rb << 'FORMULA'
- class GitSame < Formula
+ cat > formula.rb << 'FORMULA'
+class GitSame < Formula
+ desc "Discover and mirror GitHub org/repo structures locally"
+  homepage "https://github.com/zaai-com/git-same"

Or use sed to strip leading whitespace after generating:
FORMULA
+
+ # Remove leading whitespace from heredoc
+ sed -i 's/^ //' formula.rb
  sed -i "s|VERSION_PLACEHOLDER|${VERSION}|g" formula.rb

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/S3-Publish-Homebrew.yml around lines 62 - 99, The heredoc
starting at the `cat > formula.rb << 'FORMULA'` block preserves YAML indentation
and injects leading spaces into the generated Homebrew formula (the `class
GitSame < Formula` content), which can break parsing; fix it by removing the
indentation for the heredoc body or switching to a dedenting form (e.g., use a
left-stripping heredoc variant so the `class GitSame`, `def install`, `test do`,
etc. lines have no leading spaces) or post-process the generated `formula.rb` to
strip leading whitespace before writing/publishing.
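A minimal shell sketch of the dedent approach; the 10-space prefix below is an assumed stand-in for the workflow's actual YAML indentation, and the file paths are illustrative:

```shell
# Write a formula from an indented heredoc, as the workflow does.
cat > /tmp/formula_indented.rb << 'FORMULA'
          class GitSame < Formula
            desc "example"
          end
FORMULA

# Strip exactly the shared 10-space prefix so relative indentation survives.
# Note that 's/^ //' would remove only a single leading space per line.
sed 's/^ \{10\}//' /tmp/formula_indented.rb > /tmp/formula_clean.rb
head -1 /tmp/formula_clean.rb
```

Stripping a fixed common prefix (rather than all leading whitespace) keeps the formula's own indentation intact.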
enable_raw_mode()?;
let mut stdout = io::stdout();
execute!(stdout, EnterAlternateScreen, EnableMouseCapture)?;
let backend = CrosstermBackend::new(stdout);
let mut terminal = Terminal::new(backend)?;
Terminal cleanup is not guaranteed on initialization failures.
If setup fails after raw mode/alternate screen is enabled, early ? returns bypass restoration and can leave the user terminal in a bad state.
🛠️ Proposed hardening
// Setup terminal
enable_raw_mode()?;
let mut stdout = io::stdout();
- execute!(stdout, EnterAlternateScreen, EnableMouseCapture)?;
+ if let Err(e) = execute!(stdout, EnterAlternateScreen, EnableMouseCapture) {
+ let _ = disable_raw_mode();
+ return Err(e.into());
+ }
let backend = CrosstermBackend::new(stdout);
- let mut terminal = Terminal::new(backend)?;
+ let mut terminal = match Terminal::new(backend) {
+ Ok(t) => t,
+ Err(e) => {
+ let mut out = io::stdout();
+ let _ = execute!(out, LeaveAlternateScreen, DisableMouseCapture);
+ let _ = disable_raw_mode();
+ return Err(e.into());
+ }
+ };🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/setup/mod.rs` around lines 35 - 40, The initialization enables raw mode
and alternate screen with enable_raw_mode(), execute!(..., EnterAlternateScreen,
EnableMouseCapture) then constructs Terminal::new(backend) but returns early on
errors, which can leave the terminal in a bad state; modify setup so that any
failure after enable_raw_mode/EnterAlternateScreen/EnableMouseCapture always
restores the terminal (call disable_raw_mode() and execute!(...,
LeaveAlternateScreen, DisableMouseCapture) in the error path) — e.g. use a scope
guard/RAII or wrap the sequence in a try block that performs cleanup on Err and
ensure Terminal::new or any subsequent error triggers the restore logic.
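The scope-guard option mentioned above can be sketched with a small RAII type; `TermGuard` and `setup_terminal` are illustrative names, and the closure stands in for `disable_raw_mode()` plus `LeaveAlternateScreen`/`DisableMouseCapture`:

```rust
use std::cell::Cell;

// RAII guard: runs its restore closure on every exit path, including early
// `?` returns, so the terminal is never left in raw/alternate-screen mode.
pub struct TermGuard<F: FnMut()> {
    restore: F,
}

impl<F: FnMut()> TermGuard<F> {
    pub fn new(restore: F) -> Self {
        TermGuard { restore }
    }
}

impl<F: FnMut()> Drop for TermGuard<F> {
    fn drop(&mut self) {
        (self.restore)();
    }
}

// Hypothetical setup: the guard is created right after the terminal enters
// raw mode, so a failure in any later step still restores it.
pub fn setup_terminal(fail_midway: bool, restored: &Cell<bool>) -> Result<(), String> {
    // In the real code: enable_raw_mode()?; execute!(stdout, EnterAlternateScreen, ...)?;
    let _guard = TermGuard::new(|| restored.set(true));
    if fail_midway {
        return Err("Terminal::new failed".into()); // guard still fires here
    }
    Ok(())
}
```

In the real TUI, the guard would live for the whole session so teardown happens exactly once, on any exit path.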
let cursor_pos = state.path_cursor.min(state.base_path.len());

let input_line = Line::from(vec![
    Span::styled("  ", Style::default()),
    Span::styled(&state.base_path, input_style),
]);
let border_type = if state.path_browse_mode {
    BorderType::Thick
} else if state.path_suggestions_mode {
    BorderType::Plain
} else {
    BorderType::Thick
};
let border_color = if state.path_suggestions_mode {
    Color::DarkGray
} else {
    accent
};
let input = Paragraph::new(input_line).block(
    Block::default()
        .borders(Borders::ALL)
        .title(" Base Path ")
        .border_type(border_type)
        .border_style(Style::default().fg(border_color)),
);
frame.render_widget(input, chunks[1]);

// Show cursor in input mode
if !state.path_suggestions_mode && !state.path_browse_mode {
    let cursor_x = chunks[1].x + 1 + 2 + cursor_pos as u16;
    let cursor_y = chunks[1].y + 1;
    frame.set_cursor_position((cursor_x, cursor_y));
🧩 Analysis chain
🏁 Script executed:
cat -n src/setup/screens/path.rs | sed -n '50,100p'

Repository: ZAAI-com/git-same
Length of output: 2141
🏁 Script executed:
# Check the state structure and base_path definition
rg -n "base_path" src/setup/screens/path.rs | head -20
Length of output: 231
🏁 Script executed:
# Check if there are any multibyte character constraints in the codebase
rg -n "chars\(\)|utf|multibyte" src/setup/screens/path.rs
Length of output: 183
🏁 Script executed:
# Check the state structure definition for base_path type
rg -n "struct.*State|base_path:" src/setup/screens/path.rs -A 5 | head -40
Length of output: 43
🏁 Script executed:
# Find state definition in the broader codebase
fd -t f "\.rs$" src/setup | xargs rg -l "base_path" | head -5
Length of output: 198
🏁 Script executed:
# Check if base_path is defined/used elsewhere in the state module
rg -n "base_path" src/setup/ -t rust | head -30
Length of output: 2537
🏁 Script executed:
# Check what path_cursor is and how it's updated
rg -n "path_cursor" src/setup/state.rs -B2 -A2
Length of output: 580
🏁 Script executed:
# Check handler logic for path_cursor updates
rg -n "path_cursor" src/setup/handler.rs -B2 -A2 | head -50
Length of output: 2252
🏁 Script executed:
# Look for any character/unicode handling in cursor or text rendering
rg -n "chars\(\)|char|unicode" src/setup/handler.rs | head -20
Length of output: 43
🏁 Script executed:
# Check how chunks[1].width is used and what constraints exist
rg -n "chunks\[1\]|width" src/setup/screens/path.rs
Length of output: 425
🏁 Script executed:
# Look at the full handler code for Backspace to understand the byte handling issue
rg -n "Backspace" src/setup/handler.rs -A 10
Length of output: 1064
🏁 Script executed:
# Check if there are any tests that exercise multibyte characters
rg -n "utf|char|multibyte|ñ|中|emoji" src/setup/ -i
Length of output: 2751
🏁 Script executed:
# Check if base_path is validated or sanitized anywhere
rg -n "base_path.*valid|sanitize|ascii" src/setup/
Length of output: 147
🏁 Script executed:
# Check the complete_tests to see if multibyte characters are tested
cat -n src/setup/screens/path_tests.rs
Length of output: 3762
🏁 Script executed:
# Check how chunks[1] is calculated and what width it has
rg -n "chunks" src/setup/screens/path.rs -B 5 -A 5 | head -40
Length of output: 1520
🏁 Script executed:
# Look at the full render_path_screen function signature
rg -n "fn render_path_screen" src/setup/screens/path.rs -A 20
Length of output: 43
🏁 Script executed:
# Check the exact behavior of String::remove and multibyte handling in Rust
cat > /tmp/test_rust_bytes.md << 'EOF'
The issue is:
1. path_cursor is stored as a byte offset (usize)
2. Backspace does: path_cursor -= 1; then base_path.remove(path_cursor)
3. If someone inputs a multibyte UTF-8 char (e.g., "café"), cursor position as bytes differs from char position
4. Example: "café" = 5 bytes (c-a-f-é[2 bytes]). If cursor at byte 5 and Backspace: decrement to 4, remove byte 4 = corrupts the é
EOF
cat /tmp/test_rust_bytes.md
Length of output: 439
🏁 Script executed:
# Check if there's any validation or sanitization of paths
rg -n "validate|sanitize|check" src/setup/handler.rs | grep -i path
Length of output: 43
🏁 Script executed:
# Look for the actual width calculation of chunks[1]
rg -n "Constraint::Length|Constraint::" src/setup/screens/path.rs
Length of output: 927
🏁 Script executed:
# Check how character input is handled
rg -n "KeyCode::Char" src/setup/handler.rs -B 2 -A 5 | head -30
Length of output: 1095
🏁 Script executed:
# Check if there's any sanitization of character input
rg -n "push\(c\)|insert" src/setup/handler.rs | grep -C 2 path
Length of output: 163
🏁 Script executed:
# Verify the actual byte vs char issue with a focused example
python3 << 'EOF'
# Simulate the issue with a multibyte character path
path = "café" # This is 5 bytes, 4 chars
print(f"Path: {path}")
print(f"String length (bytes): {len(path.encode('utf-8'))}")
print(f"Char count: {len(path)}")
print(f"Bytes: {path.encode('utf-8')}")
# If cursor is at byte position 4 (after 'f'), and we decrement by 1 to get byte 3
# then try to remove byte 3, we'd be in the middle of the 'é' (which is bytes 3-4)
print("\nSimulating the bug:")
print("If cursor_pos (byte) = 4, after backspace we'd do:")
print(" cursor_pos -= 1 -> cursor_pos = 3")
print(" remove byte at index 3")
print("This lands in the middle of 'é' (UTF-8 bytes 3-4), corrupting the string!")
EOF
Length of output: 340
🏁 Script executed:
# Check if there are arrow key handlers that move cursor within the path
rg -n "Left|Right|arrow" src/setup/handler.rs
Length of output: 318
🏁 Script executed:
# Verify if chunks[1].width would be available and should be used for clamping
rg -n "chunks\[1\]\.width" src/setup/screens/path.rs
Length of output: 43
🏁 Script executed:
# Check what the Left/Right arrow handlers do
rg -n "KeyCode::Left\|KeyCode::Right" src/setup/handler.rs -A 3 -B 1 | grep -A 3 "path_cursor"
Length of output: 43
🏁 Script executed:
# Check the specific arrow key handlers in path input mode
sed -n '585,600p' src/setup/handler.rs
Length of output: 463
🏁 Script executed:
# Also check the earlier arrow handlers
sed -n '48,60p' src/setup/handler.rs
Length of output: 412
🏁 Script executed:
# Verify the exact flow: can user input multibyte chars, and can they corrupt via arrow+backspace?
python3 << 'EOF'
print("=== Verification of multibyte UTF-8 corruption vulnerability ===\n")
# Simulate user input of multibyte path
path = "café" # User can input this via KeyCode::Char
print(f"1. User enters path: {repr(path)}")
print(f" Bytes: {path.encode('utf-8')} (length={len(path.encode('utf-8'))})")
print(f" Chars: {len(path)}\n")
# path_cursor is initialized to base_path.len() (bytes)
cursor = len(path.encode('utf-8')) # cursor = 5
print(f"2. Initial cursor position (bytes): {cursor}\n")
# User presses Left arrow key 1 time
cursor -= 1 # Now cursor = 4
print(f"3. After 1 Left arrow: cursor = {cursor} (now pointing to byte 4)")
print(f" Byte 4 is the SECOND byte of 'é' (which spans bytes 3-4)\n")
# User presses Backspace
cursor -= 1 # Now cursor = 3
path_bytes = list(path.encode('utf-8'))
print(f"4. Backspace decrements cursor to {cursor}, then removes byte at index {cursor}")
print(f" Removing byte index 3 from {path_bytes}")
del path_bytes[3]
print(f" Result: {path_bytes}")
# Try to decode
try:
corrupted = bytes(path_bytes).decode('utf-8')
print(f" Decoded: {repr(corrupted)} ✓ (Rust would panic here!)")
except UnicodeDecodeError as e:
print(f" ✗ UnicodeDecodeError: {e}")
print(" In Rust, String::remove() at invalid byte boundary causes PANIC!")
EOF
Length of output: 676
Fix critical UTF-8 panic vulnerability and cursor rendering issues.
The code uses byte offsets for cursor positioning with no validation. This causes two critical problems:
1. Application crash: Arrow keys move the cursor by 1 byte. Backspace at a multibyte character boundary (e.g., after moving left in "café") calls String::remove() at an invalid index, causing a panic. Any non-ASCII path input can crash the app.

2. Cursor rendering misalignment: Line 89 casts byte count directly to screen position without converting to character count, placing the cursor at the wrong visual position for multibyte text.

3. Cursor overflow: No clamping to visible area width; the cursor can move beyond the block boundary for long paths.
🩹 Suggested fix
- let cursor_pos = state.path_cursor.min(state.base_path.len());
+ // Ensure cursor is at valid char boundary and convert to char count for rendering
+ let cursor_byte_pos = state.path_cursor.min(state.base_path.len());
+ let cursor_char_pos = state.base_path[..cursor_byte_pos].chars().count() as u16;
@@
if !state.path_suggestions_mode && !state.path_browse_mode {
- let cursor_x = chunks[1].x + 1 + 2 + cursor_pos as u16;
+ let cursor_base_x = chunks[1].x + 3; // left border + left padding
+ let cursor_max_x = chunks[1].x + chunks[1].width.saturating_sub(2);
+ let cursor_x = cursor_base_x.saturating_add(cursor_char_pos).min(cursor_max_x);
let cursor_y = chunks[1].y + 1;
frame.set_cursor_position((cursor_x, cursor_y));
}Also update arrow key handlers to ensure cursor stays at char boundaries:
KeyCode::Left => {
if state.path_cursor > 0 {
- state.path_cursor -= 1;
+ let mut pos = state.path_cursor - 1;
+ while pos > 0 && !state.base_path.is_char_boundary(pos) {
+ pos -= 1;
+ }
+ state.path_cursor = pos;
}
}
KeyCode::Right => {
if state.path_cursor < state.base_path.len() {
- state.path_cursor += 1;
+ let mut pos = state.path_cursor + 1;
+ while pos < state.base_path.len() && !state.base_path.is_char_boundary(pos) {
+ pos += 1;
+ }
+ state.path_cursor = pos;
}
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/setup/screens/path.rs` around lines 60 - 91, state.path_cursor is treated
as a byte offset into state.base_path which causes panics on multibyte UTF-8 and
misrenders the cursor; change all cursor logic to operate on character
(grapheme) indices and map those to byte offsets only when performing string
mutations. Specifically: treat state.path_cursor as a character index, update
any arrow/backspace handlers to move by character (not byte) boundaries using
char/grapheme iteration, convert the character index to a byte index with
char_indices (or unicode-segmentation grapheme mapping) before calling
String::remove/insert on state.base_path, compute cursor_pos as the number of
characters up to the cursor (not bytes) when calculating cursor_x for
frame.set_cursor_position, and clamp the computed cursor_x to the visible block
width (derived from chunks[1].width minus borders/padding) so the cursor never
overflows the input block; also preserve the early checks for
state.path_suggestions_mode and state.path_browse_mode when deciding to show the
cursor.
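The boundary-safe cursor moves described above can be condensed into standalone helpers; the function names are illustrative, but the logic mirrors the suggested handler fix, and `String::remove` is only ever called on a char boundary:

```rust
// Move a byte-indexed cursor left to the previous UTF-8 char boundary.
pub fn move_left(s: &str, cursor: usize) -> usize {
    let mut pos = cursor.min(s.len());
    if pos == 0 {
        return 0;
    }
    pos -= 1;
    while pos > 0 && !s.is_char_boundary(pos) {
        pos -= 1;
    }
    pos
}

// Move right to the next char boundary.
pub fn move_right(s: &str, cursor: usize) -> usize {
    let mut pos = cursor.min(s.len());
    if pos == s.len() {
        return pos;
    }
    pos += 1;
    while pos < s.len() && !s.is_char_boundary(pos) {
        pos += 1;
    }
    pos
}

// Backspace deletes the whole char before the cursor; since move_left always
// lands on a boundary, inputs like "café" can no longer panic.
pub fn backspace(s: &mut String, cursor: usize) -> usize {
    let pos = cursor.min(s.len());
    let new_pos = move_left(s, pos);
    if new_pos < pos {
        s.remove(new_pos);
    }
    new_pos
}
```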
let bottom_left = if path_popup_active {
    vec![
        Span::styled("[Esc]", blue),
        Span::styled(" Close Popup", dim),
    ]
} else {
    vec![
        Span::styled("[q]", blue),
        Span::styled(" Quit ", dim),
        Span::styled("[Esc]", blue),
        Span::styled(" Back", dim),
    ]
};
Fix misleading footer shortcuts in the setup status bar.
The footer advertises [Esc] Back and always shows [q] Quit, but actual behavior is cancel on Esc, and q is disabled on SelectPath. This can cause accidental cancellation and inconsistent UX.
🔧 Proposed fix
- let bottom_left = if path_popup_active {
+ let bottom_left = if path_popup_active {
vec![
Span::styled("[Esc]", blue),
Span::styled(" Close Popup", dim),
]
+ } else if state.step == SetupStep::SelectPath {
+ vec![
+ Span::styled("[Esc]", blue),
+ Span::styled(" Cancel", dim),
+ ]
} else {
vec![
Span::styled("[q]", blue),
Span::styled(" Quit ", dim),
Span::styled("[Esc]", blue),
- Span::styled(" Back", dim),
+ Span::styled(" Cancel", dim),
]
};📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let bottom_left = if path_popup_active {
    vec![
        Span::styled("[Esc]", blue),
        Span::styled(" Close Popup", dim),
    ]
} else if state.step == SetupStep::SelectPath {
    vec![
        Span::styled("[Esc]", blue),
        Span::styled(" Cancel", dim),
    ]
} else {
    vec![
        Span::styled("[q]", blue),
        Span::styled(" Quit ", dim),
        Span::styled("[Esc]", blue),
        Span::styled(" Cancel", dim),
    ]
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/setup/ui.rs` around lines 315 - 327, Update the footer generation in
ui.rs so the displayed shortcuts match actual behavior: when path_popup_active
is true (the SelectPath state) show "[Esc] Cancel" instead of "Back" and do not
advertise "[q] Quit" there; otherwise keep the normal "[q] Quit" and "[Esc]
Back" entries. Modify the branch that builds bottom_left (referencing the
path_popup_active variable and the bottom_left vector) to conditionally include
or omit the "[q] Quit" Span and to change the Esc Span text to "Cancel" when in
the popup/select-path state so UI text aligns with actual key handling.
Adds discovery caching, SSH key detection, and clone CLI options. Improves GitHub pagination retry handling and cache/clone behavior. Updates docs and changelog to reflect the new config and features.
Summary by CodeRabbit
New Features
Documentation