diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md
new file mode 100644
index 0000000..1131589
--- /dev/null
+++ b/.claude/CLAUDE.md
@@ -0,0 +1,112 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+**Author:** Manuel from Eggenfelden.
+
+## Build & Test Commands
+
+```bash
+cargo build                          # Debug build
+cargo build --release                # Optimized release build (LTO, stripped)
+cargo test                           # Run all tests
+cargo test <test_name>               # Run a single test by name (substring match)
+cargo test --test integration_test   # Run only integration tests
+cargo fmt -- --check                 # Check formatting
+cargo clippy -- -D warnings          # Lint (zero warnings enforced)
+```
+
+Logging is controlled via the `GISA_LOG` env var (e.g., `GISA_LOG=debug cargo run -- sync`).
+
+## Architecture
+
+Git-Same is a Rust CLI + TUI tool that discovers GitHub org/repo structures and mirrors them locally with parallel cloning and syncing.
+
+**Binary aliases:** `git-same`, `gitsame`, `gitsa`, `gisa` — all point to `src/main.rs`.
+
+**Dual mode:** Running with a subcommand (`gisa sync`) uses the CLI path. Running without a subcommand (`gisa`) launches the interactive TUI.
+
+**CLI flow:** CLI parsing (`src/cli.rs`) → `main.rs` routes to command handler → handler orchestrates modules.
+
+**Commands:** `init`, `setup`, `sync`, `status`, `workspace {list,default}`, `reset`.
+
+### Core modules
+
+- **`auth/`** — GitHub CLI (`gh`) authentication only (`gh auth token`), with SSH clone support
+- **`config/`** — TOML config parser. Default location: `~/.config/git-same/config.toml`. Sections: `[clone]`, `[filters]`, `[[providers]]`
+- **`discovery/`** — `DiscoveryOrchestrator` coordinates repo discovery via providers, applies filters, builds `ActionPlan` (what to clone vs sync)
+- **`operations/clone/`** — `CloneManager` handles concurrent cloning (configurable 1–32, default 4)
+- **`operations/sync/`** — `SyncManager` handles fetch/pull with concurrency. Detects repos with uncommitted changes and optionally skips them
+- **`provider/`** — Trait-based provider abstraction (`Provider` trait in `traits.rs`). GitHub implementation in `github/client.rs` with pagination. Mock provider in `mock.rs` for testing
+- **`git/`** — `GitOperations` trait (`traits.rs`) with `ShellGit` implementation (`shell.rs`) that shells out to `git` commands
+- **`cache/`** — `DiscoveryCache` with TTL-based validity at `~/.cache/git-same/`
+- **`errors/`** — Custom error hierarchy: `AppError`, `GitError`, `ProviderError` with `suggested_action()` methods
+- **`output/`** — Verbosity levels and `indicatif` progress bars (`CloneProgressBar`, `SyncProgressBar`, `DiscoveryProgressBar`)
+- **`types/repo.rs`** — Core data types: `Repo`, `Org`, `ActionPlan`, `OpResult`, `OpSummary`
+
+### TUI module (`src/tui/`, feature-gated behind `tui`)
+
+Elm architecture: `app.rs` = Model, `screens/` = View, `handler.rs` = Update.
+
+- **`app.rs`** — `App` struct holds all TUI state. `Screen` enum: `InitCheck`, `SetupWizard`, `Workspace`, `Dashboard`, `Progress`, `Settings`
+- **`handler.rs`** — Keyboard input handlers per screen + `handle_backend_message` for async results
+- **`backend.rs`** — Spawns Tokio tasks for async operations (sync, status scan), sends `BackendMessage` variants via unbounded channels
+- **`event.rs`** — `AppEvent` (terminal input, backend messages, ticks) and `BackendMessage` enum
+- **`screens/`** — Stateless render functions per screen (dashboard, workspace, settings, etc.)
+- **`widgets/`** — Shared widgets (status bar, spinner)
+- **`setup/`** — Setup wizard state machine (shared between CLI `setup` command and TUI `SetupWizard` screen)
+
+### Recent workspace-screen updates
+
+- Sidebar navigation now supports `←`/`→` in addition to `↑`/`↓` and `Tab` (see `handle_workspace_key`).
+- Workspace actions changed: `[f]` opens workspace folder; `[o]` is no longer bound.
+- Default workspace behavior is now set-only from `[d]`: pressing `[d]` on the current default does not clear it.
+- Workspace detail view is grouped into sections (`Identity`, `Paths`, `Sync`, `Account`) with wrapped org rendering for narrow widths.
+- Last sync display now shows a relative time and, when a parseable RFC3339 timestamp exists, an absolute timestamp line.
+- Added focused tests in `src/tui/handler.rs` and `src/tui/screens/workspace.rs` for key handling and workspace detail formatting helpers.
+
+### Key patterns
+
+- **Trait-based abstractions:** `GitOperations`, `Provider`, progress traits — enables mocking in tests
+- **Concurrency:** Tokio tasks with `Arc` for sharing progress reporters across tasks
+- **Error handling:** `thiserror` for typed errors + `anyhow` for propagation. Custom `Result` type alias in `errors/`
+- **Channel-based TUI updates:** Backend operations send `BackendMessage` through `mpsc::UnboundedSender`, processed by the TUI event loop
+- **Arrow-only navigation:** All directional movement uses arrow keys only (`←` `↑` `↓` `→`). No vim-style `j`/`k`/`h`/`l` letter navigation. Display hints use `[←] [↑] [↓] [→] Move`.
+
+## Formatting
+
+`rustfmt.toml`: `max_width = 100`, `tab_spaces = 4`, edition 2021.
+
+## Testing
+
+**Convention:** Colocated test files using the `#[path]` attribute. Every source file `foo.rs` has a companion `foo_tests.rs` in the same directory.
+
+In the source file:
+```rust
+#[cfg(test)]
+#[path = "foo_tests.rs"]
+mod tests;
+```
+
+The test file contains `use super::*;` and all `#[test]` / `#[tokio::test]` functions.
+
+**Do not** write inline `#[cfg(test)] mod tests { ... }` blocks — always use separate `_tests.rs` files.
+
+**Integration tests** remain in `tests/integration_test.rs`.
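To make the splice concrete, here is a minimal self-contained sketch of what the compiler effectively sees once `#[path]` has pulled the companion file in (the `add` function, test, and `main` are hypothetical, and the inline `mod tests` body is shown only to illustrate the result — in the real tree it lives in the separate `foo_tests.rs` file):

```rust
// Hypothetical foo.rs after `#[path = "foo_tests.rs"] mod tests;` has
// spliced the companion file in as the `tests` module.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    // foo_tests.rs opens with `use super::*;` so tests see the parent's items.
    use super::*;

    #[test]
    fn add_works() {
        assert_eq!(add(2, 2), 4);
    }
}

fn main() {
    // Only present so this sketch compiles and runs as a standalone file.
    assert_eq!(add(2, 2), 4);
}
```

Because the test file is compiled as part of `foo.rs`, `cargo test add_works` selects the test by name just as for an inline module.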
+ +## CI/CD Workflows + +All workflows are `workflow_dispatch` (manual trigger) in `.github/workflows/`: + +| Workflow | Purpose | Trigger | +|----------|---------|---------| +| `S1-Test-CI.yml` | fmt, clippy, test, build dry-run, coverage, audit | Manual dispatch | +| `S2-Release-GitHub.yml` | Full CI + cross-compile 6 targets + GitHub Release | Manual dispatch (select tag) | +| `S3-Publish-Homebrew.yml` | Update Homebrew tap formula | Manual dispatch (select tag) | +| `S4-Publish-Crates.yml` | `cargo publish` to crates.io | Manual dispatch (select tag) | + +S2 runs all S1 jobs (test, coverage, audit) as gates before building release artifacts. + +## Specs & Docs + +Design specifications live in `docs/specs/` (S1–S5). Internal documentation in `.context/GIT-SAME-DOCUMENTATION.md`. diff --git a/.github/workflows/S1-Test-CI.yml b/.github/workflows/S1-Test-CI.yml new file mode 100644 index 0000000..3f489a9 --- /dev/null +++ b/.github/workflows/S1-Test-CI.yml @@ -0,0 +1,114 @@ +name: S1 - Test CI + +on: + workflow_dispatch: + +env: + CARGO_TERM_COLOR: always + RUST_BACKTRACE: 1 + +jobs: + test: + name: Test (${{ matrix.os }}) + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + os: [ubuntu-latest, macos-latest, windows-latest] + rust: [stable] + include: + - os: ubuntu-latest + rust: beta + + steps: + - uses: actions/checkout@v6 + + - name: Install Rust + uses: dtolnay/rust-toolchain@stable + with: + toolchain: ${{ matrix.rust }} + components: rustfmt, clippy + + - uses: Swatinem/rust-cache@v2 + + - name: Check formatting + run: cargo fmt --all -- --check + + - name: Clippy + run: cargo clippy --all-targets --all-features -- -D warnings + + - name: Run tests + run: cargo test --all-features + + build: + name: Build (${{ matrix.target }}) + needs: [test] + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + include: + - target: x86_64-unknown-linux-gnu + os: ubuntu-latest + - target: aarch64-unknown-linux-gnu + os: ubuntu-latest + - 
target: x86_64-apple-darwin + os: macos-latest + - target: aarch64-apple-darwin + os: macos-latest + - target: x86_64-pc-windows-msvc + os: windows-latest + - target: aarch64-pc-windows-msvc + os: windows-latest + + steps: + - uses: actions/checkout@v6 + + - name: Install Rust + uses: dtolnay/rust-toolchain@stable + with: + targets: ${{ matrix.target }} + + - name: Install cross-compilation linker (Linux ARM64) + if: matrix.target == 'aarch64-unknown-linux-gnu' + run: sudo apt-get update && sudo apt-get install -y gcc-aarch64-linux-gnu + + - uses: Swatinem/rust-cache@v2 + + - name: Build release + run: cargo build --release --target ${{ matrix.target }} + env: + CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER: ${{ matrix.target == 'aarch64-unknown-linux-gnu' && 'aarch64-linux-gnu-gcc' || '' }} + + coverage: + name: Code Coverage + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v6 + + - name: Install Rust + uses: dtolnay/rust-toolchain@stable + + - uses: Swatinem/rust-cache@v2 + + - name: Install cargo-tarpaulin + uses: taiki-e/install-action@v2 + with: + tool: cargo-tarpaulin + + - name: Generate coverage + run: cargo tarpaulin --all-features --workspace --timeout 120 --out xml + + - name: Upload coverage to Codecov + uses: codecov/codecov-action@v5 + with: + fail_ci_if_error: false + + audit: + name: Security Audit + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v6 + - uses: rustsec/audit-check@v2 + with: + token: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/S2-Release-GitHub.yml b/.github/workflows/S2-Release-GitHub.yml new file mode 100644 index 0000000..dcda46e --- /dev/null +++ b/.github/workflows/S2-Release-GitHub.yml @@ -0,0 +1,165 @@ +name: S2 - Release GitHub + +on: + workflow_dispatch: + +env: + CARGO_TERM_COLOR: always + RUST_BACKTRACE: 1 + +permissions: + contents: read + +jobs: + test: + name: Test (${{ matrix.os }}) + runs-on: ${{ matrix.os }} + strategy: + fail-fast: false + matrix: + os: [ubuntu-latest, 
macos-latest, windows-latest] + rust: [stable] + include: + - os: ubuntu-latest + rust: beta + + steps: + - uses: actions/checkout@v6 + + - name: Install Rust + uses: dtolnay/rust-toolchain@stable + with: + toolchain: ${{ matrix.rust }} + components: rustfmt, clippy + + - uses: Swatinem/rust-cache@v2 + + - name: Check formatting + run: cargo fmt --all -- --check + + - name: Clippy + run: cargo clippy --all-targets --all-features -- -D warnings + + - name: Run tests + run: cargo test --all-features + + coverage: + name: Code Coverage + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v6 + + - name: Install Rust + uses: dtolnay/rust-toolchain@stable + + - uses: Swatinem/rust-cache@v2 + + - name: Install cargo-tarpaulin + uses: taiki-e/install-action@v2 + with: + tool: cargo-tarpaulin + + - name: Generate coverage + run: cargo tarpaulin --all-features --workspace --timeout 120 --out xml + + - name: Upload coverage to Codecov + uses: codecov/codecov-action@v5 + with: + fail_ci_if_error: false + + audit: + name: Security Audit + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v6 + - uses: rustsec/audit-check@v2 + with: + token: ${{ secrets.GITHUB_TOKEN }} + + build-release-assets: + name: Build Release Asset (${{ matrix.target }}) + needs: [test, coverage, audit] + runs-on: ${{ matrix.os }} + strategy: + matrix: + include: + - target: x86_64-unknown-linux-gnu + os: ubuntu-latest + artifact_name: git-same + asset_name: git-same-linux-x86_64 + - target: aarch64-unknown-linux-gnu + os: ubuntu-latest + artifact_name: git-same + asset_name: git-same-linux-aarch64 + - target: x86_64-apple-darwin + os: macos-latest + artifact_name: git-same + asset_name: git-same-macos-x86_64 + - target: aarch64-apple-darwin + os: macos-latest + artifact_name: git-same + asset_name: git-same-macos-aarch64 + - target: x86_64-pc-windows-msvc + os: windows-latest + artifact_name: git-same.exe + asset_name: git-same-windows-x86_64.exe + - target: aarch64-pc-windows-msvc + os: 
windows-latest + artifact_name: git-same.exe + asset_name: git-same-windows-aarch64.exe + + steps: + - uses: actions/checkout@v6 + + - name: Install Rust + uses: dtolnay/rust-toolchain@stable + with: + targets: ${{ matrix.target }} + + - name: Install cross-compilation linker (Linux ARM64) + if: matrix.target == 'aarch64-unknown-linux-gnu' + run: sudo apt-get update && sudo apt-get install -y gcc-aarch64-linux-gnu + + - uses: Swatinem/rust-cache@v2 + + - name: Build + run: cargo build --release --target ${{ matrix.target }} + env: + CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER: ${{ matrix.target == 'aarch64-unknown-linux-gnu' && 'aarch64-linux-gnu-gcc' || '' }} + + - name: Rename binary + shell: bash + run: | + mv target/${{ matrix.target }}/release/${{ matrix.artifact_name }} ${{ matrix.asset_name }} + + - name: Upload artifact + uses: actions/upload-artifact@v4 + with: + name: ${{ matrix.asset_name }} + path: ${{ matrix.asset_name }} + + publish-release: + name: Publish GitHub Release + needs: [build-release-assets] + runs-on: ubuntu-latest + if: startsWith(github.ref, 'refs/tags/') + permissions: + contents: write + steps: + - name: Download built artifacts + uses: actions/download-artifact@v4 + with: + path: artifacts + + - name: Collect release assets + shell: bash + run: | + mkdir -p release-assets + find artifacts -type f -exec cp {} release-assets/ \; + + - name: Create/update release + uses: softprops/action-gh-release@v2 + with: + files: release-assets/* + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/S3-Publish-Homebrew.yml b/.github/workflows/S3-Publish-Homebrew.yml new file mode 100644 index 0000000..5e9f126 --- /dev/null +++ b/.github/workflows/S3-Publish-Homebrew.yml @@ -0,0 +1,124 @@ +name: S3 - Publish Homebrew + +on: + workflow_dispatch: + inputs: + tag: + description: "Release tag (e.g., v0.9.0)" + required: true + type: string + +env: + TAP_REPO: zaai-com/homebrew-tap + FORMULA_NAME: git-same + +jobs: + 
homebrew: + name: Update Homebrew Formula + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v6 + + - name: Get version from tag + id: version + run: | + TAG="${{ inputs.tag }}" + if [ -z "$TAG" ]; then + TAG="${GITHUB_REF#refs/tags/}" + fi + if [ -z "$TAG" ] || [ "$TAG" = "${GITHUB_REF}" ]; then + echo "Invalid tag. Provide workflow_dispatch input 'tag' (e.g., v0.9.0)." + exit 1 + fi + echo "tag=$TAG" >> "$GITHUB_OUTPUT" + echo "version=${TAG#v}" >> "$GITHUB_OUTPUT" + + - name: Download release assets + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + run: | + mkdir -p assets + gh release download "${{ steps.version.outputs.tag }}" \ + --pattern "git-same-linux-x86_64" \ + --pattern "git-same-linux-aarch64" \ + --pattern "git-same-macos-x86_64" \ + --pattern "git-same-macos-aarch64" \ + --dir assets + + - name: Compute SHA256 hashes + id: sha + run: | + echo "linux_x86_64=$(sha256sum assets/git-same-linux-x86_64 | cut -d' ' -f1)" >> "$GITHUB_OUTPUT" + echo "linux_aarch64=$(sha256sum assets/git-same-linux-aarch64 | cut -d' ' -f1)" >> "$GITHUB_OUTPUT" + echo "macos_x86_64=$(sha256sum assets/git-same-macos-x86_64 | cut -d' ' -f1)" >> "$GITHUB_OUTPUT" + echo "macos_aarch64=$(sha256sum assets/git-same-macos-aarch64 | cut -d' ' -f1)" >> "$GITHUB_OUTPUT" + + - name: Generate formula + run: | + VERSION="${{ steps.version.outputs.version }}" + TAG="${{ steps.version.outputs.tag }}" + REPO_URL="https://github.com/zaai-com/git-same/releases/download/${TAG}" + + cat > formula.rb << 'FORMULA' + class GitSame < Formula + desc "Discover and mirror GitHub org/repo structures locally" + homepage "https://github.com/zaai-com/git-same" + version "VERSION_PLACEHOLDER" + license "MIT" + + if OS.mac? + if Hardware::CPU.arm? + url "URL_PLACEHOLDER/git-same-macos-aarch64" + sha256 "SHA_MACOS_AARCH64_PLACEHOLDER" + else + url "URL_PLACEHOLDER/git-same-macos-x86_64" + sha256 "SHA_MACOS_X86_64_PLACEHOLDER" + end + elsif OS.linux? + if Hardware::CPU.arm? 
+ url "URL_PLACEHOLDER/git-same-linux-aarch64" + sha256 "SHA_LINUX_AARCH64_PLACEHOLDER" + else + url "URL_PLACEHOLDER/git-same-linux-x86_64" + sha256 "SHA_LINUX_X86_64_PLACEHOLDER" + end + end + + def install + if OS.mac? + bin.install "git-same-macos-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same" + elsif OS.linux? + bin.install "git-same-linux-#{Hardware::CPU.arm? ? "aarch64" : "x86_64"}" => "git-same" + end + end + + test do + assert_match version.to_s, shell_output("#{bin}/git-same --version") + end + end + FORMULA + + sed -i "s|VERSION_PLACEHOLDER|${VERSION}|g" formula.rb + sed -i "s|URL_PLACEHOLDER|${REPO_URL}|g" formula.rb + sed -i "s|SHA_LINUX_X86_64_PLACEHOLDER|${{ steps.sha.outputs.linux_x86_64 }}|g" formula.rb + sed -i "s|SHA_MACOS_X86_64_PLACEHOLDER|${{ steps.sha.outputs.macos_x86_64 }}|g" formula.rb + sed -i "s|SHA_LINUX_AARCH64_PLACEHOLDER|${{ steps.sha.outputs.linux_aarch64 }}|g" formula.rb + sed -i "s|SHA_MACOS_AARCH64_PLACEHOLDER|${{ steps.sha.outputs.macos_aarch64 }}|g" formula.rb + + - name: Push formula to tap + env: + TAP_TOKEN: ${{ secrets.HOMEBREW_TAP_TOKEN }} + run: | + git clone "https://x-access-token:${TAP_TOKEN}@github.com/${TAP_REPO}.git" tap + mkdir -p tap/Formula + cp formula.rb "tap/Formula/${FORMULA_NAME}.rb" + cd tap + git config user.name "github-actions[bot]" + git config user.email "github-actions[bot]@users.noreply.github.com" + git add "Formula/${FORMULA_NAME}.rb" + if git diff --cached --quiet; then + echo "Formula unchanged, skipping commit" + else + git commit -m "Update ${FORMULA_NAME} to ${{ steps.version.outputs.version }}" + git push + fi diff --git a/.github/workflows/S4-Publish-Crates.yml b/.github/workflows/S4-Publish-Crates.yml new file mode 100644 index 0000000..6120a0a --- /dev/null +++ b/.github/workflows/S4-Publish-Crates.yml @@ -0,0 +1,24 @@ +name: S4 - Publish Crates + +on: + workflow_dispatch: + +env: + CARGO_TERM_COLOR: always + +jobs: + publish: + name: Publish to crates.io + runs-on: 
ubuntu-latest
+    steps:
+      - uses: actions/checkout@v6
+
+      - name: Install Rust
+        uses: dtolnay/rust-toolchain@stable
+
+      - uses: Swatinem/rust-cache@v2
+
+      - name: Publish
+        run: cargo publish
+        env:
+          CARGO_REGISTRY_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..f1fba48
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,36 @@
+# Rust
+/target/
+**/*.rs.bk
+*.pdb
+
+# IDE
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+.DS_Store
+
+# Config (don't commit user configs)
+config.toml
+*.local.toml
+
+# Cache
+.cache/
+*.cache
+
+# Test artifacts
+/tmp/
+/test-output/
+*.profraw
+*.profdata
+
+# Environment
+.env
+.env.local
+
+# Conductor
+.context/
+
+# Claude Code
+.claude/settings.local.json
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000..2995166
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,112 @@
+# AGENTS.md
+
+This file provides guidance to coding agents when working with code in this repository.
+
+**Author:** Manuel from Eggenfelden.
+
+## Build & Test Commands
+
+```bash
+cargo build                          # Debug build
+cargo build --release                # Optimized release build (LTO, stripped)
+cargo test                           # Run all tests
+cargo test <test_name>               # Run a single test by name (substring match)
+cargo test --test integration_test   # Run only integration tests
+cargo fmt -- --check                 # Check formatting
+cargo clippy -- -D warnings          # Lint (zero warnings enforced)
+```
+
+Logging is controlled via the `GISA_LOG` env var (e.g., `GISA_LOG=debug cargo run -- sync`).
+
+## Architecture
+
+Git-Same is a Rust CLI + TUI tool that discovers GitHub org/repo structures and mirrors them locally with parallel cloning and syncing.
+
+**Binary aliases:** `git-same`, `gitsame`, `gitsa`, `gisa` — all point to `src/main.rs`.
+
+**Dual mode:** Running with a subcommand (`gisa sync`) uses the CLI path. Running without a subcommand (`gisa`) launches the interactive TUI.
+
+**CLI flow:** CLI parsing (`src/cli.rs`) → `main.rs` routes to command handler → handler orchestrates modules.
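The dual-mode dispatch above can be sketched as follows (illustrative only — the real `src/main.rs` routes through the clap-based parser in `src/cli.rs`, and only `sync` is spelled out here):

```rust
// Route based on whether a subcommand was given: none => TUI, some => CLI path.
use std::env;

fn route(first_arg: Option<&str>) -> &'static str {
    match first_arg {
        None => "launch TUI",          // `gisa`
        Some("sync") => "run sync",    // `gisa sync`
        Some(_) => "run other command",
    }
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    println!("{}", route(args.first().map(String::as_str)));
}
```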
+
+**Commands:** `init`, `setup`, `sync`, `status`, `workspace {list,default}`, `reset`.
+
+### Core modules
+
+- **`auth/`** — GitHub CLI (`gh`) authentication only (`gh auth token`), with SSH clone support
+- **`config/`** — TOML config parser. Default location: `~/.config/git-same/config.toml`. Sections: `[clone]`, `[filters]`, `[[providers]]`
+- **`discovery/`** — `DiscoveryOrchestrator` coordinates repo discovery via providers, applies filters, builds `ActionPlan` (what to clone vs sync)
+- **`operations/clone/`** — `CloneManager` handles concurrent cloning (configurable 1–32, default 4)
+- **`operations/sync/`** — `SyncManager` handles fetch/pull with concurrency. Detects repos with uncommitted changes and optionally skips them
+- **`provider/`** — Trait-based provider abstraction (`Provider` trait in `traits.rs`). GitHub implementation in `github/client.rs` with pagination. Mock provider in `mock.rs` for testing
+- **`git/`** — `GitOperations` trait (`traits.rs`) with `ShellGit` implementation (`shell.rs`) that shells out to `git` commands
+- **`cache/`** — `DiscoveryCache` with TTL-based validity at `~/.cache/git-same/`
+- **`errors/`** — Custom error hierarchy: `AppError`, `GitError`, `ProviderError` with `suggested_action()` methods
+- **`output/`** — Verbosity levels and `indicatif` progress bars (`CloneProgressBar`, `SyncProgressBar`, `DiscoveryProgressBar`)
+- **`types/repo.rs`** — Core data types: `Repo`, `Org`, `ActionPlan`, `OpResult`, `OpSummary`
+
+### TUI module (`src/tui/`, feature-gated behind `tui`)
+
+Elm architecture: `app.rs` = Model, `screens/` = View, `handler.rs` = Update.
+
+- **`app.rs`** — `App` struct holds all TUI state. `Screen` enum: `InitCheck`, `SetupWizard`, `Workspace`, `Dashboard`, `Progress`, `Settings`
+- **`handler.rs`** — Keyboard input handlers per screen + `handle_backend_message` for async results
+- **`backend.rs`** — Spawns Tokio tasks for async operations (sync, status scan), sends `BackendMessage` variants via unbounded channels
+- **`event.rs`** — `AppEvent` (terminal input, backend messages, ticks) and `BackendMessage` enum
+- **`screens/`** — Stateless render functions per screen (dashboard, workspace, settings, etc.)
+- **`widgets/`** — Shared widgets (status bar, spinner)
+- **`setup/`** — Setup wizard state machine (shared between CLI `setup` command and TUI `SetupWizard` screen)
+
+### Recent workspace-screen updates
+
+- Sidebar navigation now supports `←`/`→` in addition to `↑`/`↓` and `Tab` (see `handle_workspace_key`).
+- Workspace actions changed: `[f]` opens workspace folder; `[o]` is no longer bound.
+- Default workspace behavior is now set-only from `[d]`: pressing `[d]` on the current default does not clear it.
+- Workspace detail view is grouped into sections (`Identity`, `Paths`, `Sync`, `Account`) with wrapped org rendering for narrow widths.
+- Last sync display now shows a relative time and, when a parseable RFC3339 timestamp exists, an absolute timestamp line.
+- Added focused tests in `src/tui/handler.rs` and `src/tui/screens/workspace.rs` for key handling and workspace detail formatting helpers.
+
+### Key patterns
+
+- **Trait-based abstractions:** `GitOperations`, `Provider`, progress traits — enables mocking in tests
+- **Concurrency:** Tokio tasks with `Arc` for sharing progress reporters across tasks
+- **Error handling:** `thiserror` for typed errors + `anyhow` for propagation. Custom `Result` type alias in `errors/`
+- **Channel-based TUI updates:** Backend operations send `BackendMessage` through `mpsc::UnboundedSender`, processed by the TUI event loop
+- **Arrow-only navigation:** All directional movement uses arrow keys only (`←` `↑` `↓` `→`). No vim-style `j`/`k`/`h`/`l` letter navigation. Display hints use `[←] [↑] [↓] [→] Move`.
+
+## Formatting
+
+`rustfmt.toml`: `max_width = 100`, `tab_spaces = 4`, edition 2021.
+
+## Testing
+
+**Convention:** Colocated test files using the `#[path]` attribute. Every source file `foo.rs` has a companion `foo_tests.rs` in the same directory.
+
+In the source file:
+```rust
+#[cfg(test)]
+#[path = "foo_tests.rs"]
+mod tests;
+```
+
+The test file contains `use super::*;` and all `#[test]` / `#[tokio::test]` functions.
+
+**Do not** write inline `#[cfg(test)] mod tests { ... }` blocks — always use separate `_tests.rs` files.
+
+**Integration tests** remain in `tests/integration_test.rs`.
+
+## CI/CD Workflows
+
+All workflows are `workflow_dispatch` (manual trigger) in `.github/workflows/`:
+
+| Workflow | Purpose | Trigger |
+|----------|---------|---------|
+| `S1-Test-CI.yml` | fmt, clippy, test, build dry-run, coverage, audit | Manual dispatch |
+| `S2-Release-GitHub.yml` | Full CI + cross-compile 6 targets + GitHub Release | Manual dispatch (select tag) |
+| `S3-Publish-Homebrew.yml` | Update Homebrew tap formula | Manual dispatch (select tag) |
+| `S4-Publish-Crates.yml` | `cargo publish` to crates.io | Manual dispatch (select tag) |
+
+S2 runs all S1 jobs (test, coverage, audit) as gates before building release artifacts.
+
+## Specs & Docs
+
+Design specifications live in `docs/specs/` (S1–S5). Internal documentation in `.context/GIT-SAME-DOCUMENTATION.md`.
diff --git a/Cargo.lock b/Cargo.lock
new file mode 100644
index 0000000..1b119bd
--- /dev/null
+++ b/Cargo.lock
@@ -0,0 +1,3802 @@
+# This file is automatically @generated by Cargo.
+# It is not intended for manual editing.
+version = 4 + +[[package]] +name = "aho-corasick" +version = "1.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ddd31a130427c27518df266943a5308ed92d4b226cc639f5a8f1002816174301" +dependencies = [ + "memchr", +] + +[[package]] +name = "allocator-api2" +version = "0.2.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923" + +[[package]] +name = "android_system_properties" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311" +dependencies = [ + "libc", +] + +[[package]] +name = "anstream" +version = "0.6.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a" +dependencies = [ + "anstyle", + "anstyle-parse", + "anstyle-query", + "anstyle-wincon", + "colorchoice", + "is_terminal_polyfill", + "utf8parse", +] + +[[package]] +name = "anstyle" +version = "1.0.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78" + +[[package]] +name = "anstyle-parse" +version = "0.2.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2" +dependencies = [ + "utf8parse", +] + +[[package]] +name = "anstyle-query" +version = "1.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc" +dependencies = [ + "windows-sys 0.61.2", +] + +[[package]] +name = "anstyle-wincon" +version = "3.0.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d" +dependencies = [ + "anstyle", + "once_cell_polyfill", + 
"windows-sys 0.61.2", +] + +[[package]] +name = "anyhow" +version = "1.0.102" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7f202df86484c868dbad7eaa557ef785d5c66295e41b460ef922eca0723b842c" + +[[package]] +name = "assert-json-diff" +version = "2.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "47e4f2b81832e72834d7518d8487a0396a28cc408186a2e8854c0f98011faf12" +dependencies = [ + "serde", + "serde_json", +] + +[[package]] +name = "async-trait" +version = "0.1.89" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9035ad2d096bed7955a320ee7e2230574d28fd3c3a0f186cbea1ff3c7eed5dbb" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "atomic" +version = "0.6.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a89cbf775b137e9b968e67227ef7f775587cde3fd31b0d8599dbd0f598a48340" +dependencies = [ + "bytemuck", +] + +[[package]] +name = "atomic-waker" +version = "1.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0" + +[[package]] +name = "autocfg" +version = "1.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8" + +[[package]] +name = "aws-lc-rs" +version = "1.16.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d9a7b350e3bb1767102698302bc37256cbd48422809984b98d292c40e2579aa9" +dependencies = [ + "aws-lc-sys", + "zeroize", +] + +[[package]] +name = "aws-lc-sys" +version = "0.37.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b092fe214090261288111db7a2b2c2118e5a7f30dc2569f1732c4069a6840549" +dependencies = [ + "cc", + "cmake", + "dunce", + "fs_extra", +] + +[[package]] +name = "base64" +version = "0.22.1" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "72b3254f16251a8381aa12e40e3c4d2f0199f8c6508fbecb9d91f575e0fbb8c6" + +[[package]] +name = "bit-set" +version = "0.5.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0700ddab506f33b20a03b13996eccd309a48e5ff77d0d95926aa0210fb4e95f1" +dependencies = [ + "bit-vec", +] + +[[package]] +name = "bit-vec" +version = "0.6.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb" + +[[package]] +name = "bitflags" +version = "1.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a" + +[[package]] +name = "bitflags" +version = "2.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "843867be96c8daad0d758b57df9392b6d8d271134fce549de6ce169ff98a92af" + +[[package]] +name = "block-buffer" +version = "0.10.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3078c7629b62d3f0439517fa394996acacc5cbc91c5a20d8c658e77abd503a71" +dependencies = [ + "generic-array", +] + +[[package]] +name = "bumpalo" +version = "3.20.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5d20789868f4b01b2f2caec9f5c4e0213b41e3e5702a50157d699ae31ced2fcb" + +[[package]] +name = "bytemuck" +version = "1.25.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c8efb64bd706a16a1bdde310ae86b351e4d21550d98d056f22f8a7f7a2183fec" + +[[package]] +name = "bytes" +version = "1.11.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33" + +[[package]] +name = "castaway" +version = "0.2.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dec551ab6e7578819132c713a93c022a05d60159dc86e7a7050223577484c55a" 
+dependencies = [ + "rustversion", +] + +[[package]] +name = "cc" +version = "1.2.56" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "aebf35691d1bfb0ac386a69bac2fde4dd276fb618cf8bf4f5318fe285e821bb2" +dependencies = [ + "find-msvc-tools", + "jobserver", + "libc", + "shlex", +] + +[[package]] +name = "cesu8" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6d43a04d8753f35258c91f8ec639f792891f748a1edbd759cf1dcea3382ad83c" + +[[package]] +name = "cfg-if" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801" + +[[package]] +name = "cfg_aliases" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724" + +[[package]] +name = "chrono" +version = "0.4.44" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c673075a2e0e5f4a1dde27ce9dee1ea4558c7ffe648f576438a20ca1d2acc4b0" +dependencies = [ + "iana-time-zone", + "js-sys", + "num-traits", + "serde", + "wasm-bindgen", + "windows-link", +] + +[[package]] +name = "clap" +version = "4.5.60" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2797f34da339ce31042b27d23607e051786132987f595b02ba4f6a6dffb7030a" +dependencies = [ + "clap_builder", + "clap_derive", +] + +[[package]] +name = "clap_builder" +version = "4.5.60" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "24a241312cea5059b13574bb9b3861cabf758b879c15190b37b6d6fd63ab6876" +dependencies = [ + "anstream", + "anstyle", + "clap_lex", + "strsim", +] + +[[package]] +name = "clap_derive" +version = "4.5.55" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a92793da1a46a5f2a02a6f4c46c6496b28c43638adea8306fcb0caa1634f24e5" +dependencies = [ + "heck", + "proc-macro2", + "quote", 
+ "syn 2.0.117", +] + +[[package]] +name = "clap_lex" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3a822ea5bc7590f9d40f1ba12c0dc3c2760f3482c6984db1573ad11031420831" + +[[package]] +name = "cmake" +version = "0.1.57" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "75443c44cd6b379beb8c5b45d85d0773baf31cce901fe7bb252f4eff3008ef7d" +dependencies = [ + "cc", +] + +[[package]] +name = "colorchoice" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75" + +[[package]] +name = "colored" +version = "3.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "faf9468729b8cbcea668e36183cb69d317348c2e08e994829fb56ebfdfbaac34" +dependencies = [ + "windows-sys 0.61.2", +] + +[[package]] +name = "combine" +version = "4.6.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ba5a308b75df32fe02788e748662718f03fde005016435c444eea572398219fd" +dependencies = [ + "bytes", + "memchr", +] + +[[package]] +name = "compact_str" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3fdb1325a1cece981e8a296ab8f0f9b63ae357bd0784a9faaf548cc7b480707a" +dependencies = [ + "castaway", + "cfg-if", + "itoa", + "rustversion", + "ryu", + "static_assertions", +] + +[[package]] +name = "console" +version = "0.16.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "03e45a4a8926227e4197636ba97a9fc9b00477e9f4bd711395687c5f0734bec4" +dependencies = [ + "encode_unicode", + "libc", + "once_cell", + "unicode-width", + "windows-sys 0.61.2", +] + +[[package]] +name = "convert_case" +version = "0.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "633458d4ef8c78b72454de2d54fd6ab2e60f9e02be22f3c6104cdc8a4e0fceb9" +dependencies = [ + "unicode-segmentation", +] + 
+[[package]] +name = "core-foundation" +version = "0.9.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "91e195e091a93c46f7102ec7818a2aa394e1e1771c3ab4825963fa03e45afb8f" +dependencies = [ + "core-foundation-sys", + "libc", +] + +[[package]] +name = "core-foundation" +version = "0.10.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b2a6cd9ae233e7f62ba4e9353e81a88df7fc8a5987b8d445b4d90c879bd156f6" +dependencies = [ + "core-foundation-sys", + "libc", +] + +[[package]] +name = "core-foundation-sys" +version = "0.8.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b" + +[[package]] +name = "cpufeatures" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "59ed5838eebb26a2bb2e58f6d5b5316989ae9d08bab10e0e6d103e656d1b0280" +dependencies = [ + "libc", +] + +[[package]] +name = "crossterm" +version = "0.29.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d8b9f2e4c67f833b660cdb0a3523065869fb35570177239812ed4c905aeff87b" +dependencies = [ + "bitflags 2.11.0", + "crossterm_winapi", + "derive_more", + "document-features", + "mio", + "parking_lot", + "rustix", + "signal-hook", + "signal-hook-mio", + "winapi", +] + +[[package]] +name = "crossterm_winapi" +version = "0.9.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "acdd7c62a3665c7f6830a51635d9ac9b23ed385797f70a83bb8bafe9c572ab2b" +dependencies = [ + "winapi", +] + +[[package]] +name = "crypto-common" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "78c8292055d1c1df0cce5d180393dc8cce0abec0a7102adb6c7b1eef6016d60a" +dependencies = [ + "generic-array", + "typenum", +] + +[[package]] +name = "csscolorparser" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"eb2a7d3066da2de787b7f032c736763eb7ae5d355f81a68bab2675a96008b0bf" +dependencies = [ + "lab", + "phf", +] + +[[package]] +name = "darling" +version = "0.23.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "25ae13da2f202d56bd7f91c25fba009e7717a1e4a1cc98a76d844b65ae912e9d" +dependencies = [ + "darling_core", + "darling_macro", +] + +[[package]] +name = "darling_core" +version = "0.23.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9865a50f7c335f53564bb694ef660825eb8610e0a53d3e11bf1b0d3df31e03b0" +dependencies = [ + "ident_case", + "proc-macro2", + "quote", + "strsim", + "syn 2.0.117", +] + +[[package]] +name = "darling_macro" +version = "0.23.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ac3984ec7bd6cfa798e62b4a642426a5be0e68f9401cfc2a01e3fa9ea2fcdb8d" +dependencies = [ + "darling_core", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "deltae" +version = "0.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5729f5117e208430e437df2f4843f5e5952997175992d1414f94c57d61e270b4" + +[[package]] +name = "deranged" +version = "0.5.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7cd812cc2bc1d69d4764bd80df88b4317eaef9e773c75226407d9bc0876b211c" +dependencies = [ + "powerfmt", +] + +[[package]] +name = "derive_more" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d751e9e49156b02b44f9c1815bcb94b984cdcc4396ecc32521c739452808b134" +dependencies = [ + "derive_more-impl", +] + +[[package]] +name = "derive_more-impl" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "799a97264921d8623a957f6c3b9011f3b5492f557bbb7a5a19b7fa6d06ba8dcb" +dependencies = [ + "convert_case", + "proc-macro2", + "quote", + "rustc_version", + "syn 2.0.117", +] + +[[package]] +name = "digest" +version = "0.10.7" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292" +dependencies = [ + "block-buffer", + "crypto-common", +] + +[[package]] +name = "directories" +version = "6.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "16f5094c54661b38d03bd7e50df373292118db60b585c08a411c6d840017fe7d" +dependencies = [ + "dirs-sys", +] + +[[package]] +name = "dirs" +version = "6.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c3e8aa94d75141228480295a7d0e7feb620b1a5ad9f12bc40be62411e38cce4e" +dependencies = [ + "dirs-sys", +] + +[[package]] +name = "dirs-sys" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e01a3366d27ee9890022452ee61b2b63a67e6f13f58900b651ff5665f0bb1fab" +dependencies = [ + "libc", + "option-ext", + "redox_users", + "windows-sys 0.61.2", +] + +[[package]] +name = "displaydoc" +version = "0.2.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "97369cbbc041bc366949bc74d34658d6cda5621039731c6310521892a3a20ae0" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "document-features" +version = "0.2.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d4b8a88685455ed29a21542a33abd9cb6510b6b129abadabdcef0f4c55bc8f61" +dependencies = [ + "litrs", +] + +[[package]] +name = "dunce" +version = "1.0.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813" + +[[package]] +name = "either" +version = "1.15.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719" + +[[package]] +name = "encode_unicode" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"34aa73646ffb006b8f5147f3dc182bd4bcb190227ce861fc4a4844bf8e3cb2c0" + +[[package]] +name = "encoding_rs" +version = "0.8.35" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "75030f3c4f45dafd7586dd6780965a8c7e8e285a5ecb86713e63a79c5b2766f3" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "equivalent" +version = "1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f" + +[[package]] +name = "errno" +version = "0.3.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "39cab71617ae0d63f51a36d69f866391735b51691dbda63cf6f96d042b63efeb" +dependencies = [ + "libc", + "windows-sys 0.61.2", +] + +[[package]] +name = "euclid" +version = "0.22.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "df61bf483e837f88d5c2291dcf55c67be7e676b3a51acc48db3a7b163b91ed63" +dependencies = [ + "num-traits", +] + +[[package]] +name = "fancy-regex" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b95f7c0680e4142284cf8b22c14a476e87d61b004a3a0861872b32ef7ead40a2" +dependencies = [ + "bit-set", + "regex", +] + +[[package]] +name = "fastrand" +version = "2.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be" + +[[package]] +name = "filedescriptor" +version = "0.8.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e40758ed24c9b2eeb76c35fb0aebc66c626084edd827e07e1552279814c6682d" +dependencies = [ + "libc", + "thiserror 1.0.69", + "winapi", +] + +[[package]] +name = "find-msvc-tools" +version = "0.1.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5baebc0774151f905a1a2cc41989300b1e6fbb29aff0ceffa1064fdd3088d582" + +[[package]] +name = "finl_unicode" +version = "1.4.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "9844ddc3a6e533d62bba727eb6c28b5d360921d5175e9ff0f1e621a5c590a4d5" + +[[package]] +name = "fixedbitset" +version = "0.4.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0ce7134b9999ecaf8bcd65542e436736ef32ddca1b3e06094cb6ec5755203b80" + +[[package]] +name = "fnv" +version = "1.0.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1" + +[[package]] +name = "foldhash" +version = "0.1.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d9c4f5dac5e15c24eb999c26181a6ca40b39fe946cbe4c263c7209467bc83af2" + +[[package]] +name = "foldhash" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "77ce24cb58228fbb8aa041425bb1050850ac19177686ea6e0f41a70416f56fdb" + +[[package]] +name = "form_urlencoded" +version = "1.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cb4cb245038516f5f85277875cdaa4f7d2c9a0fa0468de06ed190163b1581fcf" +dependencies = [ + "percent-encoding", +] + +[[package]] +name = "fs_extra" +version = "1.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "42703706b716c37f96a77aea830392ad231f44c9e9a67872fa5548707e11b11c" + +[[package]] +name = "futures" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8b147ee9d1f6d097cef9ce628cd2ee62288d963e16fb287bd9286455b241382d" +dependencies = [ + "futures-channel", + "futures-core", + "futures-executor", + "futures-io", + "futures-sink", + "futures-task", + "futures-util", +] + +[[package]] +name = "futures-channel" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "07bbe89c50d7a535e539b8c17bc0b49bdb77747034daa8087407d655f3f7cc1d" +dependencies = [ + "futures-core", + "futures-sink", +] + +[[package]] 
+name = "futures-core" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7e3450815272ef58cec6d564423f6e755e25379b217b0bc688e295ba24df6b1d" + +[[package]] +name = "futures-executor" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "baf29c38818342a3b26b5b923639e7b1f4a61fc5e76102d4b1981c6dc7a7579d" +dependencies = [ + "futures-core", + "futures-task", + "futures-util", +] + +[[package]] +name = "futures-io" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cecba35d7ad927e23624b22ad55235f2239cfa44fd10428eecbeba6d6a717718" + +[[package]] +name = "futures-macro" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e835b70203e41293343137df5c0664546da5745f82ec9b84d40be8336958447b" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "futures-sink" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c39754e157331b013978ec91992bde1ac089843443c49cbc7f46150b0fad0893" + +[[package]] +name = "futures-task" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393" + +[[package]] +name = "futures-util" +version = "0.3.32" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "389ca41296e6190b48053de0321d02a77f32f8a5d2461dd38762c0593805c6d6" +dependencies = [ + "futures-channel", + "futures-core", + "futures-io", + "futures-macro", + "futures-sink", + "futures-task", + "memchr", + "pin-project-lite", + "slab", +] + +[[package]] +name = "generic-array" +version = "0.14.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "85649ca51fd72272d7821adaf274ad91c288277713d9c18820d8499a7ff69e9a" +dependencies = [ + "typenum", + "version_check", +] + +[[package]] 
+name = "getrandom" +version = "0.2.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ff2abc00be7fca6ebc474524697ae276ad847ad0a6b3faa4bcb027e9a4614ad0" +dependencies = [ + "cfg-if", + "js-sys", + "libc", + "wasi", + "wasm-bindgen", +] + +[[package]] +name = "getrandom" +version = "0.3.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "899def5c37c4fd7b2664648c28120ecec138e4d395b459e5ca34f9cce2dd77fd" +dependencies = [ + "cfg-if", + "js-sys", + "libc", + "r-efi", + "wasip2", + "wasm-bindgen", +] + +[[package]] +name = "getrandom" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "139ef39800118c7683f2fd3c98c1b23c09ae076556b435f8e9064ae108aaeeec" +dependencies = [ + "cfg-if", + "libc", + "r-efi", + "wasip2", + "wasip3", +] + +[[package]] +name = "git-same" +version = "0.9.0" +dependencies = [ + "anyhow", + "async-trait", + "chrono", + "clap", + "console", + "crossterm", + "directories", + "futures", + "indicatif", + "mockito", + "ratatui", + "reqwest", + "serde", + "serde_json", + "shellexpand", + "tempfile", + "thiserror 2.0.18", + "tokio", + "tokio-test", + "toml", + "tracing", + "tracing-subscriber", +] + +[[package]] +name = "h2" +version = "0.4.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2f44da3a8150a6703ed5d34e164b875fd14c2cdab9af1252a9a1020bde2bdc54" +dependencies = [ + "atomic-waker", + "bytes", + "fnv", + "futures-core", + "futures-sink", + "http", + "indexmap", + "slab", + "tokio", + "tokio-util", + "tracing", +] + +[[package]] +name = "hashbrown" +version = "0.15.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9229cfe53dfd69f0609a49f65461bd93001ea1ef889cd5529dd176593f5338a1" +dependencies = [ + "foldhash 0.1.5", +] + +[[package]] +name = "hashbrown" +version = "0.16.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100" +dependencies = [ + "allocator-api2", + "equivalent", + "foldhash 0.2.0", +] + +[[package]] +name = "heck" +version = "0.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea" + +[[package]] +name = "hex" +version = "0.4.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70" + +[[package]] +name = "http" +version = "1.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e3ba2a386d7f85a81f119ad7498ebe444d2e22c2af0b86b069416ace48b3311a" +dependencies = [ + "bytes", + "itoa", +] + +[[package]] +name = "http-body" +version = "1.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1efedce1fb8e6913f23e0c92de8e62cd5b772a67e7b3946df930a62566c93184" +dependencies = [ + "bytes", + "http", +] + +[[package]] +name = "http-body-util" +version = "0.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b021d93e26becf5dc7e1b75b1bed1fd93124b374ceb73f43d4d4eafec896a64a" +dependencies = [ + "bytes", + "futures-core", + "http", + "http-body", + "pin-project-lite", +] + +[[package]] +name = "httparse" +version = "1.10.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6dbf3de79e51f3d586ab4cb9d5c3e2c14aa28ed23d180cf89b4df0454a69cc87" + +[[package]] +name = "httpdate" +version = "1.0.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9" + +[[package]] +name = "hyper" +version = "1.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2ab2d4f250c3d7b1c9fcdff1cece94ea4e2dfbec68614f7b87cb205f24ca9d11" +dependencies = [ + "atomic-waker", + "bytes", + "futures-channel", + "futures-core", + "h2", + 
"http", + "http-body", + "httparse", + "httpdate", + "itoa", + "pin-project-lite", + "pin-utils", + "smallvec", + "tokio", + "want", +] + +[[package]] +name = "hyper-rustls" +version = "0.27.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e3c93eb611681b207e1fe55d5a71ecf91572ec8a6705cdb6857f7d8d5242cf58" +dependencies = [ + "http", + "hyper", + "hyper-util", + "rustls", + "rustls-pki-types", + "tokio", + "tokio-rustls", + "tower-service", +] + +[[package]] +name = "hyper-util" +version = "0.1.20" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "96547c2556ec9d12fb1578c4eaf448b04993e7fb79cbaad930a656880a6bdfa0" +dependencies = [ + "base64", + "bytes", + "futures-channel", + "futures-util", + "http", + "http-body", + "hyper", + "ipnet", + "libc", + "percent-encoding", + "pin-project-lite", + "socket2", + "system-configuration", + "tokio", + "tower-service", + "tracing", + "windows-registry", +] + +[[package]] +name = "iana-time-zone" +version = "0.1.65" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e31bc9ad994ba00e440a8aa5c9ef0ec67d5cb5e5cb0cc7f8b744a35b389cc470" +dependencies = [ + "android_system_properties", + "core-foundation-sys", + "iana-time-zone-haiku", + "js-sys", + "log", + "wasm-bindgen", + "windows-core", +] + +[[package]] +name = "iana-time-zone-haiku" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f" +dependencies = [ + "cc", +] + +[[package]] +name = "icu_collections" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4c6b649701667bbe825c3b7e6388cb521c23d88644678e83c0c4d0a621a34b43" +dependencies = [ + "displaydoc", + "potential_utf", + "yoke", + "zerofrom", + "zerovec", +] + +[[package]] +name = "icu_locale_core" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" 
+checksum = "edba7861004dd3714265b4db54a3c390e880ab658fec5f7db895fae2046b5bb6" +dependencies = [ + "displaydoc", + "litemap", + "tinystr", + "writeable", + "zerovec", +] + +[[package]] +name = "icu_normalizer" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5f6c8828b67bf8908d82127b2054ea1b4427ff0230ee9141c54251934ab1b599" +dependencies = [ + "icu_collections", + "icu_normalizer_data", + "icu_properties", + "icu_provider", + "smallvec", + "zerovec", +] + +[[package]] +name = "icu_normalizer_data" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7aedcccd01fc5fe81e6b489c15b247b8b0690feb23304303a9e560f37efc560a" + +[[package]] +name = "icu_properties" +version = "2.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "020bfc02fe870ec3a66d93e677ccca0562506e5872c650f893269e08615d74ec" +dependencies = [ + "icu_collections", + "icu_locale_core", + "icu_properties_data", + "icu_provider", + "zerotrie", + "zerovec", +] + +[[package]] +name = "icu_properties_data" +version = "2.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "616c294cf8d725c6afcd8f55abc17c56464ef6211f9ed59cccffe534129c77af" + +[[package]] +name = "icu_provider" +version = "2.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "85962cf0ce02e1e0a629cc34e7ca3e373ce20dda4c4d7294bbd0bf1fdb59e614" +dependencies = [ + "displaydoc", + "icu_locale_core", + "writeable", + "yoke", + "zerofrom", + "zerotrie", + "zerovec", +] + +[[package]] +name = "id-arena" +version = "2.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3d3067d79b975e8844ca9eb072e16b31c3c1c36928edf9c6789548c524d0d954" + +[[package]] +name = "ident_case" +version = "1.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b9e0384b61958566e926dc50660321d12159025e767c18e043daf26b70104c39" + +[[package]] 
+name = "idna" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3b0875f23caa03898994f6ddc501886a45c7d3d62d04d2d90788d47be1b1e4de" +dependencies = [ + "idna_adapter", + "smallvec", + "utf8_iter", +] + +[[package]] +name = "idna_adapter" +version = "1.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3acae9609540aa318d1bc588455225fb2085b9ed0c4f6bd0d9d5bcd86f1a0344" +dependencies = [ + "icu_normalizer", + "icu_properties", +] + +[[package]] +name = "indexmap" +version = "2.13.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7714e70437a7dc3ac8eb7e6f8df75fd8eb422675fc7678aff7364301092b1017" +dependencies = [ + "equivalent", + "hashbrown 0.16.1", + "serde", + "serde_core", +] + +[[package]] +name = "indicatif" +version = "0.18.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "25470f23803092da7d239834776d653104d551bc4d7eacaf31e6837854b8e9eb" +dependencies = [ + "console", + "portable-atomic", + "unicode-width", + "unit-prefix", + "web-time", +] + +[[package]] +name = "indoc" +version = "2.0.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "79cf5c93f93228cf8efb3ba362535fb11199ac548a09ce117c9b1adc3030d706" +dependencies = [ + "rustversion", +] + +[[package]] +name = "instability" +version = "0.3.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "357b7205c6cd18dd2c86ed312d1e70add149aea98e7ef72b9fdf0270e555c11d" +dependencies = [ + "darling", + "indoc", + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "ipnet" +version = "2.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "469fb0b9cefa57e3ef31275ee7cacb78f2fdca44e4765491884a2b119d4eb130" + +[[package]] +name = "iri-string" +version = "0.7.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"c91338f0783edbd6195decb37bae672fd3b165faffb89bf7b9e6942f8b1a731a" +dependencies = [ + "memchr", + "serde", +] + +[[package]] +name = "is_terminal_polyfill" +version = "1.70.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695" + +[[package]] +name = "itertools" +version = "0.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2b192c782037fadd9cfa75548310488aabdbf3d2da73885b31bd0abd03351285" +dependencies = [ + "either", +] + +[[package]] +name = "itoa" +version = "1.0.17" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "92ecc6618181def0457392ccd0ee51198e065e016d1d527a7ac1b6dc7c1f09d2" + +[[package]] +name = "jni" +version = "0.21.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1a87aa2bb7d2af34197c04845522473242e1aa17c12f4935d5856491a7fb8c97" +dependencies = [ + "cesu8", + "cfg-if", + "combine", + "jni-sys", + "log", + "thiserror 1.0.69", + "walkdir", + "windows-sys 0.45.0", +] + +[[package]] +name = "jni-sys" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8eaf4bc02d17cbdd7ff4c7438cafcdf7fb9a4613313ad11b4f8fefe7d3fa0130" + +[[package]] +name = "jobserver" +version = "0.1.34" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9afb3de4395d6b3e67a780b6de64b51c978ecf11cb9a462c66be7d4ca9039d33" +dependencies = [ + "getrandom 0.3.4", + "libc", +] + +[[package]] +name = "js-sys" +version = "0.3.90" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "14dc6f6450b3f6d4ed5b16327f38fed626d375a886159ca555bd7822c0c3a5a6" +dependencies = [ + "once_cell", + "wasm-bindgen", +] + +[[package]] +name = "kasuari" +version = "0.4.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8fe90c1150662e858c7d5f945089b7517b0a80d8bf7ba4b1b5ffc984e7230a5b" +dependencies = [ + 
"hashbrown 0.16.1", + "portable-atomic", + "thiserror 2.0.18", +] + +[[package]] +name = "lab" +version = "0.11.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bf36173d4167ed999940f804952e6b08197cae5ad5d572eb4db150ce8ad5d58f" + +[[package]] +name = "lazy_static" +version = "1.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe" + +[[package]] +name = "leb128fmt" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "09edd9e8b54e49e587e4f6295a7d29c3ea94d469cb40ab8ca70b288248a81db2" + +[[package]] +name = "libc" +version = "0.2.182" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6800badb6cb2082ffd7b6a67e6125bb39f18782f793520caee8cb8846be06112" + +[[package]] +name = "libredox" +version = "0.1.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3d0b95e02c851351f877147b7deea7b1afb1df71b63aa5f8270716e0c5720616" +dependencies = [ + "bitflags 2.11.0", + "libc", +] + +[[package]] +name = "line-clipping" +version = "0.3.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5f4de44e98ddbf09375cbf4d17714d18f39195f4f4894e8524501726fd9a8a4a" +dependencies = [ + "bitflags 2.11.0", +] + +[[package]] +name = "linux-raw-sys" +version = "0.12.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "32a66949e030da00e8c7d4434b251670a91556f4144941d37452769c25d58a53" + +[[package]] +name = "litemap" +version = "0.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6373607a59f0be73a39b6fe456b8192fcc3585f602af20751600e974dd455e77" + +[[package]] +name = "litrs" +version = "1.0.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "11d3d7f243d5c5a8b9bb5d6dd2b1602c0cb0b9db1621bafc7ed66e35ff9fe092" + +[[package]] +name = "lock_api" +version = 
"0.4.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "224399e74b87b5f3557511d98dff8b14089b3dadafcab6bb93eab67d3aace965" +dependencies = [ + "scopeguard", +] + +[[package]] +name = "log" +version = "0.4.29" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897" + +[[package]] +name = "lru" +version = "0.16.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a1dc47f592c06f33f8e3aea9591776ec7c9f9e4124778ff8a3c3b87159f7e593" +dependencies = [ + "hashbrown 0.16.1", +] + +[[package]] +name = "lru-slab" +version = "0.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154" + +[[package]] +name = "mac_address" +version = "1.1.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c0aeb26bf5e836cc1c341c8106051b573f1766dfa05aa87f0b98be5e51b02303" +dependencies = [ + "nix", + "winapi", +] + +[[package]] +name = "matchers" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9" +dependencies = [ + "regex-automata", +] + +[[package]] +name = "memchr" +version = "2.8.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8ca58f447f06ed17d5fc4043ce1b10dd205e060fb3ce5b979b8ed8e59ff3f79" + +[[package]] +name = "memmem" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a64a92489e2744ce060c349162be1c5f33c6969234104dbd99ddb5feb08b8c15" + +[[package]] +name = "memoffset" +version = "0.9.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "488016bfae457b036d996092f6cb448677611ce4449e970ceaf42695203f218a" +dependencies = [ + "autocfg", +] + +[[package]] +name = "mime" +version = "0.3.17" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a" + +[[package]] +name = "minimal-lexical" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a" + +[[package]] +name = "mio" +version = "1.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a69bcab0ad47271a0234d9422b131806bf3968021e5dc9328caf2d4cd58557fc" +dependencies = [ + "libc", + "log", + "wasi", + "windows-sys 0.61.2", +] + +[[package]] +name = "mockito" +version = "1.7.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "90820618712cab19cfc46b274c6c22546a82affcb3c3bdf0f29e3db8e1bb92c0" +dependencies = [ + "assert-json-diff", + "bytes", + "colored", + "futures-core", + "http", + "http-body", + "http-body-util", + "hyper", + "hyper-util", + "log", + "pin-project-lite", + "rand 0.9.2", + "regex", + "serde_json", + "serde_urlencoded", + "similar", + "tokio", +] + +[[package]] +name = "nix" +version = "0.29.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "71e2746dc3a24dd78b3cfcb7be93368c6de9963d30f43a6a73998a9cf4b17b46" +dependencies = [ + "bitflags 2.11.0", + "cfg-if", + "cfg_aliases", + "libc", + "memoffset", +] + +[[package]] +name = "nom" +version = "7.1.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d273983c5a657a70a3e8f2a01329822f3b8c8172b73826411a55751e404a0a4a" +dependencies = [ + "memchr", + "minimal-lexical", +] + +[[package]] +name = "nu-ansi-term" +version = "0.50.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7957b9740744892f114936ab4a57b3f487491bbeafaf8083688b16841a4240e5" +dependencies = [ + "windows-sys 0.61.2", +] + +[[package]] +name = "num-conv" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"cf97ec579c3c42f953ef76dbf8d55ac91fb219dde70e49aa4a6b7d74e9919050" + +[[package]] +name = "num-derive" +version = "0.4.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ed3955f1a9c7c0c15e092f9c887db08b1fc683305fdf6eb6684f22555355e202" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "num-traits" +version = "0.2.19" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841" +dependencies = [ + "autocfg", +] + +[[package]] +name = "num_threads" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5c7398b9c8b70908f6371f47ed36737907c87c52af34c268fed0bf0ceb92ead9" +dependencies = [ + "libc", +] + +[[package]] +name = "once_cell" +version = "1.21.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d" + +[[package]] +name = "once_cell_polyfill" +version = "1.70.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe" + +[[package]] +name = "openssl-probe" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7c87def4c32ab89d880effc9e097653c8da5d6ef28e6b539d313baaacfbafcbe" + +[[package]] +name = "option-ext" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "04744f49eae99ab78e0d5c0b603ab218f515ea8cfe5a456d7629ad883a3b6e7d" + +[[package]] +name = "ordered-float" +version = "4.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7bb71e1b3fa6ca1c61f383464aaf2bb0e2f8e772a1f01d486832464de363b951" +dependencies = [ + "num-traits", +] + +[[package]] +name = "parking_lot" +version = "0.12.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"93857453250e3077bd71ff98b6a65ea6621a19bb0f559a85248955ac12c45a1a" +dependencies = [ + "lock_api", + "parking_lot_core", +] + +[[package]] +name = "parking_lot_core" +version = "0.9.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2621685985a2ebf1c516881c026032ac7deafcda1a2c9b7850dc81e3dfcb64c1" +dependencies = [ + "cfg-if", + "libc", + "redox_syscall", + "smallvec", + "windows-link", +] + +[[package]] +name = "percent-encoding" +version = "2.3.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9b4f627cb1b25917193a259e49bdad08f671f8d9708acfd5fe0a8c1455d87220" + +[[package]] +name = "pest" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e0848c601009d37dfa3430c4666e147e49cdcf1b92ecd3e63657d8a5f19da662" +dependencies = [ + "memchr", + "ucd-trie", +] + +[[package]] +name = "pest_derive" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "11f486f1ea21e6c10ed15d5a7c77165d0ee443402f0780849d1768e7d9d6fe77" +dependencies = [ + "pest", + "pest_generator", +] + +[[package]] +name = "pest_generator" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8040c4647b13b210a963c1ed407c1ff4fdfa01c31d6d2a098218702e6664f94f" +dependencies = [ + "pest", + "pest_meta", + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "pest_meta" +version = "2.8.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "89815c69d36021a140146f26659a81d6c2afa33d216d736dd4be5381a7362220" +dependencies = [ + "pest", + "sha2", +] + +[[package]] +name = "phf" +version = "0.11.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1fd6780a80ae0c52cc120a26a1a42c1ae51b247a253e4e06113d23d2c2edd078" +dependencies = [ + "phf_macros", + "phf_shared", +] + +[[package]] +name = "phf_codegen" +version = "0.11.3" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "aef8048c789fa5e851558d709946d6d79a8ff88c0440c587967f8e94bfb1216a" +dependencies = [ + "phf_generator", + "phf_shared", +] + +[[package]] +name = "phf_generator" +version = "0.11.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3c80231409c20246a13fddb31776fb942c38553c51e871f8cbd687a4cfb5843d" +dependencies = [ + "phf_shared", + "rand 0.8.5", +] + +[[package]] +name = "phf_macros" +version = "0.11.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f84ac04429c13a7ff43785d75ad27569f2951ce0ffd30a3321230db2fc727216" +dependencies = [ + "phf_generator", + "phf_shared", + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "phf_shared" +version = "0.11.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "67eabc2ef2a60eb7faa00097bd1ffdb5bd28e62bf39990626a582201b7a754e5" +dependencies = [ + "siphasher", +] + +[[package]] +name = "pin-project-lite" +version = "0.2.16" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3b3cff922bd51709b605d9ead9aa71031d81447142d828eb4a6eba76fe619f9b" + +[[package]] +name = "pin-utils" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184" + +[[package]] +name = "portable-atomic" +version = "1.13.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c33a9471896f1c69cecef8d20cbe2f7accd12527ce60845ff44c153bb2a21b49" + +[[package]] +name = "potential_utf" +version = "0.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b73949432f5e2a09657003c25bca5e19a0e9c84f8058ca374f49e0ebe605af77" +dependencies = [ + "zerovec", +] + +[[package]] +name = "powerfmt" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"439ee305def115ba05938db6eb1644ff94165c5ab5e9420d1c1bcedbba909391" + +[[package]] +name = "ppv-lite86" +version = "0.2.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "85eae3c4ed2f50dcfe72643da4befc30deadb458a9b590d720cde2f2b1e97da9" +dependencies = [ + "zerocopy", +] + +[[package]] +name = "prettyplease" +version = "0.2.37" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "479ca8adacdd7ce8f1fb39ce9ecccbfe93a3f1344b3d0d97f20bc0196208f62b" +dependencies = [ + "proc-macro2", + "syn 2.0.117", +] + +[[package]] +name = "proc-macro2" +version = "1.0.106" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8fd00f0bb2e90d81d1044c2b32617f68fcb9fa3bb7640c23e9c748e53fb30934" +dependencies = [ + "unicode-ident", +] + +[[package]] +name = "quinn" +version = "0.11.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b9e20a958963c291dc322d98411f541009df2ced7b5a4f2bd52337638cfccf20" +dependencies = [ + "bytes", + "cfg_aliases", + "pin-project-lite", + "quinn-proto", + "quinn-udp", + "rustc-hash", + "rustls", + "socket2", + "thiserror 2.0.18", + "tokio", + "tracing", + "web-time", +] + +[[package]] +name = "quinn-proto" +version = "0.11.13" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f1906b49b0c3bc04b5fe5d86a77925ae6524a19b816ae38ce1e426255f1d8a31" +dependencies = [ + "aws-lc-rs", + "bytes", + "getrandom 0.3.4", + "lru-slab", + "rand 0.9.2", + "ring", + "rustc-hash", + "rustls", + "rustls-pki-types", + "slab", + "thiserror 2.0.18", + "tinyvec", + "tracing", + "web-time", +] + +[[package]] +name = "quinn-udp" +version = "0.5.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "addec6a0dcad8a8d96a771f815f0eaf55f9d1805756410b39f5fa81332574cbd" +dependencies = [ + "cfg_aliases", + "libc", + "once_cell", + "socket2", + "tracing", + "windows-sys 0.60.2", +] + +[[package]] +name = "quote" +version = 
"1.0.44" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "21b2ebcf727b7760c461f091f9f0f539b77b8e87f2fd88131e7f1b433b3cece4" +dependencies = [ + "proc-macro2", +] + +[[package]] +name = "r-efi" +version = "5.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f" + +[[package]] +name = "rand" +version = "0.8.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "34af8d1a0e25924bc5b7c43c079c942339d8f0a8b57c39049bef581b46327404" +dependencies = [ + "rand_core 0.6.4", +] + +[[package]] +name = "rand" +version = "0.9.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6db2770f06117d490610c7488547d543617b21bfa07796d7a12f6f1bd53850d1" +dependencies = [ + "rand_chacha", + "rand_core 0.9.5", +] + +[[package]] +name = "rand_chacha" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d3022b5f1df60f26e1ffddd6c66e8aa15de382ae63b3a0c1bfc0e4d3e3f325cb" +dependencies = [ + "ppv-lite86", + "rand_core 0.9.5", +] + +[[package]] +name = "rand_core" +version = "0.6.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ec0be4795e2f6a28069bec0b5ff3e2ac9bafc99e6a9a7dc3547996c5c816922c" + +[[package]] +name = "rand_core" +version = "0.9.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "76afc826de14238e6e8c374ddcc1fa19e374fd8dd986b0d2af0d02377261d83c" +dependencies = [ + "getrandom 0.3.4", +] + +[[package]] +name = "ratatui" +version = "0.30.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d1ce67fb8ba4446454d1c8dbaeda0557ff5e94d39d5e5ed7f10a65eb4c8266bc" +dependencies = [ + "instability", + "ratatui-core", + "ratatui-crossterm", + "ratatui-macros", + "ratatui-termwiz", + "ratatui-widgets", +] + +[[package]] +name = "ratatui-core" +version = "0.1.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "5ef8dea09a92caaf73bff7adb70b76162e5937524058a7e5bff37869cbbec293" +dependencies = [ + "bitflags 2.11.0", + "compact_str", + "hashbrown 0.16.1", + "indoc", + "itertools", + "kasuari", + "lru", + "strum", + "thiserror 2.0.18", + "unicode-segmentation", + "unicode-truncate", + "unicode-width", +] + +[[package]] +name = "ratatui-crossterm" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "577c9b9f652b4c121fb25c6a391dd06406d3b092ba68827e6d2f09550edc54b3" +dependencies = [ + "cfg-if", + "crossterm", + "instability", + "ratatui-core", +] + +[[package]] +name = "ratatui-macros" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a7f1342a13e83e4bb9d0b793d0ea762be633f9582048c892ae9041ef39c936f4" +dependencies = [ + "ratatui-core", + "ratatui-widgets", +] + +[[package]] +name = "ratatui-termwiz" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0f76fe0bd0ed4295f0321b1676732e2454024c15a35d01904ddb315afd3d545c" +dependencies = [ + "ratatui-core", + "termwiz", +] + +[[package]] +name = "ratatui-widgets" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d7dbfa023cd4e604c2553483820c5fe8aa9d71a42eea5aa77c6e7f35756612db" +dependencies = [ + "bitflags 2.11.0", + "hashbrown 0.16.1", + "indoc", + "instability", + "itertools", + "line-clipping", + "ratatui-core", + "strum", + "time", + "unicode-segmentation", + "unicode-width", +] + +[[package]] +name = "redox_syscall" +version = "0.5.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d" +dependencies = [ + "bitflags 2.11.0", +] + +[[package]] +name = "redox_users" +version = "0.5.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"a4e608c6638b9c18977b00b475ac1f28d14e84b27d8d42f70e0bf1e3dec127ac" +dependencies = [ + "getrandom 0.2.17", + "libredox", + "thiserror 2.0.18", +] + +[[package]] +name = "regex" +version = "1.12.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e10754a14b9137dd7b1e3e5b0493cc9171fdd105e0ab477f51b72e7f3ac0e276" +dependencies = [ + "aho-corasick", + "memchr", + "regex-automata", + "regex-syntax", +] + +[[package]] +name = "regex-automata" +version = "0.4.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6e1dd4122fc1595e8162618945476892eefca7b88c52820e74af6262213cae8f" +dependencies = [ + "aho-corasick", + "memchr", + "regex-syntax", +] + +[[package]] +name = "regex-syntax" +version = "0.8.10" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "dc897dd8d9e8bd1ed8cdad82b5966c3e0ecae09fb1907d58efaa013543185d0a" + +[[package]] +name = "reqwest" +version = "0.13.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ab3f43e3283ab1488b624b44b0e988d0acea0b3214e694730a055cb6b2efa801" +dependencies = [ + "base64", + "bytes", + "encoding_rs", + "futures-core", + "h2", + "http", + "http-body", + "http-body-util", + "hyper", + "hyper-rustls", + "hyper-util", + "js-sys", + "log", + "mime", + "percent-encoding", + "pin-project-lite", + "quinn", + "rustls", + "rustls-pki-types", + "rustls-platform-verifier", + "serde", + "serde_json", + "sync_wrapper", + "tokio", + "tokio-rustls", + "tower", + "tower-http", + "tower-service", + "url", + "wasm-bindgen", + "wasm-bindgen-futures", + "web-sys", +] + +[[package]] +name = "ring" +version = "0.17.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a4689e6c2294d81e88dc6261c768b63bc4fcdb852be6d1352498b114f61383b7" +dependencies = [ + "cc", + "cfg-if", + "getrandom 0.2.17", + "libc", + "untrusted", + "windows-sys 0.52.0", +] + +[[package]] +name = "rustc-hash" +version = "2.1.1" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "357703d41365b4b27c590e3ed91eabb1b663f07c4c084095e60cbed4362dff0d" + +[[package]] +name = "rustc_version" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "cfcb3a22ef46e85b45de6ee7e79d063319ebb6594faafcf1c225ea92ab6e9b92" +dependencies = [ + "semver", +] + +[[package]] +name = "rustix" +version = "1.1.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6fe4565b9518b83ef4f91bb47ce29620ca828bd32cb7e408f0062e9930ba190" +dependencies = [ + "bitflags 2.11.0", + "errno", + "libc", + "linux-raw-sys", + "windows-sys 0.61.2", +] + +[[package]] +name = "rustls" +version = "0.23.37" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "758025cb5fccfd3bc2fd74708fd4682be41d99e5dff73c377c0646c6012c73a4" +dependencies = [ + "aws-lc-rs", + "once_cell", + "rustls-pki-types", + "rustls-webpki", + "subtle", + "zeroize", +] + +[[package]] +name = "rustls-native-certs" +version = "0.8.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "612460d5f7bea540c490b2b6395d8e34a953e52b491accd6c86c8164c5932a63" +dependencies = [ + "openssl-probe", + "rustls-pki-types", + "schannel", + "security-framework", +] + +[[package]] +name = "rustls-pki-types" +version = "1.14.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "be040f8b0a225e40375822a563fa9524378b9d63112f53e19ffff34df5d33fdd" +dependencies = [ + "web-time", + "zeroize", +] + +[[package]] +name = "rustls-platform-verifier" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1d99feebc72bae7ab76ba994bb5e121b8d83d910ca40b36e0921f53becc41784" +dependencies = [ + "core-foundation 0.10.1", + "core-foundation-sys", + "jni", + "log", + "once_cell", + "rustls", + "rustls-native-certs", + "rustls-platform-verifier-android", + "rustls-webpki", + "security-framework", + 
"security-framework-sys", + "webpki-root-certs", + "windows-sys 0.61.2", +] + +[[package]] +name = "rustls-platform-verifier-android" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f87165f0995f63a9fbeea62b64d10b4d9d8e78ec6d7d51fb2125fda7bb36788f" + +[[package]] +name = "rustls-webpki" +version = "0.103.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d7df23109aa6c1567d1c575b9952556388da57401e4ace1d15f79eedad0d8f53" +dependencies = [ + "aws-lc-rs", + "ring", + "rustls-pki-types", + "untrusted", +] + +[[package]] +name = "rustversion" +version = "1.0.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b39cdef0fa800fc44525c84ccb54a029961a8215f9619753635a9c0d2538d46d" + +[[package]] +name = "ryu" +version = "1.0.23" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9774ba4a74de5f7b1c1451ed6cd5285a32eddb5cccb8cc655a4e50009e06477f" + +[[package]] +name = "same-file" +version = "1.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "93fc1dc3aaa9bfed95e02e6eadabb4baf7e3078b0bd1b4d7b6b0b68378900502" +dependencies = [ + "winapi-util", +] + +[[package]] +name = "schannel" +version = "0.1.28" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "891d81b926048e76efe18581bf793546b4c0eaf8448d72be8de2bbee5fd166e1" +dependencies = [ + "windows-sys 0.61.2", +] + +[[package]] +name = "scopeguard" +version = "1.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49" + +[[package]] +name = "security-framework" +version = "3.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b7f4bc775c73d9a02cde8bf7b2ec4c9d12743edf609006c7facc23998404cd1d" +dependencies = [ + "bitflags 2.11.0", + "core-foundation 0.10.1", + "core-foundation-sys", + "libc", + 
"security-framework-sys", +] + +[[package]] +name = "security-framework-sys" +version = "2.17.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6ce2691df843ecc5d231c0b14ece2acc3efb62c0a398c7e1d875f3983ce020e3" +dependencies = [ + "core-foundation-sys", + "libc", +] + +[[package]] +name = "semver" +version = "1.0.27" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d767eb0aabc880b29956c35734170f26ed551a859dbd361d140cdbeca61ab1e2" + +[[package]] +name = "serde" +version = "1.0.228" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e" +dependencies = [ + "serde_core", + "serde_derive", +] + +[[package]] +name = "serde_core" +version = "1.0.228" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad" +dependencies = [ + "serde_derive", +] + +[[package]] +name = "serde_derive" +version = "1.0.228" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "serde_json" +version = "1.0.149" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86" +dependencies = [ + "itoa", + "memchr", + "serde", + "serde_core", + "zmij", +] + +[[package]] +name = "serde_spanned" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f8bbf91e5a4d6315eee45e704372590b30e260ee83af6639d64557f51b067776" +dependencies = [ + "serde_core", +] + +[[package]] +name = "serde_urlencoded" +version = "0.7.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d3491c14715ca2294c4d6a88f15e84739788c1d030eed8c110436aafdaa2f3fd" +dependencies 
= [ + "form_urlencoded", + "itoa", + "ryu", + "serde", +] + +[[package]] +name = "sha2" +version = "0.10.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283" +dependencies = [ + "cfg-if", + "cpufeatures", + "digest", +] + +[[package]] +name = "sharded-slab" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f40ca3c46823713e0d4209592e8d6e826aa57e928f09752619fc696c499637f6" +dependencies = [ + "lazy_static", +] + +[[package]] +name = "shellexpand" +version = "3.1.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "32824fab5e16e6c4d86dc1ba84489390419a39f97699852b66480bb87d297ed8" +dependencies = [ + "dirs", +] + +[[package]] +name = "shlex" +version = "1.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64" + +[[package]] +name = "signal-hook" +version = "0.3.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d881a16cf4426aa584979d30bd82cb33429027e42122b169753d6ef1085ed6e2" +dependencies = [ + "libc", + "signal-hook-registry", +] + +[[package]] +name = "signal-hook-mio" +version = "0.2.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b75a19a7a740b25bc7944bdee6172368f988763b744e3d4dfe753f6b4ece40cc" +dependencies = [ + "libc", + "mio", + "signal-hook", +] + +[[package]] +name = "signal-hook-registry" +version = "1.4.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c4db69cba1110affc0e9f7bcd48bbf87b3f4fc7c61fc9155afd4c469eb3d6c1b" +dependencies = [ + "errno", + "libc", +] + +[[package]] +name = "similar" +version = "2.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bbbb5d9659141646ae647b42fe094daf6c6192d1620870b449d9557f748b2daa" + +[[package]] +name = "siphasher" +version = 
"1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b2aa850e253778c88a04c3d7323b043aeda9d3e30d5971937c1855769763678e" + +[[package]] +name = "slab" +version = "0.4.12" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0c790de23124f9ab44544d7ac05d60440adc586479ce501c1d6d7da3cd8c9cf5" + +[[package]] +name = "smallvec" +version = "1.15.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "67b1b7a3b5fe4f1376887184045fcf45c69e92af734b7aaddc05fb777b6fbd03" + +[[package]] +name = "socket2" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "86f4aa3ad99f2088c990dfa82d367e19cb29268ed67c574d10d0a4bfe71f07e0" +dependencies = [ + "libc", + "windows-sys 0.60.2", +] + +[[package]] +name = "stable_deref_trait" +version = "1.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596" + +[[package]] +name = "static_assertions" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a2eb9349b6444b326872e140eb1cf5e7c522154d69e7a0ffb0fb81c06b37543f" + +[[package]] +name = "strsim" +version = "0.11.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f" + +[[package]] +name = "strum" +version = "0.27.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "af23d6f6c1a224baef9d3f61e287d2761385a5b88fdab4eb4c6f11aeb54c4bcf" +dependencies = [ + "strum_macros", +] + +[[package]] +name = "strum_macros" +version = "0.27.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7695ce3845ea4b33927c055a39dc438a45b059f7c1b3d91d38d10355fb8cbca7" +dependencies = [ + "heck", + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "subtle" +version = "2.6.1" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "13c2bddecc57b384dee18652358fb23172facb8a2c51ccc10d74c157bdea3292" + +[[package]] +name = "syn" +version = "1.0.109" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72b64191b275b66ffe2469e8af2c1cfe3bafa67b529ead792a6d0160888b4237" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "syn" +version = "2.0.117" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e665b8803e7b1d2a727f4023456bbbbe74da67099c585258af0ad9c5013b9b99" +dependencies = [ + "proc-macro2", + "quote", + "unicode-ident", +] + +[[package]] +name = "sync_wrapper" +version = "1.0.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0bf256ce5efdfa370213c1dabab5935a12e49f2c58d15e9eac2870d3b4f27263" +dependencies = [ + "futures-core", +] + +[[package]] +name = "synstructure" +version = "0.13.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "728a70f3dbaf5bab7f0c4b1ac8d7ae5ea60a4b5549c8a5914361c99147a709d2" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "system-configuration" +version = "0.7.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a13f3d0daba03132c0aa9767f98351b3488edc2c100cda2d2ec2b04f3d8d3c8b" +dependencies = [ + "bitflags 2.11.0", + "core-foundation 0.9.4", + "system-configuration-sys", +] + +[[package]] +name = "system-configuration-sys" +version = "0.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8e1d1b10ced5ca923a1fcb8d03e96b8d3268065d724548c0211415ff6ac6bac4" +dependencies = [ + "core-foundation-sys", + "libc", +] + +[[package]] +name = "tempfile" +version = "3.26.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "82a72c767771b47409d2345987fda8628641887d5466101319899796367354a0" +dependencies = [ + "fastrand", + "getrandom 
0.4.1", + "once_cell", + "rustix", + "windows-sys 0.61.2", +] + +[[package]] +name = "terminfo" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d4ea810f0692f9f51b382fff5893887bb4580f5fa246fde546e0b13e7fcee662" +dependencies = [ + "fnv", + "nom", + "phf", + "phf_codegen", +] + +[[package]] +name = "termios" +version = "0.3.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "411c5bf740737c7918b8b1fe232dca4dc9f8e754b8ad5e20966814001ed0ac6b" +dependencies = [ + "libc", +] + +[[package]] +name = "termwiz" +version = "0.23.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4676b37242ccbd1aabf56edb093a4827dc49086c0ffd764a5705899e0f35f8f7" +dependencies = [ + "anyhow", + "base64", + "bitflags 2.11.0", + "fancy-regex", + "filedescriptor", + "finl_unicode", + "fixedbitset", + "hex", + "lazy_static", + "libc", + "log", + "memmem", + "nix", + "num-derive", + "num-traits", + "ordered-float", + "pest", + "pest_derive", + "phf", + "sha2", + "signal-hook", + "siphasher", + "terminfo", + "termios", + "thiserror 1.0.69", + "ucd-trie", + "unicode-segmentation", + "vtparse", + "wezterm-bidi", + "wezterm-blob-leases", + "wezterm-color-types", + "wezterm-dynamic", + "wezterm-input-types", + "winapi", +] + +[[package]] +name = "thiserror" +version = "1.0.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6aaf5339b578ea85b50e080feb250a3e8ae8cfcdff9a461c9ec2904bc923f52" +dependencies = [ + "thiserror-impl 1.0.69", +] + +[[package]] +name = "thiserror" +version = "2.0.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4288b5bcbc7920c07a1149a35cf9590a2aa808e0bc1eafaade0b80947865fbc4" +dependencies = [ + "thiserror-impl 2.0.18", +] + +[[package]] +name = "thiserror-impl" +version = "1.0.69" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"4fee6c4efc90059e10f81e6d42c60a18f76588c3d74cb83a0b242a2b6c7504c1" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "thiserror-impl" +version = "2.0.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ebc4ee7f67670e9b64d05fa4253e753e016c6c95ff35b89b7941d6b856dec1d5" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "thread_local" +version = "1.1.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f60246a4944f24f6e018aa17cdeffb7818b76356965d03b07d6a9886e8962185" +dependencies = [ + "cfg-if", +] + +[[package]] +name = "time" +version = "0.3.47" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "743bd48c283afc0388f9b8827b976905fb217ad9e647fae3a379a9283c4def2c" +dependencies = [ + "deranged", + "libc", + "num-conv", + "num_threads", + "powerfmt", + "serde_core", + "time-core", +] + +[[package]] +name = "time-core" +version = "0.1.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7694e1cfe791f8d31026952abf09c69ca6f6fa4e1a1229e18988f06a04a12dca" + +[[package]] +name = "tinystr" +version = "0.8.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "42d3e9c45c09de15d06dd8acf5f4e0e399e85927b7f00711024eb7ae10fa4869" +dependencies = [ + "displaydoc", + "zerovec", +] + +[[package]] +name = "tinyvec" +version = "1.10.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bfa5fdc3bce6191a1dbc8c02d5c8bffcf557bafa17c124c5264a458f1b0613fa" +dependencies = [ + "tinyvec_macros", +] + +[[package]] +name = "tinyvec_macros" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20" + +[[package]] +name = "tokio" +version = "1.49.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"72a2903cd7736441aac9df9d7688bd0ce48edccaadf181c3b90be801e81d3d86" +dependencies = [ + "bytes", + "libc", + "mio", + "parking_lot", + "pin-project-lite", + "signal-hook-registry", + "socket2", + "tokio-macros", + "windows-sys 0.61.2", +] + +[[package]] +name = "tokio-macros" +version = "2.6.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "af407857209536a95c8e56f8231ef2c2e2aff839b22e07a1ffcbc617e9db9fa5" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "tokio-rustls" +version = "0.26.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1729aa945f29d91ba541258c8df89027d5792d85a8841fb65e8bf0f4ede4ef61" +dependencies = [ + "rustls", + "tokio", +] + +[[package]] +name = "tokio-stream" +version = "0.1.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "32da49809aab5c3bc678af03902d4ccddea2a87d028d86392a4b1560c6906c70" +dependencies = [ + "futures-core", + "pin-project-lite", + "tokio", +] + +[[package]] +name = "tokio-test" +version = "0.4.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3f6d24790a10a7af737693a3e8f1d03faef7e6ca0cc99aae5066f533766de545" +dependencies = [ + "futures-core", + "tokio", + "tokio-stream", +] + +[[package]] +name = "tokio-util" +version = "0.7.18" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9ae9cec805b01e8fc3fd2fe289f89149a9b66dd16786abd8b19cfa7b48cb0098" +dependencies = [ + "bytes", + "futures-core", + "futures-sink", + "pin-project-lite", + "tokio", +] + +[[package]] +name = "toml" +version = "1.0.3+spec-1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c7614eaf19ad818347db24addfa201729cf2a9b6fdfd9eb0ab870fcacc606c0c" +dependencies = [ + "indexmap", + "serde_core", + "serde_spanned", + "toml_datetime", + "toml_parser", + "toml_writer", + "winnow", +] + +[[package]] +name = "toml_datetime" +version = 
"1.0.0+spec-1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "32c2555c699578a4f59f0cc68e5116c8d7cabbd45e1409b989d4be085b53f13e" +dependencies = [ + "serde_core", +] + +[[package]] +name = "toml_parser" +version = "1.0.9+spec-1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "702d4415e08923e7e1ef96cd5727c0dfed80b4d2fa25db9647fe5eb6f7c5a4c4" +dependencies = [ + "winnow", +] + +[[package]] +name = "toml_writer" +version = "1.0.6+spec-1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ab16f14aed21ee8bfd8ec22513f7287cd4a91aa92e44edfe2c17ddd004e92607" + +[[package]] +name = "tower" +version = "0.5.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ebe5ef63511595f1344e2d5cfa636d973292adc0eec1f0ad45fae9f0851ab1d4" +dependencies = [ + "futures-core", + "futures-util", + "pin-project-lite", + "sync_wrapper", + "tokio", + "tower-layer", + "tower-service", +] + +[[package]] +name = "tower-http" +version = "0.6.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8" +dependencies = [ + "bitflags 2.11.0", + "bytes", + "futures-util", + "http", + "http-body", + "iri-string", + "pin-project-lite", + "tower", + "tower-layer", + "tower-service", +] + +[[package]] +name = "tower-layer" +version = "0.3.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "121c2a6cda46980bb0fcd1647ffaf6cd3fc79a013de288782836f6df9c48780e" + +[[package]] +name = "tower-service" +version = "0.3.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3" + +[[package]] +name = "tracing" +version = "0.1.44" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100" +dependencies = [ + 
"pin-project-lite", + "tracing-attributes", + "tracing-core", +] + +[[package]] +name = "tracing-attributes" +version = "0.1.31" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7490cfa5ec963746568740651ac6781f701c9c5ea257c58e057f3ba8cf69e8da" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "tracing-core" +version = "0.1.36" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "db97caf9d906fbde555dd62fa95ddba9eecfd14cb388e4f491a66d74cd5fb79a" +dependencies = [ + "once_cell", + "valuable", +] + +[[package]] +name = "tracing-log" +version = "0.2.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ee855f1f400bd0e5c02d150ae5de3840039a3f54b025156404e34c23c03f47c3" +dependencies = [ + "log", + "once_cell", + "tracing-core", +] + +[[package]] +name = "tracing-subscriber" +version = "0.3.22" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2f30143827ddab0d256fd843b7a66d164e9f271cfa0dde49142c5ca0ca291f1e" +dependencies = [ + "matchers", + "nu-ansi-term", + "once_cell", + "regex-automata", + "sharded-slab", + "smallvec", + "thread_local", + "tracing", + "tracing-core", + "tracing-log", +] + +[[package]] +name = "try-lock" +version = "0.2.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e421abadd41a4225275504ea4d6566923418b7f05506fbc9c0fe86ba7396114b" + +[[package]] +name = "typenum" +version = "1.19.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "562d481066bde0658276a35467c4af00bdc6ee726305698a55b86e61d7ad82bb" + +[[package]] +name = "ucd-trie" +version = "0.1.7" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2896d95c02a80c6d6a5d6e953d479f5ddf2dfdb6a244441010e373ac0fb88971" + +[[package]] +name = "unicode-ident" +version = "1.0.24" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"e6e4313cd5fcd3dad5cafa179702e2b244f760991f45397d14d4ebf38247da75" + +[[package]] +name = "unicode-segmentation" +version = "1.12.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f6ccf251212114b54433ec949fd6a7841275f9ada20dddd2f29e9ceea4501493" + +[[package]] +name = "unicode-truncate" +version = "2.0.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "16b380a1238663e5f8a691f9039c73e1cdae598a30e9855f541d29b08b53e9a5" +dependencies = [ + "itertools", + "unicode-segmentation", + "unicode-width", +] + +[[package]] +name = "unicode-width" +version = "0.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b4ac048d71ede7ee76d585517add45da530660ef4390e49b098733c6e897f254" + +[[package]] +name = "unicode-xid" +version = "0.2.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ebc1c04c71510c7f702b52b7c350734c9ff1295c464a03335b00bb84fc54f853" + +[[package]] +name = "unit-prefix" +version = "0.5.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "81e544489bf3d8ef66c953931f56617f423cd4b5494be343d9b9d3dda037b9a3" + +[[package]] +name = "untrusted" +version = "0.9.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8ecb6da28b8a351d773b68d5825ac39017e680750f980f3a1a85cd8dd28a47c1" + +[[package]] +name = "url" +version = "2.5.8" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ff67a8a4397373c3ef660812acab3268222035010ab8680ec4215f38ba3d0eed" +dependencies = [ + "form_urlencoded", + "idna", + "percent-encoding", + "serde", +] + +[[package]] +name = "utf8_iter" +version = "1.0.4" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be" + +[[package]] +name = "utf8parse" +version = "0.2.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821" + +[[package]] +name = "uuid" +version = "1.21.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b672338555252d43fd2240c714dc444b8c6fb0a5c5335e65a07bba7742735ddb" +dependencies = [ + "atomic", + "getrandom 0.4.1", + "js-sys", + "wasm-bindgen", +] + +[[package]] +name = "valuable" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ba73ea9cf16a25df0c8caa16c51acb937d5712a8429db78a3ee29d5dcacd3a65" + +[[package]] +name = "version_check" +version = "0.9.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0b928f33d975fc6ad9f86c8f283853ad26bdd5b10b7f1542aa2fa15e2289105a" + +[[package]] +name = "vtparse" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6d9b2acfb050df409c972a37d3b8e08cdea3bddb0c09db9d53137e504cfabed0" +dependencies = [ + "utf8parse", +] + +[[package]] +name = "walkdir" +version = "2.5.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "29790946404f91d9c5d06f9874efddea1dc06c5efe94541a7d6863108e3a5e4b" +dependencies = [ + "same-file", + "winapi-util", +] + +[[package]] +name = "want" +version = "0.3.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "bfa7760aed19e106de2c7c0b581b509f2f25d3dacaf737cb82ac61bc6d760b0e" +dependencies = [ + "try-lock", +] + +[[package]] +name = "wasi" +version = "0.11.1+wasi-snapshot-preview1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ccf3ec651a847eb01de73ccad15eb7d99f80485de043efb2f370cd654f4ea44b" + +[[package]] +name = "wasip2" +version = "1.0.2+wasi-0.2.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9517f9239f02c069db75e65f174b3da828fe5f5b945c4dd26bd25d89c03ebcf5" +dependencies = [ + "wit-bindgen", +] + +[[package]] +name = "wasip3" +version = "0.4.0+wasi-0.3.0-rc-2026-01-06" 
+source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5428f8bf88ea5ddc08faddef2ac4a67e390b88186c703ce6dbd955e1c145aca5" +dependencies = [ + "wit-bindgen", +] + +[[package]] +name = "wasm-bindgen" +version = "0.2.113" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "60722a937f594b7fde9adb894d7c092fc1bb6612897c46368d18e7a20208eff2" +dependencies = [ + "cfg-if", + "once_cell", + "rustversion", + "wasm-bindgen-macro", + "wasm-bindgen-shared", +] + +[[package]] +name = "wasm-bindgen-futures" +version = "0.4.63" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8a89f4650b770e4521aa6573724e2aed4704372151bd0de9d16a3bbabb87441a" +dependencies = [ + "cfg-if", + "futures-util", + "js-sys", + "once_cell", + "wasm-bindgen", + "web-sys", +] + +[[package]] +name = "wasm-bindgen-macro" +version = "0.2.113" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0fac8c6395094b6b91c4af293f4c79371c163f9a6f56184d2c9a85f5a95f3950" +dependencies = [ + "quote", + "wasm-bindgen-macro-support", +] + +[[package]] +name = "wasm-bindgen-macro-support" +version = "0.2.113" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ab3fabce6159dc20728033842636887e4877688ae94382766e00b180abac9d60" +dependencies = [ + "bumpalo", + "proc-macro2", + "quote", + "syn 2.0.117", + "wasm-bindgen-shared", +] + +[[package]] +name = "wasm-bindgen-shared" +version = "0.2.113" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "de0e091bdb824da87dc01d967388880d017a0a9bc4f3bdc0d86ee9f9336e3bb5" +dependencies = [ + "unicode-ident", +] + +[[package]] +name = "wasm-encoder" +version = "0.244.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "990065f2fe63003fe337b932cfb5e3b80e0b4d0f5ff650e6985b1048f62c8319" +dependencies = [ + "leb128fmt", + "wasmparser", +] + +[[package]] +name = "wasm-metadata" +version = "0.244.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "bb0e353e6a2fbdc176932bbaab493762eb1255a7900fe0fea1a2f96c296cc909" +dependencies = [ + "anyhow", + "indexmap", + "wasm-encoder", + "wasmparser", +] + +[[package]] +name = "wasmparser" +version = "0.244.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "47b807c72e1bac69382b3a6fb3dbe8ea4c0ed87ff5629b8685ae6b9a611028fe" +dependencies = [ + "bitflags 2.11.0", + "hashbrown 0.15.5", + "indexmap", + "semver", +] + +[[package]] +name = "web-sys" +version = "0.3.90" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "705eceb4ce901230f8625bd1d665128056ccbe4b7408faa625eec1ba80f59a97" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + +[[package]] +name = "web-time" +version = "1.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a6580f308b1fad9207618087a65c04e7a10bc77e02c8e84e9b00dd4b12fa0bb" +dependencies = [ + "js-sys", + "wasm-bindgen", +] + +[[package]] +name = "webpki-root-certs" +version = "1.0.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "804f18a4ac2676ffb4e8b5b5fa9ae38af06df08162314f96a68d2a363e21a8ca" +dependencies = [ + "rustls-pki-types", +] + +[[package]] +name = "wezterm-bidi" +version = "0.2.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0c0a6e355560527dd2d1cf7890652f4f09bb3433b6aadade4c9b5ed76de5f3ec" +dependencies = [ + "log", + "wezterm-dynamic", +] + +[[package]] +name = "wezterm-blob-leases" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "692daff6d93d94e29e4114544ef6d5c942a7ed998b37abdc19b17136ea428eb7" +dependencies = [ + "getrandom 0.3.4", + "mac_address", + "sha2", + "thiserror 1.0.69", + "uuid", +] + +[[package]] +name = "wezterm-color-types" +version = "0.3.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"7de81ef35c9010270d63772bebef2f2d6d1f2d20a983d27505ac850b8c4b4296" +dependencies = [ + "csscolorparser", + "deltae", + "lazy_static", + "wezterm-dynamic", +] + +[[package]] +name = "wezterm-dynamic" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5f2ab60e120fd6eaa68d9567f3226e876684639d22a4219b313ff69ec0ccd5ac" +dependencies = [ + "log", + "ordered-float", + "strsim", + "thiserror 1.0.69", + "wezterm-dynamic-derive", +] + +[[package]] +name = "wezterm-dynamic-derive" +version = "0.1.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "46c0cf2d539c645b448eaffec9ec494b8b19bd5077d9e58cb1ae7efece8d575b" +dependencies = [ + "proc-macro2", + "quote", + "syn 1.0.109", +] + +[[package]] +name = "wezterm-input-types" +version = "0.1.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7012add459f951456ec9d6c7e6fc340b1ce15d6fc9629f8c42853412c029e57e" +dependencies = [ + "bitflags 1.3.2", + "euclid", + "lazy_static", + "serde", + "wezterm-dynamic", +] + +[[package]] +name = "winapi" +version = "0.3.9" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419" +dependencies = [ + "winapi-i686-pc-windows-gnu", + "winapi-x86_64-pc-windows-gnu", +] + +[[package]] +name = "winapi-i686-pc-windows-gnu" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6" + +[[package]] +name = "winapi-util" +version = "0.1.11" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c2a7b1c03c876122aa43f3020e6c3c3ee5c05081c9a00739faf7503aeba10d22" +dependencies = [ + "windows-sys 0.61.2", +] + +[[package]] +name = "winapi-x86_64-pc-windows-gnu" +version = "0.4.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f" + +[[package]] +name = "windows-core" +version = "0.62.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b8e83a14d34d0623b51dce9581199302a221863196a1dde71a7663a4c2be9deb" +dependencies = [ + "windows-implement", + "windows-interface", + "windows-link", + "windows-result", + "windows-strings", +] + +[[package]] +name = "windows-implement" +version = "0.60.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "053e2e040ab57b9dc951b72c264860db7eb3b0200ba345b4e4c3b14f67855ddf" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "windows-interface" +version = "0.59.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "3f316c4a2570ba26bbec722032c4099d8c8bc095efccdc15688708623367e358" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "windows-link" +version = "0.2.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5" + +[[package]] +name = "windows-registry" +version = "0.6.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "02752bf7fbdcce7f2a27a742f798510f3e5ad88dbe84871e5168e2120c3d5720" +dependencies = [ + "windows-link", + "windows-result", + "windows-strings", +] + +[[package]] +name = "windows-result" +version = "0.4.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7781fa89eaf60850ac3d2da7af8e5242a5ea78d1a11c49bf2910bb5a73853eb5" +dependencies = [ + "windows-link", +] + +[[package]] +name = "windows-strings" +version = "0.5.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "7837d08f69c77cf6b07689544538e017c1bfcf57e34b4c0ff58e6c2cd3b37091" +dependencies = [ + "windows-link", +] + +[[package]] +name = "windows-sys" +version = "0.45.0" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "75283be5efb2831d37ea142365f009c02ec203cd29a3ebecbc093d52315b66d0" +dependencies = [ + "windows-targets 0.42.2", +] + +[[package]] +name = "windows-sys" +version = "0.52.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d" +dependencies = [ + "windows-targets 0.52.6", +] + +[[package]] +name = "windows-sys" +version = "0.60.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "f2f500e4d28234f72040990ec9d39e3a6b950f9f22d3dba18416c35882612bcb" +dependencies = [ + "windows-targets 0.53.5", +] + +[[package]] +name = "windows-sys" +version = "0.61.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc" +dependencies = [ + "windows-link", +] + +[[package]] +name = "windows-targets" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8e5180c00cd44c9b1c88adb3693291f1cd93605ded80c250a75d472756b4d071" +dependencies = [ + "windows_aarch64_gnullvm 0.42.2", + "windows_aarch64_msvc 0.42.2", + "windows_i686_gnu 0.42.2", + "windows_i686_msvc 0.42.2", + "windows_x86_64_gnu 0.42.2", + "windows_x86_64_gnullvm 0.42.2", + "windows_x86_64_msvc 0.42.2", +] + +[[package]] +name = "windows-targets" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9b724f72796e036ab90c1021d4780d4d3d648aca59e491e6b98e725b84e99973" +dependencies = [ + "windows_aarch64_gnullvm 0.52.6", + "windows_aarch64_msvc 0.52.6", + "windows_i686_gnu 0.52.6", + "windows_i686_gnullvm 0.52.6", + "windows_i686_msvc 0.52.6", + "windows_x86_64_gnu 0.52.6", + "windows_x86_64_gnullvm 0.52.6", + "windows_x86_64_msvc 0.52.6", +] + +[[package]] +name = "windows-targets" +version = "0.53.5" +source = "registry+https://github.com/rust-lang/crates.io-index" 
+checksum = "4945f9f551b88e0d65f3db0bc25c33b8acea4d9e41163edf90dcd0b19f9069f3" +dependencies = [ + "windows-link", + "windows_aarch64_gnullvm 0.53.1", + "windows_aarch64_msvc 0.53.1", + "windows_i686_gnu 0.53.1", + "windows_i686_gnullvm 0.53.1", + "windows_i686_msvc 0.53.1", + "windows_x86_64_gnu 0.53.1", + "windows_x86_64_gnullvm 0.53.1", + "windows_x86_64_msvc 0.53.1", +] + +[[package]] +name = "windows_aarch64_gnullvm" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "597a5118570b68bc08d8d59125332c54f1ba9d9adeedeef5b99b02ba2b0698f8" + +[[package]] +name = "windows_aarch64_gnullvm" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3" + +[[package]] +name = "windows_aarch64_gnullvm" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "a9d8416fa8b42f5c947f8482c43e7d89e73a173cead56d044f6a56104a6d1b53" + +[[package]] +name = "windows_aarch64_msvc" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "e08e8864a60f06ef0d0ff4ba04124db8b0fb3be5776a5cd47641e942e58c4d43" + +[[package]] +name = "windows_aarch64_msvc" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469" + +[[package]] +name = "windows_aarch64_msvc" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b9d782e804c2f632e395708e99a94275910eb9100b2114651e04744e9b125006" + +[[package]] +name = "windows_i686_gnu" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "c61d927d8da41da96a81f029489353e68739737d3beca43145c8afec9a31a84f" + +[[package]] +name = "windows_i686_gnu" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = 
"8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b" + +[[package]] +name = "windows_i686_gnu" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "960e6da069d81e09becb0ca57a65220ddff016ff2d6af6a223cf372a506593a3" + +[[package]] +name = "windows_i686_gnullvm" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66" + +[[package]] +name = "windows_i686_gnullvm" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "fa7359d10048f68ab8b09fa71c3daccfb0e9b559aed648a8f95469c27057180c" + +[[package]] +name = "windows_i686_msvc" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "44d840b6ec649f480a41c8d80f9c65108b92d89345dd94027bfe06ac444d1060" + +[[package]] +name = "windows_i686_msvc" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66" + +[[package]] +name = "windows_i686_msvc" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "1e7ac75179f18232fe9c285163565a57ef8d3c89254a30685b57d83a38d326c2" + +[[package]] +name = "windows_x86_64_gnu" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "8de912b8b8feb55c064867cf047dda097f92d51efad5b491dfb98f6bbb70cb36" + +[[package]] +name = "windows_x86_64_gnu" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78" + +[[package]] +name = "windows_x86_64_gnu" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9c3842cdd74a865a8066ab39c8a7a473c0778a3f29370b5fd6b4b9aa7df4a499" + +[[package]] +name = "windows_x86_64_gnullvm" 
+version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "26d41b46a36d453748aedef1486d5c7a85db22e56aff34643984ea85514e94a3" + +[[package]] +name = "windows_x86_64_gnullvm" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d" + +[[package]] +name = "windows_x86_64_gnullvm" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0ffa179e2d07eee8ad8f57493436566c7cc30ac536a3379fdf008f47f6bb7ae1" + +[[package]] +name = "windows_x86_64_msvc" +version = "0.42.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9aec5da331524158c6d1a4ac0ab1541149c0b9505fde06423b02f5ef0106b9f0" + +[[package]] +name = "windows_x86_64_msvc" +version = "0.52.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec" + +[[package]] +name = "windows_x86_64_msvc" +version = "0.53.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d6bbff5f0aada427a1e5a6da5f1f98158182f26556f345ac9e04d36d0ebed650" + +[[package]] +name = "winnow" +version = "0.7.14" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "5a5364e9d77fcdeeaa6062ced926ee3381faa2ee02d3eb83a5c27a8825540829" + +[[package]] +name = "wit-bindgen" +version = "0.51.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d7249219f66ced02969388cf2bb044a09756a083d0fab1e566056b04d9fbcaa5" +dependencies = [ + "wit-bindgen-rust-macro", +] + +[[package]] +name = "wit-bindgen-core" +version = "0.51.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ea61de684c3ea68cb082b7a88508a8b27fcc8b797d738bfc99a82facf1d752dc" +dependencies = [ + "anyhow", + "heck", + "wit-parser", +] + +[[package]] +name = "wit-bindgen-rust" +version = "0.51.0" 
+source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b7c566e0f4b284dd6561c786d9cb0142da491f46a9fbed79ea69cdad5db17f21" +dependencies = [ + "anyhow", + "heck", + "indexmap", + "prettyplease", + "syn 2.0.117", + "wasm-metadata", + "wit-bindgen-core", + "wit-component", +] + +[[package]] +name = "wit-bindgen-rust-macro" +version = "0.51.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "0c0f9bfd77e6a48eccf51359e3ae77140a7f50b1e2ebfe62422d8afdaffab17a" +dependencies = [ + "anyhow", + "prettyplease", + "proc-macro2", + "quote", + "syn 2.0.117", + "wit-bindgen-core", + "wit-bindgen-rust", +] + +[[package]] +name = "wit-component" +version = "0.244.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9d66ea20e9553b30172b5e831994e35fbde2d165325bec84fc43dbf6f4eb9cb2" +dependencies = [ + "anyhow", + "bitflags 2.11.0", + "indexmap", + "log", + "serde", + "serde_derive", + "serde_json", + "wasm-encoder", + "wasm-metadata", + "wasmparser", + "wit-parser", +] + +[[package]] +name = "wit-parser" +version = "0.244.0" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "ecc8ac4bc1dc3381b7f59c34f00b67e18f910c2c0f50015669dde7def656a736" +dependencies = [ + "anyhow", + "id-arena", + "indexmap", + "log", + "semver", + "serde", + "serde_derive", + "serde_json", + "unicode-xid", + "wasmparser", +] + +[[package]] +name = "writeable" +version = "0.6.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "9edde0db4769d2dc68579893f2306b26c6ecfbe0ef499b013d731b7b9247e0b9" + +[[package]] +name = "yoke" +version = "0.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "72d6e5c6afb84d73944e5cedb052c4680d5657337201555f9f2a16b7406d4954" +dependencies = [ + "stable_deref_trait", + "yoke-derive", + "zerofrom", +] + +[[package]] +name = "yoke-derive" +version = "0.8.1" +source = "registry+https://github.com/rust-lang/crates.io-index" 
+checksum = "b659052874eb698efe5b9e8cf382204678a0086ebf46982b79d6ca3182927e5d" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", + "synstructure", +] + +[[package]] +name = "zerocopy" +version = "0.8.39" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "db6d35d663eadb6c932438e763b262fe1a70987f9ae936e60158176d710cae4a" +dependencies = [ + "zerocopy-derive", +] + +[[package]] +name = "zerocopy-derive" +version = "0.8.39" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "4122cd3169e94605190e77839c9a40d40ed048d305bfdc146e7df40ab0f3e517" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "zerofrom" +version = "0.1.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "50cc42e0333e05660c3587f3bf9d0478688e15d870fab3346451ce7f8c9fbea5" +dependencies = [ + "zerofrom-derive", +] + +[[package]] +name = "zerofrom-derive" +version = "0.1.6" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "d71e5d6e06ab090c67b5e44993ec16b72dcbaabc526db883a360057678b48502" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", + "synstructure", +] + +[[package]] +name = "zeroize" +version = "1.8.2" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b97154e67e32c85465826e8bcc1c59429aaaf107c1e4a9e53c8d8ccd5eff88d0" + +[[package]] +name = "zerotrie" +version = "0.2.3" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "2a59c17a5562d507e4b54960e8569ebee33bee890c70aa3fe7b97e85a9fd7851" +dependencies = [ + "displaydoc", + "yoke", + "zerofrom", +] + +[[package]] +name = "zerovec" +version = "0.11.5" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "6c28719294829477f525be0186d13efa9a3c602f7ec202ca9e353d310fb9a002" +dependencies = [ + "yoke", + "zerofrom", + "zerovec-derive", +] + +[[package]] +name = "zerovec-derive" +version = "0.11.2" +source = 
"registry+https://github.com/rust-lang/crates.io-index" +checksum = "eadce39539ca5cb3985590102671f2567e659fca9666581ad3411d59207951f3" +dependencies = [ + "proc-macro2", + "quote", + "syn 2.0.117", +] + +[[package]] +name = "zmij" +version = "1.0.21" +source = "registry+https://github.com/rust-lang/crates.io-index" +checksum = "b8848ee67ecc8aedbaf3e4122217aff892639231befc6a1b58d29fff4c2cabaa" diff --git a/Cargo.toml b/Cargo.toml new file mode 100644 index 0000000..86e2b01 --- /dev/null +++ b/Cargo.toml @@ -0,0 +1,90 @@ +[package] +name = "git-same" +version = "0.9.0" +edition = "2021" +authors = ["Git-Same Contributors"] +description = "Mirror GitHub structure /orgs/repos/ to local file system." +license = "MIT" +repository = "https://github.com/zaai-com/git-same" +keywords = ["git", "github", "cli", "clone", "sync"] +categories = ["command-line-utilities", "development-tools"] + +# All binaries are installed by default +[[bin]] +name = "git-same" +path = "src/main.rs" + +[[bin]] +name = "gitsame" +path = "src/main.rs" + +[[bin]] +name = "gitsa" +path = "src/main.rs" + +[[bin]] +name = "gisa" +path = "src/main.rs" + +[features] +default = ["tui"] +tui = ["dep:ratatui", "dep:crossterm"] + +[dependencies] +# CLI parsing +clap = { version = "4", features = ["derive"] } + + +# Async runtime +tokio = { version = "1", features = ["full"] } + +# HTTP client for GitHub API +reqwest = { version = "0.13", features = ["json"] } + +# JSON/TOML serialization +serde = { version = "1", features = ["derive"] } +serde_json = "1" +toml = "1" + +# Progress bars and terminal output +indicatif = "0.18" +console = "0.16" + +# XDG directories (~/.config/git-same) +directories = "6" + +# Error handling +thiserror = "2" +anyhow = "1" + +# Shell expansion (~/ paths) +shellexpand = "3" + +# Async trait support +async-trait = "0.1" + +# Date/time handling +chrono = { version = "0.4", features = ["serde"] } + +# Futures utilities +futures = "0.3" + +# Structured logging +tracing = "0.1" 
+tracing-subscriber = { version = "0.3", features = ["env-filter"] } + +# TUI (optional, behind "tui" feature) +ratatui = { version = "0.30", optional = true } +crossterm = { version = "0.29", optional = true } + +[dev-dependencies] +# Testing +tokio-test = "0.4" +mockito = "1" +tempfile = "3" + +[profile.release] +strip = true +lto = true +codegen-units = 1 +panic = "abort" diff --git a/Docs/README.md b/Docs/README.md deleted file mode 100644 index 4888ab1..0000000 --- a/Docs/README.md +++ /dev/null @@ -1,2 +0,0 @@ -# git-same -Terminal command to fetch all your Github repositories. diff --git a/Docs/Specs/Gisa-S1-overview.md b/Docs/Specs/Gisa-S1-overview.md deleted file mode 100644 index bf744b5..0000000 --- a/Docs/Specs/Gisa-S1-overview.md +++ /dev/null @@ -1,4 +0,0 @@ -# Git-Same - -Gisa (short for git-same) is a CLI to git clone the GitHub org- and repo-structure to a local folderol the macOS file system.  - diff --git a/Docs/Specs/Gisa-S2-language-recommendation.md b/Docs/Specs/Gisa-S2-language-recommendation.md deleted file mode 100644 index 17cf162..0000000 --- a/Docs/Specs/Gisa-S2-language-recommendation.md +++ /dev/null @@ -1,304 +0,0 @@ -# Language & Framework Recommendation - -## Evaluation Criteria - -For Gisa, the ideal language should excel at: - -1. **CLI experience** — Argument parsing, help generation, shell completion -2. **Concurrency** — Parallel HTTP requests and git operations -3. **Distribution** — Easy installation, minimal dependencies -4. **HTTP client** — GitHub API integration with proper error handling -5. **Process spawning** — Running git commands reliably -6. **Cross-platform** — macOS primary, Linux/Windows secondary -7. 
**Developer velocity** — Time to functional prototype - -## Language Comparison - -### Rust - -**Ecosystem**: -- CLI: `clap` (best-in-class argument parsing, derives, shell completions) -- HTTP: `reqwest` (async, well-maintained) -- Async: `tokio` (mature runtime) -- Progress: `indicatif` (beautiful progress bars) -- Git: `git2` (libgit2 bindings) or shell out to `git` - -**Pros**: -- Single static binary, no runtime needed -- Excellent performance, low memory footprint -- Strong type system catches bugs at compile time -- `clap` derives generate help, completions, and validation automatically -- Great error handling with `Result` and `?` operator -- Memory safety without garbage collection - -**Cons**: -- Steeper learning curve -- Longer compile times during development -- More verbose than scripting languages -- `git2` (libgit2) can be tricky to compile; shelling out to `git` is often simpler - -**Distribution**: -- `cargo install gisa` -- Homebrew formula (single binary) -- Pre-built binaries for all platforms - ---- - -### Go - -**Ecosystem**: -- CLI: `cobra` + `viper` (widely used, battle-tested) -- HTTP: `net/http` (stdlib) or `resty` -- Concurrency: goroutines + channels (built-in) -- Progress: `progressbar` or `mpb` -- Git: `go-git` (pure Go) or shell out - -**Pros**: -- Single static binary -- Fast compilation -- Simple concurrency model (goroutines) -- `go-git` is pure Go, no C dependencies -- Straightforward to learn -- Excellent stdlib for HTTP/JSON - -**Cons**: -- Error handling is verbose (`if err != nil`) -- Less expressive type system -- `cobra` requires more boilerplate than `clap` -- No sum types makes error states harder to model - -**Distribution**: -- `go install github.com/user/gisa@latest` -- Homebrew -- Pre-built binaries - ---- - -### TypeScript (Node.js) - -**Ecosystem**: -- CLI: `commander`, `yargs`, or `oclif` (feature-rich) -- HTTP: `axios`, `undici`, or native `fetch` -- Concurrency: `Promise.all`, worker threads -- Progress: `ora`, 
`cli-progress`, `listr2` -- Git: `simple-git` (wrapper around git CLI) - -**Pros**: -- Fastest development velocity -- Excellent async/await ergonomics -- Rich npm ecosystem -- Type safety with TypeScript -- `oclif` provides plugins, hooks, auto-updates - -**Cons**: -- **Requires Node.js runtime** — Major friction for users -- Larger install size -- Startup time slower than native binaries -- Managing npm dependencies adds complexity - -**Distribution**: -- `npm install -g gisa` (requires Node) -- Or bundle with `pkg`/`nexe` (larger binaries, ~50MB) - ---- - -### Python - -**Ecosystem**: -- CLI: `click` or `typer` (modern, type-hint based) -- HTTP: `httpx` (async) or `requests` -- Concurrency: `asyncio`, `concurrent.futures` -- Progress: `rich` (beautiful output), `tqdm` -- Git: `GitPython` or shell out - -**Pros**: -- Very fast prototyping -- `typer` + `rich` create beautiful CLIs quickly -- Excellent for scripting operations -- Large community, many examples - -**Cons**: -- **Requires Python runtime** — Version conflicts, venv complexity -- Slower execution than compiled languages -- Distribution is painful (PyInstaller, but large bundles) -- GIL limits true parallelism - -**Distribution**: -- `pipx install gisa` (requires Python) -- PyInstaller bundles (~30-50MB) - ---- - -### Swift - -**Ecosystem**: -- CLI: `swift-argument-parser` (Apple's official) -- HTTP: `AsyncHTTPClient` or Foundation's URLSession -- Concurrency: Swift concurrency (async/await, actors) -- Progress: Limited options, would need custom or port -- Git: Shell out to `git` - -**Pros**: -- Native on macOS, excellent integration -- Modern concurrency with async/await -- Good performance -- Single binary possible - -**Cons**: -- **macOS-centric** — Cross-compilation is difficult -- Smaller CLI ecosystem -- Less battle-tested for this use case -- Linux support exists but is secondary - -**Distribution**: -- Homebrew (macOS only realistically) -- Mint (`mint install user/gisa`) - ---- - -### Deno 
(TypeScript) - -**Ecosystem**: -- CLI: `cliffy` (clap-inspired), or `@std/cli` -- HTTP: Native `fetch`, `Deno.HttpClient` -- Concurrency: Native promises, workers -- Progress: `progress` module - -**Pros**: -- Single executable runtime -- TypeScript native, no build step -- Better security model than Node -- Can compile to single binary (`deno compile`) -- Modern stdlib - -**Cons**: -- Smaller ecosystem than Node -- Compiled binaries still large (~80MB) -- Less mature than established options -- Some npm packages don't work - -**Distribution**: -- `deno install` from URL -- `deno compile` for standalone binary - ---- - -## Comparison Matrix - -| Criteria | Rust | Go | TypeScript | Python | Swift | Deno | -| --- | --- | --- | --- | --- | --- | --- | -| Single binary | ✅ | ✅ | ⚠️ pkg | ⚠️ PyInstaller | ✅ | ⚠️ large | -| No runtime needed | ✅ | ✅ | ❌ | ❌ | ✅ | ❌ | -| CLI ecosystem | ⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | ⭐ | ⭐⭐ | -| Concurrency | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ | ⭐ | ⭐⭐ | ⭐⭐ | -| Dev velocity | ⭐ | ⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ | ⭐⭐ | -| Cross-platform | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ | ⭐⭐ | ⭐ | ⭐⭐ | -| Performance | ⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐ | ⭐ | ⭐⭐⭐ | ⭐⭐ | -| Binary size | ~3-5MB | ~8-12MB | ~50MB | ~40MB | ~5-10MB | ~80MB | - -## Decision - -**Rust** — Confirmed as the implementation language. - ---- - -## Recommendation - -### **Primary: Rust** ⭐ - -**Why Rust is the best fit for Gisa:** - -1. **Zero-friction distribution**: Users run `brew install gisa` or download a binary. No "install Node first" or "use Python 3.9+". This is critical for CLI adoption. - -2. **`clap`**** is exceptional**: Derive macros generate argument parsing, help text, shell completions, and validation from struct definitions. Less code, fewer bugs. - -3. **Fearless concurrency**: Parallel cloning is safe by default. Rust's ownership model prevents data races at compile time. - -4. **Small, fast binaries**: ~3-5MB binary that starts instantly. Users expect CLI tools to be snappy. - -5. **Reliability**: If it compiles, it usually works. 
-The type system catches entire categories of bugs.
-
-6. **Shell out to `git`**: Don't fight `libgit2`. Shell out to the user's `git` binary — it's what they expect and handles auth/SSH correctly.
-
-### **Alternative: Go**
-
-If Rust's learning curve is a concern, Go is a solid second choice:
-
-- Simpler language, faster to prototype
-- Also produces single binaries
-- `go-git` works well for pure-Go git operations
-- Goroutines make concurrency straightforward
-
-The tradeoff is more boilerplate and less expressive error handling.
-
-### **Not Recommended for V1**
-
-- **TypeScript/Python**: Runtime requirement is a dealbreaker for CLI distribution
-- **Swift**: Limited to macOS, smaller ecosystem
-- **Deno**: Immature, large binary sizes
-
-## Recommended Rust Stack
-
-```toml
-[dependencies]
-clap = { version = "4", features = ["derive"] }      # CLI parsing
-tokio = { version = "1", features = ["full"] }       # Async runtime
-reqwest = { version = "0.12", features = ["json"] }  # HTTP client
-serde = { version = "1", features = ["derive"] }     # JSON serialization
-toml = "0.8"                                         # Config file parsing (TOML)
-indicatif = "0.17"                                   # Progress bars
-console = "0.15"                                     # Terminal colors/styling
-directories = "5"                                    # XDG paths
-thiserror = "1"                                      # Error handling
-```
-
-## Example CLI Structure (Rust + Clap)
-
-```rust
-use clap::{Parser, Subcommand};
-
-#[derive(Parser)]
-#[command(name = "gisa")]
-#[command(about = "Mirror GitHub org/repo structure locally")]
-struct Cli {
-    #[command(subcommand)]
-    command: Commands,
-}
-
-#[derive(Subcommand)]
-enum Commands {
-    /// Clone all repos from your GitHub orgs
-    Clone {
-        /// Base directory for cloned repos
-        #[arg(default_value = "~/github")]
-        path: String,
-
-        /// Parallel clone operations
-        #[arg(short, long, default_value = "4")]
-        jobs: usize,
-
-        /// Preview without cloning
-        #[arg(long)]
-        dry_run: bool,
-    },
-
-    /// Sync existing clones with remote
-    Sync {
-        /// Base directory
-        path: String,
-
-        /// Sync mode
-        #[arg(long,
default_value = "fetch")] - mode: SyncMode, - }, - - /// Initialize config file - Init, -} -``` - -This generates: -- `gisa --help` -- `gisa clone --help` -- Shell completions for bash/zsh/fish -- Typed, validated arguments diff --git a/Docs/Specs/Gisa-S3-github-api-access.md b/Docs/Specs/Gisa-S3-github-api-access.md deleted file mode 100644 index b7722eb..0000000 --- a/Docs/Specs/Gisa-S3-github-api-access.md +++ /dev/null @@ -1,274 +0,0 @@ -# GitHub API Access Strategy - -## Overview - -Gisa needs to discover all organizations and repositories a user has access to. This document details the API endpoints, authentication methods, and implementation considerations. - -**Important distinction**: Gisa uses the `gh` CLI only to **obtain the authentication token**. All GitHub API calls are made directly by Gisa using HTTP requests (not via `gh api`). This provides: -- Full control over pagination and rate limiting -- Parallel API requests for faster discovery -- Custom progress reporting and error handling - -## Required API Endpoints - -### 1. List User's Organizations - -``` -GET /user/orgs -``` - -**Response** (paginated, 30 per page default, max 100): -```json -[ - { - "login": "my-org", - "id": 12345, - "url": "https://api.github.com/orgs/my-org", - "repos_url": "https://api.github.com/orgs/my-org/repos" - } -] -``` - -**Required Scope**: `read:org` - -### 2. List Organization Repositories - -``` -GET /orgs/{org}/repos -``` - -**Parameters**: -- `type`: `all`, `public`, `private`, `forks`, `sources`, `member` -- `sort`: `created`, `updated`, `pushed`, `full_name` -- `per_page`: up to 100 - -**Response** (paginated): -```json -[ - { - "id": 67890, - "name": "repo-name", - "full_name": "my-org/repo-name", - "private": false, - "clone_url": "https://github.com/my-org/repo-name.git", - "ssh_url": "git@github.com:my-org/repo-name.git", - "archived": false, - "default_branch": "main" - } -] -``` - -**Required Scope**: `repo` (for private repos) - -### 3. 
List User's Personal Repositories - -``` -GET /user/repos -``` - -**Parameters**: -- `visibility`: `all`, `public`, `private` -- `affiliation`: `owner`, `collaborator`, `organization_member` -- `type`: `all`, `owner`, `public`, `private`, `member` - -For personal repos only, use: `affiliation=owner&type=owner` - -**Required Scope**: `repo` - -## Authentication Methods - -### Priority Order - -| Priority | Method | How it Works | Pros | Cons | -| --- | --- | --- | --- | --- | -| 1 | GitHub CLI | `gh auth token` | Secure, managed tokens, SSO support | Requires `gh` installed | -| 2 | SSH Keys | Uses existing `~/.ssh` keys | Already configured for most devs | Only for git operations, not API | -| 3 | PAT (env) | `GITHUB_TOKEN` or `GISA_TOKEN` | Simple, CI-friendly | User manages token security | -| 4 | PAT (config) | Stored in `.gisarc` | Persistent | Less secure if committed | - -### Recommended: GitHub CLI Integration - -```bash -# Check if gh is authenticated -gh auth status - -# Get token for API calls -gh auth token -``` - -**Benefits**: -- Handles OAuth flow and token refresh -- Supports SSO-enabled organizations -- Secure credential storage (OS keychain) -- Users likely already have it configured - -### SSH for Clone Operations - -SSH keys authenticate git clone/fetch operations, not API calls. 
- -```bash -# Test SSH access -ssh -T git@github.com - -# Clone URL format -git@github.com:{org}/{repo}.git -``` - -### PAT (Personal Access Token) Fallback - -Required scopes: -- `repo` — Full access to private repositories -- `read:org` — Read organization membership - -```bash -# Environment variable -export GITHUB_TOKEN=ghp_xxxxxxxxxxxx - -# Or in .gisarc -auth: - token: ghp_xxxxxxxxxxxx # Not recommended for shared configs -``` - -## Pagination Handling - -GitHub API uses Link headers for pagination: - -``` -Link: ; rel="next", - ; rel="last" -``` - -### Implementation Strategy - -``` -function fetchAllPages(url): - results = [] - while url: - response = GET(url + "?per_page=100") - results.append(response.body) - url = parseLinkHeader(response.headers["Link"], "next") - return flatten(results) -``` - -## Rate Limiting - -| Auth Type | Rate Limit | -| --- | --- | -| Unauthenticated | 60 requests/hour | -| Authenticated | 5,000 requests/hour | -| GitHub App | 15,000 requests/hour | - -### Headers to Monitor - -``` -X-RateLimit-Limit: 5000 -X-RateLimit-Remaining: 4990 -X-RateLimit-Reset: 1609459200 # Unix timestamp -``` - -### Handling Rate Limits - -1. Check `X-RateLimit-Remaining` before operations -2. If low, warn user and estimate time needed -3. If exhausted, calculate wait time from `X-RateLimit-Reset` -4. Implement exponential backoff for 403 responses - -## Discovery Algorithm - -``` -1. Authenticate (gh → SSH → PAT) - -2. Fetch organizations - orgs = fetchAllPages("/user/orgs") - -3. For each org, fetch repos (parallel) - for org in orgs: - repos[org] = fetchAllPages("/orgs/{org}/repos?type=all") - -4. Fetch personal repos - personal = fetchAllPages("/user/repos?affiliation=owner") - -5. 
Build unified repo list - all_repos = [] - for org, repos in repos: - for repo in repos: - all_repos.append({ - org: org.login, - name: repo.name, - ssh_url: repo.ssh_url, - https_url: repo.clone_url, - archived: repo.archived, - private: repo.private - }) - - for repo in personal: - all_repos.append({ - org: "personal", - name: repo.name, - ... - }) - -6. Return all_repos for clone/sync planning -``` - -## Caching Considerations - -For large organizations, consider caching discovery results: - -```yaml -# .gisa-cache.json (auto-generated) -{ - "last_discovery": "2024-01-15T10:30:00Z", - "orgs": ["org-a", "org-b"], - "repo_count": 234 -} -``` - -- Cache invalidation: 1 hour default, or `--refresh` flag -- Incremental: store `pushed_at` to detect changes -- Skip cache with `--no-cache` - -## Error Scenarios - -| Error | Cause | Handling | -| --- | --- | --- | -| 401 Unauthorized | Invalid/expired token | Prompt re-auth | -| 403 Forbidden | Rate limit or insufficient scope | Check headers, advise user | -| 404 Not Found | Org/repo deleted or no access | Skip, log warning | -| 422 Unprocessable | Bad parameters | Log, likely a bug | -| 5xx Server Error | GitHub outage | Retry with backoff | - -## Token Storage Strategy - -**Gisa does not store tokens itself.** It retrieves tokens at runtime from external sources: - -| Source | Storage Location | Managed By | -| --- | --- | --- | -| `gh` CLI (recommended) | OS keychain (macOS Keychain, Windows Credential Manager, Linux secret-service) | GitHub CLI | -| Environment variable | Shell session / CI secrets | User / CI system | -| `.gisarc` config | Project directory | User (not recommended) | - -**Why this approach:** -- No token management code to maintain in Gisa -- No security liability for storing secrets -- Leverages existing secure storage mechanisms -- Users don't need to generate/paste tokens if they already use `gh` - -**Runtime flow:** -``` -gisa sync ~/github - │ - ├─→ Check: `gh auth token` succeeds? 
→ Use returned token - │ - ├─→ Check: $GITHUB_TOKEN or $GISA_TOKEN set? → Use env var - │ - └─→ Check: .gisarc has auth.token? → Use config token (warn user) -``` - -## Security Considerations - -1. **Never log tokens** — Mask in debug output -2. **Prefer ****`gh`**** CLI** — It handles secure storage -3. **Warn about ****`.gisarc`**** tokens** — Suggest `.gitignore` -4. **Minimal scopes** — Request only `repo` and `read:org` -5. **Token rotation** — Support for short-lived tokens via `gh` diff --git a/Docs/Specs/Gisa-S4-config-specification.md b/Docs/Specs/Gisa-S4-config-specification.md deleted file mode 100644 index 6867913..0000000 --- a/Docs/Specs/Gisa-S4-config-specification.md +++ /dev/null @@ -1,155 +0,0 @@ -# Configuration Specification - -## Config File - -**Filename**: `gisa.config.toml` -**Location**: Project directory (where gisa is run) -**Format**: TOML - -## Full Configuration Example - -```toml -# gisa.config.toml - -# Base directory for all cloned repos -base_path = "~/github" - -# Directory structure pattern -# {org} = organization name or GitHub username for personal repos -# {repo} = repository name -structure = "{org}/{repo}" - -# Number of parallel clone/sync operations -concurrency = 4 - -# Sync behavior: "fetch" (safe) or "pull" (updates working tree) -sync_mode = "fetch" - -[clone] -# Clone depth (0 = full history) -depth = 0 - -# Clone specific branch (empty = default branch) -branch = "" - -# Include submodules -recurse_submodules = false - -[filters] -# Include archived repositories -include_archived = false - -# Include forked repositories -include_forks = false - -# Filter by visibility (future V2) -# visibility = "all" # "all", "public", "private" - -# Filter by specific orgs (future V2) -# orgs = ["org-a", "org-b"] - -# Exclude specific repos (future V2) -# exclude_repos = ["org/repo-to-skip"] -``` - -## Configuration Options - -### Core Settings - -| Option | Type | Default | Description | -| --- | --- | --- | --- | -| `base_path` 
| string | `"~/github"` | Root directory for cloned repos | -| `structure` | string | `"{org}/{repo}"` | Directory structure pattern | -| `concurrency` | integer | `4` | Parallel operations (1-16) | -| `sync_mode` | string | `"fetch"` | `"fetch"` or `"pull"` | - -### Clone Options (`[clone]`) - -| Option | Type | Default | Description | -| --- | --- | --- | --- | -| `depth` | integer | `0` | Shallow clone depth (0 = full) | -| `branch` | string | `""` | Specific branch (empty = default) | -| `recurse_submodules` | boolean | `false` | Clone submodules | - -### Filter Options (`[filters]`) - -| Option | Type | Default | Description | -| --- | --- | --- | --- | -| `include_archived` | boolean | `false` | Clone archived repos | -| `include_forks` | boolean | `false` | Clone forked repos | - -## CLI Flag Overrides - -All config options can be overridden via CLI flags: - -```bash -# Override concurrency -gisa clone ~/github --jobs 8 - -# Override sync mode -gisa sync ~/github --mode pull - -# Override filters -gisa clone ~/github --include-archived --include-forks - -# Shallow clone -gisa clone ~/github --depth 1 - -# Include submodules -gisa clone ~/github --recurse-submodules -``` - -**Precedence**: CLI flags > config file > defaults - -## Minimal Config - -For most users, a minimal config is sufficient: - -```toml -# gisa.config.toml -base_path = "~/github" -``` - -All other options use sensible defaults. 
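The "CLI flags > config file > defaults" precedence above can be sketched as a layered merge: start from defaults, overlay the config file, then apply only the flags the user actually passed. This is a minimal illustration; the `Flags` struct and its field names are hypothetical stand-ins for the real CLI types.

```rust
// Precedence sketch: defaults < config file < CLI flags.
#[derive(Debug, Clone)]
struct Config {
    concurrency: usize,
    sync_mode: String,
}

impl Default for Config {
    // Mirrors the documented defaults: concurrency = 4, sync_mode = "fetch".
    fn default() -> Self {
        Config { concurrency: 4, sync_mode: "fetch".to_string() }
    }
}

// Hypothetical parsed CLI flags; `None` means "flag not given".
#[derive(Default)]
struct Flags {
    jobs: Option<usize>,
    mode: Option<String>,
}

fn effective(mut cfg: Config, flags: &Flags) -> Config {
    // Only explicitly-passed flags override the config-file values.
    if let Some(j) = flags.jobs {
        cfg.concurrency = j;
    }
    if let Some(m) = &flags.mode {
        cfg.sync_mode = m.clone();
    }
    cfg
}

fn main() {
    // Config file set concurrency = 8; the user passed only --mode pull.
    let from_file = Config { concurrency: 8, ..Config::default() };
    let flags = Flags { jobs: None, mode: Some("pull".to_string()) };
    let cfg = effective(from_file, &flags);
    println!("{} {}", cfg.concurrency, cfg.sync_mode); // prints "8 pull"
}
```

An unset flag leaves the config-file value intact, so `gisa sync --mode pull` changes only the sync mode while `concurrency` keeps its configured value.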
- -## Config Initialization - -```bash -# Create default config file -gisa init - -# Creates gisa.config.toml with documented defaults -``` - -## Directory Structure Examples - -### Default: `{org}/{repo}` -``` -~/github/ -├── acme-corp/ -│ ├── api/ -│ └── web/ -└── octocat/ # Personal repos under username - └── dotfiles/ -``` - -### Flat: `{org}-{repo}` -``` -~/github/ -├── acme-corp-api/ -├── acme-corp-web/ -└── octocat-dotfiles/ -``` - -## Defaults Summary - -| Setting | Default | Rationale | -| --- | --- | --- | -| `base_path` | `~/github` | Common convention | -| `structure` | `{org}/{repo}` | Mirrors GitHub structure | -| `concurrency` | `4` | Balance speed and system load | -| `sync_mode` | `fetch` | Safe, doesn't modify working tree | -| `depth` | `0` | Full history by default | -| `recurse_submodules` | `false` | Submodules can be large/slow | -| `include_archived` | `false` | Archived = inactive, usually skip | -| `include_forks` | `false` | Forks clutter, clone explicitly if needed | diff --git a/Docs/Specs/Gisa-S5-architecture-overview.md b/Docs/Specs/Gisa-S5-architecture-overview.md deleted file mode 100644 index b5d2627..0000000 --- a/Docs/Specs/Gisa-S5-architecture-overview.md +++ /dev/null @@ -1,3166 +0,0 @@ -# Gisa Architecture Overview - -## Quick Start (TL;DR) - -```bash -# 1. Create project -cargo new gisa && cd gisa - -# 2. Copy Cargo.toml from Task 0.2 (dependencies section) - -# 3. Create file structure from Task 0.3: -mkdir -p src/{config,auth,discovery,clone,sync} -touch src/{lib,cli,types}.rs -touch src/config/{mod,parser}.rs -touch src/auth/{mod,gh_cli}.rs -touch src/discovery/{mod,github}.rs -touch src/clone/{mod,parallel}.rs -touch src/sync/{mod,manager}.rs - -# 4. Work through Phases 1-7 in order, copy-pasting code - -# 5. 
Test with: -cargo run -- clone ~/test --dry-run -``` - -**Scope:** ~2000 lines of Rust across 14 files -**Time estimate:** Varies by experience -**Prerequisites:** Rust installed, GitHub CLI authenticated - ---- - -## System Overview - -Gisa is a CLI tool that mirrors GitHub organization and repository structures to the local filesystem. It discovers all orgs/repos a user has access to and clones them with configurable options. - -``` -┌─────────────────────────────────────────────────────────────────┐ -│ Gisa CLI │ -├─────────────────────────────────────────────────────────────────┤ -│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ -│ │ Config │ │ Auth │ │ CLI │ │ -│ │ Manager │ │ Manager │ │ Interface │ │ -│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │ -│ │ │ │ │ -│ └────────────────┼────────────────┘ │ -│ │ │ -│ ┌─────▼─────┐ │ -│ │ Core │ │ -│ │ Engine │ │ -│ └─────┬─────┘ │ -│ │ │ -│ ┌────────────────┼────────────────┐ │ -│ │ │ │ │ -│ ┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐ │ -│ │ Discovery │ │ Clone │ │ Sync │ │ -│ │ Module │ │ Manager │ │ Manager │ │ -│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ │ -│ │ │ │ │ -└─────────┼────────────────┼────────────────┼─────────────────────┘ - │ │ │ - ▼ ▼ ▼ - ┌────────────┐ ┌────────────┐ ┌────────────┐ - │ GitHub │ │ Local │ │ Git │ - │ API │ │ Filesystem│ │ Operations│ - └────────────┘ └────────────┘ └────────────┘ -``` - -## Core Components - -### 1. CLI Interface -- Command parsing and validation -- Progress bars and output formatting -- Interactive prompts for missing config -- Dry-run mode display - -### 2. Config Manager -- Loads `gisa.config.toml` from project directory -- TOML configuration format (Rust ecosystem standard) -- Validates and merges CLI flags with config file -- Stores: base path, clone options, concurrency, sync behavior, filters - -### 3. 
Auth Manager -- **Primary**: GitHub CLI (`gh auth token`) integration -- **Fallback 1**: SSH key detection and validation -- **Fallback 2**: Personal Access Token from env/config -- Token validation before operations begin - -### 4. Discovery Module -- Fetches all orgs user belongs to via GitHub API -- Fetches all repos per org (handles pagination) -- Fetches user's personal repos -- Returns unified list with metadata (visibility, clone URLs, archived status) - -### 5. Clone Manager -- Parallel cloning with configurable concurrency (default: 4) -- SSH clone URL preferred, HTTPS fallback -- Supports clone options: `--depth`, `--branch`, `--recurse-submodules` -- Creates directory structure: `///` - -### 6. Sync Manager -- Detects existing clones -- Configurable behavior: `fetch` (safe) or `pull` (updates working tree) -- Reports conflicts/uncommitted changes without modifying -- Tracks new repos added to orgs since last sync - -## Data Flow - -``` -1. User runs: gisa sync ~/github - -2. Auth Manager - └─→ Obtains GitHub token (gh CLI → SSH → PAT) - -3. Discovery Module - └─→ GET /user/orgs → List of orgs - └─→ GET /orgs/{org}/repos → Repos per org (paginated) - └─→ GET /user/repos → Personal repos - -4. Core Engine - └─→ Compares discovered repos with local filesystem - └─→ Generates action plan: [clone: 12, sync: 45, skip: 3] - -5. Dry-run check - └─→ If --dry-run: display plan and exit - -6. Clone/Sync Manager (parallel) - └─→ Clone new repos (SSH preferred) - └─→ Sync existing repos (fetch or pull) - └─→ Report failures at end - -7. 
Output - └─→ Summary: cloned 12, synced 45, failed 2 - └─→ Failed repos listed with error reasons -``` - -## Directory Structure - -### Default Structure -``` -~/github/ # Base path (configurable) -├── my-org/ # Organization -│ ├── repo-one/ -│ ├── repo-two/ -│ └── repo-three/ -├── another-org/ -│ └── their-repo/ -└── octocat/ # User's personal repos (GitHub username) - ├── my-project/ - └── dotfiles/ -``` - -### Configurable via `gisa.config.toml` -```toml -base_path = "~/github" -structure = "{org}/{repo}" # Default -# Alternative: "{org}-{repo}" for flat structure -``` - -## Error Handling Strategy - -| Scenario | Behavior | -| --- | --- | -| Auth failure | Stop, display auth instructions | -| API rate limit | Pause, retry with backoff | -| Single repo clone fails | Log error, continue with others | -| Network timeout | Retry 3x, then skip and log | -| Repo exists with changes | Skip sync, warn user | -| Permission denied (private repo) | Skip, log (user may have lost access) | - -At completion: display summary with all failures and reasons. - -## Distribution - -| Priority | Method | Command | Target Audience | -| --- | --- | --- | --- | -| 1 | Homebrew | `brew install gisa` | macOS users (primary) | -| 2 | GitHub Releases | Download binary | All platforms, no toolchain needed | -| 3 | Cargo | `cargo install gisa` | Rust developers | - -### Homebrew (Primary) - -```bash -brew install gisa -``` - -Homebrew formula maintained in homebrew-core or custom tap. - -### GitHub Releases - -Pre-built binaries for each release: -- `gisa-x86_64-apple-darwin` (macOS Intel) -- `gisa-aarch64-apple-darwin` (macOS Apple Silicon) -- `gisa-x86_64-unknown-linux-gnu` (Linux) -- `gisa-x86_64-pc-windows-msvc.exe` (Windows) - -### Cargo (Rust developers) - -```bash -cargo install gisa -``` - -Builds from source via crates.io. Requires Rust toolchain. 
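The retry behavior from the error-handling table ("retry 3x, then skip and log"; "retry with backoff") can be sketched as a generic helper. The function name, delays, and error type here are illustrative assumptions, not part of the spec:

```rust
use std::thread::sleep;
use std::time::Duration;

// Run `op` up to `max_attempts` times, doubling the delay between tries.
// In the real tool this would wrap a git or network call.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = Duration::from_millis(100);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            // Out of attempts: give up and surface the last error.
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // 100ms, 200ms, 400ms, ...
                attempt += 1;
            }
        }
    }
}

fn main() {
    let mut calls = 0;
    // Simulated operation that fails twice, then succeeds.
    let result = retry_with_backoff(3, || {
        calls += 1;
        if calls < 3 { Err("timeout") } else { Ok("cloned") }
    });
    assert_eq!(result, Ok("cloned"));
    assert_eq!(calls, 3);
    println!("succeeded after {} calls", calls); // prints "succeeded after 3 calls"
}
```

For rate-limit 403s the same shape applies, except the wait time would come from the `X-RateLimit-Reset` header rather than a fixed doubling schedule.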
- -## CLI Command Naming - -Commands follow standard git naming conventions for familiarity: - -| Gisa Command | Git Equivalent | Description | -| --- | --- | --- | -| `gisa clone` | `git clone` | Clone all repos | -| `gisa fetch` | `git fetch` | Fetch updates (safe, no working tree changes) | -| `gisa pull` | `git pull` | Pull updates (modifies working tree) | -| `gisa status` | `git status` | Show sync status of all repos | -| `gisa init` | `git init` | Initialize config file | - -## Code Organization - -### Colocated Documentation - -Each module includes its own README for discoverability: - -``` -src/ -├── auth/ -│ ├── mod.rs -│ ├── gh_cli.rs -│ ├── gh_cli.test.rs # Colocated test -│ └── README.md # Auth module docs -├── discovery/ -│ ├── mod.rs -│ ├── github.rs -│ ├── github.test.rs # Colocated test -│ └── README.md # Discovery module docs -├── clone/ -│ ├── mod.rs -│ ├── parallel.rs -│ ├── parallel.test.rs # Colocated test -│ └── README.md # Clone module docs -└── README.md # Root src docs -``` - -### Colocated Tests - -Tests live next to the code they test using the `.test.rs` suffix: - -- `auth/gh_cli.rs` → `auth/gh_cli.test.rs` -- `discovery/github.rs` → `discovery/github.test.rs` -- `config/parser.rs` → `config/parser.test.rs` - -Benefits: -- Easy to find tests for any module -- Tests stay in sync with implementation -- Clear ownership of test coverage - -## State Management - -### V1: File-Based Cache - -No database required. 
State is managed via simple files: - -``` -~/.config/gisa/ # XDG config directory -└── gisa.cache.json # Discovery cache (auto-generated) - -~/github/ # Base path -└── gisa.config.toml # User config (project-level) -``` - -**Cache file** (`gisa.cache.json`): -```json -{ - "last_discovery": "2024-01-15T10:30:00Z", - "username": "octocat", - "orgs": ["my-org", "another-org"], - "repos": [ - { - "full_name": "my-org/repo-one", - "ssh_url": "git@github.com:my-org/repo-one.git", - "pushed_at": "2024-01-14T08:00:00Z" - } - ] -} -``` - -**Cache behavior:** -- Invalidated after 1 hour (configurable) -- Force refresh with `--refresh` flag -- Used to detect new repos without full API scan -- Stores `pushed_at` for incremental sync detection - -### V2+: SQLite (Future) - -SQLite may be added if these features become requirements: -- Sync history tracking ("what changed last week?") -- Per-repo metadata (custom tags, notes) -- Offline mode with full local state -- Query interface for repo management - -## Future Extensibility - -The architecture supports planned features: - -- **V2: Filters** — Discovery module accepts filter predicates -- **V2: Single org** — Discovery module accepts org parameter -- **V3: GitHub Enterprise** — Auth/Discovery modules accept base URL -- **V4: GitLab/Bitbucket** — Abstract Discovery/Clone behind provider interface - -``` -┌─────────────────────────────────────────┐ -│ Provider Interface │ -├─────────────────────────────────────────┤ -│ + authenticate() │ -│ + discoverOrgs() │ -│ + discoverRepos(org) │ -│ + getCloneUrl(repo, protocol) │ -└─────────────────────────────────────────┘ - ▲ ▲ ▲ - │ │ │ - ┌────┴────┐ ┌────┴────┐ ┌────┴────┐ - │ GitHub │ │ GitLab │ │Bitbucket│ - │Provider │ │Provider │ │Provider │ - └─────────┘ └─────────┘ └─────────┘ -``` - ---- - -# Implementation Plan - -This plan breaks Gisa into small, testable tasks. Each task has clear inputs, outputs, and acceptance criteria. 
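Before the task breakdown: the V4 provider abstraction in the Future Extensibility diagram above could take roughly this shape as a Rust trait. Method names are snake_case adaptations of the diagram's, and the stub bodies are placeholders, not the final interface:

```rust
// Sketch of the diagram's Provider interface as a Rust trait.
#[derive(Clone, Copy)]
enum Protocol {
    Ssh,
    Https,
}

trait Provider {
    fn authenticate(&self) -> bool;
    fn discover_orgs(&self) -> Vec<String>;
    fn discover_repos(&self, org: &str) -> Vec<String>;
    fn clone_url(&self, org: &str, repo: &str, proto: Protocol) -> String;
}

struct GitHubProvider;

impl Provider for GitHubProvider {
    fn authenticate(&self) -> bool {
        true // real impl: gh CLI → env var → config token
    }
    fn discover_orgs(&self) -> Vec<String> {
        vec!["my-org".to_string()] // real impl: GET /user/orgs
    }
    fn discover_repos(&self, _org: &str) -> Vec<String> {
        vec!["repo-one".to_string()] // real impl: GET /orgs/{org}/repos
    }
    fn clone_url(&self, org: &str, repo: &str, proto: Protocol) -> String {
        match proto {
            Protocol::Ssh => format!("git@github.com:{org}/{repo}.git"),
            Protocol::Https => format!("https://github.com/{org}/{repo}.git"),
        }
    }
}

fn main() {
    let p = GitHubProvider;
    assert!(p.authenticate());
    for org in p.discover_orgs() {
        for repo in p.discover_repos(&org) {
            // prints "git@github.com:my-org/repo-one.git"
            println!("{}", p.clone_url(&org, &repo, Protocol::Ssh));
        }
    }
}
```

A GitLab or Bitbucket provider would implement the same trait, letting the clone/sync managers stay provider-agnostic.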
- ---- - -## Phase 0: Project Setup - -### Task 0.1: Create Rust Project - -**What to do:** -```bash -cargo new gisa -cd gisa -``` - -**Files created:** -- `Cargo.toml` -- `src/main.rs` - -**Done when:** `cargo build` succeeds. - ---- - -### Task 0.2: Add Dependencies to Cargo.toml - -**Replace \****`Cargo.toml`**\*\* with:** -```toml -[package] -name = "gisa" -version = "0.1.0" -edition = "2021" -description = "Mirror GitHub org/repo structure locally" -license = "MIT" - -[dependencies] -# CLI parsing -clap = { version = "4", features = ["derive"] } - -# Async runtime -tokio = { version = "1", features = ["full"] } - -# HTTP client for GitHub API -reqwest = { version = "0.12", features = ["json"] } - -# JSON/TOML serialization -serde = { version = "1", features = ["derive"] } -serde_json = "1" -toml = "0.8" - -# Progress bars and terminal output -indicatif = "0.17" -console = "0.15" - -# XDG directories (~/.config/gisa) -directories = "5" - -# Error handling -thiserror = "1" -anyhow = "1" - -# Shell expansion (~/ paths) -shellexpand = "3" - -[dev-dependencies] -# Testing -tokio-test = "0.4" -mockito = "1" -tempfile = "3" -``` - -**Done when:** `cargo build` succeeds with all dependencies. - ---- - -### Task 0.3: Create Module Structure - -**Create these empty files:** -``` -src/ -├── main.rs # Entry point -├── lib.rs # Library root (re-exports modules) -├── cli.rs # CLI argument parsing -├── config/ -│ ├── mod.rs # Config module root -│ └── parser.rs # TOML parsing -├── auth/ -│ ├── mod.rs # Auth module root -│ └── gh_cli.rs # GitHub CLI integration -├── discovery/ -│ ├── mod.rs # Discovery module root -│ └── github.rs # GitHub API calls -├── clone/ -│ ├── mod.rs # Clone module root -│ └── parallel.rs # Parallel cloning -├── sync/ -│ ├── mod.rs # Sync module root -│ └── manager.rs # Sync logic -└── types.rs # Shared types (Repo, Org, etc.) 
-```
-
-**For each `mod.rs` file, add:**
-```rust
-// mod.rs template - replace with actual module name
-pub mod parser; // or gh_cli, github, parallel, manager
-```
-
-**For `lib.rs`:**
-```rust
-pub mod cli;
-pub mod config;
-pub mod auth;
-pub mod discovery;
-pub mod clone;
-pub mod sync;
-pub mod types;
-```
-
-**Done when:** `cargo check` passes with no errors.
-
---
-
-## Phase 1: Types and Config
-
-### Task 1.1: Define Core Types
-
-**File:** `src/types.rs`
-
-```rust
-use serde::{Deserialize, Serialize};
-
-/// A GitHub organization
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Org {
-    pub login: String,
-    pub id: u64,
-}
-
-/// A GitHub repository
-#[derive(Debug, Clone, Serialize, Deserialize)]
-pub struct Repo {
-    pub id: u64,
-    pub name: String,
-    pub full_name: String,      // "org/repo"
-    pub ssh_url: String,        // "git@github.com:org/repo.git"
-    pub clone_url: String,      // "https://github.com/org/repo.git"
-    pub default_branch: String,
-    pub private: bool,
-    pub archived: bool,
-    pub fork: bool,
-}
-
-/// Which organization or user owns a repo
-#[derive(Debug, Clone)]
-pub struct OwnedRepo {
-    pub owner: String, // Org name or username
-    pub repo: Repo,
-}
-
-/// Result of comparing discovered repos with local filesystem
-#[derive(Debug)]
-pub struct ActionPlan {
-    pub to_clone: Vec<OwnedRepo>, // New repos to clone
-    pub to_sync: Vec<OwnedRepo>,  // Existing repos to sync
-    pub skipped: Vec<OwnedRepo>,  // Repos skipped (dirty, conflicts)
-}
-
-/// Outcome of a clone or sync operation
-#[derive(Debug)]
-pub enum OpResult {
-    Success,
-    Failed(String),  // Error message
-    Skipped(String), // Reason for skipping
-}
-```
-
-**Done when:** `cargo check` passes.
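As a quick illustration of how these types compose at the end of a run, here is a sketch that tallies `OpResult` values into the summary counts printed after clone/sync. The `summarize` helper is hypothetical; only the `OpResult` shape comes from Task 1.1:

```rust
// OpResult as defined in Task 1.1.
#[derive(Debug)]
enum OpResult {
    Success,
    Failed(String),  // Error message
    Skipped(String), // Reason for skipping
}

// Hypothetical helper: count outcomes for the end-of-run summary.
fn summarize(results: &[OpResult]) -> (usize, usize, usize) {
    let (mut ok, mut failed, mut skipped) = (0, 0, 0);
    for r in results {
        match r {
            OpResult::Success => ok += 1,
            OpResult::Failed(_) => failed += 1,
            OpResult::Skipped(_) => skipped += 1,
        }
    }
    (ok, failed, skipped)
}

fn main() {
    let results = vec![
        OpResult::Success,
        OpResult::Failed("network timeout".to_string()),
        OpResult::Skipped("uncommitted changes".to_string()),
        OpResult::Success,
    ];
    let (ok, failed, skipped) = summarize(&results);
    // prints "cloned 2, failed 1, skipped 1"
    println!("cloned {ok}, failed {failed}, skipped {skipped}");
    assert_eq!((ok, failed, skipped), (2, 1, 1));
}
```

The carried `String` payloads in `Failed`/`Skipped` are what the summary would list per repo as failure reasons.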
-
----
-
-### Task 1.2: Implement Config Parser
-
-**File:** `src/config/parser.rs`
-
-```rust
-use anyhow::{Context, Result};
-use serde::Deserialize;
-use std::path::{Path, PathBuf};
-
-/// Clone-specific options
-#[derive(Debug, Clone, Deserialize, Default)]
-pub struct CloneOptions {
-    #[serde(default)]
-    pub depth: u32, // 0 = full clone
-
-    #[serde(default)]
-    pub branch: String, // Empty = default branch
-
-    #[serde(default)]
-    pub recurse_submodules: bool,
-}
-
-/// Filter options
-#[derive(Debug, Clone, Deserialize, Default)]
-pub struct FilterOptions {
-    #[serde(default)]
-    pub include_archived: bool,
-
-    #[serde(default)]
-    pub include_forks: bool,
-}
-
-/// Full configuration
-#[derive(Debug, Clone, Deserialize)]
-pub struct Config {
-    #[serde(default = "default_base_path")]
-    pub base_path: String,
-
-    #[serde(default = "default_structure")]
-    pub structure: String,
-
-    #[serde(default = "default_concurrency")]
-    pub concurrency: usize,
-
-    #[serde(default = "default_sync_mode")]
-    pub sync_mode: String,
-
-    #[serde(default)]
-    pub clone: CloneOptions,
-
-    #[serde(default)]
-    pub filters: FilterOptions,
-}
-
-fn default_base_path() -> String { "~/github".to_string() }
-fn default_structure() -> String { "{org}/{repo}".to_string() }
-fn default_concurrency() -> usize { 4 }
-fn default_sync_mode() -> String { "fetch".to_string() }
-
-impl Default for Config {
-    fn default() -> Self {
-        Config {
-            base_path: default_base_path(),
-            structure: default_structure(),
-            concurrency: default_concurrency(),
-            sync_mode: default_sync_mode(),
-            clone: CloneOptions::default(),
-            filters: FilterOptions::default(),
-        }
-    }
-}
-
-impl Config {
-    /// Load config from file, or return defaults if file doesn't exist
-    pub fn load(path: &Path) -> Result<Config> {
-        if path.exists() {
-            let content = std::fs::read_to_string(path)
-                .with_context(|| format!("Failed to read config: {}", path.display()))?;
-            let config: Config = toml::from_str(&content)
-                .with_context(|| "Failed to parse config file")?;
-            Ok(config)
-        } else {
-            Ok(Config::default())
-        }
-    }
-
-    /// Expand ~ in base_path to actual home directory
-    pub fn expanded_base_path(&self) -> Result<PathBuf> {
-        let expanded = shellexpand::tilde(&self.base_path);
-        Ok(PathBuf::from(expanded.as_ref()))
-    }
-
-    /// Generate repo path from structure pattern
-    pub fn repo_path(&self, org: &str, repo: &str) -> Result<PathBuf> {
-        let base = self.expanded_base_path()?;
-        let relative = self.structure
-            .replace("{org}", org)
-            .replace("{repo}", repo);
-        Ok(base.join(relative))
-    }
-}
-```
-
-**File:** `src/config/mod.rs`
-```rust
-pub mod parser;
-pub use parser::Config;
-```
-
-**Test manually:**
-Create a file `test.toml`:
-```toml
-base_path = "~/test"
-concurrency = 8
-```
-
-Add temporary test code to `main.rs`:
-```rust
-use std::path::Path;
-mod config;
-
-fn main() {
-    let cfg = config::Config::load(Path::new("test.toml")).unwrap();
-    println!("{:?}", cfg);
-}
-```
-
-**Done when:** Running `cargo run` prints the config with `base_path = "~/test"` and `concurrency = 8`.
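The `{org}`/`{repo}` substitution inside `repo_path` can be exercised in isolation. This standalone sketch mirrors that replace logic for both documented layouts (nested and flat):

```rust
// Standalone version of the structure-pattern expansion used by
// Config::repo_path (the `expand` helper name is illustrative).
fn expand(structure: &str, org: &str, repo: &str) -> String {
    structure
        .replace("{org}", org)
        .replace("{repo}", repo)
}

fn main() {
    // Default nested layout.
    assert_eq!(expand("{org}/{repo}", "acme-corp", "api"), "acme-corp/api");
    // Alternative flat layout.
    assert_eq!(expand("{org}-{repo}", "acme-corp", "api"), "acme-corp-api");
    println!("ok"); // prints "ok"
}
```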
-
----
-
-### Task 1.3: Write Config Tests
-
-**File:** `src/config/parser.test.rs`
-
-```rust
-use super::*;
-use tempfile::NamedTempFile;
-use std::io::Write;
-use std::path::{Path, PathBuf};
-
-#[test]
-fn test_default_config() {
-    let config = Config::default();
-    assert_eq!(config.base_path, "~/github");
-    assert_eq!(config.concurrency, 4);
-    assert_eq!(config.sync_mode, "fetch");
-    assert!(!config.filters.include_archived);
-}
-
-#[test]
-fn test_load_minimal_config() {
-    let mut file = NamedTempFile::new().unwrap();
-    writeln!(file, "base_path = \"~/custom\"").unwrap();
-
-    let config = Config::load(file.path()).unwrap();
-    assert_eq!(config.base_path, "~/custom");
-    assert_eq!(config.concurrency, 4); // Default preserved
-}
-
-#[test]
-fn test_load_full_config() {
-    let mut file = NamedTempFile::new().unwrap();
-    writeln!(file, r#"
-base_path = "~/repos"
-concurrency = 8
-sync_mode = "pull"
-
-[clone]
-depth = 1
-recurse_submodules = true
-
-[filters]
-include_archived = true
-include_forks = true
-"#).unwrap();
-
-    let config = Config::load(file.path()).unwrap();
-    assert_eq!(config.base_path, "~/repos");
-    assert_eq!(config.concurrency, 8);
-    assert_eq!(config.clone.depth, 1);
-    assert!(config.clone.recurse_submodules);
-    assert!(config.filters.include_archived);
-}
-
-#[test]
-fn test_missing_file_returns_defaults() {
-    let config = Config::load(Path::new("/nonexistent/config.toml")).unwrap();
-    assert_eq!(config.base_path, "~/github");
-}
-
-#[test]
-fn test_repo_path_generation() {
-    let config = Config {
-        base_path: "/home/user/github".to_string(),
-        structure: "{org}/{repo}".to_string(),
-        ..Config::default()
-    };
-
-    let path = config.repo_path("my-org", "my-repo").unwrap();
-    assert_eq!(path, PathBuf::from("/home/user/github/my-org/my-repo"));
-}
-```
-
-**Add to `src/config/mod.rs`** (the `#[path]` attribute is required because the `.test.rs` suffix doesn't match Rust's default module file naming):
-```rust
-#[cfg(test)]
-#[path = "parser.test.rs"]
-mod parser_test;
-```
-
-**Done when:** `cargo test config` passes all tests.

---

## Phase 2: CLI Interface

### Task 2.1: Implement CLI Parser

**File:** `src/cli.rs`

```rust
use clap::{Parser, Subcommand};
use std::path::PathBuf;

#[derive(Parser)]
#[command(name = "gisa")]
#[command(version)]
#[command(about = "Mirror GitHub org/repo structure locally")]
#[command(long_about = "Gisa discovers all GitHub organizations and repositories you have access to, then clones them to your local filesystem maintaining the org/repo directory structure.")]
pub struct Cli {
    #[command(subcommand)]
    pub command: Commands,

    /// Path to config file
    #[arg(short, long, global = true)]
    pub config: Option<PathBuf>,

    /// Increase output verbosity
    #[arg(short, long, global = true)]
    pub verbose: bool,
}

#[derive(Subcommand)]
pub enum Commands {
    /// Clone all repos from your GitHub organizations
    Clone {
        /// Base directory for cloned repos (overrides config)
        #[arg(default_value = "~/github")]
        path: String,

        /// Number of parallel clone operations
        #[arg(short, long)]
        jobs: Option<usize>,

        /// Preview what would be cloned without actually cloning
        #[arg(long)]
        dry_run: bool,

        /// Shallow clone with specified depth
        #[arg(long)]
        depth: Option<u32>,

        /// Clone submodules
        #[arg(long)]
        recurse_submodules: bool,

        /// Include archived repositories
        #[arg(long)]
        include_archived: bool,

        /// Include forked repositories
        #[arg(long)]
        include_forks: bool,

        /// Force re-discovery (ignore cache)
        #[arg(long)]
        refresh: bool,
    },

    /// Fetch updates for all cloned repos (safe, no working tree changes)
    Fetch {
        /// Base directory containing cloned repos
        #[arg(default_value = "~/github")]
        path: String,

        /// Number of parallel operations
        #[arg(short, long)]
        jobs: Option<usize>,

        /// Preview what would be fetched
        #[arg(long)]
        dry_run: bool,
    },

    /// Pull updates for all cloned repos (modifies working tree)
    Pull {
        /// Base directory containing cloned repos
        #[arg(default_value = "~/github")]
        path: String,

        /// Number of parallel operations
        #[arg(short, long)]
        jobs: Option<usize>,

        /// Preview what would be pulled
        #[arg(long)]
        dry_run: bool,
    },

    /// Show sync status of all repos
    Status {
        /// Base directory containing cloned repos
        #[arg(default_value = "~/github")]
        path: String,
    },

    /// Initialize a new gisa.config.toml file
    Init {
        /// Directory to create config in
        #[arg(default_value = ".")]
        path: String,
    },
}

/// Parse command line arguments
pub fn parse() -> Cli {
    Cli::parse()
}
```

**Update `src/main.rs`:**

```rust
mod cli;

fn main() {
    let args = cli::parse();

    match args.command {
        cli::Commands::Clone { path, dry_run, .. } => {
            println!("Would clone to: {}", path);
            if dry_run {
                println!("(dry run mode)");
            }
        }
        cli::Commands::Fetch { path, .. } => {
            println!("Would fetch in: {}", path);
        }
        cli::Commands::Pull { path, .. } => {
            println!("Would pull in: {}", path);
        }
        cli::Commands::Status { path } => {
            println!("Would show status for: {}", path);
        }
        cli::Commands::Init { path } => {
            println!("Would create config in: {}", path);
        }
    }
}
```

**Done when:** All these commands work:

```bash
cargo run -- --help
cargo run -- clone --help
cargo run -- clone ~/github --dry-run
cargo run -- fetch ~/github
cargo run -- init
```

---

## Phase 3: Authentication

### Task 3.1: Implement GitHub CLI Token Retrieval

**File:** `src/auth/gh_cli.rs`

```rust
use anyhow::{bail, Context, Result};
use std::process::Command;

/// Check if GitHub CLI is installed
pub fn is_gh_installed() -> bool {
    Command::new("gh")
        .arg("--version")
        .output()
        .map(|o| o.status.success())
        .unwrap_or(false)
}

/// Check if user is authenticated with GitHub CLI
pub fn is_gh_authenticated() -> bool {
    Command::new("gh")
        .args(["auth", "status"])
        .output()
        .map(|o| o.status.success())
        .unwrap_or(false)
}

/// Get GitHub token from gh CLI
pub fn get_token() -> Result<String> {
    let output = Command::new("gh")
        .args(["auth", "token"])
        .output()
        .context("Failed to run 'gh auth token'")?;

    if !output.status.success() {
        let stderr = String::from_utf8_lossy(&output.stderr);
        bail!("gh auth token failed: {}", stderr);
    }

    let token = String::from_utf8(output.stdout)
        .context("Invalid UTF-8 in token")?
        .trim()
        .to_string();

    if token.is_empty() {
        bail!("gh auth token returned empty token");
    }

    Ok(token)
}

/// Get the authenticated GitHub username
pub fn get_username() -> Result<String> {
    let output = Command::new("gh")
        .args(["api", "user", "--jq", ".login"])
        .output()
        .context("Failed to get username from gh")?;

    if !output.status.success() {
        let stderr = String::from_utf8_lossy(&output.stderr);
        bail!("Failed to get username: {}", stderr);
    }

    let username = String::from_utf8(output.stdout)
        .context("Invalid UTF-8 in username")?
        .trim()
        .to_string();

    Ok(username)
}
```

**File:** `src/auth/mod.rs`

```rust
pub mod gh_cli;

use anyhow::{bail, Result};
use std::env;

/// Authentication token and method used
#[derive(Debug)]
pub struct Auth {
    pub token: String,
    pub method: AuthMethod,
    pub username: String,
}

#[derive(Debug)]
pub enum AuthMethod {
    GhCli,
    EnvVar(String), // Which env var was used
}

/// Get authentication token, trying methods in priority order
pub fn get_auth() -> Result<Auth> {
    // Priority 1: GitHub CLI
    if gh_cli::is_gh_installed() && gh_cli::is_gh_authenticated() {
        let token = gh_cli::get_token()?;
        let username = gh_cli::get_username()?;
        return Ok(Auth {
            token,
            method: AuthMethod::GhCli,
            username,
        });
    }

    // Priority 2: Environment variables
    for var_name in ["GITHUB_TOKEN", "GH_TOKEN", "GISA_TOKEN"] {
        if let Ok(token) = env::var(var_name) {
            if !token.is_empty() {
                // We need to fetch the username via API since we don't have gh.
                // For now, return a placeholder - it will be filled by discovery.
                return Ok(Auth {
                    token,
                    method: AuthMethod::EnvVar(var_name.to_string()),
                    username: String::new(), // Will be fetched later
                });
            }
        }
    }

    // No auth found
    bail!(
        "No GitHub authentication found.\n\n\
         Please authenticate using one of these methods:\n\n\
         1. GitHub CLI (recommended):\n   \
            gh auth login\n\n\
         2. Environment variable:\n   \
            export GITHUB_TOKEN=ghp_xxxx\n\n\
         For more info: https://cli.github.com/manual/gh_auth_login"
    );
}
```

**Test manually:**

```rust
// Temporary test in main.rs
mod auth;

fn main() {
    match auth::get_auth() {
        Ok(auth) => println!("Authenticated as: {} via {:?}", auth.username, auth.method),
        Err(e) => eprintln!("Auth failed: {}", e),
    }
}
```

**Done when:** Running `cargo run` shows your GitHub username (if `gh` is installed and authenticated).

---

## Phase 4: GitHub API Discovery

### Task 4.1: Implement GitHub API Client

**File:** `src/discovery/github.rs`

```rust
use crate::types::{Org, OwnedRepo, Repo};
use anyhow::{bail, Context, Result};
use reqwest::header::{HeaderMap, HeaderValue, ACCEPT, AUTHORIZATION, USER_AGENT};
use reqwest::Client;
use serde::de::DeserializeOwned;

const GITHUB_API_URL: &str = "https://api.github.com";

/// GitHub API client
pub struct GitHubClient {
    client: Client,
    token: String,
}

impl GitHubClient {
    /// Create a new GitHub API client
    pub fn new(token: String) -> Result<Self> {
        let mut headers = HeaderMap::new();
        headers.insert(USER_AGENT, HeaderValue::from_static("gisa-cli"));
        headers.insert(
            ACCEPT,
            HeaderValue::from_static("application/vnd.github+json"),
        );
        headers.insert(
            "X-GitHub-Api-Version",
            HeaderValue::from_static("2022-11-28"),
        );

        let client = Client::builder()
            .default_headers(headers)
            .build()
            .context("Failed to create HTTP client")?;

        Ok(Self { client, token })
    }

    /// Make an authenticated GET request
    async fn get<T: DeserializeOwned>(&self, url: &str) -> Result<T> {
        let response = self.client
            .get(url)
            .header(AUTHORIZATION, format!("Bearer {}", self.token))
            .send()
            .await
            .context("HTTP request failed")?;

        let status = response.status();
        if !status.is_success() {
            let body = response.text().await.unwrap_or_default();
            bail!("GitHub API error ({}): {}", status, body);
        }

        response.json().await.context("Failed to parse JSON response")
    }

    /// Fetch all pages of a paginated endpoint
    async fn get_all_pages<T: DeserializeOwned>(&self, base_url: &str) -> Result<Vec<T>> {
        let mut results = Vec::new();
        let mut page = 1;

        loop {
            // Append pagination params with '&' when the URL already has a query string
            let sep = if base_url.contains('?') { '&' } else { '?' };
            let url = format!("{}{}per_page=100&page={}", base_url, sep, page);
            let items: Vec<T> = self.get(&url).await?;

            if items.is_empty() {
                break;
            }

            results.extend(items);
            page += 1;

            // Safety limit to prevent infinite loops
            if page > 100 {
                break;
            }
        }

        Ok(results)
    }

    /// Get the authenticated user's login
    pub async fn get_username(&self) -> Result<String> {
        #[derive(serde::Deserialize)]
        struct User {
            login: String,
        }

        let user: User = self.get(&format!("{}/user", GITHUB_API_URL)).await?;
        Ok(user.login)
    }

    /// Fetch all organizations the user belongs to
    pub async fn get_orgs(&self) -> Result<Vec<Org>> {
        self.get_all_pages(&format!("{}/user/orgs", GITHUB_API_URL)).await
    }

    /// Fetch all repos for an organization
    pub async fn get_org_repos(&self, org: &str) -> Result<Vec<Repo>> {
        self.get_all_pages(&format!("{}/orgs/{}/repos", GITHUB_API_URL, org)).await
    }

    /// Fetch user's personal repos (owned by them, not org repos)
    pub async fn get_user_repos(&self) -> Result<Vec<Repo>> {
        self.get_all_pages(&format!("{}/user/repos?affiliation=owner", GITHUB_API_URL)).await
    }
}

/// Discover all repos the user has access to
pub async fn discover_all(
    token: &str,
    include_archived: bool,
    include_forks: bool,
) -> Result<Vec<OwnedRepo>> {
    let client = GitHubClient::new(token.to_string())?;
    let username = client.get_username().await?;

    let mut all_repos = Vec::new();

    // Fetch orgs and their repos
    let orgs = client.get_orgs().await?;
    for org in &orgs {
        let repos = client.get_org_repos(&org.login).await?;
        for repo in repos {
            // Apply filters
            if !include_archived && repo.archived {
                continue;
            }
            if !include_forks && repo.fork {
                continue;
            }

            all_repos.push(OwnedRepo {
                owner: org.login.clone(),
                repo,
            });
        }
    }

    // Fetch personal repos
    let personal_repos = client.get_user_repos().await?;
    for repo in personal_repos {
        // Skip if already added via org
        if all_repos.iter().any(|r| r.repo.id == repo.id) {
            continue;
        }

        if !include_archived && repo.archived {
            continue;
        }
        if !include_forks && repo.fork {
            continue;
        }

        all_repos.push(OwnedRepo {
            owner: username.clone(),
            repo,
        });
    }

    Ok(all_repos)
}
```

**File:** `src/discovery/mod.rs`

```rust
pub mod github;
pub use github::{discover_all, GitHubClient};
```

**Done when:** You can call `discover_all` and get a list of repos.

---

### Task 4.2: Add Progress Reporting to Discovery

**Update `src/discovery/github.rs`** to add progress callbacks:

```rust
use indicatif::{ProgressBar, ProgressStyle};

/// Discover all repos with progress reporting
pub async fn discover_all_with_progress(
    token: &str,
    include_archived: bool,
    include_forks: bool,
) -> Result<Vec<OwnedRepo>> {
    let client = GitHubClient::new(token.to_string())?;

    // Spinner for initial fetch
    let spinner = ProgressBar::new_spinner();
    spinner.set_style(
        ProgressStyle::default_spinner()
            .template("{spinner:.green} {msg}")
            .unwrap()
    );

    spinner.set_message("Fetching GitHub username...");
    let username = client.get_username().await?;
    spinner.set_message(format!("Authenticated as {}", username));

    spinner.set_message("Fetching organizations...");
    let orgs = client.get_orgs().await?;
    spinner.finish_with_message(format!("Found {} organizations", orgs.len()));

    // Progress bar for org repos
    let mut all_repos = Vec::new();

    if !orgs.is_empty() {
        let pb = ProgressBar::new(orgs.len() as u64);
        pb.set_style(
            ProgressStyle::default_bar()
                .template("{spinner:.green} [{bar:40.cyan/blue}] {pos}/{len} orgs - {msg}")
                .unwrap()
                .progress_chars("#>-")
        );

        for org in &orgs {
            pb.set_message(org.login.clone());
            let repos = client.get_org_repos(&org.login).await?;

            for repo in repos {
                if !include_archived && repo.archived { continue; }
                if !include_forks && repo.fork { continue; }

                all_repos.push(OwnedRepo {
                    owner: org.login.clone(),
                    repo,
                });
            }
            pb.inc(1);
        }
        pb.finish_with_message("Organizations complete");
    }

    // Personal repos
    let spinner = ProgressBar::new_spinner();
    spinner.set_message("Fetching personal repositories...");

    let personal_repos = client.get_user_repos().await?;
    let mut personal_count = 0;

    for repo in personal_repos {
        if all_repos.iter().any(|r| r.repo.id == repo.id) { continue; }
        if !include_archived && repo.archived { continue; }
        if !include_forks && repo.fork { continue; }

        all_repos.push(OwnedRepo {
            owner: username.clone(),
            repo,
        });
        personal_count += 1;
    }

    spinner.finish_with_message(format!("Found {} personal repositories", personal_count));

    println!("\n✓ Discovered {} total repositories\n", all_repos.len());

    Ok(all_repos)
}
```

**Done when:** Running discovery shows progress bars.
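The pagination loop in Task 4.1 stops at the first empty page (with a hard cap of 100 pages). GitHub also advertises the next page explicitly in the `Link` response header; a small stdlib sketch of parsing it, should the loop ever need to follow server-provided cursors instead (`next_page_url` is a hypothetical helper, not part of the plan above):

```rust
/// Extract the URL tagged rel="next" from a GitHub-style Link header.
/// Returns None when there is no next page. Sketch: assumes the common
/// `<url>; rel="next"` formatting GitHub emits.
fn next_page_url(link_header: &str) -> Option<String> {
    link_header.split(',').find_map(|part| {
        let (url_part, rel_part) = part.split_once(';')?;
        if rel_part.trim() == "rel=\"next\"" {
            let url = url_part.trim().trim_start_matches('<').trim_end_matches('>');
            Some(url.to_string())
        } else {
            None
        }
    })
}

fn main() {
    let header = "<https://api.github.com/user/repos?page=2>; rel=\"next\", \
                  <https://api.github.com/user/repos?page=5>; rel=\"last\"";
    assert_eq!(
        next_page_url(header).as_deref(),
        Some("https://api.github.com/user/repos?page=2")
    );
    assert_eq!(next_page_url("<https://example.test>; rel=\"last\""), None);
    println!("ok");
}
```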

---

## Phase 5: Clone Manager

### Task 5.1: Implement Git Clone Operations

**File:** `src/clone/parallel.rs`

```rust
use crate::config::Config;
use crate::types::{OpResult, OwnedRepo};
use anyhow::Result;
use indicatif::{ProgressBar, ProgressStyle};
use std::path::Path;
use std::process::Command;
use std::sync::Mutex;

/// Clone a single repository
fn clone_repo(repo: &OwnedRepo, target_path: &Path, config: &Config) -> OpResult {
    // Check if already exists
    if target_path.exists() {
        return OpResult::Skipped("Already exists".to_string());
    }

    // Create parent directory
    if let Some(parent) = target_path.parent() {
        if let Err(e) = std::fs::create_dir_all(parent) {
            return OpResult::Failed(format!("Failed to create directory: {}", e));
        }
    }

    // Build git clone command
    let mut cmd = Command::new("git");
    cmd.args(["clone", "--progress"]);

    // Clone options
    if config.clone.depth > 0 {
        cmd.args(["--depth", &config.clone.depth.to_string()]);
    }
    if !config.clone.branch.is_empty() {
        cmd.args(["--branch", &config.clone.branch]);
    }
    if config.clone.recurse_submodules {
        cmd.arg("--recurse-submodules");
    }

    // Use SSH URL
    cmd.arg(&repo.repo.ssh_url);
    cmd.arg(target_path);

    // Run clone
    let output = match cmd.output() {
        Ok(o) => o,
        Err(e) => return OpResult::Failed(format!("Failed to run git: {}", e)),
    };

    if output.status.success() {
        OpResult::Success
    } else {
        let stderr = String::from_utf8_lossy(&output.stderr);
        OpResult::Failed(stderr.to_string())
    }
}

/// Clone multiple repositories with bounded parallelism
pub async fn clone_repos(
    repos: Vec<OwnedRepo>,
    config: &Config,
    dry_run: bool,
) -> Result<Vec<(OwnedRepo, OpResult)>> {
    let pb = ProgressBar::new(repos.len() as u64);
    pb.set_style(
        ProgressStyle::default_bar()
            .template("{spinner:.green} [{bar:40.cyan/blue}] {pos}/{len} - {msg}")
            .unwrap()
            .progress_chars("#>-")
    );

    // Resolve target paths up front so worker threads only run git
    let mut jobs = Vec::new();
    for repo in repos {
        let target_path = config.repo_path(&repo.owner, &repo.repo.name)?;
        jobs.push((repo, target_path));
    }

    if dry_run {
        let mut results = Vec::new();
        for (repo, target_path) in jobs {
            println!("  Would clone: {} -> {}", repo.repo.full_name, target_path.display());
            results.push((repo, OpResult::Skipped("Dry run".to_string())));
            pb.inc(1);
        }
        pb.finish_with_message("Complete");
        return Ok(results);
    }

    // Bounded parallelism: `concurrency` scoped worker threads drain a shared
    // job queue. Scoped threads may borrow `config` and `pb` directly, and
    // since each clone blocks on a git subprocess, plain threads are the
    // right tool here.
    let queue = Mutex::new(jobs.into_iter());
    let results = Mutex::new(Vec::new());

    std::thread::scope(|s| {
        for _ in 0..config.concurrency.max(1) {
            s.spawn(|| loop {
                let job = queue.lock().unwrap().next();
                let Some((repo, target_path)) = job else { break };

                pb.set_message(repo.repo.full_name.clone());
                let result = clone_repo(&repo, &target_path, config);
                pb.inc(1);

                results.lock().unwrap().push((repo, result));
            });
        }
    });

    pb.finish_with_message("Complete");

    Ok(results.into_inner().unwrap())
}

/// Print summary of clone results
pub fn print_summary(results: &[(OwnedRepo, OpResult)]) {
    let success = results.iter().filter(|(_, r)| matches!(r, OpResult::Success)).count();
    let skipped = results.iter().filter(|(_, r)| matches!(r, OpResult::Skipped(_))).count();
    let failed: Vec<_> = results.iter()
        .filter_map(|(repo, r)| {
            if let OpResult::Failed(msg) = r {
                Some((repo, msg))
            } else {
                None
            }
        })
        .collect();

    println!("\n=== Summary ===");
    println!("✓ Cloned:  {}", success);
    println!("○ Skipped: {}", skipped);
    println!("✗ Failed:  {}", failed.len());

    if !failed.is_empty() {
        println!("\nFailed repositories:");
        for (repo, msg) in failed {
            println!("  {} - {}", repo.repo.full_name, msg);
        }
    }
}
```

**File:** `src/clone/mod.rs`

```rust
pub mod parallel;
pub use parallel::{clone_repos, print_summary};
```

**Done when:** `cargo check` passes.
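`clone_repo` and `print_summary` lean on the `OpResult` enum from Task 1.1, which this phase never shows. A self-contained sketch of the shape these functions assume — variant names taken from the code above, though the real `types.rs` may carry more data:

```rust
/// Outcome of a single clone/sync operation (sketch of the Task 1.1 type).
#[derive(Debug)]
enum OpResult {
    Success,
    Skipped(String), // reason, e.g. "Already exists"
    Failed(String),  // error message from git
}

/// Tally results the same way print_summary does: (success, skipped, failed).
fn counts(results: &[OpResult]) -> (usize, usize, usize) {
    let success = results.iter().filter(|r| matches!(r, OpResult::Success)).count();
    let skipped = results.iter().filter(|r| matches!(r, OpResult::Skipped(_))).count();
    let failed = results.iter().filter(|r| matches!(r, OpResult::Failed(_))).count();
    (success, skipped, failed)
}

fn main() {
    let results = vec![
        OpResult::Success,
        OpResult::Skipped("Already exists".to_string()),
        OpResult::Failed("Permission denied (publickey)".to_string()),
        OpResult::Success,
    ];
    assert_eq!(counts(&results), (2, 1, 1));
    println!("{:?}", counts(&results));
}
```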

---

## Phase 6: Sync Manager

### Task 6.1: Implement Fetch/Pull Operations

**File:** `src/sync/manager.rs`

```rust
use crate::types::OpResult;
use anyhow::Result;
use indicatif::{ProgressBar, ProgressStyle};
use std::path::{Path, PathBuf};
use std::process::Command;

/// Check if a directory is a git repository
fn is_git_repo(path: &Path) -> bool {
    path.join(".git").exists()
}

/// Check if a repo has uncommitted changes
fn has_uncommitted_changes(path: &Path) -> bool {
    let output = Command::new("git")
        .args(["status", "--porcelain"])
        .current_dir(path)
        .output();

    match output {
        Ok(o) => !o.stdout.is_empty(),
        Err(_) => true, // Assume dirty if we can't check
    }
}

/// Fetch updates for a repository
fn fetch_repo(path: &Path) -> OpResult {
    let output = Command::new("git")
        .args(["fetch", "--all", "--prune"])
        .current_dir(path)
        .output();

    match output {
        Ok(o) if o.status.success() => OpResult::Success,
        Ok(o) => OpResult::Failed(String::from_utf8_lossy(&o.stderr).to_string()),
        Err(e) => OpResult::Failed(e.to_string()),
    }
}

/// Pull updates for a repository
fn pull_repo(path: &Path) -> OpResult {
    // First check for uncommitted changes
    if has_uncommitted_changes(path) {
        return OpResult::Skipped("Has uncommitted changes".to_string());
    }

    let output = Command::new("git")
        .args(["pull", "--ff-only"])
        .current_dir(path)
        .output();

    match output {
        Ok(o) if o.status.success() => OpResult::Success,
        Ok(o) => OpResult::Failed(String::from_utf8_lossy(&o.stderr).to_string()),
        Err(e) => OpResult::Failed(e.to_string()),
    }
}

/// Sync mode
pub enum SyncMode {
    Fetch,
    Pull,
}

/// Find all existing repos under the base path
pub fn find_existing_repos(base_path: &Path) -> Result<Vec<PathBuf>> {
    let mut repos = Vec::new();

    // Walk directory structure: base/org/repo
    if let Ok(orgs) = std::fs::read_dir(base_path) {
        for org_entry in orgs.flatten() {
            if !org_entry.path().is_dir() { continue; }

            if let Ok(repos_in_org) = std::fs::read_dir(org_entry.path()) {
                for repo_entry in repos_in_org.flatten() {
                    let repo_path = repo_entry.path();
                    if repo_path.is_dir() && is_git_repo(&repo_path) {
                        repos.push(repo_path);
                    }
                }
            }
        }
    }

    Ok(repos)
}

/// Sync all repos in the base path
pub fn sync_repos(
    base_path: &Path,
    mode: SyncMode,
    dry_run: bool,
) -> Result<Vec<(PathBuf, OpResult)>> {
    let repos = find_existing_repos(base_path)?;

    let pb = ProgressBar::new(repos.len() as u64);
    pb.set_style(
        ProgressStyle::default_bar()
            .template("{spinner:.green} [{bar:40.cyan/blue}] {pos}/{len} - {msg}")
            .unwrap()
            .progress_chars("#>-")
    );

    let mut results = Vec::new();

    let mode_name = match mode {
        SyncMode::Fetch => "fetch",
        SyncMode::Pull => "pull",
    };

    for repo_path in repos {
        let name = repo_path.file_name()
            .map(|n| n.to_string_lossy().to_string())
            .unwrap_or_default();

        pb.set_message(name);

        let result = if dry_run {
            println!("  Would {}: {}", mode_name, repo_path.display());
            OpResult::Skipped("Dry run".to_string())
        } else {
            match mode {
                SyncMode::Fetch => fetch_repo(&repo_path),
                SyncMode::Pull => pull_repo(&repo_path),
            }
        };

        results.push((repo_path, result));
        pb.inc(1);
    }

    pb.finish_with_message("Complete");

    Ok(results)
}

/// Print sync summary
pub fn print_sync_summary(results: &[(PathBuf, OpResult)]) {
    let success = results.iter().filter(|(_, r)| matches!(r, OpResult::Success)).count();
    let skipped = results.iter().filter(|(_, r)| matches!(r, OpResult::Skipped(_))).count();
    let failed: Vec<_> = results.iter()
        .filter_map(|(path, r)| {
            if let OpResult::Failed(msg) = r {
                Some((path, msg))
            } else {
                None
            }
        })
        .collect();

    println!("\n=== Sync Summary ===");
    println!("✓ Synced:  {}", success);
    println!("○ Skipped: {}", skipped);
    println!("✗ Failed:  {}", failed.len());

    if !failed.is_empty() {
        println!("\nFailed repositories:");
        for (path, msg) in failed {
            println!("  {} - {}", path.display(), msg);
        }
    }
}
```

**File:** `src/sync/mod.rs`

```rust
pub mod manager;
pub use manager::{find_existing_repos, print_sync_summary, sync_repos, SyncMode};
```

**Done when:** `cargo check` passes.

---

## Phase 7: Wire Everything Together

### Task 7.1: Complete Main.rs

**File:** `src/main.rs`

```rust
mod auth;
mod cli;
mod clone;
mod config;
mod discovery;
mod sync;
mod types;

use anyhow::Result;
use config::Config;
use std::path::Path;

#[tokio::main]
async fn main() -> Result<()> {
    let args = cli::parse();

    // Load config file if specified or from default location
    let config_path = args.config.unwrap_or_else(|| Path::new("gisa.config.toml").to_path_buf());
    let mut config = Config::load(&config_path)?;

    match args.command {
        cli::Commands::Clone {
            path,
            jobs,
            dry_run,
            depth,
            recurse_submodules,
            include_archived,
            include_forks,
            refresh: _,
        } => {
            // Override config with CLI args
            config.base_path = path;
            if let Some(j) = jobs {
                config.concurrency = j;
            }
            if let Some(d) = depth {
                config.clone.depth = d;
            }
            if recurse_submodules {
                config.clone.recurse_submodules = true;
            }
            if include_archived {
                config.filters.include_archived = true;
            }
            if include_forks {
                config.filters.include_forks = true;
            }

            // Authenticate
            println!("Authenticating...\n");
            let auth = auth::get_auth()?;
            println!("✓ Authenticated as {} via {:?}\n", auth.username, auth.method);

            // Discover repos
            println!("Discovering repositories...\n");
            let repos = discovery::discover_all_with_progress(
                &auth.token,
                config.filters.include_archived,
                config.filters.include_forks,
            ).await?;

            if repos.is_empty() {
                println!("No repositories found.");
                return Ok(());
            }

            // Clone
            if dry_run {
                println!("=== Dry Run ===\n");
            }
            println!("Cloning {} repositories...\n", repos.len());

            let results = clone::clone_repos(repos, &config, dry_run).await?;
            clone::print_summary(&results);
        }

        cli::Commands::Fetch { path, jobs, dry_run } => {
            config.base_path = path;
            if let Some(j) = jobs {
                config.concurrency = j;
            }

            let base_path = config.expanded_base_path()?;
            println!("Fetching repos in {}...\n", base_path.display());

            let results = sync::sync_repos(&base_path, sync::SyncMode::Fetch, dry_run)?;
            sync::print_sync_summary(&results);
        }

        cli::Commands::Pull { path, jobs, dry_run } => {
            config.base_path = path;
            if let Some(j) = jobs {
                config.concurrency = j;
            }

            let base_path = config.expanded_base_path()?;
            println!("Pulling repos in {}...\n", base_path.display());

            let results = sync::sync_repos(&base_path, sync::SyncMode::Pull, dry_run)?;
            sync::print_sync_summary(&results);
        }

        cli::Commands::Status { path } => {
            config.base_path = path;
            let base_path = config.expanded_base_path()?;

            let repos = sync::find_existing_repos(&base_path)?;
            println!("Found {} repositories in {}\n", repos.len(), base_path.display());

            for repo in repos {
                let name = repo.strip_prefix(&base_path)
                    .map(|p| p.display().to_string())
                    .unwrap_or_else(|_| repo.display().to_string());
                println!("  {}", name);
            }
        }

        cli::Commands::Init { path } => {
            let config_file = Path::new(&path).join("gisa.config.toml");

            if config_file.exists() {
                println!("Config file already exists: {}", config_file.display());
                return Ok(());
            }

            let default_config = r#"# Gisa configuration file
# See: https://github.com/user/gisa for documentation

# Base directory for all cloned repos
base_path = "~/github"

# Directory structure pattern
# {org} = organization name or GitHub username
# {repo} = repository name
structure = "{org}/{repo}"

# Number of parallel clone/sync operations
concurrency = 4

# Sync behavior: "fetch" (safe) or "pull" (updates working tree)
sync_mode = "fetch"

[clone]
# Clone depth (0 = full history)
depth = 0

# Clone submodules
recurse_submodules = false

[filters]
# Include archived repositories
include_archived = false

# Include forked repositories
include_forks = false
"#;

            std::fs::write(&config_file, default_config)?;
            println!("Created: {}", config_file.display());
        }
    }

    Ok(())
}
```

**Done when:** All commands work end-to-end:

```bash
cargo run -- init
cargo run -- clone ~/github --dry-run
cargo run -- status ~/github
```

---

## Phase 8: Testing and Polish

### Task 8.1: Add Integration Tests

**File:** `tests/integration_test.rs`

```rust
use std::process::Command;

#[test]
fn test_help_command() {
    let output = Command::new("cargo")
        .args(["run", "--", "--help"])
        .output()
        .expect("Failed to run command");

    assert!(output.status.success());
    let stdout = String::from_utf8_lossy(&output.stdout);
    assert!(stdout.contains("gisa"));
    assert!(stdout.contains("clone"));
    assert!(stdout.contains("fetch"));
}

#[test]
fn test_init_creates_config() {
    let temp_dir = tempfile::tempdir().unwrap();

    let output = Command::new("cargo")
        .args(["run", "--", "init", temp_dir.path().to_str().unwrap()])
        .output()
        .expect("Failed to run command");

    assert!(output.status.success());
    assert!(temp_dir.path().join("gisa.config.toml").exists());
}

#[test]
fn test_clone_dry_run() {
    let output = Command::new("cargo")
        .args(["run", "--", "clone", "/tmp/test", "--dry-run"])
        .output()
        .expect("Failed to run command");

    // Should fail gracefully if not authenticated
    // (the dry-run flag should be recognized)
    let stderr = String::from_utf8_lossy(&output.stderr);
    assert!(!stderr.contains("error: unrecognized"));
}
```

**Done when:** `cargo test` passes all tests.

---

### Task 8.2: Add Shell Completions

**Update `src/cli.rs`** to add completion generation:

```rust
use clap::CommandFactory;
use clap_complete::{generate, Shell};

/// Generate shell completions to stdout
pub fn generate_completions(shell: Shell) {
    let mut cmd = Cli::command();
    generate(shell, &mut cmd, "gisa", &mut std::io::stdout());
}
```

**Add a completion command to the CLI:**

```rust
#[derive(Subcommand)]
pub enum Commands {
    // ... existing commands ...

    /// Generate shell completions
    Completions {
        /// Shell to generate completions for
        #[arg(value_enum)]
        shell: Shell,
    },
}
```

**Add to Cargo.toml:**

```toml
clap_complete = "4"
```

**Done when:** `cargo run -- completions bash` outputs bash completions.

---

## Phase 9: Build and Release

### Task 9.1: Create Release Build Script

**File:** `scripts/build-release.sh`

```bash
#!/bin/bash
set -e

VERSION=${1:-"0.1.0"}
TARGETS=(
    "x86_64-apple-darwin"
    "aarch64-apple-darwin"
    "x86_64-unknown-linux-gnu"
)

mkdir -p dist

for target in "${TARGETS[@]}"; do
    echo "Building for $target..."
    cargo build --release --target "$target"

    if [[ "$target" == *"windows"* ]]; then
        cp "target/$target/release/gisa.exe" "dist/gisa-$VERSION-$target.exe"
    else
        cp "target/$target/release/gisa" "dist/gisa-$VERSION-$target"
    fi
done

echo "Release builds complete in dist/"
ls -la dist/
```

**Done when:** `./scripts/build-release.sh` creates release binaries.

---

### Task 9.2: Create Homebrew Formula

**File:** `Formula/gisa.rb` (for your tap)

```ruby
class Gisa < Formula
  desc "Mirror GitHub org/repo structure locally"
  homepage "https://github.com/yourusername/gisa"
  version "0.1.0"

  on_macos do
    if Hardware::CPU.arm?
      url "https://github.com/yourusername/gisa/releases/download/v0.1.0/gisa-0.1.0-aarch64-apple-darwin.tar.gz"
      sha256 "PLACEHOLDER_SHA256"
    else
      url "https://github.com/yourusername/gisa/releases/download/v0.1.0/gisa-0.1.0-x86_64-apple-darwin.tar.gz"
      sha256 "PLACEHOLDER_SHA256"
    end
  end

  def install
    bin.install "gisa"

    # Install shell completions
    generate_completions_from_executable(bin/"gisa", "completions")
  end

  test do
    assert_match "gisa", shell_output("#{bin}/gisa --version")
  end
end
```

**Done when:** Formula file is ready for publishing.

---

## Verification Checklist

After completing all phases, verify:

- [ ] `cargo build --release` succeeds
- [ ] `cargo test` passes all tests
- [ ] `gisa --help` shows all commands
- [ ] `gisa init` creates config file
- [ ] `gisa clone ~/test --dry-run` shows discovered repos
- [ ] `gisa clone ~/test` clones all repos (with auth)
- [ ] `gisa fetch ~/test` fetches updates
- [ ] `gisa pull ~/test` pulls updates
- [ ] `gisa status ~/test` lists repos
- [ ] `gisa completions bash` outputs completions
- [ ] Binary size is reasonable (~5MB)
- [ ] Runs without Rust toolchain installed

---

## Quick Reference: Files to Create

| File | Purpose |
| --- | --- |
| `src/main.rs` | Entry point, command dispatch |
| `src/lib.rs` | Module exports |
| `src/cli.rs` | Argument parsing (clap) |
| `src/types.rs` | Shared data structures |
| `src/config/mod.rs` | Config module |
| `src/config/parser.rs` | TOML parsing |
| `src/auth/mod.rs` | Auth module |
| `src/auth/gh_cli.rs` | GitHub CLI integration |
| `src/discovery/mod.rs` | Discovery module |
| `src/discovery/github.rs` | GitHub API client |
| `src/clone/mod.rs` | Clone module |
| `src/clone/parallel.rs` | Parallel cloning |
| `src/sync/mod.rs` | Sync module |
| `src/sync/manager.rs` | Fetch/pull logic |

---

## Dependency Quick Reference

```toml
[dependencies]
clap = { version = "4", features = ["derive"] }
clap_complete = "4"
tokio = { version = "1", features = ["full"] }
reqwest = { version = "0.12", features = ["json"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
toml = "0.8"
indicatif = "0.17"
console = "0.15"
directories = "5"
thiserror = "1"
anyhow = "1"
shellexpand = "3"

[dev-dependencies]
tokio-test = "0.4"
tempfile = "3"
```

---

## Troubleshooting Guide

### Common Errors and Solutions

#### SSH Key Issues

**Error:** `Permission denied (publickey)`

**Cause:** SSH key not configured or not added to GitHub.

**Fix:**

```bash
# Check if SSH key exists
ls -la ~/.ssh/id_ed25519.pub

# If not, create one
ssh-keygen -t ed25519 -C "your_email@example.com"

# Add to SSH agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519

# Copy public key and add to GitHub Settings > SSH Keys
cat ~/.ssh/id_ed25519.pub

# Test connection
ssh -T git@github.com
```

---

#### GitHub API Rate Limits

**Error:** `GitHub API error (403): rate limit exceeded`

**Cause:** Too many API requests. Unauthenticated: 60/hour. Authenticated: 5,000/hour.

**Fix:**

```bash
# Check current rate limit
gh api rate_limit

# Wait for reset (shown in response) or ensure authentication is working
gh auth status
```

**Prevention:** The code uses authenticated requests, which have much higher limits.

---

#### gh CLI Not Authenticated

**Error:** `No GitHub authentication found`

**Fix:**

```bash
# Interactive login (opens browser)
gh auth login

# Or with token
gh auth login --with-token < token.txt

# Verify
gh auth status
```

---

#### Clone Fails for Private Repo

**Error:** `Repository not found` or `Could not read from remote repository`

**Causes:**

1. Token lacks `repo` scope
2. SSH key not added to GitHub
3. Not a member of the organization

**Fix:**

```bash
# Check token scopes
gh auth status

# Re-authenticate with correct scopes
gh auth login --scopes repo,read:org

# For SSH issues, test connection
ssh -T git@github.com
```

---

#### Network/Timeout Errors

**Error:** `HTTP request failed` or `Connection refused`

**Causes:**

1. No internet connection
2. Firewall blocking GitHub
3. GitHub is down

**Fix:**

```bash
# Test basic connectivity
ping github.com
curl -I https://api.github.com

# Check GitHub status
open https://www.githubstatus.com
```

---

#### "Already exists" for All Repos

**Symptom:** Every repo shows "Skipped: Already exists" but directories are empty or wrong.

**Cause:** Target directory exists but isn't a git repo (likely a failed previous clone).

**Fix:**

```bash
# Remove empty/broken directories
find ~/github -type d -empty -delete

# Or manually remove and re-clone
rm -rf ~/github/org/repo
gisa clone ~/github
```

---

#### Pull Fails with "Uncommitted Changes"

**Error:** `Skipped: Has uncommitted changes`

**Cause:** Local modifications exist that would be overwritten.

**Fix:**

```bash
# Go to the repo
cd ~/github/org/repo

# Check what changed
git status

# Either commit, stash, or discard changes
git stash       # Save for later
git checkout .  # Discard all changes
```

---

#### Rust Compilation Errors

**Error:** `error[E0433]: failed to resolve: use of undeclared crate or module`

**Cause:** Missing module declaration or import.

**Fix checklist:**

1. Did you add `pub mod modulename;` to `mod.rs` or `lib.rs`?
2. Did you add `use crate::modulename;` where needed?
3. Did you run `cargo check` after creating new files?
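The E0433 checklist can be seen in a single file: inline modules follow the same declaration-plus-`use` rules as the file-backed modules in this plan. The names below are illustrative only, not part of the codebase:

```rust
// Inline stand-in for a file-backed module; a `pub mod config;` line in
// main.rs pointing at src/config/mod.rs behaves the same way.
mod config {
    pub fn default_base_path() -> &'static str {
        "~/github"
    }
}

// Without this `use` (or a fully qualified `config::` path), calling
// `default_base_path` alone fails to resolve.
use config::default_base_path;

fn main() {
    assert_eq!(default_base_path(), "~/github");
    println!("{}", default_base_path());
}
```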
- ---- - -## Code Review Checklist - -Before marking any task complete, verify: - -### Functionality -- [ ] Code compiles: `cargo check` -- [ ] Tests pass: `cargo test` -- [ ] Feature works manually (run and test) - -### Code Quality -- [ ] No `unwrap()` on user input or external data (use `?` or proper error handling) -- [ ] No hardcoded paths (use config or CLI args) -- [ ] No secrets in code (tokens come from env/gh CLI) -- [ ] Error messages are helpful (tell user what went wrong AND how to fix it) - -### Rust Specifics -- [ ] No compiler warnings: `cargo check 2>&1 | grep warning` -- [ ] No clippy warnings: `cargo clippy` -- [ ] Code is formatted: `cargo fmt` - -### Documentation -- [ ] Public functions have `///` doc comments -- [ ] Complex logic has inline comments explaining WHY -- [ ] README updated if user-facing behavior changed - ---- - -## Git Commit Guide - -### When to Commit - -Commit after completing each **task** (not each phase). Small, focused commits are easier to review and revert. 
- -### Commit Message Format - -``` -<verb>: <short summary> - -<optional body> -``` - -**Format rules:** -- Start with a present-tense verb (Add, Create, Implement, Fix, Update, Remove) -- Describe what was done, not what will be done -- Keep under 50 characters - -**Examples:** -- `Add config parser with TOML support` -- `Create module structure for auth and discovery` -- `Implement parallel clone manager` -- `Fix rate limit handling in GitHub client` -- `Update error messages with troubleshooting hints` -- `Remove deprecated sync option` - -### Commit Schedule - -| After Task | Commit Message | -| --- | --- | -| 0.1 | `Create Rust project skeleton` | -| 0.2 | `Add project dependencies` | -| 0.3 | `Create module structure` | -| 1.1 | `Add core type definitions` | -| 1.2 | `Implement config parser` | -| 1.3 | `Add config parser tests` | -| 2.1 | `Implement CLI argument parsing` | -| 3.1 | `Add GitHub CLI authentication` | -| 4.1 | `Implement GitHub API client` | -| 4.2 | `Add progress reporting to discovery` | -| 5.1 | `Implement parallel clone manager` | -| 6.1 | `Implement fetch and pull sync operations` | -| 7.1 | `Wire up main.rs with all commands` | -| 8.1 | `Add integration tests` | -| 8.2 | `Add shell completion generation` | -| 9.1 | `Add release build script` | -| 9.2 | `Add Homebrew formula` | - -### Example Workflow - -```bash -# After completing Task 1.2 -git add src/config/ -git commit -m "Implement config parser - -Support TOML config files with defaults for all options. -Handle ~ expansion in paths." - -# After completing Task 3.1 -git add src/auth/ -git commit -m "Add GitHub CLI authentication - -Try gh CLI first, fall back to GITHUB_TOKEN env var. -Provide helpful error message if no auth found." -``` - ---- - -## "I'm Stuck" Decision Tree - -``` -START: Something isn't working - │ - ├─► Does it compile? 
(`cargo check`) - │ │ - │ NO ──► Read the error message carefully - │ │ │ - │ │ ├─► "cannot find" ──► Missing `use` or `mod` statement - │ │ ├─► "expected X, found Y" ──► Type mismatch, check function signatures - │ │ └─► "borrowed value" ──► Ownership issue, try `.clone()` or `&` - │ │ - │ YES ──► Continue below - │ - ├─► Do tests pass? (`cargo test`) - │ │ - │ NO ──► Read which test failed and why - │ │ │ - │ │ └─► Compare expected vs actual output - │ │ - │ YES ──► Continue below - │ - ├─► Does it run? (`cargo run -- `) - │ │ - │ NO ──► Check the runtime error - │ │ │ - │ │ ├─► "No GitHub authentication" ──► Run `gh auth login` - │ │ ├─► "Permission denied" ──► SSH key issue (see above) - │ │ └─► "rate limit" ──► Wait or check auth - │ │ - │ YES ──► Continue below - │ - └─► Does it do the right thing? - │ - NO ──► Add debug prints - │ │ - │ └─► `println!("DEBUG: {:?}", variable);` - │ Run again and trace the values - │ - YES ──► Task complete! Commit and move on. -``` - ---- - -## Quick Commands Reference - -```bash -# Development -cargo check # Fast compile check (no binary) -cargo build # Build debug binary -cargo run -- clone # Run with arguments -cargo test # Run all tests -cargo test config # Run tests matching "config" -cargo fmt # Format code -cargo clippy # Lint code - -# Git -git status # See what changed -git diff # See changes in detail -git add -p # Stage changes interactively -git commit -m "msg" # Commit with message -git log --oneline -5 # See recent commits - -# GitHub CLI -gh auth status # Check authentication -gh auth login # Authenticate -gh api rate_limit # Check API rate limits -gh api user # Test API access - -# Debugging -RUST_BACKTRACE=1 cargo run -- clone # Show stack trace on panic -cargo run -- clone --dry-run # Preview without doing -``` - ---- - -## First-Run Experience - -When a user runs `gisa` for the first time, provide a guided setup: - -### Task: Implement First-Run Detection - -**File:** `src/main.rs` (add before command 
dispatch) - -```rust -use directories::ProjectDirs; -use std::process::Command; -use anyhow::Result; - -fn is_first_run() -> bool { - if let Some(proj_dirs) = ProjectDirs::from("", "", "gisa") { - let config_dir = proj_dirs.config_dir(); - !config_dir.join(".initialized").exists() - } else { - true - } -} - -fn mark_initialized() -> Result<()> { - if let Some(proj_dirs) = ProjectDirs::from("", "", "gisa") { - let config_dir = proj_dirs.config_dir(); - std::fs::create_dir_all(config_dir)?; - std::fs::write(config_dir.join(".initialized"), "")?; - } - Ok(()) -} - -fn run_first_time_setup() -> Result<()> { - println!("Welcome to Gisa!\n"); - println!("Gisa mirrors your GitHub organizations and repositories locally.\n"); - - // Check prerequisites - println!("Checking prerequisites...\n"); - - // 1. Check git - print!(" Git installed: "); - if Command::new("git").arg("--version").output().is_ok() { - println!("✓"); - } else { - println!("✗"); - println!("\n Please install git: https://git-scm.com/downloads"); - std::process::exit(1); - } - - // 2. Check gh CLI - print!(" GitHub CLI installed: "); - if Command::new("gh").arg("--version").output().is_ok() { - println!("✓"); - } else { - println!("✗ (optional, but recommended)"); - println!(" Install: https://cli.github.com"); - } - - // 3. Check authentication - print!(" GitHub authenticated: "); - match auth::get_auth() { - Ok(auth) => println!("✓ ({})", auth.username), - Err(_) => { - println!("✗"); - println!("\n Please authenticate:"); - println!(" gh auth login"); - println!(" Or set environment variable:"); - println!(" export GITHUB_TOKEN=ghp_xxxx"); - std::process::exit(1); - } - } - - println!("\nSetup complete! Run 'gisa clone ~/github' to get started.\n"); - - mark_initialized()?; - Ok(()) -} -``` - -**Add to main():** -```rust -// At the start of main(), before command dispatch -if is_first_run() { - run_first_time_setup()?; - return Ok(()); -} -``` - -**Done when:** First run shows welcome message and checks prerequisites. 
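The marker-file pattern above can be exercised in isolation by parameterizing over the directory — a sketch using a temp dir instead of the real config dir:

```rust
use std::fs;
use std::path::Path;

// Same first-run logic as above, with the directory passed in so the
// pattern can be demonstrated without touching ~/.config.
fn is_first_run(config_dir: &Path) -> bool {
    !config_dir.join(".initialized").exists()
}

fn mark_initialized(config_dir: &Path) -> std::io::Result<()> {
    fs::create_dir_all(config_dir)?;
    fs::write(config_dir.join(".initialized"), "")
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("gisa-first-run-demo");
    let _ = fs::remove_dir_all(&dir); // start from a clean slate
    assert!(is_first_run(&dir), "fresh dir counts as first run");
    mark_initialized(&dir)?;
    assert!(!is_first_run(&dir), "marker file persists");
    fs::remove_dir_all(&dir) // clean up
}
```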
- ---- - -## Error Handling Scenarios - -| Scenario | Detection | User Message | Recovery | -| --- | --- | --- | --- | -| Git not installed | `git --version` fails | "Git is required. Install: https://git-scm.com" | Exit with code 1 | -| gh CLI not installed | `gh --version` fails | Continue (it's optional) | Fall back to env var | -| gh CLI not authenticated | `gh auth status` fails | "Run 'gh auth login' or set GITHUB_TOKEN" | Exit with code 1 | -| Invalid token | API returns 401 | "Authentication failed. Token may be expired." | Exit with code 1 | -| Rate limit exceeded | API returns 403 + rate limit header | "Rate limit exceeded. Resets at {time}." | Exit with code 1, show reset time | -| Network unreachable | Connection timeout | "Cannot reach GitHub. Check your internet connection." | Exit with code 1 | -| SSH key missing | Clone fails with "Permission denied" | "SSH key not configured. See: {link}" | Skip repo, continue others | -| Disk full | Write fails with ENOSPC | "Disk full. Free up space and retry." | Exit with code 1 | -| Permission denied | Write fails with EACCES | "Cannot write to {path}. Check permissions." | Exit with code 1 | -| Repo already exists | Directory exists | Skip silently (or with --verbose) | Continue to next repo | -| Clone timeout | No progress for 5 minutes | "Clone timed out for {repo}. Skipping." | Skip repo, continue others | - -### Implement Error Types - -**File:** `src/errors.rs` - -```rust -use thiserror::Error; - -#[derive(Error, Debug)] -pub enum GisaError { - #[error("Git is not installed. Install from: https://git-scm.com")] - GitNotInstalled, - - #[error("GitHub authentication failed. Run 'gh auth login' or set GITHUB_TOKEN")] - NotAuthenticated, - - #[error("Authentication token is invalid or expired")] - InvalidToken, - - #[error("GitHub API rate limit exceeded. Resets at {reset_time}")] - RateLimitExceeded { reset_time: String }, - - #[error("Cannot reach GitHub. 
Check your internet connection")] - NetworkUnreachable, - - #[error("SSH key not configured for GitHub. See: https://docs.github.com/en/authentication/connecting-to-github-with-ssh")] - SshKeyMissing, - - #[error("Disk is full. Free up space in {path}")] - DiskFull { path: String }, - - #[error("Permission denied writing to {path}")] - PermissionDenied { path: String }, - - #[error("Clone timed out for {repo}")] - CloneTimeout { repo: String }, - - #[error("{0}")] - Other(String), -} -``` - ---- - -## Logging Strategy - -### Log Files Location - -``` -~/.local/share/gisa/ # Linux -~/Library/Application Support/gisa/ # macOS - -├── gisa.log # Main application log -├── clone.log # Clone operation details -└── sync.log # Sync operation details -``` - -### Log Levels - -| Level | When to Use | Example | -| --- | --- | --- | -| ERROR | Operation failed | "Failed to clone repo: permission denied" | -| WARN | Something unexpected but recoverable | "Repo skipped: has uncommitted changes" | -| INFO | Normal operation milestones | "Discovered 47 repositories" | -| DEBUG | Detailed operation info | "GET https://api.github.com/user/orgs" | -| TRACE | Very verbose (API responses, etc.) 
| Full JSON response bodies | - -### Implementation - -**Add to Cargo.toml:** -```toml -tracing = "0.1" -tracing-subscriber = { version = "0.3", features = ["env-filter"] } -tracing-appender = "0.2" -``` - -**File:** `src/logging.rs` - -```rust -use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt, EnvFilter}; -use tracing_appender::rolling::{RollingFileAppender, Rotation}; -use directories::ProjectDirs; - -pub fn init_logging(verbose: bool) { - let filter = if verbose { - EnvFilter::new("gisa=debug") - } else { - EnvFilter::new("gisa=info") - }; - - // Console output - let console_layer = tracing_subscriber::fmt::layer() - .with_target(false) - .without_time(); - - // File output (if we can get a log directory) - if let Some(proj_dirs) = ProjectDirs::from("", "", "gisa") { - let log_dir = proj_dirs.data_dir(); - std::fs::create_dir_all(log_dir).ok(); - - let file_appender = RollingFileAppender::new( - Rotation::DAILY, - log_dir, - "gisa.log", - ); - let file_layer = tracing_subscriber::fmt::layer() - .with_writer(file_appender) - .with_ansi(false); - - tracing_subscriber::registry() - .with(filter) - .with(console_layer) - .with(file_layer) - .init(); - } else { - tracing_subscriber::registry() - .with(filter) - .with(console_layer) - .init(); - } -} -``` - -**Usage:** -```rust -use tracing::{info, warn, error, debug}; - -info!("Discovered {} repositories", count); -warn!(repo = %repo.name, "Skipped: has uncommitted changes"); -error!(error = %e, "Clone failed"); -debug!(url = %url, "Fetching API endpoint"); -``` - ---- - -## Offline Behavior - -### Detection - -```rust -fn is_online() -> bool { - // Try to reach GitHub API - reqwest::blocking::Client::new() - .get("https://api.github.com") - .timeout(std::time::Duration::from_secs(5)) - .send() - .is_ok() -} -``` - -### Behavior by Command - -| Command | Offline Behavior | -| --- | --- | -| `clone` | Error: "Cannot reach GitHub. Check your connection." | -| `fetch` | Error: "Cannot reach GitHub. 
Check your connection." | -| `pull` | Error: "Cannot reach GitHub. Check your connection." | -| `status` | Works fully offline (reads local filesystem) | -| `init` | Works fully offline (creates local file) | - -### Cached Data (Future Enhancement) - -Store last-known repo list for offline reference: - -``` -~/.cache/gisa/ -└── repos.json # Cached repo list from last successful discovery -``` - ---- - -## Performance Requirements - -| Metric | Target | How to Verify | -| --- | --- | --- | -| Startup time | < 100ms | `time gisa --help` | -| Memory usage (idle) | < 20MB | `ps aux \| grep gisa` | -| Memory usage (cloning 100 repos) | < 100MB | Monitor during operation | -| Discovery (100 repos) | < 30 seconds | `time gisa clone --dry-run` | -| Clone throughput | Limited by network | Parallel clones (default: 4) | - -### Optimization Checklist - -- [ ] Use `--release` builds for production -- [ ] Stream large responses instead of loading into memory -- [ ] Limit concurrent operations to avoid overwhelming network -- [ ] Use shallow clones (--depth 1) for faster initial sync - ---- - -## Version Compatibility - -### Minimum Supported Versions - -| Dependency | Minimum Version | Check Command | -| --- | --- | --- | -| Git | 2.0.0 | `git --version` | -| GitHub CLI | 2.0.0 | `gh --version` | -| Rust (build only) | 1.70.0 | `rustc --version` | - -### Version Check Implementation - -```rust -use std::process::Command; -use anyhow::{anyhow, bail, Result}; - -fn check_git_version() -> Result<()> { - let output = Command::new("git") - .args(["--version"]) - .output()?; - - let version_str = String::from_utf8_lossy(&output.stdout); - // Parse "git version 2.39.0" -> "2.39.0" - let version = version_str - .split_whitespace() - .nth(2) - .ok_or_else(|| anyhow!("Cannot parse git version"))?; - - let parts: Vec<u32> = version - .split('.') - .filter_map(|s| s.parse().ok()) - .collect(); - - if parts.get(0).unwrap_or(&0) < &2 { - bail!("Git 2.0.0 or higher required. 
Found: {}", version); - } - - Ok(()) -} -``` - ---- - -## Data Migration - -### Export Configuration - -```bash -# Export current config -gisa config export > gisa-backup.toml - -# On new machine -gisa config import < gisa-backup.toml -``` - -### Manual Migration Steps - -1. Copy config file: -```bash - # From old machine - scp ~/.config/gisa/config.toml newmachine:~/.config/gisa/ -``` - -2. Re-authenticate on new machine: -```bash - gh auth login -``` - -3. Re-clone repositories: -```bash - gisa clone ~/github -``` - -Note: Git history is not migrated. Repos are cloned fresh from GitHub. - ---- - -## Manual Test Checklist - -Before each release, manually verify: - -### Setup & Auth -- [ ] First run shows welcome message -- [ ] `gh auth login` workflow works -- [ ] `GITHUB_TOKEN` env var works -- [ ] Clear error when no auth configured - -### Clone Command -- [ ] `gisa clone ~/test --dry-run` shows repos without cloning -- [ ] `gisa clone ~/test` clones all accessible repos -- [ ] Progress bar shows during clone -- [ ] Skips already-cloned repos -- [ ] Creates org/repo directory structure -- [ ] `--depth 1` creates shallow clones -- [ ] `--include-archived` includes archived repos -- [ ] `--include-forks` includes forked repos -- [ ] `-j 8` runs 8 parallel clones - -### Sync Commands -- [ ] `gisa fetch ~/test` fetches all repos -- [ ] `gisa pull ~/test` pulls all repos -- [ ] Pull skips repos with uncommitted changes -- [ ] Summary shows success/skipped/failed counts - -### Status Command -- [ ] `gisa status ~/test` lists all repos -- [ ] Shows repo count - -### Init Command -- [ ] `gisa init` creates config file -- [ ] Doesn't overwrite existing config -- [ ] Created config is valid TOML - -### Error Handling -- [ ] Graceful error when offline -- [ ] Graceful error when rate limited -- [ ] Graceful error when SSH key missing -- [ ] Continues after single repo failure - -### Edge Cases -- [ ] Works with 0 repositories -- [ ] Works with 500+ repositories -- [ ] 
Handles repos with special characters in names -- [ ] Handles very long organization names - ---- - -## Security Considerations - -### Token Storage - -**Never store tokens in:** -- Source code -- Git history -- Plain text config files -- Environment variables in scripts checked into git - -**Safe token sources (in order of preference):** -1. GitHub CLI (`gh auth token`) - tokens stored in system keychain -2. Environment variable set in shell profile (not in repo) -3. OS keychain/credential manager - -### HTTPS Enforcement - -All GitHub API calls must use HTTPS. The code already enforces this: - -```rust -const GITHUB_API_URL: &str = "https://api.github.com"; // Always HTTPS -``` - -For git clone operations, prefer SSH URLs over HTTPS to avoid credential prompts: -```rust -cmd.arg(&repo.repo.ssh_url); // git@github.com:org/repo.git -``` - -### Dependency Auditing - -Run security audits regularly: - -```bash -# Install cargo-audit -cargo install cargo-audit - -# Run audit -cargo audit - -# Fix vulnerabilities -cargo update # Update to patched versions -``` - -**Add to CI pipeline** (see CI/CD section below). 
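As a sketch of the first safe token source listed above (`gh auth token`, assuming `gh` is on the PATH) — the token stays in memory and is never written to disk or logged:

```rust
use std::process::Command;

// Sketch: obtain a token from the GitHub CLI's keychain-backed storage.
// Returns None if gh is missing or not authenticated.
fn gh_token() -> Option<String> {
    let out = Command::new("gh").args(["auth", "token"]).output().ok()?;
    if !out.status.success() {
        return None; // gh is present but not authenticated
    }
    let token = String::from_utf8(out.stdout).ok()?.trim().to_string();
    if token.is_empty() { None } else { Some(token) }
}

fn main() {
    match gh_token() {
        // Never print the token itself, only the outcome.
        Some(_) => println!("Authentication successful"),
        None => println!("No gh token available; falling back to GITHUB_TOKEN"),
    }
}
```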
- -### Input Validation - -Sanitize all user input that becomes file paths or shell commands: - -```rust -fn sanitize_path_component(s: &str) -> String { - s.chars() - .filter(|c| c.is_alphanumeric() || *c == '-' || *c == '_' || *c == '.') - .collect() -} - -// Usage: prevent directory traversal -let safe_org = sanitize_path_component(&org.login); -let safe_repo = sanitize_path_component(&repo.name); -let path = base_path.join(safe_org).join(safe_repo); -``` - -### Secrets in Logs - -Never log tokens or sensitive data: - -```rust -// BAD -debug!("Using token: {}", token); - -// GOOD -debug!("Using token: {}...", &token[..8]); // Only first 8 chars -// Or better: don't log tokens at all -debug!("Authentication successful"); -``` - -### Security Checklist - -- [ ] Tokens never written to disk by gisa -- [ ] Tokens never logged (even at debug level) -- [ ] All API calls use HTTPS -- [ ] User input sanitized before use in paths -- [ ] Dependencies audited with `cargo audit` -- [ ] No shell injection possible (use Command builder, not shell strings) - ---- - -## Accessibility - -### Color-Blind Friendly Output - -Don't rely solely on color to convey information. 
Use symbols too: - -```rust -// BAD: Only color distinguishes success/failure -println!("\x1b[32mCloned\x1b[0m"); // Green -println!("\x1b[31mFailed\x1b[0m"); // Red - -// GOOD: Symbol + color -println!("✓ Cloned"); // Checkmark for success -println!("✗ Failed"); // X for failure -println!("○ Skipped"); // Circle for skipped -println!("⚠ Warning"); // Warning triangle -``` - -### Respect NO_COLOR - -Honor the `NO_COLOR` environment variable (https://no-color.org): - -```rust -fn should_use_color() -> bool { - // Respect NO_COLOR standard - if std::env::var("NO_COLOR").is_ok() { - return false; - } - // Also check if stdout is a terminal - atty::is(atty::Stream::Stdout) -} -``` - -**Add to Cargo.toml:** -```toml -atty = "0.2" -``` - -### Progress Bar Accessibility - -Indicatif already handles terminal detection, but ensure fallback works: - -```rust -let pb = if atty::is(atty::Stream::Stdout) { - ProgressBar::new(total) -} else { - // No terminal: use simple line output instead - ProgressBar::hidden() -}; -``` - -### Screen Reader Friendly Output - -- Use clear, complete sentences for important messages -- Avoid ASCII art that doesn't make sense when read aloud -- Put the most important information first - -```rust -// BAD: Relies on visual layout -println!("my-org/my-repo .............. ✓"); - -// GOOD: Clear sentence -println!("✓ Cloned my-org/my-repo"); -``` - -### High Contrast Mode - -When using colors, ensure sufficient contrast. 
Use bold for emphasis: - -```rust -use console::Style; - -let success = Style::new().green().bold(); -let error = Style::new().red().bold(); -let warning = Style::new().yellow().bold(); - -println!("{}", success.apply_to("✓ Clone complete")); -``` - -### Accessibility Checklist - -- [ ] All status indicators use symbols, not just colors -- [ ] `NO_COLOR` environment variable is respected -- [ ] Output works without a terminal (piped to file) -- [ ] Important messages are complete sentences -- [ ] No critical info conveyed only through visual layout - ---- - -## CI/CD Pipeline - -### GitHub Actions Workflow - -**File:** `.github/workflows/ci.yml` - -```yaml -name: CI - -on: - push: - branches: [main] - pull_request: - branches: [main] - -env: - CARGO_TERM_COLOR: always - -jobs: - test: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - - name: Install Rust - uses: dtolnay/rust-toolchain@stable - - - name: Cache cargo - uses: actions/cache@v4 - with: - path: | - ~/.cargo/bin/ - ~/.cargo/registry/index/ - ~/.cargo/registry/cache/ - ~/.cargo/git/db/ - target/ - key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }} - - - name: Check formatting - run: cargo fmt --all -- --check - - - name: Clippy - run: cargo clippy -- -D warnings - - - name: Build - run: cargo build --verbose - - - name: Run tests - run: cargo test --verbose - - security: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - - name: Install Rust - uses: dtolnay/rust-toolchain@stable - - - name: Install cargo-audit - run: cargo install cargo-audit - - - name: Security audit - run: cargo audit - - build-binaries: - needs: [test, security] - if: github.ref == 'refs/heads/main' - strategy: - matrix: - include: - - os: macos-latest - target: x86_64-apple-darwin - - os: macos-latest - target: aarch64-apple-darwin - - os: ubuntu-latest - target: x86_64-unknown-linux-gnu - - runs-on: ${{ matrix.os }} - steps: - - uses: actions/checkout@v4 - - - name: Install Rust - uses: 
dtolnay/rust-toolchain@stable - with: - targets: ${{ matrix.target }} - - - name: Build release binary - run: cargo build --release --target ${{ matrix.target }} - - - name: Upload artifact - uses: actions/upload-artifact@v4 - with: - name: gisa-${{ matrix.target }} - path: target/${{ matrix.target }}/release/gisa -``` - -### Release Workflow - -**File:** `.github/workflows/release.yml` - -```yaml -name: Release - -on: - push: - tags: - - 'v*' - -permissions: - contents: write - -jobs: - create-release: - runs-on: ubuntu-latest - outputs: - upload_url: ${{ steps.create_release.outputs.upload_url }} - steps: - - name: Create Release - id: create_release - uses: actions/create-release@v1 - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - with: - tag_name: ${{ github.ref_name }} - release_name: ${{ github.ref_name }} - draft: false - prerelease: false - - build-and-upload: - needs: create-release - strategy: - matrix: - include: - - os: macos-latest - target: x86_64-apple-darwin - asset_name: gisa-x86_64-apple-darwin - - os: macos-latest - target: aarch64-apple-darwin - asset_name: gisa-aarch64-apple-darwin - - os: ubuntu-latest - target: x86_64-unknown-linux-gnu - asset_name: gisa-x86_64-unknown-linux-gnu - - runs-on: ${{ matrix.os }} - steps: - - uses: actions/checkout@v4 - - - name: Install Rust - uses: dtolnay/rust-toolchain@stable - with: - targets: ${{ matrix.target }} - - - name: Build release binary - run: cargo build --release --target ${{ matrix.target }} - - - name: Create tarball - run: | - cd target/${{ matrix.target }}/release - tar -czvf ${{ matrix.asset_name }}.tar.gz gisa - - - name: Upload Release Asset - uses: actions/upload-release-asset@v1 - env: - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} - with: - upload_url: ${{ needs.create-release.outputs.upload_url }} - asset_path: target/${{ matrix.target }}/release/${{ matrix.asset_name }}.tar.gz - asset_name: ${{ matrix.asset_name }}.tar.gz - asset_content_type: application/gzip -``` - -### Creating a Release 
- -```bash -# 1. Update version in Cargo.toml -# 2. Commit the change -git add Cargo.toml -git commit -m "Bump version to 0.2.0" - -# 3. Create and push tag -git tag v0.2.0 -git push origin v0.2.0 - -# 4. GitHub Actions will automatically: -# - Run tests -# - Build binaries for all platforms -# - Create GitHub release with binaries attached -``` - -### CI/CD Checklist - -- [ ] `.github/workflows/ci.yml` created -- [ ] `.github/workflows/release.yml` created -- [ ] CI runs on every push and PR -- [ ] Tests must pass before merge -- [ ] Security audit runs in CI -- [ ] Release creates binaries for macOS (Intel + ARM) and Linux -- [ ] Binaries are attached to GitHub releases diff --git a/conductor.json b/conductor.json new file mode 100644 index 0000000..98c0a1f --- /dev/null +++ b/conductor.json @@ -0,0 +1,24 @@ +{ + "name": "Git-Same", + "description": "Mirror GitHub org/repo structure locally - supports multiple providers", + "scripts": { + "setup": "./toolkit/Conductor/setup.sh", + "run": "./toolkit/Conductor/run.sh", + "archive": "./toolkit/Conductor/archive.sh" + }, + "stack": { + "language": "Rust", + "cli": "Clap v4", + "async": "Tokio", + "http": "Reqwest" + }, + "commands": { + "build": "cargo build --release", + "test": "cargo test", + "run": "gisa", + "init": "gisa init", + "sync": "gisa sync", + "status": "gisa status", + "sync-pull": "gisa sync --pull" + } +} diff --git a/docs/README.md b/docs/README.md new file mode 100644 index 0000000..3611171 --- /dev/null +++ b/docs/README.md @@ -0,0 +1,384 @@ +# Git-Same + +Mirror GitHub structure /orgs/repos/ to local file system + +[![Crates.io](https://img.shields.io/crates/v/git-same.svg)](https://crates.io/crates/git-same) +[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) +[![Build Status](https://github.com/zaai-com/git-same/actions/workflows/S1-Test-CI.yml/badge.svg)](https://github.com/zaai-com/git-same/actions/workflows/S1-Test-CI.yml) + +## 
Features + +- **Interactive TUI**: Full terminal UI with dashboard, workspace management, and live progress +- **Multi-Provider Support**: Works with GitHub and GitHub Enterprise (GitLab and Bitbucket planned) +- **Parallel Operations**: Clones and syncs repositories concurrently +- **Smart Filtering**: Filter by archived status, forks, organizations +- **Incremental Sync**: Only fetches/pulls what has changed +- **Progress Reporting**: Beautiful progress bars and status updates +- **Multiple Aliases**: Install once, use with your preferred command name + +## Installation + +### From crates.io + +```bash +cargo install git-same +``` + +### GitHub Releases + +Download pre-built binaries from [GitHub Releases](https://github.com/zaai-com/git-same/releases) for Linux (x86_64, ARM64), macOS (x86_64, Apple Silicon), and Windows (x86_64, ARM64). + +### From source + +```bash +git clone https://github.com/zaai-com/git-same +cd git-same +cargo install --path . +``` + +### Homebrew + +```bash +brew install zaai-com/tap/git-same +``` + +## Available Commands + +The tool can be invoked using any of these names (all installed by default): + +- `git-same` - Main command +- `gitsame` - No hyphen variant +- `gitsa` - Short form +- `gisa` - Shortest variant +- `git same` - Git subcommand (requires git-same in PATH) + +## Quick Start + +### Interactive (TUI) + +Run `gisa` with no arguments to launch the interactive terminal UI: + +```bash +gisa +``` + +The TUI provides a dashboard with workspace management, sync operations, repo status, and settings — all accessible via keyboard shortcuts. + +### CLI + +```bash +# 1. Initialize configuration +gisa init + +# 2. Set up a workspace (interactive wizard) +gisa setup + +# 3. Sync repositories (discover, clone new, fetch/pull existing) +gisa sync + +# 4. 
Check repository status -gisa status -gisa status --uncommitted -gisa status --behind -``` - -## Authentication - -Git-Same uses GitHub CLI (`gh`) for authentication: - -```bash -# Install GitHub CLI -brew install gh # macOS -# or: sudo apt install gh # Ubuntu - -# Authenticate -gh auth login - -# Git-Same will now use your gh credentials -gisa sync -``` - -## Configuration - -Edit `~/.config/git-same/config.toml` to customize behavior: - -```toml -# Directory structure: {org}/{repo} or {provider}/{org}/{repo} -structure = "{org}/{repo}" - -# Number of concurrent clone/sync operations -concurrency = 4 - -# Default sync mode: fetch or pull -sync_mode = "fetch" - -[clone] -# Clone depth (0 = full history) -depth = 0 - -# Default branch to clone (empty = provider's default) -branch = "" - -# Recursively clone submodules -recurse_submodules = false - -[filters] -# Include archived repositories -include_archived = false - -# Include forked repositories -include_forks = false - -# Filter by organizations (empty = all) -orgs = [] - -# Default provider (GitHub.com) -[[providers]] -kind = "github" -auth = "gh-cli" -prefer_ssh = true -enabled = true -``` - -`base_path` is workspace-specific (`WorkspaceConfig.base_path`) and is set during -`gisa setup` (or via workspace config files), not in the global `Config`. - -### Multi-Provider Setup - -```toml -# GitHub.com -[[providers]] -kind = "github" -auth = "gh-cli" -prefer_ssh = true -enabled = true - -# GitHub Enterprise -[[providers]] -kind = "github-enterprise" -name = "Work GitHub" -api_url = "https://github.company.com/api/v3" -auth = "gh-cli" -prefer_ssh = true -enabled = true -base_path = "~/work/code" -``` - -Authenticate GitHub Enterprise once with: - -```bash -gh auth login --hostname github.company.com -``` - -## Commands - -### `init` - -Initialize git-same configuration: - -```bash -gisa init [-p <PATH>] [-f | --force] -``` - -Creates a config file at `~/.config/git-same/config.toml` with sensible defaults. 
+ -### `setup` - -Configure a workspace (interactive wizard): - -```bash -gisa setup [--name <NAME>] -``` - -Walks through provider selection, authentication, org filters, and base path. - -### `sync` - -Sync repositories — discover, clone new, fetch/pull existing: - -```bash -gisa sync [OPTIONS] - -Options: - -w, --workspace <NAME> Workspace to sync - --pull Use pull instead of fetch for existing repos - -n, --dry-run Show what would be done - -c, --concurrency <N> Number of parallel operations (1-32) - --refresh Force re-discovery (ignore cache) - --no-skip-uncommitted Don't skip repos with uncommitted changes -``` - -### `status` - -Show status of local repositories: - -```bash -gisa status [OPTIONS] - -Options: - -w, --workspace <NAME> Workspace to check - -o, --org <ORG>... Filter by organization (repeatable) - -d, --uncommitted Show only repositories with uncommitted changes - -b, --behind Show only repositories behind upstream - --detailed Show detailed status information -``` - -### `workspace` - -Manage workspaces: - -```bash -gisa workspace list # List configured workspaces -gisa workspace default [NAME] # Set default workspace -gisa workspace default --clear # Clear default workspace -``` - -### `reset` - -Remove all config, workspaces, and cache: - -```bash -gisa reset [-f | --force] -``` - -## TUI Mode - -Running `gisa` without a subcommand launches the interactive terminal UI. 
+ +### Screens + +| Screen | Purpose | Key bindings | +|--------|---------|-------------| +| **Dashboard** | Overview with stats, quick actions | `s`: Sync, `t`: Status, `w`: Workspaces, `?`: Settings | +| **Workspace Selector** | Pick active workspace | `[←] [↑] [↓] [→]`: Move, `Enter`: Select, `d`: Set default, `n`: New | +| **Init Check** | System requirements check | `Enter`: Check, `c`: Create config, `s`: Setup | +| **Setup Wizard** | Interactive workspace configuration | Step-by-step prompts | +| **Command Picker** | Choose operation to run | `Enter`: Run | +| **Progress** | Live sync progress with per-repo updates | `Esc`: Back when complete | +| **Repo Status** | Table of local repos with git status | `[←] [↑] [↓] [→]`: Move, `/`: Filter, `D`: Uncommitted, `B`: Behind, `r`: Refresh | +| **Org Browser** | Browse discovered repos by organization | `[←] [↑] [↓] [→]`: Move | +| **Settings** | View workspace settings | `Esc`: Back | + +## Examples + +### Sync all repositories in default workspace + +```bash +gisa sync +``` + +### Sync with pull mode for a specific workspace + +```bash +gisa sync --workspace work --pull +``` + +### Check which repositories have uncommitted changes + +```bash +gisa status --uncommitted +``` + +### Dry run to see what would be synced + +```bash +gisa sync --dry-run +``` + +## Development + +### Building from source + +```bash +git clone https://github.com/zaai-com/git-same +cd git-same + +# Development build +cargo build + +# Release build (optimized, stripped, with LTO) +cargo build --release +``` + +Binaries are output to `target/release/` (or `target/debug/`): `git-same`, `gitsame`, `gitsa`, `gisa`. 
+ +### Running tests + +```bash +# Run all tests +cargo test + +# Run with all features enabled +cargo test --all-features + +# Run tests that require GitHub authentication +cargo test -- --ignored + +# Run with verbose output +cargo test -- --nocapture +``` + +### Test file organization + +Unit tests use colocated test files — each `foo.rs` has a companion `foo_tests.rs` in the same directory, linked via `#[path]` attribute. Integration tests live in `tests/`. + +### Linting and formatting + +```bash +# Lint +cargo clippy --all-targets --all-features -- -D warnings + +# Check formatting +cargo fmt --all -- --check +``` + +### Installing locally + +```bash +# Install from source to ~/.cargo/bin/ +cargo install --path . +``` + +This installs all 4 binary aliases (`git-same`, `gitsame`, `gitsa`, `gisa`). Make sure `~/.cargo/bin` is in your `$PATH`. + +### Rebuilding + +```bash +# Incremental rebuild +cargo build --release + +# Clean rebuild +cargo clean && cargo build --release +``` + +### Uninstalling + +```bash +# Remove binaries +cargo uninstall git-same + +# Remove config and cache +rm -rf ~/.config/git-same/ +rm -rf ~/.cache/git-same/ +``` + +## License + +MIT License - see [LICENSE](../LICENSE) for details + +## Contributing + +Contributions welcome! Please open an issue or PR on [GitHub](https://github.com/zaai-com/git-same). + +## Roadmap + +- [x] GitHub support +- [x] Parallel cloning +- [x] Smart filtering +- [x] Progress bars +- [x] Interactive TUI mode +- [x] Workspace management +- [ ] GitLab support +- [ ] Bitbucket support +- [ ] Repo groups +- [ ] Web dashboard diff --git a/docs/Sync-Screen.md b/docs/Sync-Screen.md new file mode 100644 index 0000000..8acd86f --- /dev/null +++ b/docs/Sync-Screen.md @@ -0,0 +1,428 @@ +# Sync Screen Reference + +Validated against commit `9e7db03` on 2026-02-26. + +This is the implementation-level reference for the TUI Sync experience. 
+ +## What This Covers + +- state machine and transitions (`Idle`, `Discovering`, `Running`, `Finished`) +- backend message contract and message ordering +- popup layout and per-line meaning +- keymap by state +- config and runtime precedence rules +- persistence side effects +- troubleshooting and known limitations +- test coverage map and testing gaps + +## Source of Truth + +Primary files: + +- `src/tui/app.rs` +- `src/tui/event.rs` +- `src/tui/backend.rs` +- `src/tui/handler.rs` +- `src/tui/screens/sync.rs` +- `src/tui/screens/dashboard.rs` +- `src/tui/screens/settings.rs` +- `src/workflows/sync_workspace.rs` + +## Quick Mental Model + +- Sync work starts from Dashboard via `s`. +- Work runs in the background; Dashboard stays visible. +- Sync popup is opened/closed with `p`. +- Popup can show `Idle`, `Discovering`, `Running`, `Finished`. +- Backend emits typed messages; `handler.rs` reduces those into app state. + +## Entry Points and Visibility + +### Start sync + +- Dashboard `s` calls `start_sync_operation()`. +- This sets `operation_state = Discovering { operation: Sync, message: "Starting Sync..." }` and spawns backend sync. +- Dashboard remains current screen. + +### Show popup + +- Dashboard `p` calls `show_sync_progress()`. +- Screen switches to `Screen::Sync` and current screen is pushed on `screen_stack`. + +### Hide popup + +- Sync `p` calls `hide_sync_progress()`. +- Returns to previous screen (typically Dashboard) without resetting sync state. + +## State Machine + +`OperationState` variants in `src/tui/app.rs`: + +- `Idle` +- `Discovering { operation, message }` +- `Running { operation, total, completed, ... }` +- `Finished { operation, summary, ... 
}` + +### Transition Matrix + +| From | Trigger | To | Side effects | +|---|---|---|---| +| `Idle` | Dashboard/Sync key `s` -> `start_sync_operation()` | `Discovering(Sync)` | reset tick count, clear running log, spawn backend | +| `Discovering(Sync)` | `BackendMessage::OperationStarted` | `Running(Sync)` | clear structured log/filter selection, reset run counters | +| `Discovering(Sync)` | `BackendMessage::DiscoveryError` | `Idle` | set `error_message` | +| `Discovering(Sync)` | `BackendMessage::OperationError` | `Idle` | set `error_message` | +| `Discovering(Sync)` | `BackendMessage::OperationComplete` (empty repo path) | `Finished(Sync)` | default filter set, completion side effects | +| `Running(Sync)` | `BackendMessage::RepoStarted` | `Running(Sync)` | add repo to `active_repos` | +| `Running(Sync)` | `BackendMessage::RepoProgress` | `Running(Sync)` | increment counters, append structured + legacy log lines | +| `Running(Sync)` | `BackendMessage::OperationComplete` | `Finished(Sync)` | compute duration/metrics, persist timestamps/history, trigger status scan | +| `Running(Sync)` | `BackendMessage::OperationError` | `Idle` | set `error_message` | +| `Finished(Sync)` | key `s` | `Discovering(Sync)` | starts new run | +| any | key `p` (Dashboard/Sync) | same operation state | only screen visibility changes | +| any | key `q` | app exits | global immediate quit (`should_quit = true`) | + +Notes: + +- Starting a new operation while any operation is `Discovering` or `Running` is blocked with `error_message`. +- `Esc` in Sync first collapses expanded commit detail, then navigates back. 
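
The transition matrix above can be condensed into a small reducer sketch. This is an illustrative model, not the actual code: the real `OperationState` in `src/tui/app.rs` and the real `BackendMessage` carry more fields, and the real reducer lives in `handle_backend_message`.

```rust
// Condensed model of the Sync transition matrix. Variant payloads are
// trimmed down; the real types in src/tui/app.rs carry more data.
#[derive(Debug, Clone, PartialEq)]
enum OperationState {
    Idle,
    Discovering,
    Running { total: usize, completed: usize },
    Finished { success: usize, failed: usize },
}

#[derive(Debug)]
enum BackendMessage {
    OperationStarted { total: usize },
    RepoProgress { success: bool },
    OperationComplete { success: usize, failed: usize },
    OperationError(String),
}

// Reducer in the spirit of handle_backend_message: fold one message into state.
fn reduce(state: OperationState, msg: BackendMessage) -> OperationState {
    use BackendMessage::*;
    use OperationState::*;
    match (state, msg) {
        (Discovering, OperationStarted { total }) => Running { total, completed: 0 },
        (Running { total, completed }, RepoProgress { .. }) => Running {
            total,
            completed: completed + 1,
        },
        // OperationComplete finishes a run from either phase: the
        // empty-discovery path skips OperationStarted entirely.
        (Discovering | Running { .. }, OperationComplete { success, failed }) => {
            Finished { success, failed }
        }
        // Any error drops back to Idle; error_message is surfaced separately.
        (_, OperationError(_)) => Idle,
        // Messages that don't apply in the current state are ignored.
        (s, _) => s,
    }
}

fn main() {
    let mut s = OperationState::Discovering;
    s = reduce(s, BackendMessage::OperationStarted { total: 2 });
    s = reduce(s, BackendMessage::RepoProgress { success: true });
    s = reduce(s, BackendMessage::RepoProgress { success: true });
    s = reduce(s, BackendMessage::OperationComplete { success: 2, failed: 0 });
    assert_eq!(s, OperationState::Finished { success: 2, failed: 0 });
    // An error from any state returns to Idle.
    let e = reduce(
        OperationState::Running { total: 1, completed: 0 },
        BackendMessage::OperationError("boom".into()),
    );
    assert_eq!(e, OperationState::Idle);
}
```

The catch-all arm is what makes the "start is blocked while Discovering/Running" and "popup toggling never changes operation state" rows cheap to enforce: visibility is screen state, not operation state.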
+ +## Backend Message Contract + +### Message table + +| Message | Producer | Consumed in | Effect | +|---|---|---|---| +| `OrgsDiscovered(count)` | `TuiDiscoveryProgress::on_orgs_discovered` | `handle_backend_message` | sets discovering message | +| `OrgStarted(name)` | `TuiDiscoveryProgress::on_org_started` | `handle_backend_message` | sets discovering message | +| `OrgComplete(name, count)` | `TuiDiscoveryProgress::on_org_complete` | `handle_backend_message` | appends `[ok] org (N repos)` line | +| `DiscoveryComplete(repos)` | `run_sync_operation` | `handle_backend_message` | populates `orgs`, `repos_by_org`, `all_repos` | +| `DiscoveryError(msg)` | `TuiDiscoveryProgress::on_error` | `handle_backend_message` | move to `Idle` + error | +| `OperationStarted { total, to_clone, to_sync }` | `run_sync_operation` | `handle_backend_message` | move to `Running`, reset per-run UI state | +| `RepoStarted { repo_name }` | clone/sync progress adapters | `handle_backend_message` | push active worker repo | +| `RepoProgress { ... }` | clone/sync progress adapters | `handle_backend_message` | update counters + log entries | +| `OperationComplete(summary)` | `run_sync_operation` | `handle_backend_message` | move to `Finished`, persist metadata/history | +| `OperationError(msg)` | `run_sync_operation` / `run_status_scan` | `handle_backend_message` | move to `Idle` + error | +| `RepoCommitLog { repo_name, commits }` | `spawn_commit_fetch` / `spawn_changelog_fetch` | `handle_backend_message` | update expanded repo commits or changelog aggregation | +| `StatusResults(entries)` | `run_status_scan` | `handle_backend_message` | refresh dashboard repo table | + +### Ordering guarantees + +Guaranteed ordering: + +- `DiscoveryComplete` always occurs before `OperationStarted`. +- `OperationStarted` occurs before any run-phase `RepoProgress`. +- `OperationComplete` is emitted once per sync run. 
+
+Not guaranteed ordering:
+
+- `RepoStarted`/`RepoProgress` across repos are interleaved due to concurrency.
+- `RepoCommitLog` messages for changelog mode arrive in completion order, not repo order.
+
+## Event-by-Event Sequence
+
+### Compact text sequence
+
+```text
+User presses [s] on Dashboard
+  -> local state set to Discovering(Sync)
+  -> backend sync task spawned
+
+Backend discovery
+  -> OrgsDiscovered?
+  -> OrgStarted/OrgComplete* (0..N)
+  -> DiscoveryComplete(repos)
+
+If repos.empty
+  -> OperationComplete(empty summary)
+  -> Finished
+Else
+  -> OperationStarted(total, to_clone, to_sync)
+  -> RepoStarted/RepoProgress* (interleaved)
+  -> OperationComplete(combined summary)
+  -> Finished
+
+On OperationComplete in handler
+  -> update last_synced in workspace
+  -> append/persist sync history
+  -> default log filter (Updated or All)
+  -> spawn status scan
+  -> StatusResults
+```
+
+### Mermaid sequence diagram
+
+```mermaid
+sequenceDiagram
+    participant U as User
+    participant D as Dashboard/Sync UI
+    participant B as TUI Backend
+    participant H as Handler Reducer
+
+    U->>D: press s
+    D->>H: set Discovering(Sync)
+    D->>B: spawn_operation(Sync)
+
+    B-->>H: OrgsDiscovered?
+    B-->>H: OrgStarted/OrgComplete*
+    B-->>H: DiscoveryComplete(repos)
+
+    alt repos empty
+        B-->>H: OperationComplete(empty)
+    else repos exist
+        B-->>H: OperationStarted(total,to_clone,to_sync)
+        loop per repo (interleaved)
+            B-->>H: RepoStarted(repo)
+            B-->>H: RepoProgress(...)
+        end
+        B-->>H: OperationComplete(summary)
+    end
+
+    H->>B: spawn_operation(Status)
+    B-->>H: StatusResults(entries)
+```
+
+## Popup UI Anatomy
+
+The Sync screen is a centered popup (`80% x 80%`) with dimmed background.
+
+Top-to-bottom rows:
+
+1. Banner
+2. Title line
+3. Main progress gauge
+4. Counters/summary line
+5. Throughput/performance line
+6. Phase/filter line
+7. Worker/status line
+8. Main log panel
+9. 
Bottom actions + navigation hints + +### Title meanings + +- `Idle` -> `Sync Progress` +- `Discovering` or `Running` -> `Sync Running` +- `Finished` -> `Sync Completed` + +### Progress gauge labels + +- `Idle` -> `Press [s] to start sync` +- `Discovering` -> `Discovering repositories...` +- `Running` -> `completed/total (pct%)` +- `Finished` -> `Done` + +### Line-by-line semantics by state + +#### Idle + +- counters line: `No sync activity yet.` +- throughput line: `Press [p] to hide, [s] to start.` +- worker line: `Use [p] to close this popup.` +- log panel: `No sync activity yet. Press [s] to start sync.` + +#### Discovering + +- counters line: `Discovering: ` +- throughput line: `Building sync plan...` +- worker line: `Waiting for workers...` +- log panel initially `Discovering repositories...` + +#### Running + +- counters line: Updated / Current / Cloned / optional Failed / optional Skipped / current repo +- throughput line: elapsed, repos/sec, optional ETA, optional sparkline +- phase line: clone bar and sync bar +- worker line: active repo slots or `Workers idle` +- log panel: color-coded lines from `app.log_lines` + +#### Finished + +- counters line: Updated / Current / Cloned / Failed / Skipped +- throughput line becomes performance line (repos, duration, repos/sec, optional commit and clone totals) +- phase line becomes filter status (active filter, entry count, left/right filter hint) +- worker line becomes navigation helper (`[Up]/[Down]`, `[Enter]`) and optional new-commit count +- log panel is filterable with optional inline expanded commit details + +## Keymap + +### Global keys (all screens) + +- `q` -> immediate quit +- `Ctrl+C` -> immediate quit +- `Esc` -> navigate back (Sync special behavior: collapse expanded row first) + +### Dashboard keys relevant to Sync + +- `s` -> start Sync in background +- `p` -> open Sync popup + +### Sync popup keys + +Always available: + +- `s` -> start Sync +- `p` -> hide popup + +When `Discovering` or `Running`: + 
+- `Up` / `Down` -> scroll running log +- `Left` / `Right` -> adjust running log scroll offset + +When `Finished`: + +- `Up` / `Down` -> move selected row (or scroll changelog view) +- `Left` / `Right` -> cycle filters (`All -> Updated -> Failed -> Skipped -> Changelog`) +- `Enter` -> expand/collapse selected repo commit details +- `a` -> All +- `u` -> Updated +- `f` -> Failed +- `x` -> Skipped +- `c` -> Changelog (batch commit fetch) +- `h` -> toggle sync history overlay + +## Counter and Metric Glossary + +| Name | Definition | +|---|---| +| `total` | `to_clone + to_sync` at run start | +| `completed` | incremented on each `RepoProgress` | +| `failed` | incremented when `success == false` | +| `skipped` | incremented when `skipped == true` | +| `cloned` | successful non-skipped entries with `is_clone == true` | +| `synced` | successful non-skipped entries with `is_clone == false` | +| `with_updates` | successful non-skipped entries where `had_updates == true` | +| `total_new_commits` | sum of `new_commits` where provided on updated entries | +| `current` (running UI) | `completed - failed - skipped - with_updates - cloned` | +| `current` (finished UI) | `summary.success - with_updates - cloned` | +| `changelog_total` | number of updated repos with resolvable path | +| `changelog_loaded` | count of `RepoCommitLog` received while in changelog mode | + +## Config and Option Precedence Matrix + +### Sync mode (fetch/pull) + +Effective mode comes from `prepare_sync_workspace()`: + +| Priority | Source | Rule | +|---|---|---| +| 1 | TUI runtime toggle `app.sync_pull` (`m` key) | if `true`, force `Pull` | +| 2 | Workspace config `workspace.sync_mode` | used when toggle is not forcing pull | +| 3 | Global config `config.sync_mode` | fallback | + +Important nuance: + +- `m` only flips `app.sync_pull` boolean. +- `false` means "do not force pull", not "force fetch". +- If workspace/global default is pull, mode can still be pull with toggle shown as Fetch in settings. 
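
The three-level precedence and the "toggle only forces Pull" nuance can be sketched as a small resolution function. Names here are hypothetical; the real resolution lives in `prepare_sync_workspace()`.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyncMode {
    Fetch,
    Pull,
}

// Sketch of the precedence rule: the runtime toggle can only *force* Pull.
// When it is false, the workspace config wins over the global default, so
// the effective mode can still be Pull even though the toggle reads "Fetch".
fn effective_sync_mode(
    force_pull_toggle: bool,          // app.sync_pull ([m] key)
    workspace_mode: Option<SyncMode>, // workspace.sync_mode
    global_mode: SyncMode,            // config.sync_mode
) -> SyncMode {
    if force_pull_toggle {
        SyncMode::Pull
    } else {
        workspace_mode.unwrap_or(global_mode)
    }
}

fn main() {
    // Toggle off + workspace says Pull => still Pull (the nuance above).
    assert_eq!(
        effective_sync_mode(false, Some(SyncMode::Pull), SyncMode::Fetch),
        SyncMode::Pull
    );
    // Toggle off, no workspace override => global default.
    assert_eq!(
        effective_sync_mode(false, None, SyncMode::Fetch),
        SyncMode::Fetch
    );
    // Toggle on always forces Pull, regardless of config.
    assert_eq!(
        effective_sync_mode(true, Some(SyncMode::Fetch), SyncMode::Fetch),
        SyncMode::Pull
    );
}
```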
+ +### Other sync knobs + +| Concern | Effective source in TUI Sync | Notes | +|---|---|---| +| concurrency | `concurrency_override` (none in TUI) -> `workspace.concurrency` -> `config.concurrency` -> clamp 1..32 | resolved in `prepare_sync_workspace` | +| skip uncommitted | hard-coded `true` in TUI backend request | currently not user-tunable in popup | +| refresh discovery | hard-coded `true` in TUI backend request | discovery cache bypassed | +| create base path | hard-coded `true` in TUI backend request | missing base dir auto-created | +| dry run | hard-coded `false` in `execute_prepared_sync(..., false, ...)` | settings `dry_run` not wired | +| structure template | `workspace.structure` or `config.structure` | used for path resolution | +| clone options | workspace clone options override global clone options | depth/branch/submodules | + +## Persistence and Side Effects + +On `OperationComplete` (Sync): + +- `last_synced` is set on active workspace (RFC3339). +- workspace is persisted via `WorkspaceManager::save`. +- sync history entry is appended and capped to 50 in memory. +- history is persisted via `SyncHistoryManager`. +- a status scan is auto-spawned to refresh Dashboard data. + +Tick behavior: + +- tick rate is 100ms. +- throughput sample added every 10 ticks (1 second). +- sampling continues even when Sync popup is hidden. + +Status auto-refresh behavior: + +- Dashboard periodic status scan is suppressed while sync is in progress. 
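
The tick cadence above reduces to simple modular arithmetic; this sketch uses illustrative names, not the actual fields in `app.rs`.

```rust
// Sketch of the tick cadence: a 100 ms tick, with one throughput sample
// taken every 10th tick, i.e. once per second. Sampling is driven by the
// tick counter alone, which is why it continues while the popup is hidden.
const TICK_MS: u64 = 100;
const TICKS_PER_SAMPLE: u64 = 10;

fn should_sample(tick_count: u64) -> bool {
    tick_count > 0 && tick_count % TICKS_PER_SAMPLE == 0
}

fn main() {
    // Over 5 simulated seconds (50 ticks), exactly 5 samples are taken
    // (at ticks 10, 20, 30, 40, 50).
    let samples = (1..=50).filter(|t| should_sample(*t)).count();
    assert_eq!(samples, 5);
    assert_eq!(TICK_MS * TICKS_PER_SAMPLE, 1_000); // one sample per second
}
```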
+ +## Troubleshooting + +| Symptom | Probable cause | What to check | Recovery | +|---|---|---|---| +| Pressing `s` does not open Sync popup | expected behavior: sync runs in background | Dashboard bottom status line should show background sync | press `p` to open popup | +| Sync does not start and error appears | another operation already `Discovering` or `Running` | check `operation_state` summary in Dashboard footer | wait for current run to finish, then press `s` | +| Changelog stuck in loading view | waiting for `RepoCommitLog` events | verify run had updated repos with valid paths | switch filter away/back to `c`; retry sync | +| `Enter` on finished row shows nothing | selected entry has no computed `path` | inspect entry type/path computation/template | ensure workspace structure/provider mapping resolves existing local path | +| Unexpected pull behavior with settings showing Fetch | defaults can still request pull when not forced | compare workspace/global `sync_mode` values | set workspace/global mode explicitly, or force pull with `m` as needed | + +## Known Limitations and Recommended Follow-ups + +Current limitations: + +1. TUI `dry_run` flag is not wired into Sync execution path. +2. TUI always requests `skip_uncommitted = true`. +3. `RepoProgress.skip_reason` is carried in messages but ignored in reducer/render. +4. Settings mode toggle can mislead because `sync_pull = false` means "use defaults", not guaranteed fetch. +5. Reducer coverage for `handle_backend_message` is still thin. + +Recommended follow-ups: + +1. Thread `app.dry_run` into backend call and `execute_prepared_sync`. +2. Add explicit skip-uncommitted toggle in settings/popup. +3. Render `skip_reason` in finished log rows and/or tooltip line. +4. Replace boolean `sync_pull` with explicit runtime mode enum (`Default`, `Fetch`, `Pull`). +5. Add focused reducer tests for message-to-state transitions. 
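
Follow-up 4 could be shaped roughly like this. The enum and function names are hypothetical; today `sync_pull` is a bare bool, which cannot distinguish "use configured defaults" from "force fetch".

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyncMode {
    Fetch,
    Pull,
}

// Hypothetical tri-state runtime override replacing `sync_pull: bool`.
// `Default` means "fall through to workspace/global config", which the
// current `false` value cannot express distinctly from forcing fetch.
#[derive(Debug, Clone, Copy, PartialEq)]
enum RuntimeSyncMode {
    Default,
    Fetch,
    Pull,
}

fn resolve(runtime: RuntimeSyncMode, configured: SyncMode) -> SyncMode {
    match runtime {
        RuntimeSyncMode::Default => configured,    // honor workspace/global
        RuntimeSyncMode::Fetch => SyncMode::Fetch, // explicit force-fetch
        RuntimeSyncMode::Pull => SyncMode::Pull,   // explicit force-pull
    }
}

fn main() {
    // With a tri-state, the settings screen can show exactly what will run.
    assert_eq!(resolve(RuntimeSyncMode::Default, SyncMode::Pull), SyncMode::Pull);
    assert_eq!(resolve(RuntimeSyncMode::Fetch, SyncMode::Pull), SyncMode::Fetch);
    assert_eq!(resolve(RuntimeSyncMode::Pull, SyncMode::Fetch), SyncMode::Pull);
}
```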
+ +## Testing Map + +| File | What it covers today | Gaps | +|---|---|---| +| `src/tui/screens/sync_tests.rs` | popup hide (`p`), start sync (`s`), right-arrow filter cycle in finished state | no rendering assertions for counters/log variants/history/changelog | +| `src/tui/screens/dashboard_tests.rs` | dashboard starts sync in background, opens popup via `p`, show/hide preserves indices | no assertions for footer runtime text or background status content | +| `src/tui/backend_tests.rs` | discovery/clone/sync adapters emit expected messages; spawn errors without workspace | no full integration ordering test through real sync workflow | +| `src/tui/event_tests.rs` | enum/message construction and clone/debug sanity | no behavioral routing assertions | +| `src/tui/handler_tests.rs` | global quit and setup-wizard navigation basics | no direct tests for `handle_backend_message` sync reducer paths | + +## User Journey Examples + +### 1) Everything up to date + +1. User presses `s` on Dashboard. +2. State: `Idle -> Discovering -> Running -> Finished`. +3. Most repo events are `success=true`, `had_updates=false`. +4. Finished counters show high `Current`, low/zero `Updated`, `Failed`, `Skipped`. +5. Performance line shows total repos, duration, repos/sec. + +### 2) Mixed updates, clones, and failures + +1. Run starts as usual. +2. Interleaved `RepoProgress` includes: + - `Updated` entries (`[**]`) + - `Cloned` entries (`[++]`) + - `Failed` entries (`[!!]`) +3. Finished counters reflect all categories. +4. User can filter with `u`, `f`, `x`, or cycle via left/right. +5. `Enter` on a row fetches commit details; `c` shows aggregate changelog. + +### 3) Discovery returns zero repos + +1. Discovery completes with empty repo list. +2. Backend sends `OperationComplete(OpSummary::new())` without `OperationStarted`. +3. Handler still transitions to `Finished` with zeroed metrics. +4. Popup/log shows completion with no repo processing entries. 
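
The finished-UI counter arithmetic from the glossary, applied to a journey-1 style run. The numbers are hypothetical and the struct is trimmed; the real `OpSummary` lives in `src/types/repo.rs`.

```rust
// Minimal slice of the summary needed for the finished-UI "Current" counter.
struct OpSummary {
    success: usize,
    with_updates: usize,
    cloned: usize,
}

// Glossary rule: `current` (finished UI) = summary.success - with_updates - cloned
fn finished_current(s: &OpSummary) -> usize {
    s.success - s.with_updates - s.cloned
}

fn main() {
    // Hypothetical run: 20 successful repos, 2 had updates, 1 fresh clone.
    let summary = OpSummary { success: 20, with_updates: 2, cloned: 1 };
    assert_eq!(finished_current(&summary), 17); // shown as "Current"
}
```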
+
+## Appendix: Exact Sync Request Values from TUI
+
+When TUI starts Sync, it calls `prepare_sync_workspace()` with:
+
+- `refresh: true`
+- `skip_uncommitted: true`
+- `pull: app.sync_pull`
+- `concurrency_override: None`
+- `create_base_path: true`
+
+Then execution uses:
+
+- `execute_prepared_sync(prepared, false, clone_progress, sync_progress)`
+
+This is the exact reason dry-run and skip-uncommitted are currently fixed in TUI Sync.
diff --git a/docs/plans/move-config-to-sync-folder.md b/docs/plans/move-config-to-sync-folder.md
new file mode 100644
index 0000000..39d7966
--- /dev/null
+++ b/docs/plans/move-config-to-sync-folder.md
@@ -0,0 +1,98 @@
+# Move Workspace Config to Sync Folder
+
+## Status: Proposal (not yet implemented)
+
+## Problem
+
+Workspace configs live in `~/.config/git-same/<name>/` — a location disconnected from the repos they manage. This creates several friction points:
+
+- **Requires auto-generated names** — The `<name>` directory (e.g., `github-repos`) is an artifact of this storage model. Users never chose it and gain nothing from it.
+- **Not portable** — Moving `~/repos` to another machine loses the config. You'd need to re-run setup.
+- **Two locations to back up** — Config in `~/.config`, data in `~/repos`.
+- **Not self-describing** — No way to tell a folder is a gisa sync target by looking at it.
+
+## Proposed Design
+
+Move workspace config into the sync folder itself:
+
+```text
+~/repos/                          ~/.config/git-same/
+├── .git-same/                    └── config.toml  (global only)
+│   ├── config.toml                     structure = "{org}/{repo}"
+│   └── cache.json                      concurrency = 8
+├── org1/repo1/.git/                    sync_mode = "fetch"
+└── org2/repo3/.git/
+```
+
+### Key changes
+
+1. **Workspace config moves to `{base_path}/.git-same/config.toml`**
+2. **Cache moves to `{base_path}/.git-same/cache.json`**
+3. **Global config stays at `~/.config/git-same/config.toml`** — holds defaults + a registry of known workspace paths
+4. 
**`default_workspace` becomes a path** — e.g., `default_workspace = "~/repos"` instead of `default_workspace = "github-repos"`
+5. **Workspace discovery** — scan registered paths in global config, verify `.git-same/config.toml` exists
+
+### Global config changes
+
+```toml
+# ~/.config/git-same/config.toml
+structure = "{org}/{repo}"
+concurrency = 8
+sync_mode = "fetch"
+
+# Default workspace (by path)
+default_workspace = "~/repos"
+
+# Known workspace paths (for discovery)
+workspaces = [
+  "~/repos",
+  "~/work/code",
+]
+```
+
+### Migration strategy
+
+1. On first run after update, detect old-format configs in `~/.config/git-same/<name>/`
+2. Move each workspace config into its `base_path/.git-same/`
+3. Update global config with `workspaces = [...]` array
+4. Remove old workspace directories from `~/.config/git-same/`
+5. Print a migration summary
+
+### What this eliminates
+
+- `WorkspaceConfig.name` field (no longer needed — path IS the identity)
+- `WorkspaceManager::name_from_path()` / `unique_name()`
+- The entire `~/.config/git-same/<name>/` directory structure
+- `SetupState.workspace_name` / `name_editing`
+
+### What this enables
+
+- `gisa setup` in any directory drops config right there
+- Moving a sync folder to another machine preserves the config
+- `base_path` becomes the sole workspace identifier everywhere
+
+## Edge Cases to Handle
+
+| Case | Resolution |
+|------|-----------|
+| Sync folder on read-only mount | Fall back to `~/.config` location, warn |
+| Sync folder is itself a git repo | Add `.git-same/` to `.gitignore` automatically |
+| Org or repo named `.git-same` | Use a more unique name like `.gisa/` |
+| User deletes `.git-same/` | Workspace disappears from registry; `gisa setup` re-creates |
+| Two workspaces with same base_path | Not possible — path is unique identity |
+
+## Files to Modify
+
+- `src/config/workspace_manager.rs` — Complete rewrite of discovery/save/load
+- `src/config/workspace.rs` — Remove `name` field, update 
constructors +- `src/config/parser.rs` — Add `workspaces` array, change `default_workspace` to path +- `src/setup/handler.rs` — Save to `base_path/.git-same/` instead of `~/.config` +- `src/setup/state.rs` — Remove `workspace_name` / `name_editing` +- `src/cache.rs` — Update cache path resolution +- `src/commands/workspace.rs` — Rewrite for path-based operations +- `src/tui/handler.rs` — Update workspace matching +- `src/tui/app.rs` — Update workspace loading + +## Estimated Scope + +Medium-large refactor. Migration logic is the riskiest part — must handle partial migrations, permission errors, and rollback. Consider feature-flagging the new storage format during development. diff --git a/docs/plans/optimize-binary-aliases.md b/docs/plans/optimize-binary-aliases.md new file mode 100644 index 0000000..d01b160 --- /dev/null +++ b/docs/plans/optimize-binary-aliases.md @@ -0,0 +1,91 @@ +# Optimize Binary Aliases + +**Status:** Proposed +**Impact:** ~4x faster release link stage + +## Problem + +The release build produces 4 identical binaries (`git-same`, `gitsame`, `gitsa`, `gisa`), all compiled from `src/main.rs` with no behavioral differences. Combined with the release profile (`lto = true`, `codegen-units = 1`), each binary triggers a full LTO link pass — the most expensive build step. This roughly quadruples link time. + +## Current State + +- `Cargo.toml` defines 4 `[[bin]]` entries all pointing to `src/main.rs` +- `src/main.rs` does not inspect `argv[0]` — all binaries behave identically +- Integration tests only reference `git-same` +- Homebrew formula already installs only `git-same` +- GitHub Release artifacts are single binaries per platform + +## Proposed Solution + +Replace the 4 `[[bin]]` entries with a single `git-same` binary and create aliases via symlinks or documentation depending on the install method. 
+ +### Cargo.toml + +Remove 3 duplicate `[[bin]]` sections, keeping only: + +```toml +[[bin]] +name = "git-same" +path = "src/main.rs" +``` + +### Homebrew (S3-Publish-Homebrew.yml) + +Add symlinks in the formula's `install` method: + +```ruby +bin.install_symlink "git-same" => "gitsame" +bin.install_symlink "git-same" => "gitsa" +bin.install_symlink "git-same" => "gisa" +``` + +### cargo install / GitHub Releases + +Document that users can create shell aliases: + +```bash +alias gitsame="git-same" +alias gitsa="git-same" +alias gisa="git-same" +``` + +Or symlinks: + +```bash +for alias in gitsame gitsa gisa; do + ln -sf "$(which git-same)" "$(dirname $(which git-same))/$alias" +done +``` + +### toolkit/Conductor/run.sh + +Add symlink creation after `cargo install --path .`: + +```bash +for alias in gitsame gitsa gisa; do + ln -sf "$HOME/.cargo/bin/git-same" "$HOME/.cargo/bin/$alias" +done +``` + +## Files to Modify + +| File | Change | +|------|--------| +| `Cargo.toml` | Remove 3 duplicate `[[bin]]` entries | +| `toolkit/Conductor/run.sh` | Add symlink creation after install | +| `.github/workflows/S3-Publish-Homebrew.yml` | Add `bin.install_symlink` lines | +| `docs/README.md` | Document alias setup for manual installs | + +## No Changes Needed + +- `src/main.rs` — no binary-name awareness +- `src/cli.rs` — display name hardcoded to `git-same`, completions generate as `gisa` (works via symlink) +- `tests/integration_test.rs` — already only references `git-same` +- `.github/workflows/S2-Release-GitHub.yml` — already builds single artifact per platform + +## Trade-offs + +- **Pro:** ~4x faster link stage in release builds +- **Pro:** Smaller build output (1 binary instead of 4) +- **Con:** `cargo install git-same` no longer auto-installs all 4 aliases +- **Con:** Users need to manually set up aliases or symlinks (unless using Homebrew) diff --git a/docs/plans/path-selector-ux.md b/docs/plans/path-selector-ux.md new file mode 100644 index 0000000..907ec06 --- 
/dev/null +++ b/docs/plans/path-selector-ux.md @@ -0,0 +1,109 @@ +# Setup Path Selector UX Ideas + +**Status:** Proposed +**Scope:** Setup wizard (`SelectPath` screen) + +## Goal + +Reduce friction when choosing a base path during setup, especially for users who do not remember exact directory names. + +## Current Friction + +- Path entry depends on free typing + tab completion. +- Suggestions are helpful, but users cannot visually browse real folders. +- New users can get stuck on path syntax (`~`, trailing `/`, nested folders). + +## Option A (Recommended): Inline Folder Navigator Mode + +Add a toggleable browse mode inside the existing `SelectPath` screen. + +### Interaction + +- `b` opens navigator mode +- `Up`/`Down` selects folder +- `Right` enters selected folder +- `Left` goes to parent folder +- `Enter` selects current folder as base path +- `Esc` exits navigator mode back to typed path mode + +### Mockup + +```text + Where should repositories be cloned? + Repos will be organized as: // + + Base Path: ~/Developer + + Browse Folders (Navigator) + Current: ~/Developer + + > projects/ + clients/ + playground/ + archives/ + .. (parent) + + [Enter] Use Folder [Left/Right] Open/Back [Esc] Close +``` + +### Why this fits now + +- Reuses current key model (arrow navigation already standard). +- Keeps existing typed mode and tab completion for power users. +- Minimal architecture impact: can live inside `setup/screens/path.rs` + `setup/handler.rs`. + +## Option B: Two-Pane Explorer + +Split path screen into left tree (folders) + right preview/details. + +### Mockup + +```text + Base Path Picker + + ~/Developer Preview + > projects/ Final path: + clients/ ~/Developer/projects + playground/ Clone layout: + archives/ ~/Developer// +``` + +### Trade-off + +Clearer context, but more rendering complexity and harder to support narrow terminals. 
+
+## Option C: Guided Presets + "Browse from here"
+
+Keep suggestions first, but add one action: "Browse from selected suggestion".
+
+### Mockup
+
+```text
+  Suggestions:
+  > ~/Git-Same/GitHub   (current directory)
+    ~/Developer
+    ~/Projects
+    ~
+
+  [Enter] Use Suggestion   [b] Browse From Suggestion   [Tab] Edit
+```
+
+### Trade-off
+
+Very small change, but less flexible than full navigator mode.
+
+## Recommended Rollout
+
+1. Ship Option C first (fast, low risk).
+2. Add Option A navigator in next iteration.
+3. Keep typed + completion mode permanently for advanced users.
+
+## Implementation Notes
+
+- New `PathInputMode` enum (e.g., `Suggestions | Typing | Browsing`).
+- Navigator state fields:
+  - `browse_current_dir: String`
+  - `browse_entries: Vec`
+  - `browse_index: usize`
+- Hide dot-folders by default; allow toggle later.
+- Always show resulting normalized path in a preview line.
diff --git a/docs/plans/remove-global-providers-config-opus.md b/docs/plans/remove-global-providers-config-opus.md
new file mode 100644
index 0000000..f052add
--- /dev/null
+++ b/docs/plans/remove-global-providers-config-opus.md
@@ -0,0 +1,64 @@
+# Plan: Remove [[providers]] from Global Config
+
+## Context
+
+Previous work simplified auth to gh-cli only and restructured providers (Steps 1–5, all done). The remaining task: remove `[[providers]]` from the global user config entirely. Confirmed that `config.providers: Vec<ProviderEntry>` is never used at runtime — all sync/clone operations use `WorkspaceConfig.provider` (workspace-level). The global config should contain only: `concurrency`, `sync_mode`, `structure`, `default_workspace`, `[clone]`, `[filters]`. `ProviderEntry` / `AuthMethod` remain as internal types used by workspace config and the provider factory. 
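
The target shape of the global config after this removal can be sketched as follows. This is a rough sketch with illustrative field types and defaults; the real `Config` in `src/config/parser.rs` uses serde derives, and the `[clone]`/`[filters]` sub-structs have more fields than shown.

```rust
// Rough sketch of the global Config after [[providers]] is removed:
// only the six remaining top-level concerns survive.
#[derive(Debug, Clone, PartialEq)]
struct CloneOptions {
    depth: Option<u32>,
    submodules: bool,
}

#[derive(Debug, Clone, PartialEq)]
struct Filters {
    exclude_archived: bool, // illustrative; real filter set is larger
}

#[derive(Debug, Clone, PartialEq)]
struct Config {
    concurrency: u32,
    sync_mode: String,
    structure: String,
    default_workspace: Option<String>,
    clone: CloneOptions,
    filters: Filters,
    // no `providers: Vec<ProviderEntry>` field any more
}

impl Default for Config {
    fn default() -> Self {
        Config {
            concurrency: 4, // illustrative default
            sync_mode: "fetch".into(),
            structure: "{org}/{repo}".into(),
            default_workspace: None,
            clone: CloneOptions { depth: None, submodules: false },
            filters: Filters { exclude_archived: true },
        }
    }
}

fn main() {
    let cfg = Config::default();
    assert_eq!(cfg.structure, "{org}/{repo}");
    assert!(cfg.default_workspace.is_none());
}
```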
+
+---
+
+## Step 1: `src/config/parser.rs`
+
+- Remove the `providers: Vec<ProviderEntry>` field and its `#[serde(default = "default_providers")]` annotation from `Config`
+- Remove `default_providers()` function
+- Remove `use super::provider_config::ProviderEntry;` import (no longer needed here)
+- Remove provider validation block from `Config::validate()` (the `for (i, provider)` loop and the empty-providers check)
+- Remove `enabled_providers()` method
+- Remove the `[[providers]]` section from `Config::default_toml()` (lines ~260–265)
+- Remove the `ProviderEntry` initialization from `Config::default()` (it lives in the `providers` field)
+
+---
+
+## Step 2: `src/config/mod.rs`
+
+- Update the doc comment example to remove `[[providers]]`
+- Keep `AuthMethod` and `ProviderEntry` in `pub use provider_config::{...}` — required because `WorkspaceProvider.auth: AuthMethod` is a `pub` field and `to_provider_entry()` returns `ProviderEntry`. Removing them from `mod.rs` while `provider_config` is a private module would cause an `E0446` compile error (public field/method using a type that is unreachable outside the module). 
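
The reachability constraint behind keeping that re-export can be shown in miniature. This is a minimal sketch, not the project's actual module tree: the point is that the `pub use` line is what makes the public `auth` field legal; deleting it would leave a public field whose type is unreachable outside the module (the private-in-public situation, reported as `E0446` in the plan's framing).

```rust
// Minimal sketch of the pattern Step 2 preserves: a type defined in a
// private module, made publicly reachable via a re-export so it can
// appear in a public field.
mod config {
    mod provider_config {
        // `pub` within a private module: not reachable from outside on its own.
        #[derive(Debug, Clone, Copy, PartialEq)]
        pub enum AuthMethod {
            GhCli,
        }
    }

    // This re-export is what keeps the type publicly reachable.
    pub use provider_config::AuthMethod;

    pub struct WorkspaceProvider {
        pub auth: AuthMethod,
    }
}

fn main() {
    // External code can only name the type through the re-export path.
    let provider = config::WorkspaceProvider {
        auth: config::AuthMethod::GhCli,
    };
    assert_eq!(provider.auth, config::AuthMethod::GhCli);
}
```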
+
+---
+
+## Step 3: `src/lib.rs` prelude
+
+- Remove `AuthMethod` and `ProviderEntry` from the prelude re-exports in `src/lib.rs:73` — they remain accessible as `crate::config::AuthMethod` / `crate::config::ProviderEntry` but are no longer advertised at the top-level API surface
+
+---
+
+## Step 4: `src/config/parser_tests.rs`
+
+- `test_default_config` (line 12): remove `assert_eq!(config.providers.len(), 1)`
+- `test_load_full_config` (lines 41–43): remove `[[providers]]` section from the test TOML string (TOML parses fine without it)
+- Remove `test_load_multi_provider_config` entirely (lines 58–75)
+- Remove `test_validation_rejects_empty_providers` entirely (lines 104–113)
+- Remove `test_enabled_providers_filter` entirely (lines 131–152)
+- `test_parse_config_with_default_workspace` (lines 165–167): remove `[[providers]]` from content
+- `test_parse_config_without_default_workspace` (lines 175–177): remove `[[providers]]` from content
+- `test_save_default_workspace_to_replace_without_sync_mode` (lines 241–244): remove `[[providers]]` from content
+
+> Note: serde ignores unknown TOML keys by default, so existing user config files with `[[providers]]` will continue to load without error — the section is silently ignored.
+
+---
+
+## Files Summary
+
+| File | Change |
+|------|--------|
+| `src/config/parser.rs` | Remove `providers` field, `default_providers()`, empty-providers validation, `enabled_providers()`, `[[providers]]` from default TOML |
+| `src/config/mod.rs` | Keep `AuthMethod`/`ProviderEntry` in public exports (required for public API); update doc example |
+| `src/lib.rs` | Remove `AuthMethod`/`ProviderEntry` from prelude |
+| `src/config/parser_tests.rs` | Remove provider-related assertions and tests |
+
+---
+
+## Verification
+
+1. `cargo fmt -- --check`
+2. `cargo clippy -- -D warnings`
+3. `cargo test`
diff --git a/docs/plans/remove-global-providers-config.md b/docs/plans/remove-global-providers-config.md
new file mode 100644
index 0000000..a72f4d3
--- /dev/null
+++ b/docs/plans/remove-global-providers-config.md
@@ -0,0 +1,114 @@
+# Plan: Remove `[[providers]]` From Global Config
+
+## Goal
+Remove provider definitions from the global config file (`~/.config/git-same/config.toml`) and keep provider configuration workspace-scoped.
+After this change, global config should only contain:
+- `concurrency`
+- `sync_mode`
+- `structure`
+- `default_workspace`
+- `[clone]`
+- `[filters]`
+
+## Scope Decision
+- Treat this as a **breaking library API change** (CLI behavior remains aligned with current workspace-based flow).
+- Runtime already uses workspace provider config for setup/sync operations.
+
+## Implementation Steps
+
+### 1. Confirm release/API scope
+- Mark this work as breaking for crate consumers because `AuthMethod`/`ProviderEntry` are currently part of public interfaces.
+- Audit affected surfaces:
+  - `src/lib.rs`
+  - `src/config/mod.rs`
+  - `src/config/workspace.rs`
+  - `src/auth/mod.rs`
+  - `src/provider/mod.rs`
+
+### 2. Remove global `[[providers]]` schema from parser
+Update `src/config/parser.rs`:
+- Remove `providers: Vec<ProviderEntry>` from `Config`.
+- Remove `default_providers()` helper.
+- Remove provider-specific validation (empty-check and per-provider loop).
+- Remove `enabled_providers()` method.
+- Remove `[[providers]]` block from `Config::default_toml()`.
+- Remove unused import of `ProviderEntry`.
+
+### 3. Redesign workspace/provider bridge API
+Update `src/config/workspace.rs`:
+- Remove `to_provider_entry()` adapter from `WorkspaceProvider`.
+- Add direct helpers required by auth/provider code paths (for example, API URL/name helpers), so runtime no longer depends on `ProviderEntry`.
+
+### 4. Remove `AuthMethod` from public workspace model
+- Since auth is gh-cli-only, remove `auth` from `WorkspaceProvider` and related serialization/tests.
+- Keep workspace provider fields that are still user-specific (`kind`, `api_url`, `prefer_ssh`).
+- Update `src/config/provider_config.rs` as needed so legacy type usage is minimized or internalized.
+
+### 5. Update auth/provider entrypoints to use workspace provider type
+Update:
+- `src/auth/mod.rs`
+- `src/provider/mod.rs`
+
+Actions:
+- Replace function signatures that currently accept `ProviderEntry`.
+- Preserve existing behavior for host extraction and enterprise URL handling.
+
+### 6. Migrate all runtime call sites
+Update:
+- `src/workflows/sync_workspace.rs`
+- `src/setup/handler.rs`
+
+Actions:
+- Pass `WorkspaceProvider` directly to auth/provider layers.
+- Remove intermediate conversion calls.
+
+### 7. Remove public re-exports for legacy provider config types
+Update:
+- `src/config/mod.rs`
+- `src/lib.rs`
+- `src/lib_tests.rs`
+
+Actions:
+- Remove prelude/config re-exports of `AuthMethod` and `ProviderEntry`.
+- Adjust prelude tests to validate remaining public API.
+
+### 8. Update parser tests for new global schema
+Update `src/config/parser_tests.rs`:
+- Remove provider-related assertions/tests.
+- Remove `[[providers]]` snippets where no longer necessary.
+- Add backward-compat test: config containing legacy `[[providers]]` still parses and is ignored.
+
+### 9. Update workspace/auth/provider tests
+Update relevant tests to new interfaces and structs:
+- `src/config/workspace_tests.rs`
+- `src/provider/mod_tests.rs`
+- `src/auth/mod_tests.rs`
+- `src/workflows/sync_workspace_tests.rs`
+
+### 10. Update docs and examples
+Update:
+- `docs/README.md`
+- `.context/GIT-SAME-DOCUMENTATION.md` (if maintained in parallel)
+
+Actions:
+- Remove global `[[providers]]` examples.
+- Document provider configuration as workspace-scoped.
+
+### 11. Validation
+Run:
+- `cargo fmt -- --check`
+- `cargo clippy -- -D warnings`
+- `cargo test`
+
+Manual smoke checks:
+- `gisa init`
+- `gisa setup`
+- `gisa sync --dry-run`
+- Verify legacy config with `[[providers]]` still loads without failure.
+
+### 12. Delivery strategy (recommended)
+Split into 3 commits:
+1. Parser/schema cleanup + parser tests
+2. API redesign + runtime call-site migration
+3. Docs + remaining test updates
+
diff --git a/rust-toolchain.toml b/rust-toolchain.toml
new file mode 100644
index 0000000..5355133
--- /dev/null
+++ b/rust-toolchain.toml
@@ -0,0 +1,2 @@
+[toolchain]
+channel = "1.93.1"
diff --git a/rustfmt.toml b/rustfmt.toml
new file mode 100644
index 0000000..255db29
--- /dev/null
+++ b/rustfmt.toml
@@ -0,0 +1,5 @@
+# Git-Same Rust formatting configuration
+edition = "2021"
+max_width = 100
+tab_spaces = 4
+use_small_heuristics = "Default"
diff --git a/src/app/cli/mod.rs b/src/app/cli/mod.rs
new file mode 100644
index 0000000..f49676f
--- /dev/null
+++ b/src/app/cli/mod.rs
@@ -0,0 +1,4 @@
+//! CLI-facing adapters.
+
+pub use crate::cli::*;
+pub use crate::commands::*;
diff --git a/src/app/mod.rs b/src/app/mod.rs
new file mode 100644
index 0000000..4bec085
--- /dev/null
+++ b/src/app/mod.rs
@@ -0,0 +1,5 @@
+//! Application entry layers.
+
+pub mod cli;
+#[cfg(feature = "tui")]
+pub mod tui;
diff --git a/src/app/tui/mod.rs b/src/app/tui/mod.rs
new file mode 100644
index 0000000..931898b
--- /dev/null
+++ b/src/app/tui/mod.rs
@@ -0,0 +1,6 @@
+//! TUI-facing adapters.
+
+#[cfg(feature = "tui")]
+pub use crate::setup::run_setup;
+#[cfg(feature = "tui")]
+pub use crate::tui::run_tui;
diff --git a/src/auth/gh_cli.rs b/src/auth/gh_cli.rs
new file mode 100644
index 0000000..81cff3b
--- /dev/null
+++ b/src/auth/gh_cli.rs
@@ -0,0 +1,118 @@
+//! GitHub CLI authentication integration.
+//!
+//! Uses the `gh` CLI tool to obtain authentication tokens securely.
+
+use crate::errors::AppError;
+use std::process::Command;
+
+/// Check if the GitHub CLI is installed.
+pub fn is_installed() -> bool {
+    Command::new("gh")
+        .arg("--version")
+        .output()
+        .map(|o| o.status.success())
+        .unwrap_or(false)
+}
+
+/// Check if the user is authenticated with the GitHub CLI.
+pub fn is_authenticated() -> bool {
+    Command::new("gh")
+        .args(["auth", "status"])
+        .output()
+        .map(|o| o.status.success())
+        .unwrap_or(false)
+}
+
+/// Get the authentication token from the GitHub CLI.
+pub fn get_token() -> Result<String, AppError> {
+    let output = Command::new("gh")
+        .args(["auth", "token"])
+        .output()
+        .map_err(|e| AppError::auth(format!("Failed to run 'gh auth token': {}", e)))?;
+
+    if !output.status.success() {
+        let stderr = String::from_utf8_lossy(&output.stderr);
+        return Err(AppError::auth(format!(
+            "gh auth token failed: {}",
+            stderr.trim()
+        )));
+    }
+
+    let token = String::from_utf8(output.stdout)
+        .map_err(|_| AppError::auth("Invalid UTF-8 in token output"))?
+        .trim()
+        .to_string();
+
+    if token.is_empty() {
+        return Err(AppError::auth("gh auth token returned empty token"));
+    }
+
+    Ok(token)
+}
+
+/// Get the authenticated GitHub username.
+pub fn get_username() -> Result<String, AppError> {
+    let output = Command::new("gh")
+        .args(["api", "user", "--jq", ".login"])
+        .output()
+        .map_err(|e| AppError::auth(format!("Failed to get username from gh: {}", e)))?;
+
+    if !output.status.success() {
+        let stderr = String::from_utf8_lossy(&output.stderr);
+        return Err(AppError::auth(format!(
+            "Failed to get username: {}",
+            stderr.trim()
+        )));
+    }
+
+    let username = String::from_utf8(output.stdout)
+        .map_err(|_| AppError::auth("Invalid UTF-8 in username output"))?
+        .trim()
+        .to_string();
+
+    if username.is_empty() {
+        return Err(AppError::auth("gh returned empty username"));
+    }
+
+    Ok(username)
+}
+
+/// Get token for a specific GitHub host (for GitHub Enterprise).
+pub fn get_token_for_host(host: &str) -> Result<String, AppError> {
+    let output = Command::new("gh")
+        .args(["auth", "token", "--hostname", host])
+        .output()
+        .map_err(|e| {
+            AppError::auth(format!(
+                "Failed to run 'gh auth token --hostname {}': {}",
+                host, e
+            ))
+        })?;
+
+    if !output.status.success() {
+        let stderr = String::from_utf8_lossy(&output.stderr);
+        return Err(AppError::auth(format!(
+            "gh auth token for {} failed: {}",
+            host,
+            stderr.trim()
+        )));
+    }
+
+    let token = String::from_utf8(output.stdout)
+        .map_err(|_| AppError::auth("Invalid UTF-8 in token output"))?
+        .trim()
+        .to_string();
+
+    if token.is_empty() {
+        return Err(AppError::auth(format!(
+            "gh auth token for {} returned empty token",
+            host
+        )));
+    }
+
+    Ok(token)
+}
+
+#[cfg(test)]
+#[path = "gh_cli_tests.rs"]
+mod tests;
diff --git a/src/auth/gh_cli_tests.rs b/src/auth/gh_cli_tests.rs
new file mode 100644
index 0000000..6d13e62
--- /dev/null
+++ b/src/auth/gh_cli_tests.rs
@@ -0,0 +1,48 @@
+use super::*;
+
+#[test]
+fn test_is_installed_returns_bool() {
+    // This test just verifies the function runs without panicking
+    // The actual result depends on whether gh is installed
+    let _result = is_installed();
+}
+
+#[test]
+fn test_is_authenticated_returns_bool() {
+    let _result = is_authenticated();
+}
+
+// Integration tests that require gh to be installed and authenticated
+// These are ignored by default
+#[test]
+#[ignore]
+fn test_get_token_when_authenticated() {
+    if !is_installed() || !is_authenticated() {
+        return;
+    }
+
+    let token = get_token().unwrap();
+    assert!(!token.is_empty());
+    // GitHub tokens start with specific prefixes
+    assert!(
+        token.starts_with("ghp_")
+            || token.starts_with("github_pat_")
+            || token.starts_with("gho_")
+            || token.starts_with("ghu_")
+            || token.starts_with("ghr_")
+            || token.starts_with("ghs_")
+    );
+}
+
+#[test]
+#[ignore]
+fn test_get_username_when_authenticated() {
+    if !is_installed() || !is_authenticated() {
+        return;
+    }
+
+    let username = get_username().unwrap();
+    assert!(!username.is_empty());
+    // Usernames shouldn't contain whitespace
+    assert!(!username.contains(char::is_whitespace));
+}
diff --git a/src/auth/mod.rs b/src/auth/mod.rs
new file mode 100644
index 0000000..86c5f71
--- /dev/null
+++ b/src/auth/mod.rs
@@ -0,0 +1,165 @@
+//! Authentication management for gisa.
+//!
+//! This module handles authentication with Git hosting providers
+//! using the GitHub CLI (`gh auth token`).
+//!
+//! # Example
+//!
+//! ```no_run
+//! use git_same::auth::{get_auth_for_provider, AuthResult};
+//! use git_same::config::ProviderEntry;
+//!
+//! let provider = ProviderEntry::github();
+//! let auth = get_auth_for_provider(&provider).expect("Failed to authenticate");
+//! println!("Authenticated as {:?} via {}", auth.username, auth.method);
+//! ```
+
+pub mod gh_cli;
+pub mod ssh;
+
+use crate::config::ProviderEntry;
+use crate::errors::AppError;
+use tracing::{debug, warn};
+
+/// Authentication result containing the token and metadata.
+#[derive(Debug, Clone)]
+pub struct AuthResult {
+    /// The authentication token
+    pub token: String,
+    /// Method used to obtain the token
+    pub method: ResolvedAuthMethod,
+    /// The authenticated username (if available)
+    pub username: Option<String>,
+}
+
+/// The actual method used for authentication.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub enum ResolvedAuthMethod {
+    /// Used GitHub CLI
+    GhCli,
+}
+
+impl std::fmt::Display for ResolvedAuthMethod {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            ResolvedAuthMethod::GhCli => write!(f, "GitHub CLI"),
+        }
+    }
+}
+
+/// Get authentication using the GitHub CLI.
+///
+/// Requires `gh` to be installed and authenticated.
+pub fn get_auth() -> Result<AuthResult, AppError> {
+    debug!("Resolving authentication via gh CLI");
+
+    // Try gh CLI
+    let gh_installed = gh_cli::is_installed();
+    let gh_authenticated = gh_installed && gh_cli::is_authenticated();
+    debug!(gh_installed, gh_authenticated, "Checking GitHub CLI status");
+
+    if gh_installed && gh_authenticated {
+        match gh_cli::get_token() {
+            Ok(token) => {
+                let username = gh_cli::get_username().ok();
+                debug!(
+                    username = username.as_deref().unwrap_or(""),
+                    "Authenticated via GitHub CLI"
+                );
+                return Ok(AuthResult {
+                    token,
+                    method: ResolvedAuthMethod::GhCli,
+                    username,
+                });
+            }
+            Err(e) => {
+                warn!(error = %e, "gh CLI token retrieval failed");
+            }
+        }
+    }
+
+    // No authentication found - provide helpful error message
+    let ssh_note = if ssh::has_ssh_keys() {
+        "\n\nNote: SSH keys detected. While SSH keys work for git clone/push,\n\
+         you still need a provider API token for repository discovery.\n\
+         The SSH keys will be used automatically for cloning."
+    } else {
+        ""
+    };
+
+    Err(AppError::auth(format!(
+        "No authentication found.\n\n\
+         Please authenticate using the GitHub CLI:\n\n\
+         For GitHub.com: gh auth login\n\
+         For GitHub Enterprise: gh auth login --hostname <host>\n\
+         {}\n\
+         Install from: https://cli.github.com/",
+        ssh_note
+    )))
+}
+
+/// Get authentication for a specific provider configuration.
+pub fn get_auth_for_provider(provider: &ProviderEntry) -> Result<AuthResult, AppError> {
+    debug!(
+        api_url = provider.api_url.as_deref().unwrap_or("default"),
+        "Resolving authentication for provider"
+    );
+
+    // For GitHub Enterprise, try to get token for specific host
+    if let Some(api_url) = &provider.api_url {
+        if let Some(host) = extract_host(api_url) {
+            if host != "api.github.com" {
+                debug!(host, "Attempting GitHub Enterprise authentication");
+                if let Ok(token) = gh_cli::get_token_for_host(&host) {
+                    debug!(host, "Authenticated via gh CLI for enterprise host");
+                    return Ok(AuthResult {
+                        token,
+                        method: ResolvedAuthMethod::GhCli,
+                        username: None,
+                    });
+                }
+            }
+        }
+    }
+
+    // Default gh auth
+    if !gh_cli::is_installed() {
+        debug!("gh CLI not installed");
+        return Err(AppError::auth(
+            "GitHub CLI is not installed. Install from https://cli.github.com/",
+        ));
+    }
+    if !gh_cli::is_authenticated() {
+        debug!("gh CLI not authenticated");
+        return Err(AppError::auth(
+            "GitHub CLI is not authenticated. Run: gh auth login",
+        ));
+    }
+
+    let token = gh_cli::get_token()?;
+    let username = gh_cli::get_username().ok();
+    debug!(
+        username = username.as_deref().unwrap_or(""),
+        "Authenticated via gh CLI"
+    );
+
+    Ok(AuthResult {
+        token,
+        method: ResolvedAuthMethod::GhCli,
+        username,
+    })
+}
+
+/// Extract hostname from an API URL.
+fn extract_host(url: &str) -> Option<String> {
+    let without_scheme = url.split_once("://").map(|(_, rest)| rest).unwrap_or(url);
+    let host = without_scheme.split('/').next()?;
+    if host.is_empty() {
+        return None;
+    }
+    Some(host.to_string())
+}
+
+#[cfg(test)]
+#[path = "mod_tests.rs"]
+mod tests;
diff --git a/src/auth/mod_tests.rs b/src/auth/mod_tests.rs
new file mode 100644
index 0000000..3270a61
--- /dev/null
+++ b/src/auth/mod_tests.rs
@@ -0,0 +1,48 @@
+use super::*;
+
+#[test]
+fn test_resolved_auth_method_display() {
+    assert_eq!(format!("{}", ResolvedAuthMethod::GhCli), "GitHub CLI");
+}
+
+#[test]
+fn test_extract_host() {
+    assert_eq!(
+        extract_host("https://api.github.com"),
+        Some("api.github.com".to_string())
+    );
+    assert_eq!(
+        extract_host("https://github.company.com/api/v3"),
+        Some("github.company.com".to_string())
+    );
+    assert_eq!(
+        extract_host("http://localhost:8080/api"),
+        Some("localhost:8080".to_string())
+    );
+}
+
+#[test]
+fn test_extract_host_no_scheme() {
+    assert_eq!(
+        extract_host("api.github.com/v3"),
+        Some("api.github.com".to_string())
+    );
+}
+
+#[test]
+fn test_extract_host_empty() {
+    assert_eq!(extract_host(""), None);
+}
+
+#[test]
+fn test_extract_host_scheme_only() {
+    assert_eq!(extract_host("https://"), None);
+}
+
+#[test]
+fn test_extract_host_with_port() {
+    assert_eq!(
+        extract_host("https://github.example.com:8443/api/v3"),
+        Some("github.example.com:8443".to_string())
+    );
+}
diff --git a/src/auth/ssh.rs b/src/auth/ssh.rs
new file mode 100644
index 0000000..04d73fd
--- /dev/null
+++ b/src/auth/ssh.rs
@@ -0,0 +1,101 @@
+//! SSH key detection for git operations.
+//!
+//! Note: SSH keys authenticate git clone/fetch/pull operations,
+//! NOT GitHub API calls. This module detects if SSH keys are configured
+//! so we can provide better error messages and suggest SSH clone URLs.
+
+use std::path::PathBuf;
+use std::process::Command;
+
+/// Check if SSH is likely configured for GitHub.
+///
+/// Uses BatchMode to avoid interactive prompts. If the host key is not
+/// already known, this returns false (user should run `ssh -T git@github.com`
+/// manually to verify and accept the host key).
+pub fn has_github_ssh_access() -> bool {
+    // Try to test SSH connection to GitHub
+    // BatchMode=yes prevents interactive prompts (for host key verification, passwords, etc.)
+    // ConnectTimeout=5 prevents hanging on network issues
+    let output = Command::new("ssh")
+        .args([
+            "-T",
+            "-o",
+            "BatchMode=yes",
+            "-o",
+            "ConnectTimeout=5",
+            "git@github.com",
+        ])
+        .output();
+
+    if let Ok(output) = output {
+        // GitHub SSH test returns exit code 1 with success message
+        // "Hi username! You've successfully authenticated..."
+        let stderr = String::from_utf8_lossy(&output.stderr);
+        stderr.contains("successfully authenticated")
+    } else {
+        false
+    }
+}
+
+/// Detect if SSH keys exist in the standard locations.
+pub fn has_ssh_keys() -> bool {
+    let home = match std::env::var("HOME") {
+        Ok(h) => h,
+        Err(_) => return false,
+    };
+
+    let ssh_dir = PathBuf::from(home).join(".ssh");
+
+    // Check for common SSH key types
+    let key_files = [
+        "id_rsa",
+        "id_ed25519",
+        "id_ecdsa",
+        "id_dsa",
+        "github_rsa",
+        "github_ed25519",
+    ];
+
+    for key_file in &key_files {
+        let key_path = ssh_dir.join(key_file);
+        if key_path.exists() {
+            return true;
+        }
+    }
+
+    false
+}
+
+/// Get SSH key files that exist.
+pub fn get_ssh_key_files() -> Vec<PathBuf> {
+    let home = match std::env::var("HOME") {
+        Ok(h) => h,
+        Err(_) => return vec![],
+    };
+
+    let ssh_dir = PathBuf::from(home).join(".ssh");
+
+    let key_files = [
+        "id_rsa",
+        "id_ed25519",
+        "id_ecdsa",
+        "id_dsa",
+        "github_rsa",
+        "github_ed25519",
+    ];
+
+    key_files
+        .iter()
+        .map(|f| ssh_dir.join(f))
+        .filter(|p| p.exists())
+        .collect()
+}
+
+/// Check if SSH agent is running.
+pub fn has_ssh_agent() -> bool { + std::env::var("SSH_AUTH_SOCK").is_ok() +} + +#[cfg(test)] +#[path = "ssh_tests.rs"] +mod tests; diff --git a/src/auth/ssh_tests.rs b/src/auth/ssh_tests.rs new file mode 100644 index 0000000..19d183d --- /dev/null +++ b/src/auth/ssh_tests.rs @@ -0,0 +1,29 @@ +use super::*; + +#[test] +fn test_has_ssh_keys_detection() { + // This test just checks that the function runs without panicking + // The actual result depends on the test environment + let _ = has_ssh_keys(); +} + +#[test] +fn test_get_ssh_key_files() { + // This test just checks that the function runs without panicking + let keys = get_ssh_key_files(); + // Can't assert specific results as it depends on test environment + assert!(keys.len() <= 6); // At most 6 key types +} + +#[test] +fn test_has_ssh_agent() { + // This test just checks that the function runs without panicking + let _ = has_ssh_agent(); +} + +#[test] +#[ignore] // Ignore by default as it requires network access +fn test_has_github_ssh_access() { + // This test requires actual SSH configuration + let _ = has_github_ssh_access(); +} diff --git a/src/banner.rs b/src/banner.rs new file mode 100644 index 0000000..c8a6062 --- /dev/null +++ b/src/banner.rs @@ -0,0 +1,269 @@ +//! ASCII banner for gisa — shared across CLI and TUI. + +use console::style; + +/// Banner lines 1-4 (shared between CLI and TUI). +const LINES: [&str; 4] = [ + " ██████╗ ██╗████████╗ ███████╗ █████╗ ███╗ ███╗███████╗", + "██╔════╝ ██║╚══██╔══╝ ██╔════╝██╔══██╗████╗ ████║██╔════╝", + "██║ ███╗██║ ██║█████╗███████╗███████║██╔████╔██║█████╗ ", + "██║ ██║██║ ██║╚════╝╚════██║██╔══██║██║╚██╔╝██║██╔══╝ ", +]; + +/// Line 5 prefix (before version badge). +const LINE5_PREFIX: &str = "╚██████╔╝██║ ██║ ███████║██║ ██║██║ ╚═╝ ██║█"; + +/// Line 5 suffix (after version badge). +const LINE5_SUFFIX: &str = "╗"; + +/// Line 6. 
+const LAST_LINE: &str = " ╚═════╝ ╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝"; + +/// Visual width of the ASCII logo (in monospace columns). +const ART_WIDTH: usize = 62; + +/// Static gradient color stops: Blue → Cyan → Green. +const GRADIENT_STOPS: [(u8, u8, u8); 3] = [ + (59, 130, 246), // Blue + (6, 182, 212), // Cyan + (34, 197, 94), // Green +]; + +/// Animated gradient color stops: Blue → Cyan → Green → Indigo → Purple. +const ANIMATED_GRADIENT_STOPS: [(u8, u8, u8); 5] = [ + (59, 130, 246), // Blue + (6, 182, 212), // Cyan + (34, 197, 94), // Green + (99, 102, 241), // Indigo + (147, 51, 234), // Purple +]; + +/// Canonical subheadline used by CLI + Dashboard. +pub fn subheadline() -> &'static str { + env!("CARGO_PKG_DESCRIPTION") +} + +/// Prints the gisa ASCII art banner to stdout (CLI mode). +pub fn print_banner() { + // Build full art from shared constants + let version = env!("CARGO_PKG_VERSION"); + let version_display = format!("{:^6}", version); + let line5 = format!("{LINE5_PREFIX}{version_display}{LINE5_SUFFIX}"); + let art = format!( + "\n{}\n{}\n{}\n{}\n{}\n{}", + LINES[0], LINES[1], LINES[2], LINES[3], line5, LAST_LINE + ); + + println!("{}", style(art).cyan().bold()); + let subtitle = subheadline(); + let visible_len = subtitle.chars().count(); + let pad = if visible_len < ART_WIDTH { + (ART_WIDTH - visible_len) / 2 + } else { + 0 + }; + println!("{}{}\n", " ".repeat(pad + 1), style(subtitle).dim()); +} + +// --------------------------------------------------------------------------- +// TUI rendering (feature-gated) +// --------------------------------------------------------------------------- + +#[cfg(feature = "tui")] +use ratatui::{ + layout::Rect, + style::{Color, Modifier, Style}, + text::{Line, Span}, + widgets::Paragraph, + Frame, +}; + +/// Linearly interpolate between RGB color stops. 
+#[cfg(feature = "tui")]
+pub(crate) fn interpolate_stops(stops: &[(u8, u8, u8)], t: f64) -> (u8, u8, u8) {
+    let t = t.clamp(0.0, 1.0);
+    let segments = stops.len() - 1;
+    let scaled = t * segments as f64;
+    let idx = (scaled as usize).min(segments - 1);
+    let local_t = scaled - idx as f64;
+    let (r1, g1, b1) = stops[idx];
+    let (r2, g2, b2) = stops[idx + 1];
+    let lerp = |a: u8, b: u8, t: f64| -> u8 { (a as f64 + (b as f64 - a as f64) * t) as u8 };
+    (
+        lerp(r1, r2, local_t),
+        lerp(g1, g2, local_t),
+        lerp(b1, b2, local_t),
+    )
+}
+
+/// Apply a static gradient to a line of text.
+#[cfg(feature = "tui")]
+fn gradient_line<'a>(text: &'a str, stops: &[(u8, u8, u8)]) -> Line<'a> {
+    let chars: Vec<&str> = text.split_inclusive(|_: char| true).collect();
+    let len = chars.len().max(1);
+    let spans: Vec<Span<'a>> = chars
+        .into_iter()
+        .enumerate()
+        .map(|(i, ch)| {
+            let t = i as f64 / (len - 1).max(1) as f64;
+            let (r, g, b) = interpolate_stops(stops, t);
+            Span::styled(
+                ch.to_string(),
+                Style::default()
+                    .fg(Color::Rgb(r, g, b))
+                    .add_modifier(Modifier::BOLD),
+            )
+        })
+        .collect();
+    Line::from(spans)
+}
+
+/// Compute the color for a character at normalized position `base_t`
+/// during a left-to-right sweep animation at the given `phase`.
+/// Returns the first stop color when the character is outside the wave.
+#[cfg(feature = "tui")]
+fn sweep_color(stops: &[(u8, u8, u8)], base_t: f64, phase: f64) -> (u8, u8, u8) {
+    let wave_start = 2.0 * phase - 1.0;
+    let wave_t = base_t - wave_start;
+    if !(0.0..1.0).contains(&wave_t) {
+        stops[0]
+    } else {
+        interpolate_stops(stops, wave_t)
+    }
+}
+
+/// Apply an animated gradient sweep to a line of text (left-to-right wave).
+/// `phase` in [0.0, 1.0] drives the sweep: 0.0 and 1.0 = all first-stop color,
+/// 0.5 = full gradient visible.
+#[cfg(feature = "tui")]
+fn animated_gradient_line<'a>(text: &'a str, stops: &[(u8, u8, u8)], phase: f64) -> Line<'a> {
+    let chars: Vec<&str> = text.split_inclusive(|_: char| true).collect();
+    let len = chars.len().max(1);
+    let spans: Vec<Span<'a>> = chars
+        .into_iter()
+        .enumerate()
+        .map(|(i, ch)| {
+            let base_t = i as f64 / (len - 1).max(1) as f64;
+            let (r, g, b) = sweep_color(stops, base_t, phase);
+            Span::styled(
+                ch.to_string(),
+                Style::default()
+                    .fg(Color::Rgb(r, g, b))
+                    .add_modifier(Modifier::BOLD),
+            )
+        })
+        .collect();
+    Line::from(spans)
+}
+
+/// Render the GIT-SAME banner with a static Blue → Cyan → Green gradient.
+#[cfg(feature = "tui")]
+pub fn render_banner(frame: &mut Frame, area: Rect) {
+    let version = env!("CARGO_PKG_VERSION");
+    let version_display = format!("{:^6}", version);
+    let stops = &GRADIENT_STOPS;
+
+    let mut banner_lines: Vec<Line> = Vec::new();
+    for text in &LINES {
+        banner_lines.push(gradient_line(text, stops));
+    }
+
+    // Line 5: gradient prefix + inverted version + gradient suffix
+    let full_len =
+        LINE5_PREFIX.chars().count() + version_display.len() + LINE5_SUFFIX.chars().count();
+    let mut line5_spans: Vec<Span> = Vec::new();
+    for (i, ch) in LINE5_PREFIX.split_inclusive(|_: char| true).enumerate() {
+        let t = i as f64 / (full_len - 1).max(1) as f64;
+        let (r, g, b) = interpolate_stops(stops, t);
+        line5_spans.push(Span::styled(
+            ch.to_string(),
+            Style::default()
+                .fg(Color::Rgb(r, g, b))
+                .add_modifier(Modifier::BOLD),
+        ));
+    }
+    let ver_pos = LINE5_PREFIX.chars().count();
+    let ver_t = ver_pos as f64 / (full_len - 1).max(1) as f64;
+    let (vr, vg, vb) = interpolate_stops(stops, ver_t);
+    line5_spans.push(Span::styled(
+        version_display,
+        Style::default()
+            .fg(Color::Black)
+            .bg(Color::Rgb(vr, vg, vb))
+            .add_modifier(Modifier::BOLD),
+    ));
+    let suffix_pos = ver_pos + 6;
+    let t = suffix_pos as f64 / (full_len - 1).max(1) as f64;
+    let (r, g, b) = interpolate_stops(stops, t);
+    line5_spans.push(Span::styled(
+        LINE5_SUFFIX.to_string(),
+        Style::default()
+            .fg(Color::Rgb(r, g, b))
+            .add_modifier(Modifier::BOLD),
+    ));
+    banner_lines.push(Line::from(line5_spans));
+
+    banner_lines.push(gradient_line(LAST_LINE, stops));
+
+    let banner = Paragraph::new(banner_lines).centered();
+    frame.render_widget(banner, area);
+}
+
+/// Render the GIT-SAME banner with animated gradient sweep (left-to-right wave).
+/// `phase` in [0.0, 1.0] drives the sweep: 0.0 and 1.0 = all first-stop color,
+/// 0.5 = full gradient visible.
+#[cfg(feature = "tui")]
+pub fn render_animated_banner(frame: &mut Frame, area: Rect, phase: f64) {
+    let version = env!("CARGO_PKG_VERSION");
+    let version_display = format!("{:^6}", version);
+    let stops: &[(u8, u8, u8)] = &ANIMATED_GRADIENT_STOPS;
+
+    let mut banner_lines: Vec<Line> = Vec::new();
+    for text in &LINES {
+        banner_lines.push(animated_gradient_line(text, stops, phase));
+    }
+
+    // Line 5: sweep prefix + inverted version badge + sweep suffix
+    let full_len =
+        LINE5_PREFIX.chars().count() + version_display.len() + LINE5_SUFFIX.chars().count();
+    let mut line5_spans: Vec<Span> = Vec::new();
+    for (i, ch) in LINE5_PREFIX.split_inclusive(|_: char| true).enumerate() {
+        let base_t = i as f64 / (full_len - 1).max(1) as f64;
+        let (r, g, b) = sweep_color(stops, base_t, phase);
+        line5_spans.push(Span::styled(
+            ch.to_string(),
+            Style::default()
+                .fg(Color::Rgb(r, g, b))
+                .add_modifier(Modifier::BOLD),
+        ));
+    }
+    let ver_pos = LINE5_PREFIX.chars().count();
+    let ver_base_t = ver_pos as f64 / (full_len - 1).max(1) as f64;
+    let (vr, vg, vb) = sweep_color(stops, ver_base_t, phase);
+    line5_spans.push(Span::styled(
+        version_display,
+        Style::default()
+            .fg(Color::Black)
+            .bg(Color::Rgb(vr, vg, vb))
+            .add_modifier(Modifier::BOLD),
+    ));
+    let suffix_pos = ver_pos + 6;
+    let suffix_base_t = suffix_pos as f64 / (full_len - 1).max(1) as f64;
+    let (r, g, b) = sweep_color(stops, suffix_base_t, phase);
+    line5_spans.push(Span::styled(
+        LINE5_SUFFIX.to_string(),
+        Style::default()
+            .fg(Color::Rgb(r, g, b))
+            .add_modifier(Modifier::BOLD),
+    ));
+    banner_lines.push(Line::from(line5_spans));
+
+    banner_lines.push(animated_gradient_line(LAST_LINE, stops, phase));
+
+    let banner = Paragraph::new(banner_lines).centered();
+    frame.render_widget(banner, area);
+}
+
+#[cfg(test)]
+#[path = "banner_tests.rs"]
+mod tests;
diff --git a/src/banner_tests.rs b/src/banner_tests.rs
new file mode 100644
index 0000000..05ac418
--- /dev/null
+++ b/src/banner_tests.rs
@@ -0,0 +1,50 @@
+use super::*;
+
+#[test]
+fn subheadline_is_non_empty() {
+    assert!(!subheadline().trim().is_empty());
+}
+
+#[test]
+fn print_banner_executes_without_panicking() {
+    print_banner();
+}
+
+#[cfg(feature = "tui")]
+#[test]
+fn interpolate_stops_clamps_to_bounds() {
+    let start = interpolate_stops(&[(0, 0, 0), (255, 255, 255)], -1.0);
+    assert_eq!(start, (0, 0, 0));
+
+    let end = interpolate_stops(&[(0, 0, 0), (255, 255, 255)], 2.0);
+    assert_eq!(end, (255, 255, 255));
+}
+
+#[cfg(feature = "tui")]
+#[test]
+fn render_banner_handles_multiple_widths() {
+    use ratatui::backend::TestBackend;
+    use ratatui::Terminal;
+
+    for width in [64, 90] {
+        let backend = TestBackend::new(width, 8);
+        let mut terminal = Terminal::new(backend).unwrap();
+        terminal
+            .draw(|frame| {
+                let area = frame.area();
+                render_banner(frame, area);
+            })
+            .unwrap();
+
+        let buffer = terminal.backend().buffer();
+        let mut rendered = String::new();
+        for y in 0..buffer.area.height {
+            for x in 0..buffer.area.width {
+                rendered.push_str(buffer[(x, y)].symbol());
+            }
+            rendered.push('\n');
+        }
+
+        assert!(rendered.contains(env!("CARGO_PKG_VERSION")));
+    }
+}
diff --git a/src/cache/discovery.rs b/src/cache/discovery.rs
new file mode 100644
index 0000000..588e30b
--- /dev/null
+++ b/src/cache/discovery.rs
@@ -0,0 +1,225 @@
+use crate::types::OwnedRepo;
+use anyhow::{Context, Result};
+use serde::{Deserialize, Serialize};
+use std::collections::HashMap;
+use std::fs;
+use std::path::{Path, PathBuf};
+use std::time::{Duration, SystemTime, UNIX_EPOCH};
+use tracing::{debug, warn};
+
+/// Default cache TTL (1 hour)
+const DEFAULT_CACHE_TTL: Duration = Duration::from_secs(3600);
+
+/// Current cache format version.
+/// Increment this when making breaking changes to the cache format.
+pub const CACHE_VERSION: u32 = 1;
+
+/// Discovery cache data.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct DiscoveryCache {
+    /// Cache format version for forward compatibility.
+    /// If missing during deserialization, defaults to 0 (pre-versioned cache).
+    #[serde(default)]
+    pub version: u32,
+
+    /// When the discovery was last performed (Unix timestamp).
+    pub last_discovery: u64,
+
+    /// Username or identifier.
+    pub username: String,
+
+    /// List of organization names.
+    pub orgs: Vec<String>,
+
+    /// Total number of repositories discovered.
+    pub repo_count: usize,
+
+    /// Cached repositories by provider.
+    pub repos: HashMap<String, Vec<OwnedRepo>>,
+}
+
+impl DiscoveryCache {
+    /// Create a new cache entry.
+    pub fn new(username: String, repos: HashMap<String, Vec<OwnedRepo>>) -> Self {
+        let orgs: Vec<String> = repos
+            .values()
+            .flat_map(|r| r.iter().map(|owned| owned.owner.clone()))
+            .collect::<std::collections::HashSet<_>>()
+            .into_iter()
+            .collect();
+
+        let repo_count = repos.values().map(|r| r.len()).sum();
+
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap_or_default()
+            .as_secs();
+
+        debug!(
+            version = CACHE_VERSION,
+            repo_count,
+            org_count = orgs.len(),
+            "Creating new discovery cache"
+        );
+
+        Self {
+            version: CACHE_VERSION,
+            last_discovery: now,
+            username,
+            orgs,
+            repo_count,
+            repos,
+        }
+    }
+
+    /// Check if this cache is compatible with the current version.
+    pub fn is_compatible(&self) -> bool {
+        self.version == CACHE_VERSION
+    }
+
+    /// Check if the cache is still valid.
+    pub fn is_valid(&self, ttl: Duration) -> bool {
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap_or_default()
+            .as_secs();
+
+        if now < self.last_discovery {
+            return false;
+        }
+
+        let age = now - self.last_discovery;
+        age < ttl.as_secs()
+    }
+
+    /// Get the age of the cache in seconds.
+    pub fn age_secs(&self) -> u64 {
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .unwrap_or_default()
+            .as_secs();
+
+        now.saturating_sub(self.last_discovery)
+    }
+}
+
+/// Discovery cache manager.
+pub struct CacheManager {
+    cache_path: PathBuf,
+    ttl: Duration,
+}
+
+impl CacheManager {
+    /// Create a cache manager for a specific workspace.
+    pub fn for_workspace(workspace_name: &str) -> Result<Self> {
+        let cache_path = crate::config::WorkspaceManager::cache_path(workspace_name)
+            .map_err(|e| anyhow::anyhow!("{}", e))?;
+        Ok(Self {
+            cache_path,
+            ttl: DEFAULT_CACHE_TTL,
+        })
+    }
+
+    /// Create a cache manager with a custom path.
+    pub fn with_path(cache_path: PathBuf) -> Self {
+        Self {
+            cache_path,
+            ttl: DEFAULT_CACHE_TTL,
+        }
+    }
+
+    /// Create a cache manager with a custom TTL.
+    pub fn with_ttl(mut self, ttl: Duration) -> Self {
+        self.ttl = ttl;
+        self
+    }
+
+    /// Load the cache if it exists and is valid.
+ pub fn load(&self) -> Result<Option<DiscoveryCache>> { + if !self.cache_path.exists() { + debug!(path = %self.cache_path.display(), "Cache file does not exist"); + return Ok(None); + } + + let content = match fs::read_to_string(&self.cache_path) { + Ok(content) => content, + Err(err) => { + warn!( + path = %self.cache_path.display(), + error = %err, + "Cache file unreadable, ignoring cache" + ); + return Ok(None); + } + }; + let cache: DiscoveryCache = match serde_json::from_str(&content) { + Ok(cache) => cache, + Err(err) => { + warn!( + path = %self.cache_path.display(), + error = %err, + "Cache file malformed, ignoring cache" + ); + return Ok(None); + } + }; + + if !cache.is_compatible() { + warn!( + cache_version = cache.version, + current_version = CACHE_VERSION, + "Cache version mismatch, ignoring stale cache" + ); + return Ok(None); + } + + if cache.is_valid(self.ttl) { + debug!( + age_secs = cache.age_secs(), + repo_count = cache.repo_count, + "Loaded valid cache" + ); + Ok(Some(cache)) + } else { + debug!(age_secs = cache.age_secs(), "Cache expired"); + Ok(None) + } + } + + /// Save the cache to disk. + pub fn save(&self, cache: &DiscoveryCache) -> Result<()> { + if let Some(parent) = self.cache_path.parent() { + fs::create_dir_all(parent).context("Failed to create cache directory")?; + } + + let json = serde_json::to_string_pretty(cache).context("Failed to serialize cache")?; + fs::write(&self.cache_path, &json).context("Failed to write cache file")?; + + debug!( + path = %self.cache_path.display(), + version = cache.version, + repo_count = cache.repo_count, + bytes = json.len(), + "Saved cache to disk" + ); + + Ok(()) + } + + /// Clear the cache file. + pub fn clear(&self) -> Result<()> { + if self.cache_path.exists() { + fs::remove_file(&self.cache_path).context("Failed to remove cache file")?; + } + Ok(()) + } + + /// Get the cache path. 
+ pub fn path(&self) -> &Path { + &self.cache_path + } +} + +#[cfg(test)] +#[path = "discovery_tests.rs"] +mod tests; diff --git a/src/cache/discovery_tests.rs b/src/cache/discovery_tests.rs new file mode 100644 index 0000000..6a60690 --- /dev/null +++ b/src/cache/discovery_tests.rs @@ -0,0 +1,162 @@ +use super::*; +use crate::types::Repo; +use std::thread::sleep; +use tempfile::TempDir; + +fn create_test_repo(id: u64, name: &str, owner: &str) -> OwnedRepo { + OwnedRepo { + owner: owner.to_string(), + repo: Repo { + id, + name: name.to_string(), + full_name: format!("{}/{}", owner, name), + ssh_url: format!("git@github.com:{}/{}.git", owner, name), + clone_url: format!("https://github.com/{}/{}.git", owner, name), + default_branch: "main".to_string(), + private: false, + archived: false, + fork: false, + pushed_at: None, + description: None, + }, + } +} + +#[test] +fn test_cache_creation() { + let mut repos = HashMap::new(); + repos.insert( + "github".to_string(), + vec![ + create_test_repo(1, "repo1", "org1"), + create_test_repo(2, "repo2", "org2"), + ], + ); + + let cache = DiscoveryCache::new("testuser".to_string(), repos); + + assert_eq!(cache.version, CACHE_VERSION); + assert_eq!(cache.username, "testuser"); + assert_eq!(cache.repo_count, 2); + assert_eq!(cache.orgs.len(), 2); + assert!(cache.orgs.contains(&"org1".to_string())); + assert!(cache.orgs.contains(&"org2".to_string())); + assert!(cache.is_compatible()); +} + +#[test] +fn test_cache_version_compatibility() { + let repos = HashMap::new(); + let mut cache = DiscoveryCache::new("testuser".to_string(), repos); + + assert!(cache.is_compatible()); + + cache.version = 0; + assert!(!cache.is_compatible()); + + cache.version = CACHE_VERSION + 1; + assert!(!cache.is_compatible()); +} + +#[test] +fn test_cache_validity() { + let repos = HashMap::new(); + let cache = DiscoveryCache::new("testuser".to_string(), repos); + + assert!(cache.is_valid(Duration::from_secs(3600))); + + sleep(Duration::from_millis(1100)); 
+ assert!(!cache.is_valid(Duration::from_secs(1))); +} + +#[test] +fn test_cache_age() { + let repos = HashMap::new(); + let cache = DiscoveryCache::new("testuser".to_string(), repos); + + sleep(Duration::from_millis(100)); + let age = cache.age_secs(); + assert!(age == 0 || age == 1); +} + +#[test] +fn test_cache_save_and_load() { + let temp_dir = TempDir::new().expect("temp dir"); + let cache_path = temp_dir.path().join("workspace-cache.json"); + + let manager = CacheManager::with_path(cache_path.clone()); + + let mut repos = HashMap::new(); + repos.insert( + "github".to_string(), + vec![create_test_repo(1, "repo1", "org1")], + ); + + let cache = DiscoveryCache::new("testuser".to_string(), repos); + + manager.save(&cache).expect("save cache"); + assert!(cache_path.exists()); + + let loaded = manager.load().expect("load cache"); + assert!(loaded.is_some()); + + let loaded_cache = loaded.expect("cache exists"); + assert_eq!(loaded_cache.username, "testuser"); + assert_eq!(loaded_cache.repo_count, 1); +} + +#[test] +fn test_cache_expiration() { + let temp_dir = TempDir::new().expect("temp dir"); + let cache_path = temp_dir.path().join("workspace-cache.json"); + + let manager = CacheManager::with_path(cache_path.clone()).with_ttl(Duration::from_secs(1)); + + let repos = HashMap::new(); + let cache = DiscoveryCache::new("testuser".to_string(), repos); + + manager.save(&cache).expect("save cache"); + + let loaded = manager.load().expect("load cache"); + assert!( + loaded.is_some(), + "Cache should be valid immediately after save" + ); + + let short_ttl_manager = + CacheManager::with_path(cache_path.clone()).with_ttl(Duration::from_secs(1)); + sleep(Duration::from_millis(1100)); + + let loaded = short_ttl_manager.load().expect("load short ttl cache"); + assert!( + loaded.is_none(), + "Cache should be expired after waiting longer than TTL" + ); +} + +#[test] +fn test_cache_clear() { + let temp_dir = TempDir::new().expect("temp dir"); + let cache_path = 
temp_dir.path().join("workspace-cache.json"); + + let manager = CacheManager::with_path(cache_path.clone()); + + let repos = HashMap::new(); + let cache = DiscoveryCache::new("testuser".to_string(), repos); + + manager.save(&cache).expect("save cache"); + assert!(cache_path.exists()); + + manager.clear().expect("clear cache"); + assert!(!cache_path.exists()); +} + +#[test] +fn test_cache_load_nonexistent() { + let temp_dir = TempDir::new().expect("temp dir"); + let cache_path = temp_dir.path().join("nonexistent.json"); + + let manager = CacheManager::with_path(cache_path); + let loaded = manager.load().expect("load cache"); + assert!(loaded.is_none()); +} diff --git a/src/cache/mod.rs b/src/cache/mod.rs new file mode 100644 index 0000000..b49f7cc --- /dev/null +++ b/src/cache/mod.rs @@ -0,0 +1,9 @@ +//! Cache and history persistence. + +mod discovery; +#[cfg(feature = "tui")] +mod sync_history; + +pub use discovery::{CacheManager, DiscoveryCache, CACHE_VERSION}; +#[cfg(feature = "tui")] +pub use sync_history::SyncHistoryManager; diff --git a/src/cache/sync_history.rs b/src/cache/sync_history.rs new file mode 100644 index 0000000..f565263 --- /dev/null +++ b/src/cache/sync_history.rs @@ -0,0 +1,84 @@ +use anyhow::{Context, Result}; +use serde::{Deserialize, Serialize}; +use std::fs; +use std::path::PathBuf; +use tracing::debug; + +use crate::tui::app::SyncHistoryEntry; + +const HISTORY_VERSION: u32 = 1; +const MAX_HISTORY_ENTRIES: usize = 50; + +#[derive(Debug, Serialize, Deserialize)] +struct SyncHistoryFile { + version: u32, + entries: Vec<SyncHistoryEntry>, +} + +/// Manages per-workspace sync history persistence. +/// +/// History is stored at `~/.config/git-same/<workspace>/sync-history.json`. +pub struct SyncHistoryManager { + path: PathBuf, +} + +impl SyncHistoryManager { + /// Create a history manager for a specific workspace. 
+ pub fn for_workspace(workspace_name: &str) -> Result<Self> { + let dir = crate::config::WorkspaceManager::workspace_dir(workspace_name) + .map_err(|e| anyhow::anyhow!("{}", e))?; + Ok(Self { + path: dir.join("sync-history.json"), + }) + } + + /// Load sync history from disk. Returns empty vec if file doesn't exist. + pub fn load(&self) -> Result<Vec<SyncHistoryEntry>> { + if !self.path.exists() { + return Ok(Vec::new()); + } + let content = fs::read_to_string(&self.path).context("Failed to read sync history file")?; + let file: SyncHistoryFile = + serde_json::from_str(&content).context("Failed to parse sync history")?; + if file.version != HISTORY_VERSION { + debug!( + file_version = file.version, + current_version = HISTORY_VERSION, + "Sync history version mismatch, starting fresh" + ); + return Ok(Vec::new()); + } + Ok(file.entries) + } + + /// Save sync history to disk, keeping only the most recent entries. + pub fn save(&self, entries: &[SyncHistoryEntry]) -> Result<()> { + if let Some(parent) = self.path.parent() { + fs::create_dir_all(parent).context("Failed to create history directory")?; + } + let capped: Vec<SyncHistoryEntry> = entries + .iter() + .rev() + .take(MAX_HISTORY_ENTRIES) + .rev() + .cloned() + .collect(); + let file = SyncHistoryFile { + version: HISTORY_VERSION, + entries: capped, + }; + let json = + serde_json::to_string_pretty(&file).context("Failed to serialize sync history")?; + fs::write(&self.path, &json).context("Failed to write sync history")?; + debug!( + path = %self.path.display(), + entries = file.entries.len(), + "Saved sync history" + ); + Ok(()) + } +} + +#[cfg(test)] +#[path = "sync_history_tests.rs"] +mod tests; diff --git a/src/cache/sync_history_tests.rs b/src/cache/sync_history_tests.rs new file mode 100644 index 0000000..4d3e0c3 --- /dev/null +++ b/src/cache/sync_history_tests.rs @@ -0,0 +1,68 @@ +use super::*; + +fn sample_entry(index: usize) -> SyncHistoryEntry { + SyncHistoryEntry { + timestamp: format!("2026-01-{:02}T00:00:00Z", (index % 28) + 1), + 
duration_secs: index as f64, + success: index, + failed: 0, + skipped: 0, + with_updates: index, + cloned: index / 2, + total_new_commits: index as u32, + } +} + +#[test] +fn load_missing_file_returns_empty_history() { + let temp = tempfile::tempdir().unwrap(); + let manager = SyncHistoryManager { + path: temp.path().join("sync-history.json"), + }; + + let entries = manager.load().unwrap(); + assert!(entries.is_empty()); +} + +#[test] +fn save_and_load_roundtrip_preserves_entries() { + let temp = tempfile::tempdir().unwrap(); + let manager = SyncHistoryManager { + path: temp.path().join("sync-history.json"), + }; + let entries = vec![sample_entry(1), sample_entry(2), sample_entry(3)]; + + manager.save(&entries).unwrap(); + let loaded = manager.load().unwrap(); + + assert_eq!(loaded.len(), 3); + assert_eq!(loaded[0].timestamp, entries[0].timestamp); + assert_eq!(loaded[2].total_new_commits, entries[2].total_new_commits); +} + +#[test] +fn save_caps_to_max_history_entries() { + let temp = tempfile::tempdir().unwrap(); + let manager = SyncHistoryManager { + path: temp.path().join("sync-history.json"), + }; + + let entries: Vec<_> = (0..75).map(sample_entry).collect(); + manager.save(&entries).unwrap(); + + let loaded = manager.load().unwrap(); + assert_eq!(loaded.len(), 50); + assert_eq!(loaded[0].duration_secs, 25.0); + assert_eq!(loaded[49].duration_secs, 74.0); +} + +#[test] +fn load_corrupt_json_returns_error() { + let temp = tempfile::tempdir().unwrap(); + let path = temp.path().join("sync-history.json"); + std::fs::write(&path, "not-json").unwrap(); + + let manager = SyncHistoryManager { path }; + let err = manager.load().unwrap_err(); + assert!(err.to_string().contains("Failed to parse sync history")); +} diff --git a/src/checks.rs b/src/checks.rs new file mode 100644 index 0000000..0b56780 --- /dev/null +++ b/src/checks.rs @@ -0,0 +1,194 @@ +//! System requirements checking. +//! +//! Provides reusable requirement checks for both the CLI `init` command +//! 
and the TUI init screen. + +use crate::auth::{gh_cli, ssh}; +use std::process::Command; + +/// Result of a single requirement check. +#[derive(Debug, Clone)] +pub struct CheckResult { + /// Human-readable name of the check (e.g., "Git CLI"). + pub name: String, + /// Whether the check passed. + pub passed: bool, + /// Detail message (e.g., "git 2.43.0" or "not found"). + pub message: String, + /// Suggested action to fix a failure. + pub suggestion: Option<String>, + /// Whether this is a critical requirement (false = warning only). + pub critical: bool, +} + +/// Run all requirement checks. +/// +/// Returns a list of check results for: git, gh CLI, gh authentication, +/// SSH keys, and SSH GitHub access. +pub async fn check_requirements() -> Vec<CheckResult> { + match tokio::task::spawn_blocking(check_requirements_sync).await { + Ok(results) => results, + Err(e) => vec![CheckResult { + name: "System checks".to_string(), + passed: false, + message: format!("failed to run checks: {}", e), + suggestion: Some("Try running checks again".to_string()), + critical: false, + }], + } +} + +/// Run all requirement checks synchronously. +pub fn check_requirements_sync() -> Vec<CheckResult> { + vec![ + check_git_installed(), + check_gh_installed(), + check_gh_authenticated(), + check_ssh_keys(), + check_ssh_github_access(), + ] +} + +/// Check if git is installed and get its version. +fn check_git_installed() -> CheckResult { + match Command::new("git").arg("--version").output() { + Ok(output) if output.status.success() => { + let version = String::from_utf8_lossy(&output.stdout).trim().to_string(); + CheckResult { + name: "Git".to_string(), + passed: true, + message: version, + suggestion: None, + critical: true, + } + } + _ => CheckResult { + name: "Git".to_string(), + passed: false, + message: "not found".to_string(), + suggestion: Some("Install git: https://git-scm.com/downloads".to_string()), + critical: true, + }, + } +} + +/// Check if the GitHub CLI is installed. 
+fn check_gh_installed() -> CheckResult { + if gh_cli::is_installed() { + let version = Command::new("gh") + .arg("--version") + .output() + .ok() + .map(|o| { + String::from_utf8_lossy(&o.stdout) + .lines() + .next() + .unwrap_or("") + .trim() + .to_string() + }) + .unwrap_or_else(|| "installed".to_string()); + CheckResult { + name: "GitHub CLI".to_string(), + passed: true, + message: version, + suggestion: None, + critical: true, + } + } else { + CheckResult { + name: "GitHub CLI".to_string(), + passed: false, + message: "not found".to_string(), + suggestion: Some("Install from https://cli.github.com/".to_string()), + critical: true, + } + } +} + +/// Check if the user is authenticated with the GitHub CLI. +fn check_gh_authenticated() -> CheckResult { + if !gh_cli::is_installed() { + return CheckResult { + name: "GitHub Auth".to_string(), + passed: false, + message: "gh CLI not installed".to_string(), + suggestion: Some("Install gh CLI first, then run: gh auth login".to_string()), + critical: true, + }; + } + + if gh_cli::is_authenticated() { + let username = gh_cli::get_username().unwrap_or_else(|_| "authenticated".to_string()); + CheckResult { + name: "GitHub Auth".to_string(), + passed: true, + message: format!("logged in as {}", username), + suggestion: None, + critical: true, + } + } else { + CheckResult { + name: "GitHub Auth".to_string(), + passed: false, + message: "not authenticated".to_string(), + suggestion: Some("Run: gh auth login".to_string()), + critical: true, + } + } +} + +/// Check if SSH keys are present. 
+fn check_ssh_keys() -> CheckResult { + if ssh::has_ssh_keys() { + let keys = ssh::get_ssh_key_files(); + let key_names: Vec<String> = keys + .iter() + .filter_map(|p| p.file_name().map(|f| f.to_string_lossy().to_string())) + .collect(); + CheckResult { + name: "SSH Keys".to_string(), + passed: true, + message: key_names.join(", "), + suggestion: None, + critical: false, + } + } else { + CheckResult { + name: "SSH Keys".to_string(), + passed: false, + message: "no SSH keys found in ~/.ssh".to_string(), + suggestion: Some( + "Generate a key: ssh-keygen -t ed25519 -C \"your_email@example.com\"".to_string(), + ), + critical: false, + } + } +} + +/// Check if SSH access to GitHub works. +fn check_ssh_github_access() -> CheckResult { + if ssh::has_github_ssh_access() { + CheckResult { + name: "SSH GitHub".to_string(), + passed: true, + message: "authenticated".to_string(), + suggestion: None, + critical: false, + } + } else { + CheckResult { + name: "SSH GitHub".to_string(), + passed: false, + message: "cannot reach github.com via SSH".to_string(), + suggestion: Some( + "Add your SSH key to GitHub: https://github.com/settings/keys".to_string(), + ), + critical: false, + } + } +} + +#[cfg(test)] +#[path = "checks_tests.rs"] +mod tests; diff --git a/src/checks_tests.rs b/src/checks_tests.rs new file mode 100644 index 0000000..e936e5d --- /dev/null +++ b/src/checks_tests.rs @@ -0,0 +1,47 @@ +use super::*; + +#[test] +fn test_check_git_installed_runs() { + let result = check_git_installed(); + // Just verify it runs without panic; actual result depends on environment + assert_eq!(result.name, "Git"); + assert!(result.critical); +} + +#[test] +fn test_check_gh_installed_runs() { + let result = check_gh_installed(); + assert_eq!(result.name, "GitHub CLI"); + assert!(result.critical); +} + +#[test] +fn test_check_ssh_keys_runs() { + let result = check_ssh_keys(); + assert_eq!(result.name, "SSH Keys"); + assert!(!result.critical); +} + +#[test] +fn test_check_result_fields() { + let 
result = CheckResult { + name: "Test".to_string(), + passed: true, + message: "ok".to_string(), + suggestion: None, + critical: false, + }; + assert!(result.passed); + assert!(result.suggestion.is_none()); +} + +#[tokio::test] +async fn test_check_requirements_returns_all_checks() { + let results = check_requirements().await; + assert_eq!(results.len(), 5); + assert_eq!(results[0].name, "Git"); + assert_eq!(results[1].name, "GitHub CLI"); + assert_eq!(results[2].name, "GitHub Auth"); + assert_eq!(results[3].name, "SSH Keys"); + assert_eq!(results[4].name, "SSH GitHub"); +} diff --git a/src/cli.rs b/src/cli.rs new file mode 100644 index 0000000..7d4be29 --- /dev/null +++ b/src/cli.rs @@ -0,0 +1,195 @@ +//! CLI argument parsing using clap. +//! +//! This module defines the command-line interface for git-same, +//! including all subcommands and their options. + +use clap::{Args, Parser, Subcommand}; +use std::path::PathBuf; + +/// Git-Same - Mirror GitHub structure /orgs/repos/ to local file system +/// +/// Available as: git-same, gitsame, gitsa, gisa +/// Also works as: git same (git subcommand) +#[derive(Parser, Debug)] +#[command(name = "git-same")] +#[command(version, about, long_about = None)] +#[command(propagate_version = true)] +pub struct Cli { + /// Increase verbosity (-v, -vv, -vvv) + #[arg(short, long, action = clap::ArgAction::Count, global = true)] + pub verbose: u8, + + /// Suppress all output except errors + #[arg(short, long, global = true)] + pub quiet: bool, + + /// Output in JSON format + #[arg(long, global = true)] + pub json: bool, + + /// Path to config file + #[arg(short = 'C', long, global = true)] + pub config: Option<PathBuf>, + + #[command(subcommand)] + pub command: Option<Command>, +} + +/// Git-Same subcommands +#[derive(Subcommand, Debug)] +pub enum Command { + /// Initialize git-same configuration + Init(InitArgs), + + /// Configure a workspace (interactive wizard) + Setup(SetupArgs), + + /// Sync repositories (discover, clone new, fetch/pull existing) 
+ Sync(SyncCmdArgs), + + /// Show status of local repositories + Status(StatusArgs), + + /// Manage workspaces (list, set default) + Workspace(WorkspaceArgs), + + /// Reset gisa — remove all config, workspaces, and cache + Reset(ResetArgs), +} + +/// Arguments for the init command +#[derive(Args, Debug)] +pub struct InitArgs { + /// Force overwrite existing config + #[arg(short, long)] + pub force: bool, + + /// Path for config file (default: ~/.config/git-same/config.toml) + #[arg(short, long)] + pub path: Option<PathBuf>, +} + +/// Arguments for the setup command +#[derive(Args, Debug)] +pub struct SetupArgs { + /// Workspace name (auto-derived from base path if omitted) + #[arg(short, long)] + pub name: Option<String>, +} + +/// Arguments for the sync command +#[derive(Args, Debug)] +pub struct SyncCmdArgs { + /// Workspace path or name to sync + #[arg(short, long)] + pub workspace: Option<String>, + + /// Use pull instead of fetch for existing repos + #[arg(long)] + pub pull: bool, + + /// Perform a dry run (show what would be done) + #[arg(short = 'n', long)] + pub dry_run: bool, + + /// Maximum number of concurrent operations + #[arg(short, long)] + pub concurrency: Option<usize>, + + /// Force re-discovery (ignore cache) + #[arg(long)] + pub refresh: bool, + + /// Don't skip repositories with uncommitted changes + #[arg(long)] + pub no_skip_uncommitted: bool, +} + +/// Arguments for the status command +#[derive(Args, Debug)] +pub struct StatusArgs { + /// Workspace path or name + #[arg(short, long)] + pub workspace: Option<String>, + + /// Show only repositories with changes + #[arg(short = 'd', long)] + pub uncommitted: bool, + + /// Show only repositories behind upstream + #[arg(short, long)] + pub behind: bool, + + /// Show detailed status for each repository + #[arg(long)] + pub detailed: bool, + + /// Filter to specific organizations (can be repeated) + #[arg(short, long)] + pub org: Vec<String>, +} + +/// Arguments for the workspace command +#[derive(Args, Debug)] +pub struct WorkspaceArgs { + 
#[command(subcommand)] + pub command: WorkspaceCommand, +} + +/// Workspace subcommands +#[derive(Subcommand, Debug)] +pub enum WorkspaceCommand { + /// List configured workspaces + List, + /// Set or show the default workspace + Default(WorkspaceDefaultArgs), +} + +/// Arguments for the workspace default subcommand +#[derive(Args, Debug)] +pub struct WorkspaceDefaultArgs { + /// Workspace path or name to set as default (omit to show current) + pub name: Option<String>, + + /// Clear the default workspace + #[arg(long)] + pub clear: bool, +} + +/// Arguments for the reset command +#[derive(Args, Debug)] +pub struct ResetArgs { + /// Skip confirmation prompt + #[arg(short, long)] + pub force: bool, +} + +impl Cli { + /// Parse command line arguments. + pub fn parse_args() -> Self { + Self::parse() + } + + /// Get the effective verbosity level (0-3). + pub fn verbosity(&self) -> u8 { + if self.quiet { + 0 + } else { + self.verbose.min(3) + } + } + + /// Check if output should be suppressed. + pub fn is_quiet(&self) -> bool { + self.quiet + } + + /// Check if JSON output is requested. 
+ pub fn is_json(&self) -> bool { + self.json + } +} + +#[cfg(test)] +#[path = "cli_tests.rs"] +mod tests; diff --git a/src/cli_tests.rs b/src/cli_tests.rs new file mode 100644 index 0000000..637ffc7 --- /dev/null +++ b/src/cli_tests.rs @@ -0,0 +1,205 @@ +use super::*; + +#[test] +fn test_cli_parsing_init() { + let cli = Cli::try_parse_from(["gisa", "init", "--force"]).unwrap(); + match cli.command { + Some(Command::Init(args)) => assert!(args.force), + _ => panic!("Expected Init command"), + } +} + +#[test] +fn test_cli_parsing_setup() { + let cli = Cli::try_parse_from(["gisa", "setup"]).unwrap(); + match cli.command { + Some(Command::Setup(args)) => assert!(args.name.is_none()), + _ => panic!("Expected Setup command"), + } +} + +#[test] +fn test_cli_parsing_setup_with_name() { + let cli = Cli::try_parse_from(["gisa", "setup", "--name", "work"]).unwrap(); + match cli.command { + Some(Command::Setup(args)) => assert_eq!(args.name, Some("work".to_string())), + _ => panic!("Expected Setup command"), + } +} + +#[test] +fn test_cli_parsing_sync() { + let cli = Cli::try_parse_from(["gisa", "sync", "--pull", "--dry-run"]).unwrap(); + match cli.command { + Some(Command::Sync(args)) => { + assert!(args.pull); + assert!(args.dry_run); + assert!(args.workspace.is_none()); + } + _ => panic!("Expected Sync command"), + } +} + +#[test] +fn test_cli_parsing_sync_with_workspace() { + let cli = Cli::try_parse_from([ + "gisa", + "sync", + "--workspace", + "github", + "--concurrency", + "8", + ]) + .unwrap(); + match cli.command { + Some(Command::Sync(args)) => { + assert_eq!(args.workspace, Some("github".to_string())); + assert_eq!(args.concurrency, Some(8)); + } + _ => panic!("Expected Sync command"), + } +} + +#[test] +fn test_cli_parsing_status() { + let cli = Cli::try_parse_from(["gisa", "status", "--uncommitted", "--detailed"]).unwrap(); + match cli.command { + Some(Command::Status(args)) => { + assert!(args.uncommitted); + assert!(args.detailed); + 
assert!(args.workspace.is_none()); + } + _ => panic!("Expected Status command"), + } +} + +#[test] +fn test_cli_parsing_status_with_workspace() { + let cli = Cli::try_parse_from(["gisa", "status", "--workspace", "work"]).unwrap(); + match cli.command { + Some(Command::Status(args)) => { + assert_eq!(args.workspace, Some("work".to_string())); + } + _ => panic!("Expected Status command"), + } +} + +#[test] +fn test_cli_rejects_clone_subcommand() { + let cli = Cli::try_parse_from(["gisa", "clone"]); + assert!(cli.is_err()); +} + +#[test] +fn test_cli_rejects_fetch_subcommand() { + let cli = Cli::try_parse_from(["gisa", "fetch"]); + assert!(cli.is_err()); +} + +#[test] +fn test_cli_rejects_pull_subcommand() { + let cli = Cli::try_parse_from(["gisa", "pull"]); + assert!(cli.is_err()); +} + +#[test] +fn test_cli_parsing_reset() { + let cli = Cli::try_parse_from(["gisa", "reset"]).unwrap(); + match cli.command { + Some(Command::Reset(args)) => assert!(!args.force), + _ => panic!("Expected Reset command"), + } +} + +#[test] +fn test_cli_parsing_reset_force() { + let cli = Cli::try_parse_from(["gisa", "reset", "--force"]).unwrap(); + match cli.command { + Some(Command::Reset(args)) => assert!(args.force), + _ => panic!("Expected Reset command"), + } +} + +#[test] +fn test_cli_global_flags() { + let cli = Cli::try_parse_from(["gisa", "-vvv", "--json", "sync"]).unwrap(); + assert_eq!(cli.verbose, 3); + assert!(cli.json); + assert_eq!(cli.verbosity(), 3); +} + +#[test] +fn test_cli_quiet_flag() { + let cli = Cli::try_parse_from(["gisa", "--quiet", "sync"]).unwrap(); + assert!(cli.quiet); + assert!(cli.is_quiet()); + assert_eq!(cli.verbosity(), 0); +} + +#[test] +fn test_cli_no_subcommand() { + let cli = Cli::try_parse_from(["gisa"]).unwrap(); + assert!(cli.command.is_none()); +} + +#[test] +fn test_cli_parsing_workspace_list() { + let cli = Cli::try_parse_from(["gisa", "workspace", "list"]).unwrap(); + match cli.command { + Some(Command::Workspace(args)) => { + 
assert!(matches!(args.command, WorkspaceCommand::List)); + } + _ => panic!("Expected Workspace command"), + } +} + +#[test] +fn test_cli_parsing_workspace_default_set() { + let cli = Cli::try_parse_from(["gisa", "workspace", "default", "my-ws"]).unwrap(); + match cli.command { + Some(Command::Workspace(args)) => match args.command { + WorkspaceCommand::Default(d) => { + assert_eq!(d.name, Some("my-ws".to_string())); + assert!(!d.clear); + } + _ => panic!("Expected Default subcommand"), + }, + _ => panic!("Expected Workspace command"), + } +} + +#[test] +fn test_cli_parsing_workspace_default_clear() { + let cli = Cli::try_parse_from(["gisa", "workspace", "default", "--clear"]).unwrap(); + match cli.command { + Some(Command::Workspace(args)) => match args.command { + WorkspaceCommand::Default(d) => { + assert!(d.clear); + assert!(d.name.is_none()); + } + _ => panic!("Expected Default subcommand"), + }, + _ => panic!("Expected Workspace command"), + } +} + +#[test] +fn test_cli_parsing_workspace_default_show() { + let cli = Cli::try_parse_from(["gisa", "workspace", "default"]).unwrap(); + match cli.command { + Some(Command::Workspace(args)) => match args.command { + WorkspaceCommand::Default(d) => { + assert!(d.name.is_none()); + assert!(!d.clear); + } + _ => panic!("Expected Default subcommand"), + }, + _ => panic!("Expected Workspace command"), + } +} + +#[test] +fn verify_cli() { + use clap::CommandFactory; + Cli::command().debug_assert(); +} diff --git a/src/commands/init.rs b/src/commands/init.rs new file mode 100644 index 0000000..0b99112 --- /dev/null +++ b/src/commands/init.rs @@ -0,0 +1,77 @@ +//! Init command handler. +//! +//! Checks system requirements and writes the global configuration file. + +use crate::checks::{self, CheckResult}; +use crate::cli::InitArgs; +use crate::config::Config; +use crate::errors::{AppError, Result}; +use crate::output::Output; + +/// Initialize gisa configuration. 
+pub async fn run(args: &InitArgs, output: &Output) -> Result<()> { + // Step 1: Run requirement checks + output.info("Checking requirements..."); + let results = checks::check_requirements().await; + display_check_results(&results, output); + + let critical_failures: Vec<&CheckResult> = + results.iter().filter(|r| !r.passed && r.critical).collect(); + if !critical_failures.is_empty() { + output.warn("Some critical checks failed. You can still create the config, but gisa may not work correctly."); + } + + // Step 2: Write global config + let config_path = match args.path.clone() { + Some(p) => p, + None => Config::default_path()?, + }; + + if config_path.exists() && !args.force { + return Err(AppError::config(format!( + "Config file already exists at {}. Use --force to overwrite.", + config_path.display() + ))); + } + + // Create parent directory + if let Some(parent) = config_path.parent() { + std::fs::create_dir_all(parent) + .map_err(|e| AppError::path(format!("Failed to create config directory: {}", e)))?; + } + + // Write default config + let default_config = Config::default_toml(); + std::fs::write(&config_path, default_config) + .map_err(|e| AppError::path(format!("Failed to write config: {}", e)))?; + + output.success(&format!("Created config at {}", config_path.display())); + + // Step 3: Next steps + output.info("Run 'git-same setup' to configure a local folder as workspace."); + + Ok(()) +} + +/// Display check results with pass/fail indicators. 
+fn display_check_results(results: &[CheckResult], output: &Output) { + for result in results { + if result.passed { + output.success(&format!(" {} — {}", result.name, result.message)); + } else if result.critical { + output.error(&format!(" {} — {}", result.name, result.message)); + if let Some(ref suggestion) = result.suggestion { + output.info(&format!(" {}", suggestion)); + } + } else { + output.warn(&format!(" {} — {}", result.name, result.message)); + if let Some(ref suggestion) = result.suggestion { + output.info(&format!(" {}", suggestion)); + } + } + } +} + +#[cfg(test)] +#[path = "init_tests.rs"] +mod tests; diff --git a/src/commands/init_tests.rs b/src/commands/init_tests.rs new file mode 100644 index 0000000..69494d6 --- /dev/null +++ b/src/commands/init_tests.rs @@ -0,0 +1,113 @@ +use super::*; +use crate::cli::InitArgs; +use tempfile::TempDir; + +fn quiet_output() -> Output { + Output::new(crate::output::Verbosity::Quiet, false) +} + +#[tokio::test] +async fn test_init_creates_config() { + let temp = TempDir::new().unwrap(); + let config_path = temp.path().join("config.toml"); + let args = InitArgs { + force: false, + path: Some(config_path.clone()), + }; + let output = quiet_output(); + + let result = run(&args, &output).await; + assert!(result.is_ok()); + assert!(config_path.exists()); + let content = std::fs::read_to_string(&config_path).unwrap(); + assert!(!content.is_empty()); +} + +#[tokio::test] +async fn test_init_creates_config_dir() { + let temp = TempDir::new().unwrap(); + let config_path = temp.path().join("git-same/config.toml"); + let args = InitArgs { + force: false, + path: Some(config_path.clone()), + }; + let output = quiet_output(); + + let result = run(&args, &output).await; + assert!(result.is_ok()); + + let config_dir = temp.path().join("git-same"); + assert!(config_dir.exists()); + assert!(config_dir.is_dir()); +} + +#[tokio::test] +async fn test_init_fails_if_exists_without_force() { + let temp = TempDir::new().unwrap(); + let 
config_path = temp.path().join("config.toml"); + std::fs::write(&config_path, "existing").unwrap(); + + let args = InitArgs { + force: false, + path: Some(config_path), + }; + let output = quiet_output(); + + let result = run(&args, &output).await; + assert!(result.is_err()); +} + +#[tokio::test] +async fn test_init_overwrites_with_force() { + let temp = TempDir::new().unwrap(); + let config_path = temp.path().join("config.toml"); + std::fs::write(&config_path, "old content").unwrap(); + + let args = InitArgs { + force: true, + path: Some(config_path.clone()), + }; + let output = quiet_output(); + + let result = run(&args, &output).await; + assert!(result.is_ok()); + let content = std::fs::read_to_string(&config_path).unwrap(); + assert_ne!(content, "old content"); +} + +#[tokio::test] +async fn test_init_creates_parent_dirs() { + let temp = TempDir::new().unwrap(); + let config_path = temp.path().join("nested/deep/config.toml"); + let args = InitArgs { + force: false, + path: Some(config_path.clone()), + }; + let output = quiet_output(); + + let result = run(&args, &output).await; + assert!(result.is_ok()); + assert!(config_path.exists()); +} + +#[test] +fn test_display_check_results_no_panic() { + let results = vec![ + CheckResult { + name: "Git".to_string(), + passed: true, + message: "git 2.43.0".to_string(), + suggestion: None, + critical: true, + }, + CheckResult { + name: "SSH".to_string(), + passed: false, + message: "no keys".to_string(), + suggestion: Some("Generate a key".to_string()), + critical: false, + }, + ]; + let output = quiet_output(); + display_check_results(&results, &output); +} diff --git a/src/commands/mod.rs b/src/commands/mod.rs new file mode 100644 index 0000000..b4a1ea4 --- /dev/null +++ b/src/commands/mod.rs @@ -0,0 +1,73 @@ +//! Command handlers for the CLI subcommands. +//! +//! This module contains the runtime behavior for each subcommand, +//! separated from `main.rs` so the entrypoint stays focused on bootstrapping. 
+
+pub mod init;
+pub mod reset;
+#[cfg(feature = "tui")]
+pub mod setup;
+pub mod status;
+pub mod support;
+pub mod sync_cmd;
+pub mod workspace;
+
+pub use init::run as run_init;
+pub use status::run as run_status;
+pub use sync_cmd::run as run_sync_cmd;
+
+use crate::cli::Command;
+use crate::config::Config;
+use crate::errors::{AppError, Result};
+use crate::output::Output;
+use std::path::Path;
+
+pub(crate) use support::{ensure_base_path, warn_if_concurrency_capped};
+
+/// Run the specified command.
+pub async fn run_command(
+    command: &Command,
+    config_path: Option<&Path>,
+    output: &Output,
+) -> Result<()> {
+    // Commands that don't need config
+    if let Command::Init(args) = command {
+        return run_init(args, output).await;
+    }
+    if let Command::Reset(args) = command {
+        return reset::run(args, output).await;
+    }
+    #[cfg(feature = "tui")]
+    if let Command::Setup(args) = command {
+        return setup::run(args, output).await;
+    }
+
+    // Load config for all other commands
+    let config = load_config(config_path)?;
+
+    match command {
+        Command::Init(_) | Command::Reset(_) => unreachable!(),
+        #[cfg(feature = "tui")]
+        Command::Setup(_) => unreachable!(),
+        Command::Sync(args) => run_sync_cmd(args, &config, output).await,
+        Command::Status(args) => run_status(args, &config, output).await,
+        Command::Workspace(args) => workspace::run(args, &config, output),
+    }
+}
+
+/// Load configuration from the given path or default location.
+///
+/// Returns an error suggesting `gisa init` when no config file exists
+/// at the default location, rather than silently using defaults.
+fn load_config(config_path: Option<&Path>) -> Result<Config> {
+    let path = match config_path {
+        Some(p) => p.to_path_buf(),
+        None => Config::default_path()?,
+    };
+    if !path.exists() {
+        return Err(AppError::config(
+            "No configuration found. 
Run 'gisa init' to create one.",
+        ));
+    }
+    Config::load_from(&path)
+}
diff --git a/src/commands/reset.rs b/src/commands/reset.rs
new file mode 100644
index 0000000..deb0d3e
--- /dev/null
+++ b/src/commands/reset.rs
@@ -0,0 +1,425 @@
+//! Reset command handler.
+//!
+//! Removes gisa configuration, workspace configs, and caches.
+//! Supports interactive scope selection or `--force` for scripting.
+
+use crate::cli::ResetArgs;
+use crate::config::{Config, WorkspaceConfig, WorkspaceManager};
+use crate::errors::{AppError, Result};
+use crate::output::Output;
+use chrono::{DateTime, Utc};
+use std::io::{self, BufRead, Write};
+use std::path::PathBuf;
+
+/// What scope of reset to perform.
+enum ResetScope {
+    Everything,
+    ConfigOnly,
+    AllWorkspaces,
+    Workspace(String),
+}
+
+/// Rich detail about a single workspace for display.
+struct WorkspaceDetail {
+    name: String,
+    base_path: String,
+    orgs: Vec<String>,
+    last_synced: Option<String>,
+    dir: PathBuf,
+    cache_size: Option<u64>,
+}
+
+/// Everything that could be removed.
+struct ResetTarget {
+    config_dir: PathBuf,
+    config_file: Option<PathBuf>,
+    workspaces: Vec<WorkspaceDetail>,
+}
+
+impl ResetTarget {
+    fn is_empty(&self) -> bool {
+        self.config_file.is_none() && self.workspaces.is_empty()
+    }
+
+    fn has_workspaces(&self) -> bool {
+        !self.workspaces.is_empty()
+    }
+}
+
+/// Run the reset command.
+pub async fn run(args: &ResetArgs, output: &Output) -> Result<()> {
+    let target = discover_targets()?;
+
+    if target.is_empty() {
+        output.info("Nothing to reset — gisa is not configured.");
+        return Ok(());
+    }
+
+    // --force: delete everything, no prompts
+    if args.force {
+        display_detailed_targets(&ResetScope::Everything, &target, output);
+        execute_reset(&ResetScope::Everything, &target, output)?;
+        return Ok(());
+    }
+
+    // Interactive: ask what to reset
+    let scope = prompt_scope(&target)?;
+    display_detailed_targets(&scope, &target, output);
+
+    if !confirm("\nAre you sure? [y/N] ")? 
{
+        output.info("Reset cancelled.");
+        return Ok(());
+    }
+
+    execute_reset(&scope, &target, output)?;
+    Ok(())
+}
+
+/// Discover what files and directories exist that could be removed.
+fn discover_targets() -> Result<ResetTarget> {
+    let config_path = Config::default_path()?;
+    let config_dir = config_path
+        .parent()
+        .ok_or_else(|| AppError::config("Cannot determine config directory"))?
+        .to_path_buf();
+
+    let config_file = if config_path.exists() {
+        Some(config_path)
+    } else {
+        None
+    };
+
+    let workspaces = WorkspaceManager::list()?
+        .iter()
+        .map(build_workspace_detail)
+        .collect::<Result<Vec<_>>>()?;
+
+    Ok(ResetTarget {
+        config_dir,
+        config_file,
+        workspaces,
+    })
+}
+
+/// Build rich detail for a workspace.
+fn build_workspace_detail(ws: &WorkspaceConfig) -> Result<WorkspaceDetail> {
+    let dir = WorkspaceManager::workspace_dir(&ws.name)?;
+    let cache_file = WorkspaceManager::cache_path(&ws.name)?;
+
+    let cache_size = if cache_file.exists() {
+        std::fs::metadata(&cache_file).map(|m| m.len()).ok()
+    } else {
+        None
+    };
+
+    Ok(WorkspaceDetail {
+        name: ws.name.clone(),
+        base_path: ws.base_path.clone(),
+        orgs: ws.orgs.clone(),
+        last_synced: ws.last_synced.clone(),
+        dir,
+        cache_size,
+    })
+}
+
+/// Display detailed information about what will be deleted. 
+fn display_detailed_targets(scope: &ResetScope, target: &ResetTarget, output: &Output) { + output.warn("The following will be permanently deleted:"); + + match scope { + ResetScope::Everything => { + if let Some(ref path) = target.config_file { + output.info(&format!(" Global config: {}", path.display())); + } + for ws in &target.workspaces { + display_workspace_detail(ws, output); + } + } + ResetScope::ConfigOnly => { + if let Some(ref path) = target.config_file { + output.info(&format!(" Global config: {}", path.display())); + } + } + ResetScope::AllWorkspaces => { + for ws in &target.workspaces { + display_workspace_detail(ws, output); + } + } + ResetScope::Workspace(name) => { + if let Some(ws) = target.workspaces.iter().find(|w| w.name == *name) { + display_workspace_detail(ws, output); + } + } + } +} + +/// Display detail for a single workspace. +fn display_workspace_detail(ws: &WorkspaceDetail, output: &Output) { + output.info(&format!(" Workspace at {}:", ws.base_path)); + + if ws.orgs.is_empty() { + output.info(" Orgs: (all)"); + } else { + output.info(&format!( + " Orgs: {} ({})", + ws.orgs.join(", "), + ws.orgs.len() + )); + } + + let synced = ws + .last_synced + .as_deref() + .map(humanize_timestamp) + .unwrap_or_else(|| "never".to_string()); + output.info(&format!(" Last synced: {}", synced)); + + if let Some(size) = ws.cache_size { + output.info(&format!(" Cache: {}", format_bytes(size))); + } + + output.info(&format!(" Directory: {}", ws.dir.display())); +} + +/// Execute the reset based on scope. 
+fn execute_reset(scope: &ResetScope, target: &ResetTarget, output: &Output) -> Result<()> { + let mut had_errors = false; + + match scope { + ResetScope::Everything => { + for ws in &target.workspaces { + had_errors |= !remove_workspace_dir(ws, output); + } + if let Some(ref path) = target.config_file { + had_errors |= !remove_file(path, "config", output); + } + try_remove_empty_dir(&target.config_dir, output); + } + ResetScope::ConfigOnly => { + if let Some(ref path) = target.config_file { + had_errors |= !remove_file(path, "config", output); + } + } + ResetScope::AllWorkspaces => { + for ws in &target.workspaces { + had_errors |= !remove_workspace_dir(ws, output); + } + } + ResetScope::Workspace(name) => { + if let Some(ws) = target.workspaces.iter().find(|w| w.name == *name) { + had_errors |= !remove_workspace_dir(ws, output); + } else { + output.warn(&format!("Workspace '{}' not found.", name)); + had_errors = true; + } + } + } + + if had_errors { + Err(AppError::config( + "Reset completed with one or more removal errors.", + )) + } else { + match scope { + ResetScope::Everything => { + output.success("Reset complete. 
Run 'gisa init' to start fresh."); + } + ResetScope::ConfigOnly => { + output.success("Global config removed."); + } + ResetScope::AllWorkspaces => { + output.success("All workspaces removed."); + } + ResetScope::Workspace(name) => { + output.success(&format!("Workspace '{}' removed.", name)); + } + } + Ok(()) + } +} + +fn remove_workspace_dir(ws: &WorkspaceDetail, output: &Output) -> bool { + match std::fs::remove_dir_all(&ws.dir) { + Ok(()) => { + output.success(&format!("Removed workspace at {}", ws.base_path)); + true + } + Err(e) => { + output.warn(&format!( + "Failed to remove workspace at {}: {}", + ws.base_path, e + )); + false + } + } +} + +fn remove_file(path: &PathBuf, label: &str, output: &Output) -> bool { + match std::fs::remove_file(path) { + Ok(()) => { + output.success(&format!("Removed {}: {}", label, path.display())); + true + } + Err(e) => { + output.warn(&format!("Failed to remove {}: {}", label, e)); + false + } + } +} + +fn try_remove_empty_dir(dir: &PathBuf, output: &Output) { + if dir.exists() { + match std::fs::remove_dir(dir) { + Ok(()) => output.verbose(&format!("Removed directory: {}", dir.display())), + Err(_) => output.verbose(&format!( + "Config directory not empty, leaving: {}", + dir.display() + )), + } + } +} + +// --- Interactive prompts (all write to stderr) --- + +/// Prompt user to select what to reset. 
+fn prompt_scope(target: &ResetTarget) -> Result<ResetScope> {
+    eprintln!("What would you like to reset?");
+
+    let mut options: Vec<(&str, ResetScope)> = Vec::new();
+
+    if target.config_file.is_some() && target.has_workspaces() {
+        options.push((
+            "Everything (global config + all workspaces)",
+            ResetScope::Everything,
+        ));
+    }
+
+    if target.config_file.is_some() {
+        options.push(("Global config only", ResetScope::ConfigOnly));
+    }
+
+    if target.workspaces.len() > 1 {
+        options.push(("All workspaces", ResetScope::AllWorkspaces));
+    }
+
+    if target.has_workspaces() {
+        options.push(("A specific workspace", ResetScope::Workspace(String::new())));
+    }
+
+    // If only one option, skip the menu
+    if options.len() == 1 {
+        let (_, scope) = options.remove(0);
+        return match scope {
+            ResetScope::Workspace(_) => prompt_workspace(&target.workspaces),
+            other => Ok(other),
+        };
+    }
+
+    for (i, (label, _)) in options.iter().enumerate() {
+        eprintln!(" {}. {}", i + 1, label);
+    }
+
+    let choice = prompt_number("> ", options.len())?;
+    let (_, scope) = options.remove(choice - 1);
+
+    match scope {
+        ResetScope::Workspace(_) => prompt_workspace(&target.workspaces),
+        other => Ok(other),
+    }
+}
+
+/// Prompt user to pick a specific workspace.
+fn prompt_workspace(workspaces: &[WorkspaceDetail]) -> Result<ResetScope> {
+    eprintln!("\nSelect a workspace to delete:");
+    for (i, ws) in workspaces.iter().enumerate() {
+        let orgs = if ws.orgs.is_empty() {
+            "all orgs".to_string()
+        } else {
+            format!("{} org(s)", ws.orgs.len())
+        };
+        let synced = ws
+            .last_synced
+            .as_deref()
+            .map(humanize_timestamp)
+            .unwrap_or_else(|| "never synced".to_string());
+        eprintln!(" {}. {} ({}, {})", i + 1, ws.base_path, orgs, synced);
+    }
+
+    let choice = prompt_number("> ", workspaces.len())?;
+    Ok(ResetScope::Workspace(workspaces[choice - 1].name.clone()))
+}
+
+/// Read a number from stdin (1-based, within max). 
+fn prompt_number(prompt: &str, max: usize) -> Result<usize> {
+    loop {
+        eprint!("{}", prompt);
+        io::stderr().flush()?;
+
+        let stdin = io::stdin();
+        let mut line = String::new();
+        let bytes_read = stdin.lock().read_line(&mut line)?;
+        if bytes_read == 0 {
+            return Err(AppError::Interrupted);
+        }
+
+        match line.trim().parse::<usize>() {
+            Ok(n) if n >= 1 && n <= max => return Ok(n),
+            _ => eprintln!("Please enter a number between 1 and {}.", max),
+        }
+    }
+}
+
+/// Prompt the user for y/N confirmation.
+fn confirm(prompt: &str) -> Result<bool> {
+    eprint!("{}", prompt);
+    io::stderr().flush()?;
+
+    let stdin = io::stdin();
+    let mut line = String::new();
+    stdin.lock().read_line(&mut line)?;
+
+    let answer = line.trim().to_lowercase();
+    Ok(answer == "y" || answer == "yes")
+}
+
+// --- Formatting helpers ---
+
+/// Humanize an ISO 8601 timestamp to a relative string like "2h ago".
+fn humanize_timestamp(ts: &str) -> String {
+    let parsed = ts
+        .parse::<DateTime<Utc>>()
+        .or_else(|_| DateTime::parse_from_rfc3339(ts).map(|dt| dt.with_timezone(&Utc)));
+
+    let Ok(dt) = parsed else {
+        return ts.to_string();
+    };
+
+    let duration = Utc::now().signed_duration_since(dt);
+
+    if duration.num_days() > 30 {
+        format!("{}mo ago", duration.num_days() / 30)
+    } else if duration.num_days() > 0 {
+        format!("{}d ago", duration.num_days())
+    } else if duration.num_hours() > 0 {
+        format!("{}h ago", duration.num_hours())
+    } else if duration.num_minutes() > 0 {
+        format!("{}m ago", duration.num_minutes())
+    } else {
+        "just now".to_string()
+    }
+}
+
+/// Format bytes to human-readable string. 
+fn format_bytes(bytes: u64) -> String { + if bytes >= 1_048_576 { + format!("{:.1} MB", bytes as f64 / 1_048_576.0) + } else if bytes >= 1024 { + format!("{:.1} KB", bytes as f64 / 1024.0) + } else { + format!("{} B", bytes) + } +} + +#[cfg(test)] +#[path = "reset_tests.rs"] +mod tests; diff --git a/src/commands/reset_tests.rs b/src/commands/reset_tests.rs new file mode 100644 index 0000000..0934651 --- /dev/null +++ b/src/commands/reset_tests.rs @@ -0,0 +1,112 @@ +use super::*; + +#[test] +fn test_reset_target_is_empty_when_nothing_exists() { + let target = ResetTarget { + config_dir: PathBuf::from("/nonexistent"), + config_file: None, + workspaces: Vec::new(), + }; + assert!(target.is_empty()); +} + +#[test] +fn test_reset_target_not_empty_with_config() { + let target = ResetTarget { + config_dir: PathBuf::from("/some/dir"), + config_file: Some(PathBuf::from("/some/dir/config.toml")), + workspaces: Vec::new(), + }; + assert!(!target.is_empty()); +} + +#[test] +fn test_reset_target_not_empty_with_workspaces() { + let target = ResetTarget { + config_dir: PathBuf::from("/some/dir"), + config_file: None, + workspaces: vec![WorkspaceDetail { + name: "ws1".to_string(), + base_path: "~/github".to_string(), + orgs: vec!["org1".to_string()], + last_synced: None, + dir: PathBuf::from("/some/dir/ws1"), + cache_size: None, + }], + }; + assert!(!target.is_empty()); +} + +#[test] +fn test_humanize_timestamp_hours() { + let ts = (Utc::now() - chrono::Duration::hours(3)).to_rfc3339(); + assert_eq!(humanize_timestamp(&ts), "3h ago"); +} + +#[test] +fn test_humanize_timestamp_days() { + let ts = (Utc::now() - chrono::Duration::days(5)).to_rfc3339(); + assert_eq!(humanize_timestamp(&ts), "5d ago"); +} + +#[test] +fn test_humanize_timestamp_invalid() { + assert_eq!(humanize_timestamp("not-a-date"), "not-a-date"); +} + +#[test] +fn test_humanize_timestamp_just_now() { + let ts = Utc::now().to_rfc3339(); + assert_eq!(humanize_timestamp(&ts), "just now"); +} + +#[test] +fn 
test_format_bytes() { + assert_eq!(format_bytes(500), "500 B"); + assert_eq!(format_bytes(1024), "1.0 KB"); + assert_eq!(format_bytes(15360), "15.0 KB"); + assert_eq!(format_bytes(1_048_576), "1.0 MB"); +} + +#[test] +fn test_display_workspace_detail_no_panic() { + let ws = WorkspaceDetail { + name: "test".to_string(), + base_path: "~/github".to_string(), + orgs: vec!["org1".to_string(), "org2".to_string()], + last_synced: Some("2026-02-24T10:00:00Z".to_string()), + dir: PathBuf::from("/tmp/test"), + cache_size: Some(12345), + }; + let output = Output::new(crate::output::Verbosity::Quiet, false); + display_workspace_detail(&ws, &output); +} + +#[test] +fn test_display_detailed_targets_everything() { + let target = ResetTarget { + config_dir: PathBuf::from("/tmp/test"), + config_file: Some(PathBuf::from("/tmp/test/config.toml")), + workspaces: vec![WorkspaceDetail { + name: "ws1".to_string(), + base_path: "~/github".to_string(), + orgs: Vec::new(), + last_synced: None, + dir: PathBuf::from("/tmp/test/ws1"), + cache_size: None, + }], + }; + let output = Output::new(crate::output::Verbosity::Quiet, false); + display_detailed_targets(&ResetScope::Everything, &target, &output); +} + +#[test] +fn test_display_detailed_targets_config_only() { + let target = ResetTarget { + config_dir: PathBuf::from("/tmp/test"), + config_file: Some(PathBuf::from("/tmp/test/config.toml")), + workspaces: Vec::new(), + }; + let output = Output::new(crate::output::Verbosity::Quiet, false); + display_detailed_targets(&ResetScope::ConfigOnly, &target, &output); +} diff --git a/src/commands/setup.rs b/src/commands/setup.rs new file mode 100644 index 0000000..77d0b27 --- /dev/null +++ b/src/commands/setup.rs @@ -0,0 +1,23 @@ +//! Setup command handler. +//! +//! Thin wrapper that launches the interactive setup wizard. 
+ +#[cfg(feature = "tui")] +use crate::cli::SetupArgs; +#[cfg(feature = "tui")] +use crate::errors::Result; +#[cfg(feature = "tui")] +use crate::output::Output; + +/// Run the setup wizard. +#[cfg(feature = "tui")] +pub async fn run(_args: &SetupArgs, output: &Output) -> Result<()> { + let completed = crate::setup::run_setup().await?; + if completed { + output.success("Workspace configured successfully"); + output.info("Run 'gisa sync' to sync your repositories"); + } else { + output.info("Setup cancelled"); + } + Ok(()) +} diff --git a/src/commands/status.rs b/src/commands/status.rs new file mode 100644 index 0000000..e1d4b90 --- /dev/null +++ b/src/commands/status.rs @@ -0,0 +1,131 @@ +//! Status command handler. + +use crate::cli::StatusArgs; +use crate::config::{Config, WorkspaceManager}; +use crate::discovery::DiscoveryOrchestrator; +use crate::errors::Result; +use crate::git::{GitOperations, ShellGit}; +use crate::output::{format_count, Output}; + +/// Show status of repositories. 
+pub async fn run(args: &StatusArgs, config: &Config, output: &Output) -> Result<()> { + let mut workspace = WorkspaceManager::resolve(args.workspace.as_deref(), config)?; + + // Ensure base path exists (offer to fix if user moved it) + super::ensure_base_path(&mut workspace, output)?; + let base_path = workspace.expanded_base_path(); + + let structure = workspace.structure.as_deref().unwrap_or(&config.structure); + + // Scan local repositories + let git = ShellGit::new(); + let orchestrator = DiscoveryOrchestrator::new(workspace.filters.clone(), structure.to_string()); + let local_repos = orchestrator.scan_local(&base_path, &git); + + if local_repos.is_empty() { + output.warn("No repositories found"); + return Ok(()); + } + + output.info(&format_count(local_repos.len(), "repositories found")); + + // Get status for each + let mut uncommitted_count = 0; + let mut behind_count = 0; + let mut error_count = 0; + + for (path, org, name) in &local_repos { + let status = git.status(path); + + match status { + Ok(s) => { + let is_uncommitted = s.is_uncommitted || s.has_untracked; + let is_behind = s.behind > 0; + + // Apply filters + if args.uncommitted && !is_uncommitted { + continue; + } + if args.behind && !is_behind { + continue; + } + if !args.org.is_empty() && !args.org.contains(org) { + continue; + } + + if is_uncommitted { + uncommitted_count += 1; + } + if is_behind { + behind_count += 1; + } + + // Print status + let full_name = format!("{}/{}", org, name); + if args.detailed { + println!("{}", full_name); + println!(" Branch: {}", s.branch); + if s.ahead > 0 || s.behind > 0 { + println!(" Ahead: {}, Behind: {}", s.ahead, s.behind); + } + if s.is_uncommitted { + println!(" Status: uncommitted changes"); + } + if s.has_untracked { + println!(" Status: has untracked files"); + } + } else { + let mut indicators = Vec::new(); + if is_uncommitted { + indicators.push("*".to_string()); + } + if s.ahead > 0 { + indicators.push(format!("+{}", s.ahead)); + } + if s.behind 
> 0 { + indicators.push(format!("-{}", s.behind)); + } + + if indicators.is_empty() { + println!(" {} (clean)", full_name); + } else { + println!(" {} [{}]", full_name, indicators.join(", ")); + } + } + } + Err(e) => { + error_count += 1; + output.verbose(&format!(" {}/{} - error: {}", org, name, e)); + } + } + } + + // Summary + println!(); + if uncommitted_count > 0 { + output.warn(&format!( + "{} repositories have uncommitted changes", + uncommitted_count + )); + } + if behind_count > 0 { + output.info(&format!( + "{} repositories are behind upstream", + behind_count + )); + } + if error_count > 0 { + output.warn(&format!( + "{} repositories could not be checked", + error_count + )); + } else if uncommitted_count == 0 && behind_count == 0 { + output.success("All repositories are clean and up to date"); + } + + Ok(()) +} + +#[cfg(test)] +#[path = "status_tests.rs"] +mod tests; diff --git a/src/commands/status_tests.rs b/src/commands/status_tests.rs new file mode 100644 index 0000000..4500d0d --- /dev/null +++ b/src/commands/status_tests.rs @@ -0,0 +1,28 @@ +use super::*; +use crate::output::Verbosity; + +fn quiet_output() -> Output { + Output::new(Verbosity::Quiet, false) +} + +#[tokio::test] +async fn test_status_no_workspaces() { + let args = StatusArgs { + workspace: Some("nonexistent".to_string()), + uncommitted: false, + behind: false, + detailed: false, + org: vec![], + }; + let config = Config::default(); + let output = quiet_output(); + + let result = run(&args, &config, &output).await; + let err = result.expect_err("nonexistent workspace should return an error"); + assert!( + err.to_string().contains("not found") + || err.to_string().contains("No workspace configured for path"), + "unexpected error: {}", + err + ); +} diff --git a/src/commands/support/concurrency.rs b/src/commands/support/concurrency.rs new file mode 100644 index 0000000..3f29541 --- /dev/null +++ b/src/commands/support/concurrency.rs @@ -0,0 +1,20 @@ +use 
crate::operations::clone::MAX_CONCURRENCY; +use crate::output::Output; + +/// Warn if requested concurrency exceeds the maximum. +/// Returns the effective concurrency to use. +pub(crate) fn warn_if_concurrency_capped(requested: usize, output: &Output) -> usize { + if requested > MAX_CONCURRENCY { + output.warn(&format!( + "Requested concurrency {} exceeds maximum {}. Using {} instead.", + requested, MAX_CONCURRENCY, MAX_CONCURRENCY + )); + MAX_CONCURRENCY + } else { + requested + } +} + +#[cfg(test)] +#[path = "concurrency_tests.rs"] +mod tests; diff --git a/src/commands/support/concurrency_tests.rs b/src/commands/support/concurrency_tests.rs new file mode 100644 index 0000000..f622572 --- /dev/null +++ b/src/commands/support/concurrency_tests.rs @@ -0,0 +1,30 @@ +use super::*; +use crate::output::{Output, Verbosity}; + +fn quiet_output() -> Output { + Output::new(Verbosity::Quiet, false) +} + +#[test] +fn test_concurrency_within_limit() { + let output = quiet_output(); + assert_eq!(warn_if_concurrency_capped(4, &output), 4); +} + +#[test] +fn test_concurrency_at_limit() { + let output = quiet_output(); + assert_eq!( + warn_if_concurrency_capped(MAX_CONCURRENCY, &output), + MAX_CONCURRENCY + ); +} + +#[test] +fn test_concurrency_above_limit() { + let output = quiet_output(); + assert_eq!( + warn_if_concurrency_capped(MAX_CONCURRENCY + 10, &output), + MAX_CONCURRENCY + ); +} diff --git a/src/commands/support/mod.rs b/src/commands/support/mod.rs new file mode 100644 index 0000000..293baea --- /dev/null +++ b/src/commands/support/mod.rs @@ -0,0 +1,7 @@ +//! Shared command helpers. 
+ +pub mod concurrency; +pub mod workspace; + +pub(crate) use concurrency::warn_if_concurrency_capped; +pub(crate) use workspace::ensure_base_path; diff --git a/src/commands/support/workspace.rs b/src/commands/support/workspace.rs new file mode 100644 index 0000000..2c5b881 --- /dev/null +++ b/src/commands/support/workspace.rs @@ -0,0 +1,68 @@ +use crate::config::{WorkspaceConfig, WorkspaceManager}; +use crate::errors::{AppError, Result}; +use crate::output::Output; +use std::io::{self, BufRead, Write}; + +/// Ensure the workspace base_path exists. +/// +/// If the configured path is missing, checks whether the current directory +/// could be the new location and offers to update the workspace config. +/// Returns an error if the path cannot be resolved. +pub(crate) fn ensure_base_path(workspace: &mut WorkspaceConfig, output: &Output) -> Result<()> { + let base_path = workspace.expanded_base_path(); + if base_path.is_dir() { + return Ok(()); + } + if base_path.exists() { + return Err(AppError::config(format!( + "Base path '{}' exists but is not a directory.", + base_path.display() + ))); + } + + let cwd = std::env::current_dir() + .map_err(|e| AppError::path(format!("Cannot determine current directory: {}", e)))?; + + output.warn(&format!( + "Base path '{}' does not exist.", + workspace.base_path + )); + output.info(&format!("Current directory: {}", cwd.display())); + + let prompt = format!( + "Update workspace at '{}' to use '{}'? [y/N] ", + workspace.base_path, + cwd.display() + ); + + if confirm_stderr(&prompt)? { + workspace.base_path = cwd.to_string_lossy().to_string(); + WorkspaceManager::save(workspace)?; + output.success(&format!("Updated base path to '{}'", workspace.base_path)); + Ok(()) + } else { + Err(AppError::config(format!( + "Base path '{}' does not exist. \ + Move to the correct directory and retry, \ + or update manually with 'gisa setup'.", + base_path.display() + ))) + } +} + +/// Prompt on stderr and return true if the user answers y/yes. 
+fn confirm_stderr(prompt: &str) -> Result<bool> {
+    eprint!("{}", prompt);
+    io::stderr().flush()?;
+
+    let stdin = io::stdin();
+    let mut line = String::new();
+    stdin.lock().read_line(&mut line)?;
+
+    let answer = line.trim().to_lowercase();
+    Ok(answer == "y" || answer == "yes")
+}
+
+#[cfg(test)]
+#[path = "workspace_tests.rs"]
+mod tests;
diff --git a/src/commands/support/workspace_tests.rs b/src/commands/support/workspace_tests.rs
new file mode 100644
index 0000000..5190f31
--- /dev/null
+++ b/src/commands/support/workspace_tests.rs
@@ -0,0 +1,33 @@
+use super::*;
+use crate::output::{Output, Verbosity};
+
+#[test]
+fn ensure_base_path_is_noop_when_path_exists() {
+    let temp = tempfile::tempdir().unwrap();
+    let mut workspace = WorkspaceConfig::new("ws", temp.path().to_string_lossy().to_string());
+    let output = Output::new(Verbosity::Quiet, false);
+
+    ensure_base_path(&mut workspace, &output).unwrap();
+    assert_eq!(
+        workspace.base_path,
+        temp.path().to_string_lossy().to_string()
+    );
+}
+
+#[test]
+fn ensure_base_path_rejects_existing_file_path() {
+    let temp = tempfile::tempdir().unwrap();
+    let file_path = temp.path().join("not-a-directory");
+    std::fs::write(&file_path, "x").unwrap();
+
+    let mut workspace = WorkspaceConfig::new("ws", file_path.to_string_lossy().to_string());
+    let output = Output::new(Verbosity::Quiet, false);
+
+    let err = ensure_base_path(&mut workspace, &output).unwrap_err();
+    assert!(err.to_string().contains("not a directory"));
+}
+
+#[test]
+fn confirm_stderr_function_signature_is_stable() {
+    let _fn_ptr: fn(&str) -> Result<bool> = confirm_stderr;
+}
diff --git a/src/commands/sync_cmd.rs b/src/commands/sync_cmd.rs
new file mode 100644
index 0000000..0a806ed
--- /dev/null
+++ b/src/commands/sync_cmd.rs
@@ -0,0 +1,177 @@
+//! Sync command handler.
+//!
+//! Combined operation: discover repos -> clone new ones -> fetch/pull existing ones. 
+ +use super::warn_if_concurrency_capped; +use crate::cli::SyncCmdArgs; +use crate::config::{Config, WorkspaceManager}; +use crate::errors::Result; +use crate::operations::clone::CloneProgress; +use crate::operations::sync::{SyncMode, SyncProgress}; +use crate::output::{ + format_count, CloneProgressBar, DiscoveryProgressBar, Output, SyncProgressBar, Verbosity, +}; +use crate::workflows::sync_workspace::{ + execute_prepared_sync, prepare_sync_workspace, SyncWorkspaceRequest, +}; +use std::sync::Arc; + +/// Sync repositories for a workspace. +pub async fn run(args: &SyncCmdArgs, config: &Config, output: &Output) -> Result<()> { + let verbosity = if output.is_json() { + Verbosity::Quiet + } else { + output.verbosity() + }; + + // Resolve workspace and ensure base path exists (offer to fix if user moved it) + let mut workspace = WorkspaceManager::resolve(args.workspace.as_deref(), config)?; + super::ensure_base_path(&mut workspace, output)?; + + output.info("Discovering repositories..."); + let discovery_progress = DiscoveryProgressBar::new(verbosity); + let prepared = prepare_sync_workspace( + SyncWorkspaceRequest { + config, + workspace: &workspace, + refresh: args.refresh, + skip_uncommitted: !args.no_skip_uncommitted, + pull: args.pull, + concurrency_override: args.concurrency, + create_base_path: false, + }, + &discovery_progress, + ) + .await?; + discovery_progress.finish(); + + output.verbose(&format!( + "Authenticated as {:?} via {}", + prepared.auth.username, prepared.auth.method + )); + + if prepared.used_cache { + if let Some(age_secs) = prepared.cache_age_secs { + output.verbose(&format!( + "Using cached discovery ({} repos, {} seconds old)", + prepared.repos.len(), + age_secs + )); + } + } + + if prepared.repos.is_empty() { + output.warn("No repositories found matching filters"); + return Ok(()); + } + + output.info(&format_count( + prepared.repos.len(), + "repositories discovered", + )); + + let effective_concurrency = 
warn_if_concurrency_capped(prepared.requested_concurrency, output);
+    debug_assert_eq!(effective_concurrency, prepared.effective_concurrency);
+
+    // Dry-run output
+    let had_clones = !prepared.plan.to_clone.is_empty();
+    if args.dry_run {
+        if had_clones {
+            output.info(&format!(
+                "Would clone {} new repositories:",
+                prepared.plan.to_clone.len()
+            ));
+            for repo in &prepared.plan.to_clone {
+                output.info(&format!(" + {}", repo.full_name()));
+            }
+        }
+
+        if !prepared.to_sync.is_empty() {
+            let op = if prepared.sync_mode == SyncMode::Pull {
+                "pull"
+            } else {
+                "fetch"
+            };
+            output.info(&format!(
+                "Would {} {} existing repositories:",
+                op,
+                prepared.to_sync.len()
+            ));
+            for repo in &prepared.to_sync {
+                output.info(&format!(" ~ {}", repo.repo.full_name()));
+            }
+        } else if !had_clones {
+            output.success("All repositories are up to date");
+        }
+
+        return Ok(());
+    }
+
+    // Execute shared workflow
+    let clone_progress = Arc::new(CloneProgressBar::new(
+        prepared.plan.to_clone.len(),
+        verbosity,
+    ));
+    let clone_progress_dyn: Arc<dyn CloneProgress> = clone_progress.clone();
+
+    let operation = if prepared.sync_mode == SyncMode::Pull {
+        "Pull"
+    } else {
+        "Fetch"
+    };
+    let sync_progress = Arc::new(SyncProgressBar::new(
+        prepared.to_sync.len(),
+        verbosity,
+        operation,
+    ));
+    let sync_progress_dyn: Arc<dyn SyncProgress> = sync_progress.clone();
+
+    let outcome =
+        execute_prepared_sync(&prepared, false, clone_progress_dyn, sync_progress_dyn).await;
+
+    if let Some(summary) = &outcome.clone_summary {
+        clone_progress.finish(summary.success, summary.failed, summary.skipped);
+        if summary.has_failures() {
+            output.warn(&format!("{} repositories failed to clone", summary.failed));
+        } else if summary.success > 0 {
+            output.success(&format!("Cloned {} new repositories", summary.success));
+        }
+    }
+
+    if let Some(summary) = &outcome.sync_summary {
+        sync_progress.finish(summary.success, summary.failed, summary.skipped);
+
+        let with_updates = outcome
+            .sync_results
+            .iter()
+            .filter(|r| 
r.had_updates) + .count(); + if summary.has_failures() { + output.warn(&format!( + "{} of {} repositories failed to {}", + summary.failed, + summary.total(), + operation.to_lowercase() + )); + } else { + output.success(&format!( + "{}ed {} repositories ({} with updates)", + operation, summary.success, with_updates + )); + } + } else if !had_clones { + output.success("All repositories are up to date"); + } + + // Update last_synced + workspace.last_synced = Some(chrono::Utc::now().to_rfc3339()); + if let Err(e) = WorkspaceManager::save(&workspace) { + output.verbose(&format!("Warning: Failed to update last_synced: {}", e)); + } + + Ok(()) +} + +#[cfg(test)] +#[path = "sync_cmd_tests.rs"] +mod tests; diff --git a/src/commands/sync_cmd_tests.rs b/src/commands/sync_cmd_tests.rs new file mode 100644 index 0000000..0af5801 --- /dev/null +++ b/src/commands/sync_cmd_tests.rs @@ -0,0 +1,69 @@ +use super::*; +use crate::output::{Output, Verbosity}; +use tokio::sync::Mutex; + +static HOME_LOCK: Mutex<()> = Mutex::const_new(()); + +fn default_args() -> SyncCmdArgs { + SyncCmdArgs { + workspace: None, + pull: false, + dry_run: false, + concurrency: None, + refresh: false, + no_skip_uncommitted: false, + } +} + +#[tokio::test] +async fn run_returns_error_when_no_workspace_is_configured() { + let _lock = HOME_LOCK.lock().await; + let original_home = std::env::var("HOME").ok(); + let temp = tempfile::tempdir().unwrap(); + std::env::set_var("HOME", temp.path()); + + let args = default_args(); + let config = Config::default(); + let output = Output::new(Verbosity::Quiet, false); + + let result = run(&args, &config, &output).await; + + if let Some(home) = original_home { + std::env::set_var("HOME", home); + } else { + std::env::remove_var("HOME"); + } + + let err = result.unwrap_err(); + assert!(err.to_string().contains("No workspaces configured")); +} + +#[tokio::test] +async fn run_returns_error_for_unknown_workspace_name() { + let _lock = HOME_LOCK.lock().await; + let 
original_home = std::env::var("HOME").ok(); + let temp = tempfile::tempdir().unwrap(); + std::env::set_var("HOME", temp.path()); + + let mut args = default_args(); + args.workspace = Some("unknown-workspace".to_string()); + + let config = Config::default(); + let output = Output::new(Verbosity::Quiet, false); + + let result = run(&args, &config, &output).await; + + if let Some(home) = original_home { + std::env::set_var("HOME", home); + } else { + std::env::remove_var("HOME"); + } + + let err = result.unwrap_err(); + assert!(err.to_string().contains("No workspace configured")); +} + +#[test] +fn run_function_is_exposed() { + let _fn_ptr = run; +} diff --git a/src/commands/workspace.rs b/src/commands/workspace.rs new file mode 100644 index 0000000..b7e5e36 --- /dev/null +++ b/src/commands/workspace.rs @@ -0,0 +1,97 @@ +//! Workspace management command handler. + +use crate::cli::{WorkspaceArgs, WorkspaceCommand}; +use crate::config::{Config, WorkspaceManager}; +use crate::errors::Result; +use crate::output::Output; + +/// Run the workspace command. +pub fn run(args: &WorkspaceArgs, config: &Config, output: &Output) -> Result<()> { + match &args.command { + WorkspaceCommand::List => list(config, output), + WorkspaceCommand::Default(default_args) => { + if default_args.clear { + clear_default(output) + } else if let Some(ref name) = default_args.name { + set_default(name, output) + } else { + show_default(config, output) + } + } + } +} + +fn list(config: &Config, output: &Output) -> Result<()> { + let workspaces = WorkspaceManager::list()?; + + if workspaces.is_empty() { + output.info("No workspaces configured. 
Run 'gisa setup' to create one."); + return Ok(()); + } + + let default_name = config.default_workspace.as_deref().unwrap_or(""); + + for ws in &workspaces { + let marker = if ws.name == default_name { "*" } else { " " }; + let last_synced = ws.last_synced.as_deref().unwrap_or("never"); + let org_info = if ws.orgs.is_empty() { + "all orgs".to_string() + } else { + format!("{} orgs", ws.orgs.len()) + }; + let provider_label = ws.provider.kind.display_name(); + + output.plain(&format!( + " {} {} ({}, {}, last synced: {})", + marker, ws.base_path, provider_label, org_info, last_synced + )); + } + + if !default_name.is_empty() { + if let Ok(default_ws) = WorkspaceManager::load(default_name) { + output.plain(""); + output.info(&format!("Default: {}", default_ws.display_label())); + } + } + + Ok(()) +} + +fn show_default(config: &Config, output: &Output) -> Result<()> { + match &config.default_workspace { + Some(name) => { + if let Ok(ws) = WorkspaceManager::load(name) { + output.info(&format!("Default workspace: {}", ws.display_label())); + } else { + output.info(&format!("Default workspace: {} (not found)", name)); + } + } + None => output.info("No default workspace set. 
Use 'gisa workspace default <name>'."),
+    }
+    Ok(())
+}
+
+fn set_default(name_or_path: &str, output: &Output) -> Result<()> {
+    // Try name first (backward compat), then path
+    let ws = match WorkspaceManager::load(name_or_path) {
+        Ok(ws) => ws,
+        Err(_) => WorkspaceManager::load_by_path(name_or_path)?,
+    };
+
+    Config::save_default_workspace(Some(&ws.name))?;
+    output.success(&format!(
+        "Default workspace set to '{}'",
+        ws.display_label()
+    ));
+    Ok(())
+}
+
+fn clear_default(output: &Output) -> Result<()> {
+    Config::save_default_workspace(None)?;
+    output.success("Default workspace cleared");
+    Ok(())
+}
+
+#[cfg(test)]
+#[path = "workspace_tests.rs"]
+mod tests;
diff --git a/src/commands/workspace_tests.rs b/src/commands/workspace_tests.rs
new file mode 100644
index 0000000..024feaf
--- /dev/null
+++ b/src/commands/workspace_tests.rs
@@ -0,0 +1,35 @@
+use super::*;
+use crate::output::Verbosity;
+
+fn quiet_output() -> Output {
+    Output::new(Verbosity::Quiet, false)
+}
+
+#[test]
+fn test_show_default_none() {
+    let config = Config::default();
+    let output = quiet_output();
+    let result = show_default(&config, &output);
+    assert!(result.is_ok());
+}
+
+#[test]
+fn test_show_default_some() {
+    let config = Config {
+        default_workspace: Some("my-ws".to_string()),
+        ..Config::default()
+    };
+    let output = quiet_output();
+    let result = show_default(&config, &output);
+    assert!(result.is_ok());
+}
+
+#[test]
+fn test_list_empty() {
+    // This test may fail if user has workspaces configured;
+    // the actual CRUD tests are in workspace_manager.rs
+    let config = Config::default();
+    let output = quiet_output();
+    let result = list(&config, &output);
+    assert!(result.is_ok());
+}
diff --git a/src/config/mod.rs b/src/config/mod.rs
new file mode 100644
index 0000000..f7c8f4f
--- /dev/null
+++ b/src/config/mod.rs
@@ -0,0 +1,29 @@
+//! Configuration management for git-same.
+//!
+//! This module handles loading, parsing, and validating configuration
+//! 
from `config.toml` files.
+//!
+//! # Example Configuration
+//!
+//! ```toml
+//! structure = "{org}/{repo}"
+//! concurrency = 4
+//!
+//! [[providers]]
+//! kind = "github"
+//! auth = "gh-cli"
+//! base_path = "~/github"
+//! ```
+
+mod parser;
+mod provider_config;
+pub mod workspace;
+pub mod workspace_manager;
+pub mod workspace_policy;
+pub mod workspace_store;
+
+pub use parser::{Config, ConfigCloneOptions, FilterOptions, SyncMode};
+pub use provider_config::{AuthMethod, ProviderEntry};
+pub use workspace::{WorkspaceConfig, WorkspaceProvider};
+pub use workspace_manager::WorkspaceManager;
+pub use workspace_policy::WorkspacePolicy;
+pub use workspace_store::WorkspaceStore;
diff --git a/src/config/parser.rs b/src/config/parser.rs
new file mode 100644
index 0000000..a088592
--- /dev/null
+++ b/src/config/parser.rs
@@ -0,0 +1,360 @@
+//! Configuration file parser.
+//!
+//! Handles loading and parsing of config.toml files.
+
+use super::provider_config::ProviderEntry;
+use crate::errors::AppError;
+use crate::operations::clone::{DEFAULT_CONCURRENCY, MAX_CONCURRENCY};
+use serde::{Deserialize, Serialize};
+use std::path::{Path, PathBuf};
+
+/// Clone-specific configuration options (from config file).
+///
+/// Note: This is distinct from `git::CloneOptions` which is used for
+/// the actual git clone operation parameters.
+#[derive(Debug, Clone, Serialize, Deserialize, Default)]
+pub struct ConfigCloneOptions {
+    /// Shallow clone depth (0 = full history)
+    #[serde(default)]
+    pub depth: u32,
+
+    /// Specific branch to clone (empty = default branch)
+    #[serde(default)]
+    pub branch: String,
+
+    /// Whether to clone submodules
+    #[serde(default)]
+    pub recurse_submodules: bool,
+}
+
+/// Repository filter options.
+#[derive(Debug, Clone, Serialize, Deserialize, Default)]
+pub struct FilterOptions {
+    /// Include archived repositories
+    #[serde(default)]
+    pub include_archived: bool,
+
+    /// Include forked repositories
+    #[serde(default)]
+    pub include_forks: bool,
+
+    /// Filter to specific organizations (empty = all)
+    #[serde(default)]
+    pub orgs: Vec<String>,
+
+    /// Exclude specific repos by full name (e.g., "org/repo")
+    #[serde(default)]
+    pub exclude_repos: Vec<String>,
+}
+
+/// Sync mode for existing repositories.
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, Default)]
+#[serde(rename_all = "kebab-case")]
+pub enum SyncMode {
+    /// Only fetch (safe, doesn't modify working tree)
+    #[default]
+    Fetch,
+    /// Pull changes (modifies working tree)
+    Pull,
+}
+
+impl std::str::FromStr for SyncMode {
+    type Err = String;
+
+    fn from_str(s: &str) -> Result<Self, Self::Err> {
+        match s.to_lowercase().as_str() {
+            "fetch" => Ok(SyncMode::Fetch),
+            "pull" => Ok(SyncMode::Pull),
+            _ => Err(format!("Invalid sync mode: '{}'. Use 'fetch' or 'pull'", s)),
+        }
+    }
+}
+
+/// Full application configuration.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct Config {
+    /// Directory structure pattern
+    /// Placeholders: {provider}, {org}, {repo}
+    #[serde(default = "default_structure")]
+    pub structure: String,
+
+    /// Number of parallel operations
+    #[serde(default = "default_concurrency")]
+    pub concurrency: usize,
+
+    /// Sync behavior
+    #[serde(default)]
+    pub sync_mode: SyncMode,
+
+    /// Default workspace name (used when --workspace is not specified and multiple exist)
+    #[serde(default)]
+    pub default_workspace: Option<String>,
+
+    /// Dashboard auto-refresh interval in seconds (5–3600, default 30)
+    #[serde(default = "default_refresh_interval")]
+    pub refresh_interval: u64,
+
+    /// Clone options
+    #[serde(default)]
+    #[serde(rename = "clone")]
+    pub clone: ConfigCloneOptions,
+
+    /// Filter options
+    #[serde(default)]
+    pub filters: FilterOptions,
+
+    /// Provider configurations
+    #[serde(default = "default_providers")]
+    pub providers: Vec<ProviderEntry>,
+}
+
+fn default_structure() -> String {
+    "{org}/{repo}".to_string()
+}
+
+fn default_concurrency() -> usize {
+    DEFAULT_CONCURRENCY
+}
+
+fn default_refresh_interval() -> u64 {
+    30
+}
+
+fn default_providers() -> Vec<ProviderEntry> {
+    vec![ProviderEntry::github()]
+}
+
+impl Default for Config {
+    fn default() -> Self {
+        Self {
+            structure: default_structure(),
+            concurrency: default_concurrency(),
+            sync_mode: SyncMode::default(),
+            default_workspace: None,
+            refresh_interval: default_refresh_interval(),
+            clone: ConfigCloneOptions::default(),
+            filters: FilterOptions::default(),
+            providers: default_providers(),
+        }
+    }
+}
+
+impl Config {
+    /// Returns the default config file path (~/.config/git-same/config.toml).
+    pub fn default_path() -> Result<PathBuf, AppError> {
+        #[cfg(target_os = "macos")]
+        let config_dir = {
+            let home = std::env::var("HOME")
+                .map_err(|_| AppError::config("HOME environment variable not set"))?;
+            PathBuf::from(home).join(".config/git-same")
+        };
+        #[cfg(not(target_os = "macos"))]
+        let config_dir = if let Some(dir) = directories::ProjectDirs::from("", "", "git-same") {
+            dir.config_dir().to_path_buf()
+        } else {
+            let home = std::env::var("HOME")
+                .or_else(|_| std::env::var("USERPROFILE"))
+                .map_err(|_| {
+                    AppError::config("Neither HOME nor USERPROFILE environment variable is set")
+                })?;
+            PathBuf::from(home).join(".config/git-same")
+        };
+
+        Ok(config_dir.join("config.toml"))
+    }
+
+    /// Load configuration from the default path, or return defaults if file doesn't exist.
+    pub fn load() -> Result<Config, AppError> {
+        Self::load_from(&Self::default_path()?)
+    }
+
+    /// Load configuration from a specific file, or return defaults if file doesn't exist.
+    pub fn load_from(path: &Path) -> Result<Config, AppError> {
+        if path.exists() {
+            let content = std::fs::read_to_string(path)
+                .map_err(|e| AppError::config(format!("Failed to read config file: {}", e)))?;
+            Self::parse(&content)
+        } else {
+            Ok(Config::default())
+        }
+    }
+
+    /// Parse configuration from a TOML string.
+    pub fn parse(content: &str) -> Result<Config, AppError> {
+        let config: Config = toml::from_str(content)
+            .map_err(|e| AppError::config(format!("Failed to parse config: {}", e)))?;
+        config.validate()?;
+        Ok(config)
+    }
+
+    /// Validate the configuration.
+ pub fn validate(&self) -> Result<(), AppError> { + // Validate concurrency + if !(1..=MAX_CONCURRENCY).contains(&self.concurrency) { + return Err(AppError::config(format!( + "concurrency must be between 1 and {}", + MAX_CONCURRENCY + ))); + } + + // Validate refresh_interval + if !(5..=3600).contains(&self.refresh_interval) { + return Err(AppError::config( + "refresh_interval must be between 5 and 3600 seconds", + )); + } + + // Validate providers + if self.providers.is_empty() { + return Err(AppError::config("At least one provider must be configured")); + } + + for (i, provider) in self.providers.iter().enumerate() { + provider + .validate() + .map_err(|e| AppError::config(format!("Provider {} error: {}", i + 1, e)))?; + } + + Ok(()) + } + + /// Generate the default configuration file content. + pub fn default_toml() -> String { + format!( + r#"# Git-Same Configuration +# See: https://github.com/zaai-com/git-same + +# Directory structure pattern +# Placeholders: {{provider}}, {{org}}, {{repo}} +structure = "{{org}}/{{repo}}" + +# Number of parallel clone/sync operations (1-{}) +# Keeping this bounded helps avoid provider rate limits and local resource contention. +concurrency = {} + +# Sync behavior: "fetch" (safe) or "pull" (updates working tree) +sync_mode = "fetch""#, + MAX_CONCURRENCY, DEFAULT_CONCURRENCY + ) + r#" + +[clone] +# Clone depth (0 = full history) +depth = 0 + +# Clone submodules +recurse_submodules = false + +[filters] +# Include archived repositories +include_archived = false + +# Include forked repositories +include_forks = false + +# Filter to specific organizations (empty = all) +# orgs = ["my-org", "other-org"] + +# Exclude specific repos +# exclude_repos = ["org/repo-to-skip"] + +# Provider configuration (default: GitHub.com with gh CLI auth) +[[providers]] +kind = "github" +auth = "gh-cli" +prefer_ssh = true +# base_path = "~/github" +"# + } + + /// Save the default_workspace setting to the config file at the default path. 
+ pub fn save_default_workspace(workspace: Option<&str>) -> Result<(), AppError> { + Self::save_default_workspace_to(&Self::default_path()?, workspace) + } + + /// Save the default_workspace setting to a specific config file. + /// + /// Uses targeted text replacement to preserve comments and formatting. + pub fn save_default_workspace_to(path: &Path, workspace: Option<&str>) -> Result<(), AppError> { + let content = if path.exists() { + std::fs::read_to_string(path) + .map_err(|e| AppError::config(format!("Failed to read config: {}", e)))? + } else { + return Err(AppError::config( + "Config file not found. Run 'gisa init' first.", + )); + }; + + let new_line = match workspace { + Some(name) => { + let escaped = toml::Value::String(name.to_string()).to_string(); + format!("default_workspace = {}", escaped) + } + None => String::new(), + }; + + // Replace existing default_workspace line, or insert after sync_mode + let new_content = if content.contains("default_workspace") { + let mut lines: Vec<&str> = content.lines().collect(); + lines.retain(|line| { + let trimmed = line.trim(); + !trimmed.starts_with("default_workspace") + && !trimmed.starts_with("# default_workspace") + }); + let mut result = lines.join("\n"); + if !new_line.is_empty() { + // Insert after sync_mode line + if let Some(pos) = result.find("sync_mode") { + if let Some(nl) = result[pos..].find('\n') { + let insert_pos = pos + nl + 1; + result.insert_str(insert_pos, &format!("{}\n", new_line)); + } + } else { + // Fallback: insert near the top (after first blank line) + if let Some(pos) = result.find("\n\n") { + result.insert_str(pos + 1, &format!("\n{}\n", new_line)); + } else { + result = format!("{}\n{}\n", new_line, result); + } + } + } + // Ensure trailing newline + if !result.ends_with('\n') { + result.push('\n'); + } + result + } else if !new_line.is_empty() { + // Insert after sync_mode line + let mut result = content.clone(); + if let Some(pos) = result.find("sync_mode") { + if let Some(nl) 
= result[pos..].find('\n') {
+                    let insert_pos = pos + nl + 1;
+                    result.insert_str(insert_pos, &format!("\n{}\n", new_line));
+                }
+            } else {
+                // Fallback: insert near the top (after first blank line)
+                if let Some(pos) = result.find("\n\n") {
+                    result.insert_str(pos + 1, &format!("\n{}\n", new_line));
+                } else {
+                    result = format!("{}\n{}\n", new_line, result);
+                }
+            }
+            result
+        } else {
+            // Nothing to do — clearing a field that doesn't exist
+            content
+        };
+
+        std::fs::write(path, new_content)
+            .map_err(|e| AppError::config(format!("Failed to write config: {}", e)))?;
+        Ok(())
+    }
+
+    /// Returns enabled providers only.
+    pub fn enabled_providers(&self) -> impl Iterator<Item = &ProviderEntry> {
+        self.providers.iter().filter(|p| p.enabled)
+    }
+}
+
+#[cfg(test)]
+#[path = "parser_tests.rs"]
+mod tests;
diff --git a/src/config/parser_tests.rs b/src/config/parser_tests.rs
new file mode 100644
index 0000000..99a622f
--- /dev/null
+++ b/src/config/parser_tests.rs
@@ -0,0 +1,263 @@
+use super::*;
+use std::io::Write;
+use tempfile::NamedTempFile;
+
+#[test]
+fn test_default_config() {
+    let config = Config::default();
+    assert_eq!(config.concurrency, 8);
+    assert_eq!(config.sync_mode, SyncMode::Fetch);
+    assert!(!config.filters.include_archived);
+    assert!(!config.filters.include_forks);
+    assert_eq!(config.providers.len(), 1);
+}
+
+#[test]
+fn test_load_minimal_config() {
+    let mut file = NamedTempFile::new().unwrap();
+    writeln!(file, "concurrency = 2").unwrap();
+
+    let config = Config::load_from(file.path()).unwrap();
+    assert_eq!(config.concurrency, 2);
+}
+
+#[test]
+fn test_load_full_config() {
+    let content = r#"
+structure = "{provider}/{org}/{repo}"
+concurrency = 8
+sync_mode = "pull"
+
+[clone]
+depth = 1
+recurse_submodules = true
+
+[filters]
+include_archived = true
+include_forks = true
+orgs = ["my-org"]
+exclude_repos = ["my-org/skip-this"]
+
+[[providers]]
+kind = "github"
+auth = "gh-cli"
+"#;
+
+    let config = Config::parse(content).unwrap();
+    
assert_eq!(config.structure, "{provider}/{org}/{repo}");
+    assert_eq!(config.concurrency, 8);
+    assert_eq!(config.sync_mode, SyncMode::Pull);
+    assert_eq!(config.clone.depth, 1);
+    assert!(config.clone.recurse_submodules);
+    assert!(config.filters.include_archived);
+    assert!(config.filters.include_forks);
+    assert_eq!(config.filters.orgs, vec!["my-org"]);
+    assert_eq!(config.filters.exclude_repos, vec!["my-org/skip-this"]);
+}
+
+#[test]
+fn test_load_multi_provider_config() {
+    let content = r#"
+[[providers]]
+kind = "github"
+auth = "gh-cli"
+
+[[providers]]
+kind = "github"
+name = "Work"
+auth = "gh-cli"
+"#;
+
+    let config = Config::parse(content).unwrap();
+    assert_eq!(config.providers.len(), 2);
+    assert_eq!(config.providers[0].kind, crate::types::ProviderKind::GitHub);
+    assert_eq!(config.providers[1].name, Some("Work".to_string()));
+}
+
+#[test]
+fn test_missing_file_returns_defaults() {
+    let config = Config::load_from(Path::new("/nonexistent/config.toml")).unwrap();
+    assert_eq!(config.concurrency, 8);
+}
+
+#[test]
+fn test_validation_rejects_zero_concurrency() {
+    let config = Config {
+        concurrency: 0,
+        ..Config::default()
+    };
+    let result = config.validate();
+    assert!(result.is_err());
+    assert!(result.unwrap_err().to_string().contains("concurrency"));
+}
+
+#[test]
+fn test_validation_rejects_high_concurrency() {
+    let config = Config {
+        concurrency: 100,
+        ..Config::default()
+    };
+    let result = config.validate();
+    assert!(result.is_err());
+}
+
+#[test]
+fn test_validation_rejects_empty_providers() {
+    let config = Config {
+        providers: vec![],
+        ..Config::default()
+    };
+    let result = config.validate();
+    assert!(result.is_err());
+    assert!(result.unwrap_err().to_string().contains("provider"));
+}
+
+#[test]
+fn test_sync_mode_from_str() {
+    assert_eq!("fetch".parse::<SyncMode>().unwrap(), SyncMode::Fetch);
+    assert_eq!("pull".parse::<SyncMode>().unwrap(), SyncMode::Pull);
+    assert_eq!("FETCH".parse::<SyncMode>().unwrap(), SyncMode::Fetch);
+    
assert!("invalid".parse::<SyncMode>().is_err());
+}
+
+#[test]
+fn test_default_toml_is_valid() {
+    let toml = Config::default_toml();
+    let result = Config::parse(&toml);
+    assert!(result.is_ok(), "Default TOML should be valid: {:?}", result);
+}
+
+#[test]
+fn test_enabled_providers_filter() {
+    let config = Config {
+        providers: vec![
+            ProviderEntry {
+                enabled: true,
+                ..ProviderEntry::github()
+            },
+            ProviderEntry {
+                enabled: false,
+                ..ProviderEntry::github()
+            },
+            ProviderEntry {
+                enabled: true,
+                ..ProviderEntry::github()
+            },
+        ],
+        ..Config::default()
+    };
+
+    let enabled: Vec<_> = config.enabled_providers().collect();
+    assert_eq!(enabled.len(), 2);
+}
+
+#[test]
+fn test_default_config_has_no_default_workspace() {
+    let config = Config::default();
+    assert!(config.default_workspace.is_none());
+}
+
+#[test]
+fn test_parse_config_with_default_workspace() {
+    let content = r#"
+default_workspace = "my-ws"
+
+[[providers]]
+kind = "github"
+auth = "gh-cli"
+"#;
+    let config = Config::parse(content).unwrap();
+    assert_eq!(config.default_workspace, Some("my-ws".to_string()));
+}
+
+#[test]
+fn test_parse_config_without_default_workspace() {
+    let content = r#"
+[[providers]]
+kind = "github"
+auth = "gh-cli"
+"#;
+    let config = Config::parse(content).unwrap();
+    assert!(config.default_workspace.is_none());
+}
+
+#[test]
+fn test_save_default_workspace_to_set() {
+    let temp = tempfile::TempDir::new().unwrap();
+    let path = temp.path().join("config.toml");
+    std::fs::write(&path, Config::default_toml()).unwrap();
+
+    Config::save_default_workspace_to(&path, Some("my-ws")).unwrap();
+
+    let content = std::fs::read_to_string(&path).unwrap();
+    assert!(content.contains("default_workspace = \"my-ws\""));
+    // Original content preserved
+    assert!(content.contains("concurrency"));
+    // Still valid TOML
+    let config = Config::parse(&content).unwrap();
+    assert_eq!(config.default_workspace, Some("my-ws".to_string()));
+}
+
+#[test]
+fn test_save_default_workspace_to_clear() 
{ + let temp = tempfile::TempDir::new().unwrap(); + let path = temp.path().join("config.toml"); + std::fs::write(&path, Config::default_toml()).unwrap(); + + // Set then clear + Config::save_default_workspace_to(&path, Some("my-ws")).unwrap(); + Config::save_default_workspace_to(&path, None).unwrap(); + + let content = std::fs::read_to_string(&path).unwrap(); + assert!(!content.contains("default_workspace")); + // Still valid TOML + let config = Config::parse(&content).unwrap(); + assert!(config.default_workspace.is_none()); +} + +#[test] +fn test_save_default_workspace_to_replace() { + let temp = tempfile::TempDir::new().unwrap(); + let path = temp.path().join("config.toml"); + std::fs::write(&path, Config::default_toml()).unwrap(); + + Config::save_default_workspace_to(&path, Some("ws1")).unwrap(); + Config::save_default_workspace_to(&path, Some("ws2")).unwrap(); + + let content = std::fs::read_to_string(&path).unwrap(); + assert!(content.contains("default_workspace = \"ws2\"")); + assert!(!content.contains("ws1")); + let config = Config::parse(&content).unwrap(); + assert_eq!(config.default_workspace, Some("ws2".to_string())); +} + +#[test] +fn test_save_default_workspace_to_replace_without_sync_mode() { + let temp = tempfile::TempDir::new().unwrap(); + let path = temp.path().join("config.toml"); + let content = r#" +structure = "{org}/{repo}" +concurrency = 8 +default_workspace = "ws-old" + +[[providers]] +kind = "github" +auth = "gh-cli" +"#; + std::fs::write(&path, content).unwrap(); + + Config::save_default_workspace_to(&path, Some("ws-new")).unwrap(); + + let updated = std::fs::read_to_string(&path).unwrap(); + assert!(updated.contains("default_workspace = \"ws-new\"")); + assert!(!updated.contains("ws-old")); + let config = Config::parse(&updated).unwrap(); + assert_eq!(config.default_workspace.as_deref(), Some("ws-new")); +} + +#[test] +fn test_save_default_workspace_to_nonexistent_file() { + let result = + 
Config::save_default_workspace_to(Path::new("/nonexistent/config.toml"), Some("ws"));
+    assert!(result.is_err());
+}
diff --git a/src/config/provider_config.rs b/src/config/provider_config.rs
new file mode 100644
index 0000000..eb84896
--- /dev/null
+++ b/src/config/provider_config.rs
@@ -0,0 +1,102 @@
+//! Provider-specific configuration.
+//!
+//! Defines how individual Git hosting providers are configured,
+//! including authentication and API endpoints.
+
+use crate::types::ProviderKind;
+use serde::{Deserialize, Serialize};
+
+/// How to authenticate with a provider.
+///
+/// Currently only GitHub CLI is supported.
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Default)]
+#[serde(rename_all = "kebab-case")]
+pub enum AuthMethod {
+    /// Use GitHub CLI (`gh auth token`)
+    #[default]
+    GhCli,
+}
+
+/// Configuration for a single Git hosting provider.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct ProviderEntry {
+    /// The type of provider (github, gitlab, etc.)
+    #[serde(default)]
+    pub kind: ProviderKind,
+
+    /// Display name for this provider instance
+    #[serde(default)]
+    pub name: Option<String>,
+
+    /// API base URL (required for GitHub Enterprise, optional for others)
+    #[serde(default)]
+    pub api_url: Option<String>,
+
+    /// How to authenticate
+    #[serde(default)]
+    pub auth: AuthMethod,
+
+    /// Whether to prefer SSH for cloning (default: true)
+    #[serde(default = "default_true")]
+    pub prefer_ssh: bool,
+
+    /// Base directory override for this provider's repos
+    #[serde(default)]
+    pub base_path: Option<String>,
+
+    /// Whether this provider is enabled
+    #[serde(default = "default_true")]
+    pub enabled: bool,
+}
+
+fn default_true() -> bool {
+    true
+}
+
+impl Default for ProviderEntry {
+    fn default() -> Self {
+        Self {
+            kind: ProviderKind::GitHub,
+            name: None,
+            api_url: None,
+            auth: AuthMethod::GhCli,
+            prefer_ssh: true,
+            base_path: None,
+            enabled: true,
+        }
+    }
+}
+
+impl ProviderEntry {
+    /// Creates a default GitHub.com provider entry.
+ pub fn github() -> Self { + Self { + kind: ProviderKind::GitHub, + name: Some("GitHub".to_string()), + ..Default::default() + } + } + + /// Returns the effective API URL for this provider. + pub fn effective_api_url(&self) -> String { + self.api_url + .clone() + .unwrap_or_else(|| self.kind.default_api_url().to_string()) + } + + /// Returns the display name for this provider. + pub fn display_name(&self) -> String { + self.name + .clone() + .unwrap_or_else(|| self.kind.display_name().to_string()) + } + + /// Validates the provider configuration. + pub fn validate(&self) -> Result<(), String> { + Ok(()) + } +} + +#[cfg(test)] +#[path = "provider_config_tests.rs"] +mod tests; diff --git a/src/config/provider_config_tests.rs b/src/config/provider_config_tests.rs new file mode 100644 index 0000000..7e93ba8 --- /dev/null +++ b/src/config/provider_config_tests.rs @@ -0,0 +1,63 @@ +use super::*; + +#[test] +fn test_default_provider_entry() { + let entry = ProviderEntry::default(); + assert_eq!(entry.kind, ProviderKind::GitHub); + assert_eq!(entry.auth, AuthMethod::GhCli); + assert!(entry.prefer_ssh); + assert!(entry.enabled); +} + +#[test] +fn test_github_factory() { + let entry = ProviderEntry::github(); + assert_eq!(entry.kind, ProviderKind::GitHub); + assert_eq!(entry.display_name(), "GitHub"); +} + +#[test] +fn test_effective_api_url_with_override() { + let mut entry = ProviderEntry::github(); + entry.api_url = Some("https://custom-api.example.com".to_string()); + assert_eq!(entry.effective_api_url(), "https://custom-api.example.com"); +} + +#[test] +fn test_effective_api_url_default() { + let entry = ProviderEntry::github(); + assert_eq!(entry.effective_api_url(), "https://api.github.com"); +} + +#[test] +fn test_validate_valid_config() { + let entry = ProviderEntry::github(); + assert!(entry.validate().is_ok()); +} + +#[test] +fn test_serde_roundtrip() { + let entry = ProviderEntry { + kind: ProviderKind::GitHub, + name: Some("My GitHub".to_string()), + auth: 
AuthMethod::GhCli,
+        prefer_ssh: false,
+        ..Default::default()
+    };
+
+    let toml = toml::to_string(&entry).unwrap();
+    let parsed: ProviderEntry = toml::from_str(&toml).unwrap();
+
+    assert_eq!(parsed.kind, entry.kind);
+    assert_eq!(parsed.name, entry.name);
+    assert_eq!(parsed.auth, entry.auth);
+    assert_eq!(parsed.prefer_ssh, entry.prefer_ssh);
+}
+
+#[test]
+fn test_auth_method_serde() {
+    assert_eq!(
+        serde_json::to_string(&AuthMethod::GhCli).unwrap(),
+        "\"gh-cli\""
+    );
+}
diff --git a/src/config/workspace.rs b/src/config/workspace.rs
new file mode 100644
index 0000000..79a8eb7
--- /dev/null
+++ b/src/config/workspace.rs
@@ -0,0 +1,186 @@
+//! Workspace configuration.
+//!
+//! Each workspace represents a sync target folder with its own provider,
+//! selected organizations, and repository filters. Each workspace is a
+//! subdirectory `~/.config/git-same/<name>/` containing `workspace-config.toml`.
+
+use super::provider_config::AuthMethod;
+use super::{ConfigCloneOptions, FilterOptions, SyncMode};
+use crate::types::ProviderKind;
+use serde::{Deserialize, Serialize};
+
+/// Provider configuration scoped to a single workspace.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct WorkspaceProvider {
+    /// The type of provider (github, gitlab, etc.)
+    #[serde(default)]
+    pub kind: ProviderKind,
+
+    /// How to authenticate
+    #[serde(default)]
+    pub auth: AuthMethod,
+
+    /// API base URL (required for GitHub Enterprise)
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub api_url: Option<String>,
+
+    /// Whether to prefer SSH for cloning (default: true)
+    #[serde(default = "default_true")]
+    pub prefer_ssh: bool,
+}
+
+fn default_true() -> bool {
+    true
+}
+
+impl Default for WorkspaceProvider {
+    fn default() -> Self {
+        Self {
+            kind: ProviderKind::GitHub,
+            auth: AuthMethod::GhCli,
+            api_url: None,
+            prefer_ssh: true,
+        }
+    }
+}
+
+impl WorkspaceProvider {
+    /// Convert to a `ProviderEntry` for use with existing provider/auth infrastructure.
+    pub fn to_provider_entry(&self) -> super::ProviderEntry {
+        super::ProviderEntry {
+            kind: self.kind,
+            name: Some(self.kind.display_name().to_string()),
+            api_url: self.api_url.clone(),
+            auth: self.auth.clone(),
+            prefer_ssh: self.prefer_ssh,
+            base_path: None,
+            enabled: true,
+        }
+    }
+}
+
+/// Configuration for a single workspace (sync target folder).
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct WorkspaceConfig {
+    /// Workspace name, derived from the config folder name at load time.
+    ///
+    /// Not stored in `workspace-config.toml` — the folder name is the source of truth.
+    #[serde(skip_serializing, default)]
+    pub name: String,
+
+    /// Absolute path to the folder where repos are cloned.
+    pub base_path: String,
+
+    /// Provider configuration for this workspace.
+    pub provider: WorkspaceProvider,
+
+    /// The authenticated username (discovered during setup).
+    #[serde(default)]
+    pub username: String,
+
+    /// Selected organizations to sync (empty = all).
+    #[serde(default)]
+    pub orgs: Vec<String>,
+
+    /// Specific repos to include (empty = all from selected orgs).
+    #[serde(default)]
+    pub include_repos: Vec<String>,
+
+    /// Repos to exclude by full name (e.g., "org/repo").
+    #[serde(default)]
+    pub exclude_repos: Vec<String>,
+
+    /// Directory structure pattern override (None = use global default).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub structure: Option<String>,
+
+    /// Sync mode override (None = use global default).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub sync_mode: Option<SyncMode>,
+
+    /// Clone options override (None = use global default).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    #[serde(rename = "clone")]
+    pub clone_options: Option<ConfigCloneOptions>,
+
+    /// Filter options.
+    #[serde(default)]
+    pub filters: FilterOptions,
+
+    /// Concurrency override (None = use global default).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub concurrency: Option<usize>,
+
+    /// Dashboard auto-refresh interval override in seconds (None = use global default).
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub refresh_interval: Option<u64>,
+
+    /// ISO 8601 timestamp of last sync.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub last_synced: Option<String>,
+}
+
+impl WorkspaceConfig {
+    /// Create a new workspace config with minimal required fields.
+    pub fn new(name: impl Into<String>, base_path: impl Into<String>) -> Self {
+        Self {
+            name: name.into(),
+            base_path: base_path.into(),
+            provider: WorkspaceProvider::default(),
+            username: String::new(),
+            orgs: Vec::new(),
+            include_repos: Vec::new(),
+            exclude_repos: Vec::new(),
+            structure: None,
+            sync_mode: None,
+            clone_options: None,
+            filters: FilterOptions::default(),
+            concurrency: None,
+            refresh_interval: None,
+            last_synced: None,
+        }
+    }
+
+    /// Expand ~ in base_path to the actual home directory.
+    pub fn expanded_base_path(&self) -> std::path::PathBuf {
+        let expanded = shellexpand::tilde(&self.base_path);
+        std::path::PathBuf::from(expanded.as_ref())
+    }
+
+    /// Returns a user-friendly label: `"~/repos (GitHub)"`.
+    ///
+    /// This is the primary user-facing workspace identity. The internal `name`
+    /// field is a filesystem key and should never be shown to users.
+    pub fn display_label(&self) -> String {
+        format!("{} ({})", self.base_path, self.provider.kind.display_name())
+    }
+
+    /// Returns a short display summary for selectors.
+    pub fn summary(&self) -> String {
+        let orgs = if self.orgs.is_empty() {
+            "all orgs".to_string()
+        } else {
+            format!("{} org(s)", self.orgs.len())
+        };
+        let synced = self.last_synced.as_deref().unwrap_or("never synced");
+        format!("{} ({}, {})", self.display_label(), orgs, synced)
+    }
+
+    /// Serialize to TOML string.
+ pub fn to_toml(&self) -> Result<String, crate::errors::AppError> { + toml::to_string_pretty(self).map_err(|e| { + crate::errors::AppError::config(format!("Failed to serialize workspace config: {}", e)) + }) + } + + /// Parse from TOML string. + pub fn from_toml(content: &str) -> Result<Self, crate::errors::AppError> { + toml::from_str(content).map_err(|e| { + crate::errors::AppError::config(format!("Failed to parse workspace config: {}", e)) + }) + } +} + +#[cfg(test)] +#[path = "workspace_tests.rs"] +mod tests; diff --git a/src/config/workspace_manager.rs b/src/config/workspace_manager.rs new file mode 100644 index 0000000..4d392d9 --- /dev/null +++ b/src/config/workspace_manager.rs @@ -0,0 +1,89 @@ +//! Workspace manager facade. +//! +//! This compatibility layer keeps the existing `WorkspaceManager` API stable +//! while delegating storage and policy responsibilities to dedicated modules. + +use super::workspace::WorkspaceConfig; +use super::{workspace_policy::WorkspacePolicy, workspace_store::WorkspaceStore}; +use crate::errors::AppError; +use crate::types::ProviderKind; +use std::path::{Path, PathBuf}; + +/// Compatibility facade for workspace operations. +pub struct WorkspaceManager; + +impl WorkspaceManager { + /// Returns the config directory: `~/.config/git-same/`. + pub fn config_dir() -> Result<PathBuf, AppError> { + WorkspaceStore::config_dir() + } + + /// List all workspace configs. + pub fn list() -> Result<Vec<WorkspaceConfig>, AppError> { + WorkspaceStore::list() + } + + /// Load a specific workspace by name. + pub fn load(name: &str) -> Result<WorkspaceConfig, AppError> { + WorkspaceStore::load(name) + } + + /// Save a workspace config (create or update). + pub fn save(workspace: &WorkspaceConfig) -> Result<(), AppError> { + WorkspaceStore::save(workspace) + } + + /// Delete a workspace by name. + pub fn delete(name: &str) -> Result<(), AppError> { + WorkspaceStore::delete(name) + } + + /// Find a workspace whose base_path matches the given directory. 
+ pub fn find_by_path(path: &Path) -> Result<Option<WorkspaceConfig>, AppError> { + WorkspaceStore::find_by_path(path) + } + + /// Load a workspace by its base_path string. + pub fn load_by_path(path_str: &str) -> Result<WorkspaceConfig, AppError> { + WorkspaceStore::load_by_path(path_str) + } + + /// Derive a workspace name from a base path and provider. + pub fn name_from_path(path: &Path, provider: ProviderKind) -> String { + WorkspacePolicy::name_from_path(path, provider) + } + + /// Return a unique workspace name, appending `-2`, `-3`, etc. on collision. + pub fn unique_name(base: &str) -> Result<String, AppError> { + WorkspacePolicy::unique_name(base) + } + + /// Resolve which workspace to use. + pub fn resolve( + name: Option<&str>, + config: &super::parser::Config, + ) -> Result<WorkspaceConfig, AppError> { + WorkspacePolicy::resolve(name, config) + } + + /// Resolve from an already-loaded list of workspaces (no filesystem access). + pub fn resolve_from_list( + workspaces: Vec<WorkspaceConfig>, + ) -> Result<WorkspaceConfig, AppError> { + WorkspacePolicy::resolve_from_list(workspaces) + } + + /// Returns the directory path for a workspace: `~/.config/git-same/<name>/`. + pub fn workspace_dir(name: &str) -> Result<PathBuf, AppError> { + WorkspaceStore::workspace_dir(name) + } + + /// Returns the cache file path for a workspace. + pub fn cache_path(name: &str) -> Result<PathBuf, AppError> { + WorkspaceStore::cache_path(name) + } +} + +#[cfg(test)] +#[path = "workspace_manager_tests.rs"] +mod tests; diff --git a/src/config/workspace_manager_tests.rs b/src/config/workspace_manager_tests.rs new file mode 100644 index 0000000..6481b59 --- /dev/null +++ b/src/config/workspace_manager_tests.rs @@ -0,0 +1,8 @@ +use super::*; + +#[test] +fn test_name_from_path_simple() { + let name = + WorkspaceManager::name_from_path(Path::new("/home/user/github"), ProviderKind::GitHub); + assert_eq!(name, "github-github"); +} diff --git a/src/config/workspace_policy.rs b/src/config/workspace_policy.rs new file mode 100644 index 0000000..44d4cf9 --- /dev/null +++ b/src/config/workspace_policy.rs @@ -0,0 +1,106 @@ +//! 
Workspace resolution and naming rules (policy concern only). + +use super::parser::Config; +use super::workspace::WorkspaceConfig; +use super::workspace_store::WorkspaceStore; +use crate::errors::AppError; +use crate::types::ProviderKind; +use std::path::Path; + +/// Workspace policy helpers. +pub struct WorkspacePolicy; + +impl WorkspacePolicy { + /// Derive a workspace name from a base path and provider. + pub fn name_from_path(path: &Path, provider: ProviderKind) -> String { + let lossy = path.to_string_lossy(); + let expanded = shellexpand::tilde(&lossy); + let path = Path::new(expanded.as_ref()); + + let last_component = path + .components() + .filter_map(|c| { + if let std::path::Component::Normal(s) = c { + s.to_str() + } else { + None + } + }) + .next_back() + .unwrap_or("workspace"); + + let prefix = match provider { + ProviderKind::GitHub => "github", + ProviderKind::GitHubEnterprise => "ghe", + ProviderKind::GitLab => "gitlab", + ProviderKind::GitLabSelfManaged => "glsm", + ProviderKind::Codeberg => "codeberg", + ProviderKind::Bitbucket => "bitbucket", + }; + format!("{}-{}", prefix, last_component) + .to_lowercase() + .replace([' ', '_'], "-") + } + + /// Return a unique workspace name, appending `-2`, `-3`, etc. on collision. + pub fn unique_name(base: &str) -> Result<String, AppError> { + let dir = WorkspaceStore::workspace_dir(base)?; + if !dir.exists() { + return Ok(base.to_string()); + } + + for suffix in 2..=100 { + let candidate = format!("{}-{}", base, suffix); + let candidate_dir = WorkspaceStore::workspace_dir(&candidate)?; + if !candidate_dir.exists() { + return Ok(candidate); + } + } + + Err(AppError::config(format!( + "Could not find a unique workspace name based on '{}'", + base + ))) + } + + /// Resolve which workspace to use. 
+ pub fn resolve(name: Option<&str>, config: &Config) -> Result<WorkspaceConfig, AppError> { + if let Some(value) = name { + return WorkspaceStore::load(value).or_else(|_| WorkspaceStore::load_by_path(value)); + } + + if let Some(ref default) = config.default_workspace { + return WorkspaceStore::load(default); + } + + let workspaces = WorkspaceStore::list()?; + Self::resolve_from_list(workspaces) + } + + /// Resolve from an already-loaded list of workspaces (no filesystem access). + pub fn resolve_from_list( + workspaces: Vec<WorkspaceConfig>, + ) -> Result<WorkspaceConfig, AppError> { + match workspaces.len() { + 0 => Err(AppError::config( + "No workspaces configured. Run 'gisa setup' first.", + )), + 1 => Ok(workspaces + .into_iter() + .next() + .expect("single workspace exists")), + _ => { + let labels: Vec<String> = workspaces.iter().map(|w| w.display_label()).collect(); + Err(AppError::config(format!( + "Multiple workspaces configured. Use --workspace to select one, \ + or set a default with 'gisa workspace default <name>': {}", + labels.join(", ") + ))) + } + } + } +} + +#[cfg(test)] +#[path = "workspace_policy_tests.rs"] +mod tests; diff --git a/src/config/workspace_policy_tests.rs b/src/config/workspace_policy_tests.rs new file mode 100644 index 0000000..efb38b4 --- /dev/null +++ b/src/config/workspace_policy_tests.rs @@ -0,0 +1,40 @@ +use super::*; + +#[test] +fn name_from_path_uses_provider_prefix_and_normalizes() { + let name = WorkspacePolicy::name_from_path( + std::path::Path::new("~/Developer/My_Project"), + ProviderKind::GitHubEnterprise, + ); + assert_eq!(name, "ghe-my-project"); + + let github = WorkspacePolicy::name_from_path( + std::path::Path::new("~/repos/Personal"), + ProviderKind::GitHub, + ); + assert_eq!(github, "github-personal"); +} + +#[test] +fn resolve_from_list_errors_when_no_workspaces() { + let err = WorkspacePolicy::resolve_from_list(Vec::new()).unwrap_err(); + assert!(err.to_string().contains("No workspaces configured")); +} + +#[test] +fn resolve_from_list_returns_single_workspace() { + let ws = 
WorkspaceConfig::new("solo", "/tmp/solo"); + let resolved = WorkspacePolicy::resolve_from_list(vec![ws.clone()]).unwrap(); + assert_eq!(resolved.name, "solo"); + assert_eq!(resolved.base_path, "/tmp/solo"); +} + +#[test] +fn resolve_from_list_errors_when_multiple_workspaces() { + let ws1 = WorkspaceConfig::new("a", "/tmp/a"); + let ws2 = WorkspaceConfig::new("b", "/tmp/b"); + + let err = WorkspacePolicy::resolve_from_list(vec![ws1, ws2]).unwrap_err(); + assert!(err.to_string().contains("Multiple workspaces configured")); + assert!(err.to_string().contains("--workspace")); +} diff --git a/src/config/workspace_store.rs b/src/config/workspace_store.rs new file mode 100644 index 0000000..c9abcbc --- /dev/null +++ b/src/config/workspace_store.rs @@ -0,0 +1,226 @@ +//! Workspace persistence (storage concern only). + +use super::workspace::WorkspaceConfig; +use crate::errors::AppError; +use std::path::{Component, Path, PathBuf}; + +/// Filesystem-backed workspace store. +pub struct WorkspaceStore; + +impl WorkspaceStore { + /// Returns the config directory: `~/.config/git-same/`. + pub fn config_dir() -> Result<PathBuf, AppError> { + let config_path = crate::config::Config::default_path()?; + config_path + .parent() + .map(|p| p.to_path_buf()) + .ok_or_else(|| AppError::config("Cannot determine config directory")) + } + + /// List all workspace configs. 
+ pub fn list() -> Result<Vec<WorkspaceConfig>, AppError> { + let dir = Self::config_dir()?; + if !dir.exists() { + return Ok(Vec::new()); + } + + let mut workspaces = Vec::new(); + let entries = std::fs::read_dir(&dir) + .map_err(|e| AppError::config(format!("Failed to read config directory: {}", e)))?; + + for entry in entries { + let entry = entry + .map_err(|e| AppError::config(format!("Failed to read directory entry: {}", e)))?; + let path = entry.path(); + if path.is_dir() { + let config_file = path.join("workspace-config.toml"); + if config_file.exists() { + match Self::load_from_path(&config_file) { + Ok(ws) => workspaces.push(ws), + Err(e) => { + tracing::warn!( + path = %config_file.display(), + error = %e, + "Skipping invalid workspace config" + ); + } + } + } + } + } + + workspaces.sort_by(|a, b| a.name.cmp(&b.name)); + Ok(workspaces) + } + + /// Load a specific workspace by name. + pub fn load(name: &str) -> Result<WorkspaceConfig, AppError> { + let path = Self::config_path(name)?; + if !path.exists() { + return Err(AppError::config(format!( + "Workspace '{}' not found at {}", + name, + path.display() + ))); + } + Self::load_from_path(&path) + } + + /// Save a workspace config (create or update). + pub fn save(workspace: &WorkspaceConfig) -> Result<(), AppError> { + let path = Self::config_path(&workspace.name)?; + if let Some(parent) = path.parent() { + std::fs::create_dir_all(parent).map_err(|e| { + AppError::config(format!("Failed to create workspace directory: {}", e)) + })?; + } + let content = workspace.to_toml()?; + std::fs::write(&path, content).map_err(|e| { + AppError::config(format!( + "Failed to write workspace config at {}: {}", + path.display(), + e + )) + })?; + Ok(()) + } + + /// Delete a workspace by name (removes the entire workspace directory). 
+ pub fn delete(name: &str) -> Result<(), AppError> { + let dir = Self::workspace_dir(name)?; + if !dir.exists() { + return Err(AppError::config(format!("Workspace '{}' not found", name))); + } + std::fs::remove_dir_all(&dir).map_err(|e| { + AppError::config(format!("Failed to delete workspace '{}': {}", name, e)) + })?; + Ok(()) + } + + /// Find a workspace whose base_path matches the given directory. + pub fn find_by_path(path: &Path) -> Result<Option<WorkspaceConfig>, AppError> { + let workspaces = Self::list()?; + let canonical = std::fs::canonicalize(path).unwrap_or_else(|_| path.to_path_buf()); + + for ws in workspaces { + let ws_path = ws.expanded_base_path(); + let ws_canonical = std::fs::canonicalize(&ws_path).unwrap_or_else(|_| ws_path.clone()); + if ws_canonical == canonical { + return Ok(Some(ws)); + } + } + Ok(None) + } + + /// Load a workspace by its base_path string. + pub fn load_by_path(path_str: &str) -> Result<WorkspaceConfig, AppError> { + let workspaces = Self::list()?; + + // Exact string match on base_path + for ws in &workspaces { + if ws.base_path == path_str { + return Ok(ws.clone()); + } + } + + // Canonical path comparison + let expanded = shellexpand::tilde(path_str); + let target = Path::new(expanded.as_ref()); + let target_canonical = + std::fs::canonicalize(target).unwrap_or_else(|_| target.to_path_buf()); + + for ws in workspaces { + let ws_expanded = ws.expanded_base_path(); + let ws_canonical = std::fs::canonicalize(&ws_expanded).unwrap_or(ws_expanded); + if ws_canonical == target_canonical { + return Ok(ws); + } + } + + Err(AppError::config(format!( + "No workspace configured for path '{}'", + path_str + ))) + } + + /// Returns the directory path for a workspace: `~/.config/git-same/<name>/`. + pub fn workspace_dir(name: &str) -> Result<PathBuf, AppError> { + Self::validate_workspace_name(name)?; + Ok(Self::config_dir()?.join(name)) + } + + /// Returns the cache file path for a workspace: `~/.config/git-same/<name>/workspace-cache.json`. 
+ pub fn cache_path(name: &str) -> Result<PathBuf, AppError> { + Ok(Self::workspace_dir(name)?.join("workspace-cache.json")) + } + + /// Returns the file path for a workspace config. + fn config_path(name: &str) -> Result<PathBuf, AppError> { + Ok(Self::workspace_dir(name)?.join("workspace-config.toml")) + } + + /// Validate workspace names to prevent path traversal. + fn validate_workspace_name(name: &str) -> Result<(), AppError> { + if name.trim().is_empty() { + return Err(AppError::config("Workspace name cannot be empty")); + } + + let path = Path::new(name); + if path.is_absolute() + || path.components().any(|c| { + matches!( + c, + Component::ParentDir | Component::RootDir | Component::Prefix(_) + ) + }) + { + return Err(AppError::config(format!( + "Invalid workspace name '{}'", + name + ))); + } + + if name.contains('/') || name.contains('\\') { + return Err(AppError::config(format!( + "Invalid workspace name '{}'", + name + ))); + } + + if !name + .chars() + .all(|c| c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.')) + { + return Err(AppError::config( + "Workspace name may only contain letters, numbers, '-', '_' and '.'", + )); + } + + Ok(()) + } + + /// Load a workspace config from a specific file path. 
+ fn load_from_path(path: &Path) -> Result<WorkspaceConfig, AppError> { + let content = std::fs::read_to_string(path).map_err(|e| { + AppError::config(format!( + "Failed to read workspace config at {}: {}", + path.display(), + e + )) + })?; + let mut ws = WorkspaceConfig::from_toml(&content)?; + + // Derive name from the parent folder + if let Some(parent) = path.parent() { + if let Some(folder_name) = parent.file_name().and_then(|n| n.to_str()) { + ws.name = folder_name.to_string(); + } + } + + Ok(ws) + } +} + +#[cfg(test)] +#[path = "workspace_store_tests.rs"] +mod tests; diff --git a/src/config/workspace_store_tests.rs b/src/config/workspace_store_tests.rs new file mode 100644 index 0000000..1b0a155 --- /dev/null +++ b/src/config/workspace_store_tests.rs @@ -0,0 +1,108 @@ +use super::*; +use std::path::Path; +use std::sync::Mutex; + +static HOME_LOCK: Mutex<()> = Mutex::new(()); + +fn with_temp_home<T>(home: &Path, f: impl FnOnce() -> T) -> T { + let _lock = HOME_LOCK.lock().expect("HOME lock poisoned"); + let original_home = std::env::var("HOME").ok(); + + struct HomeRestore(Option<String>); + impl Drop for HomeRestore { + fn drop(&mut self) { + if let Some(value) = self.0.take() { + std::env::set_var("HOME", value); + } else { + std::env::remove_var("HOME"); + } + } + } + + let _restore = HomeRestore(original_home); + std::env::set_var("HOME", home); + f() +} + +#[test] +fn workspace_and_cache_paths_are_derived_from_workspace_name() { + let temp = tempfile::tempdir().unwrap(); + + with_temp_home(temp.path(), || { + let workspace_dir = WorkspaceStore::workspace_dir("alpha").unwrap(); + let cache_path = WorkspaceStore::cache_path("alpha").unwrap(); + + assert_eq!(workspace_dir, temp.path().join(".config/git-same/alpha")); + assert_eq!( + cache_path, + temp.path() + .join(".config/git-same/alpha/workspace-cache.json") + ); + }); +} + +#[test] +fn load_from_path_roundtrip_sets_name_from_parent_directory() { + let temp = tempfile::tempdir().unwrap(); + let workspace_dir = temp.path().join("roundtrip"); 
+ std::fs::create_dir_all(&workspace_dir).unwrap(); + + let config_path = workspace_dir.join("workspace-config.toml"); + let workspace = WorkspaceConfig::new("ignored-name", "/tmp/roundtrip"); + std::fs::write(&config_path, workspace.to_toml().unwrap()).unwrap(); + + let loaded = WorkspaceStore::load_from_path(&config_path).unwrap(); + assert_eq!(loaded.name, "roundtrip"); + assert_eq!(loaded.base_path, "/tmp/roundtrip"); +} + +#[test] +fn save_load_and_list_roundtrip_in_empty_config_root() { + let temp = tempfile::tempdir().unwrap(); + + with_temp_home(temp.path(), || { + let listed_before = WorkspaceStore::list().unwrap(); + assert!(listed_before.is_empty()); + + let workspace = WorkspaceConfig::new("team-alpha", "/tmp/team-alpha"); + WorkspaceStore::save(&workspace).unwrap(); + + let loaded = WorkspaceStore::load("team-alpha").unwrap(); + assert_eq!(loaded.name, "team-alpha"); + assert_eq!(loaded.base_path, "/tmp/team-alpha"); + + let listed_after = WorkspaceStore::list().unwrap(); + assert_eq!(listed_after.len(), 1); + assert_eq!(listed_after[0].name, "team-alpha"); + }); +} + +#[test] +fn delete_nonexistent_workspace_returns_error() { + let temp = tempfile::tempdir().unwrap(); + + with_temp_home(temp.path(), || { + let err = WorkspaceStore::delete("ghost-workspace").unwrap_err(); + assert!(err.to_string().contains("not found")); + }); +} + +#[test] +fn workspace_name_rejects_path_traversal() { + let temp = tempfile::tempdir().unwrap(); + + with_temp_home(temp.path(), || { + let err = WorkspaceStore::workspace_dir("../escape").unwrap_err(); + assert!(err.to_string().contains("Invalid workspace name")); + }); +} + +#[test] +fn workspace_name_allows_safe_characters() { + let temp = tempfile::tempdir().unwrap(); + + with_temp_home(temp.path(), || { + let path = WorkspaceStore::workspace_dir("team.alpha-1_repo").unwrap(); + assert_eq!(path, temp.path().join(".config/git-same/team.alpha-1_repo")); + }); +} diff --git a/src/config/workspace_tests.rs 
b/src/config/workspace_tests.rs new file mode 100644 index 0000000..8c6a126 --- /dev/null +++ b/src/config/workspace_tests.rs @@ -0,0 +1,131 @@ +use super::*; + +#[test] +fn test_new_workspace_config() { + let ws = WorkspaceConfig::new("github", "~/github"); + assert_eq!(ws.name, "github"); + assert_eq!(ws.base_path, "~/github"); + assert_eq!(ws.provider.kind, ProviderKind::GitHub); + assert!(ws.orgs.is_empty()); + assert!(ws.last_synced.is_none()); +} + +#[test] +fn test_workspace_provider_default() { + let provider = WorkspaceProvider::default(); + assert_eq!(provider.kind, ProviderKind::GitHub); + assert_eq!(provider.auth, AuthMethod::GhCli); + assert!(provider.prefer_ssh); + assert!(provider.api_url.is_none()); +} + +#[test] +fn test_workspace_provider_to_provider_entry() { + let provider = WorkspaceProvider { + kind: ProviderKind::GitHub, + auth: AuthMethod::GhCli, + api_url: None, + prefer_ssh: false, + }; + let entry = provider.to_provider_entry(); + assert_eq!(entry.kind, ProviderKind::GitHub); + assert_eq!(entry.auth, AuthMethod::GhCli); + assert!(entry.api_url.is_none()); + assert!(!entry.prefer_ssh); + assert!(entry.enabled); +} + +#[test] +fn test_serde_roundtrip() { + let ws = WorkspaceConfig { + name: "my-workspace".to_string(), + base_path: "~/code/repos".to_string(), + provider: WorkspaceProvider { + kind: ProviderKind::GitHub, + auth: AuthMethod::GhCli, + api_url: None, + prefer_ssh: true, + }, + username: "testuser".to_string(), + orgs: vec!["org1".to_string(), "org2".to_string()], + include_repos: vec![], + exclude_repos: vec!["org1/skip-this".to_string()], + structure: Some("{org}/{repo}".to_string()), + sync_mode: Some(SyncMode::Pull), + clone_options: None, + filters: FilterOptions { + include_archived: false, + include_forks: true, + orgs: vec![], + exclude_repos: vec![], + }, + concurrency: Some(8), + refresh_interval: None, + last_synced: Some("2026-02-23T10:00:00Z".to_string()), + }; + + let toml_str = ws.to_toml().unwrap(); + let parsed = 
WorkspaceConfig::from_toml(&toml_str).unwrap(); + + // name is skip_serializing — it's derived from the folder, not the TOML + assert!(parsed.name.is_empty()); + assert_eq!(parsed.base_path, ws.base_path); + assert_eq!(parsed.username, ws.username); + assert_eq!(parsed.orgs, ws.orgs); + assert_eq!(parsed.exclude_repos, ws.exclude_repos); + assert_eq!(parsed.structure, ws.structure); + assert_eq!(parsed.sync_mode, ws.sync_mode); + assert_eq!(parsed.concurrency, ws.concurrency); + assert_eq!(parsed.last_synced, ws.last_synced); + assert_eq!(parsed.provider.kind, ws.provider.kind); + assert_eq!(parsed.provider.auth, ws.provider.auth); + assert!(parsed.filters.include_forks); +} + +#[test] +fn test_expanded_base_path() { + let ws = WorkspaceConfig::new("test", "~/github"); + let expanded = ws.expanded_base_path(); + assert!(!expanded.to_string_lossy().contains('~')); +} + +#[test] +fn test_summary() { + let ws = WorkspaceConfig { + orgs: vec!["org1".to_string(), "org2".to_string()], + last_synced: None, + ..WorkspaceConfig::new("github", "~/github") + }; + let summary = ws.summary(); + assert!(summary.contains("github")); + assert!(summary.contains("2 org(s)")); + assert!(summary.contains("never synced")); +} + +#[test] +fn test_display_label() { + let ws = WorkspaceConfig::new("github-repos", "~/repos"); + assert_eq!(ws.display_label(), "~/repos (GitHub)"); +} + +#[test] +fn test_summary_all_orgs() { + let ws = WorkspaceConfig::new("work", "~/work"); + let summary = ws.summary(); + assert!(summary.contains("all orgs")); +} + +#[test] +fn test_optional_fields_not_serialized_when_none() { + let ws = WorkspaceConfig::new("minimal", "~/minimal"); + let toml_str = ws.to_toml().unwrap(); + // name is derived from folder, never written to TOML as its own key + assert!( + !toml_str.lines().any(|l| l.starts_with("name ")), + "TOML should not contain a 'name' key" + ); + assert!(!toml_str.contains("structure")); + assert!(!toml_str.contains("sync_mode")); + 
assert!(!toml_str.contains("concurrency")); + assert!(!toml_str.contains("last_synced")); +} diff --git a/src/discovery.rs b/src/discovery.rs new file mode 100644 index 0000000..3192954 --- /dev/null +++ b/src/discovery.rs @@ -0,0 +1,239 @@ +//! Discovery orchestration module. +//! +//! This module coordinates repository discovery across providers +//! and manages action planning for clone/sync operations. + +use crate::config::FilterOptions; +use crate::domain::RepoPathTemplate; +use crate::git::GitOperations; +use crate::operations::sync::LocalRepo; +use crate::provider::{DiscoveryOptions, DiscoveryProgress, Provider}; +use crate::types::{ActionPlan, OwnedRepo}; +use std::collections::HashSet; +use std::path::{Path, PathBuf}; + +/// Orchestrates repository discovery. +pub struct DiscoveryOrchestrator { + /// Filter options + pub(crate) filters: FilterOptions, + /// Directory structure template + pub(crate) structure: String, +} + +impl DiscoveryOrchestrator { + /// Creates a new discovery orchestrator. + pub fn new(filters: FilterOptions, structure: String) -> Self { + Self { filters, structure } + } + + /// Converts filter options to discovery options. + pub fn to_discovery_options(&self) -> DiscoveryOptions { + DiscoveryOptions::new() + .with_archived(self.filters.include_archived) + .with_forks(self.filters.include_forks) + .with_orgs(self.filters.orgs.clone()) + .with_exclusions(self.filters.exclude_repos.clone()) + } + + /// Discovers repositories from a provider. + pub async fn discover( + &self, + provider: &dyn Provider, + progress: &dyn DiscoveryProgress, + ) -> Result<Vec<OwnedRepo>, crate::errors::ProviderError> { + let options = self.to_discovery_options(); + provider.discover_repos(&options, progress).await + } + + /// Computes the local path for a repository. 
+ pub fn compute_path(&self, base_path: &Path, repo: &OwnedRepo, provider: &str) -> PathBuf { + RepoPathTemplate::new(self.structure.clone()).render_owned_repo(base_path, repo, provider) + } + + /// Creates an action plan by comparing discovered repos with local filesystem. + pub fn plan_clone<G: GitOperations>( + &self, + base_path: &Path, + repos: Vec<OwnedRepo>, + provider: &str, + git: &G, + ) -> ActionPlan { + let mut plan = ActionPlan::new(); + + for repo in repos { + let local_path = self.compute_path(base_path, &repo, provider); + + if local_path.exists() { + if git.is_repo(&local_path) { + // Existing repo - add to sync + plan.add_sync(repo); + } else { + // Directory exists but not a repo + plan.add_skipped(repo, "directory exists but is not a git repository"); + } + } else { + // New repo - add to clone + plan.add_clone(repo); + } + } + + plan + } + + /// Creates a sync plan for existing local repositories. + pub fn plan_sync<G: GitOperations>( + &self, + base_path: &Path, + repos: Vec<OwnedRepo>, + provider: &str, + git: &G, + skip_uncommitted: bool, + ) -> (Vec<LocalRepo>, Vec<(OwnedRepo, String)>) { + let mut to_sync = Vec::new(); + let mut skipped = Vec::new(); + + for repo in repos { + let local_path = self.compute_path(base_path, &repo, provider); + + if !local_path.exists() { + skipped.push((repo, "not cloned locally".to_string())); + continue; + } + + if !git.is_repo(&local_path) { + skipped.push((repo, "not a git repository".to_string())); + continue; + } + + if skip_uncommitted { + match git.status(&local_path) { + Ok(status) => { + if status.is_uncommitted || status.has_untracked { + skipped.push((repo, "uncommitted changes".to_string())); + continue; + } + } + Err(err) => { + skipped.push((repo, format!("failed to get status: {}", err))); + continue; + } + } + } + + to_sync.push(LocalRepo::new(repo, local_path)); + } + + (to_sync, skipped) + } + + /// Scans local filesystem for cloned repositories. 
+ pub fn scan_local<G: GitOperations>( + &self, + base_path: &Path, + git: &G, + ) -> Vec<(PathBuf, String, String)> { + let mut repos = Vec::new(); + + // Determine scan depth based on structure + // {org}/{repo} -> 2 levels + // {provider}/{org}/{repo} -> 3 levels + let depth = RepoPathTemplate::new(self.structure.clone()).scan_depth(); + + self.scan_dir(base_path, base_path, git, &mut repos, 0, depth); + + repos + } + + /// Recursively scans directories for git repos. + fn scan_dir<G: GitOperations>( + &self, + base_path: &Path, + path: &Path, + git: &G, + repos: &mut Vec<(PathBuf, String, String)>, + current_depth: usize, + max_depth: usize, + ) { + if current_depth >= max_depth { + return; + } + + let entries = match std::fs::read_dir(path) { + Ok(e) => e, + Err(_) => return, + }; + + for entry in entries.flatten() { + let entry_path = entry.path(); + if !entry_path.is_dir() { + continue; + } + + // Skip hidden directories + if entry.file_name().to_string_lossy().starts_with('.') { + continue; + } + + if current_depth + 1 == max_depth && git.is_repo(&entry_path) { + // This is a repo at the expected depth + let rel_path = entry_path.strip_prefix(base_path).unwrap_or(&entry_path); + let parts: Vec<_> = rel_path.components().collect(); + + if parts.len() >= 2 { + let org = parts[parts.len() - 2] + .as_os_str() + .to_string_lossy() + .to_string(); + let repo = parts[parts.len() - 1] + .as_os_str() + .to_string_lossy() + .to_string(); + repos.push((entry_path.clone(), org, repo)); + } + } else { + // Recurse into subdirectory + self.scan_dir( + base_path, + &entry_path, + git, + repos, + current_depth + 1, + max_depth, + ); + } + } + } +} + +/// Merges discovered repos from multiple providers. 
+pub fn merge_repos(repos_by_provider: Vec<(String, Vec<OwnedRepo>)>) -> Vec<(String, OwnedRepo)> { + let mut result = Vec::new(); + + for (provider, repos) in repos_by_provider { + for repo in repos { + result.push((provider.clone(), repo)); + } + } + + result +} + +/// Deduplicates repos by full name, preferring first occurrence. +pub fn deduplicate_repos(repos: Vec<(String, OwnedRepo)>) -> Vec<(String, OwnedRepo)> { + let mut seen = HashSet::new(); + let mut result = Vec::new(); + + for (provider, repo) in repos { + let key = repo.full_name().to_string(); + if !seen.contains(&key) { + seen.insert(key); + result.push((provider, repo)); + } + } + + result +} + +#[cfg(test)] +#[path = "discovery_tests.rs"] +mod tests; diff --git a/src/discovery_tests.rs b/src/discovery_tests.rs new file mode 100644 index 0000000..503a00c --- /dev/null +++ b/src/discovery_tests.rs @@ -0,0 +1,169 @@ +use super::*; +use crate::git::MockGit; +use crate::types::Repo; +use tempfile::TempDir; + +fn test_repo(name: &str, owner: &str) -> OwnedRepo { + OwnedRepo::new(owner, Repo::test(name, owner)) +} + +#[test] +fn test_orchestrator_creation() { + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + assert_eq!(orchestrator.structure, "{org}/{repo}"); +} + +#[test] +fn test_compute_path_simple() { + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + + let repo = test_repo("my-repo", "my-org"); + let path = orchestrator.compute_path(Path::new("/base"), &repo, "github"); + + assert_eq!(path, PathBuf::from("/base/my-org/my-repo")); +} + +#[test] +fn test_compute_path_with_provider() { + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{provider}/{org}/{repo}".to_string()); + + let repo = test_repo("my-repo", "my-org"); + let path = orchestrator.compute_path(Path::new("/base"), &repo, "github"); + + 
assert_eq!(path, PathBuf::from("/base/github/my-org/my-repo")); +} + +#[test] +fn test_plan_clone_new_repos() { + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + let git = MockGit::new(); + + let repos = vec![test_repo("repo1", "org"), test_repo("repo2", "org")]; + + let plan = orchestrator.plan_clone(Path::new("/nonexistent"), repos, "github", &git); + + assert_eq!(plan.to_clone.len(), 2); + assert_eq!(plan.to_sync.len(), 0); + assert_eq!(plan.skipped.len(), 0); +} + +#[test] +fn test_plan_clone_existing_repos() { + let temp = TempDir::new().unwrap(); + let repo_path = temp.path().join("org/repo"); + std::fs::create_dir_all(&repo_path).unwrap(); + + let mut git = MockGit::new(); + git.add_repo(repo_path.to_string_lossy().to_string()); + + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + + let repos = vec![test_repo("repo", "org")]; + let plan = orchestrator.plan_clone(temp.path(), repos, "github", &git); + + assert_eq!(plan.to_clone.len(), 0); + assert_eq!(plan.to_sync.len(), 1); + assert_eq!(plan.skipped.len(), 0); +} + +#[test] +fn test_plan_clone_non_repo_dir() { + let temp = TempDir::new().unwrap(); + let repo_path = temp.path().join("org/repo"); + std::fs::create_dir_all(&repo_path).unwrap(); + + let git = MockGit::new(); // Not marked as a repo + + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + + let repos = vec![test_repo("repo", "org")]; + let plan = orchestrator.plan_clone(temp.path(), repos, "github", &git); + + assert_eq!(plan.to_clone.len(), 0); + assert_eq!(plan.to_sync.len(), 0); + assert_eq!(plan.skipped.len(), 1); +} + +#[test] +fn test_plan_sync() { + let temp = TempDir::new().unwrap(); + let repo_path = temp.path().join("org/repo"); + std::fs::create_dir_all(&repo_path).unwrap(); + + let mut git = 
MockGit::new(); + git.add_repo(repo_path.to_string_lossy().to_string()); + + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + + let repos = vec![test_repo("repo", "org")]; + let (to_sync, skipped) = orchestrator.plan_sync(temp.path(), repos, "github", &git, false); + + assert_eq!(to_sync.len(), 1); + assert_eq!(skipped.len(), 0); +} + +#[test] +fn test_plan_sync_not_cloned() { + let filters = FilterOptions::default(); + let orchestrator = DiscoveryOrchestrator::new(filters, "{org}/{repo}".to_string()); + let git = MockGit::new(); + + let repos = vec![test_repo("repo", "org")]; + let (to_sync, skipped) = + orchestrator.plan_sync(Path::new("/nonexistent"), repos, "github", &git, false); + + assert_eq!(to_sync.len(), 0); + assert_eq!(skipped.len(), 1); + assert!(skipped[0].1.contains("not cloned")); +} + +#[test] +fn test_merge_repos() { + let repos1 = vec![test_repo("repo1", "org1")]; + let repos2 = vec![test_repo("repo2", "org2")]; + + let merged = merge_repos(vec![ + ("github".to_string(), repos1), + ("gitlab".to_string(), repos2), + ]); + + assert_eq!(merged.len(), 2); + assert_eq!(merged[0].0, "github"); + assert_eq!(merged[1].0, "gitlab"); +} + +#[test] +fn test_deduplicate_repos() { + let repo1 = test_repo("repo", "org"); + let repo2 = test_repo("repo", "org"); // Duplicate + + let repos = vec![("github".to_string(), repo1), ("gitlab".to_string(), repo2)]; + + let deduped = deduplicate_repos(repos); + assert_eq!(deduped.len(), 1); + assert_eq!(deduped[0].0, "github"); // First one wins +} + +#[test] +fn test_to_discovery_options() { + let filters = FilterOptions { + include_archived: true, + include_forks: false, + orgs: vec!["org1".to_string(), "org2".to_string()], + exclude_repos: vec!["org/skip-this".to_string()], + }; + + let orchestrator = DiscoveryOrchestrator::new(filters.clone(), "{org}/{repo}".to_string()); + let options = orchestrator.to_discovery_options(); + + 
assert!(options.include_archived); + assert!(!options.include_forks); + assert_eq!(options.org_filter, vec!["org1", "org2"]); +} diff --git a/src/domain/mod.rs b/src/domain/mod.rs new file mode 100644 index 0000000..b312815 --- /dev/null +++ b/src/domain/mod.rs @@ -0,0 +1,5 @@ +//! Domain logic (pure business rules). + +pub mod repo_path_template; + +pub use repo_path_template::RepoPathTemplate; diff --git a/src/domain/repo_path_template.rs b/src/domain/repo_path_template.rs new file mode 100644 index 0000000..e5ee9ab --- /dev/null +++ b/src/domain/repo_path_template.rs @@ -0,0 +1,99 @@ +//! Repository path templating. + +use crate::types::OwnedRepo; +use std::path::{Component, Path, PathBuf}; + +/// Canonical renderer for workspace repository paths. +#[derive(Debug, Clone, PartialEq, Eq)] +pub struct RepoPathTemplate { + template: String, +} + +impl RepoPathTemplate { + /// Create a new path template. + pub fn new(template: impl Into<String>) -> Self { + Self { + template: template.into(), + } + } + + /// Returns the underlying template string. + pub fn as_str(&self) -> &str { + &self.template + } + + /// Render a repository path from template placeholders. + pub fn render(&self, base_path: &Path, provider: &str, owner: &str, repo: &str) -> PathBuf { + let provider = sanitize_component(provider); + let owner = sanitize_component(owner); + let repo = sanitize_component(repo); + let rendered = self + .template + .replace("{provider}", &provider) + .replace("{org}", &owner) + .replace("{repo}", &repo); + + base_path.join(rendered) + } + + /// Render a repository path from an owned repository object. + pub fn render_owned_repo(&self, base_path: &Path, repo: &OwnedRepo, provider: &str) -> PathBuf { + self.render(base_path, provider, &repo.owner, &repo.repo.name) + } + + /// Render from a full name (`org/repo`) when available. 
+ pub fn render_full_name( + &self, + base_path: &Path, + provider: &str, + full_name: &str, + ) -> Option<PathBuf> { + let (owner, repo) = full_name.split_once('/')?; + Some(self.render(base_path, provider, owner, repo)) + } + + /// Expected scan depth for local repository traversal. + pub fn scan_depth(&self) -> usize { + let sample = self + .template + .replace("{provider}", "provider") + .replace("{org}", "org") + .replace("{repo}", "repo"); + + let depth = Path::new(&sample) + .components() + .filter(|c| matches!(c, Component::Normal(_))) + .count(); + + depth.max(1) + } +} + +fn sanitize_component(value: &str) -> String { + let trimmed = value.trim(); + if trimmed.is_empty() { + return "_".to_string(); + } + + let mut sanitized = trimmed + .replace(['/', '\\'], "_") + .replace("..", "__") + .trim() + .to_string(); + + if sanitized.is_empty() { + sanitized = "_".to_string(); + } + + sanitized +} + +impl Default for RepoPathTemplate { + fn default() -> Self { + Self::new("{org}/{repo}") + } +} + +#[cfg(test)] +#[path = "repo_path_template_tests.rs"] +mod tests; diff --git a/src/domain/repo_path_template_tests.rs b/src/domain/repo_path_template_tests.rs new file mode 100644 index 0000000..a63db59 --- /dev/null +++ b/src/domain/repo_path_template_tests.rs @@ -0,0 +1,50 @@ +use super::*; + +#[test] +fn test_render_standard_template() { + let template = RepoPathTemplate::new("{org}/{repo}"); + let path = template.render(Path::new("/tmp/base"), "github", "acme", "api"); + assert_eq!(path, PathBuf::from("/tmp/base/acme/api")); +} + +#[test] +fn test_render_provider_template() { + let template = RepoPathTemplate::new("{provider}/{org}/{repo}"); + let path = template.render(Path::new("/tmp/base"), "github", "acme", "api"); + assert_eq!(path, PathBuf::from("/tmp/base/github/acme/api")); +} + +#[test] +fn test_scan_depth() { + assert_eq!(RepoPathTemplate::new("{org}/{repo}").scan_depth(), 2); + assert_eq!( + RepoPathTemplate::new("{provider}/{org}/{repo}").scan_depth(), + 3 + ); 
+ assert_eq!(RepoPathTemplate::new("code/{org}/{repo}").scan_depth(), 3); +} + +#[test] +fn test_render_full_name() { + let template = RepoPathTemplate::new("{org}/{repo}"); + let path = template.render_full_name(Path::new("/x"), "github", "acme/api"); + assert_eq!(path, Some(PathBuf::from("/x/acme/api"))); + assert!(template + .render_full_name(Path::new("/x"), "github", "invalid") + .is_none()); +} + +#[test] +fn test_render_sanitizes_path_components() { + let template = RepoPathTemplate::new("{provider}/{org}/{repo}"); + let path = template.render( + Path::new("/tmp/base"), + "github/enterprise", + "../acme", + "api\\v2", + ); + assert_eq!( + path, + PathBuf::from("/tmp/base/github_enterprise/___acme/api_v2") + ); +} diff --git a/src/errors/app.rs b/src/errors/app.rs new file mode 100644 index 0000000..f39412a --- /dev/null +++ b/src/errors/app.rs @@ -0,0 +1,139 @@ +//! Application-level error types. +//! +//! These errors represent top-level failures in the gisa application, +//! aggregating errors from providers, git operations, and configuration. + +use super::{GitError, ProviderError}; +use thiserror::Error; + +/// Top-level application errors. +/// +/// This enum aggregates all error types that can occur in the application, +/// providing a unified error type for the CLI interface. +#[derive(Error, Debug)] +pub enum AppError { + /// Configuration file error. + #[error("Configuration error: {0}")] + Config(String), + + /// Authentication failed across all methods. + #[error("Authentication failed: {0}")] + Auth(String), + + /// Error from a Git hosting provider. + #[error("Provider error: {0}")] + Provider( + #[from] + #[source] + ProviderError, + ), + + /// Error during a git operation. + #[error("Git error: {0}")] + Git( + #[from] + #[source] + GitError, + ), + + /// File system I/O error. + #[error("IO error: {0}")] + Io( + #[from] + #[source] + std::io::Error, + ), + + /// Path-related error (invalid path, not found, etc.). 
+ #[error("Path error: {0}")] + Path(String), + + /// User cancelled the operation. + #[error("Operation cancelled by user")] + Cancelled, + + /// Operation interrupted by signal. + #[error("Operation interrupted")] + Interrupted, + + /// Generic error with context. + #[error("{0}")] + Other( + #[from] + #[source] + anyhow::Error, + ), +} + +impl AppError { + /// Creates a configuration error. + pub fn config(message: impl Into<String>) -> Self { + AppError::Config(message.into()) + } + + /// Creates an authentication error. + pub fn auth(message: impl Into<String>) -> Self { + AppError::Auth(message.into()) + } + + /// Creates a path error. + pub fn path(message: impl Into<String>) -> Self { + AppError::Path(message.into()) + } + + /// Returns `true` if this error is recoverable with a retry. + pub fn is_retryable(&self) -> bool { + match self { + AppError::Provider(e) => e.is_retryable(), + AppError::Git(e) => e.is_retryable(), + AppError::Io(e) => { + // Some I/O errors are retryable + matches!( + e.kind(), + std::io::ErrorKind::TimedOut + | std::io::ErrorKind::Interrupted + | std::io::ErrorKind::WouldBlock + ) + } + _ => false, + } + } + + /// Returns a user-friendly exit code for this error. + pub fn exit_code(&self) -> i32 { + match self { + AppError::Config(_) => 2, + AppError::Auth(_) => 3, + AppError::Provider(_) => 4, + AppError::Git(_) => 5, + AppError::Io(_) => 6, + AppError::Path(_) => 7, + AppError::Cancelled => 130, // Standard for SIGINT + AppError::Interrupted => 130, + AppError::Other(_) => 1, + } + } + + /// Returns a suggested action to resolve this error. 
+ pub fn suggested_action(&self) -> &str { + match self { + AppError::Config(_) => { + "Check your config file for syntax errors, or run 'gisa init' to create one" + } + AppError::Auth(_) => "Run 'gh auth login' to authenticate with GitHub CLI", + AppError::Provider(e) => e.suggested_action(), + AppError::Git(e) => e.suggested_action(), + AppError::Io(_) => "Check file permissions and disk space", + AppError::Path(_) => "Check that the path exists and is accessible", + AppError::Cancelled | AppError::Interrupted => "Re-run the command to continue", + AppError::Other(_) => "Check the error message and try again", + } + } +} + +/// A convenience type alias for Results in this application. +pub type Result<T> = std::result::Result<T, AppError>; + +#[cfg(test)] +#[path = "app_tests.rs"] +mod tests; diff --git a/src/errors/app_tests.rs b/src/errors/app_tests.rs new file mode 100644 index 0000000..3a917af --- /dev/null +++ b/src/errors/app_tests.rs @@ -0,0 +1,88 @@ +use super::*; + +#[test] +fn test_from_provider_error() { + let provider_err = ProviderError::Authentication("bad token".to_string()); + let app_err: AppError = provider_err.into(); + assert!(matches!(app_err, AppError::Provider(_))); +} + +#[test] +fn test_from_git_error() { + let git_err = GitError::GitNotFound; + let app_err: AppError = git_err.into(); + assert!(matches!(app_err, AppError::Git(_))); +} + +#[test] +fn test_from_io_error() { + let io_err = std::io::Error::new(std::io::ErrorKind::NotFound, "file not found"); + let app_err: AppError = io_err.into(); + assert!(matches!(app_err, AppError::Io(_))); +} + +#[test] +fn test_exit_codes_are_distinct() { + let errors = [ + AppError::Config("test".to_string()), + AppError::Auth("test".to_string()), + AppError::Provider(ProviderError::Network("test".to_string())), + AppError::Git(GitError::GitNotFound), + AppError::Path("test".to_string()), + AppError::Cancelled, + ]; + + let codes: Vec<i32> = errors.iter().map(|e| e.exit_code()).collect(); + // Config, Auth, Provider, Git, 
Path should have unique codes + assert_eq!(codes[0], 2); // Config + assert_eq!(codes[1], 3); // Auth + assert_eq!(codes[2], 4); // Provider + assert_eq!(codes[3], 5); // Git + assert_eq!(codes[4], 7); // Path + assert_eq!(codes[5], 130); // Cancelled +} + +#[test] +fn test_is_retryable_delegates_to_inner() { + let retryable = AppError::Provider(ProviderError::Network("timeout".to_string())); + assert!(retryable.is_retryable()); + + let not_retryable = AppError::Provider(ProviderError::Authentication("bad".to_string())); + assert!(!not_retryable.is_retryable()); +} + +#[test] +fn test_config_error_not_retryable() { + let err = AppError::config("invalid toml"); + assert!(!err.is_retryable()); +} + +#[test] +fn test_helper_constructors() { + let err = AppError::config("bad config"); + assert!(matches!(err, AppError::Config(_))); + + let err = AppError::auth("no token"); + assert!(matches!(err, AppError::Auth(_))); + + let err = AppError::path("invalid path"); + assert!(matches!(err, AppError::Path(_))); +} + +#[test] +fn test_error_display() { + let err = AppError::config("missing base_path"); + let display = format!("{}", err); + assert!(display.contains("Configuration error")); + assert!(display.contains("missing base_path")); +} + +#[test] +fn test_suggested_action_returns_useful_text() { + let err = AppError::auth("no token found"); + let suggestion = err.suggested_action(); + assert_eq!( + suggestion, + "Run 'gh auth login' to authenticate with GitHub CLI" + ); +} diff --git a/src/errors/git.rs b/src/errors/git.rs new file mode 100644 index 0000000..8ebb80e --- /dev/null +++ b/src/errors/git.rs @@ -0,0 +1,177 @@ +//! Git operation error types. +//! +//! These errors represent failures that occur when executing git commands +//! via the shell (clone, fetch, pull, etc.). + +use thiserror::Error; + +/// Errors that occur during git operations. +#[derive(Error, Debug)] +pub enum GitError { + /// Git executable not found in PATH. + #[error("Git not found. 
Please install git and ensure it's in your PATH")] + GitNotFound, + + /// Clone operation failed. + #[error("Clone failed for {repo}: {message}")] + CloneFailed { + /// Repository URL or name + repo: String, + /// Error message from git + message: String, + }, + + /// Fetch operation failed. + #[error("Fetch failed for {repo}: {message}")] + FetchFailed { + /// Repository path or name + repo: String, + /// Error message from git + message: String, + }, + + /// Pull operation failed. + #[error("Pull failed for {repo}: {message}")] + PullFailed { + /// Repository path or name + repo: String, + /// Error message from git + message: String, + }, + + /// Repository has uncommitted changes that would be overwritten. + #[error("Repository has uncommitted changes: {path}")] + UncommittedRepository { + /// Path to the repository + path: String, + }, + + /// Path is not a git repository. + #[error("Not a git repository: {path}")] + NotARepository { + /// Path that was expected to be a repository + path: String, + }, + + /// Permission denied during git operation. + #[error("Permission denied: {0}")] + PermissionDenied(String), + + /// SSH key not configured for the host. + #[error("SSH key not configured for {host}. Run 'ssh -T git@{host}' to test")] + SshKeyMissing { + /// The git host (e.g., github.com) + host: String, + }, + + /// SSH authentication failed. + #[error("SSH authentication failed for {host}: {message}")] + SshAuthFailed { + /// The git host + host: String, + /// Error message + message: String, + }, + + /// Generic command execution failure. + #[error("Git command failed: {0}")] + CommandFailed(String), + + /// Timeout during git operation. + #[error("Git operation timed out after {seconds} seconds")] + Timeout { + /// Number of seconds before timeout + seconds: u64, + }, +} + +impl GitError { + /// Creates a clone failed error. 
+ pub fn clone_failed(repo: impl Into<String>, message: impl Into<String>) -> Self { + GitError::CloneFailed { + repo: repo.into(), + message: message.into(), + } + } + + /// Creates a fetch failed error. + pub fn fetch_failed(repo: impl AsRef<std::path::Path>, message: impl Into<String>) -> Self { + GitError::FetchFailed { + repo: repo.as_ref().to_string_lossy().to_string(), + message: message.into(), + } + } + + /// Creates a pull failed error. + pub fn pull_failed(repo: impl AsRef<std::path::Path>, message: impl Into<String>) -> Self { + GitError::PullFailed { + repo: repo.as_ref().to_string_lossy().to_string(), + message: message.into(), + } + } + + /// Creates a command failed error. + pub fn command_failed(command: impl Into<String>, message: impl Into<String>) -> Self { + GitError::CommandFailed(format!("{}: {}", command.into(), message.into())) + } + + /// Returns `true` if this error indicates the repository can be skipped + /// safely without affecting other operations. + pub fn is_skippable(&self) -> bool { + matches!( + self, + GitError::UncommittedRepository { .. } + | GitError::PermissionDenied(_) + | GitError::SshKeyMissing { .. } + | GitError::SshAuthFailed { .. } + ) + } + + /// Returns `true` if this error might be resolved by retrying. + pub fn is_retryable(&self) -> bool { + matches!(self, GitError::Timeout { .. }) + } + + /// Returns a user-friendly suggestion for how to resolve this error. + pub fn suggested_action(&self) -> &'static str { + match self { + GitError::GitNotFound => "Install git from https://git-scm.com/downloads", + GitError::CloneFailed { .. } => "Check the repository URL and your network connection", + GitError::FetchFailed { .. } | GitError::PullFailed { .. } => { + "Check your network connection and repository access" + } + GitError::UncommittedRepository { .. } => "Commit or stash your changes before syncing", + GitError::NotARepository { .. } => { + "The directory exists but is not a git repository. 
Remove it to clone fresh" + } + GitError::PermissionDenied(_) => "Check file permissions and your authentication", + GitError::SshKeyMissing { .. } => { + "Add your SSH key to the git hosting service, or use HTTPS authentication" + } + GitError::SshAuthFailed { .. } => { + "Check your SSH key configuration with your git host (e.g. 'ssh -T git@<host>')" + } + GitError::CommandFailed(_) => "Check the error message and try again", + GitError::Timeout { .. } => { + "The operation took too long. Try with a smaller repository or better connection" + } + } + } + + /// Extracts the repository identifier from the error, if available. + pub fn repo_identifier(&self) -> Option<&str> { + match self { + GitError::CloneFailed { repo, .. } + | GitError::FetchFailed { repo, .. } + | GitError::PullFailed { repo, .. } => Some(repo), + GitError::UncommittedRepository { path } | GitError::NotARepository { path } => { + Some(path) + } + _ => None, + } + } +} + +#[cfg(test)] +#[path = "git_tests.rs"] +mod tests; diff --git a/src/errors/git_tests.rs b/src/errors/git_tests.rs new file mode 100644 index 0000000..e8ee248 --- /dev/null +++ b/src/errors/git_tests.rs @@ -0,0 +1,87 @@ +use super::*; + +#[test] +fn test_uncommitted_repository_is_skippable() { + let err = GitError::UncommittedRepository { + path: "/home/user/repo".to_string(), + }; + assert!(err.is_skippable()); +} + +#[test] +fn test_ssh_errors_are_skippable() { + let err = GitError::SshKeyMissing { + host: "github.com".to_string(), + }; + assert!(err.is_skippable()); + + let err = GitError::SshAuthFailed { + host: "github.com".to_string(), + message: "Permission denied".to_string(), + }; + assert!(err.is_skippable()); +} + +#[test] +fn test_clone_failed_is_not_skippable() { + let err = GitError::CloneFailed { + repo: "org/repo".to_string(), + message: "Network error".to_string(), + }; + assert!(!err.is_skippable()); +} + +#[test] +fn test_timeout_is_retryable() { + let err = GitError::Timeout { seconds: 120 }; + 
assert!(err.is_retryable()); +} + +#[test] +fn test_git_not_found_is_not_retryable() { + let err = GitError::GitNotFound; + assert!(!err.is_retryable()); +} + +#[test] +fn test_command_failed_is_not_retryable() { + let err = GitError::CommandFailed("some failure".to_string()); + assert!(!err.is_retryable()); +} + +#[test] +fn test_repo_identifier_extraction() { + let err = GitError::CloneFailed { + repo: "my-org/my-repo".to_string(), + message: "error".to_string(), + }; + assert_eq!(err.repo_identifier(), Some("my-org/my-repo")); + + let err = GitError::UncommittedRepository { + path: "/path/to/repo".to_string(), + }; + assert_eq!(err.repo_identifier(), Some("/path/to/repo")); + + let err = GitError::GitNotFound; + assert_eq!(err.repo_identifier(), None); +} + +#[test] +fn test_error_display() { + let err = GitError::CloneFailed { + repo: "org/repo".to_string(), + message: "fatal: repository not found".to_string(), + }; + let display = format!("{}", err); + assert!(display.contains("org/repo")); + assert!(display.contains("repository not found")); +} + +#[test] +fn test_suggested_actions_are_helpful() { + let err = GitError::SshKeyMissing { + host: "github.com".to_string(), + }; + let suggestion = err.suggested_action(); + assert!(suggestion.contains("SSH") || suggestion.contains("HTTPS")); +} diff --git a/src/errors/mod.rs b/src/errors/mod.rs new file mode 100644 index 0000000..72660ff --- /dev/null +++ b/src/errors/mod.rs @@ -0,0 +1,24 @@ +//! Error types for the gisa application. +//! +//! This module provides a hierarchy of error types: +//! - [`AppError`] - Top-level application errors +//! - [`ProviderError`] - Errors from Git hosting providers (GitHub, GitLab, etc.) +//! - [`GitError`] - Errors from git command-line operations +//! +//! # Example +//! +//! ``` +//! use git_same::errors::{AppError, Result}; +//! +//! fn do_something() -> Result<()> { +//! Err(AppError::config("missing required field")) +//! } +//! 
``` + +mod app; +mod git; +mod provider; + +pub use app::{AppError, Result}; +pub use git::GitError; +pub use provider::ProviderError; diff --git a/src/errors/provider.rs b/src/errors/provider.rs new file mode 100644 index 0000000..16c6edc --- /dev/null +++ b/src/errors/provider.rs @@ -0,0 +1,118 @@ +//! Provider-specific error types for Git hosting services. +//! +//! These errors represent failures that occur when interacting with +//! provider APIs like GitHub, GitLab, or Bitbucket. + +use thiserror::Error; + +/// Errors that occur when interacting with a Git hosting provider's API. +#[derive(Error, Debug)] +pub enum ProviderError { + /// Authentication failed - invalid or expired token. + #[error("Authentication failed: {0}")] + Authentication(String), + + /// Network-level error - connection failed, timeout, etc. + #[error("Network error: {0}")] + Network(String), + + /// API returned an error response. + #[error("API error (HTTP {status}): {message}")] + Api { + /// HTTP status code + status: u16, + /// Error message from the API + message: String, + }, + + /// Rate limit exceeded. + #[error("Rate limited. Resets at {reset_time}")] + RateLimited { + /// When the rate limit resets (ISO 8601 format) + reset_time: String, + }, + + /// Failed to parse API response. + #[error("Failed to parse response: {0}")] + Parse(String), + + /// Configuration error for the provider. + #[error("Configuration error: {0}")] + Configuration(String), + + /// Feature not yet implemented. + #[error("Not implemented: {0}")] + NotImplemented(String), + + /// Resource not found (404). + #[error("Not found: {0}")] + NotFound(String), + + /// Permission denied (403 without rate limit). + #[error("Permission denied: {0}")] + PermissionDenied(String), +} + +impl ProviderError { + /// Returns `true` if this error is potentially recoverable with a retry. 
+ /// + /// Retryable errors include: + /// - Network errors (transient connectivity issues) + /// - Rate limiting (will succeed after waiting) + /// - HTTP 429 responses + /// - Server errors (5xx status codes) + pub fn is_retryable(&self) -> bool { + matches!( + self, + ProviderError::Network(_) + | ProviderError::RateLimited { .. } + | ProviderError::Api { status: 429, .. } + | ProviderError::Api { + status: 500..=599, + .. + } + ) + } + + /// Returns a user-friendly suggestion for how to resolve this error. + pub fn suggested_action(&self) -> &'static str { + match self { + ProviderError::Authentication(_) => { + "Re-authenticate with your Git provider or verify your access token/credentials" + } + ProviderError::RateLimited { .. } => { + "Wait for the rate limit to reset, or use a different authentication token" + } + ProviderError::Network(_) => "Check your internet connection and try again", + ProviderError::Api { status: 403, .. } => { + "Check that your token has the required permissions for this operation" + } + ProviderError::Api { status: 404, .. } | ProviderError::NotFound(_) => { + "The resource may have been deleted or you may have lost access" + } + ProviderError::PermissionDenied(_) => { + "Check that your token has the required permissions for this operation" + } + ProviderError::Configuration(_) => "Check your gisa.config.toml configuration file", + ProviderError::NotImplemented(_) => { + "This feature is not yet available. Check for updates" + } + _ => "Please check the error message and try again", + } + } + + /// Creates an API error from an HTTP status code and message. 
+ pub fn from_status(status: u16, message: impl Into<String>) -> Self { + let message = message.into(); + match status { + 401 => ProviderError::Authentication(message), + 403 => ProviderError::PermissionDenied(message), + 404 => ProviderError::NotFound(message), + _ => ProviderError::Api { status, message }, + } + } +} + +#[cfg(test)] +#[path = "provider_tests.rs"] +mod tests; diff --git a/src/errors/provider_tests.rs b/src/errors/provider_tests.rs new file mode 100644 index 0000000..66e3a53 --- /dev/null +++ b/src/errors/provider_tests.rs @@ -0,0 +1,100 @@ +use super::*; + +#[test] +fn test_network_error_is_retryable() { + let err = ProviderError::Network("connection refused".to_string()); + assert!(err.is_retryable()); +} + +#[test] +fn test_rate_limited_is_retryable() { + let err = ProviderError::RateLimited { + reset_time: "2024-01-01T00:00:00Z".to_string(), + }; + assert!(err.is_retryable()); +} + +#[test] +fn test_server_error_is_retryable() { + let err = ProviderError::Api { + status: 500, + message: "Internal Server Error".to_string(), + }; + assert!(err.is_retryable()); + + let err = ProviderError::Api { + status: 503, + message: "Service Unavailable".to_string(), + }; + assert!(err.is_retryable()); +} + +#[test] +fn test_429_is_retryable() { + let err = ProviderError::Api { + status: 429, + message: "Too Many Requests".to_string(), + }; + assert!(err.is_retryable()); +} + +#[test] +fn test_auth_error_is_not_retryable() { + let err = ProviderError::Authentication("bad token".to_string()); + assert!(!err.is_retryable()); +} + +#[test] +fn test_client_error_is_not_retryable() { + let err = ProviderError::Api { + status: 400, + message: "Bad Request".to_string(), + }; + assert!(!err.is_retryable()); + + let err = ProviderError::Api { + status: 404, + message: "Not Found".to_string(), + }; + assert!(!err.is_retryable()); +} + +#[test] +fn test_suggested_action_for_auth() { + let err = ProviderError::Authentication("token expired".to_string()); + 
assert!(err.suggested_action().contains("Re-authenticate")); +} + +#[test] +fn test_suggested_action_for_rate_limit() { + let err = ProviderError::RateLimited { + reset_time: "2024-01-01T00:00:00Z".to_string(), + }; + assert!(err.suggested_action().contains("rate limit")); +} + +#[test] +fn test_from_status_creates_correct_error_type() { + let err = ProviderError::from_status(401, "Unauthorized"); + assert!(matches!(err, ProviderError::Authentication(_))); + + let err = ProviderError::from_status(403, "Forbidden"); + assert!(matches!(err, ProviderError::PermissionDenied(_))); + + let err = ProviderError::from_status(404, "Not Found"); + assert!(matches!(err, ProviderError::NotFound(_))); + + let err = ProviderError::from_status(500, "Server Error"); + assert!(matches!(err, ProviderError::Api { status: 500, .. })); +} + +#[test] +fn test_error_display() { + let err = ProviderError::Api { + status: 500, + message: "Internal Server Error".to_string(), + }; + let display = format!("{}", err); + assert!(display.contains("500")); + assert!(display.contains("Internal Server Error")); +} diff --git a/src/git/mod.rs b/src/git/mod.rs new file mode 100644 index 0000000..c119b45 --- /dev/null +++ b/src/git/mod.rs @@ -0,0 +1,47 @@ +//! Git operations module. +//! +//! This module provides abstractions and implementations for git operations. +//! +//! # Architecture +//! +//! The module is built around the [`GitOperations`] trait, which abstracts +//! git commands like clone, fetch, pull, and status. This allows for: +//! +//! - Real implementations using shell commands ([`ShellGit`]) +//! - Mock implementations for testing +//! +//! # Example +//! +//! ```no_run +//! use git_same::git::{ShellGit, GitOperations, CloneOptions}; +//! use std::path::Path; +//! +//! let git = ShellGit::new(); +//! +//! // Clone a repository +//! let options = CloneOptions::new().with_depth(1); +//! git.clone_repo( +//! "git@github.com:user/repo.git", +//! Path::new("/tmp/repo"), +//! &options +//! 
).expect("Clone failed"); +//! +//! // Check status +//! let status = git.status(Path::new("/tmp/repo")).expect("Status failed"); +//! if status.is_clean_and_synced() { +//! println!("Repository is clean and in sync"); +//! } +//! ``` + +pub mod shell; +pub mod traits; + +pub use shell::ShellGit; +pub use traits::{CloneOptions, FetchResult, GitOperations, PullResult, RepoStatus}; + +#[cfg(test)] +pub use traits::mock::{MockConfig, MockGit}; + +#[cfg(test)] +#[path = "mod_tests.rs"] +mod tests; diff --git a/src/git/mod_tests.rs b/src/git/mod_tests.rs new file mode 100644 index 0000000..e54aba4 --- /dev/null +++ b/src/git/mod_tests.rs @@ -0,0 +1,20 @@ +use super::*; +use std::path::Path; + +#[test] +fn reexports_are_accessible() { + let _git = ShellGit::new(); + + let options = CloneOptions::new().with_depth(1).with_branch("main"); + assert_eq!(options.depth, 1); + assert_eq!(options.branch.as_deref(), Some("main")); +} + +#[test] +fn mock_git_reexport_behaves_as_expected() { + let mock = MockGit::new(); + let status = mock.status(Path::new("/tmp/nonexistent")).unwrap(); + + assert_eq!(status.branch, "main"); + assert!(!status.is_uncommitted); +} diff --git a/src/git/shell.rs b/src/git/shell.rs new file mode 100644 index 0000000..7dd2d71 --- /dev/null +++ b/src/git/shell.rs @@ -0,0 +1,362 @@ +//! Shell-based git command implementation. +//! +//! This module provides the real implementation of git operations +//! by invoking git commands through the shell. + +use crate::errors::GitError; +use crate::git::traits::{CloneOptions, FetchResult, GitOperations, PullResult, RepoStatus}; +use std::path::Path; +use std::process::{Command, Output}; +use tracing::{debug, trace}; + +/// Shell-based git operations. +/// +/// This implementation executes git commands via the shell and parses their output. +#[derive(Debug, Clone, Copy, Default)] +pub struct ShellGit; + +impl ShellGit { + /// Creates a new ShellGit instance. 
+ pub fn new() -> Self { + Self + } + + /// Runs a git command and returns the output. + fn run_git(&self, args: &[&str], cwd: Option<&Path>) -> Result<Output, GitError> { + let mut cmd = Command::new("git"); + cmd.args(args); + + if let Some(dir) = cwd { + cmd.current_dir(dir); + } + + // Prevent git from prompting for credentials + cmd.env("GIT_TERMINAL_PROMPT", "0"); + + cmd.output().map_err(|e| { + GitError::command_failed( + format!("git {}", args.join(" ")), + format!("Failed to execute: {}", e), + ) + }) + } + + /// Runs a git command and returns stdout as a string. + fn run_git_output(&self, args: &[&str], cwd: Option<&Path>) -> Result<String, GitError> { + let output = self.run_git(args, cwd)?; + + if output.status.success() { + Ok(String::from_utf8_lossy(&output.stdout).trim().to_string()) + } else { + let stderr = String::from_utf8_lossy(&output.stderr).to_string(); + Err(GitError::command_failed( + format!("git {}", args.join(" ")), + stderr, + )) + } + } + + /// Checks if a git command succeeds. + fn run_git_check(&self, args: &[&str], cwd: Option<&Path>) -> bool { + self.run_git(args, cwd) + .map(|o| o.status.success()) + .unwrap_or(false) + } + + /// Parses the porcelain status output. + fn parse_status_output(&self, output: &str, branch_output: &str) -> RepoStatus { + let mut staged_count: usize = 0; + let mut unstaged_count: usize = 0; + let mut untracked_count: usize = 0; + + for line in output.lines() { + if line.len() < 2 { + continue; + } + let bytes = line.as_bytes(); + let x = bytes[0]; // index (staged) status + let y = bytes[1]; // working tree (unstaged) status + + if x == b'?' && y == b'?' { + untracked_count += 1; + } else { + if x != b' ' && x != b'?' { + staged_count += 1; + } + if y != b' ' && y != b'?' 
{ + unstaged_count += 1; + } + } + } + let is_uncommitted = staged_count > 0 || unstaged_count > 0; + let has_untracked = untracked_count > 0; + + // Parse branch info from `git status -b --porcelain` + // Format: "## main...origin/main [ahead 1, behind 2]" or "## main" + let (branch, ahead, behind) = self.parse_branch_info(branch_output); + + RepoStatus { + branch, + is_uncommitted, + ahead, + behind, + has_untracked, + staged_count, + unstaged_count, + untracked_count, + } + } + + /// Parses branch info from git status -b --porcelain output. + fn parse_branch_info(&self, output: &str) -> (String, u32, u32) { + let first_line = output.lines().next().unwrap_or(""); + + // Remove the "## " prefix + let line = first_line.strip_prefix("## ").unwrap_or(first_line); + + // Split on "..." to get branch name and tracking info + let (branch_part, info_part): (&str, Option<&str>) = if let Some(idx) = line.find("...") { + (&line[..idx], Some(&line[idx + 3..])) + } else { + // No tracking branch, but might have [ahead X, behind Y] directly + // e.g., "## feature [ahead 1, behind 2]" + if let Some(bracket_idx) = line.find('[') { + (line[..bracket_idx].trim_end(), Some(&line[bracket_idx..])) + } else { + let branch = line.split_whitespace().next().unwrap_or("HEAD"); + (branch, None) + } + }; + + let branch = branch_part.to_string(); + let mut ahead = 0; + let mut behind = 0; + + // Parse ahead/behind from info part + // Format: "origin/main [ahead 1, behind 2]" or "[ahead 1]" or "origin/main [ahead 1]" + if let Some(info) = info_part { + if let Some(start) = info.find('[') { + if let Some(end) = info.find(']') { + let bracket_content = &info[start + 1..end]; + for part in bracket_content.split(", ") { + if let Some(n) = part.strip_prefix("ahead ") { + ahead = n.parse().unwrap_or(0); + } else if let Some(n) = part.strip_prefix("behind ") { + behind = n.parse().unwrap_or(0); + } + } + } + } + } + + (branch, ahead, behind) + } +} + +impl GitOperations for ShellGit { + fn 
clone_repo(&self, url: &str, target: &Path, options: &CloneOptions) -> Result<(), GitError> { + debug!( + url, + target = %target.display(), + depth = options.depth, + branch = options.branch.as_deref().unwrap_or("default"), + recurse_submodules = options.recurse_submodules, + "Starting git clone" + ); + + let mut args = vec!["clone"]; + + // Add depth if specified + let depth_str; + if options.depth > 0 { + depth_str = options.depth.to_string(); + args.push("--depth"); + args.push(&depth_str); + } + + // Add branch if specified + if let Some(ref branch) = options.branch { + args.push("--branch"); + args.push(branch); + } + + // Add submodule recursion if requested + if options.recurse_submodules { + args.push("--recurse-submodules"); + } + + // Add URL and target + args.push(url); + let target_str = target.to_string_lossy(); + args.push(&target_str); + + trace!(args = ?args, "Executing git command"); + let output = self.run_git(&args, None)?; + + if output.status.success() { + debug!(url, target = %target.display(), "Clone completed successfully"); + Ok(()) + } else { + let stderr = String::from_utf8_lossy(&output.stderr).to_string(); + debug!(url, error = %stderr, "Clone failed"); + Err(GitError::clone_failed(url, stderr)) + } + } + + fn fetch(&self, repo_path: &Path) -> Result { + debug!(repo = %repo_path.display(), "Starting git fetch"); + + // Resolve upstream tracking ref and snapshot its commit before fetch. 
+ let tracking_branch = self + .run_git_output( + &["rev-parse", "--abbrev-ref", "--symbolic-full-name", "@{u}"], + Some(repo_path), + ) + .ok(); + let before_upstream = tracking_branch.as_ref().and_then(|tracking| { + self.run_git_output(&["rev-parse", tracking], Some(repo_path)) + .ok() + }); + + // Run fetch + trace!(repo = %repo_path.display(), "Executing fetch --all --prune"); + let output = self.run_git(&["fetch", "--all", "--prune"], Some(repo_path))?; + + if !output.status.success() { + let stderr = String::from_utf8_lossy(&output.stderr).to_string(); + debug!(repo = %repo_path.display(), error = %stderr, "Fetch failed"); + return Err(GitError::fetch_failed(repo_path, stderr)); + } + + // Compare upstream commit before/after fetch to determine whether remote changed. + let updated = if let (Some(before_ref), Some(tracking)) = + (before_upstream, tracking_branch.as_deref()) + { + let after = self + .run_git_output(&["rev-parse", tracking], Some(repo_path)) + .ok(); + after.map(|a| a != before_ref).unwrap_or(false) + } else { + false + }; + + // Count new commits if updated + let new_commits = if updated { + self.run_git_output(&["rev-list", "--count", "HEAD..@{u}"], Some(repo_path)) + .ok() + .and_then(|s| s.parse().ok()) + } else { + Some(0) + }; + + debug!( + repo = %repo_path.display(), + updated, + new_commits = new_commits.unwrap_or(0), + "Fetch completed" + ); + + Ok(FetchResult { + updated, + new_commits, + }) + } + + fn pull(&self, repo_path: &Path) -> Result { + debug!(repo = %repo_path.display(), "Starting git pull"); + + // First check status + let status = self.status(repo_path)?; + + if status.is_uncommitted { + debug!(repo = %repo_path.display(), "Skipping pull: uncommitted changes"); + return Ok(PullResult { + success: false, + updated: false, + fast_forward: false, + error: Some("Working tree has uncommitted changes".to_string()), + }); + } + + // Try fast-forward only pull + trace!(repo = %repo_path.display(), "Executing pull --ff-only"); + 
let output = self.run_git(&["pull", "--ff-only"], Some(repo_path))?; + + if output.status.success() { + let stdout = String::from_utf8_lossy(&output.stdout); + let updated = !stdout.contains("Already up to date"); + let fast_forward = + stdout.contains("Fast-forward") || stdout.contains("Already up to date"); + + debug!( + repo = %repo_path.display(), + updated, + fast_forward, + "Pull completed successfully" + ); + + Ok(PullResult { + success: true, + updated, + fast_forward, + error: None, + }) + } else { + let stderr = String::from_utf8_lossy(&output.stderr).to_string(); + + // Check if it's a non-fast-forward situation + if stderr.contains("Not possible to fast-forward") { + debug!(repo = %repo_path.display(), "Pull failed: branch has diverged"); + Ok(PullResult { + success: false, + updated: false, + fast_forward: false, + error: Some("Cannot fast-forward, local branch has diverged".to_string()), + }) + } else { + debug!(repo = %repo_path.display(), error = %stderr, "Pull failed"); + Err(GitError::pull_failed(repo_path, stderr)) + } + } + } + + fn status(&self, repo_path: &Path) -> Result { + // Get status with branch info + let branch_output = + self.run_git_output(&["status", "-b", "--porcelain"], Some(repo_path))?; + + // Get just the file status + let status_output = self.run_git_output(&["status", "--porcelain"], Some(repo_path))?; + + Ok(self.parse_status_output(&status_output, &branch_output)) + } + + fn is_repo(&self, path: &Path) -> bool { + if !path.exists() { + return false; + } + + self.run_git_check(&["rev-parse", "--git-dir"], Some(path)) + } + + fn current_branch(&self, repo_path: &Path) -> Result { + self.run_git_output(&["rev-parse", "--abbrev-ref", "HEAD"], Some(repo_path)) + } + + fn remote_url(&self, repo_path: &Path, remote: &str) -> Result { + self.run_git_output(&["remote", "get-url", remote], Some(repo_path)) + } + + fn recent_commits(&self, repo_path: &Path, limit: usize) -> Result, GitError> { + let limit_arg = format!("-{}", limit); 
+ let output = self.run_git_output(&["log", "--oneline", &limit_arg], Some(repo_path))?; + if output.is_empty() { + return Ok(Vec::new()); + } + Ok(output.lines().map(|l| l.to_string()).collect()) + } +} + +#[cfg(test)] +#[path = "shell_tests.rs"] +mod tests; diff --git a/src/git/shell_tests.rs b/src/git/shell_tests.rs new file mode 100644 index 0000000..96bc1e5 --- /dev/null +++ b/src/git/shell_tests.rs @@ -0,0 +1,122 @@ +use super::*; + +#[test] +fn test_shell_git_creation() { + let _git = ShellGit::new(); + // ShellGit is a zero-sized type with no fields +} + +#[test] +fn test_parse_branch_info_simple() { + let git = ShellGit::new(); + let (branch, ahead, behind) = git.parse_branch_info("## main"); + assert_eq!(branch, "main"); + assert_eq!(ahead, 0); + assert_eq!(behind, 0); +} + +#[test] +fn test_parse_branch_info_with_tracking() { + let git = ShellGit::new(); + let (branch, ahead, behind) = git.parse_branch_info("## main...origin/main"); + assert_eq!(branch, "main"); + assert_eq!(ahead, 0); + assert_eq!(behind, 0); +} + +#[test] +fn test_parse_branch_info_ahead() { + let git = ShellGit::new(); + let (branch, ahead, behind) = git.parse_branch_info("## feature...origin/feature [ahead 3]"); + assert_eq!(branch, "feature"); + assert_eq!(ahead, 3); + assert_eq!(behind, 0); +} + +#[test] +fn test_parse_branch_info_behind() { + let git = ShellGit::new(); + let (branch, ahead, behind) = git.parse_branch_info("## main...origin/main [behind 5]"); + assert_eq!(branch, "main"); + assert_eq!(ahead, 0); + assert_eq!(behind, 5); +} + +#[test] +fn test_parse_branch_info_diverged() { + let git = ShellGit::new(); + let (branch, ahead, behind) = + git.parse_branch_info("## develop...origin/develop [ahead 2, behind 7]"); + assert_eq!(branch, "develop"); + assert_eq!(ahead, 2); + assert_eq!(behind, 7); +} + +#[test] +fn test_parse_status_clean() { + let git = ShellGit::new(); + let status = git.parse_status_output("", "## main...origin/main"); + assert!(!status.is_uncommitted); + 
+    assert!(!status.has_untracked);
+    assert_eq!(status.branch, "main");
+}
+
+#[test]
+fn test_parse_status_modified() {
+    let git = ShellGit::new();
+    let status = git.parse_status_output(" M src/main.rs", "## main");
+    assert!(status.is_uncommitted);
+    assert!(!status.has_untracked);
+}
+
+#[test]
+fn test_parse_status_untracked() {
+    let git = ShellGit::new();
+    let status = git.parse_status_output("?? newfile.txt", "## main");
+    assert!(!status.is_uncommitted);
+    assert!(status.has_untracked);
+}
+
+#[test]
+fn test_parse_status_mixed() {
+    let git = ShellGit::new();
+    let output = " M src/main.rs\n?? newfile.txt\nA staged.rs";
+    let status = git.parse_status_output(output, "## feature [ahead 1, behind 2]");
+    assert!(status.is_uncommitted);
+    assert!(status.has_untracked);
+    assert_eq!(status.branch, "feature");
+    assert_eq!(status.ahead, 1);
+    assert_eq!(status.behind, 2);
+}
+
+// Integration tests that require actual git repo
+#[test]
+#[ignore] // Run with: cargo test -- --ignored
+fn test_is_repo_real() {
+    let git = ShellGit::new();
+    // Current directory should be a git repo
+    assert!(git.is_repo(Path::new(".")));
+    // Root is not a git repo
+    assert!(!git.is_repo(Path::new("/")));
+}
+
+#[test]
+#[ignore]
+fn test_current_branch_real() {
+    let git = ShellGit::new();
+    let branch = git.current_branch(Path::new("."));
+    assert!(branch.is_ok());
+    // Should return some branch name
+    assert!(!branch.unwrap().is_empty());
+}
+
+#[test]
+#[ignore]
+fn test_status_real() {
+    let git = ShellGit::new();
+    let status = git.status(Path::new("."));
+    assert!(status.is_ok());
    let status = status.unwrap();
+    // Should have a branch
+    assert!(!status.branch.is_empty());
+}
diff --git a/src/git/traits.rs b/src/git/traits.rs
new file mode 100644
index 0000000..d52d325
--- /dev/null
+++ b/src/git/traits.rs
@@ -0,0 +1,396 @@
+//! Git operations trait definitions.
+//!
+//! This module defines the trait abstractions for git operations,
+//! allowing for both real and mock implementations for testing.
+
+use crate::errors::GitError;
+use std::path::Path;
+
+/// Options for cloning a repository.
+#[derive(Debug, Clone, Default)]
+pub struct CloneOptions {
+    /// Clone depth (0 = full clone)
+    pub depth: u32,
+    /// Specific branch to clone
+    pub branch: Option<String>,
+    /// Whether to recurse into submodules
+    pub recurse_submodules: bool,
+}
+
+impl CloneOptions {
+    /// Creates new clone options with defaults.
+    pub fn new() -> Self {
+        Self::default()
+    }
+
+    /// Sets the clone depth.
+    pub fn with_depth(mut self, depth: u32) -> Self {
+        self.depth = depth;
+        self
+    }
+
+    /// Sets the branch to clone.
+    pub fn with_branch(mut self, branch: impl Into<String>) -> Self {
+        self.branch = Some(branch.into());
+        self
+    }
+
+    /// Enables recursive submodule cloning.
+    pub fn with_submodules(mut self) -> Self {
+        self.recurse_submodules = true;
+        self
+    }
+}
+
+/// Status of a local repository.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct RepoStatus {
+    /// Current branch name
+    pub branch: String,
+    /// Whether the working tree has uncommitted changes
+    pub is_uncommitted: bool,
+    /// Number of commits ahead of upstream
+    pub ahead: u32,
+    /// Number of commits behind upstream
+    pub behind: u32,
+    /// Whether there are untracked files
+    pub has_untracked: bool,
+    /// Number of staged (index) changes
+    pub staged_count: usize,
+    /// Number of unstaged (working tree) changes
+    pub unstaged_count: usize,
+    /// Number of untracked files
+    pub untracked_count: usize,
+}
+
+impl RepoStatus {
+    /// Returns true if the repo is clean and in sync with upstream.
+    pub fn is_clean_and_synced(&self) -> bool {
+        !self.is_uncommitted && !self.has_untracked && self.ahead == 0 && self.behind == 0
+    }
+
+    /// Returns true if it's safe to do a fast-forward pull.
+    pub fn can_fast_forward(&self) -> bool {
+        !self.is_uncommitted && self.ahead == 0 && self.behind > 0
+    }
+}
+
+/// Result of a fetch operation.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct FetchResult {
+    /// Whether any new commits were fetched
+    pub updated: bool,
+    /// Number of new commits (if available)
+    pub new_commits: Option<u32>,
+}
+
+/// Result of a pull operation.
+#[derive(Debug, Clone, PartialEq, Eq)]
+pub struct PullResult {
+    /// Whether the pull was successful
+    pub success: bool,
+    /// Whether the pull applied updates to the local branch
+    pub updated: bool,
+    /// Whether this was a fast-forward
+    pub fast_forward: bool,
+    /// Error message if not successful
+    pub error: Option<String>,
+}
+
+/// Trait for git operations.
+///
+/// This trait abstracts git commands to allow for testing with mocks.
+pub trait GitOperations: Send + Sync {
+    /// Clones a repository to the target path.
+    ///
+    /// # Arguments
+    /// * `url` - The clone URL (SSH or HTTPS)
+    /// * `target` - Target directory path
+    /// * `options` - Clone options (depth, branch, submodules)
+    fn clone_repo(&self, url: &str, target: &Path, options: &CloneOptions) -> Result<(), GitError>;
+
+    /// Fetches updates from the remote.
+    ///
+    /// # Arguments
+    /// * `repo_path` - Path to the local repository
+    fn fetch(&self, repo_path: &Path) -> Result<FetchResult, GitError>;
+
+    /// Pulls updates from the remote.
+    ///
+    /// # Arguments
+    /// * `repo_path` - Path to the local repository
+    fn pull(&self, repo_path: &Path) -> Result<PullResult, GitError>;
+
+    /// Gets the status of a local repository.
+    ///
+    /// # Arguments
+    /// * `repo_path` - Path to the local repository
+    fn status(&self, repo_path: &Path) -> Result<RepoStatus, GitError>;
+
+    /// Checks if a directory is a git repository.
+    ///
+    /// # Arguments
+    /// * `path` - Path to check
+    fn is_repo(&self, path: &Path) -> bool;
+
+    /// Gets the current branch name.
+    ///
+    /// # Arguments
+    /// * `repo_path` - Path to the local repository
+    fn current_branch(&self, repo_path: &Path) -> Result<String, GitError>;
+
+    /// Gets the remote URL for a repository.
+    ///
+    /// # Arguments
+    /// * `repo_path` - Path to the local repository
+    /// * `remote` - Remote name (default: "origin")
+    fn remote_url(&self, repo_path: &Path, remote: &str) -> Result<String, GitError>;
+
+    /// Gets recent commits as one-line summaries.
+    ///
+    /// # Arguments
+    /// * `repo_path` - Path to the local repository
+    /// * `limit` - Maximum number of commits to return
+    fn recent_commits(&self, repo_path: &Path, limit: usize) -> Result<Vec<String>, GitError>;
+}
+
+/// A mock implementation of GitOperations for testing.
+#[cfg(test)]
+pub mod mock {
+    use super::*;
+    use std::collections::HashMap;
+    use std::sync::{Arc, Mutex};
+
+    /// Records of operations performed.
+    #[derive(Debug, Clone, Default)]
+    pub struct MockCallLog {
+        pub clones: Vec<(String, String, CloneOptions)>, // (url, path, options)
+        pub fetches: Vec<String>,                        // paths
+        pub pulls: Vec<String>,                          // paths
+        pub status_checks: Vec<String>,                  // paths
+    }
+
+    /// Configuration for mock responses.
+    #[derive(Debug, Clone)]
+    pub struct MockConfig {
+        /// Whether clone operations should succeed
+        pub clone_succeeds: bool,
+        /// Whether fetch operations should succeed
+        pub fetch_succeeds: bool,
+        /// Whether pull operations should succeed
+        pub pull_succeeds: bool,
+        /// Whether fetch reports updates
+        pub fetch_has_updates: bool,
+        /// Default status to return
+        pub default_status: RepoStatus,
+        /// Custom statuses per path
+        pub path_statuses: HashMap<String, RepoStatus>,
+        /// Paths that are valid repos
+        pub valid_repos: Vec<String>,
+        /// Custom error message for failures
+        pub error_message: Option<String>,
+    }
+
+    impl Default for MockConfig {
+        fn default() -> Self {
+            Self {
+                clone_succeeds: true,
+                fetch_succeeds: true,
+                pull_succeeds: true,
+                fetch_has_updates: false,
+                default_status: RepoStatus {
+                    branch: "main".to_string(),
+                    is_uncommitted: false,
+                    ahead: 0,
+                    behind: 0,
+                    has_untracked: false,
+                    staged_count: 0,
+                    unstaged_count: 0,
+                    untracked_count: 0,
+                },
+                path_statuses: HashMap::new(),
+                valid_repos: Vec::new(),
+                error_message: None,
+            }
+        }
+    }
+
+    /// Mock git operations for testing.
+    pub struct MockGit {
+        config: MockConfig,
+        log: Arc<Mutex<MockCallLog>>,
+    }
+
+    impl MockGit {
+        /// Creates a new mock with default configuration.
+        pub fn new() -> Self {
+            Self {
+                config: MockConfig::default(),
+                log: Arc::new(Mutex::new(MockCallLog::default())),
+            }
+        }
+
+        /// Creates a new mock with custom configuration.
+        pub fn with_config(config: MockConfig) -> Self {
+            Self {
+                config,
+                log: Arc::new(Mutex::new(MockCallLog::default())),
+            }
+        }
+
+        /// Gets the call log.
+        pub fn call_log(&self) -> MockCallLog {
+            self.log.lock().unwrap().clone()
+        }
+
+        /// Marks a path as a valid repo.
+        pub fn add_repo(&mut self, path: impl Into<String>) {
+            self.config.valid_repos.push(path.into());
+        }
+
+        /// Sets a custom status for a path.
+        pub fn set_status(&mut self, path: impl Into<String>, status: RepoStatus) {
+            self.config.path_statuses.insert(path.into(), status);
+        }
+
+        /// Configures clone to fail.
+        pub fn fail_clones(&mut self, message: Option<String>) {
+            self.config.clone_succeeds = false;
+            self.config.error_message = message;
+        }
+
+        /// Configures fetch to fail.
+        pub fn fail_fetches(&mut self, message: Option<String>) {
+            self.config.fetch_succeeds = false;
+            self.config.error_message = message;
+        }
+
+        /// Configures pull to fail.
+        pub fn fail_pulls(&mut self, message: Option<String>) {
+            self.config.pull_succeeds = false;
+            self.config.error_message = message;
+        }
+    }
+
+    impl Default for MockGit {
+        fn default() -> Self {
+            Self::new()
+        }
+    }
+
+    impl GitOperations for MockGit {
+        fn clone_repo(
+            &self,
+            url: &str,
+            target: &Path,
+            options: &CloneOptions,
+        ) -> Result<(), GitError> {
+            let mut log = self.log.lock().unwrap();
+            log.clones.push((
+                url.to_string(),
+                target.to_string_lossy().to_string(),
+                options.clone(),
+            ));
+
+            if self.config.clone_succeeds {
+                Ok(())
+            } else {
+                Err(GitError::clone_failed(
+                    url,
+                    self.config
+                        .error_message
+                        .as_deref()
+                        .unwrap_or("mock clone failure"),
+                ))
+            }
+        }
+
+        fn fetch(&self, repo_path: &Path) -> Result<FetchResult, GitError> {
+            let mut log = self.log.lock().unwrap();
+            log.fetches.push(repo_path.to_string_lossy().to_string());
+
+            if self.config.fetch_succeeds {
+                Ok(FetchResult {
+                    updated: self.config.fetch_has_updates,
+                    new_commits: if self.config.fetch_has_updates {
+                        Some(3)
+                    } else {
+                        Some(0)
+                    },
+                })
+            } else {
+                Err(GitError::fetch_failed(
+                    repo_path,
+                    self.config
+                        .error_message
+                        .as_deref()
+                        .unwrap_or("mock fetch failure"),
+                ))
+            }
+        }
+
+        fn pull(&self, repo_path: &Path) -> Result<PullResult, GitError> {
+            let mut log = self.log.lock().unwrap();
+            log.pulls.push(repo_path.to_string_lossy().to_string());
+
+            if self.config.pull_succeeds {
+                Ok(PullResult {
+                    success: true,
+                    updated: true,
+                    fast_forward: true,
+                    error: None,
+                })
+            } else {
+                Err(GitError::pull_failed(
+                    repo_path,
+                    self.config
+                        .error_message
+                        .as_deref()
+                        .unwrap_or("mock pull failure"),
+                ))
+            }
+        }
+
+        fn status(&self, repo_path: &Path) -> Result<RepoStatus, GitError> {
+            let mut log = self.log.lock().unwrap();
+            let path_str = repo_path.to_string_lossy().to_string();
+            log.status_checks.push(path_str.clone());
+
+            if let Some(status) = self.config.path_statuses.get(&path_str) {
+                Ok(status.clone())
+            } else {
+                Ok(self.config.default_status.clone())
+            }
+        }
+
+        fn is_repo(&self, path: &Path) -> bool {
+            let path_str = path.to_string_lossy().to_string();
+            self.config.valid_repos.contains(&path_str)
+        }
+
+        fn current_branch(&self, repo_path: &Path) -> Result<String, GitError> {
+            let path_str = repo_path.to_string_lossy().to_string();
+            if let Some(status) = self.config.path_statuses.get(&path_str) {
+                Ok(status.branch.clone())
+            } else {
+                Ok(self.config.default_status.branch.clone())
+            }
+        }
+
+        fn remote_url(&self, _repo_path: &Path, _remote: &str) -> Result<String, GitError> {
+            Ok("git@github.com:example/repo.git".to_string())
+        }
+
+        fn recent_commits(
+            &self,
+            _repo_path: &Path,
+            _limit: usize,
+        ) -> Result<Vec<String>, GitError> {
+            Ok(Vec::new())
+        }
+    }
+}
+
+#[cfg(test)]
+#[path = "traits_tests.rs"]
+mod tests;
diff --git a/src/git/traits_tests.rs b/src/git/traits_tests.rs
new file mode 100644
index 0000000..a90dcc7
--- /dev/null
+++ b/src/git/traits_tests.rs
@@ -0,0 +1,189 @@
+use super::*;
+
+#[test]
+fn test_clone_options_builder() {
+    let options = CloneOptions::new()
+        .with_depth(1)
+        .with_branch("develop")
+        .with_submodules();
+
+    assert_eq!(options.depth, 1);
+    assert_eq!(options.branch, Some("develop".to_string()));
+    assert!(options.recurse_submodules);
+}
+
+#[test]
+fn test_clone_options_default() {
+    let options = CloneOptions::default();
+    assert_eq!(options.depth, 0);
+    assert!(options.branch.is_none());
+    assert!(!options.recurse_submodules);
+}
+
+#[test]
+fn test_repo_status_clean_and_synced() {
+    let status = RepoStatus {
+        branch: "main".to_string(),
+        is_uncommitted: false,
+        ahead: 0,
+        behind: 0,
+        has_untracked: false,
+        staged_count: 0,
+        unstaged_count: 0,
+        untracked_count: 0,
+    };
+    assert!(status.is_clean_and_synced());
+
+    let uncommitted_status = RepoStatus {
+        is_uncommitted: true,
+        ..status.clone()
+    };
+    assert!(!uncommitted_status.is_clean_and_synced());
+
+    let ahead = RepoStatus {
+        ahead: 1,
+        ..status.clone()
+    };
+    assert!(!ahead.is_clean_and_synced());
+}
+
+#[test]
+fn test_repo_status_can_fast_forward() {
+    let status = RepoStatus {
+        branch:
"main".to_string(), + is_uncommitted: false, + ahead: 0, + behind: 3, + has_untracked: false, + staged_count: 0, + unstaged_count: 0, + untracked_count: 0, + }; + assert!(status.can_fast_forward()); + + let uncommitted_status = RepoStatus { + is_uncommitted: true, + ..status.clone() + }; + assert!(!uncommitted_status.can_fast_forward()); + + let diverged = RepoStatus { + ahead: 1, + behind: 3, + ..status.clone() + }; + assert!(!diverged.can_fast_forward()); +} + +mod mock_tests { + use super::mock::*; + use super::*; + + #[test] + fn test_mock_clone_success() { + let mock = MockGit::new(); + let result = mock.clone_repo( + "git@github.com:user/repo.git", + Path::new("/tmp/repo"), + &CloneOptions::default(), + ); + assert!(result.is_ok()); + + let log = mock.call_log(); + assert_eq!(log.clones.len(), 1); + assert_eq!(log.clones[0].0, "git@github.com:user/repo.git"); + } + + #[test] + fn test_mock_clone_failure() { + let mut mock = MockGit::new(); + mock.fail_clones(Some("permission denied".to_string())); + + let result = mock.clone_repo( + "git@github.com:user/repo.git", + Path::new("/tmp/repo"), + &CloneOptions::default(), + ); + assert!(result.is_err()); + + let err = result.unwrap_err(); + assert!(err.to_string().contains("permission denied")); + } + + #[test] + fn test_mock_fetch() { + let config = MockConfig { + fetch_has_updates: true, + ..Default::default() + }; + let mock = MockGit::with_config(config); + + let result = mock.fetch(Path::new("/tmp/repo")).unwrap(); + assert!(result.updated); + assert_eq!(result.new_commits, Some(3)); + } + + #[test] + fn test_mock_pull() { + let mock = MockGit::new(); + let result = mock.pull(Path::new("/tmp/repo")).unwrap(); + assert!(result.success); + assert!(result.fast_forward); + } + + #[test] + fn test_mock_status_default() { + let mock = MockGit::new(); + let status = mock.status(Path::new("/tmp/repo")).unwrap(); + assert_eq!(status.branch, "main"); + assert!(!status.is_uncommitted); + } + + #[test] + fn 
test_mock_status_custom() { + let mut mock = MockGit::new(); + mock.set_status( + "/tmp/repo", + RepoStatus { + branch: "feature".to_string(), + is_uncommitted: true, + ahead: 2, + behind: 0, + has_untracked: true, + staged_count: 0, + unstaged_count: 0, + untracked_count: 0, + }, + ); + + let status = mock.status(Path::new("/tmp/repo")).unwrap(); + assert_eq!(status.branch, "feature"); + assert!(status.is_uncommitted); + assert_eq!(status.ahead, 2); + } + + #[test] + fn test_mock_is_repo() { + let mut mock = MockGit::new(); + mock.add_repo("/tmp/repo"); + + assert!(mock.is_repo(Path::new("/tmp/repo"))); + assert!(!mock.is_repo(Path::new("/tmp/not-a-repo"))); + } + + #[test] + fn test_mock_call_log_tracking() { + let mock = MockGit::new(); + + let _ = mock.clone_repo("url1", Path::new("/path1"), &CloneOptions::default()); + let _ = mock.fetch(Path::new("/path2")); + let _ = mock.pull(Path::new("/path3")); + let _ = mock.status(Path::new("/path4")); + + let log = mock.call_log(); + assert_eq!(log.clones.len(), 1); + assert_eq!(log.fetches.len(), 1); + assert_eq!(log.pulls.len(), 1); + assert_eq!(log.status_checks.len(), 1); + } +} diff --git a/src/infra/mod.rs b/src/infra/mod.rs new file mode 100644 index 0000000..958b19f --- /dev/null +++ b/src/infra/mod.rs @@ -0,0 +1,6 @@ +//! Infrastructure adapters (I/O, provider and git bindings). + +pub mod storage; + +pub use crate::git; +pub use crate::provider; diff --git a/src/infra/storage/mod.rs b/src/infra/storage/mod.rs new file mode 100644 index 0000000..8869004 --- /dev/null +++ b/src/infra/storage/mod.rs @@ -0,0 +1,4 @@ +//! Storage layer adapters. + +pub use crate::cache::*; +pub use crate::config::workspace_manager::*; diff --git a/src/lib.rs b/src/lib.rs new file mode 100644 index 0000000..5378da4 --- /dev/null +++ b/src/lib.rs @@ -0,0 +1,100 @@ +//! # Git-Same - Mirror GitHub org/repo structure locally +//! +//! Git-Same is a CLI tool that discovers all GitHub organizations +//! 
and repositories you have access to, then clones them to your local filesystem +//! maintaining the org/repo directory structure. +//! +//! ## Features +//! +//! - **Multi-Provider Support**: Works with GitHub (more providers coming soon) +//! - **Parallel Operations**: Clones and syncs repositories concurrently +//! - **Smart Filtering**: Filter by archived status, forks, organizations +//! - **Incremental Sync**: Only fetches/pulls what has changed +//! - **Progress Reporting**: Beautiful progress bars and status updates +//! +//! ## Available Commands +//! +//! The tool can be invoked using any of these names (all installed by default): +//! - `git-same` - Main command +//! - `gitsame` - No hyphen variant +//! - `gitsa` - Short form +//! - `gisa` - Shortest variant +//! - `git same` - Git subcommand (requires git-same in PATH) +//! +//! ## Example +//! +//! ```bash +//! # Initialize configuration +//! git-same init +//! +//! # Set up a workspace +//! git-same setup +//! +//! # Sync repositories (clone new + fetch existing) +//! git-same sync --dry-run +//! git-same sync +//! +//! # Show status +//! git-same status +//! +//! # Also works as git subcommand +//! git same sync +//! ``` + +pub mod app; +pub mod auth; +pub mod banner; +pub mod cache; +pub mod checks; +pub mod cli; +pub mod commands; +pub mod config; +pub mod discovery; +pub mod domain; +pub mod errors; +pub mod git; +pub mod infra; +pub mod operations; +pub mod output; +pub mod provider; +#[cfg(feature = "tui")] +pub mod setup; +#[cfg(feature = "tui")] +pub mod tui; +pub mod types; +pub mod workflows; + +/// Re-export commonly used types for convenience. 
+pub mod prelude { + pub use crate::auth::{get_auth, get_auth_for_provider, AuthResult}; + pub use crate::cache::{CacheManager, DiscoveryCache, CACHE_VERSION}; + pub use crate::cli::{Cli, Command, InitArgs, ResetArgs, StatusArgs, SyncCmdArgs}; + pub use crate::config::{ + AuthMethod, Config, ConfigCloneOptions, FilterOptions, ProviderEntry, + SyncMode as ConfigSyncMode, + }; + pub use crate::discovery::DiscoveryOrchestrator; + pub use crate::domain::RepoPathTemplate; + pub use crate::errors::{AppError, GitError, ProviderError, Result}; + pub use crate::git::{ + CloneOptions, FetchResult, GitOperations, PullResult, RepoStatus, ShellGit, + }; + pub use crate::operations::clone::{ + CloneManager, CloneManagerOptions, CloneProgress, CloneResult, + }; + pub use crate::operations::sync::{ + LocalRepo, SyncManager, SyncManagerOptions, SyncMode, SyncResult, + }; + pub use crate::output::{ + CloneProgressBar, DiscoveryProgressBar, Output, SyncProgressBar, Verbosity, + }; + pub use crate::provider::{ + create_provider, Credentials, DiscoveryOptions, DiscoveryProgress, NoProgress, Provider, + RateLimitInfo, + }; + pub use crate::types::{ActionPlan, OpResult, OpSummary, Org, OwnedRepo, ProviderKind, Repo}; +} + +#[cfg(test)] +#[path = "lib_tests.rs"] +mod tests; diff --git a/src/lib_tests.rs b/src/lib_tests.rs new file mode 100644 index 0000000..6b76ccb --- /dev/null +++ b/src/lib_tests.rs @@ -0,0 +1,24 @@ +use super::*; + +#[test] +fn prelude_reexports_core_types() { + use crate::prelude::*; + + let options = CloneOptions::new().with_depth(1).with_branch("main"); + assert_eq!(options.depth, 1); + assert_eq!(options.branch.as_deref(), Some("main")); + + let provider = ProviderEntry::github(); + assert_eq!(provider.kind, ProviderKind::GitHub); + + let repo = Repo::test("rocket", "acme"); + let owned = OwnedRepo::new("acme", repo); + assert_eq!(owned.full_name(), "acme/rocket"); +} + +#[test] +fn top_level_modules_are_accessible() { + let _ = output::Verbosity::Normal; + let _ = 
operations::sync::SyncMode::Fetch; + let _ = types::ProviderKind::GitLab; +} diff --git a/src/main.rs b/src/main.rs new file mode 100644 index 0000000..09dca1e --- /dev/null +++ b/src/main.rs @@ -0,0 +1,132 @@ +//! Git-Same - Mirror GitHub structure /orgs/repos/ to local file system +//! +//! Main entry point for the git-same CLI application. + +use git_same::app::cli::{run_command, Cli}; +use git_same::output::{Output, Verbosity}; +use std::process::ExitCode; +use tracing::debug; + +/// Initialize structured logging based on GISA_LOG environment variable. +/// +/// Examples: +/// - `GISA_LOG=debug` - Enable debug logging for all modules +/// - `GISA_LOG=git_same=debug` - Enable debug logging for git-same only +/// - `GISA_LOG=git_same::auth=trace` - Enable trace logging for auth module +/// - `GISA_LOG=warn` - Only show warnings and errors +fn init_logging() { + use tracing_subscriber::{fmt, prelude::*, EnvFilter}; + + // Use GISA_LOG env var, defaulting to "warn" if not set + let filter = EnvFilter::try_from_env("GISA_LOG").unwrap_or_else(|_| EnvFilter::new("warn")); + + tracing_subscriber::registry() + .with(fmt::layer().with_target(true).with_level(true).compact()) + .with(filter) + .init(); +} + +#[tokio::main] +async fn main() -> ExitCode { + // Initialize logging early + init_logging(); + + let cli = Cli::parse_args(); + debug!(command = ?cli.command, "Parsed CLI arguments"); + + match cli.command { + Some(ref command) => { + // CLI subcommand mode — existing behavior + let verbosity = Verbosity::from(cli.verbosity()); + let output = Output::new(verbosity, cli.is_json()); + + // Print banner unless quiet or JSON output + if !output.is_json() && !cli.is_quiet() { + git_same::banner::print_banner(); + } + + let result = run_command(command, cli.config.as_deref(), &output).await; + + match result { + Ok(()) => ExitCode::SUCCESS, + Err(e) => { + output.error(&e.to_string()); + if verbosity >= Verbosity::Verbose { + eprintln!(" Suggestion: {}", 
e.suggested_action()); + } + ExitCode::from(e.exit_code().clamp(1, 255) as u8) + } + } + } + None => { + // No subcommand — launch TUI + #[cfg(feature = "tui")] + { + use git_same::config::Config; + + // Auto-create default config if it doesn't exist + if cli.config.is_none() { + let default_path = match Config::default_path() { + Ok(path) => path, + Err(e) => { + eprintln!("Failed to determine default config path: {}", e); + return ExitCode::from(2); + } + }; + + if !default_path.exists() { + if let Some(parent) = default_path.parent() { + if let Err(e) = std::fs::create_dir_all(parent) { + eprintln!( + "Failed to create config directory '{}': {}", + parent.display(), + e + ); + return ExitCode::from(2); + } + } + if let Err(e) = std::fs::write(&default_path, Config::default_toml()) { + eprintln!( + "Failed to write default config '{}': {}", + default_path.display(), + e + ); + return ExitCode::from(2); + } + } + } + + let config = match cli.config.as_ref() { + Some(path) => Config::load_from(path), + None => Config::load(), + }; + + match config { + Ok(config) => match git_same::app::tui::run_tui(config).await { + Ok(()) => ExitCode::SUCCESS, + Err(e) => { + eprintln!("TUI error: {}", e); + ExitCode::from(1) + } + }, + Err(e) => { + eprintln!("Failed to load config: {}", e); + eprintln!("Run 'gisa init' to create a configuration file."); + ExitCode::from(2) + } + } + } + #[cfg(not(feature = "tui"))] + { + eprintln!( + "TUI not available. Run a subcommand (e.g., 'gisa sync') or build with --features tui." 
+ ); + ExitCode::from(1) + } + } + } +} + +#[cfg(test)] +#[path = "main_tests.rs"] +mod tests; diff --git a/src/main_tests.rs b/src/main_tests.rs new file mode 100644 index 0000000..42f0134 --- /dev/null +++ b/src/main_tests.rs @@ -0,0 +1,22 @@ +use super::*; +use clap::Parser; +use git_same::cli::Command; + +#[test] +fn main_cli_parses_sync_subcommand() { + let cli = Cli::try_parse_from(["gisa", "sync", "--dry-run", "--pull"]).unwrap(); + + match cli.command { + Some(Command::Sync(args)) => { + assert!(args.dry_run); + assert!(args.pull); + } + _ => panic!("expected sync subcommand"), + } +} + +#[test] +fn main_cli_without_subcommand_is_none() { + let cli = Cli::try_parse_from(["gisa"]).unwrap(); + assert!(cli.command.is_none()); +} diff --git a/src/operations/clone.rs b/src/operations/clone.rs new file mode 100644 index 0000000..ced17e2 --- /dev/null +++ b/src/operations/clone.rs @@ -0,0 +1,349 @@ +//! Parallel cloning operations. +//! +//! This module provides functionality for cloning repositories, +//! including parallel cloning with controlled concurrency. +//! +//! # Example +//! +//! ```no_run +//! use git_same::operations::clone::{CloneManager, CloneManagerOptions, NoProgress}; +//! use git_same::git::ShellGit; +//! use std::path::Path; +//! +//! # async fn example() { +//! let git = ShellGit::new(); +//! let options = CloneManagerOptions::new() +//! .with_concurrency(4) +//! .with_structure("{org}/{repo}"); +//! +//! let manager = CloneManager::new(git, options); +//! +//! // repos would come from discovery +//! let repos = vec![]; +//! let progress = NoProgress; +//! +//! let (summary, results) = manager +//! .clone_repos(Path::new("~/github"), repos, "github", std::sync::Arc::new(progress)) +//! .await; +//! +//! println!("Cloned {} repos, {} failed", summary.success, summary.failed); +//! # } +//! 
``` + +use crate::domain::RepoPathTemplate; +use crate::git::{CloneOptions, GitOperations}; +use crate::types::{OpResult, OpSummary, OwnedRepo}; +use std::path::{Path, PathBuf}; +use std::sync::Arc; +use tokio::sync::Semaphore; + +/// Maximum allowed concurrency to prevent resource exhaustion. +/// Higher values can cause "too many open files" errors and network saturation. +pub const MAX_CONCURRENCY: usize = 16; + +/// Minimum concurrency (at least one clone at a time). +pub const MIN_CONCURRENCY: usize = 1; + +/// Default concurrency when not specified in config. +pub const DEFAULT_CONCURRENCY: usize = 8; + +/// Progress callback for clone operations. +pub trait CloneProgress: Send + Sync { + /// Called when a clone starts. + fn on_start(&self, repo: &OwnedRepo, index: usize, total: usize); + + /// Called when a clone completes successfully. + fn on_complete(&self, repo: &OwnedRepo, index: usize, total: usize); + + /// Called when a clone fails. + fn on_error(&self, repo: &OwnedRepo, error: &str, index: usize, total: usize); + + /// Called when a clone is skipped. + fn on_skip(&self, repo: &OwnedRepo, reason: &str, index: usize, total: usize); +} + +/// A no-op progress implementation for when no progress reporting is needed. +#[derive(Debug, Clone, Copy, Default)] +pub struct NoProgress; + +impl CloneProgress for NoProgress { + fn on_start(&self, _repo: &OwnedRepo, _index: usize, _total: usize) {} + fn on_complete(&self, _repo: &OwnedRepo, _index: usize, _total: usize) {} + fn on_error(&self, _repo: &OwnedRepo, _error: &str, _index: usize, _total: usize) {} + fn on_skip(&self, _repo: &OwnedRepo, _reason: &str, _index: usize, _total: usize) {} +} + +/// Result of a single clone operation. +#[derive(Debug)] +pub struct CloneResult { + /// The repository that was cloned + pub repo: OwnedRepo, + /// The local path where it was cloned + pub path: PathBuf, + /// The operation result + pub result: OpResult, +} + +/// Options for the clone manager. 
+#[derive(Debug, Clone)]
+pub struct CloneManagerOptions {
+    /// Maximum number of concurrent clones
+    pub concurrency: usize,
+    /// Clone options (depth, branch, submodules)
+    pub clone_options: CloneOptions,
+    /// Directory structure template
+    /// Supports: {provider}, {org}, {repo}
+    pub structure: String,
+    /// Whether to use SSH URLs (vs HTTPS)
+    pub prefer_ssh: bool,
+    /// Whether this is a dry run
+    pub dry_run: bool,
+}
+
+impl Default for CloneManagerOptions {
+    fn default() -> Self {
+        Self {
+            concurrency: DEFAULT_CONCURRENCY,
+            clone_options: CloneOptions::default(),
+            structure: "{org}/{repo}".to_string(),
+            prefer_ssh: true,
+            dry_run: false,
+        }
+    }
+}
+
+impl CloneManagerOptions {
+    /// Creates new options with defaults.
+    pub fn new() -> Self {
+        Self::default()
+    }
+
+    /// Sets the concurrency level, clamped to [MIN_CONCURRENCY, MAX_CONCURRENCY].
+    ///
+    /// Returns the options with the effective concurrency set.
+    /// Use [`Self::check_concurrency_cap`] to check if the value was capped.
+    pub fn with_concurrency(mut self, concurrency: usize) -> Self {
+        self.concurrency = concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        self
+    }
+
+    /// Checks if a requested concurrency exceeds the maximum.
+    ///
+    /// Returns `Some(MAX_CONCURRENCY)` if capping occurred, `None` otherwise.
+    pub fn check_concurrency_cap(requested: usize) -> Option<usize> {
+        if requested > MAX_CONCURRENCY {
+            Some(MAX_CONCURRENCY)
+        } else {
+            None
+        }
+    }
+
+    /// Sets the clone options.
+    pub fn with_clone_options(mut self, options: CloneOptions) -> Self {
+        self.clone_options = options;
+        self
+    }
+
+    /// Sets the directory structure.
+    pub fn with_structure(mut self, structure: impl Into<String>) -> Self {
+        self.structure = structure.into();
+        self
+    }
+
+    /// Sets SSH preference.
+    pub fn with_ssh(mut self, prefer_ssh: bool) -> Self {
+        self.prefer_ssh = prefer_ssh;
+        self
+    }
+
+    /// Sets dry run mode.
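As a standalone illustration of the clamp-on-set builder pattern used by `with_concurrency` (a simplified stand-in type with local constants mirroring this module; not the crate's actual `CloneManagerOptions`):

```rust
// Minimal sketch of clamp-on-set: the setter clamps immediately, so every
// later read of `concurrency` is guaranteed to be in bounds.
const MIN_CONCURRENCY: usize = 1;
const MAX_CONCURRENCY: usize = 16;

pub struct Opts {
    pub concurrency: usize,
}

impl Opts {
    pub fn new() -> Self {
        // Mirrors DEFAULT_CONCURRENCY = 8 from the module above.
        Self { concurrency: 8 }
    }

    /// Clamp at set time rather than at use time.
    pub fn with_concurrency(mut self, n: usize) -> Self {
        self.concurrency = n.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
        self
    }
}
```

Clamping in the setter (and again in `CloneManager::new` for options built by hand) means no code path can observe an out-of-range value.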
+    pub fn with_dry_run(mut self, dry_run: bool) -> Self {
+        self.dry_run = dry_run;
+        self
+    }
+}
+
+/// Manages parallel clone operations.
+pub struct CloneManager<G: GitOperations> {
+    git: Arc<G>,
+    options: CloneManagerOptions,
+}
+
+impl<G: GitOperations> CloneManager<G> {
+    /// Creates a new clone manager.
+    pub fn new(git: G, mut options: CloneManagerOptions) -> Self {
+        options.concurrency = options.concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        Self {
+            git: Arc::new(git),
+            options,
+        }
+    }
+
+    /// Computes the local path for a repository.
+    pub fn compute_path(&self, base_path: &Path, repo: &OwnedRepo, provider: &str) -> PathBuf {
+        RepoPathTemplate::new(self.options.structure.clone())
+            .render_owned_repo(base_path, repo, provider)
+    }
+
+    /// Gets the clone URL for a repository.
+    pub fn get_clone_url<'a>(&self, repo: &'a OwnedRepo) -> &'a str {
+        if self.options.prefer_ssh {
+            &repo.repo.ssh_url
+        } else {
+            &repo.repo.clone_url
+        }
+    }
+
+    /// Clones repositories in parallel.
+    ///
+    /// Returns a summary of operations and individual results.
+    pub async fn clone_repos(
+        &self,
+        base_path: &Path,
+        repos: Vec<OwnedRepo>,
+        provider: &str,
+        progress: Arc<dyn CloneProgress>,
+    ) -> (OpSummary, Vec<CloneResult>) {
+        let total = repos.len();
+        let concurrency = self
+            .options
+            .concurrency
+            .clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        let semaphore = Arc::new(Semaphore::new(concurrency));
+        let mut handles = Vec::with_capacity(total);
+
+        for (index, repo) in repos.into_iter().enumerate() {
+            let permit = semaphore.clone().acquire_owned().await.unwrap();
+            let git = self.git.clone();
+            let clone_options = self.options.clone_options.clone();
+            let target_path = self.compute_path(base_path, &repo, provider);
+            let url = self.get_clone_url(&repo).to_string();
+            let dry_run = self.options.dry_run;
+            let progress = Arc::clone(&progress);
+            let panic_repo = repo.clone();
+            let panic_path = target_path.clone();
+
+            let handle = tokio::spawn(async move {
+                // Notify progress - clone starting
+                progress.on_start(&repo, index, total);
+                let result = if dry_run {
+                    OpResult::Skipped("dry run".to_string())
+                } else if target_path.exists() {
+                    OpResult::Skipped("directory already exists".to_string())
+                } else {
+                    // Create parent directories
+                    if let Some(parent) = target_path.parent() {
+                        if let Err(e) = std::fs::create_dir_all(parent) {
+                            OpResult::Failed(format!("Failed to create directory: {}", e))
+                        } else {
+                            // Perform the clone (blocking operation)
+                            match tokio::task::spawn_blocking({
+                                let git = git.clone();
+                                let url = url.clone();
+                                let target_path = target_path.clone();
+                                let clone_options = clone_options.clone();
+                                move || git.clone_repo(&url, &target_path, &clone_options)
+                            })
+                            .await
+                            {
+                                Ok(Ok(())) => OpResult::Success,
+                                Ok(Err(e)) => OpResult::Failed(e.to_string()),
+                                Err(e) => OpResult::Failed(format!("Task panicked: {}", e)),
+                            }
+                        }
+                    } else {
+                        OpResult::Failed("Invalid target path".to_string())
+                    }
+                };
+
+                drop(permit); // Release semaphore
+
+                CloneResult {
+                    repo,
+                    path: target_path,
+                    result,
+                }
+            });
+
handles.push((panic_repo, panic_path, handle)); + } + + // Collect results + let mut summary = OpSummary::new(); + let mut results = Vec::with_capacity(total); + + for (index, (panic_repo, panic_path, handle)) in handles.into_iter().enumerate() { + match handle.await { + Ok(clone_result) => { + // Notify progress + match &clone_result.result { + OpResult::Success => { + progress.on_complete(&clone_result.repo, index, total); + } + OpResult::Failed(err) => { + progress.on_error(&clone_result.repo, err, index, total); + } + OpResult::Skipped(reason) => { + progress.on_skip(&clone_result.repo, reason, index, total); + } + } + + summary.record(&clone_result.result); + results.push(clone_result); + } + Err(e) => { + let err = format!("Task panicked: {}", e); + progress.on_error(&panic_repo, &err, index, total); + let failed = CloneResult { + repo: panic_repo, + path: panic_path, + result: OpResult::Failed(err), + }; + summary.record(&failed.result); + results.push(failed); + } + } + } + + (summary, results) + } + + /// Clones a single repository synchronously. 
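The permit-before-spawn pattern in `clone_repos` (await a semaphore permit, spawn the task, release the permit when the work finishes) can be sketched standalone. This is an approximation with std threads and a channel pre-loaded with tokens playing the counting semaphore; the real code awaits `tokio::sync::Semaphore::acquire_owned` before each `tokio::spawn`:

```rust
// Bounded fan-out: at most `limit` jobs run at once (limit must be >= 1,
// mirroring MIN_CONCURRENCY). Returns how many jobs completed.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;

pub fn run_bounded(n_jobs: usize, limit: usize) -> usize {
    let done = Arc::new(AtomicUsize::new(0));
    let (tx, rx) = mpsc::channel();
    for _ in 0..limit {
        tx.send(()).unwrap(); // pre-load `limit` permits
    }

    let mut handles = Vec::with_capacity(n_jobs);
    for _ in 0..n_jobs {
        rx.recv().unwrap(); // take a permit before spawning, like acquire_owned
        let tx = tx.clone();
        let done = Arc::clone(&done);
        handles.push(thread::spawn(move || {
            done.fetch_add(1, Ordering::SeqCst); // stand-in for the clone work
            tx.send(()).unwrap(); // return the permit, like drop(permit)
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    done.load(Ordering::SeqCst)
}
```

Acquiring the permit before spawning means the submission loop itself blocks once `limit` tasks are in flight, so no more than `limit` clones ever run concurrently.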
+ pub fn clone_single(&self, base_path: &Path, repo: &OwnedRepo, provider: &str) -> CloneResult { + let target_path = self.compute_path(base_path, repo, provider); + let url = self.get_clone_url(repo); + + let result = if self.options.dry_run { + OpResult::Skipped("dry run".to_string()) + } else if target_path.exists() { + OpResult::Skipped("directory already exists".to_string()) + } else { + // Create parent directories + if let Some(parent) = target_path.parent() { + if let Err(e) = std::fs::create_dir_all(parent) { + OpResult::Failed(format!("Failed to create directory: {}", e)) + } else { + match self + .git + .clone_repo(url, &target_path, &self.options.clone_options) + { + Ok(()) => OpResult::Success, + Err(e) => OpResult::Failed(e.to_string()), + } + } + } else { + OpResult::Failed("Invalid target path".to_string()) + } + }; + + CloneResult { + repo: repo.clone(), + path: target_path, + result, + } + } +} + +#[cfg(test)] +#[path = "clone_tests.rs"] +mod tests; diff --git a/src/operations/clone_tests.rs b/src/operations/clone_tests.rs new file mode 100644 index 0000000..aa3119f --- /dev/null +++ b/src/operations/clone_tests.rs @@ -0,0 +1,308 @@ +use super::*; +use crate::git::MockGit; +use crate::types::Repo; +use std::sync::atomic::{AtomicUsize, Ordering}; +use tempfile::TempDir; + +fn test_repo(name: &str, owner: &str) -> OwnedRepo { + OwnedRepo::new(owner, Repo::test(name, owner)) +} + +#[test] +fn test_clone_manager_options_default() { + let options = CloneManagerOptions::default(); + assert_eq!(options.concurrency, 8); + assert!(options.prefer_ssh); + assert!(!options.dry_run); + assert_eq!(options.structure, "{org}/{repo}"); +} + +#[test] +fn test_clone_manager_options_builder() { + let clone_opts = CloneOptions::new().with_depth(1); + let options = CloneManagerOptions::new() + .with_concurrency(8) + .with_clone_options(clone_opts) + .with_structure("{provider}/{org}/{repo}") + .with_ssh(false) + .with_dry_run(true); + + assert_eq!(options.concurrency, 
8); + assert_eq!(options.clone_options.depth, 1); + assert_eq!(options.structure, "{provider}/{org}/{repo}"); + assert!(!options.prefer_ssh); + assert!(options.dry_run); +} + +#[test] +fn test_concurrency_minimum() { + let options = CloneManagerOptions::new().with_concurrency(0); + assert_eq!(options.concurrency, MIN_CONCURRENCY); // Minimum is 1 +} + +#[test] +fn test_concurrency_maximum() { + let options = CloneManagerOptions::new().with_concurrency(100); + assert_eq!(options.concurrency, MAX_CONCURRENCY); // Capped at max +} + +#[test] +fn test_concurrency_within_bounds() { + let options = CloneManagerOptions::new().with_concurrency(8); + assert_eq!(options.concurrency, 8); // Within bounds, unchanged +} + +#[test] +fn test_check_concurrency_cap() { + assert_eq!(CloneManagerOptions::check_concurrency_cap(8), None); + assert_eq!(CloneManagerOptions::check_concurrency_cap(16), None); + assert_eq!( + CloneManagerOptions::check_concurrency_cap(17), + Some(MAX_CONCURRENCY) + ); + assert_eq!( + CloneManagerOptions::check_concurrency_cap(100), + Some(MAX_CONCURRENCY) + ); +} + +#[test] +fn test_compute_path_simple() { + let git = MockGit::new(); + let options = CloneManagerOptions::new().with_structure("{org}/{repo}"); + let manager = CloneManager::new(git, options); + + let repo = test_repo("my-repo", "my-org"); + let path = manager.compute_path(Path::new("/base"), &repo, "github"); + + assert_eq!(path, PathBuf::from("/base/my-org/my-repo")); +} + +#[test] +fn test_compute_path_with_provider() { + let git = MockGit::new(); + let options = CloneManagerOptions::new().with_structure("{provider}/{org}/{repo}"); + let manager = CloneManager::new(git, options); + + let repo = test_repo("my-repo", "my-org"); + let path = manager.compute_path(Path::new("/base"), &repo, "github"); + + assert_eq!(path, PathBuf::from("/base/github/my-org/my-repo")); +} + +#[test] +fn test_get_clone_url_ssh() { + let git = MockGit::new(); + let options = CloneManagerOptions::new().with_ssh(true); 
+ let manager = CloneManager::new(git, options); + + let repo = test_repo("repo", "org"); + let url = manager.get_clone_url(&repo); + + assert!(url.starts_with("git@")); +} + +#[test] +fn test_get_clone_url_https() { + let git = MockGit::new(); + let options = CloneManagerOptions::new().with_ssh(false); + let manager = CloneManager::new(git, options); + + let repo = test_repo("repo", "org"); + let url = manager.get_clone_url(&repo); + + assert!(url.starts_with("https://")); +} + +#[test] +fn test_clone_single_dry_run() { + let git = MockGit::new(); + let options = CloneManagerOptions::new().with_dry_run(true); + let manager = CloneManager::new(git, options); + + let repo = test_repo("repo", "org"); + let result = manager.clone_single(Path::new("/tmp/base"), &repo, "github"); + + assert!(result.result.is_skipped()); + assert_eq!(result.result.skip_reason(), Some("dry run")); +} + +#[test] +fn test_clone_single_existing_dir() { + let temp = TempDir::new().unwrap(); + let target = temp.path().join("org/repo"); + std::fs::create_dir_all(&target).unwrap(); + + let git = MockGit::new(); + let options = CloneManagerOptions::new(); + let manager = CloneManager::new(git, options); + + let repo = test_repo("repo", "org"); + let result = manager.clone_single(temp.path(), &repo, "github"); + + assert!(result.result.is_skipped()); + assert_eq!( + result.result.skip_reason(), + Some("directory already exists") + ); +} + +#[test] +fn test_clone_single_success() { + let temp = TempDir::new().unwrap(); + + let git = MockGit::new(); + let options = CloneManagerOptions::new(); + let manager = CloneManager::new(git, options); + + let repo = test_repo("repo", "org"); + let result = manager.clone_single(temp.path(), &repo, "github"); + + assert!(result.result.is_success()); + assert_eq!(result.path, temp.path().join("org/repo")); +} + +#[test] +fn test_clone_single_failure() { + let temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.fail_clones(Some("network 
error".to_string()));
+
+    let options = CloneManagerOptions::new();
+    let manager = CloneManager::new(git, options);
+
+    let repo = test_repo("repo", "org");
+    let result = manager.clone_single(temp.path(), &repo, "github");
+
+    assert!(result.result.is_failed());
+    assert!(result
+        .result
+        .error_message()
+        .unwrap()
+        .contains("network error"));
+}
+
+struct CountingProgress {
+    started: AtomicUsize,
+    completed: AtomicUsize,
+    errors: AtomicUsize,
+    skipped: AtomicUsize,
+}
+
+impl CountingProgress {
+    fn new() -> Self {
+        Self {
+            started: AtomicUsize::new(0),
+            completed: AtomicUsize::new(0),
+            errors: AtomicUsize::new(0),
+            skipped: AtomicUsize::new(0),
+        }
+    }
+}
+
+impl CloneProgress for CountingProgress {
+    fn on_start(&self, _repo: &OwnedRepo, _index: usize, _total: usize) {
+        self.started.fetch_add(1, Ordering::SeqCst);
+    }
+
+    fn on_complete(&self, _repo: &OwnedRepo, _index: usize, _total: usize) {
+        self.completed.fetch_add(1, Ordering::SeqCst);
+    }
+
+    fn on_error(&self, _repo: &OwnedRepo, _error: &str, _index: usize, _total: usize) {
+        self.errors.fetch_add(1, Ordering::SeqCst);
+    }
+
+    fn on_skip(&self, _repo: &OwnedRepo, _reason: &str, _index: usize, _total: usize) {
+        self.skipped.fetch_add(1, Ordering::SeqCst);
+    }
+}
+
+#[tokio::test]
+async fn test_clone_repos_parallel() {
+    let temp = TempDir::new().unwrap();
+
+    let git = MockGit::new();
+    let options = CloneManagerOptions::new().with_concurrency(2);
+    let manager = CloneManager::new(git, options);
+
+    let repos = vec![
+        test_repo("repo1", "org"),
+        test_repo("repo2", "org"),
+        test_repo("repo3", "org"),
+    ];
+
+    let progress = Arc::new(CountingProgress::new());
+    let progress_dyn: Arc<dyn CloneProgress> = progress.clone();
+    let (summary, results) = manager
+        .clone_repos(temp.path(), repos, "github", progress_dyn)
+        .await;
+
+    assert_eq!(summary.success, 3);
+    assert_eq!(summary.failed, 0);
+    assert_eq!(results.len(), 3);
+
+    // Check progress was called
+    assert_eq!(progress.started.load(Ordering::SeqCst), 3);
+    assert_eq!(progress.completed.load(Ordering::SeqCst), 3);
+}
+
+#[tokio::test]
+async fn test_clone_repos_dry_run() {
+    let temp = TempDir::new().unwrap();
+
+    let git = MockGit::new();
+    let options = CloneManagerOptions::new().with_dry_run(true);
+    let manager = CloneManager::new(git, options);
+
+    let repos = vec![test_repo("repo1", "org"), test_repo("repo2", "org")];
+
+    let progress: Arc<dyn CloneProgress> = Arc::new(NoProgress);
+    let (summary, _results) = manager
+        .clone_repos(temp.path(), repos, "github", progress)
+        .await;
+
+    assert_eq!(summary.success, 0);
+    assert_eq!(summary.skipped, 2);
+}
+
+#[tokio::test]
+async fn test_clone_repos_with_failure() {
+    let temp = TempDir::new().unwrap();
+
+    let mut git = MockGit::new();
+    git.fail_clones(Some("test error".to_string()));
+
+    let options = CloneManagerOptions::new();
+    let manager = CloneManager::new(git, options);
+
+    let repos = vec![test_repo("repo1", "org")];
+
+    let progress = Arc::new(CountingProgress::new());
+    let progress_dyn: Arc<dyn CloneProgress> = progress.clone();
+    let (summary, _results) = manager
+        .clone_repos(temp.path(), repos, "github", progress_dyn)
+        .await;
+
+    assert_eq!(summary.failed, 1);
+    assert_eq!(progress.errors.load(Ordering::SeqCst), 1);
+}
+
+#[tokio::test]
+async fn test_clone_repos_zero_concurrency_is_clamped() {
+    let temp = TempDir::new().unwrap();
+
+    let git = MockGit::new();
+    let mut options = CloneManagerOptions::new().with_dry_run(true);
+    options.concurrency = 0; // bypass builder clamp on purpose
+    let manager = CloneManager::new(git, options);
+
+    let repos = vec![test_repo("repo1", "org")];
+    let progress: Arc<dyn CloneProgress> = Arc::new(NoProgress);
+    let (summary, _results) = manager
+        .clone_repos(temp.path(), repos, "github", progress)
+        .await;
+
+    assert_eq!(summary.skipped, 1);
+}
diff --git a/src/operations/mod.rs b/src/operations/mod.rs
new file mode 100644
index 0000000..bdcb2ef
--- /dev/null
+++ b/src/operations/mod.rs
@@ -0,0 +1,4 @@
+//! Core clone/sync operation managers.
+
+pub mod clone;
+pub mod sync;
diff --git a/src/operations/sync.rs b/src/operations/sync.rs
new file mode 100644
index 0000000..c928c0c
--- /dev/null
+++ b/src/operations/sync.rs
@@ -0,0 +1,605 @@
+//! Sync manager for fetch and pull operations.
+//!
+//! This module provides functionality for syncing existing local repositories
+//! with their remotes, including parallel fetch and pull operations.
+//!
+//! # Example
+//!
+//! ```no_run
+//! use git_same::operations::sync::{SyncManager, SyncManagerOptions, SyncMode, LocalRepo, NoSyncProgress};
+//! use git_same::git::ShellGit;
+//! use git_same::types::{OwnedRepo, Repo};
+//! use std::path::PathBuf;
+//!
+//! # async fn example() {
+//! let git = ShellGit::new();
+//! let options = SyncManagerOptions::new()
+//!     .with_concurrency(4)
+//!     .with_mode(SyncMode::Fetch);
+//!
+//! let manager = SyncManager::new(git, options);
+//!
+//! // repos would come from discovery
+//! let repos: Vec<LocalRepo> = vec![];
+//! let progress = NoSyncProgress;
+//!
+//! let (summary, results) = manager
+//!     .sync_repos(repos, std::sync::Arc::new(progress))
+//!     .await;
+//!
+//! println!("Synced {} repos, {} had updates", summary.success,
+//!     results.iter().filter(|r| r.had_updates).count());
+//! # }
+//! ```
+
+use crate::git::{FetchResult, GitOperations, PullResult, RepoStatus};
+use crate::types::{OpResult, OpSummary, OwnedRepo};
+use std::path::{Path, PathBuf};
+use std::sync::Arc;
+use tokio::sync::Semaphore;
+
+use super::clone::{MAX_CONCURRENCY, MIN_CONCURRENCY};
+
+/// Progress callback for sync operations.
+pub trait SyncProgress: Send + Sync {
+    /// Called when a sync operation starts.
+    fn on_start(&self, repo: &OwnedRepo, path: &Path, index: usize, total: usize);
+
+    /// Called when a fetch completes.
+    fn on_fetch_complete(&self, repo: &OwnedRepo, result: &FetchResult, index: usize, total: usize);
+
+    /// Called when a pull completes.
+ fn on_pull_complete(&self, repo: &OwnedRepo, result: &PullResult, index: usize, total: usize); + + /// Called when a sync fails. + fn on_error(&self, repo: &OwnedRepo, error: &str, index: usize, total: usize); + + /// Called when a sync is skipped. + fn on_skip(&self, repo: &OwnedRepo, reason: &str, index: usize, total: usize); +} + +/// A no-op progress implementation. +#[derive(Debug, Clone, Copy, Default)] +pub struct NoSyncProgress; + +impl SyncProgress for NoSyncProgress { + fn on_start(&self, _repo: &OwnedRepo, _path: &Path, _index: usize, _total: usize) {} + fn on_fetch_complete( + &self, + _repo: &OwnedRepo, + _result: &FetchResult, + _index: usize, + _total: usize, + ) { + } + fn on_pull_complete( + &self, + _repo: &OwnedRepo, + _result: &PullResult, + _index: usize, + _total: usize, + ) { + } + fn on_error(&self, _repo: &OwnedRepo, _error: &str, _index: usize, _total: usize) {} + fn on_skip(&self, _repo: &OwnedRepo, _reason: &str, _index: usize, _total: usize) {} +} + +/// Sync mode - fetch only or pull. +#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)] +pub enum SyncMode { + /// Only fetch, don't modify working tree + #[default] + Fetch, + /// Fetch and pull (fast-forward only) + Pull, +} + +/// Result of a single sync operation. +#[derive(Debug)] +pub struct SyncResult { + /// The repository that was synced + pub repo: OwnedRepo, + /// The local path + pub path: PathBuf, + /// The operation result + pub result: OpResult, + /// Whether updates were available + pub had_updates: bool, + /// Repository status before sync + pub status: Option, + /// Fetch result (if fetch was performed) + pub fetch_result: Option, + /// Pull result (if pull was performed) + pub pull_result: Option, +} + +/// A repository with its local path for syncing. +#[derive(Debug, Clone)] +pub struct LocalRepo { + /// The owned repo metadata + pub repo: OwnedRepo, + /// Local filesystem path + pub path: PathBuf, +} + +impl LocalRepo { + /// Creates a new local repo. 
+    pub fn new(repo: OwnedRepo, path: impl Into<PathBuf>) -> Self {
+        Self {
+            repo,
+            path: path.into(),
+        }
+    }
+}
+
+/// Options for the sync manager.
+#[derive(Debug, Clone)]
+pub struct SyncManagerOptions {
+    /// Maximum number of concurrent syncs
+    pub concurrency: usize,
+    /// Sync mode (fetch or pull)
+    pub mode: SyncMode,
+    /// Skip repos with uncommitted changes
+    pub skip_uncommitted: bool,
+    /// Whether this is a dry run
+    pub dry_run: bool,
+}
+
+impl Default for SyncManagerOptions {
+    fn default() -> Self {
+        Self {
+            concurrency: crate::operations::clone::DEFAULT_CONCURRENCY,
+            mode: SyncMode::Fetch,
+            skip_uncommitted: true,
+            dry_run: false,
+        }
+    }
+}
+
+impl SyncManagerOptions {
+    /// Creates new options with defaults.
+    pub fn new() -> Self {
+        Self::default()
+    }
+
+    /// Sets the concurrency level, clamped to [MIN_CONCURRENCY, MAX_CONCURRENCY].
+    pub fn with_concurrency(mut self, concurrency: usize) -> Self {
+        self.concurrency = concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        self
+    }
+
+    /// Sets the sync mode.
+    pub fn with_mode(mut self, mode: SyncMode) -> Self {
+        self.mode = mode;
+        self
+    }
+
+    /// Sets whether to skip uncommitted repos.
+    pub fn with_skip_uncommitted(mut self, skip_uncommitted: bool) -> Self {
+        self.skip_uncommitted = skip_uncommitted;
+        self
+    }
+
+    /// Sets dry run mode.
+    pub fn with_dry_run(mut self, dry_run: bool) -> Self {
+        self.dry_run = dry_run;
+        self
+    }
+}
+
+/// Manages parallel sync operations.
+pub struct SyncManager<G: GitOperations> {
+    git: Arc<G>,
+    options: SyncManagerOptions,
+}
+
+impl<G: GitOperations> SyncManager<G> {
+    /// Creates a new sync manager.
+    pub fn new(git: G, mut options: SyncManagerOptions) -> Self {
+        options.concurrency = options.concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        Self {
+            git: Arc::new(git),
+            options,
+        }
+    }
+
+    /// Syncs repositories in parallel.
+    pub async fn sync_repos(
+        &self,
+        repos: Vec<LocalRepo>,
+        progress: Arc<dyn SyncProgress>,
+    ) -> (OpSummary, Vec<SyncResult>) {
+        let total = repos.len();
+        let concurrency = self
+            .options
+            .concurrency
+            .clamp(MIN_CONCURRENCY, MAX_CONCURRENCY);
+        let semaphore = Arc::new(Semaphore::new(concurrency));
+        let mut handles = Vec::with_capacity(total);
+
+        for (index, local_repo) in repos.into_iter().enumerate() {
+            let permit = semaphore.clone().acquire_owned().await.unwrap();
+            let git = self.git.clone();
+            let mode = self.options.mode;
+            let skip_uncommitted = self.options.skip_uncommitted;
+            let dry_run = self.options.dry_run;
+            let progress = Arc::clone(&progress);
+
+            let handle = tokio::spawn(async move {
+                // Notify progress - sync starting
+                progress.on_start(&local_repo.repo, &local_repo.path, index, total);
+                let path = local_repo.path.clone();
+
+                // Check if path exists and is a repo
+                if !path.exists() {
+                    drop(permit);
+                    return SyncResult {
+                        repo: local_repo.repo,
+                        path,
+                        result: OpResult::Skipped("path does not exist".to_string()),
+                        had_updates: false,
+                        status: None,
+                        fetch_result: None,
+                        pull_result: None,
+                    };
+                }
+                if !git.is_repo(&path) {
+                    drop(permit);
+                    return SyncResult {
+                        repo: local_repo.repo,
+                        path,
+                        result: OpResult::Skipped("not a git repository".to_string()),
+                        had_updates: false,
+                        status: None,
+                        fetch_result: None,
+                        pull_result: None,
+                    };
+                }
+
+                // Get status (blocking)
+                let status = match tokio::task::spawn_blocking({
+                    let git = git.clone();
+                    let path = path.clone();
+                    move || git.status(&path)
+                })
+                .await
+                {
+                    Ok(Ok(status)) => Some(status),
+                    Ok(Err(e)) if skip_uncommitted => {
+                        drop(permit);
+                        return SyncResult {
+                            repo: local_repo.repo,
+                            path,
+                            result: OpResult::Skipped(format!("failed to get status: {}", e)),
+                            had_updates: false,
+                            status: None,
+                            fetch_result: None,
+                            pull_result: None,
+                        };
+                    }
+                    Ok(Err(_)) => None,
+                    Err(e) if skip_uncommitted => {
+                        drop(permit);
+                        return SyncResult {
+                            repo: local_repo.repo,
+                            path,
+                            result:
OpResult::Skipped(format!( + "failed to get status: task join error: {}", + e + )), + had_updates: false, + status: None, + fetch_result: None, + pull_result: None, + }; + } + Err(_) => None, + }; + + // Check if uncommitted and should skip + if skip_uncommitted { + if let Some(ref s) = status { + if s.is_uncommitted || s.has_untracked { + drop(permit); + return SyncResult { + repo: local_repo.repo, + path, + result: OpResult::Skipped("uncommitted changes".to_string()), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + } + } + + // Dry run + if dry_run { + drop(permit); + return SyncResult { + repo: local_repo.repo, + path, + result: OpResult::Skipped("dry run".to_string()), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + + // Perform fetch (blocking) + let fetch_result = tokio::task::spawn_blocking({ + let git = git.clone(); + let path = path.clone(); + move || git.fetch(&path) + }) + .await; + + let fetch_result = match fetch_result { + Ok(Ok(r)) => r, + Ok(Err(e)) => { + drop(permit); + return SyncResult { + repo: local_repo.repo, + path, + result: OpResult::Failed(e.to_string()), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + Err(e) => { + drop(permit); + return SyncResult { + repo: local_repo.repo, + path, + result: OpResult::Failed(format!("Task panicked: {}", e)), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + }; + + let had_updates = fetch_result.updated; + + // If pull mode and has updates, do pull + if mode == SyncMode::Pull && had_updates { + let pull_task_result = tokio::task::spawn_blocking({ + let git = git.clone(); + let path = path.clone(); + move || git.pull(&path) + }) + .await; + + let (result, actual_pull_result) = match pull_task_result { + Ok(Ok(r)) if r.success => (OpResult::Success, Some(r)), + Ok(Ok(r)) => ( + OpResult::Failed( + r.error.clone().unwrap_or_else(|| "Pull failed".to_string()), 
+ ), + Some(r), + ), + Ok(Err(e)) => (OpResult::Failed(e.to_string()), None), + Err(e) => (OpResult::Failed(format!("Task panicked: {}", e)), None), + }; + + drop(permit); + SyncResult { + repo: local_repo.repo, + path, + result, + had_updates, + status, + fetch_result: Some(fetch_result), + pull_result: actual_pull_result, + } + } else { + drop(permit); + SyncResult { + repo: local_repo.repo, + path, + result: OpResult::Success, + had_updates, + status, + fetch_result: Some(fetch_result), + pull_result: None, + } + } + }); + + handles.push(handle); + } + + // Collect results + let mut summary = OpSummary::new(); + let mut results = Vec::with_capacity(total); + + for (index, handle) in handles.into_iter().enumerate() { + match handle.await { + Ok(sync_result) => { + // Notify progress based on result using actual operation results + match &sync_result.result { + OpResult::Success => { + if let Some(ref pull_result) = sync_result.pull_result { + progress.on_pull_complete( + &sync_result.repo, + pull_result, + index, + total, + ); + } else if let Some(ref fetch_result) = sync_result.fetch_result { + progress.on_fetch_complete( + &sync_result.repo, + fetch_result, + index, + total, + ); + } + } + OpResult::Failed(err) => { + progress.on_error(&sync_result.repo, err, index, total); + } + OpResult::Skipped(reason) => { + progress.on_skip(&sync_result.repo, reason, index, total); + } + } + + summary.record(&sync_result.result); + results.push(sync_result); + } + Err(e) => { + summary.record(&OpResult::Failed(format!("Task panicked: {}", e))); + } + } + } + + (summary, results) + } + + /// Syncs a single repository synchronously. 
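The skip checks that `sync_repos` and `sync_single` apply before touching the network follow a fixed order, which the early returns above implement. A standalone sketch of that guard ordering (plain booleans standing in for the real path, `GitOperations`, and `RepoStatus` checks; the crate wraps each reason in `OpResult::Skipped`):

```rust
// Guard ordering before a fetch: cheapest and most fundamental checks first.
pub fn skip_reason(
    exists: bool,
    is_repo: bool,
    uncommitted: bool,
    dry_run: bool,
) -> Option<&'static str> {
    if !exists {
        return Some("path does not exist");
    }
    if !is_repo {
        return Some("not a git repository");
    }
    if uncommitted {
        // Only when skip_uncommitted is enabled in the real code.
        return Some("uncommitted changes");
    }
    if dry_run {
        return Some("dry run");
    }
    None // proceed to fetch (and possibly pull)
}
```

Ordering matters: a dry run on a missing path still reports "path does not exist", so dry-run output mirrors what a real run would actually skip.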
+ pub fn sync_single(&self, local_repo: &LocalRepo) -> SyncResult { + let path = &local_repo.path; + + // Check if path exists + if !path.exists() { + return SyncResult { + repo: local_repo.repo.clone(), + path: path.clone(), + result: OpResult::Skipped("path does not exist".to_string()), + had_updates: false, + status: None, + fetch_result: None, + pull_result: None, + }; + } + if !self.git.is_repo(path) { + return SyncResult { + repo: local_repo.repo.clone(), + path: path.clone(), + result: OpResult::Skipped("not a git repository".to_string()), + had_updates: false, + status: None, + fetch_result: None, + pull_result: None, + }; + } + + // Get status + let status = match self.git.status(path) { + Ok(status) => Some(status), + Err(e) if self.options.skip_uncommitted => { + return SyncResult { + repo: local_repo.repo.clone(), + path: path.clone(), + result: OpResult::Skipped(format!("failed to get status: {}", e)), + had_updates: false, + status: None, + fetch_result: None, + pull_result: None, + }; + } + Err(_) => None, + }; + + // Check if uncommitted + if self.options.skip_uncommitted { + if let Some(ref s) = status { + if s.is_uncommitted || s.has_untracked { + return SyncResult { + repo: local_repo.repo.clone(), + path: path.clone(), + result: OpResult::Skipped("uncommitted changes".to_string()), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + } + } + + // Dry run + if self.options.dry_run { + return SyncResult { + repo: local_repo.repo.clone(), + path: path.clone(), + result: OpResult::Skipped("dry run".to_string()), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + + // Fetch + let fetch_result = match self.git.fetch(path) { + Ok(r) => r, + Err(e) => { + return SyncResult { + repo: local_repo.repo.clone(), + path: path.clone(), + result: OpResult::Failed(e.to_string()), + had_updates: false, + status, + fetch_result: None, + pull_result: None, + }; + } + }; + + let had_updates = 
fetch_result.updated;
+
+        // Pull if needed
+        if self.options.mode == SyncMode::Pull && had_updates {
+            match self.git.pull(path) {
+                Ok(r) if r.success => SyncResult {
+                    repo: local_repo.repo.clone(),
+                    path: path.clone(),
+                    result: OpResult::Success,
+                    had_updates,
+                    status,
+                    fetch_result: Some(fetch_result),
+                    pull_result: Some(r),
+                },
+                Ok(r) => SyncResult {
+                    repo: local_repo.repo.clone(),
+                    path: path.clone(),
+                    result: OpResult::Failed(
+                        r.error.clone().unwrap_or_else(|| "Pull failed".to_string()),
+                    ),
+                    had_updates,
+                    status,
+                    fetch_result: Some(fetch_result),
+                    pull_result: Some(r),
+                },
+                Err(e) => SyncResult {
+                    repo: local_repo.repo.clone(),
+                    path: path.clone(),
+                    result: OpResult::Failed(e.to_string()),
+                    had_updates,
+                    status,
+                    fetch_result: Some(fetch_result),
+                    pull_result: None,
+                },
+            }
+        } else {
+            SyncResult {
+                repo: local_repo.repo.clone(),
+                path: path.clone(),
+                result: OpResult::Success,
+                had_updates,
+                status,
+                fetch_result: Some(fetch_result),
+                pull_result: None,
+            }
+        }
+    }
+}
+
+#[cfg(test)]
+#[path = "sync_tests.rs"]
+mod tests;
diff --git a/src/operations/sync_tests.rs b/src/operations/sync_tests.rs
new file mode 100644
index 0000000..05c9a3d
--- /dev/null
+++ b/src/operations/sync_tests.rs
@@ -0,0 +1,297 @@
+use super::*;
+use crate::git::{MockConfig, MockGit, RepoStatus};
+use crate::types::Repo;
+use std::sync::atomic::{AtomicUsize, Ordering};
+use tempfile::TempDir;
+
+fn test_repo(name: &str, owner: &str) -> OwnedRepo {
+    OwnedRepo::new(owner, Repo::test(name, owner))
+}
+
+fn local_repo(name: &str, owner: &str, path: impl Into<PathBuf>) -> LocalRepo {
+    LocalRepo::new(test_repo(name, owner), path)
+}
+
+#[test]
+fn test_sync_manager_options_default() {
+    let options = SyncManagerOptions::default();
+    assert_eq!(options.concurrency, 8);
+    assert_eq!(options.mode, SyncMode::Fetch);
+    assert!(options.skip_uncommitted);
+    assert!(!options.dry_run);
+}
+
+#[test]
+fn test_sync_manager_options_builder() {
+    let options =
SyncManagerOptions::new() + .with_concurrency(8) + .with_mode(SyncMode::Pull) + .with_skip_uncommitted(false) + .with_dry_run(true); + + assert_eq!(options.concurrency, 8); + assert_eq!(options.mode, SyncMode::Pull); + assert!(!options.skip_uncommitted); + assert!(options.dry_run); +} + +#[test] +fn test_sync_single_path_not_exists() { + let git = MockGit::new(); + let options = SyncManagerOptions::new(); + let manager = SyncManager::new(git, options); + + let repo = local_repo("repo", "org", "/nonexistent/path"); + let result = manager.sync_single(&repo); + + assert!(result.result.is_skipped()); + assert_eq!(result.result.skip_reason(), Some("path does not exist")); +} + +#[test] +fn test_sync_single_dry_run() { + let temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.add_repo(temp.path().to_string_lossy().to_string()); + + let options = SyncManagerOptions::new().with_dry_run(true); + let manager = SyncManager::new(git, options); + + let repo = local_repo("repo", "org", temp.path()); + let result = manager.sync_single(&repo); + + assert!(result.result.is_skipped()); + assert_eq!(result.result.skip_reason(), Some("dry run")); +} + +#[test] +fn test_sync_single_uncommitted_skip() { + let temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + let path_str = temp.path().to_string_lossy().to_string(); + git.add_repo(path_str.clone()); + git.set_status( + path_str, + RepoStatus { + branch: "main".to_string(), + is_uncommitted: true, + ahead: 0, + behind: 0, + has_untracked: false, + staged_count: 0, + unstaged_count: 0, + untracked_count: 0, + }, + ); + + let options = SyncManagerOptions::new().with_skip_uncommitted(true); + let manager = SyncManager::new(git, options); + + let repo = local_repo("repo", "org", temp.path()); + let result = manager.sync_single(&repo); + + assert!(result.result.is_skipped()); + assert_eq!(result.result.skip_reason(), Some("uncommitted changes")); +} + +#[test] +fn test_sync_single_fetch_success() { + let 
temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.add_repo(temp.path().to_string_lossy().to_string()); + let options = SyncManagerOptions::new().with_mode(SyncMode::Fetch); + let manager = SyncManager::new(git, options); + + let repo = local_repo("repo", "org", temp.path()); + let result = manager.sync_single(&repo); + + assert!(result.result.is_success()); +} + +#[test] +fn test_sync_single_pull_success() { + let temp = TempDir::new().unwrap(); + + let config = MockConfig { + fetch_has_updates: true, + ..Default::default() + }; + let mut git = MockGit::with_config(config); + git.add_repo(temp.path().to_string_lossy().to_string()); + + let options = SyncManagerOptions::new().with_mode(SyncMode::Pull); + let manager = SyncManager::new(git, options); + + let repo = local_repo("repo", "org", temp.path()); + let result = manager.sync_single(&repo); + + assert!(result.result.is_success()); + assert!(result.had_updates); +} + +#[test] +fn test_sync_single_fetch_failure() { + let temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.add_repo(temp.path().to_string_lossy().to_string()); + git.fail_fetches(Some("network error".to_string())); + + let options = SyncManagerOptions::new(); + let manager = SyncManager::new(git, options); + + let repo = local_repo("repo", "org", temp.path()); + let result = manager.sync_single(&repo); + + assert!(result.result.is_failed()); + assert!(result + .result + .error_message() + .unwrap() + .contains("network error")); +} + +struct CountingSyncProgress { + started: AtomicUsize, + fetch_complete: AtomicUsize, + pull_complete: AtomicUsize, + errors: AtomicUsize, + skipped: AtomicUsize, +} + +impl CountingSyncProgress { + fn new() -> Self { + Self { + started: AtomicUsize::new(0), + fetch_complete: AtomicUsize::new(0), + pull_complete: AtomicUsize::new(0), + errors: AtomicUsize::new(0), + skipped: AtomicUsize::new(0), + } + } +} + +impl SyncProgress for CountingSyncProgress { + fn on_start(&self, 
_repo: &OwnedRepo, _path: &Path, _index: usize, _total: usize) { + self.started.fetch_add(1, Ordering::SeqCst); + } + + fn on_fetch_complete( + &self, + _repo: &OwnedRepo, + _result: &FetchResult, + _index: usize, + _total: usize, + ) { + self.fetch_complete.fetch_add(1, Ordering::SeqCst); + } + + fn on_pull_complete( + &self, + _repo: &OwnedRepo, + _result: &PullResult, + _index: usize, + _total: usize, + ) { + self.pull_complete.fetch_add(1, Ordering::SeqCst); + } + + fn on_error(&self, _repo: &OwnedRepo, _error: &str, _index: usize, _total: usize) { + self.errors.fetch_add(1, Ordering::SeqCst); + } + + fn on_skip(&self, _repo: &OwnedRepo, _reason: &str, _index: usize, _total: usize) { + self.skipped.fetch_add(1, Ordering::SeqCst); + } +} + +#[tokio::test] +async fn test_sync_repos_parallel() { + let temp1 = TempDir::new().unwrap(); + let temp2 = TempDir::new().unwrap(); + let temp3 = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.add_repo(temp1.path().to_string_lossy().to_string()); + git.add_repo(temp2.path().to_string_lossy().to_string()); + git.add_repo(temp3.path().to_string_lossy().to_string()); + let options = SyncManagerOptions::new().with_concurrency(2); + let manager = SyncManager::new(git, options); + + let repos = vec![ + local_repo("repo1", "org", temp1.path()), + local_repo("repo2", "org", temp2.path()), + local_repo("repo3", "org", temp3.path()), + ]; + + let progress = Arc::new(CountingSyncProgress::new()); + let progress_dyn: Arc<dyn SyncProgress> = progress.clone(); + let (summary, results) = manager.sync_repos(repos, progress_dyn).await; + + assert_eq!(summary.success, 3); + assert_eq!(results.len(), 3); + assert_eq!(progress.started.load(Ordering::SeqCst), 3); + assert_eq!(progress.fetch_complete.load(Ordering::SeqCst), 3); +} + +#[tokio::test] +async fn test_sync_repos_dry_run() { + let temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.add_repo(temp.path().to_string_lossy().to_string()); + let options = 
SyncManagerOptions::new().with_dry_run(true); + let manager = SyncManager::new(git, options); + + let repos = vec![local_repo("repo", "org", temp.path())]; + + let progress: Arc<dyn SyncProgress> = Arc::new(NoSyncProgress); + let (summary, _results) = manager.sync_repos(repos, progress).await; + + assert_eq!(summary.skipped, 1); +} + +#[tokio::test] +async fn test_sync_repos_with_updates_pull_mode() { + let temp = TempDir::new().unwrap(); + + let config = MockConfig { + fetch_has_updates: true, + ..Default::default() + }; + let mut git = MockGit::with_config(config); + git.add_repo(temp.path().to_string_lossy().to_string()); + + let options = SyncManagerOptions::new().with_mode(SyncMode::Pull); + let manager = SyncManager::new(git, options); + + let repos = vec![local_repo("repo", "org", temp.path())]; + + let progress = Arc::new(CountingSyncProgress::new()); + let progress_dyn: Arc<dyn SyncProgress> = progress.clone(); + let (summary, results) = manager.sync_repos(repos, progress_dyn).await; + + assert_eq!(summary.success, 1); + assert!(results[0].had_updates); + assert_eq!(progress.pull_complete.load(Ordering::SeqCst), 1); +} + +#[tokio::test] +async fn test_sync_repos_zero_concurrency_is_clamped() { + let temp = TempDir::new().unwrap(); + + let mut git = MockGit::new(); + git.add_repo(temp.path().to_string_lossy().to_string()); + let mut options = SyncManagerOptions::new().with_dry_run(true); + options.concurrency = 0; // bypass builder clamp on purpose + let manager = SyncManager::new(git, options); + + let repos = vec![local_repo("repo", "org", temp.path())]; + let progress: Arc<dyn SyncProgress> = Arc::new(NoSyncProgress); + let (summary, _results) = manager.sync_repos(repos, progress).await; + + assert_eq!(summary.skipped, 1); +} diff --git a/src/output/mod.rs b/src/output/mod.rs new file mode 100644 index 0000000..a1ce410 --- /dev/null +++ b/src/output/mod.rs @@ -0,0 +1,7 @@ +//! Output and progress reporting. 
+ +mod printer; +pub mod progress; + +pub use printer::{format_count, format_error, format_success, format_warning, Output, Verbosity}; +pub use progress::{CloneProgressBar, DiscoveryProgressBar, SyncProgressBar}; diff --git a/src/output/printer.rs b/src/output/printer.rs new file mode 100644 index 0000000..12e4ea5 --- /dev/null +++ b/src/output/printer.rs @@ -0,0 +1,135 @@ +use console::style; + +/// Output verbosity level. +#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] +pub enum Verbosity { + /// No output except errors. + Quiet = 0, + /// Normal output. + Normal = 1, + /// Verbose output. + Verbose = 2, + /// Very verbose (debug) output. + Debug = 3, +} + +impl From<u8> for Verbosity { + fn from(v: u8) -> Self { + match v { + 0 => Verbosity::Quiet, + 1 => Verbosity::Normal, + 2 => Verbosity::Verbose, + _ => Verbosity::Debug, + } + } +} + +/// Output handler for consistent formatting. +#[derive(Debug, Clone)] +pub struct Output { + verbosity: Verbosity, + json: bool, +} + +impl Output { + /// Creates a new output handler. + pub fn new(verbosity: Verbosity, json: bool) -> Self { + Self { verbosity, json } + } + + /// Creates a quiet output handler. + pub fn quiet() -> Self { + Self::new(Verbosity::Quiet, false) + } + + /// Prints an info message. + pub fn info(&self, msg: &str) { + if !self.json && self.verbosity >= Verbosity::Normal { + println!("{} {}", style("→").cyan(), msg); + } + } + + /// Prints a success message. + pub fn success(&self, msg: &str) { + if !self.json && self.verbosity >= Verbosity::Normal { + println!("{} {}", style("✓").green(), msg); + } + } + + /// Prints a warning message. + pub fn warn(&self, msg: &str) { + if !self.json && self.verbosity >= Verbosity::Normal { + eprintln!("{} {}", style("⚠").yellow(), msg); + } + } + + /// Prints an error message. + pub fn error(&self, msg: &str) { + if !self.json { + eprintln!("{} {}", style("✗").red(), msg); + } + } + + /// Prints a plain stdout line (no prefix). 
+ /// + /// Useful for tabular/list output that should still respect quiet/json modes. + pub fn plain(&self, msg: &str) { + if !self.json && self.verbosity >= Verbosity::Normal { + println!("{}", msg); + } + } + + /// Prints a verbose message. + pub fn verbose(&self, msg: &str) { + if !self.json && self.verbosity >= Verbosity::Verbose { + println!("{} {}", style("·").dim(), msg); + } + } + + /// Prints a debug message. + pub fn debug(&self, msg: &str) { + if !self.json && self.verbosity >= Verbosity::Debug { + println!("{} {}", style("⋅").dim(), style(msg).dim()); + } + } + + /// Returns true if output is in JSON mode. + pub fn is_json(&self) -> bool { + self.json + } + + /// Returns the current verbosity level. + pub fn verbosity(&self) -> Verbosity { + self.verbosity + } +} + +impl Default for Output { + fn default() -> Self { + Self::new(Verbosity::Normal, false) + } +} + +/// Format a count with appropriate styling. +pub fn format_count(count: usize, label: &str) -> String { + format!("{} {}", style(count).cyan().bold(), label) +} + +/// Format a success message. +pub fn format_success(msg: &str) -> String { + format!("{} {}", style("✓").green(), msg) +} + +/// Format an error message. +pub fn format_error(msg: &str) -> String { + format!("{} {}", style("✗").red(), msg) +} + +/// Format a warning message. 
+pub fn format_warning(msg: &str) -> String { + format!("{} {}", style("⚠").yellow(), msg) +} + +#[cfg(test)] +#[path = "printer_tests.rs"] +mod tests; diff --git a/src/output/printer_tests.rs b/src/output/printer_tests.rs new file mode 100644 index 0000000..33fc443 --- /dev/null +++ b/src/output/printer_tests.rs @@ -0,0 +1,48 @@ +use super::*; + +#[test] +fn test_verbosity_from_u8() { + assert_eq!(Verbosity::from(0), Verbosity::Quiet); + assert_eq!(Verbosity::from(1), Verbosity::Normal); + assert_eq!(Verbosity::from(2), Verbosity::Verbose); + assert_eq!(Verbosity::from(3), Verbosity::Debug); + assert_eq!(Verbosity::from(100), Verbosity::Debug); +} + +#[test] +fn test_verbosity_ordering() { + assert!(Verbosity::Quiet < Verbosity::Normal); + assert!(Verbosity::Normal < Verbosity::Verbose); + assert!(Verbosity::Verbose < Verbosity::Debug); +} + +#[test] +fn test_output_creation() { + let output = Output::new(Verbosity::Normal, false); + assert!(!output.is_json()); + + let json_output = Output::new(Verbosity::Normal, true); + assert!(json_output.is_json()); +} + +#[test] +fn test_output_quiet() { + let output = Output::quiet(); + assert_eq!(output.verbosity, Verbosity::Quiet); +} + +#[test] +fn test_format_functions() { + let count = format_count(42, "repos"); + assert!(count.contains("42")); + assert!(count.contains("repos")); + + let success = format_success("done"); + assert!(success.contains("done")); + + let error = format_error("failed"); + assert!(error.contains("failed")); + + let warning = format_warning("caution"); + assert!(warning.contains("caution")); +} diff --git a/src/output/progress/clone.rs b/src/output/progress/clone.rs new file mode 100644 index 0000000..2c1f404 --- /dev/null +++ b/src/output/progress/clone.rs @@ -0,0 +1,94 @@ +use crate::operations::clone::CloneProgress; +use crate::output::Verbosity; +use crate::types::OwnedRepo; +use console::style; +use indicatif::{MultiProgress, ProgressBar}; + +use super::styles::progress_style; + +/// 
Progress reporter for clone operations. +pub struct CloneProgressBar { + #[allow(dead_code)] + multi: MultiProgress, + main_bar: ProgressBar, + verbosity: Verbosity, +} + +impl CloneProgressBar { + /// Creates a new clone progress bar. + pub fn new(total: usize, verbosity: Verbosity) -> Self { + let multi = MultiProgress::new(); + let main_bar = multi.add(ProgressBar::new(total as u64)); + main_bar.set_style(progress_style()); + main_bar.set_message("Cloning repositories..."); + main_bar.enable_steady_tick(std::time::Duration::from_millis(100)); + + Self { + multi, + main_bar, + verbosity, + } + } + + /// Finishes the progress bar. + pub fn finish(&self, success: usize, failed: usize, skipped: usize) { + let msg = format!( + "{} {} cloned, {} failed, {} skipped", + style("✓").green(), + success, + failed, + skipped + ); + self.main_bar.finish_with_message(msg); + } +} + +impl CloneProgress for CloneProgressBar { + fn on_start(&self, repo: &OwnedRepo, _index: usize, _total: usize) { + if self.verbosity >= Verbosity::Verbose { + self.main_bar + .set_message(format!("Cloning {}...", style(repo.full_name()).cyan())); + } + } + + fn on_complete(&self, repo: &OwnedRepo, _index: usize, _total: usize) { + self.main_bar.inc(1); + if self.verbosity >= Verbosity::Debug { + self.main_bar.suspend(|| { + println!("{} Cloned {}", style("✓").green(), repo.full_name()); + }); + } + } + + fn on_error(&self, repo: &OwnedRepo, error: &str, _index: usize, _total: usize) { + self.main_bar.inc(1); + if self.verbosity >= Verbosity::Normal { + self.main_bar.suspend(|| { + eprintln!( + "{} Failed to clone {}: {}", + style("✗").red(), + repo.full_name(), + error + ); + }); + } + } + + fn on_skip(&self, repo: &OwnedRepo, reason: &str, _index: usize, _total: usize) { + self.main_bar.inc(1); + if self.verbosity >= Verbosity::Verbose { + self.main_bar.suspend(|| { + println!( + "{} Skipped {}: {}", + style("→").dim(), + repo.full_name(), + reason + ); + }); + } + } +} + +#[cfg(test)] +#[path = 
"clone_tests.rs"] +mod tests; diff --git a/src/output/progress/clone_tests.rs b/src/output/progress/clone_tests.rs new file mode 100644 index 0000000..66ee96e --- /dev/null +++ b/src/output/progress/clone_tests.rs @@ -0,0 +1,18 @@ +use super::*; +use crate::operations::clone::CloneProgress; + +fn sample_repo() -> OwnedRepo { + OwnedRepo::new("acme", crate::types::Repo::test("rocket", "acme")) +} + +#[test] +fn clone_progress_bar_methods_execute_without_panics() { + let progress = CloneProgressBar::new(2, Verbosity::Verbose); + let repo = sample_repo(); + + progress.on_start(&repo, 1, 2); + progress.on_complete(&repo, 1, 2); + progress.on_error(&repo, "network", 2, 2); + progress.on_skip(&repo, "already cloned", 2, 2); + progress.finish(1, 1, 0); +} diff --git a/src/output/progress/discovery.rs b/src/output/progress/discovery.rs new file mode 100644 index 0000000..3e2f38f --- /dev/null +++ b/src/output/progress/discovery.rs @@ -0,0 +1,98 @@ +use crate::output::Verbosity; +use crate::provider::DiscoveryProgress; +use console::style; +use indicatif::{MultiProgress, ProgressBar}; +use std::sync::atomic::{AtomicUsize, Ordering}; +use std::sync::Arc; + +use super::styles::spinner_style; + +/// Progress reporter for discovery operations. +pub struct DiscoveryProgressBar { + #[allow(dead_code)] + multi: MultiProgress, + main_bar: ProgressBar, + repo_count: Arc<AtomicUsize>, + verbosity: Verbosity, +} + +impl DiscoveryProgressBar { + /// Creates a new discovery progress bar. + pub fn new(verbosity: Verbosity) -> Self { + let multi = MultiProgress::new(); + let main_bar = multi.add(ProgressBar::new_spinner()); + main_bar.set_style(spinner_style()); + main_bar.set_message("Discovering repositories..."); + main_bar.enable_steady_tick(std::time::Duration::from_millis(100)); + + Self { + multi, + main_bar, + repo_count: Arc::new(AtomicUsize::new(0)), + verbosity, + } + } + + /// Finishes the progress bar. 
+ pub fn finish(&self) { + let count = self.repo_count.load(Ordering::SeqCst); + self.main_bar.finish_with_message(format!( + "{} Discovered {} repositories", + style("✓").green(), + count + )); + } +} + +impl DiscoveryProgress for DiscoveryProgressBar { + fn on_orgs_discovered(&self, count: usize) { + if self.verbosity >= Verbosity::Verbose { + self.main_bar + .set_message(format!("Found {} organizations", count)); + } + } + + fn on_org_started(&self, org_name: &str) { + if self.verbosity >= Verbosity::Verbose { + self.main_bar + .set_message(format!("Discovering: {}", style(org_name).cyan())); + } + } + + fn on_org_complete(&self, org_name: &str, repo_count: usize) { + self.repo_count.fetch_add(repo_count, Ordering::SeqCst); + let total = self.repo_count.load(Ordering::SeqCst); + self.main_bar.set_message(format!( + "Discovered {} repos ({} from {})", + total, + repo_count, + style(org_name).cyan() + )); + } + + fn on_personal_repos_started(&self) { + if self.verbosity >= Verbosity::Verbose { + self.main_bar + .set_message("Discovering personal repositories..."); + } + } + + fn on_personal_repos_complete(&self, count: usize) { + self.repo_count.fetch_add(count, Ordering::SeqCst); + let total = self.repo_count.load(Ordering::SeqCst); + self.main_bar + .set_message(format!("Discovered {} repos (including personal)", total)); + } + + fn on_error(&self, message: &str) { + if self.verbosity >= Verbosity::Normal { + self.main_bar.suspend(|| { + eprintln!("{} {}", style("⚠").yellow(), message); + }); + } + } +} + +#[cfg(test)] +#[path = "discovery_tests.rs"] +mod tests; diff --git a/src/output/progress/discovery_tests.rs b/src/output/progress/discovery_tests.rs new file mode 100644 index 0000000..aa8cd34 --- /dev/null +++ b/src/output/progress/discovery_tests.rs @@ -0,0 +1,14 @@ +use super::*; + +#[test] +fn discovery_progress_bar_methods_execute_without_panics() { + let progress = DiscoveryProgressBar::new(Verbosity::Verbose); + + progress.on_orgs_discovered(3); + 
progress.on_org_started("acme"); + progress.on_org_complete("acme", 7); + progress.on_personal_repos_started(); + progress.on_personal_repos_complete(2); + progress.on_error("simulated warning"); + progress.finish(); +} diff --git a/src/output/progress/mod.rs b/src/output/progress/mod.rs new file mode 100644 index 0000000..45929e2 --- /dev/null +++ b/src/output/progress/mod.rs @@ -0,0 +1,8 @@ +mod clone; +mod discovery; +mod styles; +mod sync; + +pub use clone::CloneProgressBar; +pub use discovery::DiscoveryProgressBar; +pub use sync::SyncProgressBar; diff --git a/src/output/progress/styles.rs b/src/output/progress/styles.rs new file mode 100644 index 0000000..3f191b8 --- /dev/null +++ b/src/output/progress/styles.rs @@ -0,0 +1,20 @@ +use indicatif::ProgressStyle; + +/// Default spinner style frames. +const SPINNER_FRAMES: &[&str] = &["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]; + +/// Creates a default spinner style. +pub fn spinner_style() -> ProgressStyle { + ProgressStyle::default_spinner() + .tick_strings(SPINNER_FRAMES) + .template("{spinner:.cyan} {msg}") + .expect("Invalid spinner template") +} + +/// Creates a progress bar style. +pub fn progress_style() -> ProgressStyle { + ProgressStyle::default_bar() + .template("{spinner:.cyan} [{bar:40.cyan/dim}] {pos}/{len} {msg}") + .expect("Invalid progress template") + .progress_chars("━╸─") +} diff --git a/src/output/progress/sync.rs b/src/output/progress/sync.rs new file mode 100644 index 0000000..e1dbfbe --- /dev/null +++ b/src/output/progress/sync.rs @@ -0,0 +1,151 @@ +use crate::git::{FetchResult, PullResult}; +use crate::operations::sync::SyncProgress; +use crate::output::Verbosity; +use crate::types::OwnedRepo; +use console::style; +use indicatif::{MultiProgress, ProgressBar}; +use std::path::Path; +use std::sync::atomic::{AtomicUsize, Ordering}; +use std::sync::Arc; + +use super::styles::progress_style; + +/// Progress reporter for sync operations. 
+pub struct SyncProgressBar { + #[allow(dead_code)] + multi: MultiProgress, + main_bar: ProgressBar, + verbosity: Verbosity, + updates_count: Arc<AtomicUsize>, +} + +impl SyncProgressBar { + /// Creates a new sync progress bar. + pub fn new(total: usize, verbosity: Verbosity, operation: &str) -> Self { + let multi = MultiProgress::new(); + let main_bar = multi.add(ProgressBar::new(total as u64)); + main_bar.set_style(progress_style()); + main_bar.set_message(format!("{}ing repositories...", operation)); + main_bar.enable_steady_tick(std::time::Duration::from_millis(100)); + + Self { + multi, + main_bar, + verbosity, + updates_count: Arc::new(AtomicUsize::new(0)), + } + } + + /// Finishes the progress bar. + pub fn finish(&self, success: usize, failed: usize, skipped: usize) { + let updates = self.updates_count.load(Ordering::SeqCst); + let msg = format!( + "{} {} synced ({} with updates), {} failed, {} skipped", + style("✓").green(), + success, + updates, + failed, + skipped + ); + self.main_bar.finish_with_message(msg); + } +} + +impl SyncProgress for SyncProgressBar { + fn on_start(&self, repo: &OwnedRepo, _path: &Path, _index: usize, _total: usize) { + if self.verbosity >= Verbosity::Verbose { + self.main_bar + .set_message(format!("Syncing {}...", style(repo.full_name()).cyan())); + } + } + + fn on_fetch_complete( + &self, + repo: &OwnedRepo, + result: &FetchResult, + _index: usize, + _total: usize, + ) { + self.main_bar.inc(1); + if result.updated { + self.updates_count.fetch_add(1, Ordering::SeqCst); + } + if self.verbosity >= Verbosity::Debug { + let status = if result.updated { + "updated" + } else { + "up to date" + }; + self.main_bar.suspend(|| { + println!( + "{} {} {}", + style("✓").green(), + repo.full_name(), + style(status).dim() + ); + }); + } + } + + fn on_pull_complete( + &self, + repo: &OwnedRepo, + result: &PullResult, + _index: usize, + _total: usize, + ) { + self.main_bar.inc(1); + if result.updated { + self.updates_count.fetch_add(1, Ordering::SeqCst); + 
} + if self.verbosity >= Verbosity::Debug { + let status = if !result.updated { + "up to date" + } else if result.fast_forward { + "fast-forward" + } else { + "merged" + }; + self.main_bar.suspend(|| { + println!( + "{} {} {}", + style("✓").green(), + repo.full_name(), + style(status).dim() + ); + }); + } + } + + fn on_error(&self, repo: &OwnedRepo, error: &str, _index: usize, _total: usize) { + self.main_bar.inc(1); + if self.verbosity >= Verbosity::Normal { + self.main_bar.suspend(|| { + eprintln!( + "{} Failed to sync {}: {}", + style("✗").red(), + repo.full_name(), + error + ); + }); + } + } + + fn on_skip(&self, repo: &OwnedRepo, reason: &str, _index: usize, _total: usize) { + self.main_bar.inc(1); + if self.verbosity >= Verbosity::Verbose { + self.main_bar.suspend(|| { + println!( + "{} Skipped {}: {}", + style("→").dim(), + repo.full_name(), + reason + ); + }); + } + } +} + +#[cfg(test)] +#[path = "sync_tests.rs"] +mod tests; diff --git a/src/output/progress/sync_tests.rs b/src/output/progress/sync_tests.rs new file mode 100644 index 0000000..ce1a17d --- /dev/null +++ b/src/output/progress/sync_tests.rs @@ -0,0 +1,38 @@ +use super::*; +use crate::operations::sync::SyncProgress; + +fn sample_repo() -> OwnedRepo { + OwnedRepo::new("acme", crate::types::Repo::test("rocket", "acme")) +} + +#[test] +fn sync_progress_bar_methods_execute_without_panics() { + let progress = SyncProgressBar::new(3, Verbosity::Verbose, "Fetch"); + let repo = sample_repo(); + let temp_dir = std::env::temp_dir(); + + progress.on_start(&repo, temp_dir.as_path(), 1, 3); + progress.on_fetch_complete( + &repo, + &FetchResult { + updated: true, + new_commits: Some(4), + }, + 1, + 3, + ); + progress.on_pull_complete( + &repo, + &PullResult { + success: true, + updated: true, + fast_forward: true, + error: None, + }, + 2, + 3, + ); + progress.on_error(&repo, "sync failed", 3, 3); + progress.on_skip(&repo, "dirty tree", 3, 3); + progress.finish(2, 1, 0); +} diff --git 
a/src/provider/github/client.rs b/src/provider/github/client.rs new file mode 100644 index 0000000..77b4ecc --- /dev/null +++ b/src/provider/github/client.rs @@ -0,0 +1,274 @@ +//! GitHub API client implementation. + +use async_trait::async_trait; +use reqwest::header::{HeaderMap, HeaderValue, ACCEPT, AUTHORIZATION, USER_AGENT}; +use reqwest::Client; +use tracing::{debug, trace}; + +use super::pagination::fetch_all_pages; +use super::GITHUB_API_URL; +use crate::errors::ProviderError; +use crate::provider::traits::*; +use crate::types::{Org, OwnedRepo, ProviderKind, Repo}; + +/// Default timeout for API requests in seconds. +const DEFAULT_TIMEOUT_SECS: u64 = 60; + +/// GitHub provider implementation. +/// +/// Supports both github.com and GitHub Enterprise Server. +pub struct GitHubProvider { + /// HTTP client + client: Client, + /// Authentication credentials + credentials: Credentials, + /// Display name for this provider instance + display_name: String, +} + +impl GitHubProvider { + /// Creates a new GitHub provider with default timeout. + pub fn new( + credentials: Credentials, + display_name: impl Into<String>, + ) -> Result<Self, ProviderError> { + Self::with_timeout(credentials, display_name, DEFAULT_TIMEOUT_SECS) + } + + /// Creates a new GitHub provider with custom timeout. 
+ pub fn with_timeout( + credentials: Credentials, + display_name: impl Into<String>, + timeout_secs: u64, + ) -> Result<Self, ProviderError> { + let mut headers = HeaderMap::new(); + headers.insert(USER_AGENT, HeaderValue::from_static("gisa-cli/0.1.0")); + headers.insert( + ACCEPT, + HeaderValue::from_static("application/vnd.github+json"), + ); + headers.insert( + "X-GitHub-Api-Version", + HeaderValue::from_static("2022-11-28"), + ); + + let client = Client::builder() + .default_headers(headers) + .timeout(std::time::Duration::from_secs(timeout_secs)) + .build() + .map_err(|e| ProviderError::Configuration(e.to_string()))?; + + Ok(Self { + client, + credentials, + display_name: display_name.into(), + }) + } + + /// Constructs a full API URL from a path. + fn api_url(&self, path: &str) -> String { + format!("{}{}", self.credentials.api_base_url, path) + } + + /// Makes an authenticated GET request. + async fn get<T: serde::de::DeserializeOwned>(&self, url: &str) -> Result<T, ProviderError> { + trace!(url, "Making authenticated GET request"); + + let response = self + .client + .get(url) + .header(AUTHORIZATION, format!("Bearer {}", self.credentials.token)) + .send() + .await + .map_err(|e| ProviderError::Network(e.to_string()))?; + + let status = response.status(); + trace!(url, status = %status, "Received response"); + + if !status.is_success() { + let body = response.text().await.unwrap_or_default(); + debug!(url, status = %status, "API request failed"); + return Err(ProviderError::from_status(status.as_u16(), body)); + } + + response + .json() + .await + .map_err(|e| ProviderError::Parse(e.to_string())) + } + + /// Fetches all pages from an endpoint. + async fn get_paginated<T: serde::de::DeserializeOwned>( + &self, + url: &str, + ) -> Result<Vec<T>, ProviderError> { + fetch_all_pages(&self.client, &self.credentials.token, url).await + } + + /// Determines if this is GitHub.com or GitHub Enterprise. 
+ fn is_github_com(&self) -> bool { + self.credentials.api_base_url == GITHUB_API_URL + } +} + +#[async_trait] +impl Provider for GitHubProvider { + fn kind(&self) -> ProviderKind { + if self.is_github_com() { + ProviderKind::GitHub + } else { + ProviderKind::GitHubEnterprise + } + } + + fn display_name(&self) -> &str { + &self.display_name + } + + async fn validate_credentials(&self) -> Result<(), ProviderError> { + // Make a simple API call to verify the token works + self.get_username().await?; + Ok(()) + } + + async fn get_username(&self) -> Result<String, ProviderError> { + #[derive(serde::Deserialize)] + struct User { + login: String, + } + + let url = self.api_url("/user"); + let user: User = self.get(&url).await?; + Ok(user.login) + } + + async fn get_organizations(&self) -> Result<Vec<Org>, ProviderError> { + let url = self.api_url("/user/orgs"); + self.get_paginated(&url).await + } + + async fn get_org_repos(&self, org: &str) -> Result<Vec<Repo>, ProviderError> { + let url = self.api_url(&format!("/orgs/{}/repos", org)); + self.get_paginated(&url).await + } + + async fn get_user_repos(&self) -> Result<Vec<Repo>, ProviderError> { + let url = self.api_url("/user/repos?affiliation=owner"); + self.get_paginated(&url).await + } + + async fn get_rate_limit(&self) -> Result<RateLimitInfo, ProviderError> { + #[derive(serde::Deserialize)] + struct RateLimitResponse { + rate: RateInfo, + } + + #[derive(serde::Deserialize)] + struct RateInfo { + limit: u32, + remaining: u32, + reset: i64, + } + + let url = self.api_url("/rate_limit"); + let response: RateLimitResponse = self.get(&url).await?; + + Ok(RateLimitInfo { + limit: response.rate.limit, + remaining: response.rate.remaining, + reset_at: Some(response.rate.reset), + }) + } + + async fn discover_repos( + &self, + options: &DiscoveryOptions, + progress: &dyn DiscoveryProgress, + ) -> Result<Vec<OwnedRepo>, ProviderError> { + debug!(provider = %self.display_name, "Starting repository discovery"); + + let username = self.get_username().await?; + debug!(username, "Authenticated user"); + + let mut all_repos = 
Vec::new(); + + // Get organizations + let orgs = self.get_organizations().await?; + let orgs_count = orgs.len(); + let filtered_orgs: Vec<_> = orgs + .into_iter() + .filter(|o| options.should_include_org(&o.login)) + .collect(); + + debug!( + total_orgs = orgs_count, + filtered_orgs = filtered_orgs.len(), + "Discovered organizations" + ); + progress.on_orgs_discovered(filtered_orgs.len()); + + // Fetch repos for each org + for org in &filtered_orgs { + progress.on_org_started(&org.login); + + match self.get_org_repos(&org.login).await { + Ok(repos) => { + let filtered: Vec<_> = repos + .into_iter() + .filter(|r| options.should_include(r)) + .collect(); + + let count = filtered.len(); + for repo in filtered { + all_repos.push(OwnedRepo::new(&org.login, repo)); + } + + progress.on_org_complete(&org.login, count); + } + Err(e) => { + progress.on_error(&format!("Error fetching repos for {}: {}", org.login, e)); + progress.on_org_complete(&org.login, 0); + } + } + } + + // Fetch personal repos + progress.on_personal_repos_started(); + + match self.get_user_repos().await { + Ok(repos) => { + let filtered: Vec<_> = repos + .into_iter() + // Skip repos already added via org + .filter(|r| !all_repos.iter().any(|or| or.repo.id == r.id)) + .filter(|r| options.should_include(r)) + .collect(); + + let count = filtered.len(); + for repo in filtered { + all_repos.push(OwnedRepo::new(&username, repo)); + } + + progress.on_personal_repos_complete(count); + } + Err(e) => { + progress.on_error(&format!("Error fetching personal repos: {}", e)); + progress.on_personal_repos_complete(0); + } + } + + Ok(all_repos) + } + + fn get_clone_url(&self, repo: &Repo, prefer_ssh: bool) -> String { + if prefer_ssh { + repo.ssh_url.clone() + } else { + repo.clone_url.clone() + } + } +} + +#[cfg(test)] +#[path = "client_tests.rs"] +mod tests; diff --git a/src/provider/github/client_tests.rs b/src/provider/github/client_tests.rs new file mode 100644 index 0000000..8902d2b --- /dev/null +++ 
@@ -0,0 +1,81 @@
+use super::*;
+
+fn test_credentials() -> Credentials {
+    Credentials::new("test-token", GITHUB_API_URL)
+}
+
+fn real_github_token_from_gh() -> Option<String> {
+    if !crate::auth::gh_cli::is_installed() || !crate::auth::gh_cli::is_authenticated() {
+        return None;
+    }
+    crate::auth::gh_cli::get_token().ok()
+}
+
+#[test]
+fn test_provider_creation() {
+    let result = GitHubProvider::new(test_credentials(), "Test GitHub");
+    assert!(result.is_ok());
+
+    let provider = result.unwrap();
+    assert_eq!(provider.kind(), ProviderKind::GitHub);
+    assert_eq!(provider.display_name(), "Test GitHub");
+}
+
+#[test]
+fn test_is_github_com() {
+    let provider = GitHubProvider::new(test_credentials(), "GitHub").unwrap();
+    assert!(provider.is_github_com());
+
+    let enterprise_creds = Credentials::new("token", "https://github.company.com/api/v3");
+    let provider = GitHubProvider::new(enterprise_creds, "GHE").unwrap();
+    assert!(!provider.is_github_com());
+}
+
+#[test]
+fn test_api_url_construction() {
+    let provider = GitHubProvider::new(test_credentials(), "GitHub").unwrap();
+    assert_eq!(provider.api_url("/user"), "https://api.github.com/user");
+    assert_eq!(
+        provider.api_url("/orgs/test/repos"),
+        "https://api.github.com/orgs/test/repos"
+    );
+}
+
+#[test]
+fn test_kind_detection() {
+    let github_creds = Credentials::new("token", GITHUB_API_URL);
+    let provider = GitHubProvider::new(github_creds, "GitHub").unwrap();
+    assert_eq!(provider.kind(), ProviderKind::GitHub);
+
+    let ghe_creds = Credentials::new("token", "https://github.company.com/api/v3");
+    let provider = GitHubProvider::new(ghe_creds, "GHE").unwrap();
+    assert_eq!(provider.kind(), ProviderKind::GitHubEnterprise);
+}
+
+// Integration tests that require gh CLI to be installed and authenticated
+// These are ignored by default
+#[tokio::test]
+#[ignore]
+async fn test_get_username_real() {
+    let Some(token) = real_github_token_from_gh() else {
+        return;
+    };
+    let credentials = Credentials::new(token, GITHUB_API_URL);
+    let provider = GitHubProvider::new(credentials, "GitHub").unwrap();
+
+    let username = provider.get_username().await.unwrap();
+    assert!(!username.is_empty());
+}
+
+#[tokio::test]
+#[ignore]
+async fn test_get_rate_limit_real() {
+    let Some(token) = real_github_token_from_gh() else {
+        return;
+    };
+    let credentials = Credentials::new(token, GITHUB_API_URL);
+    let provider = GitHubProvider::new(credentials, "GitHub").unwrap();
+
+    let rate_limit = provider.get_rate_limit().await.unwrap();
+    assert!(rate_limit.limit > 0);
+}
diff --git a/src/provider/github/mod.rs b/src/provider/github/mod.rs
new file mode 100644
index 0000000..54f9a26
--- /dev/null
+++ b/src/provider/github/mod.rs
@@ -0,0 +1,11 @@
+//! GitHub provider implementation.
+//!
+//! Supports both github.com and GitHub Enterprise Server.
+
+mod client;
+mod pagination;
+
+pub use client::GitHubProvider;
+
+/// Default GitHub API URL
+pub const GITHUB_API_URL: &str = "https://api.github.com";
diff --git a/src/provider/github/pagination.rs b/src/provider/github/pagination.rs
new file mode 100644
index 0000000..f12ffff
--- /dev/null
+++ b/src/provider/github/pagination.rs
@@ -0,0 +1,201 @@
+//! GitHub API pagination handling.
+//!
+//! GitHub uses Link headers for pagination. This module handles
+//! parsing those headers and fetching all pages.
+
+use reqwest::header::AUTHORIZATION;
+use reqwest::Client;
+use serde::de::DeserializeOwned;
+use std::time::{Duration, SystemTime, UNIX_EPOCH};
+
+use crate::errors::ProviderError;
+
+/// Maximum pages to fetch (100 items/page = 10,000 items max).
+const MAX_PAGES: usize = 100;
+
+/// Maximum retry attempts for transient failures. Uses exponential backoff.
+const MAX_RETRIES: u32 = 3;
+
+/// Initial backoff in ms. Doubles each retry: 1s -> 2s -> 4s.
+const INITIAL_BACKOFF_MS: u64 = 1000;
+
+/// Parses the GitHub Link header to find the next page URL.
+///
+/// GitHub Link headers look like:
+/// `<https://api.github.com/user/repos?page=2>; rel="next", <https://api.github.com/user/repos?page=5>; rel="last"`
+pub fn parse_link_header(link: &str) -> Option<String> {
+    for part in link.split(',') {
+        let segments: Vec<&str> = part.split(';').collect();
+        if segments.len() >= 2 {
+            let rel = segments[1].trim();
+            if rel == "rel=\"next\"" {
+                let url = segments[0].trim();
+                // Remove < and > from URL
+                if url.starts_with('<') && url.ends_with('>') {
+                    return Some(url[1..url.len() - 1].to_string());
+                }
+            }
+        }
+    }
+    None
+}
+
+/// Format a Unix timestamp as a human-readable reset time string.
+fn format_reset_time(reset_timestamp: &str) -> String {
+    if let Ok(secs) = reset_timestamp.parse::<i64>() {
+        if let Some(dt) = chrono::DateTime::from_timestamp(secs, 0) {
+            let wait = dt.signed_duration_since(chrono::Utc::now());
+            let mins = wait.num_minutes();
+            let secs_rem = wait.num_seconds() % 60;
+            return if mins > 0 {
+                format!(
+                    "{} (resets in {}m {}s)",
+                    dt.format("%H:%M:%S UTC"),
+                    mins,
+                    secs_rem
+                )
+            } else if secs_rem > 0 {
+                format!("{} (resets in {}s)", dt.format("%H:%M:%S UTC"), secs_rem)
+            } else {
+                format!("{} (resets now)", dt.format("%H:%M:%S UTC"))
+            };
+        }
+    }
+    reset_timestamp.to_string()
+}
+
+/// Calculate wait time until rate limit reset.
+fn calculate_wait_time(reset_timestamp: &str) -> Option<Duration> {
+    if let Ok(reset_secs) = reset_timestamp.parse::<u64>() {
+        let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_secs();
+
+        if reset_secs > now {
+            return Some(Duration::from_secs(reset_secs - now));
+        }
+    }
+    None
+}
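As an aside, the splitting strategy behind `parse_link_header` can be exercised standalone. The following is an illustrative re-implementation — the function name `next_page` is hypothetical and not part of this patch — of the same idea: split the header on `,`, then on `;`, and unwrap the `<…>` around the URL tagged `rel="next"`:

```rust
// Illustrative sketch of Link-header "next" extraction (not the crate's own
// function): split on ',', then ';', keep the <url> whose rel is "next".
fn next_page(link: &str) -> Option<String> {
    for part in link.split(',') {
        let mut pieces = part.split(';');
        let url = pieces.next()?.trim();
        let rel = pieces.next().map(str::trim);
        if rel == Some(r#"rel="next""#) && url.starts_with('<') && url.ends_with('>') {
            return Some(url[1..url.len() - 1].to_string());
        }
    }
    None
}

fn main() {
    let header = r#"<https://api.github.com/user/repos?page=2>; rel="next", <https://api.github.com/user/repos?page=5>; rel="last""#;
    assert_eq!(
        next_page(header),
        Some("https://api.github.com/user/repos?page=2".to_string())
    );
    // A header with no rel="next" yields None, ending pagination.
    assert_eq!(next_page(r#"<https://example.test?page=5>; rel="last""#), None);
}
```

Returning `None` when no `rel="next"` link exists is what terminates the fetch loop below.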
+
+/// Fetches all pages from a GitHub API endpoint using Link header pagination.
+///
+/// # Arguments
+/// * `client` - The HTTP client to use
+/// * `token` - The authentication token
+/// * `initial_url` - The URL to start fetching from
+///
+/// This function implements exponential backoff for rate limit errors and
+/// transient failures.
+pub async fn fetch_all_pages<T: DeserializeOwned>(
+    client: &Client,
+    token: &str,
+    initial_url: &str,
+) -> Result<Vec<T>, ProviderError> {
+    let mut results = Vec::new();
+    let mut url = Some(format!(
+        "{}{}per_page=100",
+        initial_url,
+        if initial_url.contains('?') { "&" } else { "?" }
+    ));
+
+    let mut page_count = 0;
+
+    while let Some(current_url) = url {
+        let mut retry_count = 0;
+        let mut backoff_ms = INITIAL_BACKOFF_MS;
+
+        let (next_url_opt, items) = loop {
+            let response = match client
+                .get(&current_url)
+                .header(AUTHORIZATION, format!("Bearer {}", token))
+                .send()
+                .await
+            {
+                Ok(response) => response,
+                Err(_) if retry_count < MAX_RETRIES => {
+                    retry_count += 1;
+                    tokio::time::sleep(Duration::from_millis(backoff_ms)).await;
+                    backoff_ms *= 2;
+                    continue;
+                }
+                Err(e) => return Err(ProviderError::Network(e.to_string())),
+            };
+
+            let status = response.status();
+
+            // Check for rate limiting
+            if status.as_u16() == 403 {
+                if let Some(remaining) = response.headers().get("x-ratelimit-remaining") {
+                    if remaining.to_str().unwrap_or("1") == "0" {
+                        let reset = response
+                            .headers()
+                            .get("x-ratelimit-reset")
+                            .and_then(|h| h.to_str().ok())
+                            .unwrap_or("unknown");
+
+                        // Try to parse reset time and wait
+                        if let Some(wait_time) = calculate_wait_time(reset) {
+                            if retry_count < MAX_RETRIES {
+                                retry_count += 1;
+                                // Add a small buffer to the wait time
+                                let wait_with_buffer = wait_time + Duration::from_secs(5);
+                                tokio::time::sleep(wait_with_buffer).await;
+                                continue; // Retry the request
+                            }
+                        }
+
+                        return Err(ProviderError::RateLimited {
+                            reset_time: format_reset_time(reset),
+                        });
+                    }
+                }
+            }
+
+            // Retry on 5xx errors with exponential backoff
+            if status.is_server_error() && retry_count < MAX_RETRIES {
+                retry_count += 1;
+                tokio::time::sleep(Duration::from_millis(backoff_ms)).await;
+                backoff_ms *= 2; // Exponential backoff: 1s, 2s, 4s
+                continue;
+            }
+
+            if !status.is_success() {
+                let body = response.text().await.unwrap_or_default();
+                return Err(ProviderError::from_status(status.as_u16(), body));
+            }
+
+            // Get next page URL before consuming response body
+            let next_url = response
+                .headers()
+                .get("Link")
+                .and_then(|h| h.to_str().ok())
+                .and_then(parse_link_header);
+
+            // Parse response body
+            let items: Vec<T> = response
+                .json()
+                .await
+                .map_err(|e| ProviderError::Parse(e.to_string()))?;
+
+            break (next_url, items);
+        };
+
+        // Use the next URL from the loop
+        url = next_url_opt;
+
+        // Extend results with items from this page
+        results.extend(items);
+
+        page_count += 1;
+        if page_count >= MAX_PAGES && url.is_some() {
+            return Err(ProviderError::Configuration(format!(
+                "Pagination truncated after {} pages for '{}'",
+                MAX_PAGES, initial_url
+            )));
+        }
+    }
+
+    Ok(results)
+}
+
+#[cfg(test)]
+#[path = "pagination_tests.rs"]
+mod tests;
diff --git a/src/provider/github/pagination_tests.rs b/src/provider/github/pagination_tests.rs
new file mode 100644
index 0000000..4c8b9a4
--- /dev/null
+++ b/src/provider/github/pagination_tests.rs
@@ -0,0 +1,66 @@
+use super::*;
+
+#[test]
+fn test_parse_link_header_with_next() {
+    let header = r#"<https://api.github.com/user/repos?page=2>; rel="next", <https://api.github.com/user/repos?page=5>; rel="last""#;
+    let next = parse_link_header(header);
+    assert_eq!(
+        next,
+        Some("https://api.github.com/user/repos?page=2".to_string())
+    );
+}
+
+#[test]
+fn test_parse_link_header_without_next() {
+    let header = r#"<https://api.github.com/user/repos?page=1>; rel="first", <https://api.github.com/user/repos?page=5>; rel="last""#;
+    let next = parse_link_header(header);
+    assert_eq!(next, None);
+}
+
+#[test]
+fn test_parse_link_header_only_last() {
+    let header = r#"<https://api.github.com/user/repos?page=1>; rel="prev", <https://api.github.com/user/repos?page=5>; rel="last""#;
+    let next = parse_link_header(header);
+    assert_eq!(next, None);
+}
+
+#[test]
+fn test_parse_link_header_empty() {
+    let next = parse_link_header("");
+    assert_eq!(next, None);
+}
+
+#[test]
+fn test_parse_link_header_malformed() {
+    let header = "malformed header without proper format";
+    let next = parse_link_header(header);
+    assert_eq!(next, None);
+}
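The retry constants above imply a fixed delay schedule. A standalone sketch — the helper `backoff_schedule` is hypothetical, shown only to make the doubling used by `fetch_all_pages` concrete:

```rust
// Sketch of the retry schedule in fetch_all_pages: the delay starts at the
// initial backoff and doubles after each failed attempt.
fn backoff_schedule(initial_ms: u64, retries: u32) -> Vec<u64> {
    (0..retries).map(|i| initial_ms << i).collect()
}

fn main() {
    // With INITIAL_BACKOFF_MS = 1000 and MAX_RETRIES = 3: 1s, 2s, 4s.
    assert_eq!(backoff_schedule(1000, 3), vec![1000, 2000, 4000]);
}
```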
+
+#[test]
+fn test_parse_link_header_complex() {
+    let header = r#"<https://api.github.com/organizations/12345/repos?page=2&per_page=100>; rel="next", <https://api.github.com/organizations/12345/repos?page=3&per_page=100>; rel="last""#;
+    let next = parse_link_header(header);
+    assert_eq!(
+        next,
+        Some("https://api.github.com/organizations/12345/repos?page=2&per_page=100".to_string())
+    );
+}
+
+#[test]
+fn test_format_reset_time_future() {
+    let future = (chrono::Utc::now() + chrono::Duration::minutes(5)).timestamp();
+    let result = format_reset_time(&future.to_string());
+    assert!(result.contains("UTC"));
+    assert!(result.contains("resets in"));
+}
+
+#[test]
+fn test_format_reset_time_invalid() {
+    assert_eq!(format_reset_time("unknown"), "unknown");
+}
+
+#[test]
+fn test_format_reset_time_empty() {
+    assert_eq!(format_reset_time(""), "");
+}
diff --git a/src/provider/mock.rs b/src/provider/mock.rs
new file mode 100644
index 0000000..18478eb
--- /dev/null
+++ b/src/provider/mock.rs
@@ -0,0 +1,246 @@
+//! Mock provider for testing.
+//!
+//! This module provides a configurable mock implementation of the [`Provider`]
+//! trait for use in unit tests.
+
+use async_trait::async_trait;
+use std::collections::HashMap;
+use std::sync::{Arc, Mutex};
+
+use super::traits::*;
+use crate::errors::ProviderError;
+use crate::types::{Org, OwnedRepo, ProviderKind, Repo};
+
+/// A mock provider that can be configured with predefined responses.
+pub struct MockProvider {
+    /// The provider kind to report
+    pub kind: ProviderKind,
+    /// Display name
+    pub display_name: String,
+    /// The username to return
+    pub username: String,
+    /// Organizations to return
+    pub orgs: Vec<Org>,
+    /// Repos per organization
+    pub org_repos: HashMap<String, Vec<Repo>>,
+    /// Personal repos
+    pub user_repos: Vec<Repo>,
+    /// Rate limit info to return
+    pub rate_limit: RateLimitInfo,
+    /// Track method calls for assertions
+    pub call_log: Arc<Mutex<Vec<String>>>,
+    /// Should auth validation fail?
+    pub should_fail_auth: bool,
+    /// Should org fetching fail?
+    pub should_fail_orgs: bool,
+}
+
+impl Default for MockProvider {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl MockProvider {
+    /// Creates a new mock provider with defaults.
+    pub fn new() -> Self {
+        Self {
+            kind: ProviderKind::GitHub,
+            display_name: "Mock GitHub".to_string(),
+            username: "testuser".to_string(),
+            orgs: vec![],
+            org_repos: HashMap::new(),
+            user_repos: vec![],
+            rate_limit: RateLimitInfo {
+                limit: 5000,
+                remaining: 5000,
+                reset_at: None,
+            },
+            call_log: Arc::new(Mutex::new(vec![])),
+            should_fail_auth: false,
+            should_fail_orgs: false,
+        }
+    }
+
+    /// Sets the username.
+    pub fn with_username(mut self, username: impl Into<String>) -> Self {
+        self.username = username.into();
+        self
+    }
+
+    /// Sets the organizations.
+    pub fn with_orgs(mut self, orgs: Vec<Org>) -> Self {
+        self.orgs = orgs;
+        self
+    }
+
+    /// Sets repos for an organization.
+    pub fn with_org_repos(mut self, org: impl Into<String>, repos: Vec<Repo>) -> Self {
+        self.org_repos.insert(org.into(), repos);
+        self
+    }
+
+    /// Sets personal repos.
+    pub fn with_user_repos(mut self, repos: Vec<Repo>) -> Self {
+        self.user_repos = repos;
+        self
+    }
+
+    /// Makes auth validation fail.
+    pub fn with_auth_failure(mut self) -> Self {
+        self.should_fail_auth = true;
+        self
+    }
+
+    /// Makes org fetching fail.
+    pub fn with_orgs_failure(mut self) -> Self {
+        self.should_fail_orgs = true;
+        self
+    }
+
+    /// Records a method call.
+    fn log_call(&self, method: &str) {
+        let mut log = self.call_log.lock().unwrap();
+        log.push(method.to_string());
+    }
+
+    /// Returns all recorded method calls.
+    pub fn get_calls(&self) -> Vec<String> {
+        self.call_log.lock().unwrap().clone()
+    }
+
+    /// Clears the call log.
+    pub fn clear_calls(&self) {
+        self.call_log.lock().unwrap().clear();
+    }
+}
+
+#[async_trait]
+impl Provider for MockProvider {
+    fn kind(&self) -> ProviderKind {
+        self.kind
+    }
+
+    fn display_name(&self) -> &str {
+        &self.display_name
+    }
+
+    async fn validate_credentials(&self) -> Result<(), ProviderError> {
+        self.log_call("validate_credentials");
+        if self.should_fail_auth {
+            Err(ProviderError::Authentication(
+                "Mock authentication failure".to_string(),
+            ))
+        } else {
+            Ok(())
+        }
+    }
+
+    async fn get_username(&self) -> Result<String, ProviderError> {
+        self.log_call("get_username");
+        if self.should_fail_auth {
+            Err(ProviderError::Authentication(
+                "Mock authentication failure".to_string(),
+            ))
+        } else {
+            Ok(self.username.clone())
+        }
+    }
+
+    async fn get_organizations(&self) -> Result<Vec<Org>, ProviderError> {
+        self.log_call("get_organizations");
+        if self.should_fail_orgs {
+            Err(ProviderError::Api {
+                status: 500,
+                message: "Mock server error".to_string(),
+            })
+        } else {
+            Ok(self.orgs.clone())
+        }
+    }
+
+    async fn get_org_repos(&self, org: &str) -> Result<Vec<Repo>, ProviderError> {
+        self.log_call(&format!("get_org_repos:{}", org));
+        Ok(self.org_repos.get(org).cloned().unwrap_or_default())
+    }
+
+    async fn get_user_repos(&self) -> Result<Vec<Repo>, ProviderError> {
+        self.log_call("get_user_repos");
+        Ok(self.user_repos.clone())
+    }
+
+    async fn get_rate_limit(&self) -> Result<RateLimitInfo, ProviderError> {
+        self.log_call("get_rate_limit");
+        Ok(self.rate_limit.clone())
+    }
+
+    async fn discover_repos(
+        &self,
+        options: &DiscoveryOptions,
+        progress: &dyn DiscoveryProgress,
+    ) -> Result<Vec<OwnedRepo>, ProviderError> {
+        self.log_call("discover_repos");
+
+        let mut repos = Vec::new();
+
+        // Report orgs
+        let filtered_orgs: Vec<_> = self
+            .orgs
+            .iter()
+            .filter(|o| options.should_include_org(&o.login))
+            .collect();
+
+        progress.on_orgs_discovered(filtered_orgs.len());
+
+        // Fetch org repos
+        for org in filtered_orgs {
+            progress.on_org_started(&org.login);
+            let mut org_count = 0usize;
+            if let Some(org_repos) = self.org_repos.get(&org.login) {
+                let filtered: Vec<_> = org_repos
+                    .iter()
+                    .filter(|r| options.should_include(r))
+                    .collect();
+
+                for repo in filtered {
+                    repos.push(OwnedRepo::new(&org.login, repo.clone()));
+                    org_count += 1;
+                }
+            }
+
+            progress.on_org_complete(&org.login, org_count);
+        }
+
+        // Fetch personal repos
+        progress.on_personal_repos_started();
+
+        let personal_filtered: Vec<_> = self
+            .user_repos
+            .iter()
+            .filter(|r| options.should_include(r))
+            .filter(|r| !repos.iter().any(|or| or.repo.id == r.id))
+            .collect();
+
+        let personal_count = personal_filtered.len();
+        for repo in personal_filtered {
+            repos.push(OwnedRepo::new(&self.username, repo.clone()));
+        }
+
+        progress.on_personal_repos_complete(personal_count);
+
+        Ok(repos)
+    }
+
+    fn get_clone_url(&self, repo: &Repo, prefer_ssh: bool) -> String {
+        if prefer_ssh {
+            repo.ssh_url.clone()
+        } else {
+            repo.clone_url.clone()
+        }
+    }
+}
+
+#[cfg(test)]
+#[path = "mock_tests.rs"]
+mod tests;
diff --git a/src/provider/mock_tests.rs b/src/provider/mock_tests.rs
new file mode 100644
index 0000000..a7b37be
--- /dev/null
+++ b/src/provider/mock_tests.rs
@@ -0,0 +1,105 @@
+use super::*;
+
+fn test_org(name: &str) -> Org {
+    Org::new(name, 1)
+}
+
+#[tokio::test]
+async fn test_mock_provider_username() {
+    let provider = MockProvider::new().with_username("octocat");
+    let username = provider.get_username().await.unwrap();
+    assert_eq!(username, "octocat");
+}
+
+#[tokio::test]
+async fn test_mock_provider_orgs() {
+    let provider = MockProvider::new().with_orgs(vec![test_org("org1"), test_org("org2")]);
+
+    let orgs = provider.get_organizations().await.unwrap();
+    assert_eq!(orgs.len(), 2);
+    assert_eq!(orgs[0].login, "org1");
+    assert_eq!(orgs[1].login, "org2");
+}
+
+#[tokio::test]
+async fn test_mock_provider_auth_failure() {
+    let provider = MockProvider::new().with_auth_failure();
+
+    let result = provider.validate_credentials().await;
+    assert!(result.is_err());
+    assert!(matches!(
+        result.unwrap_err(),
+        ProviderError::Authentication(_)
+    ));
+}
+
+#[tokio::test]
+async fn test_mock_provider_orgs_failure() {
+    let provider = MockProvider::new().with_orgs_failure();
+
+    let result = provider.get_organizations().await;
+    assert!(result.is_err());
+}
+
+#[tokio::test]
+async fn test_mock_provider_call_logging() {
+    let provider = MockProvider::new();
+
+    provider.get_username().await.unwrap();
+    provider.get_organizations().await.unwrap();
+    provider.get_org_repos("test-org").await.unwrap();
+
+    let calls = provider.get_calls();
+    assert_eq!(calls.len(), 3);
+    assert_eq!(calls[0], "get_username");
+    assert_eq!(calls[1], "get_organizations");
+    assert_eq!(calls[2], "get_org_repos:test-org");
+}
+
+#[tokio::test]
+async fn test_mock_provider_discovery() {
+    let provider = MockProvider::new()
+        .with_username("testuser")
+        .with_orgs(vec![test_org("my-org")])
+        .with_org_repos("my-org", vec![Repo::test("repo1", "my-org")])
+        .with_user_repos(vec![Repo::test("personal", "testuser")]);
+
+    let options = DiscoveryOptions::new();
+    let progress = NoProgress;
+
+    let repos = provider.discover_repos(&options, &progress).await.unwrap();
+    assert_eq!(repos.len(), 2);
+}
+
+#[tokio::test]
+async fn test_mock_provider_discovery_with_filters() {
+    let mut archived_repo = Repo::test("archived", "my-org");
+    archived_repo.archived = true;
+
+    let provider = MockProvider::new()
+        .with_username("testuser")
+        .with_orgs(vec![test_org("my-org")])
+        .with_org_repos(
+            "my-org",
+            vec![Repo::test("active", "my-org"), archived_repo],
+        );
+
+    let options = DiscoveryOptions::new().with_archived(false);
+    let progress = NoProgress;
+
+    let repos = provider.discover_repos(&options, &progress).await.unwrap();
+    assert_eq!(repos.len(), 1);
+    assert_eq!(repos[0].repo.name, "active");
+}
+
+#[test]
+fn test_clone_url_preference() {
+    let provider = MockProvider::new();
+    let repo = Repo::test("test", "org");
+
+    let ssh_url = provider.get_clone_url(&repo, true);
+    assert!(ssh_url.starts_with("git@"));
+
+    let https_url = provider.get_clone_url(&repo, false);
+    assert!(https_url.starts_with("https://"));
+}
diff --git a/src/provider/mod.rs b/src/provider/mod.rs
new file mode 100644
index 0000000..63731a0
--- /dev/null
+++ b/src/provider/mod.rs
@@ -0,0 +1,64 @@
+//! Git hosting provider implementations.
+//!
+//! This module contains the [`Provider`] trait and implementations for
+//! various Git hosting services:
+//!
+//! - **GitHub** - github.com (active)
+//! - **GitHub Enterprise** - coming soon
+//! - **GitLab** - coming soon
+//! - **GitLab Self-Managed** - coming soon
+//! - **Codeberg** - coming soon
+//! - **Bitbucket** - coming soon
+//!
+//! # Example
+//!
+//! ```no_run
+//! use git_same::provider::{create_provider, DiscoveryOptions, NoProgress};
+//! use git_same::config::ProviderEntry;
+//!
+//! # async fn example() -> Result<(), git_same::errors::AppError> {
+//! let entry = ProviderEntry::github();
+//! let provider = create_provider(&entry, "ghp_token123")?;
+//!
+//! let options = DiscoveryOptions::new();
+//! let progress = NoProgress;
+//! let repos = provider.discover_repos(&options, &progress).await?;
+//! # Ok(())
+//! # }
+//! ```
+
+pub mod github;
+pub mod traits;
+
+#[cfg(test)]
+pub mod mock;
+
+pub use traits::{
+    Credentials, DiscoveryOptions, DiscoveryProgress, NoProgress, Provider, RateLimitInfo,
+};
+
+use crate::config::ProviderEntry;
+use crate::errors::{AppError, ProviderError};
+use crate::types::ProviderKind;
+
+/// Creates a provider instance based on configuration.
+pub fn create_provider(entry: &ProviderEntry, token: &str) -> Result<Box<dyn Provider>, AppError> {
+    let api_url = entry.effective_api_url();
+
+    match entry.kind {
+        ProviderKind::GitHub => {
+            let credentials = Credentials::new(token, api_url);
+            let provider = github::GitHubProvider::new(credentials, entry.display_name())
+                .map_err(AppError::Provider)?;
+            Ok(Box::new(provider))
+        }
+        other => Err(AppError::Provider(ProviderError::NotImplemented(format!(
+            "{} support coming soon",
+            other.display_name()
+        )))),
+    }
+}
+
+#[cfg(test)]
+#[path = "mod_tests.rs"]
+mod tests;
diff --git a/src/provider/mod_tests.rs b/src/provider/mod_tests.rs
new file mode 100644
index 0000000..b13b257
--- /dev/null
+++ b/src/provider/mod_tests.rs
@@ -0,0 +1,35 @@
+use super::*;
+use crate::types::ProviderKind;
+
+#[test]
+fn create_provider_supports_github() {
+    let github = ProviderEntry::github();
+    let provider = create_provider(&github, "ghp_test_token").unwrap();
+    assert_eq!(provider.kind(), ProviderKind::GitHub);
+}
+
+#[test]
+fn create_provider_returns_not_implemented_for_unsupported() {
+    let unsupported = [
+        (ProviderKind::GitHubEnterprise, "GitHub Enterprise"),
+        (ProviderKind::GitLab, "GitLab"),
+        (ProviderKind::GitLabSelfManaged, "GitLab Self-Managed"),
+        (ProviderKind::Codeberg, "Codeberg"),
+        (ProviderKind::Bitbucket, "Bitbucket"),
+    ];
+
+    for (kind, expected_name) in unsupported {
+        let mut entry = ProviderEntry::github();
+        entry.kind = kind;
+
+        match create_provider(&entry, "token") {
+            Ok(_) => panic!("expected {} to be unsupported", expected_name),
+            Err(err) => assert!(
+                err.to_string().contains("coming soon"),
+                "{} error should contain 'coming soon': {}",
+                expected_name,
+                err
+            ),
+        }
+    }
+}
diff --git a/src/provider/traits.rs b/src/provider/traits.rs
new file mode 100644
index 0000000..b4612a4
--- /dev/null
+++ b/src/provider/traits.rs
@@ -0,0 +1,214 @@
+//! Provider trait definitions.
+//!
+//! The [`Provider`] trait defines the interface that all Git hosting
+//! provider implementations must implement.
+
+use async_trait::async_trait;
+
+use crate::errors::ProviderError;
+use crate::types::{Org, OwnedRepo, ProviderKind, Repo};
+
+/// Authentication credentials for a provider.
+#[derive(Debug, Clone)]
+pub struct Credentials {
+    /// The authentication token
+    pub token: String,
+    /// Base URL for API calls
+    pub api_base_url: String,
+    /// The authenticated username (if known)
+    pub username: Option<String>,
+}
+
+impl Credentials {
+    /// Creates new credentials with token and API URL.
+    pub fn new(token: impl Into<String>, api_base_url: impl Into<String>) -> Self {
+        Self {
+            token: token.into(),
+            api_base_url: api_base_url.into(),
+            username: None,
+        }
+    }
+
+    /// Sets the username.
+    pub fn with_username(mut self, username: impl Into<String>) -> Self {
+        self.username = Some(username.into());
+        self
+    }
+}
+
+/// Rate limit information from the provider.
+#[derive(Debug, Clone, Default)]
+pub struct RateLimitInfo {
+    /// Maximum requests allowed per period
+    pub limit: u32,
+    /// Remaining requests in current period
+    pub remaining: u32,
+    /// Unix timestamp when the limit resets
+    pub reset_at: Option<i64>,
+}
+
+impl RateLimitInfo {
+    /// Returns true if the rate limit is exhausted.
+    pub fn is_exhausted(&self) -> bool {
+        self.remaining == 0
+    }
+
+    /// Returns the number of seconds until the rate limit resets.
+    pub fn seconds_until_reset(&self) -> Option<i64> {
+        self.reset_at.map(|reset| {
+            let now = chrono::Utc::now().timestamp();
+            (reset - now).max(0)
+        })
+    }
+}
+
+/// Options for repository discovery.
+#[derive(Debug, Clone, Default)]
+pub struct DiscoveryOptions {
+    /// Include archived repositories
+    pub include_archived: bool,
+    /// Include forked repositories
+    pub include_forks: bool,
+    /// Filter to specific organizations (empty = all)
+    pub org_filter: Vec<String>,
+    /// Exclude specific repos by full name
+    pub exclude_repos: Vec<String>,
+}
+
+impl DiscoveryOptions {
+    /// Creates default discovery options.
+    pub fn new() -> Self {
+        Self::default()
+    }
+
+    /// Include archived repositories.
+    pub fn with_archived(mut self, include: bool) -> Self {
+        self.include_archived = include;
+        self
+    }
+
+    /// Include forked repositories.
+    pub fn with_forks(mut self, include: bool) -> Self {
+        self.include_forks = include;
+        self
+    }
+
+    /// Filter to specific organizations.
+    pub fn with_orgs(mut self, orgs: Vec<String>) -> Self {
+        self.org_filter = orgs;
+        self
+    }
+
+    /// Exclude specific repositories.
+    pub fn with_exclusions(mut self, repos: Vec<String>) -> Self {
+        self.exclude_repos = repos;
+        self
+    }
+
+    /// Check if a repo should be included based on filters.
+    pub fn should_include(&self, repo: &Repo) -> bool {
+        // Check archived filter
+        if !self.include_archived && repo.archived {
+            return false;
+        }
+
+        // Check fork filter
+        if !self.include_forks && repo.fork {
+            return false;
+        }
+
+        // Check exclusion list
+        if self.exclude_repos.contains(&repo.full_name) {
+            return false;
+        }
+
+        true
+    }
+
+    /// Check if an org should be included based on filters.
+    pub fn should_include_org(&self, org: &str) -> bool {
+        if self.org_filter.is_empty() {
+            return true;
+        }
+        self.org_filter.iter().any(|o| o == org)
+    }
+}
+
+/// Callback trait for progress reporting during discovery.
+pub trait DiscoveryProgress: Send + Sync {
+    /// Called when organizations are discovered.
+    fn on_orgs_discovered(&self, count: usize);
+
+    /// Called when starting to fetch repos for an org.
+    fn on_org_started(&self, org_name: &str);
+
+    /// Called when finished fetching repos for an org.
+    fn on_org_complete(&self, org_name: &str, repo_count: usize);
+
+    /// Called when starting to fetch personal repos.
+    fn on_personal_repos_started(&self);
+
+    /// Called when finished fetching personal repos.
+    fn on_personal_repos_complete(&self, count: usize);
+
+    /// Called on any error during discovery (non-fatal).
+    fn on_error(&self, message: &str);
+}
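The `should_include` checks above reduce to three independent predicates. A standalone sketch of the same decision — the free function `keep` is hypothetical and not part of the crate:

```rust
// Sketch of the DiscoveryOptions::should_include decision: a repo survives
// only if it passes the archived filter, the fork filter, and is not excluded.
fn keep(
    archived: bool,
    fork: bool,
    full_name: &str,
    include_archived: bool,
    include_forks: bool,
    excluded: &[&str],
) -> bool {
    if archived && !include_archived {
        return false;
    }
    if fork && !include_forks {
        return false;
    }
    !excluded.contains(&full_name)
}

fn main() {
    // Plain repo passes default-style filters.
    assert!(keep(false, false, "org/repo", false, false, &["org/skip"]));
    // Archived repo is dropped unless include_archived is set.
    assert!(!keep(true, false, "org/repo", false, false, &[]));
    // Exclusion list matches on the full name.
    assert!(!keep(false, false, "org/skip", false, false, &["org/skip"]));
}
```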
+
+/// A no-op implementation for when progress isn't needed.
+#[derive(Debug, Default)]
+pub struct NoProgress;
+
+impl DiscoveryProgress for NoProgress {
+    fn on_orgs_discovered(&self, _: usize) {}
+    fn on_org_started(&self, _: &str) {}
+    fn on_org_complete(&self, _: &str, _: usize) {}
+    fn on_personal_repos_started(&self) {}
+    fn on_personal_repos_complete(&self, _: usize) {}
+    fn on_error(&self, _: &str) {}
+}
+
+/// The core trait that all providers must implement.
+///
+/// This trait defines the interface for interacting with Git hosting providers
+/// like GitHub, GitLab, and Bitbucket.
+#[async_trait]
+pub trait Provider: Send + Sync {
+    /// Returns the provider kind (GitHub, GitLab, etc.).
+    fn kind(&self) -> ProviderKind;
+
+    /// Returns the display name for this provider instance.
+    fn display_name(&self) -> &str;
+
+    /// Validates that the credentials are valid.
+    async fn validate_credentials(&self) -> Result<(), ProviderError>;
+
+    /// Gets the authenticated user's username.
+    async fn get_username(&self) -> Result<String, ProviderError>;
+
+    /// Fetches all organizations the user belongs to.
+    async fn get_organizations(&self) -> Result<Vec<Org>, ProviderError>;
+
+    /// Fetches all repositories for a specific organization.
+    async fn get_org_repos(&self, org: &str) -> Result<Vec<Repo>, ProviderError>;
+
+    /// Fetches the user's personal repositories (not org repos).
+    async fn get_user_repos(&self) -> Result<Vec<Repo>, ProviderError>;
+
+    /// Returns current rate limit information.
+    async fn get_rate_limit(&self) -> Result<RateLimitInfo, ProviderError>;
+
+    /// High-level discovery that returns all repos with filtering.
+    async fn discover_repos(
+        &self,
+        options: &DiscoveryOptions,
+        progress: &dyn DiscoveryProgress,
+    ) -> Result<Vec<OwnedRepo>, ProviderError>;
+
+    /// Returns the clone URL for a repo (SSH or HTTPS based on preference).
+    fn get_clone_url(&self, repo: &Repo, prefer_ssh: bool) -> String;
+}
+
+#[cfg(test)]
+#[path = "traits_tests.rs"]
+mod tests;
diff --git a/src/provider/traits_tests.rs b/src/provider/traits_tests.rs
new file mode 100644
index 0000000..789ecd9
--- /dev/null
+++ b/src/provider/traits_tests.rs
@@ -0,0 +1,112 @@
+use super::*;
+
+#[test]
+fn test_credentials_builder() {
+    let creds = Credentials::new("token123", "https://api.github.com").with_username("testuser");
+
+    assert_eq!(creds.token, "token123");
+    assert_eq!(creds.api_base_url, "https://api.github.com");
+    assert_eq!(creds.username, Some("testuser".to_string()));
+}
+
+#[test]
+fn test_rate_limit_exhausted() {
+    let info = RateLimitInfo {
+        limit: 5000,
+        remaining: 0,
+        reset_at: None,
+    };
+    assert!(info.is_exhausted());
+
+    let info = RateLimitInfo {
+        limit: 5000,
+        remaining: 100,
+        reset_at: None,
+    };
+    assert!(!info.is_exhausted());
+}
+
+#[test]
+fn test_discovery_options_builder() {
+    let options = DiscoveryOptions::new()
+        .with_archived(true)
+        .with_forks(true)
+        .with_orgs(vec!["org1".to_string(), "org2".to_string()])
+        .with_exclusions(vec!["org1/skip".to_string()]);
+
+    assert!(options.include_archived);
+    assert!(options.include_forks);
+    assert_eq!(options.org_filter.len(), 2);
+    assert_eq!(options.exclude_repos.len(), 1);
+}
+
+#[test]
+fn test_should_include_repo() {
+    let options = DiscoveryOptions::new();
+
+    // Non-archived, non-fork repo should be included
+    let repo = Repo::test("repo", "org");
+    assert!(options.should_include(&repo));
+}
+
+#[test]
+fn test_should_exclude_archived() {
+    let options = DiscoveryOptions::new().with_archived(false);
+
+    let mut repo = Repo::test("repo", "org");
+    repo.archived = true;
+    assert!(!options.should_include(&repo));
+
+    let options = DiscoveryOptions::new().with_archived(true);
+    assert!(options.should_include(&repo));
+}
+
+#[test]
+fn test_should_exclude_forks() {
+    let options = DiscoveryOptions::new().with_forks(false);
+
+    let mut repo = Repo::test("repo", "org");
+    repo.fork = true;
+    assert!(!options.should_include(&repo));
+
+    let options = DiscoveryOptions::new().with_forks(true);
+    assert!(options.should_include(&repo));
+}
+
+#[test]
+fn test_should_exclude_by_name() {
+    let options = DiscoveryOptions::new().with_exclusions(vec!["org/excluded-repo".to_string()]);
+
+    let mut repo = Repo::test("excluded-repo", "org");
+    repo.full_name = "org/excluded-repo".to_string();
+    assert!(!options.should_include(&repo));
+
+    let mut repo = Repo::test("included-repo", "org");
+    repo.full_name = "org/included-repo".to_string();
+    assert!(options.should_include(&repo));
+}
+
+#[test]
+fn test_should_include_org_empty_filter() {
+    let options = DiscoveryOptions::new();
+    assert!(options.should_include_org("any-org"));
+}
+
+#[test]
+fn test_should_include_org_with_filter() {
+    let options = DiscoveryOptions::new().with_orgs(vec!["allowed-org".to_string()]);
+
+    assert!(options.should_include_org("allowed-org"));
+    assert!(!options.should_include_org("other-org"));
+}
+
+#[test]
+fn test_no_progress_compiles() {
+    let progress = NoProgress;
+    progress.on_orgs_discovered(5);
+    progress.on_org_started("test");
+    progress.on_org_complete("test", 10);
+    progress.on_personal_repos_started();
+    progress.on_personal_repos_complete(3);
+    progress.on_error("test error");
+}
diff --git a/src/setup/handler.rs b/src/setup/handler.rs
new file mode 100644
index 0000000..31426fb
--- /dev/null
+++ b/src/setup/handler.rs
@@ -0,0 +1,855 @@
+//! Setup wizard event handling.
+
+use super::state::{
+    tilde_collapse, AuthStatus, OrgEntry, PathBrowseEntry, SetupOutcome, SetupState, SetupStep,
+};
+use crate::auth::{get_auth_for_provider, gh_cli};
+use crate::config::{WorkspaceConfig, WorkspaceManager};
+use crate::provider::{create_provider, Credentials};
+use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};
+
+/// Handle a key event in the setup wizard.
+/// +/// State changes and quit requests are signalled through `state` (`state.should_quit`, `state.outcome`) rather than a return value. +pub async fn handle_key(state: &mut SetupState, key: KeyEvent) { + // Global quit shortcuts + if key.modifiers.contains(KeyModifiers::CONTROL) && key.code == KeyCode::Char('c') { + state.outcome = Some(SetupOutcome::Cancelled); + state.should_quit = true; + return; + } + let path_popup_active = state.step == SetupStep::SelectPath && state.path_browse_mode; + if path_popup_active && key.modifiers == KeyModifiers::NONE { + match key.code { + KeyCode::Up + | KeyCode::Down + | KeyCode::Left + | KeyCode::Right + | KeyCode::Enter + | KeyCode::Esc => { + handle_path(state, key); + return; + } + _ => {} + } + } + if key.modifiers == KeyModifiers::NONE + && key.code == KeyCode::Char('q') + && !matches!(state.step, SetupStep::SelectPath) + { + state.outcome = Some(SetupOutcome::Cancelled); + state.should_quit = true; + return; + } + if !path_popup_active + && state.step != SetupStep::SelectPath + && key.modifiers == KeyModifiers::NONE + && key.code == KeyCode::Esc + { + state.outcome = Some(SetupOutcome::Cancelled); + state.should_quit = true; + return; + } + if !path_popup_active + && state.step != SetupStep::SelectPath + && key.modifiers == KeyModifiers::NONE + { + match key.code { + KeyCode::Left => { + state.prev_step(); + return; + } + KeyCode::Right => { + handle_step_forward(state).await; + return; + } + _ => {} + } + } + + match state.step { + SetupStep::Welcome => handle_welcome(state, key), + SetupStep::SelectProvider => handle_provider(state, key), + SetupStep::Authenticate => handle_auth(state, key).await, + SetupStep::SelectPath => handle_path(state, key), + SetupStep::SelectOrgs => handle_orgs(state, key).await, + SetupStep::Confirm => handle_confirm(state, key), + SetupStep::Complete => handle_complete(state, key), + } +} + +async fn handle_step_forward(state: &mut SetupState) { + match state.step { + SetupStep::Welcome => { + state.next_step(); + } + 
SetupStep::SelectProvider => { + if state.provider_choices[state.provider_index].available { + state.auth_status = AuthStatus::Pending; + state.next_step(); + } + } + SetupStep::Authenticate => match state.auth_status.clone() { + AuthStatus::Pending | AuthStatus::Failed(_) => { + state.auth_status = AuthStatus::Checking; + do_authenticate(state).await; + } + AuthStatus::Success => { + state.next_step(); + } + AuthStatus::Checking => {} + }, + SetupStep::SelectOrgs => { + if state.org_loading { + do_discover_orgs(state).await; + } else if state.org_error.is_some() { + state.org_loading = true; + state.org_error = None; + } else { + state.next_step(); + } + } + SetupStep::SelectPath => { + if state.path_browse_mode { + if !state.path_browse_current_dir.is_empty() { + state.base_path = state.path_browse_current_dir.clone(); + state.path_cursor = state.base_path.len(); + } + close_path_browse_to_input(state); + } else if state.path_suggestions_mode { + if let Some(s) = state.path_suggestions.get(state.path_suggestion_index) { + state.base_path = s.path.clone(); + state.path_cursor = state.base_path.len(); + } + } + confirm_path(state); + } + SetupStep::Confirm => match save_workspace(state) { + Ok(()) => { + state.next_step(); + } + Err(e) => { + state.error_message = Some(e.to_string()); + } + }, + SetupStep::Complete => { + state.next_step(); + } + } +} + +fn handle_welcome(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Enter => { + state.next_step(); + } + KeyCode::Esc => { + state.prev_step(); + } + _ => {} + } +} + +fn handle_provider(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Up => { + if state.provider_index > 0 { + state.provider_index -= 1; + } + } + KeyCode::Down => { + if state.provider_index + 1 < state.provider_choices.len() { + state.provider_index += 1; + } + } + KeyCode::Enter => { + if state.provider_choices[state.provider_index].available { + state.auth_status = AuthStatus::Pending; + 
state.next_step(); + } + } + KeyCode::Esc => { + state.prev_step(); + } + _ => {} + } +} + +async fn handle_auth(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Enter => { + match &state.auth_status { + AuthStatus::Pending | AuthStatus::Failed(_) => { + // Attempt authentication + state.auth_status = AuthStatus::Checking; + do_authenticate(state).await; + } + AuthStatus::Success => { + state.next_step(); + } + AuthStatus::Checking => {} + } + } + KeyCode::Esc => { + state.prev_step(); + } + _ => {} + } +} + +async fn do_authenticate(state: &mut SetupState) { + let provider_entry = state.build_workspace_provider().to_provider_entry(); + match get_auth_for_provider(&provider_entry) { + Ok(auth) => { + let username = auth.username.or_else(|| gh_cli::get_username().ok()); + state.username = username; + state.auth_token = Some(auth.token); + state.auth_status = AuthStatus::Success; + } + Err(e) => { + state.auth_status = AuthStatus::Failed(e.to_string()); + } + } +} + +fn handle_path(state: &mut SetupState, key: KeyEvent) { + if state.path_browse_mode { + handle_path_browse(state, key); + } else if state.path_suggestions_mode { + handle_path_suggestions(state, key); + } else { + handle_path_input(state, key); + } +} + +fn confirm_path(state: &mut SetupState) { + if state.base_path.is_empty() { + state.error_message = Some("Base path cannot be empty".to_string()); + } else { + state.error_message = None; + state.next_step(); + } +} + +fn open_path_browse_mode(state: &mut SetupState) { + let dir = resolve_browse_root(); + state.path_browse_info = None; + set_browse_root(state, dir); + state.path_suggestions_mode = false; + state.path_browse_mode = true; +} + +fn resolve_browse_root() -> std::path::PathBuf { + std::env::current_dir() + .or_else(|_| std::env::var("HOME").map(std::path::PathBuf::from)) + .unwrap_or_else(|_| std::path::PathBuf::from("/")) +} + +fn set_browse_root(state: &mut SetupState, dir: std::path::PathBuf) { + let root_path = 
tilde_collapse(&dir.to_string_lossy()); + let (children, browse_error) = read_child_directories(&dir, 1); + let root = PathBrowseEntry { + label: browse_label_for_path(&dir), + path: root_path.clone(), + depth: 0, + expanded: true, + has_children: !children.is_empty(), + }; + + let mut entries = Vec::with_capacity(children.len() + 1); + entries.push(root); + entries.extend(children); + + state.path_browse_current_dir = root_path; + state.path_browse_entries = entries; + state.path_browse_error = browse_error; + state.path_browse_index = 0; +} + +fn browse_label_for_path(path: &std::path::Path) -> String { + if path.parent().is_none() { + "/".to_string() + } else { + let name = path + .file_name() + .map(|part| part.to_string_lossy().to_string()) + .unwrap_or_else(|| path.to_string_lossy().to_string()); + format!("{name}/") + } +} + +fn has_visible_child_directory(dir: &std::path::Path) -> bool { + match std::fs::read_dir(dir) { + Ok(entries) => entries.flatten().any(|entry| { + let path = entry.path(); + if !path.is_dir() { + return false; + } + let name = entry.file_name().to_string_lossy().to_string(); + !name.starts_with('.') + }), + Err(_) => false, + } +} + +fn read_child_directories( + dir: &std::path::Path, + depth: u16, +) -> (Vec<PathBrowseEntry>, Option<String>) { + let mut children = Vec::new(); + let mut browse_error = None; + + match std::fs::read_dir(dir) { + Ok(dir_entries) => { + for entry_result in dir_entries { + match entry_result { + Ok(entry) => { + let path = entry.path(); + if !path.is_dir() { + continue; + } + let name = entry.file_name().to_string_lossy().to_string(); + if name.starts_with('.') { + continue; + } + children.push(PathBrowseEntry { + label: format!("{name}/"), + path: tilde_collapse(&path.to_string_lossy()), + depth, + expanded: false, + has_children: has_visible_child_directory(&path), + }); + } + Err(e) => { + if browse_error.is_none() { + browse_error = Some(format!("Some entries could not be read: {e}")); + } + } + } + } + } + Err(e) => { + 
browse_error = Some(format!( + "Cannot read '{}': {e}", + tilde_collapse(&dir.to_string_lossy()) + )); + } + } + + children.sort_by_key(|entry| entry.label.to_lowercase()); + (children, browse_error) +} + +fn close_path_browse_to_input(state: &mut SetupState) { + state.path_browse_mode = false; + state.path_suggestions_mode = false; + state.path_browse_error = None; + state.path_browse_info = None; + state.path_cursor = state.base_path.len(); + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; +} + +fn sync_browse_current_dir(state: &mut SetupState) { + if let Some(entry) = state.path_browse_entries.get(state.path_browse_index) { + state.path_browse_current_dir = entry.path.clone(); + } +} + +fn selected_browse_dir(state: &SetupState) -> Option<std::path::PathBuf> { + state + .path_browse_entries + .get(state.path_browse_index) + .map(|entry| std::path::PathBuf::from(shellexpand::tilde(&entry.path).as_ref())) +} + +fn collapse_selected_entry(state: &mut SetupState) { + let Some(entry) = state + .path_browse_entries + .get(state.path_browse_index) + .cloned() + else { + return; + }; + if !entry.expanded { + return; + } + let start = state.path_browse_index + 1; + let mut end = start; + while end < state.path_browse_entries.len() + && state.path_browse_entries[end].depth > entry.depth + { + end += 1; + } + if start < end { + state.path_browse_entries.drain(start..end); + } + if let Some(selected) = state.path_browse_entries.get_mut(state.path_browse_index) { + selected.expanded = false; + } +} + +fn expand_selected_entry(state: &mut SetupState) { + let index = state.path_browse_index; + let Some(dir) = selected_browse_dir(state) else { + return; + }; + let Some(selected) = state.path_browse_entries.get(index) else { + return; + }; + let depth = selected.depth; + + let (children, browse_error) = read_child_directories(&dir, depth + 1); + state.path_browse_error = browse_error; + if children.is_empty() { + if let Some(entry) = 
state.path_browse_entries.get_mut(index) { + entry.has_children = false; + entry.expanded = false; + } + return; + } + + if let Some(entry) = state.path_browse_entries.get_mut(index) { + entry.expanded = true; + entry.has_children = true; + } + state + .path_browse_entries + .splice(index + 1..index + 1, children); +} + +fn open_selected_browse_entry(state: &mut SetupState) { + let Some(selected) = state + .path_browse_entries + .get(state.path_browse_index) + .cloned() + else { + return; + }; + if !selected.has_children { + return; + } + if selected.expanded { + let child_index = state.path_browse_index + 1; + if child_index < state.path_browse_entries.len() + && state.path_browse_entries[child_index].depth == selected.depth + 1 + { + state.path_browse_index = child_index; + } + } else { + expand_selected_entry(state); + } + sync_browse_current_dir(state); +} + +fn move_to_parent_or_collapse_selected_entry(state: &mut SetupState) { + let Some(selected) = state + .path_browse_entries + .get(state.path_browse_index) + .cloned() + else { + return; + }; + if selected.expanded { + collapse_selected_entry(state); + sync_browse_current_dir(state); + return; + } + if selected.depth == 0 { + return; + } + for idx in (0..state.path_browse_index).rev() { + if state.path_browse_entries[idx].depth + 1 == selected.depth { + state.path_browse_index = idx; + sync_browse_current_dir(state); + return; + } + } +} + +fn select_current_browse_folder(state: &mut SetupState) { + if let Some(entry) = state.path_browse_entries.get(state.path_browse_index) { + state.base_path = entry.path.clone(); + state.path_cursor = state.base_path.len(); + } + close_path_browse_to_input(state); +} + +fn handle_path_browse(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Up => { + if state.path_browse_index > 0 { + state.path_browse_index -= 1; + sync_browse_current_dir(state); + } + } + KeyCode::Down => { + if state.path_browse_index + 1 < state.path_browse_entries.len() { + 
state.path_browse_index += 1; + sync_browse_current_dir(state); + } + } + KeyCode::Right => { + open_selected_browse_entry(state); + } + KeyCode::Left => { + move_to_parent_or_collapse_selected_entry(state); + } + KeyCode::Enter => { + select_current_browse_folder(state); + } + KeyCode::Esc => { + close_path_browse_to_input(state); + } + _ => {} + } +} + +fn handle_path_suggestions(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Up => { + if state.path_suggestion_index > 0 { + state.path_suggestion_index -= 1; + } + } + KeyCode::Down => { + if state.path_suggestion_index + 1 < state.path_suggestions.len() { + state.path_suggestion_index += 1; + } + } + KeyCode::Enter => { + if let Some(s) = state.path_suggestions.get(state.path_suggestion_index) { + state.base_path = s.path.clone(); + state.path_cursor = state.base_path.len(); + } + confirm_path(state); + } + KeyCode::Tab => { + if let Some(s) = state.path_suggestions.get(state.path_suggestion_index) { + state.base_path = s.path.clone(); + state.path_cursor = state.base_path.len(); + } + state.path_suggestions_mode = false; + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } + KeyCode::Char('b') => { + open_path_browse_mode(state); + } + KeyCode::Esc => { + state.prev_step(); + } + KeyCode::Backspace => { + state.path_suggestions_mode = false; + if state.path_cursor > 0 { + state.path_cursor -= 1; + state.base_path.remove(state.path_cursor); + } + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } + KeyCode::Char(c) => { + state.path_suggestions_mode = false; + state.base_path.clear(); + state.base_path.push(c); + state.path_cursor = 1; + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } + _ => {} + } +} + +fn handle_path_input(state: &mut SetupState, key: KeyEvent) { + if key.modifiers.contains(KeyModifiers::CONTROL) && key.code == 
KeyCode::Char('b') { + open_path_browse_mode(state); + return; + } + + match key.code { + KeyCode::Tab => { + apply_tab_completion(state); + } + KeyCode::Enter => { + confirm_path(state); + } + KeyCode::Esc => { + state.prev_step(); + } + KeyCode::Backspace => { + if state.path_cursor > 0 { + state.path_cursor -= 1; + state.base_path.remove(state.path_cursor); + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } + } + KeyCode::Delete => { + if state.path_cursor < state.base_path.len() { + state.base_path.remove(state.path_cursor); + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } + } + KeyCode::Left => { + if state.path_cursor > 0 { + state.path_cursor -= 1; + } + } + KeyCode::Right => { + if state.path_cursor < state.base_path.len() { + state.path_cursor += 1; + } + } + KeyCode::Home => { + state.path_cursor = 0; + } + KeyCode::End => { + state.path_cursor = state.base_path.len(); + } + KeyCode::Char(c) => { + state.base_path.insert(state.path_cursor, c); + state.path_cursor += 1; + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } + _ => {} + } +} + +/// Compute directory completions for the current input path. 
+fn compute_completions(input: &str) -> Vec<String> { + if input.is_empty() { + return Vec::new(); + } + let expanded = shellexpand::tilde(input); + let path = std::path::Path::new(expanded.as_ref()); + + let (parent, prefix) = if expanded.ends_with('/') { + (path.to_path_buf(), String::new()) + } else { + let parent = path + .parent() + .filter(|p| !p.as_os_str().is_empty()) + .map(std::path::Path::to_path_buf) + .unwrap_or_else(|| { + std::env::current_dir().unwrap_or_else(|_| std::path::PathBuf::from(".")) + }); + let prefix = path + .file_name() + .map(|f| f.to_string_lossy().to_string()) + .unwrap_or_default(); + (parent, prefix) + }; + + let mut results = Vec::new(); + if let Ok(entries) = std::fs::read_dir(&parent) { + for entry in entries.flatten() { + if !entry.path().is_dir() { + continue; + } + let name = entry.file_name().to_string_lossy().to_string(); + if name.starts_with('.') { + continue; + } + if prefix.is_empty() || name.starts_with(&prefix) { + let full = parent.join(&name); + let display = tilde_collapse(&full.to_string_lossy()); + results.push(format!("{}/", display)); + } + } + } + results.sort(); + results +} + +fn apply_tab_completion(state: &mut SetupState) { + if state.path_completions.is_empty() { + return; + } + if state.path_completions.len() == 1 { + state.base_path = state.path_completions[0].clone(); + state.path_cursor = state.base_path.len(); + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } else { + let common = longest_common_prefix(&state.path_completions); + if common.len() > state.base_path.len() { + state.base_path = common; + state.path_cursor = state.base_path.len(); + state.path_completions = compute_completions(&state.base_path); + state.path_completion_index = 0; + } else { + // Already at common prefix, cycle through completions + state.base_path = state.path_completions[state.path_completion_index].clone(); + state.path_cursor = state.base_path.len(); + 
state.path_completion_index = + (state.path_completion_index + 1) % state.path_completions.len(); + } + } +} + +fn longest_common_prefix(strings: &[String]) -> String { + if strings.is_empty() { + return String::new(); + } + let mut prefix: Vec<char> = strings[0].chars().collect(); + for s in &strings[1..] { + let mut matched = 0usize; + for (a, b) in prefix.iter().copied().zip(s.chars()) { + if a != b { + break; + } + matched += 1; + } + prefix.truncate(matched); + if prefix.is_empty() { + break; + } + } + prefix.into_iter().collect() +} + +async fn handle_orgs(state: &mut SetupState, key: KeyEvent) { + if state.org_loading { + // Trigger org discovery + do_discover_orgs(state).await; + return; + } + + match key.code { + KeyCode::Up => { + if state.org_index > 0 { + state.org_index -= 1; + } + } + KeyCode::Down => { + if state.org_index + 1 < state.orgs.len() { + state.org_index += 1; + } + } + KeyCode::Char(' ') => { + if !state.orgs.is_empty() { + state.orgs[state.org_index].selected = !state.orgs[state.org_index].selected; + } + } + KeyCode::Char('a') => { + for org in &mut state.orgs { + org.selected = true; + } + } + KeyCode::Char('n') => { + for org in &mut state.orgs { + org.selected = false; + } + } + KeyCode::Enter => { + if state.org_error.is_some() { + // Retry + state.org_loading = true; + state.org_error = None; + } else { + state.next_step(); + } + } + KeyCode::Esc => { + state.prev_step(); + } + _ => {} + } +} + +async fn do_discover_orgs(state: &mut SetupState) { + let Some(ref token) = state.auth_token else { + state.org_error = Some("Not authenticated".to_string()); + state.org_loading = false; + return; + }; + + let provider_entry = state.build_workspace_provider().to_provider_entry(); + let api_url = provider_entry.effective_api_url(); + + let credentials = Credentials { + token: token.clone(), + api_base_url: api_url, + username: state.username.clone(), + }; + + match create_provider(&provider_entry, &credentials.token) { + Ok(provider) => match 
provider.get_organizations().await { + Ok(orgs) => { + let mut org_entries: Vec<OrgEntry> = Vec::new(); + for org in &orgs { + let repo_count = provider + .get_org_repos(&org.login) + .await + .map(|r| r.len()) + .unwrap_or(0); + org_entries.push(OrgEntry { + name: org.login.clone(), + repo_count, + selected: true, + }); + } + org_entries.sort_by(|a, b| a.name.cmp(&b.name)); + state.orgs = org_entries; + state.org_index = 0; + state.org_loading = false; + } + Err(e) => { + state.org_error = Some(e.to_string()); + state.org_loading = false; + } + }, + Err(e) => { + state.org_error = Some(e.to_string()); + state.org_loading = false; + } + } +} + +fn handle_confirm(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Enter => { + // Save workspace config and advance to Complete screen + match save_workspace(state) { + Ok(()) => { + state.next_step(); + } + Err(e) => { + state.error_message = Some(e.to_string()); + } + } + } + KeyCode::Esc => { + state.prev_step(); + } + _ => {} + } +} + +fn handle_complete(state: &mut SetupState, key: KeyEvent) { + match key.code { + KeyCode::Enter | KeyCode::Char('s') => { + state.next_step(); // Triggers Completed + should_quit + } + KeyCode::Esc => { + state.prev_step(); + } + _ => {} + } +} + +fn save_workspace(state: &SetupState) -> Result<(), crate::errors::AppError> { + let mut ws = WorkspaceConfig::new(&state.workspace_name, &state.base_path); + ws.provider = state.build_workspace_provider(); + ws.username = state.username.clone().unwrap_or_default(); + ws.orgs = state.selected_orgs(); + + WorkspaceManager::save(&ws)?; + Ok(()) +} + +#[cfg(test)] +#[path = "handler_tests.rs"] +mod tests; diff --git a/src/setup/handler_tests.rs b/src/setup/handler_tests.rs new file mode 100644 index 0000000..cac8407 --- /dev/null +++ b/src/setup/handler_tests.rs @@ -0,0 +1,358 @@ +use super::*; +use crate::setup::state::SetupStep; + +fn cwd_collapsed() -> String { + 
super::tilde_collapse(&std::env::current_dir().unwrap().to_string_lossy()) +} + +fn tempdir_in_cwd(prefix: &str) -> tempfile::TempDir { + let cwd = std::env::current_dir().unwrap(); + tempfile::Builder::new() + .prefix(prefix) + .tempdir_in(cwd) + .unwrap() +} + +fn find_entry_index(state: &SetupState, path: &std::path::Path) -> usize { + let wanted = super::tilde_collapse(&path.to_string_lossy()); + state + .path_browse_entries + .iter() + .position(|entry| entry.path == wanted) + .expect("expected path to be listed in popup tree") +} + +#[tokio::test] +async fn q_quits_setup_wizard() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('q'), KeyModifiers::NONE), + ) + .await; + + assert!(state.should_quit); + assert!(matches!(state.outcome, Some(SetupOutcome::Cancelled))); +} + +#[tokio::test] +async fn esc_cancels_setup_from_any_step() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.step = SetupStep::SelectOrgs; + state.org_loading = false; + + handle_key(&mut state, KeyEvent::new(KeyCode::Esc, KeyModifiers::NONE)).await; + + assert!(state.should_quit); + assert!(matches!(state.outcome, Some(SetupOutcome::Cancelled))); +} + +#[tokio::test] +async fn left_moves_to_previous_step() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.step = SetupStep::SelectOrgs; + state.org_loading = false; + + handle_key(&mut state, KeyEvent::new(KeyCode::Left, KeyModifiers::NONE)).await; + + assert_eq!(state.step, SetupStep::Authenticate); +} + +#[tokio::test] +async fn right_advances_from_provider_step() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + assert_eq!(state.step, SetupStep::SelectProvider); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + + assert_eq!(state.step, SetupStep::Authenticate); +} + +#[tokio::test] +async fn b_opens_path_browser_from_suggestions_mode() { + let temp = 
tempdir_in_cwd("gisa-path-browse-"); + std::fs::create_dir_all(temp.path().join("child")).unwrap(); + + let mut state = SetupState::new(&temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.populate_path_suggestions(); + state.base_path = temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('b'), KeyModifiers::NONE), + ) + .await; + + assert!(state.path_browse_mode); + assert_eq!(state.step, SetupStep::SelectPath); + assert_eq!(state.path_browse_index, 0); + assert_eq!(state.path_browse_current_dir, cwd_collapsed()); + assert_eq!(state.path_browse_entries[0].depth, 0); + assert!(state.path_browse_entries.iter().any(|entry| entry.path + == super::tilde_collapse(&temp.path().to_string_lossy()) + && entry.depth == 1)); +} + +#[tokio::test] +async fn right_in_path_browse_mode_navigates_tree_without_advancing_step() { + let temp = tempdir_in_cwd("gisa-path-nav-"); + let alpha = temp.path().join("alpha"); + std::fs::create_dir_all(&alpha).unwrap(); + let expected = super::tilde_collapse(&alpha.to_string_lossy()); + + let mut state = SetupState::new(&temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.path_suggestions_mode = false; + state.base_path = temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('b'), KeyModifiers::CONTROL), + ) + .await; + assert!(state.path_browse_mode); + state.path_browse_index = find_entry_index(&state, temp.path()); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + assert!(state.path_browse_entries[state.path_browse_index].expanded); + + assert_eq!(state.step, SetupStep::SelectPath); + assert!(state.path_browse_mode); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + assert_eq!(state.path_browse_current_dir, 
expected); +} + +#[tokio::test] +async fn enter_in_browse_mode_sets_path_and_closes_popup() { + let temp = tempdir_in_cwd("gisa-path-enter-"); + let alpha = temp.path().join("alpha"); + std::fs::create_dir_all(&alpha).unwrap(); + let expected = super::tilde_collapse(&alpha.to_string_lossy()); + + let mut state = SetupState::new(&temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.path_suggestions_mode = false; + state.base_path = temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('b'), KeyModifiers::CONTROL), + ) + .await; + state.path_browse_index = find_entry_index(&state, temp.path()); + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE), + ) + .await; + + assert!(!state.path_browse_mode); + assert_eq!(state.step, SetupStep::SelectPath); + assert_eq!(state.base_path, expected); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE), + ) + .await; + assert_eq!(state.step, SetupStep::Confirm); +} + +#[tokio::test] +async fn esc_in_popup_only_closes_popup() { + let temp = tempfile::tempdir().unwrap(); + let mut state = SetupState::new(&temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.path_suggestions_mode = false; + state.base_path = temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('b'), KeyModifiers::CONTROL), + ) + .await; + assert!(state.path_browse_mode); + + handle_key(&mut state, KeyEvent::new(KeyCode::Esc, KeyModifiers::NONE)).await; + + assert!(!state.path_browse_mode); + assert_eq!(state.step, SetupStep::SelectPath); + assert!(!state.should_quit); +} + +#[tokio::test] 
+async fn left_moves_to_parent_and_then_collapses() { + let temp = tempdir_in_cwd("gisa-path-left-"); + let alpha = temp.path().join("alpha"); + let nested = alpha.join("nested"); + std::fs::create_dir_all(&nested).unwrap(); + let parent = super::tilde_collapse(&temp.path().to_string_lossy()); + + let mut state = SetupState::new(&temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.path_suggestions_mode = false; + state.base_path = temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('b'), KeyModifiers::CONTROL), + ) + .await; + state.path_browse_index = find_entry_index(&state, temp.path()); + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + assert_eq!( + state.path_browse_current_dir, + super::tilde_collapse(&alpha.to_string_lossy()) + ); + + handle_key(&mut state, KeyEvent::new(KeyCode::Left, KeyModifiers::NONE)).await; + assert_eq!(state.path_browse_current_dir, parent); + + let before = state.path_browse_entries.len(); + handle_key(&mut state, KeyEvent::new(KeyCode::Left, KeyModifiers::NONE)).await; + assert!(state.path_browse_entries.len() < before); + let parent_index = find_entry_index(&state, temp.path()); + assert!(!state.path_browse_entries[parent_index].expanded); +} + +#[tokio::test] +async fn right_on_leaf_does_not_change_selection_until_enter() { + let leaf_temp = tempdir_in_cwd("gisa-path-leaf-"); + let expected = super::tilde_collapse(&leaf_temp.path().to_string_lossy()); + + let mut state = SetupState::new(&leaf_temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.path_suggestions_mode = false; + state.base_path = leaf_temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + 
KeyEvent::new(KeyCode::Char('b'), KeyModifiers::CONTROL), + ) + .await; + state.path_browse_index = find_entry_index(&state, leaf_temp.path()); + state.path_browse_current_dir = expected.clone(); + assert_eq!(state.path_browse_current_dir, expected); + + let selected_before = state.path_browse_index; + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + assert_eq!(state.path_browse_index, selected_before); + assert_eq!(state.path_browse_current_dir, expected); + assert!(state.path_browse_mode); +} + +#[tokio::test] +async fn very_large_directory_list_is_loaded() { + let temp = tempdir_in_cwd("gisa-path-many-"); + for i in 0..150 { + std::fs::create_dir_all(temp.path().join(format!("d{i:03}"))).unwrap(); + } + + let mut state = SetupState::new(&temp.path().to_string_lossy()); + state.step = SetupStep::SelectPath; + state.path_suggestions_mode = false; + state.base_path = temp.path().to_string_lossy().to_string(); + state.path_cursor = state.base_path.len(); + + handle_key( + &mut state, + KeyEvent::new(KeyCode::Char('b'), KeyModifiers::CONTROL), + ) + .await; + assert!(state.path_browse_error.is_none()); + state.path_browse_index = find_entry_index(&state, temp.path()); + handle_key( + &mut state, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + ) + .await; + + let children: Vec<_> = state + .path_browse_entries + .iter() + .filter(|entry| { + entry.depth == 2 + && entry.path.starts_with(&format!( + "{}/", + super::tilde_collapse(&temp.path().to_string_lossy()) + )) + }) + .map(|entry| entry.label.clone()) + .collect(); + assert_eq!(children.len(), 150); + assert_eq!(children.first().map(String::as_str), Some("d000/")); + assert_eq!(children.last().map(String::as_str), Some("d149/")); +} + +#[cfg(unix)] +#[tokio::test] +async fn unreadable_directory_surfaces_inline_error() { + use std::os::unix::fs::PermissionsExt; + + let temp = tempfile::tempdir().unwrap(); + let locked = temp.path().join("locked"); + 
std::fs::create_dir_all(&locked).unwrap(); + let mut perms = std::fs::metadata(&locked).unwrap().permissions(); + perms.set_mode(0o000); + std::fs::set_permissions(&locked, perms).unwrap(); + + if std::fs::read_dir(&locked).is_ok() { + let mut reset = std::fs::metadata(&locked).unwrap().permissions(); + reset.set_mode(0o700); + std::fs::set_permissions(&locked, reset).unwrap(); + return; + } + + let mut state = SetupState::new("~/Git-Same/GitHub"); + set_browse_root(&mut state, locked.clone()); + assert!(state.path_browse_error.is_some()); + + let mut reset = std::fs::metadata(&locked).unwrap().permissions(); + reset.set_mode(0o700); + std::fs::set_permissions(&locked, reset).unwrap(); +} diff --git a/src/setup/mod.rs b/src/setup/mod.rs new file mode 100644 index 0000000..b04b6d8 --- /dev/null +++ b/src/setup/mod.rs @@ -0,0 +1,126 @@ +//! Interactive setup wizard for creating workspace configurations. +//! +//! This module provides a self-contained ratatui mini-app that guides +//! the user through setting up a workspace: selecting a provider, +//! authenticating, selecting organizations, and choosing a base path. + +pub mod handler; +pub mod screens; +pub mod state; +pub mod ui; + +use crate::errors::Result; +use crossterm::{ + event::{DisableMouseCapture, EnableMouseCapture, Event as CtEvent}, + execute, + terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen}, +}; +use ratatui::backend::CrosstermBackend; +use ratatui::Terminal; +use state::{SetupOutcome, SetupState, SetupStep}; +use std::io; +use std::time::Duration; + +/// Run the setup wizard. +/// +/// Returns `Ok(true)` if the wizard completed (workspace saved), +/// `Ok(false)` if the user cancelled. 
+pub async fn run_setup() -> Result<bool> { + let default_path = std::env::current_dir() + .map(|p| state::tilde_collapse(&p.to_string_lossy())) + .unwrap_or_else(|_| "~/Git-Same/GitHub".to_string()); + let mut state = SetupState::new(&default_path); + + struct SetupTerminalGuard { + raw_enabled: bool, + alt_enabled: bool, + } + impl Drop for SetupTerminalGuard { + fn drop(&mut self) { + if self.alt_enabled { + let mut stdout = io::stdout(); + let _ = execute!(stdout, LeaveAlternateScreen, DisableMouseCapture); + } + if self.raw_enabled { + let _ = disable_raw_mode(); + } + } + } + + // Setup terminal + enable_raw_mode()?; + let mut guard = SetupTerminalGuard { + raw_enabled: true, + alt_enabled: false, + }; + let mut stdout = io::stdout(); + execute!(stdout, EnterAlternateScreen, EnableMouseCapture)?; + guard.alt_enabled = true; + let backend = CrosstermBackend::new(stdout); + let mut terminal = Terminal::new(backend)?; + + // Main loop + let result = run_wizard(&mut terminal, &mut state).await; + + // Restore terminal (always, even on error) + let _ = disable_raw_mode(); + guard.raw_enabled = false; + let _ = execute!( + terminal.backend_mut(), + LeaveAlternateScreen, + DisableMouseCapture + ); + guard.alt_enabled = false; + let _ = terminal.show_cursor(); + + result?; + + Ok(matches!(state.outcome, Some(SetupOutcome::Completed))) +} + +async fn run_wizard( + terminal: &mut Terminal<CrosstermBackend<io::Stdout>>, + state: &mut SetupState, +) -> Result<()> { + loop { + terminal.draw(|frame| ui::render(state, frame))?; + + // If we're on the orgs step and loading, trigger discovery before waiting for input + if state.step == SetupStep::SelectOrgs && state.org_loading { + // Render loading state first, then do discovery + terminal.draw(|frame| ui::render(state, frame))?; + handler::handle_key( + state, + crossterm::event::KeyEvent::new( + crossterm::event::KeyCode::Null, + crossterm::event::KeyModifiers::NONE, + ), + ) + .await; + continue; + } + + // Increment tick counter for animations +
state.tick_count = state.tick_count.wrapping_add(1); + + // Wait for input with a short timeout for responsive tick + if crossterm::event::poll(Duration::from_millis(100))? { + if let Ok(event) = crossterm::event::read() { + match event { + CtEvent::Key(key) => { + handler::handle_key(state, key).await; + } + CtEvent::Resize(_, _) => { + // Terminal will re-render on next loop iteration + } + _ => {} + } + } + } + + if state.should_quit { + break; + } + } + Ok(()) +} diff --git a/src/setup/screens/auth.rs b/src/setup/screens/auth.rs new file mode 100644 index 0000000..4d762cc --- /dev/null +++ b/src/setup/screens/auth.rs @@ -0,0 +1,111 @@ +//! Step 2: Authentication screen with spinner and centered layout. + +use crate::setup::state::{AuthStatus, SetupState}; +use ratatui::layout::{Alignment, Rect}; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::{Block, BorderType, Borders, Paragraph}; +use ratatui::Frame; + +/// Braille spinner frames. 
+const SPINNER: [char; 10] = [ + '\u{280b}', '\u{2819}', '\u{2839}', '\u{2838}', '\u{283c}', '\u{2834}', '\u{2826}', '\u{2827}', + '\u{2807}', '\u{280f}', +]; + +pub fn render(state: &SetupState, frame: &mut Frame, area: Rect) { + let provider = state.selected_provider(); + let green = Style::default().fg(Color::Rgb(21, 128, 61)); + let green_bold = green.add_modifier(Modifier::BOLD); + + let mut lines: Vec<Line> = Vec::new(); + lines.push(Line::raw("")); + + // Title + lines.push(Line::from(Span::styled( + format!("Authenticate with {}", provider.display_name()), + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ))); + lines.push(Line::raw("")); + lines.push(Line::from(Span::styled( + "Detection method: GitHub CLI (gh)", + Style::default().fg(Color::DarkGray), + ))); + lines.push(Line::raw("")); + + match &state.auth_status { + AuthStatus::Pending => { + lines.push(Line::from(Span::styled( + "Press Enter to authenticate...", + Style::default().fg(Color::Yellow), + ))); + } + AuthStatus::Checking => { + let spinner_char = SPINNER[(state.tick_count as usize) % SPINNER.len()]; + lines.push(Line::from(Span::styled( + format!("{} Authenticating...", spinner_char), + Style::default().fg(Color::Yellow), + ))); + } + AuthStatus::Success => { + lines.push(Line::from(Span::styled( + "\u{2713} Authenticated", + green_bold, + ))); + if let Some(ref username) = state.username { + lines.push(Line::from(vec![ + Span::styled("Logged in as: ", Style::default().fg(Color::DarkGray)), + Span::styled( + format!("@{}", username), + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ), + ])); + } + lines.push(Line::raw("")); + lines.push(Line::from(Span::styled( + "Press Enter to continue", + Style::default().fg(Color::DarkGray), + ))); + } + AuthStatus::Failed(msg) => { + lines.push(Line::from(Span::styled( + "\u{2717} Authentication failed", + Style::default().fg(Color::Red).add_modifier(Modifier::BOLD), + ))); + lines.push(Line::raw("")); +
lines.push(Line::from(Span::styled( + msg.as_str(), + Style::default().fg(Color::White), + ))); + lines.push(Line::raw("")); + lines.push(Line::from(Span::styled( + "Ensure gh is installed and run: gh auth login", + Style::default().fg(Color::DarkGray), + ))); + } + } + + let content = Paragraph::new(lines).alignment(Alignment::Center); + + // Error block styling for failed state + let block = if matches!(state.auth_status, AuthStatus::Failed(_)) { + Block::default() + .borders(Borders::ALL) + .border_type(BorderType::Plain) + .border_style(Style::default().fg(Color::Red)) + .title(" Error ") + .title_style(Style::default().fg(Color::Red).add_modifier(Modifier::BOLD)) + } else { + Block::default() + }; + + frame.render_widget(content.block(block), area); +} + +#[cfg(test)] +#[path = "auth_tests.rs"] +mod tests; diff --git a/src/setup/screens/auth_tests.rs b/src/setup/screens/auth_tests.rs new file mode 100644 index 0000000..64c0f7b --- /dev/null +++ b/src/setup/screens/auth_tests.rs @@ -0,0 +1,59 @@ +use super::*; +use crate::setup::state::{AuthStatus, SetupState}; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(100, 22); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn render_pending_state_prompts_for_authentication() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.auth_status = AuthStatus::Pending; + + let output = render_output(&state); + assert!(output.contains("Authenticate with")); + assert!(output.contains("Press Enter to authenticate")); +} + +#[test] +fn 
render_success_state_shows_username_and_continue_hint() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.auth_status = AuthStatus::Success; + state.username = Some("octocat".to_string()); + + let output = render_output(&state); + assert!(output.contains("Authenticated")); + assert!(output.contains("@octocat")); + assert!(output.contains("Press Enter to continue")); +} + +#[test] +fn render_failed_state_shows_error_guidance() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.auth_status = AuthStatus::Failed("token missing".to_string()); + + let output = render_output(&state); + assert!(output.contains("Authentication failed")); + assert!(output.contains("token missing")); + assert!(output.contains("gh auth login")); +} diff --git a/src/setup/screens/complete.rs b/src/setup/screens/complete.rs new file mode 100644 index 0000000..b989900 --- /dev/null +++ b/src/setup/screens/complete.rs @@ -0,0 +1,87 @@ +//! Step 6: Completion / success screen. + +use crate::setup::state::SetupState; +use ratatui::layout::{Constraint, Layout, Rect}; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::Paragraph; +use ratatui::Frame; + +pub fn render(state: &SetupState, frame: &mut Frame, area: Rect) { + let chunks = Layout::vertical([ + Constraint::Length(3), // Title + Constraint::Min(10), // Content + Constraint::Length(2), // Help + ]) + .split(area); + + // Title + let title_text = if state.is_first_setup { + "Workspace Created!" + } else { + "Workspace Added!" 
+ }; + let title = Paragraph::new(Line::from(vec![ + Span::styled( + " \u{2713} ", + Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD), + ), + Span::styled( + title_text, + Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD), + ), + ])); + frame.render_widget(title, chunks[0]); + + // Summary + let selected_orgs = state.selected_orgs(); + let total_repos: usize = state + .orgs + .iter() + .filter(|o| o.selected) + .map(|o| o.repo_count) + .sum(); + let org_count = selected_orgs.len(); + + let value_style = Style::default() + .fg(Color::White) + .add_modifier(Modifier::BOLD); + let dim = Style::default().fg(Color::DarkGray); + let yellow = Style::default().fg(Color::Yellow); + + let lines = vec![ + Line::raw(""), + Line::from(Span::styled( + format!(" {}", state.workspace_name), + value_style, + )), + Line::from(Span::styled(format!(" {}", state.base_path), dim)), + Line::from(Span::styled( + format!( + " {} organization{} \u{00b7} {} repos", + org_count, + if org_count == 1 { "" } else { "s" }, + total_repos + ), + dim, + )), + Line::raw(""), + Line::raw(""), + Line::from(Span::styled(" Press Enter to continue", yellow)), + ]; + + let content = Paragraph::new(lines); + frame.render_widget(content, chunks[1]); + + // Help + let help = Paragraph::new("Enter Dashboard Esc Back").style(dim); + frame.render_widget(help, chunks[2]); +} + +#[cfg(test)] +#[path = "complete_tests.rs"] +mod tests; diff --git a/src/setup/screens/complete_tests.rs b/src/setup/screens/complete_tests.rs new file mode 100644 index 0000000..db314e3 --- /dev/null +++ b/src/setup/screens/complete_tests.rs @@ -0,0 +1,56 @@ +use super::*; +use crate::setup::state::{OrgEntry, SetupState}; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(100, 20); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = 
frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn render_complete_first_setup_shows_workspace_created() { + let mut state = SetupState::with_first_setup("~/Git-Same/GitHub", true); + state.workspace_name = "first-workspace".to_string(); + state.base_path = "~/Git-Same/GitHub".to_string(); + state.orgs = vec![OrgEntry { + name: "acme".to_string(), + repo_count: 12, + selected: true, + }]; + + let output = render_output(&state); + assert!(output.contains("Workspace Created!")); + assert!(output.contains("first-workspace")); + assert!(output.contains("1 organization")); + assert!(output.contains("12 repos")); +} + +#[test] +fn render_complete_additional_setup_shows_workspace_added() { + let mut state = SetupState::with_first_setup("~/Git-Same/GitHub", false); + state.workspace_name = "second-workspace".to_string(); + state.base_path = "~/Git-Same/GitHub".to_string(); + + let output = render_output(&state); + assert!(output.contains("Workspace Added!")); + assert!(output.contains("second-workspace")); + assert!(output.contains("Press Enter to continue")); +} diff --git a/src/setup/screens/confirm.rs b/src/setup/screens/confirm.rs new file mode 100644 index 0000000..5afb97b --- /dev/null +++ b/src/setup/screens/confirm.rs @@ -0,0 +1,109 @@ +//! Step 5: Review and save screen with bordered summary card. 
+ +use crate::setup::state::SetupState; +use ratatui::layout::{Constraint, Layout, Rect}; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::{Block, Borders, Paragraph}; +use ratatui::Frame; + +pub fn render(state: &SetupState, frame: &mut Frame, area: Rect) { + let chunks = Layout::vertical([ + Constraint::Length(3), // Title + Constraint::Length(9), // Summary card + Constraint::Min(3), // Info + error + ]) + .split(area); + + // Title + let title = Paragraph::new(Line::from(Span::styled( + " Review Workspace Configuration", + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ))); + frame.render_widget(title, chunks[0]); + + // Summary card + let provider = state.selected_provider(); + let selected_orgs = state.selected_orgs(); + let orgs_display = if selected_orgs.is_empty() { + "all organizations".to_string() + } else if selected_orgs.len() <= 3 { + selected_orgs.join(", ") + } else { + format!( + "{}, ... +{} more", + selected_orgs[..2].join(", "), + selected_orgs.len() - 2 + ) + }; + + let label_style = Style::default().fg(Color::DarkGray); + let value_style = Style::default() + .fg(Color::White) + .add_modifier(Modifier::BOLD); + + let summary_lines = vec![ + Line::from(vec![ + Span::styled(" Provider ", label_style), + Span::styled(provider.display_name(), value_style), + ]), + Line::from(vec![ + Span::styled(" Username ", label_style), + Span::styled( + format!("@{}", state.username.as_deref().unwrap_or("unknown")), + value_style, + ), + ]), + Line::from(vec![ + Span::styled(" Base Path ", label_style), + Span::styled(&state.base_path, value_style), + ]), + Line::from(vec![ + Span::styled(" Organizations ", label_style), + Span::styled(&orgs_display, value_style), + ]), + Line::from(vec![ + Span::styled(" Workspace ", label_style), + Span::styled(&state.workspace_name, value_style), + ]), + ]; + + let summary = Paragraph::new(summary_lines).block( + Block::default() + 
.borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(summary, chunks[1]); + + // Info + error + let mut info_lines: Vec<Line> = Vec::new(); + info_lines.push(Line::raw("")); + info_lines.push(Line::from(Span::styled( + format!( + " Config will be saved to: ~/.config/git-same/workspaces/{}/", + state.workspace_name + ), + Style::default().fg(Color::DarkGray), + ))); + info_lines.push(Line::raw("")); + info_lines.push(Line::from(Span::styled( + " Press Enter to save and continue", + Style::default().fg(Color::Yellow), + ))); + + if let Some(ref err) = state.error_message { + info_lines.push(Line::raw("")); + info_lines.push(Line::from(Span::styled( + format!(" Error: {}", err), + Style::default().fg(Color::Red), + ))); + } + + frame.render_widget(Paragraph::new(info_lines), chunks[2]); +} + +#[cfg(test)] +#[path = "confirm_tests.rs"] +mod tests; diff --git a/src/setup/screens/confirm_tests.rs b/src/setup/screens/confirm_tests.rs new file mode 100644 index 0000000..4822d84 --- /dev/null +++ b/src/setup/screens/confirm_tests.rs @@ -0,0 +1,62 @@ +use super::*; +use crate::setup::state::{OrgEntry, SetupState}; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(110, 24); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn render_confirm_shows_workspace_summary() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.username = Some("octocat".to_string()); + state.workspace_name = "personal-workspace".to_string(); + state.orgs = vec![ + OrgEntry { + name: "acme".to_string(), +
repo_count: 4, + selected: true, + }, + OrgEntry { + name: "tools".to_string(), + repo_count: 2, + selected: true, + }, + ]; + + let output = render_output(&state); + assert!(output.contains("Review Workspace Configuration")); + assert!(output.contains("@octocat")); + assert!(output.contains("personal-workspace")); + assert!(output.contains("acme, tools")); +} + +#[test] +fn render_confirm_shows_inline_error_when_present() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.workspace_name = "broken".to_string(); + state.error_message = Some("Unable to write config".to_string()); + + let output = render_output(&state); + assert!(output.contains("Press Enter to save and continue")); + assert!(output.contains("Error: Unable to write config")); +} diff --git a/src/setup/screens/mod.rs b/src/setup/screens/mod.rs new file mode 100644 index 0000000..f590757 --- /dev/null +++ b/src/setup/screens/mod.rs @@ -0,0 +1,9 @@ +//! Setup wizard screen renderers. + +pub mod auth; +pub mod complete; +pub mod confirm; +pub mod orgs; +pub mod path; +pub mod provider; +pub mod welcome; diff --git a/src/setup/screens/orgs.rs b/src/setup/screens/orgs.rs new file mode 100644 index 0000000..9a3f07b --- /dev/null +++ b/src/setup/screens/orgs.rs @@ -0,0 +1,162 @@ +//! Step 3: Organization selection screen with summary and proportional bars. + +use crate::setup::state::SetupState; +use ratatui::layout::Rect; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::Paragraph; +use ratatui::Frame; + +/// Braille spinner frames (same as auth). 
+const SPINNER: [char; 10] = [ + '\u{280b}', '\u{2819}', '\u{2839}', '\u{2838}', '\u{283c}', '\u{2834}', '\u{2826}', '\u{2827}', + '\u{2807}', '\u{280f}', +]; + +pub fn render(state: &SetupState, frame: &mut Frame, area: Rect) { + let mut lines: Vec<Line> = Vec::new(); + + // Title + let selected_count = state.orgs.iter().filter(|o| o.selected).count(); + let total_repos: usize = state.orgs.iter().map(|o| o.repo_count).sum(); + let selected_repos: usize = state + .orgs + .iter() + .filter(|o| o.selected) + .map(|o| o.repo_count) + .sum(); + + lines.push(Line::from(Span::styled( + " Select organizations to sync", + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ))); + + if !state.orgs.is_empty() { + lines.push(Line::from(vec![ + Span::styled( + format!(" {} of {} selected", selected_count, state.orgs.len()), + Style::default().fg(Color::DarkGray), + ), + Span::styled(" \u{00b7} ", Style::default().fg(Color::DarkGray)), + Span::styled( + format!("{} repos", selected_repos), + Style::default().fg(Color::Rgb(21, 128, 61)), + ), + Span::styled( + format!(" of {} total", total_repos), + Style::default().fg(Color::DarkGray), + ), + ])); + } + lines.push(Line::raw("")); + + // Content + if state.org_loading { + let spinner_char = SPINNER[(state.tick_count as usize) % SPINNER.len()]; + lines.push(Line::from(Span::styled( + format!(" {} Discovering organizations...", spinner_char), + Style::default().fg(Color::Yellow), + ))); + } else if let Some(ref err) = state.org_error { + lines.push(Line::from(Span::styled( + " \u{2717} Failed to discover organizations", + Style::default().fg(Color::Red).add_modifier(Modifier::BOLD), + ))); + lines.push(Line::raw("")); + lines.push(Line::from(Span::styled( + format!(" {}", err), + Style::default().fg(Color::White), + ))); + lines.push(Line::raw("")); + lines.push(Line::from(Span::styled( + " Press Enter to retry", + Style::default().fg(Color::Yellow), + ))); + } else if state.orgs.is_empty() { +
lines.push(Line::from(Span::styled( + " No organizations found. Press Enter to continue.", + Style::default().fg(Color::DarkGray), + ))); + lines.push(Line::from(Span::styled( + " Your personal repos will still be synced.", + Style::default().fg(Color::DarkGray), + ))); + } else { + let max_repos = state.orgs.iter().map(|o| o.repo_count).max().unwrap_or(1); + let bar_width = 16; + + for (i, org) in state.orgs.iter().enumerate() { + let is_selected = i == state.org_index; + let marker = if is_selected { " \u{25b8}" } else { " " }; + let checkbox = if org.selected { "[x]" } else { "[ ]" }; + + let green = Color::Rgb(21, 128, 61); + + let (marker_style, name_style, count_style) = if is_selected { + ( + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + Style::default().fg(Color::DarkGray), + ) + } else if org.selected { + ( + Style::default().fg(green), + Style::default().fg(Color::White), + Style::default().fg(Color::DarkGray), + ) + } else { + ( + Style::default().fg(Color::White), + Style::default().fg(Color::White), + Style::default().fg(Color::DarkGray), + ) + }; + + // Proportional bar + let filled = if max_repos > 0 { + (org.repo_count * bar_width) / max_repos + } else { + 0 + } + .max(if org.repo_count > 0 { 1 } else { 0 }); + let empty = bar_width - filled; + + let bar_color = if org.selected { green } else { Color::DarkGray }; + + let mut spans = vec![ + Span::styled(format!("{} {} ", marker, checkbox), marker_style), + Span::styled(format!("{:<20}", org.name), name_style), + Span::styled(format!("{:>4} repos ", org.repo_count), count_style), + Span::styled("\u{2588}".repeat(filled), Style::default().fg(bar_color)), + Span::styled( + "\u{2591}".repeat(empty), + Style::default().fg(Color::DarkGray), + ), + ]; + + // Percentage + if total_repos > 0 { + let pct = (org.repo_count * 100) / total_repos; + spans.push(Span::styled( + format!(" {:>3}%", pct), + 
Style::default().fg(Color::DarkGray), + )); + } + + lines.push(Line::from(spans)); + } + } + + let widget = Paragraph::new(lines); + frame.render_widget(widget, area); +} + +#[cfg(test)] +#[path = "orgs_tests.rs"] +mod tests; diff --git a/src/setup/screens/orgs_tests.rs b/src/setup/screens/orgs_tests.rs new file mode 100644 index 0000000..79862ba --- /dev/null +++ b/src/setup/screens/orgs_tests.rs @@ -0,0 +1,71 @@ +use super::*; +use crate::setup::state::{OrgEntry, SetupState}; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(100, 24); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn render_loading_state_shows_discovery_message() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.org_loading = true; + state.tick_count = 3; + + let output = render_output(&state); + assert!(output.contains("Select organizations to sync")); + assert!(output.contains("Discovering organizations")); +} + +#[test] +fn render_populated_orgs_shows_selection_summary() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.org_loading = false; + state.orgs = vec![ + OrgEntry { + name: "acme".to_string(), + repo_count: 5, + selected: true, + }, + OrgEntry { + name: "beta".to_string(), + repo_count: 10, + selected: false, + }, + ]; + state.org_index = 0; + + let output = render_output(&state); + assert!(output.contains("1 of 2 selected")); + assert!(output.contains("5 repos")); + assert!(output.contains("acme")); + assert!(output.contains("beta")); +} + +#[test] +fn render_empty_orgs_shows_personal_repo_hint() { + let state = 
SetupState::new("~/Git-Same/GitHub"); + + let output = render_output(&state); + assert!(output.contains("No organizations found")); + assert!(output.contains("personal repos")); +} diff --git a/src/setup/screens/path.rs b/src/setup/screens/path.rs new file mode 100644 index 0000000..25ac598 --- /dev/null +++ b/src/setup/screens/path.rs @@ -0,0 +1,362 @@ +//! Step 4: Base path input screen with suggestions, tab completion, and live preview. + +use crate::setup::state::SetupState; +use ratatui::layout::{Alignment, Constraint, Layout, Rect}; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::{Block, BorderType, Borders, Clear, Paragraph}; +use ratatui::Frame; + +pub fn render(state: &SetupState, frame: &mut Frame, area: Rect) { + let popup_open = state.path_browse_mode; + let list_items = if popup_open { + 0 + } else if state.path_suggestions_mode { + state.path_suggestions.len() + } else { + state.path_completions.len() + }; + let list_height = if list_items > 0 { + (list_items as u16 + 1).min(7) + } else { + 0 + }; + + let chunks = Layout::vertical([ + Constraint::Length(4), // Title + info + Constraint::Length(3), // Input + Constraint::Length(list_height), // Suggestions or completions + Constraint::Min(3), // Preview + error + ]) + .split(area); + + let accent = if popup_open { + Color::DarkGray + } else { + Color::Cyan + }; + let muted = Color::DarkGray; + let input_text_color = if popup_open || state.path_suggestions_mode { + Color::DarkGray + } else { + Color::Yellow + }; + + // Title and info (above input) + let title_lines = vec![ + Line::from(Span::styled( + " Where should repositories be cloned?", + Style::default().fg(accent).add_modifier(Modifier::BOLD), + )), + Line::from(Span::styled( + " Repos will be organized as: //", + Style::default().fg(muted), + )), + ]; + frame.render_widget(Paragraph::new(title_lines), chunks[0]); + + // Path input with styled border + let input_style = 
Style::default().fg(input_text_color); + let cursor_pos = state.path_cursor.min(state.base_path.len()); + + let input_line = Line::from(vec![ + Span::styled(" ", Style::default()), + Span::styled(&state.base_path, input_style), + ]); + let border_type = if state.path_browse_mode { + BorderType::Thick + } else if state.path_suggestions_mode { + BorderType::Plain + } else { + BorderType::Thick + }; + let border_color = if state.path_suggestions_mode { + Color::DarkGray + } else { + accent + }; + let input = Paragraph::new(input_line).block( + Block::default() + .borders(Borders::ALL) + .title(" Base Path ") + .border_type(border_type) + .border_style(Style::default().fg(border_color)), + ); + frame.render_widget(input, chunks[1]); + + // Show cursor in input mode + if !state.path_suggestions_mode && !state.path_browse_mode { + let cursor_x = chunks[1].x + 1 + 2 + cursor_pos as u16; + let cursor_y = chunks[1].y + 1; + frame.set_cursor_position((cursor_x, cursor_y)); + } + + // Suggestions or completions list + if state.path_suggestions_mode && !state.path_suggestions.is_empty() { + render_suggestions(state, frame, chunks[2]); + } else if !state.path_suggestions_mode && !state.path_completions.is_empty() { + render_completions(state, frame, chunks[2]); + } + + // Preview + error + let mut preview_lines: Vec<Line> = Vec::new(); + let preview_path = &state.base_path; + if !preview_path.is_empty() { + preview_lines.push(Line::from(Span::styled( + " Preview:", + Style::default().fg(muted), + ))); + preview_lines.push(Line::from(Span::styled( + format!(" {preview_path}/acme-corp/my-repo/"), + Style::default().fg(muted), + ))); + } + + if let Some(ref err) = state.error_message { + preview_lines.push(Line::raw("")); + preview_lines.push(Line::from(Span::styled( + format!(" {}", err), + Style::default().fg(muted), + ))); + } + + frame.render_widget(Paragraph::new(preview_lines), chunks[3]); + if popup_open { + render_browse_popup(state, frame, area); + } +} + +fn
render_browse_popup(state: &SetupState, frame: &mut Frame, area: Rect) { + let popup_area = centered_area(area, 80, 80); + frame.render_widget(Clear, popup_area); + + let popup = Block::default() + .title(" Folder Navigator ") + .borders(Borders::ALL) + .border_type(BorderType::Thick) + .border_style(Style::default().fg(Color::Cyan)); + let inner = popup.inner(popup_area); + frame.render_widget(popup, popup_area); + + let show_message = state.path_browse_error.is_some() || state.path_browse_info.is_some(); + let rows = Layout::vertical([ + Constraint::Length(1), // path + Constraint::Min(3), // tree + Constraint::Length(if show_message { 1 } else { 0 }), + Constraint::Length(1), // footer + ]) + .split(inner); + + let path_line = Line::from(vec![ + Span::styled("Path: ", Style::default().fg(Color::DarkGray)), + Span::styled( + &state.path_browse_current_dir, + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ), + ]); + frame.render_widget(Paragraph::new(path_line), rows[0]); + + render_browse_tree(state, frame, rows[1]); + + if show_message { + let message = state + .path_browse_error + .as_ref() + .map(|msg| { + ( + msg.as_str(), + Style::default().fg(Color::Red).add_modifier(Modifier::BOLD), + ) + }) + .or_else(|| { + state + .path_browse_info + .as_ref() + .map(|msg| (msg.as_str(), Style::default().fg(Color::DarkGray))) + }); + if let Some((msg, style)) = message { + frame.render_widget( + Paragraph::new(Line::from(Span::styled(msg, style))), + rows[2], + ); + } + } + + render_popup_footer(frame, rows[3]); +} + +fn render_browse_tree(state: &SetupState, frame: &mut Frame, area: Rect) { + if area.height == 0 { + return; + } + let mut lines: Vec<Line> = Vec::new(); + if state.path_browse_entries.is_empty() { + lines.push(Line::from(Span::styled( + " (No folders available)", + Style::default().fg(Color::DarkGray), + ))); + frame.render_widget(Paragraph::new(lines), area); + return; + } + + let visible = area.height as usize; + let selection = state
+ .path_browse_index + .min(state.path_browse_entries.len().saturating_sub(1)); + let half = visible / 2; + let mut start = selection.saturating_sub(half); + if start + visible > state.path_browse_entries.len() { + start = state.path_browse_entries.len().saturating_sub(visible); + } + + for (i, entry) in state + .path_browse_entries + .iter() + .enumerate() + .skip(start) + .take(visible) + { + let is_selected = i == selection; + let pointer = if is_selected { "› " } else { " " }; + let icon = if entry.has_children { + if entry.expanded { + "▾ " + } else { + "▸ " + } + } else { + " " + }; + let style = if is_selected { + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD) + } else { + Style::default().fg(Color::White) + }; + lines.push(Line::from(vec![ + Span::styled(pointer, style), + Span::styled( + " ".repeat(entry.depth as usize), + Style::default().fg(Color::DarkGray), + ), + Span::styled(icon, style), + Span::styled(&entry.label, style), + ])); + } + + frame.render_widget(Paragraph::new(lines), area); +} + +fn render_popup_footer(frame: &mut Frame, area: Rect) { + let left = "[Esc] Close"; + let center = "[←] Parent [↑/↓] Move [→] Open"; + let right = "[Enter] Select"; + let cols = Layout::horizontal([ + Constraint::Length(left.chars().count() as u16), + Constraint::Min(0), + Constraint::Length(right.chars().count() as u16), + ]) + .split(area); + + frame.render_widget( + Paragraph::new(Line::from(Span::styled( + left, + Style::default().fg(Color::DarkGray), + ))), + cols[0], + ); + frame.render_widget( + Paragraph::new(Line::from(Span::styled( + center, + Style::default().fg(Color::Cyan), + ))) + .alignment(Alignment::Center), + cols[1], + ); + frame.render_widget( + Paragraph::new(Line::from(Span::styled( + right, + Style::default().fg(Color::DarkGray), + ))) + .alignment(Alignment::Right), + cols[2], + ); +} + +fn centered_area(area: Rect, width_pct: u16, height_pct: u16) -> Rect { + let top = (100 - height_pct) / 2; + let bottom = 100 - 
height_pct - top; + let left = (100 - width_pct) / 2; + let right = 100 - width_pct - left; + + let vertical = Layout::vertical([ + Constraint::Percentage(top), + Constraint::Percentage(height_pct), + Constraint::Percentage(bottom), + ]) + .split(area); + let horizontal = Layout::horizontal([ + Constraint::Percentage(left), + Constraint::Percentage(width_pct), + Constraint::Percentage(right), + ]) + .split(vertical[1]); + horizontal[1] +} + +fn render_suggestions(state: &SetupState, frame: &mut Frame, area: Rect) { + let mut lines = vec![Line::from(Span::styled( + " Suggestions:", + Style::default().fg(Color::DarkGray), + ))]; + + for (i, suggestion) in state.path_suggestions.iter().enumerate() { + let is_selected = i == state.path_suggestion_index; + let marker = if is_selected { " \u{25b8} " } else { " " }; + let path_style = if is_selected { + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD) + } else { + Style::default().fg(Color::White) + }; + + let mut spans = vec![ + Span::styled(marker, path_style), + Span::styled(&suggestion.path, path_style), + ]; + if !suggestion.label.is_empty() { + spans.push(Span::styled( + format!(" ({})", suggestion.label), + Style::default().fg(Color::DarkGray), + )); + } + lines.push(Line::from(spans)); + } + + frame.render_widget(Paragraph::new(lines), area); +} + +fn render_completions(state: &SetupState, frame: &mut Frame, area: Rect) { + let mut lines: Vec<Line> = Vec::new(); + for (i, path) in state.path_completions.iter().enumerate() { + if i >= 6 { + break; + } + let style = if i == state.path_completion_index { + Style::default().fg(Color::Yellow) + } else { + Style::default().fg(Color::DarkGray) + }; + lines.push(Line::from(Span::styled(format!(" {path}"), style))); + } + + frame.render_widget(Paragraph::new(lines), area); +} + +#[cfg(test)] +#[path = "path_tests.rs"] +mod tests; diff --git a/src/setup/screens/path_tests.rs b/src/setup/screens/path_tests.rs new file mode 100644 index 0000000..1d7e757 ---
/dev/null +++ b/src/setup/screens/path_tests.rs @@ -0,0 +1,96 @@ +use super::*; +use crate::setup::state::{PathBrowseEntry, PathSuggestion, SetupState}; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(90, 26); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn render_suggestions_mode_shows_suggestions_block() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.path_suggestions_mode = true; + state.path_suggestions = vec![ + PathSuggestion { + path: "~/Git-Same/GitHub".to_string(), + label: "current directory".to_string(), + }, + PathSuggestion { + path: "~/Developer".to_string(), + label: "recommended".to_string(), + }, + ]; + state.path_suggestion_index = 1; + + let output = render_output(&state); + assert!(output.contains("Suggestions:")); + assert!(output.contains("~/Developer")); + assert!(output.contains("recommended")); +} + +#[test] +fn render_browse_mode_shows_folder_navigator_context() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.path_suggestions_mode = false; + state.path_browse_mode = true; + state.path_browse_current_dir = "~/Projects".to_string(); + state.path_browse_entries = vec![ + PathBrowseEntry { + label: "Projects/".to_string(), + path: "~/Projects".to_string(), + depth: 0, + expanded: true, + has_children: true, + }, + PathBrowseEntry { + label: "client/".to_string(), + path: "~/Projects/client".to_string(), + depth: 1, + expanded: false, + has_children: false, + }, + ]; + state.path_browse_index = 1; + + let output = render_output(&state); + 
assert!(output.contains("Folder Navigator")); + assert!(output.contains("Path:")); + assert!(output.contains("~/Projects")); + assert!(output.contains("client")); + assert!(output.contains("[Esc] Close")); + assert!(output.contains("[Enter] Select")); +} + +#[test] +fn render_error_state_shows_preview_and_error_message() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.path_suggestions_mode = false; + state.path_browse_mode = false; + state.base_path = "~/invalid-path".to_string(); + state.path_cursor = state.base_path.len(); + state.error_message = Some("Path does not exist".to_string()); + + let output = render_output(&state); + assert!(output.contains("Preview:")); + assert!(output.contains("~/invalid-path/acme-corp/my-repo/")); + assert!(output.contains("Path does not exist")); +} diff --git a/src/setup/screens/provider.rs b/src/setup/screens/provider.rs new file mode 100644 index 0000000..393c7c4 --- /dev/null +++ b/src/setup/screens/provider.rs @@ -0,0 +1,79 @@ +//! Step 1: Provider selection screen with descriptions. + +use crate::setup::state::SetupState; +use crate::types::ProviderKind; +use ratatui::layout::Rect; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::Paragraph; +use ratatui::Frame; + +/// Get a short description for each provider. 
+fn provider_description(kind: ProviderKind) -> &'static str { + match kind { + ProviderKind::GitHub => "github.com \u{2014} Public and private repositories", + ProviderKind::GitHubEnterprise => "GitHub Enterprise Cloud & Server (coming soon)", + ProviderKind::GitLab => "gitlab.com (coming soon)", + ProviderKind::GitLabSelfManaged => "GitLab Dedicated & Self-Managed (coming soon)", + ProviderKind::Codeberg => "codeberg.org (coming soon)", + ProviderKind::Bitbucket => "bitbucket.org (coming soon)", + } +} + +pub fn render(state: &SetupState, frame: &mut Frame, area: Rect) { + let mut lines: Vec<Line> = Vec::new(); + + // Title + lines.push(Line::from(Span::styled( + " Select your Git provider", + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ))); + lines.push(Line::raw("")); + + // Provider list with descriptions + for (i, choice) in state.provider_choices.iter().enumerate() { + let is_selected = i == state.provider_index; + let marker = if is_selected { " \u{25b8} " } else { " " }; + + let (label_style, desc_style) = if !choice.available { + ( + Style::default().fg(Color::DarkGray), + Style::default().fg(Color::DarkGray), + ) + } else if is_selected { + ( + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + Style::default().fg(Color::White), + ) + } else { + ( + Style::default().fg(Color::White), + Style::default().fg(Color::DarkGray), + ) + }; + + lines.push(Line::from(vec![ + Span::styled(marker, label_style), + Span::styled(&choice.label, label_style), + ])); + + // Description line + lines.push(Line::from(Span::styled( + format!(" {}", provider_description(choice.kind)), + desc_style, + ))); + + lines.push(Line::raw("")); + } + + let widget = Paragraph::new(lines); + frame.render_widget(widget, area); +} + +#[cfg(test)] +#[path = "provider_tests.rs"] +mod tests; diff --git a/src/setup/screens/provider_tests.rs b/src/setup/screens/provider_tests.rs new file mode 100644 index 0000000..04eb753 --- /dev/null +++ 
b/src/setup/screens/provider_tests.rs @@ -0,0 +1,48 @@ +use super::*; +use crate::setup::state::SetupState; +use crate::types::ProviderKind; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(100, 24); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn provider_description_matches_expected_labels() { + assert!(provider_description(ProviderKind::GitHub).contains("github.com")); + assert!(provider_description(ProviderKind::GitHubEnterprise).contains("coming soon")); + assert!(provider_description(ProviderKind::GitLab).contains("gitlab.com")); + assert!(provider_description(ProviderKind::GitLabSelfManaged).contains("coming soon")); + assert!(provider_description(ProviderKind::Codeberg).contains("codeberg.org")); + assert!(provider_description(ProviderKind::Bitbucket).contains("bitbucket.org")); +} + +#[test] +fn render_provider_screen_shows_options_and_selection() { + let state = SetupState::new("~/Git-Same/GitHub"); + + let output = render_output(&state); + assert!(output.contains("Select your Git provider")); + assert!(output.contains("GitHub")); + assert!(output.contains("GitHub Enterprise (coming soon)")); + assert!(output.contains("Codeberg.org (coming soon)")); +} diff --git a/src/setup/screens/welcome.rs b/src/setup/screens/welcome.rs new file mode 100644 index 0000000..7c24aae --- /dev/null +++ b/src/setup/screens/welcome.rs @@ -0,0 +1,79 @@ +//! Step 0: Welcome screen (first-time setup only). 
+ +use crate::setup::state::SetupState; +use ratatui::layout::{Constraint, Layout, Rect}; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::Paragraph; +use ratatui::Frame; + +pub fn render(_state: &SetupState, frame: &mut Frame, area: Rect) { + let chunks = Layout::vertical([ + Constraint::Length(3), // Title + Constraint::Min(10), // Content + Constraint::Length(2), // Help + ]) + .split(area); + + // Title + let title = Paragraph::new("Welcome to Git-Same").style( + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ); + frame.render_widget(title, chunks[0]); + + // Content + let cyan = Style::default().fg(Color::Cyan); + let dim = Style::default().fg(Color::DarkGray); + let white = Style::default().fg(Color::White); + + let lines = vec![ + Line::raw(""), + Line::from(Span::styled( + " Git-Same mirrors your GitHub organization structure", + white, + )), + Line::from(Span::styled( + " to your local file system with parallel cloning", + white, + )), + Line::from(Span::styled(" and syncing.", white)), + Line::raw(""), + Line::from(Span::styled(" This wizard will help you:", dim)), + Line::raw(""), + Line::from(vec![ + Span::styled(" 1. ", cyan), + Span::styled("Connect to your Git provider", white), + ]), + Line::from(vec![ + Span::styled(" 2. ", cyan), + Span::styled("Authenticate your account", white), + ]), + Line::from(vec![ + Span::styled(" 3. ", cyan), + Span::styled("Select which organizations to sync", white), + ]), + Line::from(vec![ + Span::styled(" 4. 
", cyan), + Span::styled("Choose where to store repos", white), + ]), + Line::raw(""), + Line::from(Span::styled( + " Press Enter to get started", + Style::default().fg(Color::Yellow), + )), + ]; + + let content = Paragraph::new(lines); + frame.render_widget(content, chunks[1]); + + // Help + let help = + Paragraph::new("Enter Start Esc Cancel").style(Style::default().fg(Color::DarkGray)); + frame.render_widget(help, chunks[2]); +} + +#[cfg(test)] +#[path = "welcome_tests.rs"] +mod tests; diff --git a/src/setup/screens/welcome_tests.rs b/src/setup/screens/welcome_tests.rs new file mode 100644 index 0000000..ee30f63 --- /dev/null +++ b/src/setup/screens/welcome_tests.rs @@ -0,0 +1,37 @@ +use super::*; +use crate::setup::state::SetupState; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(state: &SetupState) -> String { + let backend = TestBackend::new(100, 20); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render(state, frame, area); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn render_welcome_shows_intro_and_steps() { + let state = SetupState::with_first_setup("~/Git-Same/GitHub", true); + + let output = render_output(&state); + assert!(output.contains("Welcome to Git-Same")); + assert!(output.contains("Connect to your Git provider")); + assert!(output.contains("Authenticate your account")); + assert!(output.contains("Press Enter to get started")); +} diff --git a/src/setup/state.rs b/src/setup/state.rs new file mode 100644 index 0000000..41210ce --- /dev/null +++ b/src/setup/state.rs @@ -0,0 +1,385 @@ +//! Setup wizard state (the "Model" in Elm architecture). 
+ +use crate::config::WorkspaceProvider; +use crate::types::ProviderKind; + +/// Which step of the wizard is active. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum SetupStep { + /// Step 0: Welcome screen (first-time only). + Welcome, + /// Step 1: Select a provider. + SelectProvider, + /// Step 2: Authenticate and detect username. + Authenticate, + /// Step 3: Discover and select organizations. + SelectOrgs, + /// Step 4: Enter the base path. + SelectPath, + /// Step 5: Review and save. + Confirm, + /// Step 6: Success / completion screen. + Complete, +} + +/// An organization entry in the org selector. +#[derive(Debug, Clone)] +pub struct OrgEntry { + pub name: String, + pub repo_count: usize, + pub selected: bool, +} + +/// The outcome of the setup wizard. +#[derive(Debug, Clone)] +pub enum SetupOutcome { + /// User completed the wizard. + Completed, + /// User cancelled. + Cancelled, +} + +/// Represents one of the provider choices shown in step 1. +#[derive(Debug, Clone)] +pub struct ProviderChoice { + pub kind: ProviderKind, + pub label: String, + pub available: bool, +} + +/// A suggested directory path for the path selector. +#[derive(Debug, Clone)] +pub struct PathSuggestion { + pub path: String, + pub label: String, +} + +/// A selectable directory entry in the inline path navigator. +#[derive(Debug, Clone)] +pub struct PathBrowseEntry { + pub label: String, + pub path: String, + pub depth: u16, + pub expanded: bool, + pub has_children: bool, +} + +/// The wizard state (model). +pub struct SetupState { + /// Current wizard step. + pub step: SetupStep, + /// Whether to quit the wizard. + pub should_quit: bool, + /// Outcome when done. 
+ pub outcome: Option<SetupOutcome>, + + // Step 1: Provider selection + pub provider_choices: Vec<ProviderChoice>, + pub provider_index: usize, + + // Step 2: Authentication + pub auth_status: AuthStatus, + pub username: Option<String>, + pub auth_token: Option<String>, + + // Step 3: Org selection + pub orgs: Vec<OrgEntry>, + pub org_index: usize, + pub org_loading: bool, + pub org_error: Option<String>, + + // Step 4: Path + pub base_path: String, + pub path_cursor: usize, + pub path_suggestions_mode: bool, + pub path_suggestions: Vec<PathSuggestion>, + pub path_suggestion_index: usize, + pub path_completions: Vec<String>, + pub path_completion_index: usize, + pub path_browse_mode: bool, + pub path_browse_current_dir: String, + pub path_browse_entries: Vec<PathBrowseEntry>, + pub path_browse_index: usize, + pub path_browse_show_hidden: bool, + pub path_browse_error: Option<String>, + pub path_browse_info: Option<String>, + + // Step 5: Confirm + pub workspace_name: String, + pub name_editing: bool, + + // General + pub error_message: Option<String>, + + // Animation / UX + /// Tick counter for spinner and animation effects. + pub tick_count: u64, + /// Whether this is the first workspace setup (controls Welcome screen). + pub is_first_setup: bool, +} + +/// Authentication status during step 2. +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum AuthStatus { + /// Haven't checked yet. + Pending, + /// Currently checking. + Checking, + /// Authenticated successfully. + Success, + /// Authentication failed. + Failed(String), +} + +/// Collapse an absolute path's home directory prefix into `~`. +pub fn tilde_collapse(path: &str) -> String { + if let Ok(home) = std::env::var("HOME") { + if path.starts_with(&home) { + return format!("~{}", &path[home.len()..]); + } + } + path.to_string() +} + +impl SetupState { + /// Create initial wizard state without the Welcome screen. + /// + /// Equivalent to `with_first_setup(default_base_path, false)`. 
+ pub fn new(default_base_path: &str) -> Self { + Self::with_first_setup(default_base_path, false) + } + + /// Create wizard state, optionally starting with the Welcome screen. + pub fn with_first_setup(default_base_path: &str, is_first_setup: bool) -> Self { + let provider_choices = vec![ + ProviderChoice { + kind: ProviderKind::GitHub, + label: "GitHub".to_string(), + available: true, + }, + ProviderChoice { + kind: ProviderKind::GitHubEnterprise, + label: "GitHub Enterprise (coming soon)".to_string(), + available: false, + }, + ProviderChoice { + kind: ProviderKind::GitLab, + label: "GitLab.com (coming soon)".to_string(), + available: false, + }, + ProviderChoice { + kind: ProviderKind::GitLabSelfManaged, + label: "GitLab Self-Managed (coming soon)".to_string(), + available: false, + }, + ProviderChoice { + kind: ProviderKind::Codeberg, + label: "Codeberg.org (coming soon)".to_string(), + available: false, + }, + ProviderChoice { + kind: ProviderKind::Bitbucket, + label: "Bitbucket.org (coming soon)".to_string(), + available: false, + }, + ]; + + let base_path = default_base_path.to_string(); + let path_cursor = base_path.len(); + + let step = if is_first_setup { + SetupStep::Welcome + } else { + SetupStep::SelectProvider + }; + + Self { + step, + should_quit: false, + outcome: None, + provider_choices, + provider_index: 0, + auth_status: AuthStatus::Pending, + username: None, + auth_token: None, + base_path, + path_cursor, + path_suggestions_mode: true, + path_suggestions: Vec::new(), + path_suggestion_index: 0, + path_completions: Vec::new(), + path_completion_index: 0, + path_browse_mode: false, + path_browse_current_dir: String::new(), + path_browse_entries: Vec::new(), + path_browse_index: 0, + path_browse_show_hidden: false, + path_browse_error: None, + path_browse_info: None, + orgs: Vec::new(), + org_index: 0, + org_loading: false, + org_error: None, + workspace_name: String::new(), + name_editing: false, + error_message: None, + tick_count: 0, + 
is_first_setup, + } + } + + /// Get the selected provider kind. + pub fn selected_provider(&self) -> ProviderKind { + self.provider_choices[self.provider_index].kind + } + + /// Build the WorkspaceProvider from current state. + pub fn build_workspace_provider(&self) -> WorkspaceProvider { + let kind = self.selected_provider(); + WorkspaceProvider { + kind, + api_url: None, + ..WorkspaceProvider::default() + } + } + + /// Get selected org names. + pub fn selected_orgs(&self) -> Vec<String> { + self.orgs + .iter() + .filter(|o| o.selected) + .map(|o| o.name.clone()) + .collect() + } + + /// Populate the path suggestions list for the SelectPath step. + pub fn populate_path_suggestions(&mut self) { + let mut suggestions = Vec::new(); + + // 1. Current path (always first — this is the default) + suggestions.push(PathSuggestion { + path: self.base_path.clone(), + label: "current directory".to_string(), + }); + + // 2. Common developer directories (only if they exist and differ) + for candidate in &[ + "~/Git-Same/GitHub", + "~/Developer", + "~/Projects", + "~/repos", + "~/code", + ] { + let expanded = shellexpand::tilde(candidate); + let path = std::path::Path::new(expanded.as_ref()); + let expanded_candidate = expanded.as_ref().to_string(); + if path.is_dir() + && !suggestions.iter().any(|s| { + s.path == *candidate + || shellexpand::tilde(&s.path).as_ref() == expanded_candidate + }) + { + suggestions.push(PathSuggestion { + path: candidate.to_string(), + label: String::new(), + }); + } + } + + // 3. 
Home directory (always last) + if !suggestions.iter().any(|s| s.path == "~") { + suggestions.push(PathSuggestion { + path: "~".to_string(), + label: "home".to_string(), + }); + } + + self.path_suggestions = suggestions; + self.path_suggestion_index = 0; + self.path_suggestions_mode = true; + self.path_browse_mode = false; + self.path_browse_current_dir.clear(); + self.path_browse_entries.clear(); + self.path_browse_index = 0; + self.path_browse_show_hidden = false; + self.path_browse_error = None; + self.path_browse_info = None; + } + + /// The 1-based step number for display (Welcome is not counted). + pub fn step_number(&self) -> usize { + match self.step { + SetupStep::Welcome => 0, + SetupStep::SelectProvider => 1, + SetupStep::Authenticate => 2, + SetupStep::SelectOrgs => 3, + SetupStep::SelectPath => 4, + SetupStep::Confirm => 5, + SetupStep::Complete => 5, + } + } + + /// Total number of numbered steps (excluding Welcome and Complete). + pub const TOTAL_STEPS: usize = 5; + + /// Move to the next step. 
+ pub fn next_step(&mut self) { + self.error_message = None; + self.step = match self.step { + SetupStep::Welcome => SetupStep::SelectProvider, + SetupStep::SelectProvider => SetupStep::Authenticate, + SetupStep::Authenticate => { + self.org_loading = true; + self.orgs.clear(); + self.org_index = 0; + self.org_error = None; + SetupStep::SelectOrgs + } + SetupStep::SelectOrgs => { + self.populate_path_suggestions(); + SetupStep::SelectPath + } + SetupStep::SelectPath => { + // Derive workspace name from base_path + provider + let path = std::path::Path::new(&self.base_path); + let base = + crate::config::WorkspaceManager::name_from_path(path, self.selected_provider()); + self.workspace_name = + crate::config::WorkspaceManager::unique_name(&base).unwrap_or(base); + SetupStep::Confirm + } + SetupStep::Confirm => SetupStep::Complete, + SetupStep::Complete => { + self.outcome = Some(SetupOutcome::Completed); + self.should_quit = true; + SetupStep::Complete + } + }; + } + + /// Move to the previous step. 
+ pub fn prev_step(&mut self) { + self.error_message = None; + self.step = match self.step { + SetupStep::Welcome => { + self.outcome = Some(SetupOutcome::Cancelled); + self.should_quit = true; + SetupStep::Welcome + } + SetupStep::SelectProvider => { + self.outcome = Some(SetupOutcome::Cancelled); + self.should_quit = true; + SetupStep::SelectProvider + } + SetupStep::Authenticate => SetupStep::SelectProvider, + SetupStep::SelectOrgs => SetupStep::Authenticate, + SetupStep::SelectPath => SetupStep::SelectOrgs, + SetupStep::Confirm => SetupStep::SelectPath, + SetupStep::Complete => SetupStep::Confirm, + }; + } +} + +#[cfg(test)] +#[path = "state_tests.rs"] +mod tests; diff --git a/src/setup/state_tests.rs b/src/setup/state_tests.rs new file mode 100644 index 0000000..09ce5e0 --- /dev/null +++ b/src/setup/state_tests.rs @@ -0,0 +1,159 @@ +use super::*; + +#[test] +fn test_new_state() { + let state = SetupState::new("~/Git-Same/GitHub"); + assert_eq!(state.step, SetupStep::SelectProvider); + assert!(!state.should_quit); + assert_eq!(state.base_path, "~/Git-Same/GitHub"); + assert_eq!(state.provider_choices.len(), 6); + assert!(state.provider_choices[0].available); + assert!(!state.provider_choices[2].available); // GitLab + assert!(state.path_suggestions_mode); + assert!(!state.path_browse_mode); + assert!(state.path_browse_entries.is_empty()); + assert!(!state.path_browse_show_hidden); + assert!(state.path_browse_error.is_none()); + assert!(state.path_browse_info.is_none()); + assert!(state.path_suggestions.is_empty()); + assert_eq!(state.tick_count, 0); + assert!(!state.is_first_setup); +} + +#[test] +fn test_first_setup_starts_with_welcome() { + let state = SetupState::with_first_setup("~/Git-Same/GitHub", true); + assert_eq!(state.step, SetupStep::Welcome); + assert!(state.is_first_setup); +} + +#[test] +fn test_non_first_setup_starts_with_provider() { + let state = SetupState::with_first_setup("~/Git-Same/GitHub", false); + assert_eq!(state.step, 
SetupStep::SelectProvider); + assert!(!state.is_first_setup); +} + +#[test] +fn test_populate_path_suggestions() { + let mut state = SetupState::new("~/test-path"); + state.populate_path_suggestions(); + // First suggestion is always the current directory (default) + assert!(!state.path_suggestions.is_empty()); + assert_eq!(state.path_suggestions[0].path, "~/test-path"); + assert_eq!(state.path_suggestions[0].label, "current directory"); + // Last suggestion is always home + let last = state.path_suggestions.last().unwrap(); + assert_eq!(last.path, "~"); + assert_eq!(last.label, "home"); +} + +#[test] +fn test_tilde_collapse() { + if let Ok(home) = std::env::var("HOME") { + let path = format!("{}/projects", home); + assert_eq!(super::tilde_collapse(&path), "~/projects"); + } + assert_eq!(super::tilde_collapse("/tmp/foo"), "/tmp/foo"); +} + +#[test] +fn test_step_navigation() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + assert_eq!(state.step, SetupStep::SelectProvider); + + state.next_step(); + assert_eq!(state.step, SetupStep::Authenticate); + + state.next_step(); + assert_eq!(state.step, SetupStep::SelectOrgs); + + state.prev_step(); + assert_eq!(state.step, SetupStep::Authenticate); +} + +#[test] +fn test_welcome_navigation() { + let mut state = SetupState::with_first_setup("~/Git-Same/GitHub", true); + assert_eq!(state.step, SetupStep::Welcome); + + state.next_step(); + assert_eq!(state.step, SetupStep::SelectProvider); + assert!(!state.should_quit); +} + +#[test] +fn test_confirm_goes_to_complete() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.step = SetupStep::Confirm; + state.next_step(); + assert_eq!(state.step, SetupStep::Complete); + assert!(!state.should_quit); +} + +#[test] +fn test_complete_next_quits() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.step = SetupStep::Complete; + state.next_step(); + assert!(state.should_quit); + assert!(matches!(state.outcome, Some(SetupOutcome::Completed))); +} + 
+#[test] +fn test_selected_orgs() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.orgs = vec![ + OrgEntry { + name: "org1".to_string(), + repo_count: 5, + selected: true, + }, + OrgEntry { + name: "org2".to_string(), + repo_count: 3, + selected: false, + }, + OrgEntry { + name: "org3".to_string(), + repo_count: 8, + selected: true, + }, + ]; + let selected = state.selected_orgs(); + assert_eq!(selected, vec!["org1", "org3"]); +} + +#[test] +fn test_cancel_from_first_step() { + let mut state = SetupState::new("~/Git-Same/GitHub"); + state.prev_step(); + assert!(state.should_quit); + assert!(matches!(state.outcome, Some(SetupOutcome::Cancelled))); +} + +#[test] +fn test_cancel_from_welcome() { + let mut state = SetupState::with_first_setup("~/Git-Same/GitHub", true); + state.prev_step(); + assert!(state.should_quit); + assert!(matches!(state.outcome, Some(SetupOutcome::Cancelled))); +} + +#[test] +fn test_step_number() { + let mut state = SetupState::with_first_setup("~/Git-Same/GitHub", true); + assert_eq!(state.step_number(), 0); + state.step = SetupStep::SelectProvider; + assert_eq!(state.step_number(), 1); + state.step = SetupStep::Authenticate; + assert_eq!(state.step_number(), 2); + state.step = SetupStep::SelectOrgs; + assert_eq!(state.step_number(), 3); + state.step = SetupStep::SelectPath; + assert_eq!(state.step_number(), 4); + state.step = SetupStep::Confirm; + assert_eq!(state.step_number(), 5); + state.step = SetupStep::Complete; + assert_eq!(state.step_number(), 5); +} diff --git a/src/setup/ui.rs b/src/setup/ui.rs new file mode 100644 index 0000000..2a2c5ba --- /dev/null +++ b/src/setup/ui.rs @@ -0,0 +1,404 @@ +//! Setup wizard render dispatcher. 
+ +use super::screens; +use super::state::{SetupState, SetupStep}; +use crate::banner; +use ratatui::layout::{Alignment, Constraint, Layout, Rect}; +use ratatui::style::{Color, Modifier, Style}; +use ratatui::text::{Line, Span}; +use ratatui::widgets::{Block, Borders, Paragraph}; +use ratatui::Frame; + +/// Render the setup wizard. +pub fn render(state: &SetupState, frame: &mut Frame) { + let area = frame.area(); + let height = area.height; + let path_popup_active = state.step == SetupStep::SelectPath && state.path_browse_mode; + + // Graceful degradation for small terminals + let show_banner = height >= 30 && !path_popup_active; + let show_progress = height >= 20; + + let mut constraints = Vec::new(); + if show_banner { + constraints.push(Constraint::Length(6)); // Banner + } + constraints.push(Constraint::Length(2)); // Title + if show_progress { + constraints.push(Constraint::Length(4)); // Step progress indicator (with border) + } + constraints.push(Constraint::Min(10)); // Step content (with border) + constraints.push(Constraint::Length(2)); // Status bar + + let chunks = Layout::vertical(constraints).split(area); + + let mut idx = 0; + + // Banner + if show_banner { + if state.step == SetupStep::Complete { + let phase = (state.tick_count % 100) as f64 / 100.0; + banner::render_animated_banner(frame, chunks[idx], phase); + } else { + banner::render_banner(frame, chunks[idx]); + } + idx += 1; + } + + // Title + let title_text = if state.step == SetupStep::Welcome { + "" + } else if state.is_first_setup { + "Workspace Setup" + } else { + "New Workspace" + }; + if !title_text.is_empty() { + let title = Paragraph::new(title_text) + .style( + Style::default() + .fg(if path_popup_active { + Color::DarkGray + } else { + Color::White + }) + .add_modifier(Modifier::BOLD), + ) + .alignment(Alignment::Center); + frame.render_widget(title, chunks[idx]); + } + idx += 1; + + // Step progress indicator + if show_progress { + let progress_block = Block::default() + 
.borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)); + let progress_inner = progress_block.inner(chunks[idx]); + frame.render_widget(progress_block, chunks[idx]); + render_step_progress(state, frame, progress_inner, path_popup_active); + idx += 1; + } + + // Step content + let content_area = chunks[idx]; + let content_block = Block::default() + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)); + let content_inner = content_block.inner(content_area); + frame.render_widget(content_block, content_area); + idx += 1; + + match state.step { + SetupStep::Welcome => screens::welcome::render(state, frame, content_inner), + SetupStep::SelectProvider => screens::provider::render(state, frame, content_inner), + SetupStep::Authenticate => screens::auth::render(state, frame, content_inner), + SetupStep::SelectOrgs => screens::orgs::render(state, frame, content_inner), + SetupStep::SelectPath => screens::path::render(state, frame, content_inner), + SetupStep::Confirm => screens::confirm::render(state, frame, content_inner), + SetupStep::Complete => screens::complete::render(state, frame, content_inner), + } + + // Status bar + render_status_bar(state, frame, chunks[idx]); +} + +/// Render the step progress indicator with nodes and connectors. 
+fn render_step_progress(state: &SetupState, frame: &mut Frame, area: Rect, dimmed: bool) { + let steps = ["Provider", "Auth", "Orgs", "Path", "Save"]; + let current = state.step_number(); // 0 for Welcome, 1-5 for steps, 5 for Complete + + let green = if dimmed { + Style::default().fg(Color::DarkGray) + } else { + Style::default().fg(Color::Rgb(21, 128, 61)) + }; + let green_bold = green.add_modifier(Modifier::BOLD); + let cyan_bold = if dimmed { + Style::default() + .fg(Color::DarkGray) + .add_modifier(Modifier::BOLD) + } else { + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD) + }; + let dim = Style::default().fg(Color::DarkGray); + + let segments = Layout::horizontal([ + Constraint::Ratio(3, 23), + Constraint::Ratio(2, 23), + Constraint::Ratio(3, 23), + Constraint::Ratio(2, 23), + Constraint::Ratio(3, 23), + Constraint::Ratio(2, 23), + Constraint::Ratio(3, 23), + Constraint::Ratio(2, 23), + Constraint::Ratio(3, 23), + ]) + .split(area); + + let mut node_spans: Vec<Span> = Vec::new(); + let mut label_spans: Vec<Span> = Vec::new(); + + for (i, label) in steps.iter().enumerate() { + let step_num = i + 1; + let node_width = segments[i * 2].width as usize; + let node_style = if step_num < current || state.step == SetupStep::Complete { + green_bold + } else if step_num == current { + cyan_bold + } else { + dim + }; + let label_style = if step_num < current || state.step == SetupStep::Complete { + green + } else if step_num == current { + cyan_bold + } else { + dim + }; + + let node_text = if step_num < current || state.step == SetupStep::Complete { + "(\u{2713})".to_string() + } else { + format!("({})", step_num) + }; + + node_spans.push(Span::styled( + center_cell(&node_text, node_width), + node_style, + )); + label_spans.push(Span::styled(center_cell(label, node_width), label_style)); + + if i < steps.len() - 1 { + let connector_width = segments[i * 2 + 1].width as usize; + let connector_done = step_num < current || state.step == SetupStep::Complete; + let 
connector_style = if connector_done { green } else { dim }; + node_spans.push(Span::styled( + connector_cell(connector_width, connector_done), + connector_style, + )); + label_spans.push(Span::raw(" ".repeat(connector_width))); + } + } + + let widget = Paragraph::new(vec![Line::from(node_spans), Line::from(label_spans)]); + frame.render_widget(widget, area); +} + +fn center_cell(text: &str, width: usize) -> String { + if width == 0 { + return String::new(); + } + + let text = if text.chars().count() > width { + text.chars().take(width).collect::<String>() + } else { + text.to_string() + }; + let text_width = text.chars().count(); + let left_pad = (width - text_width) / 2; + let right_pad = width - text_width - left_pad; + format!("{}{}{}", " ".repeat(left_pad), text, " ".repeat(right_pad)) +} + +fn connector_cell(width: usize, completed: bool) -> String { + if width == 0 { + return String::new(); + } + + if completed { + return "\u{2501}".repeat(width); + } + + // Dashed connector for upcoming steps. + let mut out = String::with_capacity(width); + for i in 0..width { + if i % 2 == 0 { + out.push('\u{2500}'); + } else { + out.push(' '); + } + } + out +} + +/// Render the 2-line status bar with actions and navigation hints. 
+fn render_status_bar(state: &SetupState, frame: &mut Frame, area: Rect) { + let path_popup_active = state.step == SetupStep::SelectPath && state.path_browse_mode; + let blue = if path_popup_active { + Style::default() + .fg(Color::DarkGray) + .add_modifier(Modifier::BOLD) + } else { + Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD) + }; + let dim = Style::default().fg(Color::DarkGray); + let yellow = if path_popup_active { + Style::default().fg(Color::DarkGray) + } else { + Style::default().fg(Color::Yellow) + }; + + let top_center = match state.step { + SetupStep::Welcome => vec![ + Span::styled("Press ", dim), + Span::styled("[Enter]", blue), + Span::styled(" to get started", dim), + ], + SetupStep::SelectProvider => vec![ + Span::styled("[↑] [↓]", blue), + Span::styled(" Select provider", dim), + ], + SetupStep::Authenticate => { + use super::state::AuthStatus; + match &state.auth_status { + AuthStatus::Pending | AuthStatus::Failed(_) => vec![ + Span::styled("[Enter]", blue), + Span::styled(" Authenticate", dim), + ], + AuthStatus::Success => vec![ + Span::styled("[Enter]", blue), + Span::styled(" Continue", dim), + ], + AuthStatus::Checking => vec![Span::styled("Authenticating...", yellow)], + } + } + SetupStep::SelectPath => { + if state.path_browse_mode { + vec![Span::styled("Folder popup active", dim)] + } else if state.path_suggestions_mode { + vec![ + Span::styled("[Tab]", blue), + Span::styled(" Edit ", dim), + Span::styled("[b]", blue), + Span::styled(" Browse", dim), + ] + } else { + vec![ + Span::styled("[Tab]", blue), + Span::styled(" Complete ", dim), + Span::styled("[Ctrl+b]", blue), + Span::styled(" Browse", dim), + ] + } + } + SetupStep::SelectOrgs => { + if state.org_loading { + vec![Span::styled("Discovering organizations...", yellow)] + } else { + vec![ + Span::styled("[Space]", blue), + Span::styled(" Toggle ", dim), + Span::styled("[a]", blue), + Span::styled(" All ", dim), + Span::styled("[n]", blue), + 
Span::styled(" None", dim), + ] + } + } + SetupStep::Confirm => vec![ + Span::styled("[Enter]", blue), + Span::styled(" Save workspace", dim), + ], + SetupStep::Complete => vec![ + Span::styled("[Enter]", blue), + Span::styled(" Dashboard ", dim), + Span::styled("[s]", blue), + Span::styled(" Sync Now", dim), + ], + }; + + let bottom_left = if path_popup_active { + vec![ + Span::styled("[Esc]", blue), + Span::styled(" Close Popup", dim), + ] + } else { + vec![ + Span::styled("[q]", blue), + Span::styled(" Quit ", dim), + Span::styled("[Esc]", blue), + Span::styled(" Back", dim), + ] + }; + + let bottom_right = match state.step { + SetupStep::SelectProvider | SetupStep::SelectOrgs => vec![ + Span::styled("[↑] [↓]", blue), + Span::styled(" Move ", dim), + Span::styled("[←] [→]", blue), + Span::styled(" Step ", dim), + Span::styled("[Enter]", blue), + Span::styled(" Next Step", dim), + ], + SetupStep::SelectPath => { + if state.path_browse_mode { + vec![Span::styled("Use popup arrows and Enter", dim)] + } else if state.path_suggestions_mode { + vec![ + Span::styled("[↑] [↓]", blue), + Span::styled(" Move ", dim), + Span::styled("[←] [→]", blue), + Span::styled(" Step ", dim), + Span::styled("[Enter]", blue), + Span::styled(" Next Step", dim), + ] + } else { + vec![ + Span::styled("[←] [→]", blue), + Span::styled(" Step ", dim), + Span::styled("[Enter]", blue), + Span::styled(" Next Step", dim), + ] + } + } + _ => vec![ + Span::styled("[←] [→]", blue), + Span::styled(" Step ", dim), + Span::styled("[Enter]", blue), + Span::styled(" Next Step", dim), + ], + }; + + let rows = Layout::vertical([Constraint::Length(1), Constraint::Length(1)]).split(area); + let step_num = state.step_number(); + let step_text = if step_num > 0 { + Some(format!("Step {} of {}", step_num, SetupState::TOTAL_STEPS)) + } else { + None + }; + + let step_width = step_text + .as_ref() + .map(|s| s.chars().count() as u16 + 1) + .unwrap_or(0); + let top_cols = + 
Layout::horizontal([Constraint::Length(step_width), Constraint::Min(0)]).split(rows[0]); + + if let Some(text) = step_text { + frame.render_widget( + Paragraph::new(Line::from(Span::styled(text, dim))), + top_cols[0], + ); + } + frame.render_widget( + Paragraph::new(Line::from(top_center)).alignment(Alignment::Center), + top_cols[1], + ); + + let bottom_cols = + Layout::horizontal([Constraint::Length(24), Constraint::Min(0)]).split(rows[1]); + frame.render_widget(Paragraph::new(Line::from(bottom_left)), bottom_cols[0]); + frame.render_widget( + Paragraph::new(Line::from(bottom_right)).right_aligned(), + bottom_cols[1], + ); +} + +#[cfg(test)] +#[path = "ui_tests.rs"] +mod tests; diff --git a/src/setup/ui_tests.rs b/src/setup/ui_tests.rs new file mode 100644 index 0000000..2b6f4c7 --- /dev/null +++ b/src/setup/ui_tests.rs @@ -0,0 +1,14 @@ +use super::*; + +#[test] +fn center_cell_matches_width() { + let out = center_cell("Auth", 10); + assert_eq!(out.chars().count(), 10); + assert!(out.contains("Auth")); +} + +#[test] +fn connector_cell_matches_width() { + assert_eq!(connector_cell(7, true).chars().count(), 7); + assert_eq!(connector_cell(7, false).chars().count(), 7); +} diff --git a/src/tui/app.rs b/src/tui/app.rs new file mode 100644 index 0000000..cbffa8e --- /dev/null +++ b/src/tui/app.rs @@ -0,0 +1,437 @@ +//! TUI application state (the "Model" in Elm architecture). + +use crate::config::{Config, WorkspaceConfig}; +use crate::setup::state::{self, SetupState}; +use crate::types::{OpSummary, OwnedRepo}; +use ratatui::widgets::TableState; +use serde::{Deserialize, Serialize}; +use std::collections::HashMap; +use std::path::PathBuf; +use std::time::Instant; + +/// Which screen is active. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum Screen { + SystemCheck, + WorkspaceSetup, + Workspaces, + Dashboard, + Sync, + Settings, +} + +/// Focused pane on the Workspace screen. 
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum WorkspacePane {
+ Left,
+ Right,
+}
+
+/// Which operation is running or was last selected.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum Operation {
+ Sync,
+ Status,
+}
+
+impl std::fmt::Display for Operation {
+ fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+ match self {
+ Operation::Sync => write!(f, "Sync"),
+ Operation::Status => write!(f, "Status"),
+ }
+ }
+}
+
+/// State of an ongoing async operation.
+#[derive(Debug, Clone)]
+pub enum OperationState {
+ Idle,
+ Discovering {
+ operation: Operation,
+ message: String,
+ },
+ Running {
+ operation: Operation,
+ total: usize,
+ completed: usize,
+ failed: usize,
+ skipped: usize,
+ current_repo: String,
+ /// Repos that had new commits (updated or cloned).
+ with_updates: usize,
+ /// New repos cloned so far.
+ cloned: usize,
+ /// Existing repos synced so far.
+ synced: usize,
+ /// Planned clone count (for phase indicator).
+ to_clone: usize,
+ /// Planned sync count (for phase indicator).
+ to_sync: usize,
+ /// Aggregate new commits fetched.
+ total_new_commits: u32,
+ /// When the operation started (for elapsed/ETA).
+ started_at: Instant,
+ /// Repos currently being processed (for worker slots).
+ active_repos: Vec<String>,
+ /// Throughput samples (repos completed per second window).
+ throughput_samples: Vec<f64>,
+ /// Completed count at last throughput sample.
+ last_sample_completed: usize,
+ },
+ Finished {
+ operation: Operation,
+ summary: OpSummary,
+ /// Repos that had new commits.
+ with_updates: usize,
+ /// New repos cloned.
+ cloned: usize,
+ /// Existing repos synced.
+ synced: usize,
+ /// Aggregate new commits fetched.
+ total_new_commits: u32,
+ /// Wall-clock duration in seconds.
+ duration_secs: f64,
+ },
+}
+
+/// A structured log entry from a sync operation. 
+#[derive(Debug, Clone)]
+pub struct SyncLogEntry {
+ pub repo_name: String,
+ pub status: SyncLogStatus,
+ pub message: String,
+ pub had_updates: bool,
+ pub is_clone: bool,
+ pub new_commits: Option<u32>,
+ pub path: Option<PathBuf>,
+}
+
+/// Status classification for a sync log entry.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum SyncLogStatus {
+ Success,
+ Updated,
+ Cloned,
+ Failed,
+ Skipped,
+}
+
+/// Filter for post-sync log view.
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+pub enum LogFilter {
+ All,
+ Updated,
+ Failed,
+ Skipped,
+ Changelog,
+}
+
+/// A summary entry for sync history.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct SyncHistoryEntry {
+ pub timestamp: String,
+ pub duration_secs: f64,
+ pub success: usize,
+ pub failed: usize,
+ pub skipped: usize,
+ pub with_updates: usize,
+ pub cloned: usize,
+ pub total_new_commits: u32,
+}
+
+/// A local repo with its computed status.
+#[derive(Debug, Clone)]
+pub struct RepoEntry {
+ pub owner: String,
+ pub name: String,
+ pub full_name: String,
+ pub path: PathBuf,
+ pub branch: Option<String>,
+ pub is_uncommitted: bool,
+ pub ahead: usize,
+ pub behind: usize,
+ pub staged_count: usize,
+ pub unstaged_count: usize,
+ pub untracked_count: usize,
+}
+
+/// A requirement check result for the init check screen.
+#[derive(Debug, Clone)]
+pub struct CheckEntry {
+ pub name: String,
+ pub passed: bool,
+ pub message: String,
+ pub critical: bool,
+}
+
+/// The application model (all TUI state).
+pub struct App {
+ /// Whether the user has requested quit.
+ pub should_quit: bool,
+
+ /// Active screen.
+ pub screen: Screen,
+
+ /// Screen history for back navigation.
+ pub screen_stack: Vec<Screen>,
+
+ /// Loaded configuration.
+ pub config: Config,
+
+ /// Available workspaces.
+ pub workspaces: Vec<WorkspaceConfig>,
+
+ /// Active workspace (selected or auto-selected).
+ pub active_workspace: Option<WorkspaceConfig>,
+
+ /// Selected index in workspace selector. 
+ pub workspace_index: usize,
+
+ /// Focused pane in the Workspaces screen.
+ pub workspace_pane: WorkspacePane,
+
+ /// Base path for repos (derived from active workspace).
+ pub base_path: Option<PathBuf>,
+
+ /// Discovered repos grouped by org.
+ pub repos_by_org: HashMap<String, Vec<OwnedRepo>>,
+
+ /// All discovered repos (flat list).
+ pub all_repos: Vec<OwnedRepo>,
+
+ /// Org names (sorted).
+ pub orgs: Vec<String>,
+
+ /// Local repo entries with status.
+ pub local_repos: Vec<RepoEntry>,
+
+ /// Current async operation state.
+ pub operation_state: OperationState,
+
+ /// Operation log lines (last N events).
+ pub log_lines: Vec<String>,
+
+ // -- Selection state --
+ /// Selected repo index in current view.
+ pub repo_index: usize,
+
+ /// Scroll offset for tables.
+ pub scroll_offset: usize,
+
+ /// Filter/search text.
+ pub filter_text: String,
+
+ /// Whether filter input is active.
+ pub filter_active: bool,
+
+ /// Whether dry-run is toggled in command picker.
+ pub dry_run: bool,
+
+ /// Error message to display (clears on next keypress).
+ pub error_message: Option<String>,
+
+ /// Requirement check results (populated on InitCheck screen).
+ pub check_results: Vec<CheckEntry>,
+
+ /// Whether checks are still running.
+ pub checks_loading: bool,
+
+ /// Whether to use pull mode for sync (vs fetch).
+ pub sync_pull: bool,
+
+ /// Setup wizard state (active when on SetupWizard screen).
+ pub setup_state: Option<SetupState>,
+
+ /// Whether the config file was successfully created by init.
+ pub config_created: bool,
+
+ /// Path where config was written (for display).
+ pub config_path_display: Option<String>,
+
+ /// Whether status scan is in progress.
+ pub status_loading: bool,
+
+ /// When the last status scan completed (for auto-refresh cooldown).
+ pub last_status_scan: Option<Instant>,
+
+ /// Selected stat box index on dashboard (0-5) for ←/→ navigation.
+ pub stat_index: usize,
+
+ /// Table state for dashboard tab content (tracks selection + scroll offset). 
+ pub dashboard_table_state: TableState,
+
+ /// Selected category index in settings screen (0 = Requirements, 1 = Options, 2+ = Workspaces).
+ pub settings_index: usize,
+
+ /// Whether the config TOML section is expanded in workspace detail.
+ pub settings_config_expanded: bool,
+
+ /// Scroll offset for the workspace detail right pane.
+ pub workspace_detail_scroll: u16,
+
+ /// Tick counter for driving animations on the Progress screen.
+ pub tick_count: u64,
+
+ /// Structured sync log entries (enriched data).
+ pub sync_log_entries: Vec<SyncLogEntry>,
+
+ /// Active log filter for post-sync view.
+ pub log_filter: LogFilter,
+
+ /// Sync history (last N summaries for comparison).
+ pub sync_history: Vec<SyncHistoryEntry>,
+
+ /// Whether sync history overlay is visible.
+ pub show_sync_history: bool,
+
+ /// Expanded repo in post-sync view (for commit deep dive).
+ pub expanded_repo: Option<String>,
+
+ /// Commit log for expanded repo.
+ pub repo_commits: Vec<String>,
+
+ /// Selected index in the post-sync filterable log.
+ pub sync_log_index: usize,
+
+ /// Aggregated commits per repo for changelog view.
+ pub changelog_commits: HashMap<String, Vec<String>>,
+
+ /// Total number of repos to fetch commits for in changelog.
+ pub changelog_total: usize,
+
+ /// Number of repos whose commits have been loaded for changelog.
+ pub changelog_loaded: usize,
+
+ /// Scroll offset for the changelog view.
+ pub changelog_scroll: usize,
+}
+
+impl App {
+ /// Create a new App with the given config and workspaces. 
+ pub fn new(config: Config, workspaces: Vec<WorkspaceConfig>) -> Self {
+ let (screen, active_workspace, base_path) = match workspaces.len() {
+ 0 => (Screen::WorkspaceSetup, None, None),
+ 1 => {
+ let ws = workspaces[0].clone();
+ let bp = Some(ws.expanded_base_path());
+ (Screen::Dashboard, Some(ws), bp)
+ }
+ _ => {
+ // Check for default workspace
+ if let Some(ref default_name) = config.default_workspace {
+ if let Some(ws) = workspaces.iter().find(|w| w.name == *default_name) {
+ let bp = Some(ws.expanded_base_path());
+ (Screen::Dashboard, Some(ws.clone()), bp)
+ } else {
+ (Screen::Workspaces, None, None)
+ }
+ } else {
+ (Screen::Workspaces, None, None)
+ }
+ }
+ };
+
+ let sync_history = active_workspace
+ .as_ref()
+ .and_then(|ws| {
+ crate::cache::SyncHistoryManager::for_workspace(&ws.name)
+ .and_then(|m| m.load())
+ .ok()
+ })
+ .unwrap_or_default();
+
+ Self {
+ should_quit: false,
+ screen,
+ screen_stack: Vec::new(),
+ config,
+ workspaces,
+ active_workspace,
+ workspace_index: 0,
+ workspace_pane: WorkspacePane::Left,
+ base_path,
+ repos_by_org: HashMap::new(),
+ all_repos: Vec::new(),
+ orgs: Vec::new(),
+ local_repos: Vec::new(),
+ operation_state: OperationState::Idle,
+ log_lines: Vec::new(),
+ repo_index: 0,
+ scroll_offset: 0,
+ filter_text: String::new(),
+ filter_active: false,
+ dry_run: false,
+ error_message: None,
+ check_results: Vec::new(),
+ checks_loading: false,
+ sync_pull: false,
+ setup_state: if screen == Screen::WorkspaceSetup {
+ let default_path = std::env::current_dir()
+ .map(|p| state::tilde_collapse(&p.to_string_lossy()))
+ .unwrap_or_else(|_| "~/Git-Same/GitHub".to_string());
+ Some(SetupState::with_first_setup(&default_path, true))
+ } else {
+ None
+ },
+ config_created: false,
+ config_path_display: None,
+ status_loading: false,
+ last_status_scan: None,
+ stat_index: 0,
+ dashboard_table_state: TableState::default().with_selected(0),
+ settings_index: 0,
+ settings_config_expanded: false,
+ workspace_detail_scroll: 0,
+ 
tick_count: 0, + sync_log_entries: Vec::new(), + log_filter: LogFilter::All, + sync_history, + show_sync_history: false, + expanded_repo: None, + repo_commits: Vec::new(), + sync_log_index: 0, + changelog_commits: HashMap::new(), + changelog_total: 0, + changelog_loaded: 0, + changelog_scroll: 0, + } + } + + /// Select a workspace and navigate to dashboard. + pub fn select_workspace(&mut self, index: usize) { + if let Some(ws) = self.workspaces.get(index).cloned() { + self.base_path = Some(ws.expanded_base_path()); + // Load sync history for this workspace + self.sync_history = crate::cache::SyncHistoryManager::for_workspace(&ws.name) + .and_then(|m| m.load()) + .unwrap_or_default(); + self.active_workspace = Some(ws); + // Reset discovered data when switching workspace + self.repos_by_org.clear(); + self.all_repos.clear(); + self.orgs.clear(); + self.local_repos.clear(); + self.last_status_scan = None; + } + } + + /// Navigate to a new screen, pushing current onto the stack. + pub fn navigate_to(&mut self, screen: Screen) { + self.screen_stack.push(self.screen); + self.screen = screen; + self.repo_index = 0; + self.scroll_offset = 0; + } + + /// Go back to previous screen. 
+ pub fn go_back(&mut self) { + if let Some(prev) = self.screen_stack.pop() { + self.screen = prev; + } + } +} + +#[cfg(test)] +#[path = "app_tests.rs"] +mod tests; diff --git a/src/tui/app_tests.rs b/src/tui/app_tests.rs new file mode 100644 index 0000000..af830cc --- /dev/null +++ b/src/tui/app_tests.rs @@ -0,0 +1,51 @@ +use super::*; + +#[test] +fn test_new_no_workspaces_shows_setup_wizard() { + let app = App::new(Config::default(), vec![]); + assert_eq!(app.screen, Screen::WorkspaceSetup); + assert!(app.setup_state.is_some()); + assert!(app.active_workspace.is_none()); + assert!(app.base_path.is_none()); +} + +#[test] +fn test_new_single_workspace_auto_selects() { + let ws = WorkspaceConfig::new("test", "/tmp/test"); + let app = App::new(Config::default(), vec![ws]); + assert_eq!(app.screen, Screen::Dashboard); + assert!(app.active_workspace.is_some()); + assert_eq!(app.active_workspace.unwrap().name, "test"); + assert!(app.base_path.is_some()); +} + +#[test] +fn test_new_multiple_no_default_shows_selector() { + let ws1 = WorkspaceConfig::new("ws1", "/tmp/ws1"); + let ws2 = WorkspaceConfig::new("ws2", "/tmp/ws2"); + let app = App::new(Config::default(), vec![ws1, ws2]); + assert_eq!(app.screen, Screen::Workspaces); + assert!(app.active_workspace.is_none()); +} + +#[test] +fn test_new_multiple_with_valid_default_auto_selects() { + let ws1 = WorkspaceConfig::new("ws1", "/tmp/ws1"); + let ws2 = WorkspaceConfig::new("ws2", "/tmp/ws2"); + let mut config = Config::default(); + config.default_workspace = Some("ws2".to_string()); + let app = App::new(config, vec![ws1, ws2]); + assert_eq!(app.screen, Screen::Dashboard); + assert_eq!(app.active_workspace.unwrap().name, "ws2"); +} + +#[test] +fn test_new_multiple_with_invalid_default_shows_selector() { + let ws1 = WorkspaceConfig::new("ws1", "/tmp/ws1"); + let ws2 = WorkspaceConfig::new("ws2", "/tmp/ws2"); + let mut config = Config::default(); + config.default_workspace = Some("nonexistent".to_string()); + let app = 
App::new(config, vec![ws1, ws2]);
+ assert_eq!(app.screen, Screen::Workspaces);
+ assert!(app.active_workspace.is_none());
+}
diff --git a/src/tui/backend.rs b/src/tui/backend.rs
new file mode 100644
index 0000000..5e19853
--- /dev/null
+++ b/src/tui/backend.rs
@@ -0,0 +1,392 @@
+//! Backend integration — bridges TUI with existing async command handlers.
+//!
+//! Provides channel-based progress adapters and spawn functions for operations.
+
+use std::path::Path;
+use std::sync::Arc;
+use tokio::sync::mpsc::UnboundedSender;
+
+use crate::config::{Config, WorkspaceConfig};
+use crate::git::{FetchResult, GitOperations, PullResult, ShellGit};
+use crate::operations::clone::CloneProgress;
+use crate::operations::sync::SyncProgress;
+use crate::provider::DiscoveryProgress;
+use crate::types::{OpSummary, OwnedRepo};
+use crate::workflows::status_scan::scan_workspace_status;
+use crate::workflows::sync_workspace::{
+ execute_prepared_sync, prepare_sync_workspace, SyncWorkspaceRequest,
+};
+
+use super::app::{App, Operation};
+use super::event::{AppEvent, BackendMessage};
+
+// -- Progress adapters that send events to the TUI via channels --
+
+struct TuiDiscoveryProgress {
+ tx: UnboundedSender<AppEvent>,
+}
+
+impl DiscoveryProgress for TuiDiscoveryProgress {
+ fn on_orgs_discovered(&self, count: usize) {
+ let _ = self
+ .tx
+ .send(AppEvent::Backend(BackendMessage::OrgsDiscovered(count)));
+ }
+
+ fn on_org_started(&self, org_name: &str) {
+ let _ = self.tx.send(AppEvent::Backend(BackendMessage::OrgStarted(
+ org_name.to_string(),
+ )));
+ }
+
+ fn on_org_complete(&self, org_name: &str, repo_count: usize) {
+ let _ = self.tx.send(AppEvent::Backend(BackendMessage::OrgComplete(
+ org_name.to_string(),
+ repo_count,
+ )));
+ }
+
+ fn on_personal_repos_started(&self) {}
+
+ fn on_personal_repos_complete(&self, _count: usize) {}
+
+ fn on_error(&self, message: &str) {
+ let _ = self
+ .tx
+ .send(AppEvent::Backend(BackendMessage::DiscoveryError(
+ message.to_string(),
+ )));
+ }
+}
+ 
+struct TuiCloneProgress {
+ tx: UnboundedSender<AppEvent>,
+}
+
+impl CloneProgress for TuiCloneProgress {
+ fn on_start(&self, repo: &OwnedRepo, _index: usize, _total: usize) {
+ let _ = self.tx.send(AppEvent::Backend(BackendMessage::RepoStarted {
+ repo_name: repo.full_name().to_string(),
+ }));
+ }
+
+ fn on_complete(&self, repo: &OwnedRepo, _index: usize, _total: usize) {
+ let _ = self
+ .tx
+ .send(AppEvent::Backend(BackendMessage::RepoProgress {
+ repo_name: repo.full_name().to_string(),
+ success: true,
+ skipped: false,
+ message: "cloned".to_string(),
+ had_updates: true,
+ is_clone: true,
+ new_commits: None,
+ skip_reason: None,
+ }));
+ }
+
+ fn on_error(&self, repo: &OwnedRepo, error: &str, _index: usize, _total: usize) {
+ let _ = self
+ .tx
+ .send(AppEvent::Backend(BackendMessage::RepoProgress {
+ repo_name: repo.full_name().to_string(),
+ success: false,
+ skipped: false,
+ message: error.to_string(),
+ had_updates: false,
+ is_clone: true,
+ new_commits: None,
+ skip_reason: None,
+ }));
+ }
+
+ fn on_skip(&self, repo: &OwnedRepo, reason: &str, _index: usize, _total: usize) {
+ let _ = self
+ .tx
+ .send(AppEvent::Backend(BackendMessage::RepoProgress {
+ repo_name: repo.full_name().to_string(),
+ success: true,
+ skipped: true,
+ message: format!("skipped: {}", reason),
+ had_updates: false,
+ is_clone: true,
+ new_commits: None,
+ skip_reason: Some(reason.to_string()),
+ }));
+ }
+}
+
+struct TuiSyncProgress {
+ tx: UnboundedSender<AppEvent>,
+}
+
+impl SyncProgress for TuiSyncProgress {
+ fn on_start(&self, repo: &OwnedRepo, _path: &Path, _index: usize, _total: usize) {
+ let _ = self.tx.send(AppEvent::Backend(BackendMessage::RepoStarted {
+ repo_name: repo.full_name().to_string(),
+ }));
+ }
+
+ fn on_fetch_complete(
+ &self,
+ repo: &OwnedRepo,
+ result: &FetchResult,
+ _index: usize,
+ _total: usize,
+ ) {
+ let status = if result.updated {
+ "updated"
+ } else {
+ "up to date"
+ };
+ let _ = self
+ .tx
+ .send(AppEvent::Backend(BackendMessage::RepoProgress { 
+ repo_name: repo.full_name().to_string(), + success: true, + skipped: false, + message: status.to_string(), + had_updates: result.updated, + is_clone: false, + new_commits: result.new_commits, + skip_reason: None, + })); + } + + fn on_pull_complete( + &self, + repo: &OwnedRepo, + result: &PullResult, + _index: usize, + _total: usize, + ) { + let status = if result.fast_forward { + "fast-forward" + } else { + "pulled" + }; + let _ = self + .tx + .send(AppEvent::Backend(BackendMessage::RepoProgress { + repo_name: repo.full_name().to_string(), + success: result.success, + skipped: false, + message: status.to_string(), + had_updates: result.success, + is_clone: false, + new_commits: None, + skip_reason: None, + })); + } + + fn on_error(&self, repo: &OwnedRepo, error: &str, _index: usize, _total: usize) { + let _ = self + .tx + .send(AppEvent::Backend(BackendMessage::RepoProgress { + repo_name: repo.full_name().to_string(), + success: false, + skipped: false, + message: error.to_string(), + had_updates: false, + is_clone: false, + new_commits: None, + skip_reason: None, + })); + } + + fn on_skip(&self, repo: &OwnedRepo, reason: &str, _index: usize, _total: usize) { + let _ = self + .tx + .send(AppEvent::Backend(BackendMessage::RepoProgress { + repo_name: repo.full_name().to_string(), + success: true, + skipped: true, + message: format!("skipped: {}", reason), + had_updates: false, + is_clone: false, + new_commits: None, + skip_reason: Some(reason.to_string()), + })); + } +} + +// -- Spawn functions -- + +/// Spawn an async task to fetch recent commits for a repo (post-sync deep dive). 
+pub fn spawn_commit_fetch(
+ repo_path: std::path::PathBuf,
+ repo_name: String,
+ tx: UnboundedSender<AppEvent>,
+) {
+ tokio::spawn(async move {
+ let commits = tokio::task::spawn_blocking(move || {
+ let git = ShellGit::new();
+ git.recent_commits(&repo_path, 30).unwrap_or_default()
+ })
+ .await
+ .unwrap_or_default();
+
+ let _ = tx.send(AppEvent::Backend(BackendMessage::RepoCommitLog {
+ repo_name,
+ commits,
+ }));
+ });
+}
+
+/// Spawn commit fetches for multiple repos (aggregate changelog).
+pub fn spawn_changelog_fetch(
+ repos: Vec<(String, std::path::PathBuf)>,
+ tx: UnboundedSender<AppEvent>,
+) {
+ for (repo_name, repo_path) in repos {
+ let tx = tx.clone();
+ tokio::spawn(async move {
+ let commits = tokio::task::spawn_blocking(move || {
+ let git = ShellGit::new();
+ git.recent_commits(&repo_path, 30).unwrap_or_default()
+ })
+ .await
+ .unwrap_or_default();
+
+ let _ = tx.send(AppEvent::Backend(BackendMessage::RepoCommitLog {
+ repo_name,
+ commits,
+ }));
+ });
+ }
+}
+
+/// Spawn a backend operation as a Tokio task.
+pub fn spawn_operation(operation: Operation, app: &App, tx: UnboundedSender<AppEvent>) {
+ let config = app.config.clone();
+ let workspace = app.active_workspace.clone();
+ let sync_pull = app.sync_pull;
+
+ match operation {
+ Operation::Sync => {
+ tokio::spawn(async move {
+ run_sync_operation(config, workspace, tx, sync_pull).await;
+ });
+ }
+ Operation::Status => {
+ let workspace = app.active_workspace.clone();
+ let config = app.config.clone();
+ tokio::spawn(async move {
+ run_status_scan(config, workspace, tx).await;
+ });
+ }
+ }
+}
+
+/// Combined sync operation: discover → clone new → fetch/pull existing.
+async fn run_sync_operation(
+ config: Config,
+ workspace: Option<WorkspaceConfig>,
+ tx: UnboundedSender<AppEvent>,
+ pull_mode: bool,
+) {
+ let workspace = match workspace {
+ Some(ws) => ws,
+ None => {
+ let _ = tx.send(AppEvent::Backend(BackendMessage::OperationError(
+ "No workspace selected. 
Run 'gisa setup' to configure one.".to_string(),
+ )));
+ return;
+ }
+ };
+
+ let discovery_progress = TuiDiscoveryProgress { tx: tx.clone() };
+ let prepared = match prepare_sync_workspace(
+ SyncWorkspaceRequest {
+ config: &config,
+ workspace: &workspace,
+ refresh: true,
+ skip_uncommitted: true,
+ pull: pull_mode,
+ concurrency_override: None,
+ create_base_path: true,
+ },
+ &discovery_progress,
+ )
+ .await
+ {
+ Ok(p) => p,
+ Err(e) => {
+ let _ = tx.send(AppEvent::Backend(BackendMessage::OperationError(format!(
+ "{}",
+ e
+ ))));
+ return;
+ }
+ };
+
+ // Send discovery results to populate org browser
+ let _ = tx.send(AppEvent::Backend(BackendMessage::DiscoveryComplete(
+ prepared.repos.clone(),
+ )));
+
+ if prepared.repos.is_empty() {
+ let _ = tx.send(AppEvent::Backend(BackendMessage::OperationComplete(
+ OpSummary::new(),
+ )));
+ return;
+ }
+
+ // Send OperationStarted so the UI transitions to Running state
+ let clone_count = prepared.plan.to_clone.len();
+ let sync_count = prepared.to_sync.len();
+ let total = clone_count + sync_count;
+ let _ = tx.send(AppEvent::Backend(BackendMessage::OperationStarted {
+ operation: Operation::Sync,
+ total,
+ to_clone: clone_count,
+ to_sync: sync_count,
+ }));
+
+ let clone_progress: Arc<dyn CloneProgress> = Arc::new(TuiCloneProgress { tx: tx.clone() });
+ let sync_progress: Arc<dyn SyncProgress> = Arc::new(TuiSyncProgress { tx: tx.clone() });
+ let outcome = execute_prepared_sync(&prepared, false, clone_progress, sync_progress).await;
+
+ let mut combined_summary = OpSummary::new();
+ if let Some(summary) = outcome.clone_summary {
+ combined_summary.success += summary.success;
+ combined_summary.failed += summary.failed;
+ combined_summary.skipped += summary.skipped;
+ }
+ if let Some(summary) = outcome.sync_summary {
+ combined_summary.success += summary.success;
+ combined_summary.failed += summary.failed;
+ combined_summary.skipped += summary.skipped;
+ }
+
+ let _ = tx.send(AppEvent::Backend(BackendMessage::OperationComplete(
+ 
combined_summary,
+ )));
+}
+
+/// Scans local repositories and gets their git status.
+async fn run_status_scan(
+ config: Config,
+ workspace: Option<WorkspaceConfig>,
+ tx: UnboundedSender<AppEvent>,
+) {
+ let workspace = match workspace {
+ Some(ws) => ws,
+ None => {
+ let _ = tx.send(AppEvent::Backend(BackendMessage::OperationError(
+ "No workspace selected.".to_string(),
+ )));
+ return;
+ }
+ };
+
+ let entries = tokio::task::spawn_blocking(move || scan_workspace_status(&config, &workspace))
+ .await
+ .unwrap_or_default();
+
+ let _ = tx.send(AppEvent::Backend(BackendMessage::StatusResults(entries)));
+}
+
+#[cfg(test)]
+#[path = "backend_tests.rs"]
+mod tests;
diff --git a/src/tui/backend_tests.rs b/src/tui/backend_tests.rs
new file mode 100644
index 0000000..ada8c42
--- /dev/null
+++ b/src/tui/backend_tests.rs
@@ -0,0 +1,271 @@
+use super::*;
+use crate::config::Config;
+use crate::git::{FetchResult, PullResult};
+use crate::operations::clone::CloneProgress;
+use crate::operations::sync::SyncProgress;
+use crate::provider::DiscoveryProgress;
+use crate::tui::event::{AppEvent, BackendMessage};
+use crate::types::{OwnedRepo, Repo};
+use tokio::sync::mpsc::error::TryRecvError;
+use tokio::sync::mpsc::unbounded_channel;
+use tokio::time::{timeout, Duration};
+
+fn sample_repo() -> OwnedRepo {
+ OwnedRepo::new("acme", Repo::test("rocket", "acme"))
+}
+
+fn expect_backend_event(event: AppEvent) -> BackendMessage {
+ match event {
+ AppEvent::Backend(msg) => msg,
+ _ => panic!("expected backend event"),
+ }
+}
+
+#[test]
+fn discovery_progress_emits_expected_messages() {
+ let (tx, mut rx) = unbounded_channel();
+ let progress = TuiDiscoveryProgress { tx };
+
+ progress.on_orgs_discovered(2);
+ progress.on_org_started("acme");
+ progress.on_org_complete("acme", 3);
+ progress.on_error("boom");
+
+ match expect_backend_event(rx.try_recv().expect("org count event")) {
+ BackendMessage::OrgsDiscovered(count) => assert_eq!(count, 2),
+ _ => panic!("expected OrgsDiscovered"),
+ }
+
+ match 
expect_backend_event(rx.try_recv().expect("org started event")) { + BackendMessage::OrgStarted(org) => assert_eq!(org, "acme"), + _ => panic!("expected OrgStarted"), + } + + match expect_backend_event(rx.try_recv().expect("org complete event")) { + BackendMessage::OrgComplete(org, count) => { + assert_eq!(org, "acme"); + assert_eq!(count, 3); + } + _ => panic!("expected OrgComplete"), + } + + match expect_backend_event(rx.try_recv().expect("error event")) { + BackendMessage::DiscoveryError(msg) => assert_eq!(msg, "boom"), + _ => panic!("expected DiscoveryError"), + } + + progress.on_personal_repos_started(); + progress.on_personal_repos_complete(1); + assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty))); +} + +#[test] +fn clone_progress_emits_started_complete_error_and_skip() { + let (tx, mut rx) = unbounded_channel(); + let progress = TuiCloneProgress { tx }; + let repo = sample_repo(); + + progress.on_start(&repo, 1, 4); + progress.on_complete(&repo, 1, 4); + progress.on_error(&repo, "clone failed", 2, 4); + progress.on_skip(&repo, "already exists", 3, 4); + + match expect_backend_event(rx.try_recv().expect("started event")) { + BackendMessage::RepoStarted { repo_name } => assert_eq!(repo_name, repo.full_name()), + _ => panic!("expected RepoStarted"), + } + + match expect_backend_event(rx.try_recv().expect("complete event")) { + BackendMessage::RepoProgress { + repo_name, + success, + skipped, + is_clone, + had_updates, + skip_reason, + .. + } => { + assert_eq!(repo_name, repo.full_name()); + assert!(success); + assert!(!skipped); + assert!(is_clone); + assert!(had_updates); + assert!(skip_reason.is_none()); + } + _ => panic!("expected RepoProgress (complete)"), + } + + match expect_backend_event(rx.try_recv().expect("error event")) { + BackendMessage::RepoProgress { + success, + skipped, + message, + is_clone, + .. 
+ } => { + assert!(!success); + assert!(!skipped); + assert_eq!(message, "clone failed"); + assert!(is_clone); + } + _ => panic!("expected RepoProgress (error)"), + } + + match expect_backend_event(rx.try_recv().expect("skip event")) { + BackendMessage::RepoProgress { + success, + skipped, + message, + is_clone, + skip_reason, + .. + } => { + assert!(success); + assert!(skipped); + assert_eq!(message, "skipped: already exists"); + assert!(is_clone); + assert_eq!(skip_reason.as_deref(), Some("already exists")); + } + _ => panic!("expected RepoProgress (skip)"), + } +} + +#[test] +fn sync_progress_emits_fetch_pull_error_and_skip() { + let (tx, mut rx) = unbounded_channel(); + let progress = TuiSyncProgress { tx }; + let repo = sample_repo(); + + progress.on_start(&repo, std::path::Path::new("/tmp"), 1, 3); + + let fetch = FetchResult { + updated: true, + new_commits: Some(5), + }; + progress.on_fetch_complete(&repo, &fetch, 1, 3); + + let pull = PullResult { + success: true, + updated: true, + fast_forward: true, + error: None, + }; + progress.on_pull_complete(&repo, &pull, 2, 3); + progress.on_error(&repo, "fetch failed", 3, 3); + progress.on_skip(&repo, "dirty tree", 3, 3); + + match expect_backend_event(rx.try_recv().expect("started event")) { + BackendMessage::RepoStarted { repo_name } => assert_eq!(repo_name, repo.full_name()), + _ => panic!("expected RepoStarted"), + } + + match expect_backend_event(rx.try_recv().expect("fetch event")) { + BackendMessage::RepoProgress { + success, + skipped, + message, + is_clone, + had_updates, + new_commits, + .. + } => { + assert!(success); + assert!(!skipped); + assert_eq!(message, "updated"); + assert!(!is_clone); + assert!(had_updates); + assert_eq!(new_commits, Some(5)); + } + _ => panic!("expected RepoProgress (fetch)"), + } + + match expect_backend_event(rx.try_recv().expect("pull event")) { + BackendMessage::RepoProgress { + success, + message, + is_clone, + had_updates, + .. 
+ } => { + assert!(success); + assert_eq!(message, "fast-forward"); + assert!(!is_clone); + assert!(had_updates); + } + _ => panic!("expected RepoProgress (pull)"), + } + + match expect_backend_event(rx.try_recv().expect("error event")) { + BackendMessage::RepoProgress { + success, + skipped, + message, + is_clone, + .. + } => { + assert!(!success); + assert!(!skipped); + assert_eq!(message, "fetch failed"); + assert!(!is_clone); + } + _ => panic!("expected RepoProgress (error)"), + } + + match expect_backend_event(rx.try_recv().expect("skip event")) { + BackendMessage::RepoProgress { + success, + skipped, + message, + skip_reason, + .. + } => { + assert!(success); + assert!(skipped); + assert_eq!(message, "skipped: dirty tree"); + assert_eq!(skip_reason.as_deref(), Some("dirty tree")); + } + _ => panic!("expected RepoProgress (skip)"), + } +} + +#[tokio::test] +async fn spawn_operation_sync_without_workspace_emits_operation_error() { + let mut app = App::new(Config::default(), Vec::new()); + app.active_workspace = None; + + let (tx, mut rx) = unbounded_channel(); + spawn_operation(Operation::Sync, &app, tx); + + let event = timeout(Duration::from_secs(1), rx.recv()) + .await + .expect("timed out waiting for backend message") + .expect("channel closed unexpectedly"); + + match expect_backend_event(event) { + BackendMessage::OperationError(msg) => { + assert!(msg.contains("No workspace selected")); + } + _ => panic!("expected OperationError"), + } +} + +#[tokio::test] +async fn spawn_operation_status_without_workspace_emits_operation_error() { + let mut app = App::new(Config::default(), Vec::new()); + app.active_workspace = None; + + let (tx, mut rx) = unbounded_channel(); + spawn_operation(Operation::Status, &app, tx); + + let event = timeout(Duration::from_secs(1), rx.recv()) + .await + .expect("timed out waiting for backend message") + .expect("channel closed unexpectedly"); + + match expect_backend_event(event) { + BackendMessage::OperationError(msg) => { + 
assert!(msg.contains("No workspace selected"));
+        }
+        _ => panic!("expected OperationError"),
+    }
+}
diff --git a/src/tui/event.rs b/src/tui/event.rs
new file mode 100644
index 0000000..9c2e47f
--- /dev/null
+++ b/src/tui/event.rs
@@ -0,0 +1,132 @@
+//! Event system: merges terminal input and backend notifications.
+
+use crossterm::event::{self, Event as CtEvent, KeyEvent};
+use std::time::Duration;
+use tokio::sync::mpsc;
+use tracing::warn;
+
+use crate::types::{OpSummary, OwnedRepo};
+
+use super::app::{CheckEntry, Operation, RepoEntry};
+
+/// Events that the TUI loop processes.
+#[derive(Debug)]
+pub enum AppEvent {
+    /// A keyboard event from the terminal.
+    Terminal(KeyEvent),
+    /// Terminal resize.
+    Resize(u16, u16),
+    /// Backend sent a progress update.
+    Backend(BackendMessage),
+    /// Periodic tick for animations/spinners.
+    Tick,
+}
+
+/// Messages from backend async operations.
+#[derive(Debug, Clone)]
+pub enum BackendMessage {
+    /// Discovery: orgs found.
+    OrgsDiscovered(usize),
+    /// Discovery: processing an org.
+    OrgStarted(String),
+    /// Discovery: org complete with N repos.
+    OrgComplete(String, usize),
+    /// Discovery complete with full repo list.
+    DiscoveryComplete(Vec<OwnedRepo>),
+    /// Discovery failed.
+    DiscoveryError(String),
+    /// Operation phase started with total and per-phase breakdown.
+    OperationStarted {
+        operation: Operation,
+        total: usize,
+        to_clone: usize,
+        to_sync: usize,
+    },
+    /// A repo started processing (for live worker slots).
+    RepoStarted { repo_name: String },
+    /// Operation progress: one repo processed.
+    RepoProgress {
+        repo_name: String,
+        success: bool,
+        skipped: bool,
+        message: String,
+        /// Whether this repo had new commits.
+        had_updates: bool,
+        /// Whether this was a clone (not a sync).
+        is_clone: bool,
+        /// Number of new commits fetched (if known).
+        new_commits: Option<usize>,
+        /// Structured skip reason (if skipped).
+        skip_reason: Option<String>,
+    },
+    /// Commit log for a specific repo (post-sync deep dive).
+    RepoCommitLog {
+        repo_name: String,
+        commits: Vec<String>,
+    },
+    /// Operation complete.
+    OperationComplete(OpSummary),
+    /// Operation error.
+    OperationError(String),
+    /// Status scan results.
+    StatusResults(Vec<RepoEntry>),
+    /// Init: config file created successfully.
+    InitConfigCreated(String),
+    /// Init: config creation failed.
+    InitConfigError(String),
+    /// Default workspace was set/cleared successfully.
+    DefaultWorkspaceUpdated(Option<String>),
+    /// Default workspace operation failed.
+    DefaultWorkspaceError(String),
+    /// Requirement check results (background).
+    CheckResults(Vec<CheckEntry>),
+}
+
+/// Spawn the terminal event reader in a blocking thread.
+/// Returns a receiver for AppEvents and a sender for backend to push messages.
+pub fn spawn_event_loop(
+    tick_rate: Duration,
+) -> (
+    mpsc::UnboundedReceiver<AppEvent>,
+    mpsc::UnboundedSender<AppEvent>,
+) {
+    let (tx, rx) = mpsc::unbounded_channel();
+    let event_tx = tx.clone();
+
+    // Terminal event reader (crossterm is blocking)
+    tokio::task::spawn_blocking(move || {
+        loop {
+            let has_event = match event::poll(tick_rate) {
+                Ok(v) => v,
+                Err(e) => {
+                    warn!(error = %e, "Terminal poll failed; stopping event loop");
+                    break;
+                }
+            };
+
+            if has_event {
+                if let Ok(ev) = event::read() {
+                    let app_event = match ev {
+                        CtEvent::Key(key) => AppEvent::Terminal(key),
+                        CtEvent::Resize(w, h) => AppEvent::Resize(w, h),
+                        _ => continue,
+                    };
+                    if event_tx.send(app_event).is_err() {
+                        break;
+                    }
+                }
+            } else {
+                // Tick on timeout
+                if event_tx.send(AppEvent::Tick).is_err() {
+                    break;
+                }
+            }
+        }
+    });
+
+    (rx, tx)
+}
+
+#[cfg(test)]
+#[path = "event_tests.rs"]
+mod tests;
diff --git a/src/tui/event_tests.rs b/src/tui/event_tests.rs
new file mode 100644
index 0000000..9b323f9
--- /dev/null
+++ b/src/tui/event_tests.rs
@@ -0,0 +1,102 @@
+use super::*;
+use crate::tui::app::{CheckEntry, Operation, RepoEntry};
+use crate::types::OpSummary;
+use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};
+use std::path::PathBuf;
+
+fn sample_repo() ->
OwnedRepo { + OwnedRepo::new("acme", crate::types::Repo::test("rocket", "acme")) +} + +fn sample_repo_entry() -> RepoEntry { + RepoEntry { + owner: "acme".to_string(), + name: "rocket".to_string(), + full_name: "acme/rocket".to_string(), + path: PathBuf::from("/tmp/acme/rocket"), + branch: Some("main".to_string()), + is_uncommitted: false, + ahead: 0, + behind: 0, + staged_count: 0, + unstaged_count: 0, + untracked_count: 0, + } +} + +#[test] +fn app_event_variants_construct() { + let key = KeyEvent::new(KeyCode::Char('q'), KeyModifiers::NONE); + + let terminal = AppEvent::Terminal(key); + assert!(matches!(terminal, AppEvent::Terminal(_))); + + let resize = AppEvent::Resize(120, 40); + assert!(matches!(resize, AppEvent::Resize(120, 40))); + + let backend = AppEvent::Backend(BackendMessage::OperationError("oops".to_string())); + assert!(matches!(backend, AppEvent::Backend(_))); + + assert!(matches!(AppEvent::Tick, AppEvent::Tick)); +} + +#[test] +fn backend_message_variants_construct_and_clone() { + let repo = sample_repo(); + let status_rows = vec![sample_repo_entry()]; + let checks = vec![CheckEntry { + name: "git".to_string(), + passed: true, + message: "installed".to_string(), + critical: true, + }]; + + let msgs = vec![ + BackendMessage::OrgsDiscovered(1), + BackendMessage::OrgStarted("acme".to_string()), + BackendMessage::OrgComplete("acme".to_string(), 2), + BackendMessage::DiscoveryComplete(vec![repo.clone()]), + BackendMessage::DiscoveryError("err".to_string()), + BackendMessage::OperationStarted { + operation: Operation::Sync, + total: 3, + to_clone: 1, + to_sync: 2, + }, + BackendMessage::RepoStarted { + repo_name: repo.full_name().to_string(), + }, + BackendMessage::RepoProgress { + repo_name: repo.full_name().to_string(), + success: true, + skipped: false, + message: "ok".to_string(), + had_updates: true, + is_clone: false, + new_commits: Some(3), + skip_reason: None, + }, + BackendMessage::RepoCommitLog { + repo_name: repo.full_name().to_string(), + 
            commits: vec!["abc".to_string()],
+        },
+        BackendMessage::OperationComplete(OpSummary {
+            success: 1,
+            failed: 0,
+            skipped: 0,
+        }),
+        BackendMessage::OperationError("err".to_string()),
+        BackendMessage::StatusResults(status_rows),
+        BackendMessage::InitConfigCreated("/tmp/config.toml".to_string()),
+        BackendMessage::InitConfigError("failed".to_string()),
+        BackendMessage::DefaultWorkspaceUpdated(Some("ws".to_string())),
+        BackendMessage::DefaultWorkspaceError("bad".to_string()),
+        BackendMessage::CheckResults(checks),
+    ];
+
+    for msg in msgs {
+        let cloned = msg.clone();
+        let dbg = format!("{:?}", cloned);
+        assert!(!dbg.is_empty());
+    }
+}
diff --git a/src/tui/handler.rs b/src/tui/handler.rs
new file mode 100644
index 0000000..ac645f1
--- /dev/null
+++ b/src/tui/handler.rs
@@ -0,0 +1,570 @@
+//! Input handler: keyboard events → state mutations (the "Update").
+
+use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};
+use tokio::sync::mpsc::UnboundedSender;
+
+use super::app::{
+    App, CheckEntry, LogFilter, Operation, OperationState, Screen, SyncHistoryEntry, SyncLogEntry,
+    SyncLogStatus,
+};
+use super::event::{AppEvent, BackendMessage};
+use super::screens;
+use crate::cache::SyncHistoryManager;
+use crate::config::WorkspaceManager;
+use crate::domain::RepoPathTemplate;
+use crate::setup::state::{SetupOutcome, SetupStep};
+
+const MAX_THROUGHPUT_SAMPLES: usize = 240;
+const MAX_LOG_LINES: usize = 5_000;
+
+/// Handle an incoming event, updating app state and optionally spawning backend work.
+pub async fn handle_event(app: &mut App, event: AppEvent, backend_tx: &UnboundedSender<AppEvent>) {
+    match event {
+        AppEvent::Terminal(key) => handle_key(app, key, backend_tx).await,
+        AppEvent::Backend(msg) => handle_backend_message(app, msg, backend_tx),
+        AppEvent::Tick => {
+            let sync_in_progress = matches!(
+                &app.operation_state,
+                OperationState::Discovering {
+                    operation: Operation::Sync,
+                    ..
+                } | OperationState::Running {
+                    operation: Operation::Sync,
+                    ..
+                }
+            );
+
+            // Keep sync animation/throughput sampling active even when progress popup is hidden.
+            if sync_in_progress {
+                app.tick_count = app.tick_count.wrapping_add(1);
+
+                // Sample throughput every 10 ticks (1 second at 100ms tick rate)
+                if app.tick_count.is_multiple_of(10) {
+                    if let OperationState::Running {
+                        operation: Operation::Sync,
+                        completed,
+                        ref mut throughput_samples,
+                        ref mut last_sample_completed,
+                        ..
+                    } = app.operation_state
+                    {
+                        let delta = completed.saturating_sub(*last_sample_completed) as u64;
+                        throughput_samples.push(delta);
+                        if throughput_samples.len() > MAX_THROUGHPUT_SAMPLES {
+                            let drop_count = throughput_samples.len() - MAX_THROUGHPUT_SAMPLES;
+                            throughput_samples.drain(0..drop_count);
+                        }
+                        *last_sample_completed = completed;
+                    }
+                }
+            }
+            // Drive setup wizard tick and org discovery on tick
+            if app.screen == Screen::WorkspaceSetup {
+                if let Some(ref mut setup) = app.setup_state {
+                    setup.tick_count = setup.tick_count.wrapping_add(1);
+                    if setup.step == SetupStep::SelectOrgs && setup.org_loading {
+                        crate::setup::handler::handle_key(
+                            setup,
+                            KeyEvent::new(KeyCode::Null, KeyModifiers::NONE),
+                        )
+                        .await;
+                    }
+                }
+            }
+            // Run background requirement checks when on Dashboard
+            if app.screen == Screen::Dashboard
+                && app.check_results.is_empty()
+                && !app.checks_loading
+            {
+                app.checks_loading = true;
+                let tx = backend_tx.clone();
+                tokio::spawn(async move {
+                    let results = crate::checks::check_requirements().await;
+                    let entries: Vec<CheckEntry> = results
+                        .into_iter()
+                        .map(|r| CheckEntry {
+                            name: r.name,
+                            passed: r.passed,
+                            message: r.message,
+                            critical: r.critical,
+                        })
+                        .collect();
+                    let _ = tx.send(AppEvent::Backend(BackendMessage::CheckResults(entries)));
+                });
+            }
+            // Auto-trigger status scan when data is stale or missing
+            let refresh_interval = app
+                .active_workspace
+                .as_ref()
+                .and_then(|ws| ws.refresh_interval)
+                .unwrap_or(app.config.refresh_interval);
+            if app.screen == Screen::Dashboard
+                && app.active_workspace.is_some()
+                && !app.status_loading
+                && !sync_in_progress
+                && app
+                    .last_status_scan
+                    .is_none_or(|t| t.elapsed().as_secs() >= refresh_interval)
+            {
+                app.status_loading = true;
+                super::backend::spawn_operation(Operation::Status, app, backend_tx.clone());
+            }
+        }
+        AppEvent::Resize(_, _) => {} // ratatui handles resize
+    }
+}
+
+async fn handle_key(app: &mut App, key: KeyEvent, backend_tx: &UnboundedSender<AppEvent>) {
+    // Clear error message on any keypress
+    app.error_message = None;
+
+    // If filter input is active, handle text input
+    if app.filter_active {
+        match key.code {
+            KeyCode::Esc => {
+                app.filter_active = false;
+                app.filter_text.clear();
+            }
+            KeyCode::Enter => {
+                app.filter_active = false;
+            }
+            KeyCode::Backspace => {
+                app.filter_text.pop();
+            }
+            KeyCode::Char(c) => {
+                app.filter_text.push(c);
+            }
+            _ => {}
+        }
+        return;
+    }
+
+    // Global keybindings
+    if key.modifiers.contains(KeyModifiers::CONTROL) && key.code == KeyCode::Char('c') {
+        app.should_quit = true;
+        return;
+    }
+
+    if key.code == KeyCode::Char('q') {
+        app.should_quit = true;
+        return;
+    }
+
+    // WorkspaceSetup handles its own screen-specific keys.
+    if app.screen == Screen::WorkspaceSetup {
+        handle_setup_wizard_key(app, key).await;
+        return;
+    }
+
+    if key.code == KeyCode::Esc {
+        // On Sync screen, collapse expanded entry before navigating back
+        if app.screen == Screen::Sync && app.expanded_repo.is_some() {
+            app.expanded_repo = None;
+            app.repo_commits.clear();
+            return;
+        }
+        // Ensure Sync can always minimize back to Dashboard.
+ if app.screen == Screen::Sync && app.screen_stack.is_empty() { + app.screen = Screen::Dashboard; + return; + } + // Don't go back from entry points (no screen stack) + if app.screen == Screen::SystemCheck { + return; + } + // Workspace screen: only go back if navigated to (has screen stack) + if app.screen == Screen::Workspaces && app.screen_stack.is_empty() { + return; + } + app.go_back(); + return; + } + + // Screen-specific keybindings + match app.screen { + Screen::SystemCheck => screens::system_check::handle_key(app, key, backend_tx).await, + Screen::WorkspaceSetup => unreachable!(), // handled above + Screen::Workspaces => screens::workspaces::handle_key(app, key, backend_tx).await, + Screen::Dashboard => screens::dashboard::handle_key(app, key, backend_tx).await, + Screen::Sync => screens::sync::handle_key(app, key, backend_tx), + Screen::Settings => screens::settings::handle_key(app, key), + } +} + +async fn handle_setup_wizard_key(app: &mut App, key: KeyEvent) { + let Some(ref mut setup) = app.setup_state else { + return; + }; + + crate::setup::handler::handle_key(setup, key).await; + + if setup.should_quit { + if matches!(setup.outcome, Some(SetupOutcome::Completed)) { + // Reload workspaces and go to dashboard + match WorkspaceManager::list() { + Ok(workspaces) => { + app.workspaces = workspaces; + if let Some(ws) = app.workspaces.first().cloned() { + app.base_path = Some(ws.expanded_base_path()); + app.sync_history = SyncHistoryManager::for_workspace(&ws.name) + .and_then(|m| m.load()) + .unwrap_or_default(); + app.active_workspace = Some(ws); + } + } + Err(e) => { + app.error_message = Some(format!("Failed to load workspaces: {}", e)); + app.workspaces.clear(); + app.base_path = None; + app.active_workspace = None; + app.sync_history.clear(); + } + } + app.setup_state = None; + app.screen = Screen::Dashboard; + app.screen_stack.clear(); + } else { + // Cancelled — return to previous screen when available. 
+            app.setup_state = None;
+            if app.screen_stack.is_empty() {
+                app.screen = Screen::SystemCheck;
+            } else {
+                app.go_back();
+            }
+        }
+    }
+}
+
+/// Compute the filesystem path for a repo from its full name (e.g. "org/repo").
+/// Mirrors `DiscoveryOrchestrator::compute_path()` logic using workspace config.
+fn compute_repo_path(app: &App, repo_name: &str) -> Option<std::path::PathBuf> {
+    let ws = app.active_workspace.as_ref()?;
+    let base_path = ws.expanded_base_path();
+    let template = ws
+        .structure
+        .clone()
+        .unwrap_or_else(|| app.config.structure.clone());
+    let provider_name = ws.provider.kind.slug().to_string();
+
+    RepoPathTemplate::new(template).render_full_name(&base_path, &provider_name, repo_name)
+}
+
+fn handle_backend_message(
+    app: &mut App,
+    msg: BackendMessage,
+    backend_tx: &UnboundedSender<AppEvent>,
+) {
+    match msg {
+        BackendMessage::OrgsDiscovered(count) => {
+            app.operation_state = OperationState::Discovering {
+                operation: Operation::Sync,
+                message: format!("Found {} organizations", count),
+            };
+        }
+        BackendMessage::OrgStarted(name) => {
+            app.operation_state = OperationState::Discovering {
+                operation: Operation::Sync,
+                message: format!("Discovering: {}", name),
+            };
+        }
+        BackendMessage::OrgComplete(name, count) => {
+            app.log_lines
+                .push(format!("[ok] {} ({} repos)", name, count));
+        }
+        BackendMessage::DiscoveryComplete(repos) => {
+            // Populate org data
+            let mut by_org: std::collections::HashMap<String, Vec<OwnedRepo>> =
+                std::collections::HashMap::new();
+            for repo in &repos {
+                by_org
+                    .entry(repo.owner.clone())
+                    .or_default()
+                    .push(repo.clone());
+            }
+            let mut org_names: Vec<String> = by_org.keys().cloned().collect();
+            org_names.sort();
+            app.orgs = org_names;
+            app.repos_by_org = by_org;
+            app.all_repos = repos;
+        }
+        BackendMessage::DiscoveryError(msg) => {
+            app.operation_state = OperationState::Idle;
+            app.error_message = Some(msg);
+        }
+        BackendMessage::OperationStarted {
+            operation,
+            total,
+            to_clone,
+            to_sync,
+        } => {
+            app.log_lines.clear();
+
app.sync_log_entries.clear(); + app.log_filter = LogFilter::All; + app.sync_log_index = 0; + app.expanded_repo = None; + app.repo_commits.clear(); + app.show_sync_history = false; + app.operation_state = OperationState::Running { + operation, + total, + completed: 0, + failed: 0, + skipped: 0, + current_repo: String::new(), + with_updates: 0, + cloned: 0, + synced: 0, + to_clone, + to_sync, + total_new_commits: 0, + started_at: std::time::Instant::now(), + active_repos: Vec::new(), + throughput_samples: Vec::new(), + last_sample_completed: 0, + }; + } + BackendMessage::RepoStarted { repo_name } => { + if let OperationState::Running { + ref mut active_repos, + .. + } = app.operation_state + { + active_repos.push(repo_name); + } + } + BackendMessage::RepoProgress { + repo_name, + success, + skipped, + message, + had_updates, + is_clone, + new_commits, + skip_reason: _, + } => { + if let OperationState::Running { + ref mut completed, + ref mut failed, + skipped: ref mut skip_count, + ref mut current_repo, + ref mut with_updates, + ref mut cloned, + ref mut synced, + ref mut total_new_commits, + ref mut active_repos, + .. 
+ } = app.operation_state + { + *completed += 1; + *current_repo = repo_name.clone(); + + // Remove from active workers + active_repos.retain(|r| r != &repo_name); + + if skipped { + *skip_count += 1; + } else if !success { + *failed += 1; + } else { + if is_clone { + *cloned += 1; + } else { + *synced += 1; + } + if had_updates { + *with_updates += 1; + if let Some(n) = new_commits { + *total_new_commits += n; + } + } + } + } + + // Build structured log entry + let log_status = if !success { + SyncLogStatus::Failed + } else if skipped { + SyncLogStatus::Skipped + } else if is_clone { + SyncLogStatus::Cloned + } else if had_updates { + SyncLogStatus::Updated + } else { + SyncLogStatus::Success + }; + + app.sync_log_entries.push(SyncLogEntry { + repo_name: repo_name.clone(), + status: log_status, + message: message.clone(), + had_updates, + is_clone, + new_commits, + path: compute_repo_path(app, &repo_name), + }); + + // Build legacy log line with enriched prefixes + let prefix = match log_status { + SyncLogStatus::Failed => "[!!]", + SyncLogStatus::Skipped => "[--]", + SyncLogStatus::Cloned => "[++]", + SyncLogStatus::Updated => "[**]", + SyncLogStatus::Success => "[ok]", + }; + + let commit_info = if had_updates { + if let Some(n) = new_commits { + if n > 0 { + format!(" ({} new commits)", n) + } else { + String::new() + } + } else { + String::new() + } + } else { + String::new() + }; + + if app.log_lines.len() >= MAX_LOG_LINES { + let drop_count = app.log_lines.len() + 1 - MAX_LOG_LINES; + app.log_lines.drain(0..drop_count); + app.scroll_offset = app.scroll_offset.saturating_sub(drop_count); + } + app.log_lines.push(format!( + "{} {} - {}{}", + prefix, repo_name, message, commit_info + )); + // Auto-scroll to bottom + app.scroll_offset = app.log_lines.len().saturating_sub(1); + } + BackendMessage::OperationComplete(summary) => { + // Extract accumulated metrics from Running state before transitioning + let (op, wu, cl, sy, tnc, dur) = match &app.operation_state { 
+ OperationState::Running { + operation, + with_updates, + cloned, + synced, + total_new_commits, + started_at, + .. + } => ( + *operation, + *with_updates, + *cloned, + *synced, + *total_new_commits, + started_at.elapsed().as_secs_f64(), + ), + _ => (Operation::Sync, 0, 0, 0, 0, 0.0), + }; + + // Update last_synced after a successful sync + if op == Operation::Sync { + let now = chrono::Utc::now().to_rfc3339(); + if let Some(ref mut ws) = app.active_workspace { + ws.last_synced = Some(now.clone()); + let _ = WorkspaceManager::save(ws); + if let Some(entry) = app.workspaces.iter_mut().find(|w| w.name == ws.name) { + entry.last_synced = Some(now.clone()); + } + } + + // Save to sync history + app.sync_history.push(SyncHistoryEntry { + timestamp: now, + duration_secs: dur, + success: summary.success, + failed: summary.failed, + skipped: summary.skipped, + with_updates: wu, + cloned: cl, + total_new_commits: tnc, + }); + // Cap in-memory history + if app.sync_history.len() > 50 { + app.sync_history.remove(0); + } + + // Persist history to disk + if let Some(ref ws) = app.active_workspace { + if let Ok(manager) = SyncHistoryManager::for_workspace(&ws.name) { + let _ = manager.save(&app.sync_history); + } + } + + // Auto-trigger status scan so dashboard is fresh + super::backend::spawn_operation(Operation::Status, app, backend_tx.clone()); + } + + // Default to Updated filter if there were updates, else All + app.log_filter = if wu > 0 || cl > 0 { + LogFilter::Updated + } else { + LogFilter::All + }; + app.sync_log_index = 0; + + app.operation_state = OperationState::Finished { + operation: op, + summary, + with_updates: wu, + cloned: cl, + synced: sy, + total_new_commits: tnc, + duration_secs: dur, + }; + } + BackendMessage::OperationError(msg) => { + app.operation_state = OperationState::Idle; + app.error_message = Some(msg); + } + BackendMessage::StatusResults(entries) => { + app.local_repos = entries; + if matches!( + app.operation_state, + OperationState::Running { 
+ operation: Operation::Status, + .. + } + ) { + app.operation_state = OperationState::Idle; + } + app.status_loading = false; + app.last_status_scan = Some(std::time::Instant::now()); + } + BackendMessage::RepoCommitLog { repo_name, commits } => { + // Single repo deep dive (Enter key) + if app.expanded_repo.as_deref() == Some(&repo_name) { + app.repo_commits = commits.clone(); + } + // Changelog aggregation (c key) + if app.log_filter == LogFilter::Changelog { + app.changelog_commits.insert(repo_name, commits); + app.changelog_loaded += 1; + } + } + BackendMessage::InitConfigCreated(path) => { + app.config_created = true; + app.config_path_display = Some(path); + } + BackendMessage::InitConfigError(msg) => { + app.error_message = Some(msg); + } + BackendMessage::DefaultWorkspaceUpdated(name) => { + app.config.default_workspace = name; + } + BackendMessage::DefaultWorkspaceError(msg) => { + app.error_message = Some(msg); + } + BackendMessage::CheckResults(entries) => { + app.check_results = entries; + app.checks_loading = false; + } + } +} + +#[cfg(test)] +#[path = "handler_tests.rs"] +mod tests; diff --git a/src/tui/handler_tests.rs b/src/tui/handler_tests.rs new file mode 100644 index 0000000..77ef8e1 --- /dev/null +++ b/src/tui/handler_tests.rs @@ -0,0 +1,66 @@ +use super::*; +use crate::config::{Config, WorkspaceConfig}; +use crate::setup::state::{SetupState, SetupStep}; +use crossterm::event::{KeyCode, KeyEvent, KeyModifiers}; +use tokio::sync::mpsc::unbounded_channel; + +#[tokio::test] +async fn q_quits_immediately() { + let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws"); + let mut app = App::new(Config::default(), vec![ws]); + let (tx, _rx) = unbounded_channel(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('q'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert!(app.should_quit); +} + +#[tokio::test] +async fn setup_cancel_returns_to_previous_screen_when_present() { + let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws"); + let mut 
app = App::new(Config::default(), vec![ws]); + app.screen = Screen::WorkspaceSetup; + app.screen_stack = vec![Screen::SystemCheck, Screen::Workspaces]; + app.setup_state = Some(SetupState::new("~/Git-Same/GitHub")); + + handle_setup_wizard_key(&mut app, KeyEvent::new(KeyCode::Esc, KeyModifiers::NONE)).await; + + assert!(app.setup_state.is_none()); + assert_eq!(app.screen, Screen::Workspaces); + assert_eq!(app.screen_stack, vec![Screen::SystemCheck]); +} + +#[tokio::test] +async fn setup_cancel_without_history_falls_back_to_system_check() { + let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws"); + let mut app = App::new(Config::default(), vec![ws]); + app.screen = Screen::WorkspaceSetup; + app.screen_stack.clear(); + app.setup_state = Some(SetupState::new("~/Git-Same/GitHub")); + + handle_setup_wizard_key(&mut app, KeyEvent::new(KeyCode::Esc, KeyModifiers::NONE)).await; + + assert!(app.setup_state.is_none()); + assert_eq!(app.screen, Screen::SystemCheck); + assert!(app.screen_stack.is_empty()); +} + +#[tokio::test] +async fn setup_right_moves_to_next_step() { + let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws"); + let mut app = App::new(Config::default(), vec![ws]); + app.screen = Screen::WorkspaceSetup; + app.setup_state = Some(SetupState::new("~/Git-Same/GitHub")); + + handle_setup_wizard_key(&mut app, KeyEvent::new(KeyCode::Right, KeyModifiers::NONE)).await; + + assert_eq!( + app.setup_state.as_ref().map(|s| s.step), + Some(SetupStep::Authenticate) + ); +} diff --git a/src/tui/mod.rs b/src/tui/mod.rs new file mode 100644 index 0000000..78aca15 --- /dev/null +++ b/src/tui/mod.rs @@ -0,0 +1,88 @@ +//! Full-screen TUI for git-same. +//! +//! Launched when `gisa` is run with no subcommand. 
+
+pub mod app;
+pub mod backend;
+pub mod event;
+pub mod handler;
+pub mod screens;
+pub mod ui;
+pub mod widgets;
+
+use crate::config::{Config, WorkspaceManager};
+use crate::errors::Result;
+use app::App;
+use crossterm::{
+    execute,
+    terminal::{disable_raw_mode, enable_raw_mode, EnterAlternateScreen, LeaveAlternateScreen},
+};
+use ratatui::backend::CrosstermBackend;
+use ratatui::Terminal;
+use std::io;
+use std::time::Duration;
+
+/// Run the TUI application.
+pub async fn run_tui(config: Config) -> Result<()> {
+    // Setup terminal
+    enable_raw_mode()?;
+    let mut stdout = io::stdout();
+    if let Err(e) = execute!(stdout, EnterAlternateScreen) {
+        let _ = disable_raw_mode();
+        return Err(e.into());
+    }
+    let backend = CrosstermBackend::new(stdout);
+    let mut terminal = match Terminal::new(backend) {
+        Ok(terminal) => terminal,
+        Err(e) => {
+            let _ = disable_raw_mode();
+            let mut stdout = io::stdout();
+            let _ = execute!(stdout, LeaveAlternateScreen);
+            return Err(e.into());
+        }
+    };
+
+    let result = async {
+        // Load workspaces
+        let workspaces = WorkspaceManager::list()?;
+
+        // Create app state
+        let mut app = App::new(config, workspaces);
+
+        // Start event loop
+        let tick_rate = Duration::from_millis(100);
+        let (mut rx, backend_tx) = event::spawn_event_loop(tick_rate);
+
+        // Main loop
+        run_app(&mut terminal, &mut app, &mut rx, &backend_tx).await
+    }
+    .await;
+
+    // Restore terminal (always, even on error)
+    let _ = disable_raw_mode();
+    let _ = execute!(terminal.backend_mut(), LeaveAlternateScreen);
+    let _ = terminal.show_cursor();
+
+    result
+}
+
+async fn run_app(
+    terminal: &mut Terminal<CrosstermBackend<io::Stdout>>,
+    app: &mut App,
+    rx: &mut tokio::sync::mpsc::UnboundedReceiver<event::AppEvent>,
+    backend_tx: &tokio::sync::mpsc::UnboundedSender<event::AppEvent>,
+) -> Result<()> {
+    loop {
+        terminal.draw(|frame| ui::render(app, frame))?;
+
+        match rx.recv().await {
+            Some(event) => handler::handle_event(app, event, backend_tx).await,
+            None => break,
+        }
+
+        if app.should_quit {
+            break;
+        }
+    }
+
+    Ok(())
+}
diff --git a/src/tui/screens/dashboard.rs b/src/tui/screens/dashboard.rs
new file mode 100644
index 0000000..5053097
--- /dev/null
+++ b/src/tui/screens/dashboard.rs
@@ -0,0 +1,1158 @@
+//! Dashboard screen — home view with summary stats and quick-action hotkeys.
+
+use std::collections::{HashMap, HashSet};
+
+use ratatui::{
+    layout::{Constraint, Layout, Position, Rect},
+    style::{Color, Modifier, Style},
+    text::{Line, Span},
+    widgets::{Block, BorderType, Borders, Paragraph, Row, Table, TableState},
+    Frame,
+};
+
+use chrono::DateTime;
+
+use crossterm::event::{KeyCode, KeyEvent};
+use tokio::sync::mpsc::UnboundedSender;
+
+use crate::banner::render_banner;
+use crate::tui::app::{App, Operation, OperationState, RepoEntry, Screen};
+use crate::tui::event::AppEvent;
+
+// ── Key handler ─────────────────────────────────────────────────────────────
+
+pub async fn handle_key(app: &mut App, key: KeyEvent, backend_tx: &UnboundedSender<AppEvent>) {
+    match key.code {
+        KeyCode::Char('s') => {
+            start_sync_operation(app, backend_tx);
+        }
+        KeyCode::Char('p') => {
+            show_sync_progress(app);
+        }
+        KeyCode::Char('t') => {
+            app.last_status_scan = None; // Force immediate refresh
+            app.status_loading = true;
+            start_operation(app, Operation::Status, backend_tx);
+        }
+        // Tab shortcuts
+        KeyCode::Char('o') => {
+            app.stat_index = 0;
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Char('r') => {
+            app.stat_index = 1;
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Char('c') => {
+            app.stat_index = 2;
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Char('b') => {
+            app.stat_index = 3;
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Char('a') => {
+            app.stat_index = 4;
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Char('u') => {
+            app.stat_index = 5;
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Char('e') => {
+            app.navigate_to(Screen::Settings);
+        }
+        KeyCode::Char('w') => {
+
            app.navigate_to(Screen::Workspaces);
+        }
+        KeyCode::Char('i') => {
+            app.navigate_to(Screen::SystemCheck);
+        }
+        KeyCode::Char('/') => {
+            app.filter_active = true;
+            app.filter_text.clear();
+            app.stat_index = 1;
+            app.dashboard_table_state.select(Some(0));
+        }
+        // Tab navigation (left/right between stat boxes)
+        KeyCode::Left => {
+            app.stat_index = app.stat_index.saturating_sub(1);
+            app.dashboard_table_state.select(Some(0));
+        }
+        KeyCode::Right => {
+            if app.stat_index < 5 {
+                app.stat_index += 1;
+                app.dashboard_table_state.select(Some(0));
+            }
+        }
+        // List navigation (up/down within tab content)
+        KeyCode::Down => {
+            let count = tab_item_count(app);
+            if count > 0 {
+                let current = app.dashboard_table_state.selected().unwrap_or(0);
+                if current + 1 < count {
+                    app.dashboard_table_state.select(Some(current + 1));
+                }
+            }
+        }
+        KeyCode::Up => {
+            let count = tab_item_count(app);
+            if count > 0 {
+                let current = app.dashboard_table_state.selected().unwrap_or(0);
+                app.dashboard_table_state
+                    .select(Some(current.saturating_sub(1)));
+            }
+        }
+        KeyCode::Enter => {
+            // Open the selected repo's folder
+            if let Some(path) = selected_repo_path(app) {
+                let _ = std::process::Command::new("open").arg(&path).spawn();
+            }
+        }
+        _ => {}
+    }
+}
+
+fn start_operation(app: &mut App, operation: Operation, backend_tx: &UnboundedSender<AppEvent>) {
+    if matches!(
+        app.operation_state,
+        OperationState::Discovering { .. } | OperationState::Running { .. }
+    ) {
+        app.error_message = Some("An operation is already running".to_string());
+        return;
+    }
+
+    app.tick_count = 0;
+    app.operation_state = OperationState::Discovering {
+        operation,
+        message: format!("Starting {}...", operation),
+    };
+    app.log_lines.clear();
+    app.scroll_offset = 0;
+
+    crate::tui::backend::spawn_operation(operation, app, backend_tx.clone());
+}
+
+pub(crate) fn start_sync_operation(app: &mut App, backend_tx: &UnboundedSender<AppEvent>) {
+    start_operation(app, Operation::Sync, backend_tx);
+}
+
+pub(crate) fn show_sync_progress(app: &mut App) {
+    if !matches!(app.screen, Screen::Sync) {
+        app.screen_stack.push(app.screen);
+        app.screen = Screen::Sync;
+    }
+}
+
+pub(crate) fn hide_sync_progress(app: &mut App) {
+    if !matches!(app.screen, Screen::Sync) {
+        return;
+    }
+
+    if app.screen_stack.is_empty() {
+        app.screen = Screen::Dashboard;
+    } else {
+        app.go_back();
+    }
+}
+
+fn tab_item_count(app: &App) -> usize {
+    match app.stat_index {
+        0 => app
+            .local_repos
+            .iter()
+            .map(|r| r.owner.as_str())
+            .collect::<HashSet<_>>()
+            .len(),
+        1 => {
+            if app.filter_text.is_empty() {
+                app.local_repos.len()
+            } else {
+                let ft = app.filter_text.to_lowercase();
+                app.local_repos
+                    .iter()
+                    .filter(|r| r.full_name.to_lowercase().contains(&ft))
+                    .count()
+            }
+        }
+        2 => app
+            .local_repos
+            .iter()
+            .filter(|r| !r.is_uncommitted && r.behind == 0 && r.ahead == 0)
+            .count(),
+        3 => app.local_repos.iter().filter(|r| r.behind > 0).count(),
+        4 => app.local_repos.iter().filter(|r| r.ahead > 0).count(),
+        5 => app.local_repos.iter().filter(|r| r.is_uncommitted).count(),
+        _ => 0,
+    }
+}
+
+fn selected_repo_path(app: &App) -> Option<std::path::PathBuf> {
+    let selected = app.dashboard_table_state.selected()?;
+    let repos: Vec<&RepoEntry> = match app.stat_index {
+        0 => return None, // Owners tab — no single repo
+        1 => {
+            if app.filter_text.is_empty() {
+                app.local_repos.iter().collect()
+            } else {
+                let ft = app.filter_text.to_lowercase();
+                app.local_repos
+                    .iter()
+                    .filter(|r|
r.full_name.to_lowercase().contains(&ft)) + .collect() + } + } + 2 => app + .local_repos + .iter() + .filter(|r| !r.is_uncommitted && r.behind == 0 && r.ahead == 0) + .collect(), + 3 => app.local_repos.iter().filter(|r| r.behind > 0).collect(), + 4 => app.local_repos.iter().filter(|r| r.ahead > 0).collect(), + 5 => app + .local_repos + .iter() + .filter(|r| r.is_uncommitted) + .collect(), + _ => return None, + }; + repos.get(selected).map(|r| r.path.clone()) +} + +// ── Render ────────────────────────────────────────────────────────────────── + +pub(crate) fn format_timestamp(raw: &str) -> String { + use chrono::Utc; + + let parsed = DateTime::parse_from_rfc3339(raw); + match parsed { + Ok(dt) => { + let absolute = dt.format("%Y-%m-%d %H:%M:%S").to_string(); + let duration = Utc::now().signed_duration_since(dt); + let relative = if duration.num_days() > 30 { + format!("about {}mo ago", duration.num_days() / 30) + } else if duration.num_days() > 0 { + format!("about {}d ago", duration.num_days()) + } else if duration.num_hours() > 0 { + format!("about {}h ago", duration.num_hours()) + } else if duration.num_minutes() > 0 { + format!("about {} min ago", duration.num_minutes()) + } else { + "just now".to_string() + }; + format!("{} at {}", relative, absolute) + } + Err(_) => raw.to_string(), + } +} + +pub fn render(app: &mut App, frame: &mut Frame) { + let chunks = Layout::vertical([ + Constraint::Length(6), // Banner + Constraint::Length(1), // Tagline + Constraint::Length(1), // Requirements + Constraint::Length(1), // Workspace + Constraint::Length(4), // Stats + Constraint::Min(1), // Spacer + Constraint::Length(2), // Bottom actions + ]) + .split(frame.area()); + + render_banner(frame, chunks[0]); + render_tagline(frame, chunks[1]); + render_config_reqs(app, frame, chunks[2]); + render_workspace_info(app, frame, chunks[3]); + let stat_cols = render_stats(app, frame, chunks[4]); + let table_area = Rect { + y: chunks[5].y + 1, + height: 
chunks[5].height.saturating_sub(1),
+ ..chunks[5]
+ };
+ render_tab_content(app, frame, table_area);
+ render_tab_connector(frame, &stat_cols, chunks[5], app.stat_index);
+ render_bottom_actions(app, frame, chunks[6]);
+}
+
+fn render_tagline(frame: &mut Frame, area: Rect) {
+ let description = crate::banner::subheadline();
+
+ let line = Line::from(Span::styled(
+ description,
+ Style::default()
+ .fg(Color::DarkGray)
+ .add_modifier(Modifier::BOLD),
+ ));
+ let p = Paragraph::new(vec![line]).centered();
+ frame.render_widget(p, area);
+}
+
+fn render_info_line(frame: &mut Frame, area: Rect, left: Vec<Span>, right: Vec<Span>) {
+ let cols =
+ Layout::horizontal([Constraint::Percentage(41), Constraint::Percentage(59)]).split(area);
+ frame.render_widget(Paragraph::new(Line::from(left)).right_aligned(), cols[0]);
+ frame.render_widget(Paragraph::new(Line::from(right)), cols[1]);
+}
+
+fn render_config_reqs(app: &App, frame: &mut Frame, area: Rect) {
+ let dim = Style::default().fg(Color::DarkGray);
+
+ let key_style = Style::default()
+ .fg(Color::Rgb(37, 99, 235))
+ .add_modifier(Modifier::BOLD);
+ let left = vec![
+ Span::styled("[e]", key_style),
+ Span::styled(" Settings ", dim),
+ ];
+
+ let right = if app.checks_loading || app.check_results.is_empty() {
+ vec![
+ Span::styled(" Checking...", Style::default().fg(Color::Yellow)),
+ Span::raw(" "),
+ Span::styled("[t]", key_style),
+ Span::styled(" Refresh", dim),
+ ]
+ } else {
+ let all_passed = app.check_results.iter().all(|c| c.passed);
+ if all_passed {
+ vec![
+ Span::styled(" [✓]", Style::default().fg(Color::Rgb(21, 128, 61))),
+ Span::styled(" Requirements Satisfied", dim),
+ Span::raw(" "),
+ Span::styled("[t]", key_style),
+ Span::styled(" Refresh", dim),
+ ]
+ } else {
+ vec![
+ Span::styled(" [✗]", Style::default().fg(Color::Red)),
+ Span::styled(" Requirements Not Met", dim),
+ Span::raw(" "),
+ Span::styled("[t]", key_style),
+ Span::styled(" Refresh", dim),
+ ]
+ }
+ };
+
+ render_info_line(frame, area, left,
right);
+}
+
+fn render_workspace_info(app: &App, frame: &mut Frame, area: Rect) {
+ let dim = Style::default().fg(Color::DarkGray);
+ let key_style = Style::default()
+ .fg(Color::Rgb(37, 99, 235))
+ .add_modifier(Modifier::BOLD);
+ match &app.active_workspace {
+ Some(ws) => {
+ let folder_name = std::path::Path::new(&ws.base_path)
+ .file_name()
+ .and_then(|n| n.to_str())
+ .unwrap_or(&ws.base_path)
+ .to_string();
+
+ render_info_line(
+ frame,
+ area,
+ vec![
+ Span::styled("[w]", key_style),
+ Span::styled(" Workspace ", dim),
+ ],
+ vec![
+ Span::styled(" [✓]", Style::default().fg(Color::Rgb(21, 128, 61))),
+ Span::styled(" Folder ", dim),
+ Span::styled(
+ folder_name,
+ Style::default()
+ .fg(Color::Rgb(21, 128, 61))
+ .add_modifier(Modifier::BOLD),
+ ),
+ Span::raw(" "),
+ Span::styled("[/]", key_style),
+ Span::styled(" Search Repositories", dim),
+ ],
+ );
+ }
+ None => {
+ let p = Paragraph::new(Line::from(Span::styled(
+ "No workspace selected",
+ Style::default().fg(Color::Yellow),
+ )))
+ .centered();
+ frame.render_widget(p, area);
+ }
+ }
+}
+
+fn render_stats(app: &App, frame: &mut Frame, area: Rect) -> [Rect; 6] {
+ let cols = Layout::horizontal([
+ Constraint::Ratio(1, 6),
+ Constraint::Ratio(1, 6),
+ Constraint::Ratio(1, 6),
+ Constraint::Ratio(1, 6),
+ Constraint::Ratio(1, 6),
+ Constraint::Ratio(1, 6),
+ ])
+ .split(area);
+
+ let completed_repos = app.local_repos.len();
+ let completed_owners = app
+ .local_repos
+ .iter()
+ .map(|r| r.owner.as_str())
+ .collect::<HashSet<_>>()
+ .len();
+ let discovered_repos = app.all_repos.len();
+ let discovered_owners = app
+ .all_repos
+ .iter()
+ .map(|r| r.owner.as_str())
+ .collect::<HashSet<_>>()
+ .len();
+ let total_repos = discovered_repos.max(completed_repos);
+ let total_owners = discovered_owners.max(completed_owners);
+ let owners_progress = format!("{}/{}", completed_owners, total_owners);
+ let repos_progress = total_repos.to_string();
+ let uncommitted = app.local_repos.iter().filter(|r|
r.is_uncommitted).count(); + let behind = app.local_repos.iter().filter(|r| r.behind > 0).count(); + let ahead = app.local_repos.iter().filter(|r| r.ahead > 0).count(); + let clean = app + .local_repos + .iter() + .filter(|r| !r.is_uncommitted && r.behind == 0 && r.ahead == 0) + .count(); + + let selected = app.stat_index; + render_stat_box( + frame, + cols[0], + &owners_progress, + "o", + "Owners", + Color::Rgb(21, 128, 61), + selected == 0, + ); + render_stat_box( + frame, + cols[1], + &repos_progress, + "r", + "Repositories", + Color::Rgb(21, 128, 61), + selected == 1, + ); + render_stat_box( + frame, + cols[2], + &clean.to_string(), + "c", + "Clean", + Color::Rgb(21, 128, 61), + selected == 2, + ); + render_stat_box( + frame, + cols[3], + &behind.to_string(), + "b", + "Behind", + Color::Rgb(21, 128, 61), + selected == 3, + ); + render_stat_box( + frame, + cols[4], + &ahead.to_string(), + "a", + "Ahead", + Color::Rgb(21, 128, 61), + selected == 4, + ); + render_stat_box( + frame, + cols[5], + &uncommitted.to_string(), + "u", + "Uncommitted", + Color::Rgb(21, 128, 61), + selected == 5, + ); + + [cols[0], cols[1], cols[2], cols[3], cols[4], cols[5]] +} + +fn render_stat_box( + frame: &mut Frame, + area: Rect, + value: &str, + key: &str, + label: &str, + color: Color, + selected: bool, +) { + let (border_style, borders, border_type) = if selected { + ( + Style::default().fg(color).add_modifier(Modifier::BOLD), + Borders::TOP | Borders::LEFT | Borders::RIGHT, + BorderType::Thick, + ) + } else { + ( + Style::default().fg(Color::DarkGray), + Borders::ALL, + BorderType::Plain, + ) + }; + let block = Block::default() + .borders(borders) + .border_type(border_type) + .border_style(border_style); + let content = Paragraph::new(vec![ + Line::from(Span::styled( + value, + Style::default().fg(color).add_modifier(Modifier::BOLD), + )), + Line::from(vec![ + Span::styled( + format!("[{}]", key), + Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD), 
+ ),
+ Span::raw(" "),
+ Span::styled(label, Style::default().fg(Color::DarkGray)),
+ ]),
+ ])
+ .centered()
+ .block(block);
+ frame.render_widget(content, area);
+}
+
+fn tab_color(_stat_index: usize) -> Color {
+ Color::Rgb(21, 128, 61)
+}
+
+fn render_tab_connector(
+ frame: &mut Frame,
+ stat_cols: &[Rect; 6],
+ content_area: Rect,
+ selected: usize,
+) {
+ let color = tab_color(selected);
+ let style = Style::default().fg(color).add_modifier(Modifier::BOLD);
+ let y = content_area.y;
+ let x_start = content_area.x;
+ let x_end = content_area.x + content_area.width.saturating_sub(1);
+ let tab_left = stat_cols[selected].x;
+ let tab_right = stat_cols[selected].x + stat_cols[selected].width.saturating_sub(1);
+
+ let buf = frame.buffer_mut();
+
+ for x in x_start..=x_end {
+ let symbol = if (x == tab_left && x == x_start) || (x == tab_right && x == x_end) {
+ "┃" // tab edge aligns with content edge: vertical continues
+ } else if x == tab_left {
+ "┛" // horizontal from left meets tab's left border going up
+ } else if x == tab_right {
+ "┗" // tab's right border going up meets horizontal going right
+ } else if x > tab_left && x < tab_right {
+ " " // gap under the selected tab
+ } else if x == x_start {
+ "┏" // content top-left corner
+ } else if x == x_end {
+ "┓" // content top-right corner
+ } else {
+ "━" // thick horizontal line
+ };
+
+ if let Some(cell) = buf.cell_mut(Position::new(x, y)) {
+ cell.set_symbol(symbol);
+ cell.set_style(style);
+ }
+ }
+}
+
+fn render_tab_content(app: &mut App, frame: &mut Frame, area: Rect) {
+ if area.height < 2 {
+ return;
+ }
+
+ let color = tab_color(app.stat_index);
+ let mut table_state = app.dashboard_table_state.clone();
+ match app.stat_index {
+ 0 => render_owners_tab(app, frame, area, color, &mut table_state),
+ 1 => render_repos_tab(app, frame, area, color, &mut table_state),
+ 2 => render_clean_tab(app, frame, area, color, &mut table_state),
+ 3 => render_behind_tab(app, frame, area, color, &mut table_state),
+ 4
=> render_ahead_tab(app, frame, area, color, &mut table_state),
+ 5 => render_uncommitted_tab(app, frame, area, color, &mut table_state),
+ _ => {}
+ }
+ app.dashboard_table_state = table_state;
+}
+
+fn render_owners_tab(
+ app: &App,
+ frame: &mut Frame,
+ area: Rect,
+ color: Color,
+ table_state: &mut TableState,
+) {
+ // (total, behind, ahead, uncommitted)
+ let mut owner_stats: HashMap<&str, (usize, usize, usize, usize)> = HashMap::new();
+ for r in &app.local_repos {
+ let entry = owner_stats.entry(r.owner.as_str()).or_insert((0, 0, 0, 0));
+ entry.0 += 1;
+ if r.behind > 0 {
+ entry.1 += 1;
+ }
+ if r.ahead > 0 {
+ entry.2 += 1;
+ }
+ if r.is_uncommitted {
+ entry.3 += 1;
+ }
+ }
+
+ let mut owners: Vec<(&str, usize, usize, usize, usize)> = owner_stats
+ .into_iter()
+ .map(|(name, (total, behind, ahead, uncommitted))| {
+ (name, total, behind, ahead, uncommitted)
+ })
+ .collect();
+ owners.sort_by_key(|(name, _, _, _, _)| name.to_lowercase());
+
+ let header_cols = vec!["#", "Owner", "Repos", "Behind", "Ahead", "Uncommitted"];
+ let widths = [
+ Constraint::Length(4),
+ Constraint::Percentage(35),
+ Constraint::Percentage(15),
+ Constraint::Percentage(15),
+ Constraint::Percentage(15),
+ Constraint::Percentage(20),
+ ];
+
+ let rows: Vec<Row> = owners
+ .iter()
+ .enumerate()
+ .map(|(i, (name, total, behind, ahead, uncommitted))| {
+ let fmt = |n: &usize| {
+ if *n > 0 {
+ n.to_string()
+ } else {
+ ".".to_string()
+ }
+ };
+ Row::new(vec![
+ format!("{:>4}", i + 1),
+ name.to_string(),
+ total.to_string(),
+ fmt(behind),
+ fmt(ahead),
+ fmt(uncommitted),
+ ])
+ })
+ .collect();
+
+ render_table_block(frame, area, &header_cols, rows, &widths, color, table_state);
+}
+
+fn render_repos_tab(
+ app: &App,
+ frame: &mut Frame,
+ area: Rect,
+ color: Color,
+ table_state: &mut TableState,
+) {
+ let repos: Vec<&RepoEntry> = if app.filter_text.is_empty() {
+ app.local_repos.iter().collect()
+ } else {
+ let ft = app.filter_text.to_lowercase();
+ app.local_repos
.iter()
+ .filter(|r| r.full_name.to_lowercase().contains(&ft))
+ .collect()
+ };
+
+ let header_cols = vec!["#", "Org/Repo", "Branch", "Uncommitted", "Ahead", "Behind"];
+ let widths = [
+ Constraint::Length(4),
+ Constraint::Percentage(35),
+ Constraint::Percentage(20),
+ Constraint::Percentage(15),
+ Constraint::Percentage(15),
+ Constraint::Percentage(15),
+ ];
+
+ let rows: Vec<Row> = repos
+ .iter()
+ .enumerate()
+ .map(|(i, entry)| {
+ let branch = entry.branch.as_deref().unwrap_or("-");
+ Row::new(vec![
+ format!("{:>4}", i + 1),
+ entry.full_name.clone(),
+ branch.to_string(),
+ fmt_flag(entry.is_uncommitted),
+ fmt_count_plus(entry.ahead),
+ fmt_count_minus(entry.behind),
+ ])
+ })
+ .collect();
+
+ render_table_block(frame, area, &header_cols, rows, &widths, color, table_state);
+}
+
+fn render_clean_tab(
+ app: &App,
+ frame: &mut Frame,
+ area: Rect,
+ color: Color,
+ table_state: &mut TableState,
+) {
+ let repos: Vec<&RepoEntry> = app
+ .local_repos
+ .iter()
+ .filter(|r| !r.is_uncommitted && r.behind == 0 && r.ahead == 0)
+ .collect();
+
+ let header_cols = vec!["#", "Org/Repo", "Branch"];
+ let widths = [
+ Constraint::Length(4),
+ Constraint::Percentage(60),
+ Constraint::Percentage(40),
+ ];
+
+ let rows: Vec<Row> = repos
+ .iter()
+ .enumerate()
+ .map(|(i, entry)| {
+ let branch = entry.branch.as_deref().unwrap_or("-");
+ Row::new(vec![
+ format!("{:>4}", i + 1),
+ entry.full_name.clone(),
+ branch.to_string(),
+ ])
+ })
+ .collect();
+
+ render_table_block(frame, area, &header_cols, rows, &widths, color, table_state);
+}
+
+fn render_behind_tab(
+ app: &App,
+ frame: &mut Frame,
+ area: Rect,
+ color: Color,
+ table_state: &mut TableState,
+) {
+ let repos: Vec<&RepoEntry> = app.local_repos.iter().filter(|r| r.behind > 0).collect();
+
+ let header_cols = vec!["#", "Org/Repo", "Branch", "Behind"];
+ let widths = [
+ Constraint::Length(4),
+ Constraint::Percentage(45),
+ Constraint::Percentage(30),
+ Constraint::Percentage(25),
+ ];
+
+ let rows: Vec<Row> =
repos
+ .iter()
+ .enumerate()
+ .map(|(i, entry)| {
+ let branch = entry.branch.as_deref().unwrap_or("-");
+ Row::new(vec![
+ format!("{:>4}", i + 1),
+ entry.full_name.clone(),
+ branch.to_string(),
+ fmt_count_minus(entry.behind),
+ ])
+ })
+ .collect();
+
+ render_table_block(frame, area, &header_cols, rows, &widths, color, table_state);
+}
+
+fn render_ahead_tab(
+ app: &App,
+ frame: &mut Frame,
+ area: Rect,
+ color: Color,
+ table_state: &mut TableState,
+) {
+ let repos: Vec<&RepoEntry> = app.local_repos.iter().filter(|r| r.ahead > 0).collect();
+
+ let header_cols = vec!["#", "Org/Repo", "Branch", "Ahead"];
+ let widths = [
+ Constraint::Length(4),
+ Constraint::Percentage(45),
+ Constraint::Percentage(30),
+ Constraint::Percentage(25),
+ ];
+
+ let rows: Vec<Row> = repos
+ .iter()
+ .enumerate()
+ .map(|(i, entry)| {
+ let branch = entry.branch.as_deref().unwrap_or("-");
+ Row::new(vec![
+ format!("{:>4}", i + 1),
+ entry.full_name.clone(),
+ branch.to_string(),
+ fmt_count_plus(entry.ahead),
+ ])
+ })
+ .collect();
+
+ render_table_block(frame, area, &header_cols, rows, &widths, color, table_state);
+}
+
+fn render_uncommitted_tab(
+ app: &App,
+ frame: &mut Frame,
+ area: Rect,
+ color: Color,
+ table_state: &mut TableState,
+) {
+ let repos: Vec<&RepoEntry> = app
+ .local_repos
+ .iter()
+ .filter(|r| r.is_uncommitted)
+ .collect();
+
+ let header_cols = vec!["#", "Org/Repo", "Branch", "Staged", "Unstaged", "Untracked"];
+ let widths = [
+ Constraint::Length(4),
+ Constraint::Percentage(30),
+ Constraint::Percentage(22),
+ Constraint::Percentage(16),
+ Constraint::Percentage(16),
+ Constraint::Percentage(16),
+ ];
+
+ let rows: Vec<Row> = repos
+ .iter()
+ .enumerate()
+ .map(|(i, entry)| {
+ let branch = entry.branch.as_deref().unwrap_or("-");
+ let fmt_n = |n: usize| {
+ if n > 0 {
+ n.to_string()
+ } else {
+ ".".to_string()
+ }
+ };
+ Row::new(vec![
+ format!("{:>4}", i + 1),
+ entry.full_name.clone(),
+ branch.to_string(),
+ fmt_n(entry.staged_count),
fmt_n(entry.unstaged_count),
+ fmt_n(entry.untracked_count),
+ ])
+ })
+ .collect();
+
+ render_table_block(frame, area, &header_cols, rows, &widths, color, table_state);
+}
+
+// -- Shared helpers --
+
+fn fmt_flag(flag: bool) -> String {
+ if flag {
+ "*".to_string()
+ } else {
+ ".".to_string()
+ }
+}
+
+fn fmt_count_plus(n: usize) -> String {
+ if n > 0 {
+ format!("+{}", n)
+ } else {
+ ".".to_string()
+ }
+}
+
+fn fmt_count_minus(n: usize) -> String {
+ if n > 0 {
+ format!("-{}", n)
+ } else {
+ ".".to_string()
+ }
+}
+
+fn render_table_block(
+ frame: &mut Frame,
+ area: Rect,
+ header_cols: &[&str],
+ rows: Vec<Row>,
+ widths: &[Constraint],
+ color: Color,
+ table_state: &mut TableState,
+) {
+ let border_style = Style::default().fg(color).add_modifier(Modifier::BOLD);
+ let block = Block::default()
+ .borders(Borders::LEFT | Borders::RIGHT | Borders::BOTTOM)
+ .border_type(BorderType::Thick)
+ .border_style(border_style);
+
+ if rows.is_empty() {
+ let msg = Paragraph::new(Line::from(Span::styled(
+ " No repositories in this category.",
+ Style::default().fg(Color::DarkGray),
+ )))
+ .block(block);
+ frame.render_widget(msg, area);
+ return;
+ }
+
+ let header = Row::new(
+ header_cols
+ .iter()
+ .map(|s| s.to_string())
+ .collect::<Vec<String>>(),
+ )
+ .style(
+ Style::default()
+ .fg(Color::Rgb(21, 128, 61))
+ .add_modifier(Modifier::BOLD),
+ )
+ .bottom_margin(1);
+
+ let table = Table::new(rows, widths)
+ .header(header)
+ .row_highlight_style(
+ Style::default()
+ .fg(Color::Rgb(21, 128, 61))
+ .add_modifier(Modifier::BOLD),
+ )
+ .block(block);
+ frame.render_stateful_widget(table, area, table_state);
+}
+
+fn render_bottom_actions(app: &App, frame: &mut Frame, area: Rect) {
+ let rows = Layout::vertical([
+ Constraint::Length(1), // Actions + sync info
+ Constraint::Length(1), // Navigation
+ ])
+ .split(area);
+
+ let dim = Style::default().fg(Color::DarkGray);
+ let key_style = Style::default()
+ .fg(Color::Rgb(37, 99, 235))
+ .add_modifier(Modifier::BOLD);
+
+ // Line 1: live sync status (centered full-width) + action hints (right overlay)
+ let sync_line = match &app.operation_state {
+ OperationState::Discovering {
+ operation: Operation::Sync,
+ message,
+ } => Some(Line::from(vec![
+ Span::styled(
+ "Sync ",
+ Style::default()
+ .fg(Color::Cyan)
+ .add_modifier(Modifier::BOLD),
+ ),
+ Span::styled("discovering in background", dim),
+ Span::styled(": ", dim),
+ Span::styled(message.clone(), dim),
+ ])),
+ OperationState::Running {
+ operation: Operation::Sync,
+ completed,
+ total,
+ started_at,
+ throughput_samples,
+ active_repos,
+ ..
+ } => {
+ let pct = if *total > 0 {
+ ((*completed as f64 / *total as f64) * 100.0).round() as u64
+ } else {
+ 0
+ };
+ let elapsed_secs = started_at.elapsed().as_secs_f64();
+ let sample_count = throughput_samples.len().min(10);
+ let sample_rate = if sample_count > 0 {
+ throughput_samples
+ .iter()
+ .rev()
+ .take(sample_count)
+ .copied()
+ .sum::<usize>() as f64
+ / sample_count as f64
+ } else {
+ 0.0
+ };
+ let repos_per_sec = if sample_rate > 0.0 {
+ sample_rate
+ } else if elapsed_secs > 1.0 {
+ *completed as f64 / elapsed_secs
+ } else {
+ 0.0
+ };
+ let remaining = total.saturating_sub(*completed);
+ let has_eta_data = throughput_samples.iter().any(|&n| n > 0);
+ let eta_secs = if has_eta_data && repos_per_sec > 0.1 {
+ Some((remaining as f64 / repos_per_sec).ceil() as u64)
+ } else {
+ None
+ };
+ let concurrency = app
+ .active_workspace
+ .as_ref()
+ .and_then(|ws| ws.concurrency)
+ .unwrap_or(app.config.concurrency);
+
+ let mut spans = vec![
+ Span::styled(
+ "Sync ",
+ Style::default()
+ .fg(Color::Cyan)
+ .add_modifier(Modifier::BOLD),
+ ),
+ Span::styled("running in background ", dim),
+ Span::styled(format!("{}%", pct), Style::default().fg(Color::Cyan)),
+ Span::styled(format!(" ({}/{})", completed, total), dim),
+ ];
+
+ if repos_per_sec > 0.0 {
+ spans.push(Span::styled(
+ format!(" | {:.1} repo/s", repos_per_sec),
+ Style::default().fg(Color::DarkGray),
+ ));
+ }
+ if let Some(eta_secs) = eta_secs.filter(|_| remaining > 0) { + spans.push(Span::styled( + format!(" | ETA {}", format_duration_secs(eta_secs)), + Style::default().fg(Color::Cyan), + )); + } + spans.push(Span::styled( + format!(" | workers {}/{}", active_repos.len(), concurrency), + Style::default().fg(Color::DarkGray), + )); + spans.push(Span::styled(" | show ", dim)); + spans.push(Span::styled("[p]", key_style)); + spans.push(Span::styled(" progress", dim)); + Some(Line::from(spans)) + } + OperationState::Finished { + operation: Operation::Sync, + summary, + with_updates, + duration_secs, + .. + } => { + let total = summary.success + summary.failed + summary.skipped; + Some(Line::from(vec![ + Span::styled( + "Last Sync ", + Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD), + ), + Span::styled( + format!("{} repos", total), + Style::default().fg(Color::Rgb(21, 128, 61)), + ), + Span::styled( + format!(" | {} updated", with_updates), + Style::default().fg(Color::Yellow), + ), + Span::styled( + format!(" | {} failed", summary.failed), + if summary.failed > 0 { + Style::default().fg(Color::Red) + } else { + Style::default().fg(Color::DarkGray) + }, + ), + Span::styled( + format!(" | {:.1}s", duration_secs), + Style::default().fg(Color::DarkGray), + ), + Span::styled(" | details ", dim), + Span::styled("[p]", key_style), + ])) + } + _ => app.active_workspace.as_ref().and_then(|ws| { + ws.last_synced.as_ref().map(|ts| { + let folder_name = std::path::Path::new(&ws.base_path) + .file_name() + .and_then(|n| n.to_str()) + .unwrap_or(&ws.base_path); + let formatted = format_timestamp(ts); + Line::from(vec![ + Span::styled("Synced ", dim), + Span::styled( + folder_name.to_string(), + Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD), + ), + Span::styled(" with GitHub ", dim), + Span::styled(formatted, dim), + ]) + }) + }), + }; + if let Some(sync_line) = sync_line { + 
frame.render_widget(Paragraph::new(vec![sync_line]).centered(), rows[0]); + } + + let actions_right = Line::from(vec![ + Span::styled("[s]", key_style), + Span::styled(" Start Sync", dim), + Span::raw(" "), + Span::styled("[p]", key_style), + Span::styled(" Show Sync Progress", dim), + Span::raw(" "), + ]); + frame.render_widget(Paragraph::new(vec![actions_right]).right_aligned(), rows[0]); + + // Line 2: Navigation — left-aligned (Quit, Back) and right-aligned (Left, Right, Select) + let nav_cols = + Layout::horizontal([Constraint::Percentage(50), Constraint::Percentage(50)]).split(rows[1]); + + let left_spans = vec![ + Span::raw(" "), + Span::styled("[q]", key_style), + Span::styled(" Quit", dim), + Span::raw(" "), + Span::styled("[Esc]", key_style), + Span::styled(" Back", dim), + ]; + + let right_spans = vec![ + Span::styled("[←]", key_style), + Span::raw(" "), + Span::styled("[↑]", key_style), + Span::raw(" "), + Span::styled("[↓]", key_style), + Span::raw(" "), + Span::styled("[→]", key_style), + Span::styled(" Move", dim), + Span::raw(" "), + Span::styled("[Enter]", key_style), + Span::styled(" Select", dim), + Span::raw(" "), + ]; + + let nav_left = Paragraph::new(vec![Line::from(left_spans)]); + let nav_right = Paragraph::new(vec![Line::from(right_spans)]).right_aligned(); + + frame.render_widget(nav_left, nav_cols[0]); + frame.render_widget(nav_right, nav_cols[1]); +} + +fn format_duration_secs(secs: u64) -> String { + if secs >= 60 { + format!("{}m{}s", secs / 60, secs % 60) + } else { + format!("{}s", secs) + } +} + +#[cfg(test)] +#[path = "dashboard_tests.rs"] +mod tests; diff --git a/src/tui/screens/dashboard_tests.rs b/src/tui/screens/dashboard_tests.rs new file mode 100644 index 0000000..0c8ace3 --- /dev/null +++ b/src/tui/screens/dashboard_tests.rs @@ -0,0 +1,65 @@ +use super::*; +use crate::config::{Config, WorkspaceConfig}; +use crossterm::event::{KeyCode, KeyEvent, KeyModifiers}; +use tokio::sync::mpsc::unbounded_channel; + +fn build_app() -> 
App { + let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws"); + let mut app = App::new(Config::default(), vec![ws]); + app.screen = Screen::Dashboard; + app.screen_stack.clear(); + app +} + +#[tokio::test] +async fn dashboard_s_starts_sync_without_opening_popup() { + let mut app = build_app(); + let (tx, _rx) = unbounded_channel(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('s'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(app.screen, Screen::Dashboard); + assert!(matches!( + app.operation_state, + OperationState::Discovering { + operation: Operation::Sync, + .. + } + )); +} + +#[tokio::test] +async fn dashboard_p_opens_sync_popup_when_idle() { + let mut app = build_app(); + let (tx, _rx) = unbounded_channel(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('p'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(app.screen, Screen::Sync); + assert_eq!(app.screen_stack, vec![Screen::Dashboard]); + assert!(matches!(app.operation_state, OperationState::Idle)); +} + +#[test] +fn hide_show_sync_progress_preserves_sync_state() { + let mut app = build_app(); + app.scroll_offset = 9; + app.sync_log_index = 4; + + show_sync_progress(&mut app); + hide_sync_progress(&mut app); + + assert_eq!(app.screen, Screen::Dashboard); + assert_eq!(app.scroll_offset, 9); + assert_eq!(app.sync_log_index, 4); +} diff --git a/src/tui/screens/mod.rs b/src/tui/screens/mod.rs new file mode 100644 index 0000000..d85921c --- /dev/null +++ b/src/tui/screens/mod.rs @@ -0,0 +1,7 @@ +//! TUI screen modules. + +pub mod dashboard; +pub mod settings; +pub mod sync; +pub mod system_check; +pub mod workspaces; diff --git a/src/tui/screens/settings.rs b/src/tui/screens/settings.rs new file mode 100644 index 0000000..809912a --- /dev/null +++ b/src/tui/screens/settings.rs @@ -0,0 +1,341 @@ +//! Settings screen — two-pane layout with nav (left) and detail (right). +//! +//! Left sidebar: "Global" section with Requirements and Options. +//! 
Right panel shows detail for the selected item. + +use ratatui::{ + layout::{Constraint, Layout, Rect}, + style::{Color, Modifier, Style}, + text::{Line, Span}, + widgets::{Block, Borders, List, ListItem, Paragraph}, + Frame, +}; + +use crossterm::event::{KeyCode, KeyEvent}; + +use crate::banner::render_banner; +use crate::tui::app::App; + +pub fn handle_key(app: &mut App, key: KeyEvent) { + let num_items = 2; // Requirements, Options + match key.code { + KeyCode::Tab | KeyCode::Down => { + app.settings_index = (app.settings_index + 1) % num_items; + } + KeyCode::Up => { + app.settings_index = (app.settings_index + num_items - 1) % num_items; + } + KeyCode::Char('c') => { + // Open config directory in Finder / file manager + if let Ok(path) = crate::config::Config::default_path() { + if let Some(parent) = path.parent() { + if let Err(e) = open_directory(parent) { + app.error_message = Some(format!( + "Failed to open config directory '{}': {}", + parent.display(), + e + )); + } + } + } + } + KeyCode::Char('d') => { + app.dry_run = !app.dry_run; + } + KeyCode::Char('m') => { + app.sync_pull = !app.sync_pull; + } + _ => {} + } +} + +#[cfg(target_os = "macos")] +fn open_directory(path: &std::path::Path) -> std::io::Result<()> { + std::process::Command::new("open").arg(path).spawn()?; + Ok(()) +} + +#[cfg(target_os = "windows")] +fn open_directory(path: &std::path::Path) -> std::io::Result<()> { + std::process::Command::new("explorer").arg(path).spawn()?; + Ok(()) +} + +#[cfg(all(not(target_os = "macos"), not(target_os = "windows")))] +fn open_directory(path: &std::path::Path) -> std::io::Result<()> { + std::process::Command::new("xdg-open").arg(path).spawn()?; + Ok(()) +} + +pub fn render(app: &App, frame: &mut Frame) { + let chunks = Layout::vertical([ + Constraint::Length(6), // Banner + Constraint::Length(3), // Title + Constraint::Min(5), // Content (two panes) + Constraint::Length(2), // Bottom actions (2 lines) + ]) + .split(frame.area()); + + 
render_banner(frame, chunks[0]);
+
+ // Title
+ let title = Paragraph::new(Line::from(vec![Span::styled(
+ " Settings ",
+ Style::default()
+ .fg(Color::Cyan)
+ .add_modifier(Modifier::BOLD),
+ )]))
+ .block(
+ Block::default()
+ .borders(Borders::ALL)
+ .border_style(Style::default().fg(Color::DarkGray)),
+ )
+ .centered();
+ frame.render_widget(title, chunks[1]);
+
+ // Two-pane split
+ let panes = Layout::horizontal([Constraint::Percentage(25), Constraint::Percentage(75)])
+ .split(chunks[2]);
+
+ render_category_nav(app, frame, panes[0]);
+
+ match app.settings_index {
+ 0 => render_requirements_detail(app, frame, panes[1]),
+ 1 => render_options_detail(app, frame, panes[1]),
+ _ => render_requirements_detail(app, frame, panes[1]),
+ }
+
+ render_bottom_actions(app, frame, chunks[3]);
+}
+
+fn render_category_nav(app: &App, frame: &mut Frame, area: Rect) {
+ let header_style = Style::default()
+ .fg(Color::White)
+ .add_modifier(Modifier::BOLD);
+
+ let items: Vec<ListItem> = vec![
+ // -- Global header --
+ ListItem::new(Line::from(Span::styled(" Global", header_style))),
+ // Requirements (index 0)
+ nav_item("Requirements", app.settings_index == 0),
+ // Options (index 1)
+ nav_item("Options", app.settings_index == 1),
+ ];
+
+ let list = List::new(items).block(
+ Block::default()
+ .borders(Borders::ALL)
+ .border_style(Style::default().fg(Color::DarkGray)),
+ );
+ frame.render_widget(list, area);
+}
+
+fn nav_item(label: &str, selected: bool) -> ListItem<'static> {
+ let (marker, style) = if selected {
+ (
+ ">",
+ Style::default()
+ .fg(Color::Cyan)
+ .add_modifier(Modifier::BOLD),
+ )
+ } else {
+ (" ", Style::default())
+ };
+ ListItem::new(Line::from(vec![
+ Span::styled(format!(" {} ", marker), style),
+ Span::styled(label.to_string(), style),
+ ]))
+}
+
+fn render_requirements_detail(app: &App, frame: &mut Frame, area: Rect) {
+ let dim = Style::default().fg(Color::DarkGray);
+ let section_style = Style::default()
+ .fg(Color::White)
.add_modifier(Modifier::BOLD); + let pass_style = Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD); + let fail_style = Style::default().fg(Color::Red).add_modifier(Modifier::BOLD); + + let mut lines = vec![ + Line::from(""), + Line::from(Span::styled(" Requirements", section_style)), + Line::from(""), + ]; + + if app.check_results.is_empty() { + let msg = if app.checks_loading { + " Loading..." + } else { + " Checks not yet run" + }; + lines.push(Line::from(Span::styled(msg, dim))); + } else { + for check in &app.check_results { + let (marker, marker_style) = if check.passed { + ("\u{2713}", pass_style) + } else { + ("\u{2717}", fail_style) + }; + let mut spans = vec![ + Span::styled(" ", dim), + Span::styled(marker.to_string(), marker_style), + Span::styled(format!(" {:<14}", check.name), dim), + Span::styled(&check.message, dim), + ]; + if !check.passed && check.critical { + spans.push(Span::styled(" (critical)", fail_style)); + } + lines.push(Line::from(spans)); + } + } + + let content = Paragraph::new(lines).block( + Block::default() + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(content, area); +} + +fn render_options_detail(app: &App, frame: &mut Frame, area: Rect) { + let dim = Style::default().fg(Color::DarkGray); + let key_style = Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD); + let section_style = Style::default() + .fg(Color::White) + .add_modifier(Modifier::BOLD); + let active_style = Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD); + + let config_path = crate::config::Config::default_path() + .ok() + .and_then(|p| p.parent().map(|parent| parent.display().to_string())) + .unwrap_or_else(|| "~/.config/git-same".to_string()); + + // Dry run toggle + let (dry_yes, dry_no) = if app.dry_run { + (active_style, dim) + } else { + (dim, active_style) + }; + + // Mode toggle + let (mode_fetch, mode_pull) = if 
app.sync_pull { + (dim, active_style) + } else { + (active_style, dim) + }; + + let lines = vec![ + Line::from(""), + Line::from(Span::styled(" Global Config", section_style)), + Line::from(""), + Line::from(vec![Span::styled( + format!(" Concurrency: {}", app.config.concurrency), + dim, + )]), + Line::from(""), + Line::from(Span::styled(" Options", section_style)), + Line::from(""), + Line::from(vec![ + Span::styled(" ", dim), + Span::styled("[d]", key_style), + Span::styled(" Dry run: ", dim), + Span::styled("Yes", dry_yes), + Span::styled(" / ", dim), + Span::styled("No", dry_no), + ]), + Line::from(vec![ + Span::styled(" ", dim), + Span::styled("[m]", key_style), + Span::styled(" Mode: ", dim), + Span::styled("Fetch", mode_fetch), + Span::styled(" / ", dim), + Span::styled("Pull", mode_pull), + ]), + Line::from(""), + Line::from(Span::styled(" Folders", section_style)), + Line::from(""), + Line::from(vec![ + Span::styled(" ", dim), + Span::styled("[c]", key_style), + Span::styled(" Config folder", dim), + Span::styled(format!(" \u{2014} {}", config_path), dim), + ]), + ]; + + let content = Paragraph::new(lines).block( + Block::default() + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(content, area); +} + +fn render_bottom_actions(app: &App, frame: &mut Frame, area: Rect) { + let rows = Layout::vertical([ + Constraint::Length(1), // Actions + Constraint::Length(1), // Navigation + ]) + .split(area); + + let dim = Style::default().fg(Color::DarkGray); + let key_style = Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD); + + // Line 1: Context-sensitive actions (centered) + let mut action_spans = vec![]; + if app.settings_index == 1 { + action_spans.extend([ + Span::raw(" "), + Span::styled("[c]", key_style), + Span::styled(" Config", dim), + Span::raw(" "), + Span::styled("[d]", key_style), + Span::styled(" Dry-run", dim), + Span::raw(" "), + Span::styled("[m]", key_style), + 
Span::styled(" Mode", dim), + ]); + } + let actions = Paragraph::new(vec![Line::from(action_spans)]).centered(); + + // Line 2: Navigation — left (quit, back) and right (arrows) + let nav_cols = + Layout::horizontal([Constraint::Percentage(50), Constraint::Percentage(50)]).split(rows[1]); + + let left_spans = vec![ + Span::raw(" "), + Span::styled("[q]", key_style), + Span::styled(" Quit", dim), + Span::raw(" "), + Span::styled("[Esc]", key_style), + Span::styled(" Back", dim), + ]; + + let right_spans = vec![ + Span::styled("[Tab]", key_style), + Span::styled(" Switch", dim), + Span::raw(" "), + Span::styled("[\u{2191}]", key_style), + Span::raw(" "), + Span::styled("[\u{2193}]", key_style), + Span::styled(" Move", dim), + Span::raw(" "), + ]; + + frame.render_widget(actions, rows[0]); + frame.render_widget(Paragraph::new(vec![Line::from(left_spans)]), nav_cols[0]); + frame.render_widget( + Paragraph::new(vec![Line::from(right_spans)]).right_aligned(), + nav_cols[1], + ); +} + +#[cfg(test)] +#[path = "settings_tests.rs"] +mod tests; diff --git a/src/tui/screens/settings_tests.rs b/src/tui/screens/settings_tests.rs new file mode 100644 index 0000000..4c1686a --- /dev/null +++ b/src/tui/screens/settings_tests.rs @@ -0,0 +1,83 @@ +use super::*; +use crate::config::{Config, WorkspaceConfig}; +use crossterm::event::{KeyCode, KeyEvent, KeyModifiers}; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(app: &App) -> String { + let backend = TestBackend::new(110, 32); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal.draw(|frame| render(app, frame)).unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +fn app_for_settings() -> App { + let ws = WorkspaceConfig::new("ws", "/tmp/ws"); + App::new(Config::default(), vec![ws]) +} + +#[test] +fn 
handle_key_moves_selection_and_toggles_flags() { + let mut app = app_for_settings(); + app.settings_index = 0; + + handle_key(&mut app, KeyEvent::new(KeyCode::Down, KeyModifiers::NONE)); + assert_eq!(app.settings_index, 1); + + handle_key(&mut app, KeyEvent::new(KeyCode::Down, KeyModifiers::NONE)); + assert_eq!(app.settings_index, 0); + + handle_key(&mut app, KeyEvent::new(KeyCode::Up, KeyModifiers::NONE)); + assert_eq!(app.settings_index, 1); + + assert!(!app.dry_run); + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('d'), KeyModifiers::NONE), + ); + assert!(app.dry_run); + + assert!(!app.sync_pull); + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('m'), KeyModifiers::NONE), + ); + assert!(app.sync_pull); +} + +#[test] +fn render_requirements_view_shows_title_and_loading() { + let mut app = app_for_settings(); + app.settings_index = 0; + app.checks_loading = true; + + let output = render_output(&app); + assert!(output.contains("Settings")); + assert!(output.contains("Requirements")); + assert!(output.contains("Loading")); +} + +#[test] +fn render_options_view_shows_mode_and_dry_run() { + let mut app = app_for_settings(); + app.settings_index = 1; + app.dry_run = true; + app.sync_pull = true; + + let output = render_output(&app); + assert!(output.contains("Global Config")); + assert!(output.contains("Dry run")); + assert!(output.contains("Mode")); + assert!(output.contains("Fetch")); + assert!(output.contains("Pull")); +} diff --git a/src/tui/screens/sync.rs b/src/tui/screens/sync.rs new file mode 100644 index 0000000..b226fb4 --- /dev/null +++ b/src/tui/screens/sync.rs @@ -0,0 +1,1341 @@ +//! Sync progress screen — real-time metrics during sync, enriched summary after. 
+ +use ratatui::{ + layout::{Alignment, Constraint, Layout, Position, Rect}, + style::{Color, Modifier, Style}, + text::{Line, Span}, + widgets::{Block, BorderType, Borders, Clear, Gauge, List, ListItem, Paragraph}, + Frame, +}; + +use crossterm::event::{KeyCode, KeyEvent}; +use tokio::sync::mpsc::UnboundedSender; + +use crate::tui::app::{App, LogFilter, OperationState, SyncLogEntry, SyncLogStatus}; +use crate::tui::event::AppEvent; +use crate::tui::screens::dashboard::{hide_sync_progress, start_sync_operation}; + +use crate::banner::render_animated_banner; + +// ── Key handler ───────────────────────────────────────────────────────────── + +pub fn handle_key(app: &mut App, key: KeyEvent, backend_tx: &UnboundedSender<AppEvent>) { + let is_finished = matches!(app.operation_state, OperationState::Finished { .. }); + + match key.code { + KeyCode::Char('s') => { + start_sync_operation(app, backend_tx); + } + KeyCode::Char('p') => { + hide_sync_progress(app); + } + // Scroll log + KeyCode::Down => { + if is_finished { + if app.log_filter == LogFilter::Changelog { + app.changelog_scroll += 1; + } else { + let count = filtered_log_count(app); + if count > 0 && app.sync_log_index < count.saturating_sub(1) { + app.sync_log_index += 1; + } + } + } else if app.scroll_offset < app.log_lines.len().saturating_sub(1) { + app.scroll_offset += 1; + } + } + KeyCode::Up => { + if is_finished { + if app.log_filter == LogFilter::Changelog { + app.changelog_scroll = app.changelog_scroll.saturating_sub(1); + } else { + app.sync_log_index = app.sync_log_index.saturating_sub(1); + } + } else { + app.scroll_offset = app.scroll_offset.saturating_sub(1); + } + } + KeyCode::Left => { + if is_finished { + cycle_filter(app, backend_tx, -1); + } else { + app.scroll_offset = app.scroll_offset.saturating_sub(1); + } + } + KeyCode::Right => { + if is_finished { + cycle_filter(app, backend_tx, 1); + } else if app.scroll_offset < app.log_lines.len().saturating_sub(1) { + app.scroll_offset += 1; + } + } + //
Expand/collapse commit deep dive + KeyCode::Enter if is_finished => { + // Extract data we need before mutating app + let selected = filtered_log_entries(app) + .get(app.sync_log_index) + .map(|e| (e.repo_name.clone(), e.path.clone())); + + if let Some((repo_name, path)) = selected { + if app.expanded_repo.as_deref() == Some(&repo_name) { + // Toggle off: collapse + app.expanded_repo = None; + app.repo_commits.clear(); + } else if let Some(path) = path { + // Expand: fetch commits + app.expanded_repo = Some(repo_name.clone()); + app.repo_commits.clear(); + crate::tui::backend::spawn_commit_fetch(path, repo_name, backend_tx.clone()); + } + } + } + // Post-sync log filters + KeyCode::Char('a') if is_finished => { + apply_log_filter(app, backend_tx, LogFilter::All); + } + KeyCode::Char('u') if is_finished => { + apply_log_filter(app, backend_tx, LogFilter::Updated); + } + KeyCode::Char('f') if is_finished => { + apply_log_filter(app, backend_tx, LogFilter::Failed); + } + KeyCode::Char('x') if is_finished => { + apply_log_filter(app, backend_tx, LogFilter::Skipped); + } + KeyCode::Char('c') if is_finished => { + apply_log_filter(app, backend_tx, LogFilter::Changelog); + } + // Sync history overlay toggle + KeyCode::Char('h') if is_finished => { + app.show_sync_history = !app.show_sync_history; + } + _ => {} + } +} + +fn apply_log_filter(app: &mut App, backend_tx: &UnboundedSender<AppEvent>, filter: LogFilter) { + app.log_filter = filter; + app.sync_log_index = 0; + app.expanded_repo = None; + app.repo_commits.clear(); + app.changelog_scroll = 0; + + if filter != LogFilter::Changelog { + return; + } + + // Collect updated repos with paths for batch commit fetch.
+ let updated_repos: Vec<(String, std::path::PathBuf)> = app + .sync_log_entries + .iter() + .filter(|e| e.had_updates) + .filter_map(|e| e.path.clone().map(|p| (e.repo_name.clone(), p))) + .collect(); + app.changelog_total = updated_repos.len(); + app.changelog_loaded = 0; + app.changelog_commits.clear(); + + if !updated_repos.is_empty() { + crate::tui::backend::spawn_changelog_fetch(updated_repos, backend_tx.clone()); + } +} + +fn cycle_filter(app: &mut App, backend_tx: &UnboundedSender<AppEvent>, direction: i8) { + const FILTERS: [LogFilter; 5] = [ + LogFilter::All, + LogFilter::Updated, + LogFilter::Failed, + LogFilter::Skipped, + LogFilter::Changelog, + ]; + + let idx = FILTERS + .iter() + .position(|f| *f == app.log_filter) + .unwrap_or(0) as i8; + let next = (idx + direction).rem_euclid(FILTERS.len() as i8) as usize; + apply_log_filter(app, backend_tx, FILTERS[next]); +} + +/// Count of log entries matching the current filter. +fn filtered_log_count(app: &App) -> usize { + match app.log_filter { + LogFilter::All => app.sync_log_entries.len(), + LogFilter::Updated => app + .sync_log_entries + .iter() + .filter(|e| e.had_updates || e.is_clone) + .count(), + LogFilter::Failed => app + .sync_log_entries + .iter() + .filter(|e| e.status == SyncLogStatus::Failed) + .count(), + LogFilter::Skipped => app + .sync_log_entries + .iter() + .filter(|e| e.status == SyncLogStatus::Skipped) + .count(), + LogFilter::Changelog => app + .sync_log_entries + .iter() + .filter(|e| e.had_updates) + .count(), + } +} + +/// Returns filtered log entries matching the current filter.
+fn filtered_log_entries(app: &App) -> Vec<&SyncLogEntry> { + match app.log_filter { + LogFilter::All => app.sync_log_entries.iter().collect(), + LogFilter::Updated => app + .sync_log_entries + .iter() + .filter(|e| e.had_updates || e.is_clone) + .collect(), + LogFilter::Failed => app + .sync_log_entries + .iter() + .filter(|e| e.status == SyncLogStatus::Failed) + .collect(), + LogFilter::Skipped => app + .sync_log_entries + .iter() + .filter(|e| e.status == SyncLogStatus::Skipped) + .collect(), + LogFilter::Changelog => app + .sync_log_entries + .iter() + .filter(|e| e.had_updates) + .collect(), + } +} + +// ── Render ────────────────────────────────────────────────────────────────── + +const POPUP_WIDTH_PERCENT: u16 = 80; +const POPUP_HEIGHT_PERCENT: u16 = 80; + +pub fn render(app: &App, frame: &mut Frame) { + let is_finished = matches!(&app.operation_state, OperationState::Finished { .. }); + + // Animate during active ops, static otherwise + let phase = match &app.operation_state { + OperationState::Discovering { .. } | OperationState::Running { .. 
} => { + (app.tick_count as f64 / 50.0).fract() + } + _ => 0.0, + }; + + let popup_area = centered_rect(frame.area(), POPUP_WIDTH_PERCENT, POPUP_HEIGHT_PERCENT); + dim_outside_popup(frame, popup_area); + frame.render_widget(Clear, popup_area); + + let block = Block::default() + .title(" Sync Progress ") + .borders(Borders::ALL) + .border_type(BorderType::Thick) + .border_style(Style::default().fg(Color::Cyan)); + let inner = block.inner(popup_area); + frame.render_widget(block, popup_area); + + render_running_layout(app, frame, inner, phase); + + // Sync history overlay (on top of popup) + if app.show_sync_history && is_finished { + render_sync_history_overlay(app, frame, inner); + } +} + +// ── Popup layout ──────────────────────────────────────────────────────────── + +fn render_running_layout(app: &App, frame: &mut Frame, area: Rect, phase: f64) { + let chunks = Layout::vertical([ + Constraint::Length(6), // Banner + Constraint::Length(3), // Title + Constraint::Length(3), // Progress bar + Constraint::Length(1), // Enriched counters / summary + Constraint::Length(1), // Throughput / performance + Constraint::Length(1), // Phase / filter + Constraint::Length(1), // Worker slots / status + Constraint::Min(5), // Log (running or completed) + Constraint::Length(2), // Bottom actions + nav + ]) + .split(area); + + render_animated_banner(frame, chunks[0], phase); + render_title(app, frame, chunks[1]); + render_progress_bar(app, frame, chunks[2]); + render_enriched_counters(app, frame, chunks[3]); + render_throughput(app, frame, chunks[4]); + render_phase_indicator(app, frame, chunks[5]); + render_worker_slots(app, frame, chunks[6]); + render_main_log(app, frame, chunks[7]); + render_bottom_actions(app, frame, chunks[8]); +} + +fn render_main_log(app: &App, frame: &mut Frame, area: Rect) { + if matches!(app.operation_state, OperationState::Finished { .. 
}) { + render_filterable_log(app, frame, area); + } else { + render_running_log(app, frame, area); + } +} + +fn render_bottom_actions(app: &App, frame: &mut Frame, area: Rect) { + let rows = Layout::vertical([Constraint::Length(1), Constraint::Length(1)]).split(area); + + let dim = Style::default().fg(Color::DarkGray); + let key_style = Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD); + + let mut action_spans = vec![ + Span::styled("[s]", key_style), + Span::styled(" Start Sync", dim), + Span::raw(" "), + Span::styled("[p]", key_style), + Span::styled(" Hide Sync Progress", dim), + ]; + + if matches!(app.operation_state, OperationState::Finished { .. }) { + action_spans.extend([ + Span::raw(" "), + Span::styled("[a]", key_style), + Span::styled(" All", dim), + Span::raw(" "), + Span::styled("[u]", key_style), + Span::styled(" Updated", dim), + Span::raw(" "), + Span::styled("[f]", key_style), + Span::styled(" Failed", dim), + Span::raw(" "), + Span::styled("[x]", key_style), + Span::styled(" Skipped", dim), + Span::raw(" "), + Span::styled("[c]", key_style), + Span::styled(" Changelog", dim), + Span::raw(" "), + Span::styled("[h]", key_style), + Span::styled(" History", dim), + ]); + } + frame.render_widget( + Paragraph::new(vec![Line::from(action_spans)]).centered(), + rows[0], + ); + + let nav_cols = + Layout::horizontal([Constraint::Percentage(50), Constraint::Percentage(50)]).split(rows[1]); + + let left_spans = vec![ + Span::raw(" "), + Span::styled("[q]", key_style), + Span::styled(" Quit", dim), + Span::raw(" "), + Span::styled("[Esc]", key_style), + Span::styled(" Back", dim), + ]; + let right_spans = vec![ + Span::styled("[←]", key_style), + Span::raw(" "), + Span::styled("[↑]", key_style), + Span::raw(" "), + Span::styled("[↓]", key_style), + Span::raw(" "), + Span::styled("[→]", key_style), + Span::styled(" Move", dim), + Span::raw(" "), + Span::styled("[Enter]", key_style), + Span::styled(" Select", dim), + Span::raw(" 
"), + ]; + + frame.render_widget(Paragraph::new(vec![Line::from(left_spans)]), nav_cols[0]); + frame.render_widget( + Paragraph::new(vec![Line::from(right_spans)]).right_aligned(), + nav_cols[1], + ); +} + +fn centered_rect(area: Rect, width_percent: u16, height_percent: u16) -> Rect { + let width = (area.width.saturating_mul(width_percent) / 100).max(1); + let height = (area.height.saturating_mul(height_percent) / 100).max(1); + let x = area.x + (area.width.saturating_sub(width) / 2); + let y = area.y + (area.height.saturating_sub(height) / 2); + Rect::new(x, y, width, height) +} + +fn dim_outside_popup(frame: &mut Frame, popup: Rect) { + let area = frame.area(); + let popup_right = popup.x.saturating_add(popup.width); + let popup_bottom = popup.y.saturating_add(popup.height); + + let buf = frame.buffer_mut(); + for y in area.y..area.y.saturating_add(area.height) { + for x in area.x..area.x.saturating_add(area.width) { + let inside_popup = x >= popup.x && x < popup_right && y >= popup.y && y < popup_bottom; + if inside_popup { + continue; + } + if let Some(cell) = buf.cell_mut(Position::new(x, y)) { + cell.set_style( + Style::default() + .fg(Color::DarkGray) + .add_modifier(Modifier::DIM), + ); + } + } + } +} + +// ── Shared render functions ───────────────────────────────────────────────── + +fn render_title(app: &App, frame: &mut Frame, area: Rect) { + let title_text = match &app.operation_state { + OperationState::Idle => "Sync Progress".to_string(), + OperationState::Discovering { .. } | OperationState::Running { .. } => { + "Sync Running".to_string() + } + OperationState::Finished { .. } => "Sync Completed".to_string(), + }; + + let style = match &app.operation_state { + OperationState::Finished { .. } => Style::default().fg(Color::Rgb(21, 128, 61)), + OperationState::Running { .. 
} => Style::default().fg(Color::Cyan), + _ => Style::default().fg(Color::Yellow), + }; + + let title = Paragraph::new(Line::from(Span::styled( + title_text, + style.add_modifier(Modifier::BOLD), + ))) + .centered() + .block( + Block::default() + .borders(Borders::BOTTOM) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(title, area); +} + +fn render_progress_bar(app: &App, frame: &mut Frame, area: Rect) { + let (ratio, label) = match &app.operation_state { + OperationState::Running { + total, completed, .. + } => { + let r = if *total > 0 { + *completed as f64 / *total as f64 + } else { + 0.0 + }; + let pct = (r * 100.0) as u32; + (r, format!("{}/{} ({}%)", completed, total, pct)) + } + OperationState::Finished { .. } => (1.0, "Done".to_string()), + OperationState::Discovering { .. } => (0.0, "Discovering repositories...".to_string()), + OperationState::Idle => (0.0, "Press [s] to start sync".to_string()), + }; + + let gauge = Gauge::default() + .block( + Block::default() + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ) + .gauge_style(Style::default().fg(Color::Cyan)) + .ratio(ratio.clamp(0.0, 1.0)) + .label(label); + frame.render_widget(gauge, area); +} + +// ── During-sync specific renders ──────────────────────────────────────────── + +fn render_enriched_counters(app: &App, frame: &mut Frame, area: Rect) { + match &app.operation_state { + OperationState::Running { + completed, + failed, + skipped, + with_updates, + cloned, + current_repo, + .. 
+ } => { + let up_to_date = completed + .saturating_sub(*failed) + .saturating_sub(*skipped) + .saturating_sub(*with_updates) + .saturating_sub(*cloned); + + let mut spans = vec![ + Span::raw(" "), + Span::styled("Updated: ", Style::default().fg(Color::Yellow)), + Span::styled( + with_updates.to_string(), + Style::default() + .fg(Color::Yellow) + .add_modifier(Modifier::BOLD), + ), + Span::raw(" "), + Span::styled("Current: ", Style::default().fg(Color::Rgb(21, 128, 61))), + Span::styled( + up_to_date.to_string(), + Style::default().fg(Color::Rgb(21, 128, 61)), + ), + Span::raw(" "), + Span::styled("Cloned: ", Style::default().fg(Color::Cyan)), + Span::styled(cloned.to_string(), Style::default().fg(Color::Cyan)), + ]; + + if *failed > 0 { + spans.push(Span::raw(" ")); + spans.push(Span::styled("Failed: ", Style::default().fg(Color::Red))); + spans.push(Span::styled( + failed.to_string(), + Style::default().fg(Color::Red).add_modifier(Modifier::BOLD), + )); + } + + if *skipped > 0 { + spans.push(Span::raw(" ")); + spans.push(Span::styled( + "Skipped: ", + Style::default().fg(Color::DarkGray), + )); + spans.push(Span::styled( + skipped.to_string(), + Style::default().fg(Color::DarkGray), + )); + } + + if !current_repo.is_empty() { + spans.push(Span::raw(" ")); + spans.push(Span::styled( + current_repo.as_str(), + Style::default().fg(Color::DarkGray), + )); + } + + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + OperationState::Finished { + summary, + with_updates, + cloned, + .. 
+ } => { + let current = summary + .success + .saturating_sub(*with_updates) + .saturating_sub(*cloned); + + let spans = vec![ + Span::raw(" "), + Span::styled("Updated: ", Style::default().fg(Color::Yellow)), + Span::styled( + with_updates.to_string(), + Style::default() + .fg(Color::Yellow) + .add_modifier(Modifier::BOLD), + ), + Span::raw(" "), + Span::styled("Current: ", Style::default().fg(Color::Rgb(21, 128, 61))), + Span::styled( + current.to_string(), + Style::default().fg(Color::Rgb(21, 128, 61)), + ), + Span::raw(" "), + Span::styled("Cloned: ", Style::default().fg(Color::Cyan)), + Span::styled(cloned.to_string(), Style::default().fg(Color::Cyan)), + Span::raw(" "), + Span::styled("Failed: ", Style::default().fg(Color::Red)), + Span::styled( + summary.failed.to_string(), + Style::default().fg(Color::Red).add_modifier(Modifier::BOLD), + ), + Span::raw(" "), + Span::styled("Skipped: ", Style::default().fg(Color::DarkGray)), + Span::styled( + summary.skipped.to_string(), + Style::default().fg(Color::DarkGray), + ), + ]; + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + OperationState::Discovering { message, .. } => { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled("Discovering: ", Style::default().fg(Color::Yellow)), + Span::styled(message.as_str(), Style::default().fg(Color::DarkGray)), + ])), + area, + ); + } + OperationState::Idle => { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + "No sync activity yet.", + Style::default().fg(Color::DarkGray), + ), + ])), + area, + ); + } + } +} + +fn render_throughput(app: &App, frame: &mut Frame, area: Rect) { + match &app.operation_state { + OperationState::Running { + completed, + total, + started_at, + throughput_samples, + .. 
+ } => { + let elapsed = started_at.elapsed(); + let elapsed_secs = elapsed.as_secs_f64(); + let repos_per_sec = if elapsed_secs > 1.0 { + *completed as f64 / elapsed_secs + } else { + 0.0 + }; + let remaining = total.saturating_sub(*completed); + let eta_secs = if repos_per_sec > 0.1 { + (remaining as f64 / repos_per_sec).ceil() as u64 + } else { + 0 + }; + + let mut spans = vec![ + Span::raw(" "), + Span::styled("Elapsed: ", Style::default().fg(Color::DarkGray)), + Span::styled(format_duration(elapsed), Style::default().fg(Color::Cyan)), + ]; + + if repos_per_sec > 0.0 { + spans.push(Span::raw(" ")); + spans.push(Span::styled( + format!("~{:.1} repos/sec", repos_per_sec), + Style::default().fg(Color::DarkGray), + )); + } + + let has_eta_data = throughput_samples.iter().any(|&sample| sample > 0); + if has_eta_data && eta_secs > 0 && *completed > 0 { + spans.push(Span::raw(" ")); + spans.push(Span::styled("ETA: ", Style::default().fg(Color::DarkGray))); + spans.push(Span::styled( + format!("~{}s", eta_secs), + Style::default().fg(Color::Cyan), + )); + } + + // Add sparkline inline if we have samples. + if !throughput_samples.is_empty() { + spans.push(Span::raw(" ")); + let max_val = throughput_samples.iter().copied().max().unwrap_or(1).max(1); + let bars = ['▁', '▂', '▃', '▄', '▅', '▆', '▇', '█']; + let spark_str: String = throughput_samples + .iter() + .rev() + .take(20) + .collect::<Vec<_>>() + .iter() + .rev() + .map(|&v| { + let idx = ((*v as f64 / max_val as f64) * 7.0) as usize; + bars[idx.min(7)] + }) + .collect(); + spans.push(Span::styled(spark_str, Style::default().fg(Color::Cyan))); + } + + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + OperationState::Finished { .. } => { + render_performance_line(app, frame, area); + } + OperationState::Discovering { ..
} => { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + "Building sync plan...", + Style::default().fg(Color::DarkGray), + ), + ])), + area, + ); + } + OperationState::Idle => { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + "Press [p] to hide, [s] to start.", + Style::default().fg(Color::DarkGray), + ), + ])), + area, + ); + } + } +} + +fn render_phase_indicator(app: &App, frame: &mut Frame, area: Rect) { + match &app.operation_state { + OperationState::Running { + to_clone, + to_sync, + cloned, + synced, + .. + } => { + if *to_clone == 0 && *to_sync == 0 { + return; + } + + let mut spans = vec![Span::raw(" Phase: ")]; + + if *to_clone > 0 { + let clone_pct = if *to_clone > 0 { + *cloned as f64 / *to_clone as f64 + } else { + 0.0 + }; + let bar_width: usize = 8; + let filled = (clone_pct * bar_width as f64).round() as usize; + spans.push(Span::styled( + "\u{2588}".repeat(filled), + Style::default().fg(Color::Cyan), + )); + spans.push(Span::styled( + "\u{2591}".repeat(bar_width.saturating_sub(filled)), + Style::default().fg(Color::DarkGray), + )); + spans.push(Span::styled( + format!(" Clone {}/{}", cloned, to_clone), + Style::default().fg(Color::Cyan), + )); + spans.push(Span::raw(" ")); + } + + if *to_sync > 0 { + let sync_pct = if *to_sync > 0 { + *synced as f64 / *to_sync as f64 + } else { + 0.0 + }; + let bar_width: usize = 12; + let filled = (sync_pct * bar_width as f64).round() as usize; + spans.push(Span::styled( + "\u{2588}".repeat(filled), + Style::default().fg(Color::Rgb(21, 128, 61)), + )); + spans.push(Span::styled( + "\u{2591}".repeat(bar_width.saturating_sub(filled)), + Style::default().fg(Color::DarkGray), + )); + spans.push(Span::styled( + format!(" Sync {}/{}", synced, to_sync), + Style::default().fg(Color::Rgb(21, 128, 61)), + )); + } + + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + OperationState::Finished { .. 
} => { + let label = match app.log_filter { + LogFilter::All => "All", + LogFilter::Updated => "Updated", + LogFilter::Failed => "Failed", + LogFilter::Skipped => "Skipped", + LogFilter::Changelog => "Changelog", + }; + + let spans = vec![ + Span::raw(" "), + Span::styled("Filter: ", Style::default().fg(Color::DarkGray)), + Span::styled(label, Style::default().fg(Color::Cyan)), + Span::styled(" | ", Style::default().fg(Color::DarkGray)), + Span::styled( + format!("{} entries", filtered_log_count(app)), + Style::default().fg(Color::DarkGray), + ), + Span::styled(" | ", Style::default().fg(Color::DarkGray)), + Span::styled("[←]/[→]", Style::default().fg(Color::Rgb(37, 99, 235))), + Span::styled(" filter", Style::default().fg(Color::DarkGray)), + ]; + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + _ => {} + } +} + +fn render_worker_slots(app: &App, frame: &mut Frame, area: Rect) { + match &app.operation_state { + OperationState::Running { active_repos, .. } => { + if active_repos.is_empty() { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled("Workers idle", Style::default().fg(Color::DarkGray)), + ])), + area, + ); + return; + } + + let mut spans = vec![Span::raw(" ")]; + for (i, repo) in active_repos.iter().enumerate() { + if i > 0 { + spans.push(Span::raw(" ")); + } + spans.push(Span::styled( + format!("[{}]", i + 1), + Style::default() + .fg(Color::DarkGray) + .add_modifier(Modifier::BOLD), + )); + spans.push(Span::raw(" ")); + // Show just the repo name (not org/) to save space. + let short = repo.split('/').next_back().unwrap_or(repo); + spans.push(Span::styled(short, Style::default().fg(Color::Cyan))); + } + + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + OperationState::Finished { + total_new_commits, .. + } => { + let mut spans = vec![ + Span::raw(" "), + Span::styled( + "Completed. 
", + Style::default() + .fg(Color::Rgb(21, 128, 61)) + .add_modifier(Modifier::BOLD), + ), + Span::styled("[↑]/[↓] move", Style::default().fg(Color::Rgb(37, 99, 235))), + Span::styled(" ", Style::default().fg(Color::DarkGray)), + Span::styled( + "[Enter] commit details", + Style::default().fg(Color::Rgb(37, 99, 235)), + ), + ]; + + if *total_new_commits > 0 { + spans.push(Span::styled( + format!(" | {} new commits", total_new_commits), + Style::default().fg(Color::Yellow), + )); + } + + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } + OperationState::Discovering { .. } => { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + "Waiting for workers...", + Style::default().fg(Color::DarkGray), + ), + ])), + area, + ); + } + OperationState::Idle => { + frame.render_widget( + Paragraph::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + "Use [p] to close this popup.", + Style::default().fg(Color::DarkGray), + ), + ])), + area, + ); + } + } +} + +fn render_running_log(app: &App, frame: &mut Frame, area: Rect) { + if app.log_lines.is_empty() { + let message = match app.operation_state { + OperationState::Idle => " No sync activity yet. Press [s] to start sync.", + OperationState::Discovering { .. 
} => " Discovering repositories...", + _ => " Waiting for log output...", + }; + let empty = Paragraph::new(Line::from(Span::styled( + message, + Style::default().fg(Color::DarkGray), + ))) + .block( + Block::default() + .title(" Log ") + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(empty, area); + return; + } + + let visible_height = area.height.saturating_sub(2) as usize; + let total = app.log_lines.len(); + let max_start = total.saturating_sub(visible_height); + let start = app.scroll_offset.min(max_start); + let end = (start + visible_height).min(total); + + let items: Vec<ListItem> = app.log_lines[start..end] + .iter() + .map(|line| { + let style = if line.starts_with("[**]") { + Style::default().fg(Color::Yellow) + } else if line.starts_with("[++]") { + Style::default().fg(Color::Cyan) + } else if line.starts_with("[ok]") { + Style::default().fg(Color::Rgb(21, 128, 61)) + } else if line.starts_with("[!!]") { + Style::default().fg(Color::Red) + } else if line.starts_with("[--]") { + Style::default().fg(Color::DarkGray) + } else { + Style::default() + }; + ListItem::new(Line::from(Span::styled(format!(" {}", line), style))) + }) + .collect(); + + let log = List::new(items).block( + Block::default() + .title(" Log ") + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(log, area); +} + +// ── Post-sync specific renders ────────────────────────────────────────────── + +fn render_performance_line(app: &App, frame: &mut Frame, area: Rect) { + if let OperationState::Finished { + summary, + duration_secs, + total_new_commits, + cloned, + ..
+ } = &app.operation_state + { + let total = summary.success + summary.failed + summary.skipped; + let repos_per_sec = if *duration_secs > 0.0 { + total as f64 / duration_secs + } else { + 0.0 + }; + + let mut spans = vec![ + Span::raw(" "), + Span::styled( + format!("{} repos", total), + Style::default().fg(Color::DarkGray), + ), + Span::styled(" in ", Style::default().fg(Color::DarkGray)), + Span::styled( + format!("{:.1}s", duration_secs), + Style::default().fg(Color::Cyan), + ), + Span::styled( + format!(" ({:.1} repos/sec)", repos_per_sec), + Style::default().fg(Color::DarkGray), + ), + ]; + + if *total_new_commits > 0 { + spans.push(Span::styled( + format!(" \u{00b7} {} new commits", total_new_commits), + Style::default().fg(Color::Yellow), + )); + } + + if *cloned > 0 { + spans.push(Span::styled( + format!(" \u{00b7} {} cloned", cloned), + Style::default().fg(Color::Cyan), + )); + } + + frame.render_widget(Paragraph::new(Line::from(spans)), area); + } +} + +fn render_filterable_log(app: &App, frame: &mut Frame, area: Rect) { + // Changelog mode has its own renderer + if app.log_filter == LogFilter::Changelog { + render_changelog(app, frame, area); + return; + } + + let entries: Vec<&crate::tui::app::SyncLogEntry> = match app.log_filter { + LogFilter::All => app.sync_log_entries.iter().collect(), + LogFilter::Updated => app + .sync_log_entries + .iter() + .filter(|e| e.had_updates || e.is_clone) + .collect(), + LogFilter::Failed => app + .sync_log_entries + .iter() + .filter(|e| e.status == SyncLogStatus::Failed) + .collect(), + LogFilter::Skipped => app + .sync_log_entries + .iter() + .filter(|e| e.status == SyncLogStatus::Skipped) + .collect(), + LogFilter::Changelog => app + .sync_log_entries + .iter() + .filter(|e| e.had_updates) + .collect(), + }; + + let visible_height = area.height.saturating_sub(2) as usize; + let total_entries = entries.len(); + + // Ensure scroll index is in bounds + let scroll_start = if total_entries > visible_height { + let 
max_start = total_entries.saturating_sub(visible_height); + app.sync_log_index.min(max_start) + } else { + 0 + }; + + let mut items: Vec<ListItem> = Vec::new(); + let is_expanded = app.expanded_repo.is_some(); + + for (i, entry) in entries + .iter() + .skip(scroll_start) + .take(visible_height) + .enumerate() + { + let (prefix, color) = match entry.status { + SyncLogStatus::Updated => ("[**]", Color::Yellow), + SyncLogStatus::Cloned => ("[++]", Color::Cyan), + SyncLogStatus::Success => ("[ok]", Color::Rgb(21, 128, 61)), + SyncLogStatus::Failed => ("[!!]", Color::Red), + SyncLogStatus::Skipped => ("[--]", Color::DarkGray), + }; + + let is_selected = i + scroll_start == app.sync_log_index; + let this_expanded = is_expanded && app.expanded_repo.as_deref() == Some(&entry.repo_name); + let style = if is_selected { + Style::default().fg(color).add_modifier(Modifier::BOLD) + } else { + Style::default().fg(color) + }; + + let indicator = if this_expanded { + " v " + } else if is_selected { + " > " + } else { + " " + }; + + let mut spans = vec![ + Span::styled(indicator, style), + Span::styled(prefix, style), + Span::raw(" "), + Span::styled(&entry.repo_name, style), + ]; + + // Add detail based on status + match entry.status { + SyncLogStatus::Updated | SyncLogStatus::Cloned => { + spans.push(Span::styled( + format!(" - {}", entry.message), + Style::default().fg(Color::DarkGray), + )); + if let Some(n) = entry.new_commits { + if n > 0 { + spans.push(Span::styled( + format!(" ({} new commits)", n), + Style::default().fg(Color::DarkGray), + )); + } + } + } + _ => { + spans.push(Span::styled( + format!(" - {}", entry.message), + Style::default().fg(Color::DarkGray), + )); + } + } + + items.push(ListItem::new(Line::from(spans))); + + // Render expanded commits inline below this entry + if this_expanded { + if app.repo_commits.is_empty() { + items.push(ListItem::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + "Loading...", + Style::default() + .fg(Color::DarkGray) +
.add_modifier(Modifier::ITALIC), + ), + ]))); + } else { + let max_commits = visible_height.saturating_sub(items.len()).max(3); + for commit in app.repo_commits.iter().take(max_commits) { + items.push(ListItem::new(Line::from(vec![ + Span::raw(" "), + Span::styled(commit, Style::default().fg(Color::DarkGray)), + ]))); + } + if app.repo_commits.len() > max_commits { + items.push(ListItem::new(Line::from(vec![ + Span::raw(" "), + Span::styled( + format!("... and {} more", app.repo_commits.len() - max_commits), + Style::default().fg(Color::DarkGray), + ), + ]))); + } + } + } + } + + let filter_label = match app.log_filter { + LogFilter::All => "All", + LogFilter::Updated => "Updated", + LogFilter::Failed => "Failed", + LogFilter::Skipped => "Skipped", + LogFilter::Changelog => "Changelog", + }; + + let title = format!(" Log [{}] ({}) ", filter_label, total_entries); + + let log = List::new(items).block( + Block::default() + .title(title) + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(log, area); +} + +// ── Aggregate changelog ───────────────────────────────────────────────────── + +const REPO_COLORS: [Color; 4] = [Color::Yellow, Color::Cyan, Color::Green, Color::Magenta]; + +fn render_changelog(app: &App, frame: &mut Frame, area: Rect) { + let updated_repos: Vec<&crate::tui::app::SyncLogEntry> = app + .sync_log_entries + .iter() + .filter(|e| e.had_updates) + .collect(); + + // Loading state + if app.changelog_loaded < app.changelog_total && app.changelog_total > 0 { + let loading = format!( + "Fetching commits from {} updated repositories... 
{}/{}",
+            app.changelog_total, app.changelog_loaded, app.changelog_total
+        );
+        let block = Block::default()
+            .title(" Log [Changelog] ")
+            .borders(Borders::ALL)
+            .border_style(Style::default().fg(Color::DarkGray));
+        let paragraph = Paragraph::new(loading)
+            .alignment(Alignment::Center)
+            .block(block);
+        frame.render_widget(paragraph, area);
+        return;
+    }
+
+    // Empty state
+    if updated_repos.is_empty() {
+        let block = Block::default()
+            .title(" Log [Changelog] ")
+            .borders(Borders::ALL)
+            .border_style(Style::default().fg(Color::DarkGray));
+        let paragraph = Paragraph::new("No updated repositories")
+            .alignment(Alignment::Center)
+            .block(block);
+        frame.render_widget(paragraph, area);
+        return;
+    }
+
+    // Build grouped timeline items
+    let mut items: Vec<ListItem> = Vec::new();
+    let total_commits: usize = app.changelog_commits.values().map(|v| v.len()).sum();
+
+    for (i, entry) in updated_repos.iter().enumerate() {
+        let color = REPO_COLORS[i % REPO_COLORS.len()];
+        let commits = app.changelog_commits.get(&entry.repo_name);
+        let count = commits.map(|c| c.len()).unwrap_or(0);
+
+        // Repo header: ● repo/name ··················· N commits
+        let header_right = format!("{} commits ", count);
+        let used: u16 = 6 + entry.repo_name.len() as u16 + header_right.len() as u16;
+        let padding = area.width.saturating_sub(used + 2) as usize;
+        let dots = "·".repeat(padding);
+
+        items.push(ListItem::new(Line::from(vec![
+            Span::styled(
+                " ● ",
+                Style::default().fg(color).add_modifier(Modifier::BOLD),
+            ),
+            Span::styled(
+                entry.repo_name.as_str(),
+                Style::default().fg(color).add_modifier(Modifier::BOLD),
+            ),
+            Span::styled(format!(" {} ", dots), Style::default().fg(Color::DarkGray)),
+            Span::styled(header_right, Style::default().fg(Color::DarkGray)),
+        ])));
+
+        // Commit lines with │ connector
+        if let Some(commits) = commits {
+            for (j, commit) in commits.iter().enumerate() {
+                let connector = if j < commits.len() - 1 { "│" } else { " " };
items.push(ListItem::new(Line::from(vec![
+                    Span::styled(format!(" {connector} "), Style::default().fg(color)),
+                    Span::styled(commit.as_str(), Style::default().fg(Color::DarkGray)),
+                ])));
+            }
+        }
+
+        // Blank separator between repos (except last)
+        if i < updated_repos.len() - 1 {
+            items.push(ListItem::new(Line::from("")));
+        }
+    }
+
+    let visible_height = area.height.saturating_sub(2) as usize;
+    let total_lines = items.len();
+    let max_scroll = total_lines.saturating_sub(visible_height);
+    let scroll = app.changelog_scroll.min(max_scroll);
+
+    let title = format!(
+        " Log [Changelog] ({} commits across {} repos) ",
+        total_commits,
+        updated_repos.len()
+    );
+
+    let items: Vec<ListItem> = items
+        .into_iter()
+        .skip(scroll)
+        .take(visible_height)
+        .collect();
+
+    let list = List::new(items).block(
+        Block::default()
+            .title(title)
+            .borders(Borders::ALL)
+            .border_style(Style::default().fg(Color::DarkGray)),
+    );
+    frame.render_widget(list, area);
+}
+
+// ── Sync history overlay ────────────────────────────────────────────────────
+
+fn render_sync_history_overlay(app: &App, frame: &mut Frame, area: Rect) {
+    if app.sync_history.is_empty() {
+        return;
+    }
+
+    let overlay_height = (app.sync_history.len() as u16 + 2).min(14);
+    let overlay_width = 60u16.min(area.width.saturating_sub(4));
+
+    let x = area.x + area.width.saturating_sub(overlay_width) / 2;
+    let y = area.y + area.height.saturating_sub(overlay_height) / 2;
+    let overlay_area = Rect::new(x, y, overlay_width, overlay_height);
+
+    frame.render_widget(Clear, overlay_area);
+
+    let items: Vec<ListItem> = app
+        .sync_history
+        .iter()
+        .rev()
+        .map(|entry| {
+            // Parse and format timestamp
+            let time_str = if let Ok(dt) = chrono::DateTime::parse_from_rfc3339(&entry.timestamp) {
+                dt.format("%b %d, %H:%M").to_string()
+            } else {
+                "unknown".to_string()
+            };
+
+            let total = entry.success + entry.failed + entry.skipped;
+            let mut spans = vec![
+                Span::raw(" "),
+                Span::styled(
+                    format!("{:<14}", time_str),
Style::default().fg(Color::DarkGray),
+                ),
+                Span::styled(
+                    format!("{:>3} repos", total),
+                    Style::default().fg(Color::Cyan),
+                ),
+                Span::raw(" "),
+            ];
+
+            if entry.with_updates > 0 {
+                spans.push(Span::styled(
+                    format!("{} updated", entry.with_updates),
+                    Style::default().fg(Color::Yellow),
+                ));
+            } else if entry.cloned > 0 {
+                spans.push(Span::styled(
+                    format!("{} cloned", entry.cloned),
+                    Style::default().fg(Color::Cyan),
+                ));
+            } else {
+                spans.push(Span::styled(
+                    "no changes",
+                    Style::default().fg(Color::DarkGray),
+                ));
+            }
+
+            spans.push(Span::raw(" "));
+            spans.push(Span::styled(
+                format!("{:.1}s", entry.duration_secs),
+                Style::default().fg(Color::DarkGray),
+            ));
+
+            ListItem::new(Line::from(spans))
+        })
+        .collect();
+
+    let list = List::new(items).block(
+        Block::default()
+            .title(" Sync History ")
+            .borders(Borders::ALL)
+            .border_type(BorderType::Thick)
+            .border_style(Style::default().fg(Color::Cyan)),
+    );
+    frame.render_widget(list, overlay_area);
+}
+
+// ── Utilities ───────────────────────────────────────────────────────────────
+
+fn format_duration(d: std::time::Duration) -> String {
+    let secs = d.as_secs();
+    if secs >= 60 {
+        format!("{}m{}s", secs / 60, secs % 60)
+    } else {
+        format!("{}s", secs)
+    }
+}
+
+#[cfg(test)]
+#[path = "sync_tests.rs"]
+mod tests;
diff --git a/src/tui/screens/sync_tests.rs b/src/tui/screens/sync_tests.rs
new file mode 100644
index 0000000..a44b745
--- /dev/null
+++ b/src/tui/screens/sync_tests.rs
@@ -0,0 +1,79 @@
+use super::*;
+use crate::config::{Config, WorkspaceConfig};
+use crate::tui::app::{Operation, Screen};
+use crate::types::OpSummary;
+use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};
+use tokio::sync::mpsc::unbounded_channel;
+
+fn build_app() -> App {
+    let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws");
+    let mut app = App::new(Config::default(), vec![ws]);
+    app.screen = Screen::Sync;
+    app.screen_stack = vec![Screen::Dashboard];
+    app
+}
+
+#[test]
+fn
sync_key_p_hides_progress_popup() {
+    let mut app = build_app();
+    let (tx, _rx) = unbounded_channel();
+    app.scroll_offset = 5;
+
+    handle_key(
+        &mut app,
+        KeyEvent::new(KeyCode::Char('p'), KeyModifiers::NONE),
+        &tx,
+    );
+
+    assert_eq!(app.screen, Screen::Dashboard);
+    assert_eq!(app.scroll_offset, 5);
+}
+
+#[tokio::test]
+async fn sync_key_s_starts_sync() {
+    let mut app = build_app();
+    let (tx, _rx) = unbounded_channel();
+
+    handle_key(
+        &mut app,
+        KeyEvent::new(KeyCode::Char('s'), KeyModifiers::NONE),
+        &tx,
+    );
+
+    assert_eq!(app.screen, Screen::Sync);
+    assert!(matches!(
+        app.operation_state,
+        OperationState::Discovering {
+            operation: Operation::Sync,
+            ..
+        }
+    ));
+}
+
+#[test]
+fn right_arrow_cycles_finished_filter() {
+    let mut app = build_app();
+    let (tx, _rx) = unbounded_channel();
+    app.operation_state = OperationState::Finished {
+        operation: Operation::Sync,
+        summary: OpSummary {
+            success: 1,
+            failed: 0,
+            skipped: 0,
+        },
+        with_updates: 0,
+        cloned: 0,
+        synced: 1,
+        total_new_commits: 0,
+        duration_secs: 1.0,
+    };
+    app.log_filter = LogFilter::All;
+
+    handle_key(
+        &mut app,
+        KeyEvent::new(KeyCode::Right, KeyModifiers::NONE),
+        &tx,
+    );
+
+    assert_eq!(app.log_filter, LogFilter::Updated);
+}
diff --git a/src/tui/screens/system_check.rs b/src/tui/screens/system_check.rs
new file mode 100644
index 0000000..f5fb5df
--- /dev/null
+++ b/src/tui/screens/system_check.rs
@@ -0,0 +1,229 @@
+//! Init check screen — displays requirement check results.
+
+use ratatui::{
+    layout::{Constraint, Layout},
+    style::{Color, Modifier, Style},
+    text::{Line, Span},
+    widgets::{Block, Borders, List, ListItem, Paragraph},
+    Frame,
+};
+
+use crossterm::event::{KeyCode, KeyEvent};
+use tokio::sync::mpsc::UnboundedSender;
+
+use crate::config::Config;
+use crate::setup::state::SetupState;
+use crate::tui::app::{App, CheckEntry, Screen};
+use crate::tui::event::{AppEvent, BackendMessage};
+use crate::tui::widgets::status_bar;
+
+// ── Key handler ─────────────────────────────────────────────────────────────
+
+pub async fn handle_key(app: &mut App, key: KeyEvent, backend_tx: &UnboundedSender<AppEvent>) {
+    match key.code {
+        KeyCode::Enter if !app.checks_loading => {
+            // Run requirement checks
+            app.checks_loading = true;
+            let results = crate::checks::check_requirements().await;
+            app.check_results = results
+                .into_iter()
+                .map(|r| CheckEntry {
+                    name: r.name,
+                    passed: r.passed,
+                    message: r.message,
+                    critical: r.critical,
+                })
+                .collect();
+            app.checks_loading = false;
+        }
+        KeyCode::Char('c') if !app.check_results.is_empty() && !app.config_created => {
+            // Create config file
+            let tx = backend_tx.clone();
+            tokio::spawn(async move {
+                match Config::default_path() {
+                    Ok(config_path) => {
+                        if config_path.exists() {
+                            let _ = tx.send(AppEvent::Backend(BackendMessage::InitConfigError(
+                                format!(
+                                    "Config already exists at {}.
Delete it first to recreate.",
+                                    config_path.display()
+                                ),
+                            )));
+                            return;
+                        }
+                        if let Some(parent) = config_path.parent() {
+                            if let Err(e) = std::fs::create_dir_all(parent) {
+                                let _ =
+                                    tx.send(AppEvent::Backend(BackendMessage::InitConfigError(
+                                        format!("Failed to create config directory: {}", e),
+                                    )));
+                                return;
+                            }
+                        }
+                        let default_config = Config::default_toml();
+                        match std::fs::write(&config_path, default_config) {
+                            Ok(()) => {
+                                let _ =
+                                    tx.send(AppEvent::Backend(BackendMessage::InitConfigCreated(
+                                        config_path.display().to_string(),
+                                    )));
+                            }
+                            Err(e) => {
+                                let _ =
+                                    tx.send(AppEvent::Backend(BackendMessage::InitConfigError(
+                                        format!("Failed to write config: {}", e),
+                                    )));
+                            }
+                        }
+                    }
+                    Err(e) => {
+                        let _ = tx.send(AppEvent::Backend(BackendMessage::InitConfigError(
+                            format!("Cannot determine config path: {}", e),
+                        )));
+                    }
+                }
+            });
+        }
+        KeyCode::Char('s') => {
+            // Launch setup wizard
+            let default_path = std::env::current_dir()
+                .map(|p| crate::setup::state::tilde_collapse(&p.to_string_lossy()))
+                .unwrap_or_else(|_| "~/Git-Same/GitHub".to_string());
+            app.setup_state = Some(SetupState::new(&default_path));
+            app.navigate_to(Screen::WorkspaceSetup);
+        }
+        _ => {}
+    }
+}
+
+// ── Render ──────────────────────────────────────────────────────────────────
+
+pub fn render(app: &App, frame: &mut Frame) {
+    let chunks = Layout::vertical([
+        Constraint::Length(3), // Title
+        Constraint::Min(8),    // Check results
+        Constraint::Length(3), // Help
+        Constraint::Length(1), // Status bar
+    ])
+    .split(frame.area());
+
+    // Title
+    let title = Paragraph::new(Line::from(Span::styled(
+        " System Requirements ",
+        Style::default()
+            .fg(Color::Cyan)
+            .add_modifier(Modifier::BOLD),
+    )))
+    .centered()
+    .block(
+        Block::default()
+            .borders(Borders::BOTTOM)
+            .border_style(Style::default().fg(Color::DarkGray)),
+    );
+    frame.render_widget(title, chunks[0]);
+
+    // Check results
+    if app.checks_loading {
+        let loading =
Paragraph::new(Line::from(Span::styled(
+            " Checking requirements...",
+            Style::default().fg(Color::Yellow),
+        )))
+        .block(
+            Block::default()
+                .borders(Borders::ALL)
+                .border_style(Style::default().fg(Color::DarkGray)),
+        );
+        frame.render_widget(loading, chunks[1]);
+    } else if app.check_results.is_empty() {
+        let empty = Paragraph::new(Line::from(Span::styled(
+            " Press Enter to check requirements",
+            Style::default().fg(Color::DarkGray),
+        )))
+        .block(
+            Block::default()
+                .borders(Borders::ALL)
+                .border_style(Style::default().fg(Color::DarkGray)),
+        );
+        frame.render_widget(empty, chunks[1]);
+    } else {
+        let items: Vec<ListItem> = app
+            .check_results
+            .iter()
+            .map(|check| {
+                let (icon, color) = if check.passed {
+                    (" pass ", Color::Rgb(21, 128, 61))
+                } else if check.critical {
+                    (" FAIL ", Color::Red)
+                } else {
+                    (" warn ", Color::Yellow)
+                };
+                ListItem::new(Line::from(vec![
+                    Span::styled(
+                        icon,
+                        Style::default().fg(color).add_modifier(Modifier::BOLD),
+                    ),
+                    Span::styled(&check.name, Style::default().fg(Color::White)),
+                    Span::styled(" — ", Style::default().fg(Color::DarkGray)),
+                    Span::styled(&check.message, Style::default().fg(Color::DarkGray)),
+                ]))
+            })
+            .collect();
+
+        let list = List::new(items).block(
+            Block::default()
+                .title(" Results ")
+                .borders(Borders::ALL)
+                .border_style(Style::default().fg(Color::DarkGray)),
+        );
+        frame.render_widget(list, chunks[1]);
+    }
+
+    // Help text / config status
+    let help_lines = if app.config_created {
+        let path = app
+            .config_path_display
+            .as_deref()
+            .unwrap_or("~/.config/git-same/config.toml");
+        vec![Line::from(vec![
+            Span::styled(
+                " Config created at ",
+                Style::default().fg(Color::Rgb(21, 128, 61)),
+            ),
+            Span::styled(path, Style::default().fg(Color::Cyan)),
+            Span::styled(
+                " — Press 's' to set up a workspace.",
+                Style::default().fg(Color::Yellow),
+            ),
+        ])]
+    } else if !app.check_results.is_empty() {
+        vec![Line::from(vec![
+            Span::styled(
+                " Press 'c' to create config",
Style::default().fg(Color::Yellow),
+            ),
+            Span::styled(
+                " or 's' to set up a workspace.",
+                Style::default().fg(Color::DarkGray),
+            ),
+        ])]
+    } else {
+        vec![Line::from(Span::styled(
+            " No workspaces configured. Press 's' to set up a workspace.",
+            Style::default().fg(Color::Yellow),
+        ))]
+    };
+
+    let help = Paragraph::new(help_lines).block(Block::default().borders(Borders::TOP));
+    frame.render_widget(help, chunks[2]);
+
+    let hint = if !app.check_results.is_empty() && !app.config_created {
+        "Enter: Re-check c: Create Config s: Setup q: Quit"
+    } else {
+        "s: Setup Enter: Check q: Quit"
+    };
+    status_bar::render(frame, chunks[3], hint);
+}
+
+#[cfg(test)]
+#[path = "system_check_tests.rs"]
+mod tests;
diff --git a/src/tui/screens/system_check_tests.rs b/src/tui/screens/system_check_tests.rs
new file mode 100644
index 0000000..363d419
--- /dev/null
+++ b/src/tui/screens/system_check_tests.rs
@@ -0,0 +1,74 @@
+use super::*;
+use crate::config::{Config, WorkspaceConfig};
+use crate::tui::app::Screen;
+use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};
+use ratatui::backend::TestBackend;
+use ratatui::Terminal;
+use tokio::sync::mpsc::unbounded_channel;
+
+fn render_output(app: &App) -> String {
+    let backend = TestBackend::new(110, 28);
+    let mut terminal = Terminal::new(backend).unwrap();
+
+    terminal.draw(|frame| render(app, frame)).unwrap();
+
+    let buffer = terminal.backend().buffer();
+    let mut text = String::new();
+    for y in 0..buffer.area.height {
+        for x in 0..buffer.area.width {
+            text.push_str(buffer[(x, y)].symbol());
+        }
+        text.push('\n');
+    }
+    text
+}
+
+fn app_for_screen() -> App {
+    let ws = WorkspaceConfig::new("ws", "/tmp/ws");
+    App::new(Config::default(), vec![ws])
+}
+
+#[tokio::test]
+async fn handle_key_s_opens_setup_wizard() {
+    let mut app = app_for_screen();
+    app.screen = Screen::SystemCheck;
+
+    let (tx, _rx) = unbounded_channel();
+    handle_key(
+        &mut app,
+        KeyEvent::new(KeyCode::Char('s'), KeyModifiers::NONE),
+        &tx,
+    )
.await;
+
+    assert_eq!(app.screen, Screen::WorkspaceSetup);
+    assert!(app.setup_state.is_some());
+}
+
+#[test]
+fn render_loading_state_shows_checking_message() {
+    let mut app = app_for_screen();
+    app.checks_loading = true;
+    app.check_results.clear();
+
+    let output = render_output(&app);
+    assert!(output.contains("System Requirements"));
+    assert!(output.contains("Checking requirements"));
+}
+
+#[test]
+fn render_results_state_shows_create_config_hint() {
+    let mut app = app_for_screen();
+    app.checks_loading = false;
+    app.config_created = false;
+    app.check_results = vec![CheckEntry {
+        name: "git".to_string(),
+        passed: true,
+        message: "installed".to_string(),
+        critical: true,
+    }];
+
+    let output = render_output(&app);
+    assert!(output.contains("Results"));
+    assert!(output.contains("Press 'c' to create config"));
+}
diff --git a/src/tui/screens/workspaces.rs b/src/tui/screens/workspaces.rs
new file mode 100644
index 0000000..9cd88e3
--- /dev/null
+++ b/src/tui/screens/workspaces.rs
@@ -0,0 +1,810 @@
+//! Workspace screen — two-pane layout with workspace list (left) and detail (right).
+//!
+//! Left sidebar lists all workspaces plus a "Create New Workspace" entry.
+//! Right panel shows detail for the selected workspace or a create prompt.
+
+use chrono::{DateTime, Utc};
+use ratatui::{
+    layout::{Constraint, Layout, Rect},
+    style::{Color, Modifier, Style},
+    text::{Line, Span},
+    widgets::{Block, Borders, List, ListItem, Paragraph},
+    Frame,
+};
+
+use crossterm::event::{KeyCode, KeyEvent};
+use tokio::sync::mpsc::UnboundedSender;
+
+#[cfg(test)]
+use std::sync::atomic::{AtomicUsize, Ordering};
+
+use crate::banner::render_banner;
+use crate::config::{Config, SyncMode, WorkspaceConfig, WorkspaceManager};
+use crate::setup::state::SetupState;
+use crate::tui::app::{App, Screen, WorkspacePane};
+use crate::tui::event::{AppEvent, BackendMessage};
+
+#[cfg(test)]
+static OPEN_WORKSPACE_FOLDER_CALLS: AtomicUsize = AtomicUsize::new(0);
+
+// ── Key handler ─────────────────────────────────────────────────────────────
+
+pub async fn handle_key(app: &mut App, key: KeyEvent, backend_tx: &UnboundedSender<AppEvent>) {
+    let num_ws = app.workspaces.len();
+    let total_entries = num_ws + 1; // workspaces + "Create New Workspace"
+
+    match key.code {
+        KeyCode::Left => {
+            app.workspace_pane = WorkspacePane::Left;
+        }
+        KeyCode::Right => {
+            app.workspace_pane = WorkspacePane::Right;
+        }
+        KeyCode::Tab => {
+            app.workspace_pane = match app.workspace_pane {
+                WorkspacePane::Left => WorkspacePane::Right,
+                WorkspacePane::Right => WorkspacePane::Left,
+            };
+        }
+        // Right pane: scroll detail when config is expanded
+        KeyCode::Down
+            if app.workspace_pane == WorkspacePane::Right && app.settings_config_expanded =>
+        {
+            app.workspace_detail_scroll = app.workspace_detail_scroll.saturating_add(1);
+        }
+        KeyCode::Up
+            if app.workspace_pane == WorkspacePane::Right && app.settings_config_expanded =>
+        {
+            app.workspace_detail_scroll = app.workspace_detail_scroll.saturating_sub(1);
+        }
+        // Left pane: navigate workspace list
+        KeyCode::Down if app.workspace_pane == WorkspacePane::Left && total_entries > 0 => {
+            app.workspace_index = (app.workspace_index + 1) % total_entries;
+            app.settings_config_expanded = false;
app.workspace_detail_scroll = 0;
+        }
+        KeyCode::Up if app.workspace_pane == WorkspacePane::Left && total_entries > 0 => {
+            app.workspace_index = (app.workspace_index + total_entries - 1) % total_entries;
+            app.settings_config_expanded = false;
+            app.workspace_detail_scroll = 0;
+        }
+        KeyCode::Enter => {
+            if app.workspace_index < num_ws {
+                // Select workspace and go to dashboard
+                app.select_workspace(app.workspace_index);
+                app.screen = Screen::Dashboard;
+                app.screen_stack.clear();
+            } else {
+                // "Create New Workspace" entry
+                let default_path = std::env::current_dir()
+                    .map(|p| crate::setup::state::tilde_collapse(&p.to_string_lossy()))
+                    .unwrap_or_else(|_| "~/Git-Same/GitHub".to_string());
+                app.setup_state = Some(SetupState::new(&default_path));
+                app.navigate_to(Screen::WorkspaceSetup);
+            }
+        }
+        KeyCode::Char('c') if app.workspace_index < num_ws => {
+            app.workspace_pane = WorkspacePane::Right;
+            app.settings_config_expanded = !app.settings_config_expanded;
+            app.workspace_detail_scroll = 0;
+        }
+        KeyCode::Char('n') => {
+            // Shortcut to create workspace
+            let default_path = std::env::current_dir()
+                .map(|p| crate::setup::state::tilde_collapse(&p.to_string_lossy()))
+                .unwrap_or_else(|_| "~/Git-Same/GitHub".to_string());
+            app.setup_state = Some(SetupState::new(&default_path));
+            app.navigate_to(Screen::WorkspaceSetup);
+        }
+        KeyCode::Char('d') if app.workspace_index < num_ws => {
+            // Set default workspace
+            if let Some(ws) = app.workspaces.get(app.workspace_index) {
+                let ws_name = ws.name.clone();
+                let new_default_name = match next_default_workspace_name(
+                    app.config.default_workspace.as_deref(),
+                    &ws_name,
+                ) {
+                    Some(name) => name,
+                    None => {
+                        return;
+                    }
+                };
+
+                let new_default = Some(new_default_name);
+                let tx = backend_tx.clone();
+                let default_clone = new_default.clone();
+                tokio::spawn(async move {
+                    match Config::save_default_workspace(default_clone.as_deref()) {
+                        Ok(()) => {
+                            let _ = tx.send(AppEvent::Backend(
BackendMessage::DefaultWorkspaceUpdated(default_clone),
+                            ));
+                        }
+                        Err(e) => {
+                            let _ = tx.send(AppEvent::Backend(
+                                BackendMessage::DefaultWorkspaceError(format!("{}", e)),
+                            ));
+                        }
+                    }
+                });
+            }
+        }
+        KeyCode::Char('f') if app.workspace_index < num_ws => {
+            // Open workspace folder
+            if let Some(ws) = app.workspaces.get(app.workspace_index) {
+                let path = ws.expanded_base_path();
+                open_workspace_folder(&path);
+            }
+        }
+        _ => {}
+    }
+}
+
+fn next_default_workspace_name(
+    current_default: Option<&str>,
+    selected_workspace: &str,
+) -> Option<String> {
+    if current_default == Some(selected_workspace) {
+        None
+    } else {
+        Some(selected_workspace.to_string())
+    }
+}
+
+#[cfg(all(not(test), target_os = "macos"))]
+fn open_workspace_folder(path: &std::path::Path) {
+    let _ = std::process::Command::new("open").arg(path).spawn();
+}
+
+#[cfg(all(not(test), target_os = "linux"))]
+fn open_workspace_folder(path: &std::path::Path) {
+    let _ = std::process::Command::new("xdg-open").arg(path).spawn();
+}
+
+#[cfg(all(not(test), target_os = "windows"))]
+fn open_workspace_folder(path: &std::path::Path) {
+    let _ = std::process::Command::new("explorer").arg(path).spawn();
+}
+
+#[cfg(all(
+    not(test),
+    not(any(target_os = "macos", target_os = "linux", target_os = "windows"))
+))]
+fn open_workspace_folder(_path: &std::path::Path) {}
+
+#[cfg(test)]
+fn open_workspace_folder(_path: &std::path::Path) {
+    OPEN_WORKSPACE_FOLDER_CALLS.fetch_add(1, Ordering::SeqCst);
+}
+
+#[cfg(test)]
+fn take_open_workspace_folder_call_count() -> usize {
+    OPEN_WORKSPACE_FOLDER_CALLS.swap(0, Ordering::SeqCst)
+}
+
+// ── Render ──────────────────────────────────────────────────────────────────
+
+pub fn render(app: &App, frame: &mut Frame) {
+    let chunks = Layout::vertical([
+        Constraint::Length(6), // Banner
+        Constraint::Length(3), // Title
+        Constraint::Min(5),    // Content (two panes)
+        Constraint::Length(2), // Bottom actions (2 lines)
+    ])
+    .split(frame.area());
+
+    render_banner(frame,
chunks[0]);
+
+    // Title
+    let title = Paragraph::new(Line::from(vec![Span::styled(
+        " Workspaces ",
+        Style::default()
+            .fg(Color::Cyan)
+            .add_modifier(Modifier::BOLD),
+    )]))
+    .block(
+        Block::default()
+            .borders(Borders::ALL)
+            .border_style(Style::default().fg(Color::DarkGray)),
+    )
+    .centered();
+    frame.render_widget(title, chunks[1]);
+
+    // Two-pane split
+    let panes = Layout::horizontal([Constraint::Percentage(25), Constraint::Percentage(75)])
+        .split(chunks[2]);
+
+    render_workspace_nav(app, frame, panes[0]);
+
+    if app.workspace_index < app.workspaces.len() {
+        if let Some(ws) = app.workspaces.get(app.workspace_index) {
+            render_workspace_detail(app, ws, frame, panes[1]);
+        }
+    } else {
+        render_create_workspace_detail(frame, panes[1]);
+    }
+
+    render_bottom_actions(app, frame, chunks[3]);
+}
+
+fn render_workspace_nav(app: &App, frame: &mut Frame, area: Rect) {
+    let dim = Style::default().fg(Color::DarkGray);
+    let mut items: Vec<ListItem> = Vec::new();
+
+    if app.workspaces.is_empty() {
+        items.push(ListItem::new(Line::from(Span::styled(
+            " (no workspaces)",
+            dim,
+        ))));
+    }
+
+    if !app.workspaces.is_empty() {
+        items.push(ListItem::new(Line::from("")));
+    }
+
+    for (i, ws) in app.workspaces.iter().enumerate() {
+        let selected = app.workspace_index == i;
+        let is_active = app
+            .active_workspace
+            .as_ref()
+            .map(|aw| aw.name == ws.name)
+            .unwrap_or(false);
+        let is_default = app.config.default_workspace.as_deref() == Some(ws.name.as_str());
+
+        let folder_name = std::path::Path::new(&ws.base_path)
+            .file_name()
+            .and_then(|f| f.to_str())
+            .unwrap_or(&ws.base_path);
+
+        let (marker, style) = if selected {
+            (
+                ">",
+                Style::default()
+                    .fg(Color::Cyan)
+                    .add_modifier(Modifier::BOLD),
+            )
+        } else {
+            (" ", Style::default())
+        };
+
+        let mut spans = vec![
+            Span::styled(format!(" {} ", marker), style),
+            Span::styled(folder_name.to_string(), style),
+        ];
+
+        if is_active {
+            spans.push(Span::styled(
+                " \u{25CF}",
+                Style::default()
.fg(Color::Rgb(21, 128, 61))
+                    .add_modifier(Modifier::BOLD),
+            ));
+        }
+        if is_default {
+            spans.push(Span::styled(
+                " (default)",
+                Style::default().fg(Color::Rgb(21, 128, 61)),
+            ));
+        }
+
+        items.push(ListItem::new(Line::from(spans)));
+    }
+
+    // Spacer before Create entry
+    if !app.workspaces.is_empty() {
+        items.push(ListItem::new(Line::from("")));
+    }
+
+    // "Create New Workspace" entry
+    let create_selected = app.workspace_index == app.workspaces.len();
+    let (create_marker, create_style) = if create_selected {
+        (
+            ">",
+            Style::default()
+                .fg(Color::Cyan)
+                .add_modifier(Modifier::BOLD),
+        )
+    } else {
+        (" ", Style::default().fg(Color::Rgb(21, 128, 61)))
+    };
+    items.push(ListItem::new(Line::from(vec![
+        Span::styled(format!(" {} ", create_marker), create_style),
+        Span::styled("Create New Workspace [n]", create_style),
+    ])));
+
+    let list = List::new(items).block(
+        Block::default()
+            .borders(Borders::ALL)
+            .border_style(Style::default().fg(Color::DarkGray)),
+    );
+    frame.render_widget(list, area);
+}
+
+fn render_workspace_detail(app: &App, ws: &WorkspaceConfig, frame: &mut Frame, area: Rect) {
+    let dim = Style::default().fg(Color::DarkGray);
+    let section_style = Style::default()
+        .fg(Color::White)
+        .add_modifier(Modifier::BOLD);
+    let val_style = Style::default().fg(Color::White);
+    let key_style = Style::default()
+        .fg(Color::Rgb(37, 99, 235))
+        .add_modifier(Modifier::BOLD);
+
+    let is_default = app
+        .config
+        .default_workspace
+        .as_deref()
+        .map(|d| d == ws.name)
+        .unwrap_or(false);
+
+    let is_active = app
+        .active_workspace
+        .as_ref()
+        .map(|aw| aw.name == ws.name)
+        .unwrap_or(false);
+
+    let full_path = ws.expanded_base_path().display().to_string();
+
+    let config_file = WorkspaceManager::workspace_dir(&ws.name)
+        .map(|d| d.join("workspace-config.toml").display().to_string())
+        .unwrap_or_else(|_| "unknown".to_string());
+
+    let cache_file = WorkspaceManager::cache_path(&ws.name)
+        .map(|p| p.display().to_string())
.unwrap_or_else(|_| "unknown".to_string());
+
+    let username = if ws.username.is_empty() {
+        "\u{2014}".to_string()
+    } else {
+        ws.username.clone()
+    };
+
+    let sync_mode = ws
+        .sync_mode
+        .map(sync_mode_name)
+        .map(ToString::to_string)
+        .unwrap_or_else(|| format!("{} (global default)", sync_mode_name(app.config.sync_mode)));
+
+    let concurrency = ws
+        .concurrency
+        .map(|c| c.to_string())
+        .unwrap_or_else(|| format!("{} (global default)", app.config.concurrency));
+
+    let (last_synced_relative, last_synced_absolute) =
+        format_last_synced(ws.last_synced.as_deref());
+
+    let default_label = if is_default { "Yes" } else { "No" };
+    let active_label = if is_active { "Yes" } else { "No" };
+
+    let folder_name = std::path::Path::new(&ws.base_path)
+        .file_name()
+        .and_then(|f| f.to_str())
+        .unwrap_or(&ws.base_path);
+
+    let mut lines = vec![
+        Line::from(""),
+        Line::from(Span::styled(
+            format!(" Workspace: {}", folder_name),
+            section_style,
+        )),
+        Line::from(""),
+    ];
+
+    lines.push(detail_row_with_hint(
+        area,
+        "Active",
+        active_label,
+        Some(("[Enter]", "Select Workspace")),
+        dim,
+        val_style,
+        key_style,
+    ));
+    lines.push(detail_row_with_hint(
+        area,
+        "Default",
+        default_label,
+        if is_default {
+            None
+        } else {
+            Some(("[d]", "Set default"))
+        },
+        dim,
+        val_style,
+        key_style,
+    ));
+    lines.push(Line::from(vec![
+        Span::styled(format!(" {:<14}", "Provider"), dim),
+        Span::styled(ws.provider.kind.display_name().to_string(), val_style),
+    ]));
+
+    lines.push(Line::from(""));
+    lines.push(Line::from(Span::styled(" Paths", section_style)));
+    lines.push(Line::from(""));
+    lines.push(detail_row_with_hint(
+        area,
+        "Path",
+        &ws.base_path,
+        None,
+        dim,
+        val_style,
+        key_style,
+    ));
+    lines.push(detail_row_with_hint(
+        area,
+        "Full path",
+        &full_path,
+        Some(("[f]", "Open Finder Folder")),
+        dim,
+        val_style,
+        key_style,
+    ));
+    lines.push(detail_row_with_hint(
+        area,
+        "Config",
+        &config_file,
+        None,
+        dim,
+        val_style,
+        key_style,
));
+    lines.push(Line::from(vec![
+        Span::styled(format!(" {:<14}", "Cache file"), dim),
+        Span::styled(cache_file, val_style),
+    ]));
+
+    lines.push(Line::from(""));
+    lines.push(Line::from(Span::styled(" Sync", section_style)));
+    lines.push(Line::from(""));
+    lines.push(Line::from(vec![
+        Span::styled(format!(" {:<14}", "Sync mode"), dim),
+        Span::styled(sync_mode, val_style),
+    ]));
+    lines.push(Line::from(vec![
+        Span::styled(format!(" {:<14}", "Concurrency"), dim),
+        Span::styled(concurrency, val_style),
+    ]));
+    lines.push(Line::from(vec![
+        Span::styled(format!(" {:<14}", "Last synced"), dim),
+        Span::styled(last_synced_relative, val_style),
+    ]));
+    if let Some(absolute) = last_synced_absolute {
+        lines.push(Line::from(vec![
+            Span::styled(format!(" {:<14}", ""), dim),
+            Span::styled(absolute, val_style),
+        ]));
+    }
+
+    lines.push(Line::from(""));
+    lines.push(Line::from(Span::styled(" Account", section_style)));
+    lines.push(Line::from(""));
+    lines.push(Line::from(vec![
+        Span::styled(format!(" {:<14}", "Username"), dim),
+        Span::styled(username, val_style),
+    ]));
+    let org_lines = wrap_comma_separated_values(&ws.orgs, field_value_width(area, 14));
+    if let Some((first, rest)) = org_lines.split_first() {
+        lines.push(Line::from(vec![
+            Span::styled(format!(" {:<14}", "Organizations"), dim),
+            Span::styled(first.as_str(), val_style),
+        ]));
+        for line in rest {
+            lines.push(Line::from(vec![
+                Span::styled(format!(" {:<14}", ""), dim),
+                Span::styled(line.as_str(), val_style),
+            ]));
+        }
+    }
+
+    // Config content section (collapsible)
+    lines.push(Line::from(""));
+    if app.settings_config_expanded {
+        lines.push(section_line_with_hint(
+            area,
+            "\u{25BC} Config",
+            "[c]",
+            "Collapse config file",
+            section_style,
+            dim,
+            key_style,
+        ));
+        lines.push(Line::from(""));
+        match ws.to_toml() {
+            Ok(toml) => {
+                for toml_line in toml.lines() {
+                    lines.push(Line::from(Span::styled(format!(" {}", toml_line), dim)));
+                }
+            }
+            Err(_) => {
lines.push(Line::from(Span::styled(
+                    " (failed to serialize config)",
+                    dim,
+                )));
+            }
+        }
+    } else {
+        lines.push(section_line_with_hint(
+            area,
+            "\u{25B6} Config",
+            "[c]",
+            "Expand config file",
+            section_style,
+            dim,
+            key_style,
+        ));
+    }
+
+    let content = Paragraph::new(lines)
+        .block(
+            Block::default()
+                .borders(Borders::ALL)
+                .border_style(Style::default().fg(Color::DarkGray)),
+        )
+        .scroll((app.workspace_detail_scroll, 0));
+    frame.render_widget(content, area);
+}
+
+fn sync_mode_name(mode: SyncMode) -> &'static str {
+    match mode {
+        SyncMode::Fetch => "fetch",
+        SyncMode::Pull => "pull",
+    }
+}
+
+fn detail_row_with_hint(
+    area: Rect,
+    label: &str,
+    value: &str,
+    hint: Option<(&str, &str)>,
+    dim: Style,
+    val_style: Style,
+    key_style: Style,
+) -> Line<'static> {
+    let right_padding = 2usize;
+    let label_text = format!(" {:<14}", label);
+    let mut spans = vec![
+        Span::styled(label_text.clone(), dim),
+        Span::styled(value.to_string(), val_style),
+    ];
+
+    if let Some((hint_key, hint_label)) = hint {
+        let content_width = area.width.saturating_sub(2) as usize;
+        let left_width = label_text.chars().count() + value.chars().count();
+        let hint_width = hint_key.chars().count() + 1 + hint_label.chars().count() + right_padding;
+        let gap = content_width.saturating_sub(left_width + hint_width).max(1);
+
+        spans.push(Span::raw(" ".repeat(gap)));
+        spans.push(Span::styled(hint_key.to_string(), key_style));
+        spans.push(Span::styled(format!(" {}", hint_label), dim));
+        spans.push(Span::raw(" ".repeat(right_padding)));
+    }
+
+    Line::from(spans)
+}
+
+fn section_line_with_hint(
+    area: Rect,
+    section: &str,
+    hint_key: &str,
+    hint_label: &str,
+    section_style: Style,
+    dim: Style,
+    key_style: Style,
+) -> Line<'static> {
+    let right_padding = 2usize;
+    let section_text = format!(" {}", section);
+    let content_width = area.width.saturating_sub(2) as usize;
+    let left_width = section_text.chars().count();
+    let hint_width = hint_key.chars().count()
+ 1 + hint_label.chars().count() + right_padding;
+    let gap = content_width.saturating_sub(left_width + hint_width).max(1);
+
+    Line::from(vec![
+        Span::styled(section_text, section_style),
+        Span::raw(" ".repeat(gap)),
+        Span::styled(hint_key.to_string(), key_style),
+        Span::styled(format!(" {}", hint_label), dim),
+        Span::raw(" ".repeat(right_padding)),
+    ])
+}
+
+fn format_last_synced(raw: Option<&str>) -> (String, Option<String>) {
+    let Some(raw) = raw else {
+        return ("never".to_string(), None);
+    };
+
+    match DateTime::parse_from_rfc3339(raw) {
+        Ok(dt) => {
+            let absolute = dt.format("%Y-%m-%d %H:%M:%S").to_string();
+            let duration = Utc::now().signed_duration_since(dt);
+            let relative = if duration.num_days() > 30 {
+                format!("about {}mo ago", duration.num_days() / 30)
+            } else if duration.num_days() > 0 {
+                format!("about {}d ago", duration.num_days())
+            } else if duration.num_hours() > 0 {
+                format!("about {}h ago", duration.num_hours())
+            } else if duration.num_minutes() > 0 {
+                format!("about {} min ago", duration.num_minutes())
+            } else {
+                "just now".to_string()
+            };
+            (relative, Some(absolute))
+        }
+        Err(_) => (raw.to_string(), None),
+    }
+}
+
+fn field_value_width(area: Rect, label_width: usize) -> usize {
+    let content_width = area.width.saturating_sub(2) as usize;
+    let prefix_width = 4 + label_width;
+    content_width.saturating_sub(prefix_width).max(16)
+}
+
+fn wrap_comma_separated_values(values: &[String], max_width: usize) -> Vec<String> {
+    if values.is_empty() {
+        return vec!["all".to_string()];
+    }
+
+    let mut lines = Vec::new();
+    let mut current = String::new();
+
+    for value in values {
+        if current.is_empty() {
+            current.push_str(value);
+            continue;
+        }
+
+        if current.len() + 2 + value.len() <= max_width {
+            current.push_str(", ");
+            current.push_str(value);
+        } else {
+            lines.push(current);
+            current = value.clone();
+        }
+    }
+
+    if !current.is_empty() {
+        lines.push(current);
+    }
+
+    lines
+}
+
+fn render_create_workspace_detail(frame: &mut Frame,
area: Rect) { + let dim = Style::default().fg(Color::DarkGray); + let section_style = Style::default() + .fg(Color::White) + .add_modifier(Modifier::BOLD); + let key_style = Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD); + + let lines = vec![ + Line::from(""), + section_line_with_hint( + area, + "New Workspace", + "[n]", + "Create New Workspace", + section_style, + dim, + key_style, + ), + Line::from(""), + Line::from(Span::styled( + " Press Enter to launch the Setup Wizard", + dim, + )), + Line::from(Span::styled(" and configure a new workspace.", dim)), + Line::from(""), + Line::from(Span::styled(" The wizard will guide you through:", dim)), + Line::from(Span::styled( + " \u{2022} Choosing a base directory", + dim, + )), + Line::from(Span::styled( + " \u{2022} Connecting to a provider (GitHub)", + dim, + )), + Line::from(Span::styled( + " \u{2022} Selecting organizations to sync", + dim, + )), + ]; + + let content = Paragraph::new(lines).block( + Block::default() + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(content, area); +} + +fn render_bottom_actions(app: &App, frame: &mut Frame, area: Rect) { + let rows = Layout::vertical([ + Constraint::Length(1), // Actions + Constraint::Length(1), // Navigation + ]) + .split(area); + + let dim = Style::default().fg(Color::DarkGray); + let key_style = Style::default() + .fg(Color::Rgb(37, 99, 235)) + .add_modifier(Modifier::BOLD); + + // Line 1: intentionally blank (action hints are shown inline in the right panel) + let actions = Paragraph::new(vec![Line::from("")]).centered(); + + // Line 2: Navigation — left (quit, back) and right (arrows) + let nav_cols = + Layout::horizontal([Constraint::Percentage(50), Constraint::Percentage(50)]).split(rows[1]); + + let left_spans = vec![ + Span::raw(" "), + Span::styled("[q]", key_style), + Span::styled(" Quit", dim), + Span::raw(" "), + Span::styled("[Esc]", key_style), + Span::styled(" 
Back", dim), + ]; + + let right_spans = if app.workspace_pane == WorkspacePane::Right + && app.workspace_index < app.workspaces.len() + && app.settings_config_expanded + { + vec![ + Span::styled("[\u{2190}]", key_style), + Span::raw(" "), + Span::styled("[\u{2192}]", key_style), + Span::styled(" Panel", dim), + Span::raw(" "), + Span::styled("[\u{2191}]", key_style), + Span::raw(" "), + Span::styled("[\u{2193}]", key_style), + Span::styled(" Scroll", dim), + Span::raw(" "), + Span::styled("[c]", key_style), + Span::styled(" Collapse", dim), + Span::raw(" "), + ] + } else if app.workspace_pane == WorkspacePane::Left { + vec![ + Span::styled("[\u{2190}]", key_style), + Span::raw(" "), + Span::styled("[\u{2192}]", key_style), + Span::styled(" Panel", dim), + Span::raw(" "), + Span::styled("[\u{2191}]", key_style), + Span::raw(" "), + Span::styled("[\u{2193}]", key_style), + Span::styled(" Move", dim), + Span::raw(" "), + Span::styled("[Enter]", key_style), + Span::styled(" Select", dim), + Span::raw(" "), + ] + } else { + vec![ + Span::styled("[\u{2190}]", key_style), + Span::raw(" "), + Span::styled("[\u{2192}]", key_style), + Span::styled(" Panel", dim), + Span::raw(" "), + Span::styled("[c]", key_style), + Span::styled(" Expand", dim), + Span::raw(" "), + Span::styled("[Enter]", key_style), + Span::styled(" Select", dim), + Span::raw(" "), + ] + }; + + frame.render_widget(actions, rows[0]); + frame.render_widget(Paragraph::new(vec![Line::from(left_spans)]), nav_cols[0]); + frame.render_widget( + Paragraph::new(vec![Line::from(right_spans)]).right_aligned(), + nav_cols[1], + ); +} + +#[cfg(test)] +#[path = "workspaces_tests.rs"] +mod tests; diff --git a/src/tui/screens/workspaces_tests.rs b/src/tui/screens/workspaces_tests.rs new file mode 100644 index 0000000..1fb1e9d --- /dev/null +++ b/src/tui/screens/workspaces_tests.rs @@ -0,0 +1,190 @@ +use super::*; +use crossterm::event::{KeyCode, KeyEvent, KeyModifiers}; +use tokio::sync::mpsc::error::TryRecvError; + 
+#[test] +fn wrap_comma_separated_values_wraps_and_preserves_order() { + let values = vec![ + "CommitBook".to_string(), + "GenAI-Wednesday".to_string(), + "M-com".to_string(), + "Manuel-Forks".to_string(), + ]; + + let lines = wrap_comma_separated_values(&values, 20); + assert!(lines.len() > 1); + assert_eq!(lines.join(", "), values.join(", ")); +} + +#[test] +fn wrap_comma_separated_values_empty_means_all() { + let lines = wrap_comma_separated_values(&[], 20); + assert_eq!(lines, vec!["all".to_string()]); +} + +fn build_workspace_app(default_workspace: Option<&str>) -> App { + let mut config = Config::default(); + config.default_workspace = default_workspace.map(ToString::to_string); + + let ws = WorkspaceConfig::new("test-ws", "/tmp/test-ws"); + let mut app = App::new(config, vec![ws.clone()]); + app.screen = Screen::Workspaces; + app.workspace_index = 0; + app.active_workspace = Some(ws); + app +} + +#[tokio::test] +async fn workspace_key_f_opens_folder_for_selected_workspace() { + let mut app = build_workspace_app(None); + let (tx, _rx) = tokio::sync::mpsc::unbounded_channel(); + let _ = take_open_workspace_folder_call_count(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('f'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(take_open_workspace_folder_call_count(), 1); +} + +#[tokio::test] +async fn workspace_key_c_toggles_config_expansion() { + let mut app = build_workspace_app(None); + let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('c'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(app.workspace_pane, WorkspacePane::Right); + assert!(app.settings_config_expanded); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('c'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert!(!app.settings_config_expanded); + assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty))); +} + +#[tokio::test] +async fn 
workspace_left_right_controls_panel_focus_and_list_movement() { + let mut config = Config::default(); + config.default_workspace = None; + let ws1 = WorkspaceConfig::new("ws1", "/tmp/ws1"); + let ws2 = WorkspaceConfig::new("ws2", "/tmp/ws2"); + let mut app = App::new(config, vec![ws1.clone(), ws2]); + app.screen = Screen::Workspaces; + app.workspace_index = 0; + app.active_workspace = Some(ws1); + let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Right, KeyModifiers::NONE), + &tx, + ) + .await; + assert_eq!(app.workspace_pane, WorkspacePane::Right); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Down, KeyModifiers::NONE), + &tx, + ) + .await; + assert_eq!(app.workspace_index, 0); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Left, KeyModifiers::NONE), + &tx, + ) + .await; + assert_eq!(app.workspace_pane, WorkspacePane::Left); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Down, KeyModifiers::NONE), + &tx, + ) + .await; + assert_eq!(app.workspace_index, 1); + assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty))); +} + +#[tokio::test] +async fn workspace_key_o_is_noop() { + let mut app = build_workspace_app(None); + let before_index = app.workspace_index; + let before_scroll = app.workspace_detail_scroll; + let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel(); + let _ = take_open_workspace_folder_call_count(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('o'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(app.workspace_index, before_index); + assert_eq!(app.workspace_detail_scroll, before_scroll); + assert_eq!(take_open_workspace_folder_call_count(), 0); + assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty))); +} + +#[tokio::test] +async fn workspace_enter_selects_workspace_even_if_active() { + let mut app = build_workspace_app(None); + app.settings_config_expanded = true; + let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel(); + + handle_key( 
+ &mut app, + KeyEvent::new(KeyCode::Enter, KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(app.screen, Screen::Dashboard); + assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty))); +} + +#[tokio::test] +async fn workspace_key_d_does_not_clear_when_already_default() { + let mut app = build_workspace_app(Some("test-ws")); + let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel(); + + handle_key( + &mut app, + KeyEvent::new(KeyCode::Char('d'), KeyModifiers::NONE), + &tx, + ) + .await; + + assert_eq!(app.config.default_workspace.as_deref(), Some("test-ws")); + assert!(matches!(rx.try_recv(), Err(TryRecvError::Empty))); +} + +#[test] +fn next_default_workspace_name_is_set_only() { + assert_eq!( + next_default_workspace_name(Some("current"), "next"), + Some("next".to_string()) + ); + assert_eq!(next_default_workspace_name(Some("same"), "same"), None); + assert_eq!( + next_default_workspace_name(None, "selected"), + Some("selected".to_string()) + ); +} diff --git a/src/tui/ui.rs b/src/tui/ui.rs new file mode 100644 index 0000000..e76a8db --- /dev/null +++ b/src/tui/ui.rs @@ -0,0 +1,24 @@ +//! Top-level rendering dispatcher (the "View"). + +use super::app::{App, Screen}; +use super::screens; +use ratatui::Frame; + +/// Render the current screen. 
+pub fn render(app: &mut App, frame: &mut Frame) { + match app.screen { + Screen::SystemCheck => screens::system_check::render(app, frame), + Screen::WorkspaceSetup => { + if let Some(ref setup) = app.setup_state { + crate::setup::ui::render(setup, frame); + } + } + Screen::Workspaces => screens::workspaces::render(app, frame), + Screen::Dashboard => screens::dashboard::render(app, frame), + Screen::Sync => { + screens::dashboard::render(app, frame); + screens::sync::render(app, frame); + } + Screen::Settings => screens::settings::render(app, frame), + } +} diff --git a/src/tui/widgets/mod.rs b/src/tui/widgets/mod.rs new file mode 100644 index 0000000..9273244 --- /dev/null +++ b/src/tui/widgets/mod.rs @@ -0,0 +1,4 @@ +//! Reusable TUI widgets. + +pub mod repo_table; +pub mod status_bar; diff --git a/src/tui/widgets/repo_table.rs b/src/tui/widgets/repo_table.rs new file mode 100644 index 0000000..fed96e6 --- /dev/null +++ b/src/tui/widgets/repo_table.rs @@ -0,0 +1,72 @@ +//! Reusable repo table widget. + +use ratatui::{ + layout::{Constraint, Rect}, + style::{Color, Modifier, Style}, + widgets::{Block, Borders, Row, Table}, + Frame, +}; + +use crate::types::OwnedRepo; + +/// Render a table of OwnedRepo entries. 
+pub fn render_owned_repos( + frame: &mut Frame, + area: Rect, + title: &str, + repos: &[&OwnedRepo], + selected: usize, +) { + let header = Row::new(vec!["Name", "Default Branch", "Visibility"]) + .style( + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD), + ) + .bottom_margin(1); + + let rows: Vec<Row> = repos + .iter() + .enumerate() + .map(|(i, repo)| { + let style = if i == selected { + Style::default() + .fg(Color::Cyan) + .add_modifier(Modifier::BOLD) + } else { + Style::default() + }; + + let visibility = if repo.repo.private { + "private" + } else { + "public" + }; + + Row::new(vec![ + repo.repo.name.clone(), + repo.repo.default_branch.clone(), + visibility.to_string(), + ]) + .style(style) + }) + .collect(); + + let widths = [ + Constraint::Percentage(50), + Constraint::Percentage(25), + Constraint::Percentage(25), + ]; + + let table = Table::new(rows, widths).header(header).block( + Block::default() + .title(title) + .borders(Borders::ALL) + .border_style(Style::default().fg(Color::DarkGray)), + ); + frame.render_widget(table, area); +} + +#[cfg(test)] +#[path = "repo_table_tests.rs"] +mod tests; diff --git a/src/tui/widgets/repo_table_tests.rs b/src/tui/widgets/repo_table_tests.rs new file mode 100644 index 0000000..d77fa80 --- /dev/null +++ b/src/tui/widgets/repo_table_tests.rs @@ -0,0 +1,45 @@ +use super::*; +use ratatui::backend::TestBackend; +use ratatui::Terminal; + +fn render_output(repos: &[&OwnedRepo]) -> String { + let backend = TestBackend::new(100, 12); + let mut terminal = Terminal::new(backend).unwrap(); + + terminal + .draw(|frame| { + let area = frame.area(); + render_owned_repos(frame, area, "Repositories", repos, 0); + }) + .unwrap(); + + let buffer = terminal.backend().buffer(); + let mut text = String::new(); + for y in 0..buffer.area.height { + for x in 0..buffer.area.width { + text.push_str(buffer[(x, y)].symbol()); + } + text.push('\n'); + } + text +} + +#[test] +fn repo_table_renders_title_headers_and_rows() { + let
public_repo = OwnedRepo::new("acme", crate::types::Repo::test("rocket", "acme")); + let mut private_repo = crate::types::Repo::test("vault", "acme"); + private_repo.private = true; + let private_repo = OwnedRepo::new("acme", private_repo); + + let rows = vec![&public_repo, &private_repo]; + let output = render_output(&rows); + + assert!(output.contains("Repositories")); + assert!(output.contains("Name")); + assert!(output.contains("Default Branch")); + assert!(output.contains("Visibility")); + assert!(output.contains("rocket")); + assert!(output.contains("vault")); + assert!(output.contains("public")); + assert!(output.contains("private")); +} diff --git a/src/tui/widgets/status_bar.rs b/src/tui/widgets/status_bar.rs new file mode 100644 index 0000000..1c32a2f --- /dev/null +++ b/src/tui/widgets/status_bar.rs @@ -0,0 +1,18 @@ +//! Bottom status bar showing context-sensitive keybindings. + +use ratatui::{ + layout::Rect, + style::{Color, Style}, + text::{Line, Span}, + widgets::Paragraph, + Frame, +}; + +/// Render a status bar at the given area. +pub fn render(frame: &mut Frame, area: Rect, hint: &str) { + let bar = Paragraph::new(Line::from(vec![Span::styled( + format!(" {} ", hint), + Style::default().fg(Color::DarkGray), + )])); + frame.render_widget(bar, area); +} diff --git a/src/types/mod.rs b/src/types/mod.rs new file mode 100644 index 0000000..8a235bd --- /dev/null +++ b/src/types/mod.rs @@ -0,0 +1,18 @@ +//! Core type definitions for gisa. +//! +//! This module contains the fundamental data structures used throughout +//! the application: +//! +//! - [`ProviderKind`] - Identifies Git hosting providers (GitHub, GitLab, etc.) +//! - [`Org`] - Represents an organization +//! - [`Repo`] - Represents a repository +//! - [`OwnedRepo`] - A repository with its owner context +//! - [`ActionPlan`] - Plan for clone/sync operations +//! - [`OpResult`] - Result of a single operation +//! 
- [`OpSummary`] - Summary statistics for batch operations + +mod provider; +mod repo; + +pub use provider::ProviderKind; +pub use repo::{ActionPlan, OpResult, OpSummary, Org, OwnedRepo, Repo, SkippedRepo}; diff --git a/src/types/provider.rs b/src/types/provider.rs new file mode 100644 index 0000000..39a5fc0 --- /dev/null +++ b/src/types/provider.rs @@ -0,0 +1,132 @@ +//! Provider type definitions. +//! +//! Defines the supported Git hosting providers and their identifiers. + +use serde::{Deserialize, Serialize}; +use std::fmt; + +/// Identifies which Git hosting provider a repository belongs to. +#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize, Default)] +pub enum ProviderKind { + /// GitHub.com (public) + #[serde(rename = "github")] + #[default] + GitHub, + /// GitHub Enterprise (Cloud & Server) + #[serde(rename = "github-enterprise")] + GitHubEnterprise, + /// GitLab.com + #[serde(rename = "gitlab")] + GitLab, + /// GitLab Dedicated & Self-Managed + #[serde(rename = "gitlab-self-managed")] + GitLabSelfManaged, + /// Codeberg.org + #[serde(rename = "codeberg")] + Codeberg, + /// Bitbucket.org + #[serde(rename = "bitbucket")] + Bitbucket, +} + +impl ProviderKind { + /// Returns a stable slug for path templating and cache keys. + pub fn slug(&self) -> &'static str { + match self { + ProviderKind::GitHub => "github", + ProviderKind::GitHubEnterprise => "github-enterprise", + ProviderKind::GitLab => "gitlab", + ProviderKind::GitLabSelfManaged => "gitlab-self-managed", + ProviderKind::Codeberg => "codeberg", + ProviderKind::Bitbucket => "bitbucket", + } + } + + /// Returns the default API base URL for this provider. 
+ pub fn default_api_url(&self) -> &'static str { + match self { + ProviderKind::GitHub => "https://api.github.com", + ProviderKind::GitHubEnterprise => "", // Must be configured + ProviderKind::GitLab => "https://gitlab.com/api/v4", + ProviderKind::GitLabSelfManaged => "", // Must be configured + ProviderKind::Codeberg => "https://codeberg.org/api/v1", + ProviderKind::Bitbucket => "https://api.bitbucket.org/2.0", + } + } + + /// Returns the default git host for SSH URLs. + pub fn default_ssh_host(&self) -> &'static str { + match self { + ProviderKind::GitHub => "github.com", + ProviderKind::GitHubEnterprise => "", // Must be configured + ProviderKind::GitLab => "gitlab.com", + ProviderKind::GitLabSelfManaged => "", // Must be configured + ProviderKind::Codeberg => "codeberg.org", + ProviderKind::Bitbucket => "bitbucket.org", + } + } + + /// Returns true if this provider requires custom URL configuration. + pub fn requires_custom_url(&self) -> bool { + matches!( + self, + ProviderKind::GitHubEnterprise | ProviderKind::GitLabSelfManaged + ) + } + + /// Returns the human-readable name for this provider. + pub fn display_name(&self) -> &'static str { + match self { + ProviderKind::GitHub => "GitHub", + ProviderKind::GitHubEnterprise => "GitHub Enterprise", + ProviderKind::GitLab => "GitLab", + ProviderKind::GitLabSelfManaged => "GitLab Self-Managed", + ProviderKind::Codeberg => "Codeberg", + ProviderKind::Bitbucket => "Bitbucket", + } + } + + /// Returns all supported provider kinds. 
+ pub fn all() -> &'static [ProviderKind] { + &[ + ProviderKind::GitHub, + ProviderKind::GitHubEnterprise, + ProviderKind::GitLab, + ProviderKind::GitLabSelfManaged, + ProviderKind::Codeberg, + ProviderKind::Bitbucket, + ] + } +} + +impl fmt::Display for ProviderKind { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + write!(f, "{}", self.display_name()) + } +} + +impl std::str::FromStr for ProviderKind { + type Err = String; + + fn from_str(s: &str) -> Result<Self, Self::Err> { + match s.to_lowercase().as_str() { + "github" | "gh" => Ok(ProviderKind::GitHub), + "github-enterprise" | "ghe" | "github_enterprise" => Ok(ProviderKind::GitHubEnterprise), + "gitlab" | "gl" => Ok(ProviderKind::GitLab), + "gitlab-self-managed" | "glsm" | "gitlab_self_managed" => { + Ok(ProviderKind::GitLabSelfManaged) + } + "codeberg" | "cb" => Ok(ProviderKind::Codeberg), + "bitbucket" | "bb" => Ok(ProviderKind::Bitbucket), + _ => Err(format!( + "Unknown provider: '{}'. Supported: github, github-enterprise, gitlab, \ + gitlab-self-managed, codeberg, bitbucket", + s + )), + } + } +} + +#[cfg(test)] +#[path = "provider_tests.rs"] +mod tests; diff --git a/src/types/provider_tests.rs b/src/types/provider_tests.rs new file mode 100644 index 0000000..38d7c78 --- /dev/null +++ b/src/types/provider_tests.rs @@ -0,0 +1,181 @@ +use super::*; + +#[test] +fn test_default_is_github() { + assert_eq!(ProviderKind::default(), ProviderKind::GitHub); +} + +#[test] +fn test_display() { + assert_eq!(format!("{}", ProviderKind::GitHub), "GitHub"); + assert_eq!( + format!("{}", ProviderKind::GitHubEnterprise), + "GitHub Enterprise" + ); + assert_eq!(format!("{}", ProviderKind::GitLab), "GitLab"); + assert_eq!( + format!("{}", ProviderKind::GitLabSelfManaged), + "GitLab Self-Managed" + ); + assert_eq!(format!("{}", ProviderKind::Codeberg), "Codeberg"); + assert_eq!(format!("{}", ProviderKind::Bitbucket), "Bitbucket"); +} + +#[test] +fn test_from_str() { + assert_eq!( + "github".parse::<ProviderKind>().unwrap(),
ProviderKind::GitHub + ); + assert_eq!("gh".parse::<ProviderKind>().unwrap(), ProviderKind::GitHub); + assert_eq!( + "GITHUB".parse::<ProviderKind>().unwrap(), + ProviderKind::GitHub + ); + + assert_eq!( + "github-enterprise".parse::<ProviderKind>().unwrap(), + ProviderKind::GitHubEnterprise + ); + assert_eq!( + "ghe".parse::<ProviderKind>().unwrap(), + ProviderKind::GitHubEnterprise + ); + + assert_eq!( + "gitlab".parse::<ProviderKind>().unwrap(), + ProviderKind::GitLab + ); + assert_eq!("gl".parse::<ProviderKind>().unwrap(), ProviderKind::GitLab); + + assert_eq!( + "gitlab-self-managed".parse::<ProviderKind>().unwrap(), + ProviderKind::GitLabSelfManaged + ); + assert_eq!( + "glsm".parse::<ProviderKind>().unwrap(), + ProviderKind::GitLabSelfManaged + ); + + assert_eq!( + "codeberg".parse::<ProviderKind>().unwrap(), + ProviderKind::Codeberg + ); + assert_eq!( + "cb".parse::<ProviderKind>().unwrap(), + ProviderKind::Codeberg + ); + + assert_eq!( + "bitbucket".parse::<ProviderKind>().unwrap(), + ProviderKind::Bitbucket + ); + assert_eq!( + "bb".parse::<ProviderKind>().unwrap(), + ProviderKind::Bitbucket + ); +} + +#[test] +fn test_from_str_invalid() { + let result = "invalid".parse::<ProviderKind>(); + assert!(result.is_err()); + assert!(result.unwrap_err().contains("Unknown provider")); +} + +#[test] +fn test_default_api_urls() { + assert_eq!( + ProviderKind::GitHub.default_api_url(), + "https://api.github.com" + ); + assert_eq!( + ProviderKind::GitLab.default_api_url(), + "https://gitlab.com/api/v4" + ); + assert_eq!( + ProviderKind::Codeberg.default_api_url(), + "https://codeberg.org/api/v1" + ); + assert_eq!( + ProviderKind::Bitbucket.default_api_url(), + "https://api.bitbucket.org/2.0" + ); + // Self-hosted providers have empty default (must be configured) + assert_eq!(ProviderKind::GitHubEnterprise.default_api_url(), ""); + assert_eq!(ProviderKind::GitLabSelfManaged.default_api_url(), ""); +} + +#[test] +fn test_slug() { + assert_eq!(ProviderKind::GitHub.slug(), "github"); + assert_eq!(ProviderKind::GitHubEnterprise.slug(), "github-enterprise"); + assert_eq!(ProviderKind::GitLab.slug(), "gitlab"); + assert_eq!(
ProviderKind::GitLabSelfManaged.slug(), + "gitlab-self-managed" + ); + assert_eq!(ProviderKind::Codeberg.slug(), "codeberg"); + assert_eq!(ProviderKind::Bitbucket.slug(), "bitbucket"); +} + +#[test] +fn test_requires_custom_url() { + assert!(!ProviderKind::GitHub.requires_custom_url()); + assert!(ProviderKind::GitHubEnterprise.requires_custom_url()); + assert!(!ProviderKind::GitLab.requires_custom_url()); + assert!(ProviderKind::GitLabSelfManaged.requires_custom_url()); + assert!(!ProviderKind::Codeberg.requires_custom_url()); + assert!(!ProviderKind::Bitbucket.requires_custom_url()); +} + +#[test] +fn test_serde_serialization() { + let json = serde_json::to_string(&ProviderKind::GitHub).unwrap(); + assert_eq!(json, "\"github\""); + + let json = serde_json::to_string(&ProviderKind::GitHubEnterprise).unwrap(); + assert_eq!(json, "\"github-enterprise\""); + + let json = serde_json::to_string(&ProviderKind::GitLabSelfManaged).unwrap(); + assert_eq!(json, "\"gitlab-self-managed\""); + + let json = serde_json::to_string(&ProviderKind::Codeberg).unwrap(); + assert_eq!(json, "\"codeberg\""); +} + +#[test] +fn test_serde_deserialization() { + let kind: ProviderKind = serde_json::from_str("\"github\"").unwrap(); + assert_eq!(kind, ProviderKind::GitHub); + + let kind: ProviderKind = serde_json::from_str("\"gitlab\"").unwrap(); + assert_eq!(kind, ProviderKind::GitLab); + + let kind: ProviderKind = serde_json::from_str("\"codeberg\"").unwrap(); + assert_eq!(kind, ProviderKind::Codeberg); +} + +#[test] +fn test_all_providers() { + let all = ProviderKind::all(); + assert_eq!(all.len(), 6); + assert!(all.contains(&ProviderKind::GitHub)); + assert!(all.contains(&ProviderKind::GitHubEnterprise)); + assert!(all.contains(&ProviderKind::GitLab)); + assert!(all.contains(&ProviderKind::GitLabSelfManaged)); + assert!(all.contains(&ProviderKind::Codeberg)); + assert!(all.contains(&ProviderKind::Bitbucket)); +} + +#[test] +fn test_equality_and_hash() { + use std::collections::HashSet; + + 
let mut set = HashSet::new(); + set.insert(ProviderKind::GitHub); + set.insert(ProviderKind::GitHub); // Duplicate + + assert_eq!(set.len(), 1); + assert!(set.contains(&ProviderKind::GitHub)); +} diff --git a/src/types/repo.rs b/src/types/repo.rs new file mode 100644 index 0000000..573b31e --- /dev/null +++ b/src/types/repo.rs @@ -0,0 +1,269 @@ +//! Repository and organization type definitions. +//! +//! These types represent the data structures returned by Git hosting provider APIs +//! and used internally for clone/sync planning. + +use chrono::{DateTime, Utc}; +use serde::{Deserialize, Serialize}; + +/// A GitHub/GitLab/Bitbucket organization. +#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] +pub struct Org { + /// Organization login/username (e.g., "rust-lang") + pub login: String, + /// Unique ID from the provider + pub id: u64, + /// Optional description + #[serde(default)] + pub description: Option<String>, +} + +impl Org { + /// Creates a new organization with just login and id. + pub fn new(login: impl Into<String>, id: u64) -> Self { + Self { + login: login.into(), + id, + description: None, + } + } +} + +/// A repository from a Git hosting provider.
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)] +pub struct Repo { + /// Unique ID from the provider + pub id: u64, + /// Repository name (e.g., "gisa") + pub name: String, + /// Full name including owner (e.g., "user/gisa") + pub full_name: String, + /// SSH clone URL (e.g., "git@github.com:user/gisa.git") + pub ssh_url: String, + /// HTTPS clone URL (e.g., "https://github.com/user/gisa.git") + pub clone_url: String, + /// Default branch name (e.g., "main") + pub default_branch: String, + /// Whether this is a private repository + #[serde(default)] + pub private: bool, + /// Whether this repository is archived (read-only) + #[serde(default)] + pub archived: bool, + /// Whether this is a fork of another repository + #[serde(default)] + pub fork: bool, + /// When the repository was last pushed to + #[serde(default)] + pub pushed_at: Option<DateTime<Utc>>, + /// Optional description + #[serde(default)] + pub description: Option<String>, +} + +impl Repo { + /// Creates a minimal repo for testing. + #[cfg(test)] + pub fn test(name: &str, owner: &str) -> Self { + Self { + id: rand_id(), + name: name.to_string(), + full_name: format!("{}/{}", owner, name), + ssh_url: format!("git@github.com:{}/{}.git", owner, name), + clone_url: format!("https://github.com/{}/{}.git", owner, name), + default_branch: "main".to_string(), + private: false, + archived: false, + fork: false, + pushed_at: None, + description: None, + } + } + + /// Returns the owner from the full_name. + pub fn owner(&self) -> &str { + self.full_name.split('/').next().unwrap_or(&self.full_name) + } +} + +#[cfg(test)] +fn rand_id() -> u64 { + use std::sync::atomic::{AtomicU64, Ordering}; + + static COUNTER: AtomicU64 = AtomicU64::new(1); + COUNTER.fetch_add(1, Ordering::Relaxed) +} + +/// A repository with its owner information. +/// +/// This type pairs a repository with the owner that it was discovered under, +/// which may be an organization or the user's personal account.
+#[derive(Debug, Clone, Serialize, Deserialize)] +pub struct OwnedRepo { + /// Organization name or username + pub owner: String, + /// The repository + pub repo: Repo, +} + +impl OwnedRepo { + /// Creates a new owned repo. + pub fn new(owner: impl Into<String>, repo: Repo) -> Self { + Self { + owner: owner.into(), + repo, + } + } + + /// Returns the full path for this repo (e.g., "org/repo"). + pub fn full_name(&self) -> &str { + &self.repo.full_name + } + + /// Returns the repository name. + pub fn name(&self) -> &str { + &self.repo.name + } +} + +/// Result of comparing discovered repos with local filesystem. +/// +/// This represents the action plan for a clone/sync operation. +#[derive(Debug, Default)] +pub struct ActionPlan { + /// New repositories that need to be cloned + pub to_clone: Vec<OwnedRepo>, + /// Existing repositories that should be synced + pub to_sync: Vec<OwnedRepo>, + /// Repositories that were skipped (already exist, uncommitted changes, etc.) + pub skipped: Vec<SkippedRepo>, +} + +impl ActionPlan { + /// Creates an empty action plan. + pub fn new() -> Self { + Self::default() + } + + /// Returns the total number of repositories in the plan. + pub fn total(&self) -> usize { + self.to_clone.len() + self.to_sync.len() + self.skipped.len() + } + + /// Returns true if there's nothing to do. + pub fn is_empty(&self) -> bool { + self.to_clone.is_empty() && self.to_sync.is_empty() + } + + /// Adds a repo to clone. + pub fn add_clone(&mut self, repo: OwnedRepo) { + self.to_clone.push(repo); + } + + /// Adds a repo to sync. + pub fn add_sync(&mut self, repo: OwnedRepo) { + self.to_sync.push(repo); + } + + /// Adds a skipped repo. + pub fn add_skipped(&mut self, repo: OwnedRepo, reason: impl Into<String>) { + self.skipped.push(SkippedRepo { + repo, + reason: reason.into(), + }); + } +} + +/// A repository that was skipped during planning.
+#[derive(Debug)] +pub struct SkippedRepo { + /// The repository that was skipped + pub repo: OwnedRepo, + /// Reason for skipping + pub reason: String, +} + +/// Outcome of a single clone or sync operation. +#[derive(Debug, Clone, PartialEq, Eq)] +pub enum OpResult { + /// Operation completed successfully + Success, + /// Operation failed with an error + Failed(String), + /// Operation was skipped for a reason + Skipped(String), +} + +impl OpResult { + /// Returns true if the operation was successful. + pub fn is_success(&self) -> bool { + matches!(self, OpResult::Success) + } + + /// Returns true if the operation failed. + pub fn is_failed(&self) -> bool { + matches!(self, OpResult::Failed(_)) + } + + /// Returns true if the operation was skipped. + pub fn is_skipped(&self) -> bool { + matches!(self, OpResult::Skipped(_)) + } + + /// Returns the error message if failed. + pub fn error_message(&self) -> Option<&str> { + match self { + OpResult::Failed(msg) => Some(msg), + _ => None, + } + } + + /// Returns the skip reason if skipped. + pub fn skip_reason(&self) -> Option<&str> { + match self { + OpResult::Skipped(reason) => Some(reason), + _ => None, + } + } +} + +/// Summary statistics for a batch operation. +#[derive(Debug, Default, Clone)] +pub struct OpSummary { + /// Number of successful operations + pub success: usize, + /// Number of failed operations + pub failed: usize, + /// Number of skipped operations + pub skipped: usize, +} + +impl OpSummary { + /// Creates an empty summary. + pub fn new() -> Self { + Self::default() + } + + /// Records a result. + pub fn record(&mut self, result: &OpResult) { + match result { + OpResult::Success => self.success += 1, + OpResult::Failed(_) => self.failed += 1, + OpResult::Skipped(_) => self.skipped += 1, + } + } + + /// Returns the total number of operations. + pub fn total(&self) -> usize { + self.success + self.failed + self.skipped + } + + /// Returns true if there were any failures. 
+ pub fn has_failures(&self) -> bool { + self.failed > 0 + } +} + +#[cfg(test)] +#[path = "repo_tests.rs"] +mod tests; diff --git a/src/types/repo_tests.rs b/src/types/repo_tests.rs new file mode 100644 index 0000000..029f899 --- /dev/null +++ b/src/types/repo_tests.rs @@ -0,0 +1,109 @@ +use super::*; + +#[test] +fn test_org_creation() { + let org = Org::new("rust-lang", 1234); + assert_eq!(org.login, "rust-lang"); + assert_eq!(org.id, 1234); + assert!(org.description.is_none()); +} + +#[test] +fn test_repo_owner_extraction() { + let repo = Repo::test("gisa", "user"); + assert_eq!(repo.owner(), "user"); +} + +#[test] +fn test_owned_repo() { + let repo = Repo::test("gisa", "my-org"); + let owned = OwnedRepo::new("my-org", repo); + assert_eq!(owned.owner, "my-org"); + assert_eq!(owned.name(), "gisa"); + assert_eq!(owned.full_name(), "my-org/gisa"); +} + +#[test] +fn test_action_plan_empty() { + let plan = ActionPlan::new(); + assert!(plan.is_empty()); + assert_eq!(plan.total(), 0); +} + +#[test] +fn test_action_plan_add_repos() { + let mut plan = ActionPlan::new(); + + let repo1 = OwnedRepo::new("org", Repo::test("repo1", "org")); + let repo2 = OwnedRepo::new("org", Repo::test("repo2", "org")); + let repo3 = OwnedRepo::new("org", Repo::test("repo3", "org")); + + plan.add_clone(repo1); + plan.add_sync(repo2); + plan.add_skipped(repo3, "already up to date"); + + assert!(!plan.is_empty()); + assert_eq!(plan.to_clone.len(), 1); + assert_eq!(plan.to_sync.len(), 1); + assert_eq!(plan.skipped.len(), 1); + assert_eq!(plan.total(), 3); +} + +#[test] +fn test_op_result_methods() { + let success = OpResult::Success; + assert!(success.is_success()); + assert!(!success.is_failed()); + assert!(!success.is_skipped()); + assert!(success.error_message().is_none()); + + let failed = OpResult::Failed("network error".to_string()); + assert!(!failed.is_success()); + assert!(failed.is_failed()); + assert_eq!(failed.error_message(), Some("network error")); + + let skipped = 
OpResult::Skipped("already exists".to_string()); + assert!(!skipped.is_success()); + assert!(skipped.is_skipped()); + assert_eq!(skipped.skip_reason(), Some("already exists")); +} + +#[test] +fn test_op_summary() { + let mut summary = OpSummary::new(); + assert_eq!(summary.total(), 0); + assert!(!summary.has_failures()); + + summary.record(&OpResult::Success); + summary.record(&OpResult::Success); + summary.record(&OpResult::Failed("error".to_string())); + summary.record(&OpResult::Skipped("reason".to_string())); + + assert_eq!(summary.success, 2); + assert_eq!(summary.failed, 1); + assert_eq!(summary.skipped, 1); + assert_eq!(summary.total(), 4); + assert!(summary.has_failures()); +} + +#[test] +fn test_repo_serialization() { + let repo = Repo::test("gisa", "user"); + let json = serde_json::to_string(&repo).unwrap(); + assert!(json.contains("\"name\":\"gisa\"")); + assert!(json.contains("\"full_name\":\"user/gisa\"")); + + let deserialized: Repo = serde_json::from_str(&json).unwrap(); + assert_eq!(deserialized.name, repo.name); + assert_eq!(deserialized.full_name, repo.full_name); +} + +#[test] +fn test_org_serialization() { + let org = Org::new("rust-lang", 1234); + let json = serde_json::to_string(&org).unwrap(); + assert!(json.contains("\"login\":\"rust-lang\"")); + + let deserialized: Org = serde_json::from_str(&json).unwrap(); + assert_eq!(deserialized, org); +} diff --git a/src/workflows/mod.rs b/src/workflows/mod.rs new file mode 100644 index 0000000..7f49840 --- /dev/null +++ b/src/workflows/mod.rs @@ -0,0 +1,5 @@ +//! Use-case workflows. + +#[cfg(feature = "tui")] +pub mod status_scan; +pub mod sync_workspace; diff --git a/src/workflows/status_scan.rs b/src/workflows/status_scan.rs new file mode 100644 index 0000000..eec4e46 --- /dev/null +++ b/src/workflows/status_scan.rs @@ -0,0 +1,67 @@ +//! Shared status scan workflow. 
+
+use crate::config::{Config, WorkspaceConfig};
+use crate::discovery::DiscoveryOrchestrator;
+use crate::git::{GitOperations, ShellGit};
+use crate::tui::app::RepoEntry;
+
+/// Scan local repositories for git status for a workspace.
+#[cfg(feature = "tui")]
+pub fn scan_workspace_status(config: &Config, workspace: &WorkspaceConfig) -> Vec<RepoEntry> {
+    let base_path = workspace.expanded_base_path();
+    if !base_path.exists() {
+        return Vec::new();
+    }
+
+    let structure = workspace
+        .structure
+        .clone()
+        .unwrap_or_else(|| config.structure.clone());
+
+    let git = ShellGit::new();
+    let orchestrator = DiscoveryOrchestrator::new(workspace.filters.clone(), structure);
+    let local_repos = orchestrator.scan_local(&base_path, &git);
+
+    let mut entries = Vec::new();
+    for (path, org, name) in &local_repos {
+        let full_name = format!("{}/{}", org, name);
+        match git.status(path) {
+            Ok(s) => entries.push(RepoEntry {
+                owner: org.clone(),
+                name: name.clone(),
+                full_name,
+                path: path.clone(),
+                branch: if s.branch.is_empty() {
+                    None
+                } else {
+                    Some(s.branch)
+                },
+                is_uncommitted: s.is_uncommitted || s.has_untracked,
+                ahead: s.ahead as usize,
+                behind: s.behind as usize,
+                staged_count: s.staged_count,
+                unstaged_count: s.unstaged_count,
+                untracked_count: s.untracked_count,
+            }),
+            Err(_) => entries.push(RepoEntry {
+                owner: org.clone(),
+                name: name.clone(),
+                full_name,
+                path: path.clone(),
+                branch: None,
+                is_uncommitted: false,
+                ahead: 0,
+                behind: 0,
+                staged_count: 0,
+                unstaged_count: 0,
+                untracked_count: 0,
+            }),
+        }
+    }
+
+    entries
+}
+
+#[cfg(all(test, feature = "tui"))]
+#[path = "status_scan_tests.rs"]
+mod tests;
diff --git a/src/workflows/status_scan_tests.rs b/src/workflows/status_scan_tests.rs
new file mode 100644
index 0000000..19db8e1
--- /dev/null
+++ b/src/workflows/status_scan_tests.rs
@@ -0,0 +1,21 @@
+use super::*;
+use crate::config::{Config, WorkspaceConfig};
+
+#[test]
+fn scan_workspace_status_returns_empty_when_base_path_missing() {
+    let 
config = Config::default();
+    let workspace = WorkspaceConfig::new("missing", "/tmp/git-same-does-not-exist-xyz");
+
+    let entries = scan_workspace_status(&config, &workspace);
+    assert!(entries.is_empty());
+}
+
+#[test]
+fn scan_workspace_status_returns_empty_for_empty_directory() {
+    let temp = tempfile::tempdir().unwrap();
+    let config = Config::default();
+    let workspace = WorkspaceConfig::new("empty", temp.path().to_string_lossy().to_string());
+
+    let entries = scan_workspace_status(&config, &workspace);
+    assert!(entries.is_empty());
+}
diff --git a/src/workflows/sync_workspace.rs b/src/workflows/sync_workspace.rs
new file mode 100644
index 0000000..7b7643d
--- /dev/null
+++ b/src/workflows/sync_workspace.rs
@@ -0,0 +1,316 @@
+//! Shared sync workflow for CLI and TUI.
+
+use crate::auth::{get_auth_for_provider, AuthResult};
+use crate::cache::{CacheManager, DiscoveryCache};
+use crate::config::{Config, WorkspaceConfig};
+use crate::discovery::DiscoveryOrchestrator;
+use crate::errors::{AppError, Result};
+use crate::git::{CloneOptions, ShellGit};
+use crate::operations::clone::{
+    CloneManager, CloneManagerOptions, CloneProgress, MAX_CONCURRENCY, MIN_CONCURRENCY,
+};
+use crate::operations::sync::{
+    LocalRepo, SyncManager, SyncManagerOptions, SyncMode, SyncProgress, SyncResult,
+};
+use crate::provider::{create_provider, DiscoveryProgress};
+use crate::types::{ActionPlan, OpSummary, OwnedRepo};
+use std::collections::{HashMap, HashSet};
+use std::path::PathBuf;
+use std::sync::Arc;
+use tracing::warn;
+
+/// Request data used to prepare a workspace sync plan.
+pub struct SyncWorkspaceRequest<'a> {
+    pub config: &'a Config,
+    pub workspace: &'a WorkspaceConfig,
+    pub refresh: bool,
+    pub skip_uncommitted: bool,
+    pub pull: bool,
+    pub concurrency_override: Option<usize>,
+    pub create_base_path: bool,
+}
+
+/// Prepared sync workflow context. 
+
+pub struct PreparedSyncWorkspace {
+    pub workspace: WorkspaceConfig,
+    pub auth: AuthResult,
+    pub repos: Vec<OwnedRepo>,
+    pub used_cache: bool,
+    pub cache_age_secs: Option<u64>,
+    pub base_path: PathBuf,
+    pub structure: String,
+    pub provider_name: String,
+    pub provider_prefer_ssh: bool,
+    pub skip_uncommitted: bool,
+    pub sync_mode: SyncMode,
+    pub requested_concurrency: usize,
+    pub effective_concurrency: usize,
+    pub plan: ActionPlan,
+    pub to_sync: Vec<LocalRepo>,
+    pub skipped_sync: Vec<(OwnedRepo, String)>,
+    pub clone_options: CloneOptions,
+}
+
+/// Execution outcome for a prepared sync workflow.
+pub struct SyncExecutionOutcome {
+    pub clone_summary: Option<OpSummary>,
+    pub sync_summary: Option<OpSummary>,
+    pub sync_results: Vec<SyncResult>,
+}
+
+/// Prepare workspace sync data: authenticate, discover, plan and resolve options.
+pub async fn prepare_sync_workspace(
+    request: SyncWorkspaceRequest<'_>,
+    discovery_progress: &dyn DiscoveryProgress,
+) -> Result<PreparedSyncWorkspace> {
+    let provider_entry = request.workspace.provider.to_provider_entry();
+
+    // Authenticate and build provider
+    let auth = get_auth_for_provider(&provider_entry)?;
+    let provider = create_provider(&provider_entry, &auth.token)?;
+
+    // Build orchestrator from workspace + global config
+    let mut filters = request.workspace.filters.clone();
+    if !request.workspace.orgs.is_empty() {
+        filters.orgs = request.workspace.orgs.clone();
+    }
+    filters.exclude_repos = request.workspace.exclude_repos.clone();
+
+    let structure = request
+        .workspace
+        .structure
+        .clone()
+        .unwrap_or_else(|| request.config.structure.clone());
+    let orchestrator = DiscoveryOrchestrator::new(filters, structure.clone());
+
+    // Discover repos (cache first unless refresh)
+    let mut repos = Vec::new();
+    let mut used_cache = false;
+    let mut cache_age_secs = None;
+
+    if !request.refresh {
+        if let Ok(cache_manager) = CacheManager::for_workspace(&request.workspace.name) {
+            if let Ok(Some(cache)) = cache_manager.load() {
+                let discovery_options = 
orchestrator.to_discovery_options();
+                used_cache = true;
+                cache_age_secs = Some(cache.age_secs());
+                for provider_repos in cache.repos.values() {
+                    repos.extend(
+                        provider_repos
+                            .iter()
+                            .filter(|owned| {
+                                discovery_options.should_include_org(&owned.owner)
+                                    && discovery_options.should_include(&owned.repo)
+                            })
+                            .cloned(),
+                    );
+                }
+
+                // Surface cached counts through the existing progress interface
+                // so callers can keep one rendering path.
+                let org_count = repos
+                    .iter()
+                    .map(|r| r.owner.clone())
+                    .collect::<HashSet<_>>()
+                    .len();
+                discovery_progress.on_orgs_discovered(org_count);
+                let mut by_org: HashMap<String, usize> = HashMap::new();
+                for repo in &repos {
+                    *by_org.entry(repo.owner.clone()).or_insert(0) += 1;
+                }
+                for (org, count) in by_org {
+                    discovery_progress.on_org_complete(&org, count);
+                }
+            }
+        }
+    }
+
+    if repos.is_empty() {
+        repos = orchestrator
+            .discover(provider.as_ref(), discovery_progress)
+            .await
+            .map_err(AppError::Provider)?;
+
+        if let Ok(cache_manager) = CacheManager::for_workspace(&request.workspace.name) {
+            let provider_label = provider_entry
+                .name
+                .clone()
+                .unwrap_or_else(|| provider_entry.kind.to_string());
+            let mut repos_by_provider = HashMap::new();
+            repos_by_provider.insert(provider_label, repos.clone());
+            let cache =
+                DiscoveryCache::new(auth.username.clone().unwrap_or_default(), repos_by_provider);
+            if let Err(e) = cache_manager.save(&cache) {
+                warn!(
+                    workspace = %request.workspace.name,
+                    error = %e,
+                    "Failed to save discovery cache"
+                );
+            }
+        }
+    }
+
+    let base_path = request.workspace.expanded_base_path();
+    if !base_path.exists() {
+        if request.create_base_path {
+            std::fs::create_dir_all(&base_path).map_err(|e| {
+                AppError::path(format!(
+                    "Failed to create base directory '{}': {}",
+                    base_path.display(),
+                    e
+                ))
+            })?;
+        } else {
+            return Err(AppError::config(format!(
+                "Base path does not exist: {}",
+                base_path.display()
+            )));
+        }
+    }
+
+    let provider_name = provider_entry.kind.slug().to_string();
+    let 
git = ShellGit::new(); + let plan = orchestrator.plan_clone(&base_path, repos.clone(), &provider_name, &git); + let (to_sync, skipped_sync) = orchestrator.plan_sync( + &base_path, + repos.clone(), + &provider_name, + &git, + request.skip_uncommitted, + ); + + let requested_concurrency = request + .concurrency_override + .or(request.workspace.concurrency) + .unwrap_or(request.config.concurrency); + let effective_concurrency = requested_concurrency.clamp(MIN_CONCURRENCY, MAX_CONCURRENCY); + + let sync_mode = if request.pull { + SyncMode::Pull + } else { + match request + .workspace + .sync_mode + .unwrap_or(request.config.sync_mode) + { + crate::config::SyncMode::Pull => SyncMode::Pull, + crate::config::SyncMode::Fetch => SyncMode::Fetch, + } + }; + + let clone_options = CloneOptions { + depth: request + .workspace + .clone_options + .as_ref() + .map(|c| c.depth) + .unwrap_or(request.config.clone.depth), + branch: request + .workspace + .clone_options + .as_ref() + .and_then(|c| { + if c.branch.is_empty() { + None + } else { + Some(c.branch.clone()) + } + }) + .or_else(|| { + if request.config.clone.branch.is_empty() { + None + } else { + Some(request.config.clone.branch.clone()) + } + }), + recurse_submodules: request + .workspace + .clone_options + .as_ref() + .map(|c| c.recurse_submodules) + .unwrap_or(request.config.clone.recurse_submodules), + }; + + Ok(PreparedSyncWorkspace { + workspace: request.workspace.clone(), + auth, + repos, + used_cache, + cache_age_secs, + base_path, + structure, + provider_name, + provider_prefer_ssh: provider_entry.prefer_ssh, + skip_uncommitted: request.skip_uncommitted, + sync_mode, + requested_concurrency, + effective_concurrency, + plan, + to_sync, + skipped_sync, + clone_options, + }) +} + +/// Execute clone + sync phases for a prepared workspace plan. 
+
+pub async fn execute_prepared_sync(
+    prepared: &PreparedSyncWorkspace,
+    dry_run: bool,
+    clone_progress: Arc<dyn CloneProgress>,
+    sync_progress: Arc<dyn SyncProgress>,
+) -> SyncExecutionOutcome {
+    if dry_run {
+        return SyncExecutionOutcome {
+            clone_summary: None,
+            sync_summary: None,
+            sync_results: Vec::new(),
+        };
+    }
+
+    let mut clone_summary = None;
+    let mut sync_summary = None;
+    let mut sync_results = Vec::new();
+
+    if !prepared.plan.to_clone.is_empty() {
+        let clone_options = CloneManagerOptions::new()
+            .with_concurrency(prepared.effective_concurrency)
+            .with_clone_options(prepared.clone_options.clone())
+            .with_structure(prepared.structure.clone())
+            .with_ssh(prepared.provider_prefer_ssh);
+
+        let manager = CloneManager::new(ShellGit::new(), clone_options);
+        let (summary, _results) = manager
+            .clone_repos(
+                &prepared.base_path,
+                prepared.plan.to_clone.clone(),
+                &prepared.provider_name,
+                clone_progress,
+            )
+            .await;
+        clone_summary = Some(summary);
+    }
+
+    if !prepared.to_sync.is_empty() {
+        let sync_options = SyncManagerOptions::new()
+            .with_concurrency(prepared.effective_concurrency)
+            .with_mode(prepared.sync_mode)
+            .with_skip_uncommitted(prepared.skip_uncommitted);
+
+        let manager = SyncManager::new(ShellGit::new(), sync_options);
+        let (summary, results) = manager
+            .sync_repos(prepared.to_sync.clone(), sync_progress)
+            .await;
+        sync_summary = Some(summary);
+        sync_results = results;
+    }
+
+    SyncExecutionOutcome {
+        clone_summary,
+        sync_summary,
+        sync_results,
+    }
+}
+
+#[cfg(test)]
+#[path = "sync_workspace_tests.rs"]
+mod tests;
diff --git a/src/workflows/sync_workspace_tests.rs b/src/workflows/sync_workspace_tests.rs
new file mode 100644
index 0000000..9248b25
--- /dev/null
+++ b/src/workflows/sync_workspace_tests.rs
@@ -0,0 +1,108 @@
+use super::*;
+use crate::auth::{AuthResult, ResolvedAuthMethod};
+use crate::config::{Config, WorkspaceConfig};
+use crate::git::CloneOptions;
+use crate::operations::clone::NoProgress;
+use crate::operations::sync::{LocalRepo, 
NoSyncProgress, SyncMode}; +use crate::types::{ActionPlan, OwnedRepo, Repo}; +use std::path::PathBuf; +use std::sync::Arc; + +fn sample_repo() -> OwnedRepo { + OwnedRepo::new("acme", Repo::test("rocket", "acme")) +} + +fn prepared_workspace(with_clone: bool, with_sync: bool) -> PreparedSyncWorkspace { + let repo = sample_repo(); + let mut plan = ActionPlan::new(); + if with_clone { + plan.add_clone(repo.clone()); + } + + let to_sync = if with_sync { + vec![LocalRepo::new(repo.clone(), "/tmp/acme/rocket")] + } else { + Vec::new() + }; + + PreparedSyncWorkspace { + workspace: WorkspaceConfig::new("ws", "/tmp"), + auth: AuthResult { + token: "token".to_string(), + method: ResolvedAuthMethod::GhCli, + username: Some("octocat".to_string()), + }, + repos: vec![repo], + used_cache: false, + cache_age_secs: None, + base_path: PathBuf::from("/tmp"), + structure: "{org}/{repo}".to_string(), + provider_name: "github".to_string(), + provider_prefer_ssh: true, + skip_uncommitted: true, + sync_mode: SyncMode::Fetch, + requested_concurrency: 4, + effective_concurrency: 4, + plan, + to_sync, + skipped_sync: Vec::new(), + clone_options: CloneOptions::default(), + } +} + +#[tokio::test] +async fn execute_prepared_sync_dry_run_short_circuits() { + let prepared = prepared_workspace(true, true); + + let outcome = execute_prepared_sync( + &prepared, + true, + Arc::new(NoProgress), + Arc::new(NoSyncProgress), + ) + .await; + + assert!(outcome.clone_summary.is_none()); + assert!(outcome.sync_summary.is_none()); + assert!(outcome.sync_results.is_empty()); +} + +#[tokio::test] +async fn execute_prepared_sync_with_no_work_returns_empty_outcome() { + let prepared = prepared_workspace(false, false); + + let outcome = execute_prepared_sync( + &prepared, + false, + Arc::new(NoProgress), + Arc::new(NoSyncProgress), + ) + .await; + + assert!(outcome.clone_summary.is_none()); + assert!(outcome.sync_summary.is_none()); + assert!(outcome.sync_results.is_empty()); +} + +#[test] +fn 
sync_workspace_request_holds_expected_values() { + let config = Config::default(); + let workspace = WorkspaceConfig::new("team", "/tmp/team"); + + let request = SyncWorkspaceRequest { + config: &config, + workspace: &workspace, + refresh: true, + skip_uncommitted: false, + pull: true, + concurrency_override: Some(7), + create_base_path: true, + }; + + assert!(request.refresh); + assert!(request.pull); + assert!(!request.skip_uncommitted); + assert_eq!(request.concurrency_override, Some(7)); + assert!(request.create_base_path); + assert_eq!(request.workspace.name, "team"); +} diff --git a/tests/integration_test.rs b/tests/integration_test.rs new file mode 100644 index 0000000..0a43ec3 --- /dev/null +++ b/tests/integration_test.rs @@ -0,0 +1,426 @@ +//! Integration tests for git-same CLI. +//! +//! These tests verify the CLI behavior as a whole. + +use std::path::{Path, PathBuf}; +use std::process::Command; + +fn git_same_binary() -> PathBuf { + let mut path = PathBuf::from(env!("CARGO_MANIFEST_DIR")); + path.push("target/debug/git-same"); + path +} + +fn command_with_temp_env(home: &Path) -> Command { + let mut cmd = Command::new(git_same_binary()); + cmd.env("HOME", home) + .env("XDG_CONFIG_HOME", home.join(".config")) + .env("XDG_CACHE_HOME", home.join(".cache")) + .env("NO_COLOR", "1"); + cmd +} + +fn run_cli_with_env(home: &Path, args: &[&str]) -> std::process::Output { + command_with_temp_env(home) + .args(args) + .output() + .expect("Failed to execute git-same") +} + +fn assert_banner_branding(stdout: &str) { + let description = env!("CARGO_PKG_DESCRIPTION"); + assert!( + stdout.contains("██████╗ ██╗████████╗"), + "Expected ASCII logo in stdout, got:\n{}", + stdout + ); + assert!( + stdout.contains(description), + "Expected subheadline '{}' in stdout, got:\n{}", + description, + stdout + ); + assert!( + !stdout.contains(&format!("{description} Version")), + "Unexpected legacy version suffix in subheadline, got:\n{}", + stdout + ); +} + +#[test] +fn 
test_help_command() { + let output = Command::new(git_same_binary()) + .arg("--help") + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains(env!("CARGO_PKG_DESCRIPTION"))); + assert!(stdout.contains("init")); + assert!(stdout.contains("setup")); + assert!(stdout.contains("sync")); + assert!(stdout.contains("status")); + assert!(stdout.contains("workspace")); + assert!(stdout.contains("reset")); +} + +#[test] +fn test_reset_help() { + let output = Command::new(git_same_binary()) + .args(["reset", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("Reset")); + assert!(stdout.contains("--force")); +} + +#[test] +fn test_version_command() { + let output = Command::new(git_same_binary()) + .arg("--version") + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("git-same")); +} + +#[test] +fn test_clone_subcommand_removed() { + let output = Command::new(git_same_binary()) + .arg("clone") + .output() + .expect("Failed to execute git-same"); + + assert!(!output.status.success()); + let stderr = String::from_utf8_lossy(&output.stderr); + assert!( + stderr.contains("unrecognized subcommand") || stderr.contains("unexpected argument"), + "Expected unknown subcommand error, got: {}", + stderr + ); +} + +#[test] +fn test_fetch_subcommand_removed() { + let output = Command::new(git_same_binary()) + .arg("fetch") + .output() + .expect("Failed to execute git-same"); + + assert!(!output.status.success()); + let stderr = String::from_utf8_lossy(&output.stderr); + assert!( + stderr.contains("unrecognized subcommand") || stderr.contains("unexpected argument"), + "Expected unknown subcommand error, got: {}", + stderr + ); 
+} + +#[test] +fn test_pull_subcommand_removed() { + let output = Command::new(git_same_binary()) + .arg("pull") + .output() + .expect("Failed to execute git-same"); + + assert!(!output.status.success()); + let stderr = String::from_utf8_lossy(&output.stderr); + assert!( + stderr.contains("unrecognized subcommand") || stderr.contains("unexpected argument"), + "Expected unknown subcommand error, got: {}", + stderr + ); +} + +#[test] +fn test_status_help() { + let output = Command::new(git_same_binary()) + .args(["status", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("status")); + assert!(stdout.contains("--uncommitted")); + assert!(stdout.contains("--behind")); +} + +#[test] +fn test_init_help() { + let output = Command::new(git_same_binary()) + .args(["init", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("Initialize")); + assert!(stdout.contains("--force")); +} + +#[test] +fn test_global_verbose_flag() { + let output = Command::new(git_same_binary()) + .args(["-v", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); +} + +#[test] +fn test_global_quiet_flag() { + let output = Command::new(git_same_binary()) + .args(["-q", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); +} + +#[test] +fn test_init_creates_config() { + use tempfile::TempDir; + + let temp = TempDir::new().expect("Failed to create temp dir"); + let config_path = temp.path().join("gisa.config.toml"); + + let output = Command::new(git_same_binary()) + .args(["init", "--path", config_path.to_str().unwrap()]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success(), "Init failed: {:?}", output); + 
assert!(config_path.exists(), "Config file not created"); + + // Verify content is valid TOML + let content = std::fs::read_to_string(&config_path).expect("Failed to read config"); + assert!(content.contains("base_path")); + assert!(content.contains("concurrency")); +} + +#[test] +fn test_init_force_overwrites() { + use tempfile::TempDir; + + let temp = TempDir::new().expect("Failed to create temp dir"); + let config_path = temp.path().join("gisa.config.toml"); + + // Create initial file + std::fs::write(&config_path, "# existing").expect("Failed to write"); + + // Init without force should fail + let output = Command::new(git_same_binary()) + .args(["init", "--path", config_path.to_str().unwrap()]) + .output() + .expect("Failed to execute git-same"); + + assert!( + !output.status.success(), + "Init without force should fail when file exists" + ); + + // Init with force should succeed + let output = Command::new(git_same_binary()) + .args(["init", "--path", config_path.to_str().unwrap(), "--force"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success(), "Init with force should succeed"); + + // Verify content was overwritten + let content = std::fs::read_to_string(&config_path).expect("Failed to read config"); + assert!( + content.contains("base_path"), + "Config should contain base_path" + ); +} + +#[test] +fn test_status_nonexistent_workspace() { + use tempfile::TempDir; + + let temp = TempDir::new().expect("Failed to create temp dir"); + let config_path = temp.path().join("config.toml"); + + // Create a valid config so the test reaches workspace resolution + Command::new(git_same_binary()) + .args(["init", "--path", config_path.to_str().unwrap()]) + .output() + .expect("Failed to run init"); + + let output = Command::new(git_same_binary()) + .args([ + "-C", + config_path.to_str().unwrap(), + "status", + "--workspace", + "nonexistent-workspace", + ]) + .output() + .expect("Failed to execute git-same"); + + 
assert!(!output.status.success()); + let stderr = String::from_utf8_lossy(&output.stderr); + assert!( + stderr.contains("not found") + || stderr.contains("No workspaces") + || stderr.contains("No workspace configured"), + "Expected workspace not found error, got: {}", + stderr + ); +} + +#[test] +fn test_sync_help() { + let output = Command::new(git_same_binary()) + .args(["sync", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("Sync")); + assert!(stdout.contains("--workspace")); + assert!(stdout.contains("--pull")); + assert!(stdout.contains("--dry-run")); +} + +#[test] +fn test_setup_help() { + let output = Command::new(git_same_binary()) + .args(["setup", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("setup") || stdout.contains("Setup") || stdout.contains("wizard")); +} + +#[test] +fn test_workspace_help() { + let output = Command::new(git_same_binary()) + .args(["workspace", "--help"]) + .output() + .expect("Failed to execute git-same"); + + assert!(output.status.success()); + let stdout = String::from_utf8_lossy(&output.stdout); + assert!(stdout.contains("list")); + assert!(stdout.contains("default")); +} + +#[test] +fn test_workspace_list() { + use tempfile::TempDir; + + let temp = TempDir::new().expect("Failed to create temp dir"); + let config_path = temp.path().join("config.toml"); + + // Create a minimal valid config so the test doesn't depend on local config + Command::new(git_same_binary()) + .args(["init", "--path", config_path.to_str().unwrap()]) + .output() + .expect("Failed to run init"); + + let output = Command::new(git_same_binary()) + .args(["-C", config_path.to_str().unwrap(), "workspace", "list"]) + .output() + .expect("Failed to execute git-same"); + + // Should succeed 
even with no workspaces
+    assert!(output.status.success());
+}
+
+#[test]
+fn test_missing_config_suggests_init() {
+    let output = Command::new(git_same_binary())
+        .args(["-C", "/tmp/nonexistent-gisa-config.toml", "sync"])
+        .output()
+        .expect("Failed to execute git-same");
+
+    assert!(!output.status.success());
+    let stderr = String::from_utf8_lossy(&output.stderr);
+    assert!(
+        stderr.contains("gisa init"),
+        "Expected suggestion to run 'gisa init', got: {}",
+        stderr
+    );
+}
+
+#[test]
+fn test_cli_subcommands_use_dashboard_subheadline() {
+    use tempfile::TempDir;
+
+    let temp = TempDir::new().expect("Failed to create temp dir");
+    let home = temp.path().join("home");
+    std::fs::create_dir_all(home.join(".config")).expect("Failed to create config dir");
+    std::fs::create_dir_all(home.join(".cache")).expect("Failed to create cache dir");
+
+    let config_path = temp.path().join("config.toml");
+    let config_str = config_path
+        .to_str()
+        .expect("Config path is not valid UTF-8");
+
+    let init_output = run_cli_with_env(&home, &["init", "--path", config_str, "--force"]);
+    assert!(
+        init_output.status.success(),
+        "Init failed: {:?}",
+        init_output
+    );
+    assert_banner_branding(&String::from_utf8_lossy(&init_output.stdout));
+
+    let command_matrix: Vec<Vec<String>> = vec![
+        vec![
+            "-C".to_string(),
+            config_str.to_string(),
+            "sync".to_string(),
+            "--dry-run".to_string(),
+        ],
+        vec![
+            "-C".to_string(),
+            config_str.to_string(),
+            "status".to_string(),
+        ],
+        vec![
+            "-C".to_string(),
+            config_str.to_string(),
+            "workspace".to_string(),
+            "list".to_string(),
+        ],
+        vec![
+            "-C".to_string(),
+            config_str.to_string(),
+            "workspace".to_string(),
+            "default".to_string(),
+        ],
+        vec!["reset".to_string(), "--force".to_string()],
+    ];
+
+    for args in command_matrix {
+        let arg_refs: Vec<&str> = args.iter().map(String::as_str).collect();
+        let output = run_cli_with_env(&home, &arg_refs);
+        assert_banner_branding(&String::from_utf8_lossy(&output.stdout));
+    }
+}
+
+#[test]
+fn 
test_banner_source_no_legacy_version_subheadline() { + let source = include_str!("../src/banner.rs"); + assert!( + !source.contains("Mirror GitHub structure /orgs/repos/ to local file system {}"), + "Found legacy CLI subheadline format string in banner.rs" + ); + assert!( + !source.contains("local file system Version"), + "Found legacy versioned subheadline text in banner.rs" + ); +} diff --git a/toolkit/Conductor/archive.sh b/toolkit/Conductor/archive.sh new file mode 100755 index 0000000..f546996 --- /dev/null +++ b/toolkit/Conductor/archive.sh @@ -0,0 +1,55 @@ +#!/bin/bash +# Git-Same (Gisa CLI) Archive Script +# Removes cargo-installed binaries from ~/.cargo/bin + +set -e + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" +cd "$PROJECT_DIR" + +PACKAGE_NAME="git-same" +CARGO_BIN_DIR="${CARGO_HOME:-$HOME/.cargo}/bin" +BINARIES=("git-same" "gitsame" "gitsa" "gisa") + +echo "========================================" +echo " Git-Same (Gisa CLI) Archive" +echo "========================================" +echo "" +echo "Project directory: $PROJECT_DIR" +echo "Cargo bin directory: $CARGO_BIN_DIR" +echo "" + +if ! command -v cargo &> /dev/null; then + echo "ERROR: cargo not found." + exit 1 +fi + +echo "--- Uninstalling Cargo Package ---" +if cargo uninstall "$PACKAGE_NAME"; then + echo "Removed package: $PACKAGE_NAME" +else + echo "Package '$PACKAGE_NAME' is not currently installed. Continuing cleanup..." +fi +echo "" + +echo "--- Removing Leftover Binaries ---" +FOUND_LEFTOVERS=false +for bin in "${BINARIES[@]}"; do + path="$CARGO_BIN_DIR/$bin" + if [ -f "$path" ]; then + rm -f "$path" + echo " [REMOVED] $path" + FOUND_LEFTOVERS=true + fi +done + +if [ "$FOUND_LEFTOVERS" = false ]; then + echo " No leftover binaries found." 
+fi + +echo "" +echo "========================================" +echo " Archive Complete" +echo "========================================" +echo "" diff --git a/toolkit/Conductor/run.sh b/toolkit/Conductor/run.sh new file mode 100755 index 0000000..c1b315b --- /dev/null +++ b/toolkit/Conductor/run.sh @@ -0,0 +1,74 @@ +#!/bin/bash +# Git-Same Run Script +# Installs binaries and shows available commands + +set -e + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" +cd "$PROJECT_DIR" + +CARGO_BIN_DIR="${CARGO_HOME:-$HOME/.cargo}/bin" +GS_COMMAND="$CARGO_BIN_DIR/gisa" + +# Install to ensure all binaries are up to date +echo "Installing with: cargo install --path ." +cargo install --path . +echo "" + +if [ ! -x "$GS_COMMAND" ]; then + echo "ERROR: gisa installation failed." + exit 1 +fi + +# Warn if gisa is also installed elsewhere (e.g. Homebrew) +RED='\033[0;31m' +NC='\033[0m' +OTHER_PATHS=$(which -a gisa 2>/dev/null | grep -v "$CARGO_BIN_DIR" || true) +if [ -n "$OTHER_PATHS" ]; then + echo -e "${RED}WARNING: gisa found in another location:${NC}" + echo -e "${RED} $OTHER_PATHS${NC}" + echo -e "${RED} This may shadow the version installed by this script.${NC}" + echo -e "${RED} Consider uninstalling it to avoid version conflicts.${NC}" + echo "" +fi + +echo "========================================" +echo " Gisa Commands" +echo "========================================" +echo "" +echo "Getting started:" +echo "" +echo " $GS_COMMAND init # Create config file" +echo " $GS_COMMAND setup # Interactive workspace wizard" +echo "" +echo "Sync repos (discover + clone new + fetch existing):" +echo "" +echo " $GS_COMMAND sync --dry-run # Preview what would happen" +echo " $GS_COMMAND sync # Run sync (fetch mode)" +echo " $GS_COMMAND sync --pull # Sync with pull instead of fetch" +echo " $GS_COMMAND sync --workspace github # Sync specific workspace" +echo " $GS_COMMAND sync --concurrency 8 # Control parallelism" +echo "" 
+echo "Status:" +echo "" +echo " $GS_COMMAND status # Show all repo status" +echo " $GS_COMMAND status --uncommitted # Only repos with changes" +echo " $GS_COMMAND status --detailed # Full detail per repo" +echo "" +echo "Workspace management:" +echo "" +echo " $GS_COMMAND workspace list # List configured workspaces" +echo " $GS_COMMAND workspace default my-ws # Set default workspace" +echo " $GS_COMMAND workspace default # Show current default" +echo "" +echo "Reset / cleanup:" +echo "" +echo " $GS_COMMAND reset # Interactive cleanup" +echo " $GS_COMMAND reset --force # Force remove everything" +echo "" +echo "Verbose and JSON output:" +echo "" +echo " $GS_COMMAND -v sync --dry-run" +echo " $GS_COMMAND --json status" +echo "" diff --git a/toolkit/Conductor/setup.sh b/toolkit/Conductor/setup.sh new file mode 100755 index 0000000..fcb92b5 --- /dev/null +++ b/toolkit/Conductor/setup.sh @@ -0,0 +1,76 @@ +#!/bin/bash +# Git-Same Setup Script +# Checks prerequisites + +set -e + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)" +cd "$PROJECT_DIR" + +echo "========================================" +echo " Git-Same (Gisa CLI) Setup" +echo "========================================" +echo "" +echo "Project directory: $PROJECT_DIR" +echo "" + +# Check Rust toolchain +echo "--- Checking Rust Toolchain ---" +if ! command -v rustc &> /dev/null; then + echo "ERROR: Rust not found." + echo "Install from: https://rustup.rs/" + echo " curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh" + exit 1 +fi +echo "rustc: $(rustc --version)" +echo "cargo: $(cargo --version)" +echo "" + +# Check GitHub CLI +echo "--- Checking GitHub CLI ---" +if ! command -v gh &> /dev/null; then + echo "WARNING: GitHub CLI (gh) not found." + echo "Install with: brew install gh" + echo "Git-Same requires GitHub CLI for authentication." 
+ echo "Install gh first, then run: gh auth login" + echo "" +else + echo "gh: $(gh --version | head -1)" + echo "" + echo "GitHub CLI authentication status:" + if gh auth status 2>&1; then + echo "" + else + echo "" + echo "WARNING: GitHub CLI not authenticated." + echo "Run: gh auth login" + echo "" + fi +fi + +# Check Git +echo "--- Checking Git ---" +if ! command -v git &> /dev/null; then + echo "ERROR: Git not found." + exit 1 +fi +echo "git: $(git --version)" +echo "" + +# Clean build artifacts and update dependencies +echo "--- Cleaning Build Cache ---" +cargo clean +echo "" +echo "--- Updating Dependencies ---" +cargo update +echo "" + +echo "========================================" +echo " Setup Complete!" +echo "========================================" +echo "" +echo "Next steps:" +echo " 1. Run: ./toolkit/Conductor/run.sh" +echo " 2. Or manually install: cargo install --path ." +echo ""