
Reliability: Headless and Drive Hardening#216

Open
techmore wants to merge 8 commits into dev from reliability/headless-and-drive-hardening

Conversation

@techmore
Owner

This PR includes improvements for headless operation and Google Drive integration reliability.

Sean Dolbec and others added 8 commits March 23, 2026 07:38
…leaks, pagination

Fixes from code audit (issues #175-#192):

- jobs.py: Add threading.Lock to RateLimiter.can_scan/record_scan (#175)
- networking.py: Use .get("hops") to avoid KeyError on uninitialized state (#176)
- google_drive.py: Enforce 10-minute expiry on OAuth PKCE auth state token (#177)
- settings.py, google_drive.py: Remove filesystem paths from user-facing errors (#178)
- google_drive.py: Drop path interpolation from missing-credentials error message (#178)
- auto_scan.py: Catch ValueError in _parse_time_of_day with safe fallback (#188)
- runtime_db.py: Add limit/offset pagination to list_report_artifacts (#180)
- workflows.py: Use authoritative customer ID instead of linear name search (#190)

Fix pre-existing test failures:
- test_workflow_failures: stubs now return {"success": False} dict, not bool
- test_auth_routes: add missing ensure_google_drive_reports_folder/save_google_drive_credentials deps
- test_client_state: stubs return {"success": True} dict and accept **kwargs
- test_runtime_contract: update assertions to match current build.sh and Swift source
- test_health_modules: add max_scan_minutes/reports_save_to_desktop to expected payloads

All 293 tests pass.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add NMAPUI_DISABLE_UPDATE_CHECKS to skip GitHub API and idle auto-update UI
- Respect NMAPUI_SKIP_OPEN and NMAPUI_HEADLESS for update download URLs; use webbrowser.open cross-platform
- IdleStateManager honors update_checks_enabled from app startup
- Fix runtime _select_release_asset typing (str | None); add updates_disabled helpers
- Log google_drive_upload success/failure to module logger for grep-friendly ops logs
- Document audit checklist and deployment env (DEPLOYMENT.md, RELIABILITY_AUDIT.md)
- Bump VERSION; ignore .venv in gitignore
- Extend tests for update modules

Made-with: Cursor
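The RateLimiter fix listed above (#175) can be sketched roughly as follows; the constructor arguments, the sliding-window bookkeeping, and the field names are assumptions for illustration, only the `can_scan`/`record_scan` names and the added `threading.Lock` come from the PR:

```python
import threading
import time

class RateLimiter:
    """Sketch of a thread-safe scan rate limiter (fields are hypothetical)."""

    def __init__(self, max_scans: int, window_seconds: float):
        self.max_scans = max_scans
        self.window_seconds = window_seconds
        self._timestamps: list[float] = []
        self._lock = threading.Lock()  # guards _timestamps against concurrent scan threads

    def can_scan(self) -> bool:
        with self._lock:
            # prune entries that fell out of the window, then check capacity
            cutoff = time.monotonic() - self.window_seconds
            self._timestamps = [t for t in self._timestamps if t > cutoff]
            return len(self._timestamps) < self.max_scans

    def record_scan(self) -> None:
        with self._lock:
            self._timestamps.append(time.monotonic())
```

Without the lock, two workers calling `can_scan()` at the same instant could both see room in the window and both proceed, exceeding the limit; holding one lock across check and record closes that race.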

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a8f60fe8ee


```python
    return [target], bool(chunked_requested), None

if not chunked_requested and network.version == 4 and network.prefixlen < 24:
    chunk_targets = [str(subnet) for subnet in network.subnets(new_prefix=24)]
```

P1: Cap forced /24 expansion for non-chunked large CIDRs

When a user disables chunking, _plan_report_scan_targets now eagerly builds chunk_targets as all /24 subnets for any IPv4 CIDR broader than /24. For sufficiently broad but valid targets (for example /8 and larger), this can materialize huge lists before scanning begins, causing long stalls or high memory usage in the report worker; unlike the normal chunking path, this override bypasses the existing 2048-chunk safety cap entirely. Please apply a hard cap (or lazy iteration) here as well.

Useful? React with 👍 / 👎.
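A minimal sketch of the cap this comment asks for. The function name, the `MAX_REPORT_CHUNKS` constant, and the use of `itertools.islice` are assumptions for illustration; only the 2048-chunk figure and the /24 expansion come from the review above:

```python
from ipaddress import ip_network
from itertools import islice

MAX_REPORT_CHUNKS = 2048  # hypothetical constant mirroring the existing chunking cap

def expand_to_24s(cidr: str) -> list[str]:
    """Expand a broad IPv4 CIDR into /24 chunks, refusing to exceed the cap."""
    network = ip_network(cidr, strict=False)
    if network.version != 4 or network.prefixlen >= 24:
        return [str(network)]
    # islice stops the subnet generator early instead of materializing
    # e.g. 65536 /24s for a /8 before the size check even runs
    subnets = islice(network.subnets(new_prefix=24), MAX_REPORT_CHUNKS + 1)
    chunks = [str(s) for s in subnets]
    if len(chunks) > MAX_REPORT_CHUNKS:
        raise ValueError(
            f"{cidr} expands to more than {MAX_REPORT_CHUNKS} /24 chunks"
        )
    return chunks
```

The point of `islice` is that the rejection for a /8 costs at most 2049 yielded subnets, not 65536, so the worker fails fast instead of stalling on list construction.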

```python
    self,
    *,
    customer_id: str | None = None,
    limit: int = 500,
```

P2: Avoid truncating artifact queries via default 500-row limit

Changing list_report_artifacts to default limit=500 silently truncates all existing callers that do not pass a limit. In the current codebase, paths such as runtime history/report listing and customer history retrieval still call list_report_artifacts(...) without an explicit limit, so larger installations will now get incomplete results (for example, get_scan_history(limit=1000) can never return more than 500 runtime artifacts). This should either preserve the previous unbounded default or require callers to pass explicit pagination/limits.

Useful? React with 👍 / 👎.
