ENH: Prepare PhysioMotion4D beta docs and release infrastructure #48
Conversation
Refresh release-facing documentation and validation for the 2026.05.07 beta.

- Promote the GitHub Pages documentation as the primary user entry point
- Align README, quickstart, tutorials, examples, CLI docs, API docs, and developer docs with current public APIs
- Reorder tutorials so statistical model creation runs before model fitting
- Add tutorial, CLI smoke, and public API import test coverage
- Add release workflow for wheel/sdist build and PyPI/TestPyPI publishing
- Default uv/PyTorch resolution to CUDA 13 and document CUDA 13 runner requirements
- Add visible non-clinical-use notices and beta release references
- Remove stale NIM, VISTA-3D, CHANGELOG, DirLab autodownload, and placeholder references
- Make mypy blocking while ignoring missing third-party library stubs
- Fix docs build/autodoc issues and Sphinx navigation warnings
Walkthrough

Version and packaging bumped to 2026.05.07; CI mypy failures now fail the build; release workflow added to build/validate/publish to TestPyPI/PyPI. Large documentation reorganization and tutorial/test reshuffling. Core changes include VTK→USD conversion updates, PCA model registrar API/type changes, and workflow/CLI lazy imports and typing refinements.

Changes

Release, API, Docs, Workflows & Tests (single cohesive DAG)
Sequence Diagram(s)

    sequenceDiagram
        participant CLI as CLI / User
        participant Workflow as WorkflowFitStatModel
        participant PCA as RegisterModelsPCA
        participant VTKUSD as ConvertVTKToUSD
        participant ITK as ITK / PyVista
        participant USD as USD Stage
        CLI->>Workflow: init(inputs, options)
        Workflow->>PCA: (optional) run PCA-constrained registration (template, patient, pca_json)
        PCA->>ITK: generate/load distance maps & template mesh
        PCA->>PCA: optimize coefficients (scipy.minimize)
        PCA-->>Workflow: registered model + coefficients
        Workflow->>VTKUSD: convert registered model -> USD (colormap/labels)
        VTKUSD->>ITK: extract surfaces, compute arrays/colormap
        VTKUSD->>USD: create prims, set primvars and time samples
        USD-->>CLI: return saved USD path
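The "optimize coefficients" step in the diagram fits PCA mode weights to a target shape. A toy NumPy sketch of that idea, using a synthetic basis and plain least squares in place of the workflow's `scipy.minimize`-based distance-map metric (all names and shapes here are illustrative, not the project's API):

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_modes = 50, 3

# Hypothetical PCA model: mean shape plus a few displacement modes.
mean_shape = rng.normal(size=n_points)
modes = rng.normal(size=(n_modes, n_points))

# Target generated from known coefficients (what the optimizer should recover).
true_coeffs = np.array([1.5, -0.7, 0.3])
target = mean_shape + true_coeffs @ modes

# Least-squares fit of the mode coefficients; the real workflow instead
# minimizes a distance-map metric with scipy.optimize.minimize.
coeffs, *_ = np.linalg.lstsq(modes.T, target - mean_shape, rcond=None)
print(np.round(coeffs, 3))  # ≈ [ 1.5 -0.7  0.3]
```

The recovered coefficients match because the target lies exactly in the span of the modes; a real metric (e.g. squared distance to a patient surface) needs an iterative optimizer.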
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Pull request overview
Prepares the PhysioMotion4D 2026.05.07 beta for public consumption by realigning tutorials/docs/tests to current public APIs and adding release/publishing infrastructure.
Changes:
- Reordered Tutorials 3/4 (PCA model creation before patient fitting) and updated tutorial/test harnessing to use a new `tutorial` marker gated by `--run-tutorials`.
- Added smoke tests for public API exports and CLI `--help`, and tightened CI by making mypy non-optional.
- Refreshed hosted documentation structure/content (new landing-page notice, updated CLI/API/developer docs), bumped versioning, and added a GitHub Actions release workflow for build + PyPI/TestPyPI publishing.
Reviewed changes
Copilot reviewed 63 out of 63 changed files in this pull request and generated 3 comments.
Summary per file:
| File | Description |
|---|---|
| tutorials/tutorial_04_fit_statistical_model_to_patient.py | Tutorial renumbering + optional PCA JSON hook for patient-fitting workflow. |
| tutorials/tutorial_03_create_statistical_model.py | Tutorial renumbering + updated PCA JSON schema usage (components/eigenvalues). |
| tutorials/README.md | Reordered tutorial index and updated pytest flag/marker docs (--run-tutorials). |
| tests/test_tutorials.py | Rewired tutorial tests to new tutorial ordering and tutorial marker. |
| tests/test_import_public_api.py | New smoke test verifying physiomotion4d.__all__ exports are present. |
| tests/test_cli_smoke.py | New smoke test asserting each CLI module succeeds on --help. |
| tests/EXPERIMENT_TESTS_GUIDE.md | Updated resource guidance wording (removes VISTA-3D reference). |
| tests/conftest.py | Added --run-tutorials option + tutorial marker skip behavior + timing reporting. |
| src/physiomotion4d/workflow_fit_statistical_model_to_patient.py | Docstring formatting tweak (blank line in stage list). |
| src/physiomotion4d/workflow_create_statistical_model.py | Docstring pipeline section cleaned up. |
| src/physiomotion4d/vtk_to_usd/usd_utils.py | Added from __future__ import annotations. |
| src/physiomotion4d/usd_tools.py | Added from __future__ import annotations + docstring formatting. |
| src/physiomotion4d/segment_anatomy_base.py | Docstring updated to remove VISTA-3D mention. |
| src/physiomotion4d/register_models_pca.py | Added from __future__ import annotations. |
| src/physiomotion4d/convert_vtk_to_usd.py | Added from __future__ import annotations. |
| src/physiomotion4d/contour_tools.py | Added from __future__ import annotations. |
| src/physiomotion4d/__init__.py | Version bump to 2026.05.07 + updated CuPy install hint. |
| README.md | Promotes hosted docs as primary entry point; updates examples and tutorial ordering. |
| pyproject.toml | Version bump; removes nim extra; adds release CLI entry point; mypy config changes; adds tutorial marker. |
| docs/tutorials.rst | Reordered tutorials and updated narrative/commands for new tutorial dependencies. |
| docs/troubleshooting.rst | Updated troubleshooting guidance to match current workflow/CLI patterns. |
| docs/testing.rst | Refocused testing docs around fast subset + tutorial/experiment opt-ins. |
| docs/references.rst | Minor link label improvements. |
| docs/quickstart.rst | Added clinical-use warning + minimal Slicer-Heart quickstart + updated APIs. |
| docs/PYPI_RELEASE_GUIDE.md | Updated release instructions to match new release notes + removed nim references. |
| docs/installation.rst | Updated install verification snippet and removed nim extra mention. |
| docs/index.rst | Adds visible non-clinical notice, reorganizes topic cards, includes architecture page in toctree. |
| docs/examples.rst | Updated example code to current workflow APIs and utilities usage. |
| docs/developer/workflows.rst | Replaced large guide with current workflow mapping + focused contributor guidance. |
| docs/developer/utilities.rst | Replaced large guide with concise, correct class-based utility usage examples. |
| docs/developer/usd_generation.rst | Points to ConvertVTKToUSD and API ref page instead of embedding full autoclass here. |
| docs/developer/segmentation.rst | Replaced large guide with current segmentation contract and dev notes. |
| docs/developer/registration_models.rst | Replaced large guide with workflow entry point + lower-level class list. |
| docs/developer/registration_images.rst | Replaced large guide with current registration usage patterns and result shape. |
| docs/developer/core.rst | Replaced large guide with concise core conventions + validation commands. |
| docs/developer/architecture.rst | Replaced large guide with concise architecture summary + workflow list and boundaries. |
| docs/contributing.rst | Updates PR checklist and release/versioning references. |
| docs/conf.py | Version bump + updated exclude patterns + adjusted mock-import list. |
| docs/cli_scripts/vtk_to_usd.rst | Updated CLI docs from “planned” to actual supported command behavior. |
| docs/cli_scripts/overview.rst | Reframed docs around workflows/CLIs/tutorials as supported layers; removes “upcoming scripts” framing. |
| docs/cli_scripts/lung_gated_ct.rst | Marked as orphan + clarifies no dedicated lung-gated CLI exists. |
| docs/cli_scripts/fit_statistical_model_to_patient.rst | Updated option semantics (patient-image optional; mask-to-image explicit opt-in). |
| docs/cli_scripts/brain_vessel_modeling.rst | Marked as orphan + clarifies no brain-vessel CLI exists. |
| docs/cli_scripts/best_practices.rst | Version string updates. |
| docs/cli_scripts/4dct_reconstruction.rst | Updated from “planned” to supported reconstruct-highres-4d-ct CLI usage. |
| docs/changelog.rst | Reframed as “Release Notes” and added 2026.05.07 beta highlights. |
| docs/architecture.rst | Updated architecture page to match current workflows/components and new CLI list. |
| docs/api/workflows.rst | Reorganized workflow API docs and examples; adds list-table overview. |
| docs/api/segmentation/totalsegmentator.rst | Updated to describe current grouped-mask contract and example usage. |
| docs/api/segmentation/base.rst | Updated base contract documentation and guidance for extensions. |
| docs/api/registration/time_series.rst | Updated usage example and heading; excludes member in autodoc. |
| docs/api/registration/index.rst | Simplified to contract + result shape + example. |
| docs/api/registration/icon.rst | Updated ICON page to current API and configuration guidance. |
| docs/api/registration/ants.rst | Updated ANTs page to current API and example. |
| docs/api/model_registration/index.rst | Simplified model-registration docs; points users to workflow. |
| docs/api/index.rst | Simplified API toctree structure. |
| docs/api/base.rst | Updated base-class page to current logging usage patterns. |
| docs/API_MAP.md | Updated generated API map entries and line numbers. |
| docs/_static/custom.css | Adds styling for prominent clinical-use notice on landing page. |
| .github/workflows/release.yml | New workflow to build + publish wheel/sdist to TestPyPI or PyPI via trusted publishing. |
| .github/workflows/README.md | Documents new release workflow and notes mypy is now blocking. |
| .github/workflows/ci.yml | Makes mypy non-optional (continue-on-error: false). |
Comments suppressed due to low confidence (2)
tutorials/tutorial_04_fit_statistical_model_to_patient.py:158
`pca_json` is always a non-None `Path` (CLI default), but if the file is missing this silently skips PCA registration. That can make Tutorial 4 run in a different mode than the documented command suggests and can hide user mistakes when a path is provided. Consider either (a) defaulting `--pca-json` to None and only enabling PCA when explicitly provided, or (b) raising a `FileNotFoundError` / logging a warning when `pca_json` is provided but does not exist.
docs/conf.py:206

`autodoc_mock_imports` no longer mocks `scipy`, but `src/physiomotion4d/register_models_pca.py` and `register_models_icp_itk.py` import `scipy.optimize`. If the docs build environment doesn't have SciPy installed, autodoc imports will fail. Consider adding `scipy` back to `autodoc_mock_imports` (or ensuring SciPy is explicitly installed in the docs build env).
    # Mock imports for packages that require heavy dependencies or CUDA
    autodoc_mock_imports = [
        "torch",
        "torchvision",
        "torchaudio",
        "monai",
        "totalsegmentator",
        "icon_registration",
        "unigradicon",
        "vtk",
        "pyvista",
        "itk",
        "nibabel",
        "pynrrd",
        "transformers",
        "SimpleITK",
        "cupy",
        "cupyx",
        "pxr",
        "matplotlib",
        "ants",
        "antspyx",
        "cv2",
        "skimage",
        "PIL",
    [project.urls]
    Homepage = "https://github.com/Project-MONAI/physiomotion4d"
    Documentation = "https://github.com/Project-MONAI/physiomotion4d/blob/main/README.md"
    Repository = "https://github.com/Project-MONAI/physiomotion4d.git"
    @pytest.mark.parametrize("module_name", CLI_MODULES)
    def test_cli_help(module_name: str) -> None:
        """Each CLI module exits successfully for --help."""
        result = subprocess.run(
            [sys.executable, "-m", module_name, "--help"],
            check=False,
            capture_output=True,
            text=True,
            timeout=30,
        )
    package = importlib.import_module("physiomotion4d")

    public_names = getattr(package, "__all__")
    assert public_names, "physiomotion4d.__all__ should not be empty"
Codecov Report

❌ Patch coverage is …

Additional details and impacted files

    @@            Coverage Diff            @@
    ##             main      #48      +/-  ##
    =========================================
    + Coverage   21.71%   27.43%    +5.71%
    =========================================
      Files          46       46
      Lines        6272     6313       +41
    =========================================
    + Hits         1362     1732      +370
    + Misses       4910     4581      -329

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
pyproject.toml (1)
149-153: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

The `Documentation` URL still points to the GitHub README, not the GitHub Pages site.

The PR objective is to "Promote GitHub Pages documentation as the primary user entry point." The PyPI metadata `Documentation` field is the canonical docs pointer shown on the package's PyPI page, so it should be updated to the GitHub Pages URL to match that goal.

🔧 Proposed fix

    -Documentation = "https://github.com/Project-MONAI/physiomotion4d/blob/main/README.md"
    +Documentation = "https://project-monai.github.io/physiomotion4d/"

(Adjust the URL if the Pages site is published under a different path.)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pyproject.toml` around lines 149 - 153, Update the project metadata so the PyPI "Documentation" URL points to the GitHub Pages site rather than the repo README: modify the [project.urls] section and replace the value of the Documentation field with the published GitHub Pages URL (e.g., "https://project-moni/physiomotion4d.github.io" or the correct pages path) so the Documentation entry reflects the primary user-facing docs site.

src/physiomotion4d/register_models_pca.py (3)
359-365: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Allocate a fresh ITK point for each vertex.

This loop appends the same mutable `itk.Point` object on every iteration, so `self._pca_template_model_points_itk` ends up with many references to the last point only. The optimizer will then sample the wrong geometry for almost every vertex.

Suggested fix

     self._pca_template_model_points_itk = []
    -itk_point = itk.Point[itk.D, 3]()
     for point in self.pca_template_model.points:
    +    itk_point = itk.Point[itk.D, 3]()
         itk_point[0] = float(point[0])
         itk_point[1] = float(point[1])
         itk_point[2] = float(point[2])
         self._pca_template_model_points_itk.append(itk_point)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/register_models_pca.py` around lines 359 - 365, The loop in register_models_pca.py is appending the same mutable itk.Point instance (itk_point) for every vertex, so self._pca_template_model_points_itk ends up containing repeated references to the last vertex; fix by creating a new itk.Point[itk.D, 3]() inside the loop (or otherwise cloning itk_point) for each iteration before setting its coordinates and appending it, i.e. allocate a fresh itk_point per point from self.pca_template_model.points so each appended element is a distinct object.
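The root cause here is a general Python pitfall rather than anything ITK-specific: appending a mutable object inside a loop stores references to one shared object. A minimal sketch, using plain lists as a stand-in for `itk.Point`:

```python
def collect_buggy(coords):
    """Reuses one mutable point: every list entry aliases the same object."""
    points = []
    point = [0.0, 0.0, 0.0]  # allocated once, outside the loop (the bug)
    for x, y, z in coords:
        point[0], point[1], point[2] = x, y, z
        points.append(point)  # appends a reference, not a copy
    return points


def collect_fixed(coords):
    """Allocates a fresh point per vertex, so each entry is distinct."""
    points = []
    for x, y, z in coords:
        point = [float(x), float(y), float(z)]  # new object each iteration
        points.append(point)
    return points


coords = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0)]
buggy = collect_buggy(coords)
fixed = collect_fixed(coords)
print(buggy)  # every entry is the last vertex: [[7.0, 8.0, 9.0]] * 3
print(fixed)  # distinct vertices preserved
```

Moving the allocation inside the loop, as the suggested fix does, is the standard remedy.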
474-480: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Apply the PCA deformation before the pre-transform.

For any `pre_pca_transform` with rotation or scale, `T(x) + d` is not equivalent to `T(x + d)`. The current order leaves the PCA displacement in the template frame, so the metric is optimizing a different shape than the basis actually defines. The same ordering is repeated around Line 674 when building `registered_model`.

Suggested fix

    -if self.pre_pca_transform is not None:
    -    point = self.pre_pca_transform.TransformPoint(point)
    -
    -# Add PCA deformation if provided
     point[0] += pca_deformation[i, 0]
     point[1] += pca_deformation[i, 1]
     point[2] += pca_deformation[i, 2]
    +
    +if self.pre_pca_transform is not None:
    +    point = self.pre_pca_transform.TransformPoint(point)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/register_models_pca.py` around lines 474 - 480, The PCA displacement is being applied after the pre_pca_transform so the deformation ends up in the template frame; change the order to add pca_deformation to the point first and then call self.pre_pca_transform.TransformPoint(point) so the transform is applied to the deformed point. Update the same pattern when constructing registered_model (the block that references pca_deformation and pre_pca_transform around where registered_model is built) so both places apply deformation before calling pre_pca_transform.TransformPoint.
371-389: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Reject `None` for `reference_image` here.

`set_fixed_model()` accepts `reference_image: Optional[itk.Image]` but immediately dereferences it through `create_distance_map(...)`. Passing `None` currently fails deeper in the stack with a much less actionable error.

Suggested fix

     def set_fixed_model(
         self, fixed_model: pv.UnstructuredGrid, reference_image: Optional[itk.Image]
     ) -> None:
         """Set the fixed model for registration.
         @@
         Args:
             fixed_model: PyVista model used to compute the distance map, if one isn't provided.
             reference_image: ITK image providing coordinate frame for computing the distance map.
         """
    +    if reference_image is None:
    +        raise ValueError("reference_image must be provided when setting a fixed model")
    +
         self.fixed_distance_map = self._contour_tools.create_distance_map(
             fixed_model,
             reference_image,
             squared_distance=True,

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/register_models_pca.py` around lines 371 - 389, set_fixed_model currently accepts reference_image: Optional[itk.Image] but immediately passes it to _contour_tools.create_distance_map, causing obscure errors when None is given; add an explicit None check at the top of set_fixed_model (function name: set_fixed_model) and raise a clear ValueError (e.g. "reference_image must not be None when setting a fixed model") so callers get an actionable message before calling _contour_tools.create_distance_map; ensure the rest of the method (setting self.fixed_distance_map) remains unchanged.

src/physiomotion4d/convert_vtk_to_usd.py (2)
645-663: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep one-frame labels time-sampled in animated exports.

When a label is present in only one frame of a time series, this branch writes it with `create_mesh(...)`, which makes the prim static instead of visible only at that frame. `_convert_unified()` already avoids this for unlabeled parts; labels need the same treatment.

Suggested fix

    -if len(label_mesh_sequence) == 1:
    +if len(label_mesh_sequence) == 1 and len(self.input_polydata) == 1:
         label_mesh_sequence[0].material_id = material.name
         material_mgr.get_or_create_material(material)
         mesh_converter.create_mesh(
             label_mesh_sequence[0], mesh_path, bind_material=True
         )
     else:
         if self._time_codes is not None:

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/convert_vtk_to_usd.py` around lines 645 - 663, The code currently writes single-frame labels with mesh_converter.create_mesh, making those prims static; instead, treat one-frame labels as time-sampled like in _convert_unified for unlabeled parts: when len(label_mesh_sequence) == 1, set label_mesh_sequence[0].material_id = material.name, call material_mgr.get_or_create_material(material), compute label_time_codes (use self._time_codes[i] when self._time_codes is not None else float(i) for each i in label_frame_indices) but for the single-frame case take the single time code, and call mesh_converter.create_time_varying_mesh(label_mesh_sequence, mesh_path, label_time_codes, bind_material=True) so the prim is time-sampled rather than static.
675-697: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

`convert_to_surface` doesn't cover the mesh types this class says it supports.

`supports_mesh_type()` returns `True` for `StructuredGrid`, `ImageData`, and `RectilinearGrid`, but `_vtk_to_mesh_data()` only extracts a surface from `UnstructuredGrid`. With the default `convert_to_surface=True`, those inputs still fall through to the `faces` check and raise. Either surface-extract every non-`PolyData` dataset here or narrow the advertised support.

Suggested fix

    -if isinstance(vtk_mesh, pv.UnstructuredGrid) and self.convert_to_surface:
    +if self.convert_to_surface and not isinstance(vtk_mesh, pv.PolyData):
         vtk_mesh = vtk_mesh.extract_surface(algorithm="dataset_surface")

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/convert_vtk_to_usd.py` around lines 675 - 697, The code in _vtk_to_mesh_data only calls extract_surface for pv.UnstructuredGrid, but supports_mesh_type() advertises StructuredGrid, ImageData, and RectilinearGrid as supported while convert_to_surface=True will still fall through and trigger the "no faces" error; update _vtk_to_mesh_data so that when convert_to_surface is True it treats any non-PolyData dataset (e.g., pv.StructuredGrid, pv.ImageData, pv.RectilinearGrid, pv.UnstructuredGrid) by calling the appropriate surface extraction (extract_surface or extract_geometry consistent with pyvista) before accessing points/faces, and only raise ValueError if convert_to_surface is False and the mesh truly has no faces; reference functions/flags: _vtk_to_mesh_data(), supports_mesh_type(), convert_to_surface, and class checks for UnstructuredGrid/StructuredGrid/ImageData/RectilinearGrid.

tests/conftest.py (1)
191-220: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Regular test detail output appears gated by tutorial presence.

`pytest_terminal_summary` currently prints regular test "Individual Test Times" only inside the tutorial block. On runs without tutorial tests, regular per-test timings won't be shown (Line 212+).

💡 Proposed fix

     if regular_tests:
         terminalreporter.write_sep("-", "Regular Tests", bold=True)
         terminalreporter.write_line(f"Count: {len(regular_tests)}")
         @@
         regular_total = sum(t["duration"] for t in regular_tests)
         terminalreporter.write_line(
             f"Total Time: {timedelta(seconds=int(regular_total))}"
         )
         terminalreporter.write_line("")
    +
    +    # Show all regular tests with timing
    +    terminalreporter.write_line("Individual Test Times:")
    +    for test in sorted_regular:
    +        outcome_symbol = "+" if test["outcome"] == "passed" else "x"
    +        duration_str = _format_duration(test["duration"])
    +        terminalreporter.write_line(
    +            f"  {outcome_symbol} {duration_str:>10s} {test['nodeid']}"
    +        )
    +    terminalreporter.write_line("")

     # Tutorial tests section
     if tutorial_tests:
         @@
         for test in sorted_tutorials:
             outcome_symbol = "+" if test["outcome"] == "passed" else "x"
             duration_str = _format_duration(test["duration"])
             terminalreporter.write_line(
                 f"  {outcome_symbol} {duration_str:>10s} {test['nodeid']}"
             )
         terminalreporter.write_line("")
    -
    -    # Show all regular tests with timing
    -    terminalreporter.write_line("Individual Test Times:")
    -    for test in sorted_regular:
    -        outcome_symbol = "+" if test["outcome"] == "passed" else "x"
    -        duration_str = _format_duration(test["duration"])
    -        terminalreporter.write_line(
    -            f"  {outcome_symbol} {duration_str:>10s} {test['nodeid']}"
    -        )
    -    terminalreporter.write_line("")

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/conftest.py` around lines 191 - 220, The regular test timing block is currently inside the tutorial_tests conditional so per-test timings for regular tests (using sorted_regular) only show when tutorial_tests is present; move or duplicate the "Individual Test Times:" loop for regular tests (the block that iterates over sorted_regular, computes outcome_symbol and duration_str via _format_duration, and calls terminalreporter.write_line) out of the tutorial_tests if-block (or render it unconditionally after handling tutorial tests) so regular tests are always printed; update pytest_terminal_summary to reference sorted_regular in the unconditional path.

.github/workflows/README.md (1)
27-29: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Align GPU runner docs with the actual CI runner labels.

Line 27 and Line 95 document Linux runners, but `ci.yml` targets Windows self-hosted GPU runners. Following this guide as-is can cause GPU jobs to wait indefinitely for non-matching labels.

Suggested doc fix

    -- Requires self-hosted runner with `[self-hosted, linux, gpu]` labels
    +- Requires self-hosted runner with `[self-hosted, Windows, X64, gpu]` labels
    ...
    -GPU tests require self-hosted runners with:
    -- Linux OS
    -- NVIDIA GPU with CUDA 13.0 support
    -- Runner labels: `[self-hosted, linux, gpu]`
    +GPU tests require self-hosted runners with:
    +- Windows OS (x64)
    +- NVIDIA GPU with CUDA 13.0 support
    +- Runner labels: `[self-hosted, Windows, X64, gpu]`

Also applies to: 95-98
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In @.github/workflows/README.md around lines 27 - 29, The README currently documents GPU CI runners as Linux-labeled, but the CI workflow (ci.yml) targets Windows self-hosted GPU runners; update the GPU runner bullets to match ci.yml by replacing the Linux label with the Windows label (i.e., require self-hosted runners labeled [self-hosted, windows, gpu]) and ensure the other metadata (PyTorch/CUDA version and 30 minute timeout) remain accurate and consistent across all occurrences referenced in the README.

src/physiomotion4d/workflow_fit_statistical_model_to_patient.py (2)
123-131: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

`template_model` type annotation conflicts with the class docstring and real-world usage.

The constructor declares `template_model: pv.PolyData` (line 125), but:

- The class-level `Attributes` docstring (line 67) says `pv.UnstructuredGrid`.
- The docs example (`pv.read("template_heart.vtu")`) returns `pv.UnstructuredGrid`.
- Tutorial 4 passes a `pv.PolyData` after calling `_extract_surface`.

Both types work at runtime (PyVista duck-typing), but the annotation misleads callers and type-checkers. The broadest accurate annotation is `pv.DataSet`.

🔧 Proposed fix

     def __init__(
         self,
    -    template_model: pv.PolyData,
    +    template_model: pv.DataSet,
         patient_models: list[pv.PolyData] | None = None,

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py` around lines 123 - 131, The constructor's type annotation for template_model is too narrow (pv.PolyData) and conflicts with the class docstring and usage; update the __init__ signature to accept pv.DataSet for template_model (and any related parameter annotations if present) and adjust the class-level Attributes docstring to state pv.DataSet instead of pv.UnstructuredGrid/PolyData so type hints and docs agree with runtime usage; reference the __init__ method and the class docstring/Attributes block when making the changes.
279-305: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Multi-label mask accumulation is broken: only the last model's label survives.

`mask_arr` is re-assigned from `itk.GetArrayFromImage(mask)` at the start of each loop body, silently discarding all previously accumulated labels. For `n` models the final array contains only label `n`.

Trace:

- `i=0` → `mask_arr = binary_0 * 1` (label 1 ✅)
- `i=1` → `mask_arr = binary_1` (overwrites label 1 ❌) → `where(…, 2*binary_1, 0)` (label 2 only)
- `i=2` → `mask_arr = binary_2` (overwrites label 2 ❌) → label 3 only

The current internal call site (`[self.patient_model_surface]`, a single element) doesn't exercise this path, but the method's signature accepts multiple models and the bug is latent.

🐛 Proposed fix

    -mask_arr: Optional[np.ndarray] = None
    +combined_arr: Optional[np.ndarray] = None
     for i, model in enumerate(models):
         mask = self.contour_tools.create_mask_from_mesh(
             model,
             self.patient_image,
         )
    -    mask_arr = itk.GetArrayFromImage(mask).astype(np.uint8)
    -    if i == 0:
    -        mask_arr = mask_arr * (i + 1)  # Label 1, 2, 3, ...
    -    else:
    -        mask_arr = np.where(mask_arr > 0, (i + 1) * mask_arr, 0)
    -assert mask_arr is not None, "mask_arr must be set after processing models"
    -mask = itk.GetImageFromArray(mask_arr.astype(np.uint8))
    +    model_arr = itk.GetArrayFromImage(mask).astype(np.uint8)
    +    label_arr = np.where(model_arr > 0, i + 1, np.uint8(0))
    +    if combined_arr is None:
    +        combined_arr = label_arr
    +    else:
    +        # Later labels take precedence where regions overlap
    +        combined_arr = np.where(label_arr > 0, label_arr, combined_arr)
    +assert combined_arr is not None, "combined_arr must be set after processing models"
    +mask_arr = combined_arr
    +mask = itk.GetImageFromArray(mask_arr.astype(np.uint8))

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py` around lines 279 - 305, The loop currently reassigns mask_arr from itk.GetArrayFromImage(mask) each iteration, discarding previous labels; fix by initializing mask_arr as a zero array once before the loop and for each model convert the model mask to an array (use itk.GetArrayFromImage on the return of create_mask_from_mesh) and merge it into the cumulative mask_arr (e.g., set mask_arr[cur_mask>0] = i+1 or use np.where to only write new label values) so labels accumulate; keep using the same names (models, mask, mask_arr, self.contour_tools.create_mask_from_mesh, itk.GetArrayFromImage, itk.GetImageFromArray) and preserve dtype/copying of image information after the loop.
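The merge logic in the proposed fix can be checked in isolation. A minimal NumPy sketch, with hypothetical 1-D binary masks standing in for the ITK-derived arrays, showing per-model masks accumulating into one label map with later labels winning on overlap:

```python
import numpy as np


def combine_label_masks(binary_masks):
    """Merge binary masks into a single label map (labels 1..n).

    Later masks take precedence where regions overlap, mirroring the
    accumulation pattern in the proposed fix.
    """
    combined = None
    for i, mask in enumerate(binary_masks):
        # Turn each binary mask into its label value (i + 1).
        label = np.where(mask > 0, np.uint8(i + 1), np.uint8(0))
        combined = label if combined is None else np.where(label > 0, label, combined)
    assert combined is not None, "at least one mask is required"
    return combined


# Two overlapping masks: label 2 overwrites label 1 where both are set.
mask_a = np.array([1, 1, 0, 0], dtype=np.uint8)
mask_b = np.array([0, 1, 1, 0], dtype=np.uint8)
labels = combine_label_masks([mask_a, mask_b])
print(labels)  # [1 2 2 0]
```

Reversing the input order flips which label wins the overlap, which is why the precedence rule should be stated explicitly in the real method's docstring.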
🧹 Nitpick comments (4)
pyproject.toml (1)
218-229: 🏗️ Heavy lift
ignore_errors = trueis too broad for modules that are part of the public API.
ignore_errors = truesuppresses every mypy diagnostic in the listed modules — including real bugs, wrong return types, and untyped defs — not just missing third-party stubs. Three of the eight affected modules (register_models_pca,workflow_create_statistical_model,workflow_fit_statistical_model_to_patient) are exported in__all__and are central to the new statistical-model pipeline. This directly contradicts the PR objective of "Make mypy blocking while ignoring missing third-party library stubs."Prefer one of:
- Inline
# type: ignore[specific-code]at the offending sites, or- A targeted mypy override using
disable_error_code(e.g.,["import-not-found", "import-untyped"]) so that structural type errors are still caught.♻️ Suggested targeted override (replaces the blanket suppress)
-[[tool.mypy.overrides]] -module = [ - "physiomotion4d.cli.create_statistical_model", - "physiomotion4d.cli.fit_statistical_model_to_patient", - "physiomotion4d.cli.visualize_pca_modes", - "physiomotion4d.register_models_pca", - "physiomotion4d.vtk_to_usd.mesh_utils", - "physiomotion4d.vtk_to_usd.vtk_reader", - "physiomotion4d.workflow_create_statistical_model", - "physiomotion4d.workflow_fit_statistical_model_to_patient", -] -ignore_errors = true +[[tool.mypy.overrides]] +module = [ + "physiomotion4d.cli.create_statistical_model", + "physiomotion4d.cli.fit_statistical_model_to_patient", + "physiomotion4d.cli.visualize_pca_modes", + "physiomotion4d.register_models_pca", + "physiomotion4d.vtk_to_usd.mesh_utils", + "physiomotion4d.vtk_to_usd.vtk_reader", + "physiomotion4d.workflow_create_statistical_model", + "physiomotion4d.workflow_fit_statistical_model_to_patient", +] +# Only suppress stub-related noise; structural errors are still reported. +disable_error_code = ["import-not-found", "import-untyped", "no-untyped-def"]If the modules have deeper type issues beyond missing stubs, use
If the modules have deeper type issues beyond missing stubs, use `# type: ignore[return-value]` (or similar) inline at the affected sites so the suppression scope is minimal and self-documenting.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pyproject.toml` around lines 218 - 229, The mypy override currently sets ignore_errors = true for several modules (including physiomotion4d.register_models_pca, physiomotion4d.workflow_create_statistical_model, physiomotion4d.workflow_fit_statistical_model_to_patient) which suppresses all diagnostics; replace this blanket suppression with targeted fixes: remove the public API modules from the ignore list and instead either add a mypy override using disable_error_code for only missing-stub errors (e.g., "import-not-found", "import-untyped") for the remaining third-party-only modules (vtk_reader, mesh_utils, CLI modules), or add inline # type: ignore[<specific-code>] comments at the precise offending locations in functions/classes where third-party stubs are missing (e.g., in create_statistical_model, fit_statistical_model_to_patient, visualize_pca_modes) so structural type errors in exported modules remain checked.

src/physiomotion4d/workflow_fit_statistical_model_to_patient.py (2)
584-599: 💤 Low value
Debug-mode writes go to the current working directory without a configurable path.
When `self.log_level == logging.DEBUG`, three intermediate transform images are written to the CWD (`pca_forward_point_transform_{x,y,z}.nii.gz`). This can silently pollute unexpected directories — especially in tests or automated pipelines that happen to set `DEBUG`.

Consider gating behind an explicit `save_debug_outputs: bool = False` flag or writing under `output_dir` if one is available, so debug output is opt-in and location-controlled.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py` around lines 584 - 599, The debug block that writes pca_forward_point_transform_x/y/z.nii.gz when self.log_level == logging.DEBUG should be made opt-in and location-controlled: add a boolean flag (e.g. self.save_debug_outputs) and use an existing output directory attribute (e.g. self.output_dir) or a configurable debug_output_dir to build full paths; update the guard from checking only self.log_level to require both logging.DEBUG and self.save_debug_outputs, and write the itk.imwrite files to os.path.join(self.output_dir or self.debug_output_dir, filename) while keeping the existing use of self.pca_forward_point_transform and self.patient_image for CopyInformation.
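As a rough illustration of the opt-in gating suggested above, here is a minimal sketch. The attribute names `save_debug_outputs` and `output_dir` are the hypothetical names proposed in the comment, not existing workflow attributes:

```python
import logging
import os


class DebugOutputSketch:
    """Sketch of opt-in, location-controlled debug output (names are illustrative)."""

    def __init__(
        self,
        output_dir: str = ".",
        log_level: int = logging.INFO,
        save_debug_outputs: bool = False,
    ) -> None:
        self.output_dir = output_dir
        self.log_level = log_level
        self.save_debug_outputs = save_debug_outputs

    def should_write_debug(self) -> bool:
        # Debug images are written only when BOTH conditions hold,
        # so DEBUG logging alone no longer triggers file writes.
        return self.save_debug_outputs and self.log_level == logging.DEBUG

    def debug_path(self, filename: str) -> str:
        # Build a full path under output_dir instead of relying on the CWD.
        return os.path.join(self.output_dir, filename)
```

In the real workflow, the existing `itk.imwrite` calls would then use `self.debug_path("pca_forward_point_transform_x.nii.gz")` inside an `if self.should_write_debug():` guard.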
855-888: 🏗️ Heavy lift
Point-by-point Python loop over `new_points` will be very slow for high-resolution meshes.
`transform_model` iterates every mesh point in a Python `for` loop, constructing an `itk.Point` per iteration. For meshes with tens of thousands of points and multiple active transforms, this will be orders of magnitude slower than a vectorised approach (e.g., `transform_tools.transform_pvcontour`, which is already used in `register_labelmap_to_image` for the same purpose).

If you would like, I can propose a vectorised replacement using the existing `TransformTools` helpers.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py` around lines 855 - 888, The per-point Python loop in transform_model is slow; replace it with a vectorised pipeline using the existing TransformTools helpers (e.g., TransformTools.transform_pvcontour) to apply ICP, PCA, m2m and m2i transforms in bulk: collect new_points into an Nx3 numpy array, call the appropriate TransformTools function(s) passing self.icp_forward_point_transform, self.pca_registrar (or self.pca_coefficients guard), self.m2m_inverse_transform and self.m2i_inverse_transform as needed (or apply transforms in vectorized order), then write the transformed Nx3 array back into new_points; preserve progress logging by processing in chunks if memory is a concern. Ensure you reference and remove the loop that used itk.Point and the individual TransformPoint/transform_point calls and instead call the vectorised TransformTools method(s) and update new_points in-place.

tests/test_tutorials.py (1)
202-216: 💤 Low value
Misplaced "Tutorial 4" section header between Tutorial 1 and Tutorial 2.
The section comment on lines 202–204 labels itself as Tutorial 4, but it sits between the Tutorial 1 class (line 176) and the Tutorial 2 class (line 227). The actual Tutorial 4 end-to-end class is at line 301. A reader scanning the file top-to-bottom will assume all Tutorial 4 material is grouped here.
Consider moving `test_tutorial_04_extract_surface_uses_dataset_surface` adjacent to the `TestTutorial04FitStatisticalModelToPatient` class, or renaming the section header to "Tutorial 4 – unit tests" to make the split intentional.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/test_tutorials.py` around lines 202 - 216, The "Tutorial 4" section header is misplaced between Tutorial 1 and Tutorial 2 causing confusion; relocate the test function test_tutorial_04_extract_surface_uses_dataset_surface so it sits next to the TestTutorial04FitStatisticalModelToPatient class (or alternatively change the header text to "Tutorial 4 – unit tests" to indicate separation), ensuring references like test_tutorial_04_extract_surface_uses_dataset_surface and TestTutorial04FitStatisticalModelToPatient are adjacent to keep all Tutorial 4 items grouped logically.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/release.yml:
- Around line 27-33: Replace mutable GitHub Action tags with immutable commit
SHAs for each referenced action to eliminate supply-chain risk: pin
actions/checkout@v4, actions/setup-python@v5, actions/upload-artifact@v4,
actions/download-artifact@v4, and pypa/gh-action-pypi-publish@release/v1 to
their corresponding commit SHAs (look up the latest verified commit SHA for each
action repository) and update the workflow entries that reference those strings
so the workflow uses the SHA-pinned refs instead of the `@v*` or `@release/*` tags.
- Around line 17-20: The workflow currently grants id-token: write at the
workflow level; change this by removing id-token: write from the top-level
permissions block (leaving contents: read) and instead add permissions:
id-token: write under each publishing job definition (publish-testpypi and
publish-pypi), while ensuring the build job retains only read-level permissions
(e.g., contents: read) — update the job blocks named publish-testpypi,
publish-pypi, and build accordingly.
In `@docs/cli_scripts/4dct_reconstruction.rst`:
- Around line 25-28: The example command using
physiomotion4d-reconstruct-highres-4d-ct passes a shell glob
(--time-series-images frame_*.mha) which expands on Unix but is passed literally
on Windows; update the "Basic Usage" example to use the explicit-list form for
--time-series-images (e.g., list files individually as in the "With Upsampling"
example) or add a short note that Windows users must expand globs (or use
PowerShell/Get-ChildItem) so the --time-series-images arg receives actual
filenames rather than a literal glob.
In `@tests/test_tutorials.py`:
- Around line 318-336: After calling run_tutorial_03 to bootstrap Tutorial 03,
add a guard assertion that pca_json.exists() and raise/assert with a clear
message if it does not; this ensures that the PCA model file produced by
run_tutorial_03 (reference run_tutorial_03 and the pca_json Path variable) was
created at the expected location before invoking
tutorials.tutorial_04_fit_statistical_model_to_patient.run_tutorial, preventing
downstream confusing errors in run_tutorial.
In `@tutorials/tutorial_03_create_statistical_model.py`:
- Around line 201-207: The loop currently iterates up to pca_components but can
index past returned PCA results; change the iteration bound to also consider the
lengths of eigenvalues and components (and handle None) so you only loop while
mode_idx < pca_components and mode_idx < len(eigenvalues) and mode_idx <
len(components); keep the existing None checks for components/eigenvalues and
compute sigma and ev only when the corresponding entries exist (references:
pca_components, components, eigenvalues, mode_idx, sigma, ev).
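The loop-bound fix in the last item above can be sketched as follows. The list shapes are hypothetical; the names `pca_components`, `eigenvalues`, `components`, `mode_idx`, `sigma`, and `ev` mirror the prompt, not the actual tutorial code:

```python
import math


def iter_pca_modes(pca_components, eigenvalues, components):
    """Yield (mode_idx, sigma, ev) only for modes that actually exist.

    Guards against indexing past the arrays returned by PCA, which may
    contain fewer modes than the requested pca_components.
    """
    if eigenvalues is None or components is None:
        return
    n_modes = min(pca_components, len(eigenvalues), len(components))
    for mode_idx in range(n_modes):
        ev = eigenvalues[mode_idx]
        sigma = math.sqrt(ev)  # standard deviation of this mode
        yield mode_idx, sigma, ev
```

The `min(...)` bound replaces the plain `range(pca_components)` so the loop can never read past either returned array.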
---
Outside diff comments:
In @.github/workflows/README.md:
- Around line 27-29: The README currently documents GPU CI runners as
Linux-labeled, but the CI workflow (ci.yml) targets Windows self-hosted GPU
runners; update the GPU runner bullets to match ci.yml by replacing the Linux
label with the Windows label (i.e., require self-hosted runners labeled
[self-hosted, windows, gpu]) and ensure the other metadata (PyTorch/CUDA version
and 30 minute timeout) remain accurate and consistent across all occurrences
referenced in the README.
In `@pyproject.toml`:
- Around line 149-153: Update the project metadata so the PyPI "Documentation"
URL points to the GitHub Pages site rather than the repo README: modify the
[project.urls] section and replace the value of the Documentation field with the
published GitHub Pages URL (e.g.,
"https://project-moni/physiomotion4d.github.io" or the correct pages path) so
the Documentation entry reflects the primary user-facing docs site.
In `@src/physiomotion4d/convert_vtk_to_usd.py`:
- Around line 645-663: The code currently writes single-frame labels with
mesh_converter.create_mesh, making those prims static; instead, treat one-frame
labels as time-sampled like in _convert_unified for unlabeled parts: when
len(label_mesh_sequence) == 1, set label_mesh_sequence[0].material_id =
material.name, call material_mgr.get_or_create_material(material), compute
label_time_codes (use self._time_codes[i] when self._time_codes is not None else
float(i) for each i in label_frame_indices) but for the single-frame case take
the single time code, and call
mesh_converter.create_time_varying_mesh(label_mesh_sequence, mesh_path,
label_time_codes, bind_material=True) so the prim is time-sampled rather than
static.
- Around line 675-697: The code in _vtk_to_mesh_data only calls extract_surface
for pv.UnstructuredGrid, but supports_mesh_type() advertises StructuredGrid,
ImageData, and RectilinearGrid as supported while convert_to_surface=True will
still fall through and trigger the "no faces" error; update _vtk_to_mesh_data so
that when convert_to_surface is True it treats any non-PolyData dataset (e.g.,
pv.StructuredGrid, pv.ImageData, pv.RectilinearGrid, pv.UnstructuredGrid) by
calling the appropriate surface extraction (extract_surface or extract_geometry
consistent with pyvista) before accessing points/faces, and only raise
ValueError if convert_to_surface is False and the mesh truly has no faces;
reference functions/flags: _vtk_to_mesh_data(), supports_mesh_type(),
convert_to_surface, and class checks for
UnstructuredGrid/StructuredGrid/ImageData/RectilinearGrid.
In `@src/physiomotion4d/register_models_pca.py`:
- Around line 359-365: The loop in register_models_pca.py is appending the same
mutable itk.Point instance (itk_point) for every vertex, so
self._pca_template_model_points_itk ends up containing repeated references to
the last vertex; fix by creating a new itk.Point[itk.D, 3]() inside the loop (or
otherwise cloning itk_point) for each iteration before setting its coordinates
and appending it, i.e. allocate a fresh itk_point per point from
self.pca_template_model.points so each appended element is a distinct object.
- Around line 474-480: The PCA displacement is being applied after the
pre_pca_transform so the deformation ends up in the template frame; change the
order to add pca_deformation to the point first and then call
self.pre_pca_transform.TransformPoint(point) so the transform is applied to the
deformed point. Update the same pattern when constructing registered_model (the
block that references pca_deformation and pre_pca_transform around where
registered_model is built) so both places apply deformation before calling
pre_pca_transform.TransformPoint.
- Around line 371-389: set_fixed_model currently accepts reference_image:
Optional[itk.Image] but immediately passes it to
_contour_tools.create_distance_map, causing obscure errors when None is given;
add an explicit None check at the top of set_fixed_model (function name:
set_fixed_model) and raise a clear ValueError (e.g. "reference_image must not be
None when setting a fixed model") so callers get an actionable message before
calling _contour_tools.create_distance_map; ensure the rest of the method
(setting self.fixed_distance_map) remains unchanged.
In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py`:
- Around line 123-131: The constructor's type annotation for template_model is
too narrow (pv.PolyData) and conflicts with the class docstring and usage;
update the __init__ signature to accept pv.DataSet for template_model (and any
related parameter annotations if present) and adjust the class-level Attributes
docstring to state pv.DataSet instead of pv.UnstructuredGrid/PolyData so type
hints and docs agree with runtime usage; reference the __init__ method and the
class docstring/Attributes block when making the changes.
- Around line 279-305: The loop currently reassigns mask_arr from
itk.GetArrayFromImage(mask) each iteration, discarding previous labels; fix by
initializing mask_arr as a zero array once before the loop and for each model
convert the model mask to an array (use itk.GetArrayFromImage on the return of
create_mask_from_mesh) and merge it into the cumulative mask_arr (e.g., set
mask_arr[cur_mask>0] = i+1 or use np.where to only write new label values) so
labels accumulate; keep using the same names (models, mask, mask_arr,
self.contour_tools.create_mask_from_mesh, itk.GetArrayFromImage,
itk.GetImageFromArray) and preserve dtype/copying of image information after the
loop.
In `@tests/conftest.py`:
- Around line 191-220: The regular test timing block is currently inside the
tutorial_tests conditional so per-test timings for regular tests (using
sorted_regular) only show when tutorial_tests is present; move or duplicate the
"Individual Test Times:" loop for regular tests (the block that iterates over
sorted_regular, computes outcome_symbol and duration_str via _format_duration,
and calls terminalreporter.write_line) out of the tutorial_tests if-block (or
render it unconditionally after handling tutorial tests) so regular tests are
always printed; update pytest_terminal_summary to reference sorted_regular in
the unconditional path.
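The conftest fix above — rendering regular-test timings unconditionally rather than inside the tutorial branch — could be factored roughly like this. `_format_duration` and the `(name, passed, duration)` tuples are simplified stand-ins for the real conftest helpers:

```python
def _format_duration(seconds):
    """Simplified stand-in for the conftest helper."""
    return f"{seconds:.2f}s"


def render_test_times(regular, tutorial):
    """Render timing lines for both groups; regular tests always print."""
    lines = []
    if tutorial:
        lines.append("Tutorial Test Times:")
        for name, passed, dur in sorted(tutorial, key=lambda r: r[2], reverse=True):
            lines.append(f"  {'✓' if passed else '✗'} {name} ({_format_duration(dur)})")
    # Moved out of the tutorial branch so it runs even when no tutorial
    # tests were collected.
    if regular:
        lines.append("Individual Test Times:")
        for name, passed, dur in sorted(regular, key=lambda r: r[2], reverse=True):
            lines.append(f"  {'✓' if passed else '✗'} {name} ({_format_duration(dur)})")
    return lines
```

In `pytest_terminal_summary` the returned lines would be emitted via `terminalreporter.write_line`.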
---
Nitpick comments:
In `@pyproject.toml`:
- Around line 218-229: The mypy override currently sets ignore_errors = true for
several modules (including physiomotion4d.register_models_pca,
physiomotion4d.workflow_create_statistical_model,
physiomotion4d.workflow_fit_statistical_model_to_patient) which suppresses all
diagnostics; replace this blanket suppression with targeted fixes: remove the
public API modules from the ignore list and instead either add a mypy override
using disable_error_code for only missing-stub errors (e.g., "import-not-found",
"import-untyped") for the remaining third-party-only modules (vtk_reader,
mesh_utils, CLI modules), or add inline # type: ignore[<specific-code>] comments
at the precise offending locations in functions/classes where third-party stubs
are missing (e.g., in create_statistical_model,
fit_statistical_model_to_patient, visualize_pca_modes) so structural type errors
in exported modules remain checked.
In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py`:
- Around line 584-599: The debug block that writes
pca_forward_point_transform_x/y/z.nii.gz when self.log_level == logging.DEBUG
should be made opt-in and location-controlled: add a boolean flag (e.g.
self.save_debug_outputs) and use an existing output directory attribute (e.g.
self.output_dir) or a configurable debug_output_dir to build full paths; update
the guard from checking only self.log_level to require both logging.DEBUG and
self.save_debug_outputs, and write the itk.imwrite files to
os.path.join(self.output_dir or self.debug_output_dir, filename) while keeping
the existing use of self.pca_forward_point_transform and self.patient_image for
CopyInformation.
- Around line 855-888: The per-point Python loop in transform_model is slow;
replace it with a vectorised pipeline using the existing TransformTools helpers
(e.g., TransformTools.transform_pvcontour) to apply ICP, PCA, m2m and m2i
transforms in bulk: collect new_points into an Nx3 numpy array, call the
appropriate TransformTools function(s) passing self.icp_forward_point_transform,
self.pca_registrar (or self.pca_coefficients guard), self.m2m_inverse_transform
and self.m2i_inverse_transform as needed (or apply transforms in vectorized
order), then write the transformed Nx3 array back into new_points; preserve
progress logging by processing in chunks if memory is a concern. Ensure you
reference and remove the loop that used itk.Point and the individual
TransformPoint/transform_point calls and instead call the vectorised
TransformTools method(s) and update new_points in-place.
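For a sense of the speedup the `transform_model` item above is after, here is a sketch of bulk point transformation with a plain 4×4 affine in NumPy. The matrix and points are made up, and the real code would delegate to the project's `TransformTools` helpers rather than a raw matmul:

```python
import numpy as np


def transform_points_vectorised(points: np.ndarray, affine: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous affine to an (N, 3) point array in one shot."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (homo @ affine.T)[:, :3]


def transform_points_loop(points, affine):
    """Per-point reference loop — the pattern the review flags as slow."""
    out = []
    for p in points:
        x, y, z, _ = affine @ np.array([p[0], p[1], p[2], 1.0])
        out.append([x, y, z])
    return np.array(out)
```

Both functions produce identical results, but the vectorised version runs the arithmetic in C for the whole array at once instead of constructing a Python object per vertex.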
In `@tests/test_tutorials.py`:
- Around line 202-216: The "Tutorial 4" section header is misplaced between
Tutorial 1 and Tutorial 2 causing confusion; relocate the test function
test_tutorial_04_extract_surface_uses_dataset_surface so it sits next to the
TestTutorial04FitStatisticalModelToPatient class (or alternatively change the
header text to "Tutorial 4 – unit tests" to indicate separation), ensuring
references like test_tutorial_04_extract_surface_uses_dataset_surface and
TestTutorial04FitStatisticalModelToPatient are adjacent to keep all Tutorial 4
items grouped logically.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 749a34ec-d7c6-43d0-950c-be919d9151d0
📒 Files selected for processing (63)
- .github/workflows/README.md
- .github/workflows/ci.yml
- .github/workflows/release.yml
- README.md
- docs/API_MAP.md
- docs/PYPI_RELEASE_GUIDE.md
- docs/_static/custom.css
- docs/api/base.rst
- docs/api/index.rst
- docs/api/model_registration/index.rst
- docs/api/registration/ants.rst
- docs/api/registration/icon.rst
- docs/api/registration/index.rst
- docs/api/registration/time_series.rst
- docs/api/segmentation/base.rst
- docs/api/segmentation/totalsegmentator.rst
- docs/api/workflows.rst
- docs/architecture.rst
- docs/changelog.rst
- docs/cli_scripts/4dct_reconstruction.rst
- docs/cli_scripts/best_practices.rst
- docs/cli_scripts/brain_vessel_modeling.rst
- docs/cli_scripts/fit_statistical_model_to_patient.rst
- docs/cli_scripts/lung_gated_ct.rst
- docs/cli_scripts/overview.rst
- docs/cli_scripts/vtk_to_usd.rst
- docs/conf.py
- docs/contributing.rst
- docs/developer/architecture.rst
- docs/developer/core.rst
- docs/developer/extending.rst
- docs/developer/registration_images.rst
- docs/developer/registration_models.rst
- docs/developer/segmentation.rst
- docs/developer/usd_generation.rst
- docs/developer/utilities.rst
- docs/developer/workflows.rst
- docs/examples.rst
- docs/index.rst
- docs/installation.rst
- docs/quickstart.rst
- docs/references.rst
- docs/testing.rst
- docs/troubleshooting.rst
- docs/tutorials.rst
- pyproject.toml
- src/physiomotion4d/__init__.py
- src/physiomotion4d/contour_tools.py
- src/physiomotion4d/convert_vtk_to_usd.py
- src/physiomotion4d/register_models_pca.py
- src/physiomotion4d/segment_anatomy_base.py
- src/physiomotion4d/usd_tools.py
- src/physiomotion4d/vtk_to_usd/usd_utils.py
- src/physiomotion4d/workflow_create_statistical_model.py
- src/physiomotion4d/workflow_fit_statistical_model_to_patient.py
- tests/EXPERIMENT_TESTS_GUIDE.md
- tests/conftest.py
- tests/test_cli_smoke.py
- tests/test_import_public_api.py
- tests/test_tutorials.py
- tutorials/README.md
- tutorials/tutorial_03_create_statistical_model.py
- tutorials/tutorial_04_fit_statistical_model_to_patient.py
💤 Files with no reviewable changes (1)
- docs/api/index.rst
```shell
physiomotion4d-reconstruct-highres-4d-ct \
    --time-series-images frame_*.mha \
    --fixed-image highres_reference.mha \
    --output-dir ./results
```
Shell glob in "Basic Usage" will silently fail on Windows.
The `frame_*.mha` glob on line 26 is expanded by the shell on Linux/macOS but is passed literally on Windows cmd/PowerShell, causing a "no such file" error. The explicit-list form shown in the "With Upsampling" example below is already Windows-safe — consider using it (or a brief note) in the basic example too.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@docs/cli_scripts/4dct_reconstruction.rst` around lines 25 - 28, The example
command using physiomotion4d-reconstruct-highres-4d-ct passes a shell glob
(--time-series-images frame_*.mha) which expands on Unix but is passed literally
on Windows; update the "Basic Usage" example to use the explicit-list form for
--time-series-images (e.g., list files individually as in the "With Upsampling"
example) or add a short note that Windows users must expand globs (or use
PowerShell/Get-ChildItem) so the --time-series-images arg receives actual
filenames rather than a literal glob.
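The portability difference can be seen with throwaway files; the frame names below are illustrative only, and the CLI itself is not invoked:

```shell
# Create some hypothetical frame files in a scratch directory.
mkdir -p glob_demo && cd glob_demo
touch frame_00.mha frame_01.mha frame_02.mha

# On Linux/macOS the shell expands the glob before the program sees it:
echo frame_*.mha
# → frame_00.mha frame_01.mha frame_02.mha

# Windows cmd/PowerShell would pass the literal string "frame_*.mha",
# so the Windows-safe form lists the files explicitly:
echo frame_00.mha frame_01.mha frame_02.mha
```

The same explicit-list form is what the "With Upsampling" docs example already uses.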
Rename the PR review workflow and agent configuration from Claude-specific paths to shared AI-agent paths. Make Codex the default review agent while retaining Claude as an explicit option, and refresh the contributor docs and API map to match the new .agents layout.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py (2)
45-45: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Use `Union` or separate type hints instead of `|` syntax.
Line 45 uses `int | str` syntax, but the coding guidelines require full type hints compatible with `mypy` strict mode and prefer `Optional[X]` over `X | None`. For union types, use `Union[int, str]` from the `typing` module.

Fix type hint syntax
```diff
+from typing import Optional, Union
+
 class WorkflowConvertHeartGatedCTToUSD(PhysioMotion4DBase):
     def __init__(
         self,
         ...
-        log_level: int | str = logging.INFO,
+        log_level: Union[int, str] = logging.INFO,
     ):
```

As per coding guidelines: "Use full type hints with `mypy` strict mode; use `Optional[X]` instead of `X | None`"

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py` at line 45, Replace the use of PEP 604 pipe union syntax for the log_level parameter with typing.Union to satisfy mypy strict mode: add "from typing import Union" to the imports and change the annotation "log_level: int | str" to "log_level: Union[int, str]" (referencing the log_level parameter in the function definition in workflow_convert_heart_gated_ct_to_usd.py).
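A minimal, runnable before/after of the annotation style the guideline asks for. The class name and parameters echo the review comment; the body is a stub, not the project's implementation:

```python
import logging
from typing import Optional, Union


class WorkflowSketch:
    """Illustrates Union[...]/Optional[...] annotations per the guideline."""

    def __init__(
        self,
        log_level: Union[int, str] = logging.INFO,  # instead of: int | str
        output_dir: Optional[str] = None,           # instead of: str | None
    ) -> None:
        # logging accepts either a numeric level or a registered level name.
        if isinstance(log_level, str):
            self.log_level = logging.getLevelName(log_level)
        else:
            self.log_level = log_level
        self.output_dir = output_dir or "."
```

Both spellings are equivalent at runtime on Python 3.10+; `Union`/`Optional` simply keep the annotations parseable under the project's stated mypy configuration and older interpreter targets.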
134-134: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Update docstring to reflect new output filename pattern.
The docstring states "Path to the final dynamic anatomy USD file" but the actual return value (line 154) is now
`"{project_name}.dynamic_painted.usd"` (simplified from `dynamic_anatomy_painted.usd`). The docstring should be updated for accuracy.

Update docstring
```diff
 def process(self) -> str:
     """
     Execute the complete workflow from 4D CT to dynamic USD models.

     Returns:
-        str: Path to the final dynamic anatomy USD file
+        str: Path to the final dynamic painted USD file
     """
```

As per coding guidelines: "Update docstrings for every changed public method; keep claims factual"
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py` at line 134, Update the docstring for the public function convert_heart_gated_ct_to_usd in src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py to accurately describe the returned filename pattern: replace the current "Path to the final dynamic anatomy USD file" text with a statement that the function returns the path to the final dynamic painted anatomy USD file using the "{project_name}.dynamic_painted.usd" pattern (matching the actual return expression that constructs "{project_name}.dynamic_painted.usd").
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@README.md`:
- Line 660: The README table row for "Codex CLI" is vague—replace "Install from
the current Codex CLI distribution" with a concrete install command consistent
with other tool entries; update the table cell next to "Codex CLI" (the row
referencing ".agents/" and "ai_agent_github_reviews.py") to show the actual
install command or commands users should run (for example a pip or npm install
line or a link to the distribution download), so readers have an actionable
instruction rather than ambiguous text.
In `@src/physiomotion4d/cli/visualize_pca_modes.py`:
- Around line 40-47: The function _generate_pc_variation currently uses Any
annotations; replace them with specific string-literal type hints to preserve
type safety and lazy imports (e.g., annotate mean_mesh as "Mesh" (or the
project's mesh class name), mean_shape as "Shape" or "np.ndarray", components as
"Sequence[np.ndarray]" or "list[\"np.ndarray\"]", and eigenvalues as
"np.ndarray" or "Sequence[float]"), keep pc_index as int and std_dev_multiplier
as float; update the return annotation from tuple[Any, Any, Any] to an explicit
tuple like tuple["Mesh", "np.ndarray", "np.ndarray"] (matching the actual return
types) so mypy strict mode can validate without forcing eager imports.
- Around line 20-29: Replace the weak Any annotations in _shape_at_sigma with
precise string-literal types to preserve mypy strict-mode checks while keeping
lazy imports: annotate mean_shape as "pv.PolyData", components as
list["np.ndarray"] (or "np.ndarray" if it's a single array), eigenvalues as
"np.ndarray" (or list["float"] per actual shape), and the return type as
"pv.PolyData"; add a TYPE_CHECKING guard and import pv and np only inside if
TYPE_CHECKING to avoid runtime imports while enabling static type checking for
pv.PolyData and np.ndarray.
In `@tutorials/tutorial_01_heart_gated_ct_to_usd.py`:
- Around line 102-103: Replace PEP 604 union syntax with typing.Union/Optional:
change the __init__ signature parameter type `int | str` to `Union[int, str]`
(in the __init__ method) and replace any `Path | None` occurrence (e.g., the
parameter at line ~203) with `Optional[Path]`; also add the missing typing
imports by including `from typing import Any, Optional, Union` near the top
(since `Union`/`Optional` are not yet imported) so mypy-compatible full type
hints are used.
In `@tutorials/tutorial_02_ct_to_vtk.py`:
- Around line 94-95: Replace the PEP 604 usage "int | str" with typing.Union in
the two private __init__ signatures (the __init__ methods shown in the diff and
the other occurrence around line 110) and add "from typing import Union" to the
module imports; specifically change the parameter type annotations to
"Union[int, str]" in both __init__ definitions so mypy-compatible full type
hints are used.
In `@utils/ai_agent_github_reviews.py`:
- Line 255: Replace the PEP 604 union usage in the cast call by using
typing.Union: change cast(list | dict, json.loads(raw)) to cast(Union[list,
dict], json.loads(raw)) and add Union to the existing typing imports (the same
import group that already imports cast) so the type hint conforms to project
style; ensure the change targets the cast(...) expression that wraps
json.loads(raw).
- Around line 733-740: The subprocess invocation in invoke_codex should pass
non-interactive approval flags: update the subprocess.run call inside
invoke_codex to include modern Codex CLI approval flags (replace any deprecated
--full-auto usage with --sandbox workspace-write and any other needed
non-interactive flags) so the call won’t block for interactive prompts, and also
update the fallback message referenced around the existing fallback text
(previously on line ~766) to mention the new --sandbox workspace-write flag
instead of --full-auto; additionally, change the PEP 604 union usage in the cast
at the earlier location (around line ~255) from cast(list | dict, ...) to use
typing.Union[list, dict] (or Optional[...] as appropriate) to comply with the
style guideline.
---
Outside diff comments:
In `@src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py`:
- Line 45: Replace the use of PEP 604 pipe union syntax for the log_level
parameter with typing.Union to satisfy mypy strict mode: add "from typing import
Union" to the imports and change the annotation "log_level: int | str" to
"log_level: Union[int, str]" (referencing the log_level parameter in the
function definition in workflow_convert_heart_gated_ct_to_usd.py).
- Line 134: Update the docstring for the public function
convert_heart_gated_ct_to_usd in
src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py to accurately
describe the returned filename pattern: replace the current "Path to the final
dynamic anatomy USD file" text with a statement that the function returns the
path to the final dynamic painted anatomy USD file using the
"{project_name}.dynamic_painted.usd" pattern (matching the actual return
expression that constructs "{project_name}.dynamic_painted.usd").
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d6a9f5a2-9e99-41da-9381-6f725503a059
⛔ Files ignored due to path filters (5)
- `tests/baselines/tutorial_01_heart_gated_ct_to_usd/contours_3d.png` is excluded by `!**/*.png`
- `tests/baselines/tutorial_01_heart_gated_ct_to_usd/reference_frame_axial.png` is excluded by `!**/*.png`
- `tests/baselines/tutorial_01_heart_gated_ct_to_usd/segmentation_overlay.png` is excluded by `!**/*.png`
- `tests/baselines/tutorial_02_ct_to_vtk/segmentation_overlay.png` is excluded by `!**/*.png`
- `tests/baselines/tutorial_02_ct_to_vtk/vtk_surfaces.png` is excluded by `!**/*.png`
📒 Files selected for processing (28)
- .agents/.gitignore
- .agents/agents/architecture.md
- .agents/agents/docs.md
- .agents/agents/implementation.md
- .agents/agents/testing.md
- .agents/settings.json
- .agents/skills/commit/SKILL.md
- .agents/skills/doc-feature/SKILL.md
- .agents/skills/impl/SKILL.md
- .agents/skills/plan/SKILL.md
- .agents/skills/simplify-staged/SKILL.md
- .agents/skills/test-feature/SKILL.md
- AGENTS.md
- CLAUDE.md
- README.md
- docs/API_MAP.md
- src/physiomotion4d/cli/convert_ct_to_vtk.py
- src/physiomotion4d/cli/convert_heart_gated_ct_to_usd.py
- src/physiomotion4d/cli/convert_vtk_to_usd.py
- src/physiomotion4d/cli/create_statistical_model.py
- src/physiomotion4d/cli/fit_statistical_model_to_patient.py
- src/physiomotion4d/cli/reconstruct_highres_4d_ct.py
- src/physiomotion4d/cli/visualize_pca_modes.py
- src/physiomotion4d/workflow_convert_heart_gated_ct_to_usd.py
- tests/conftest.py
- tutorials/tutorial_01_heart_gated_ct_to_usd.py
- tutorials/tutorial_02_ct_to_vtk.py
- utils/ai_agent_github_reviews.py
✅ Files skipped from review due to trivial changes (2)
- CLAUDE.md
- src/physiomotion4d/cli/reconstruct_highres_4d_ct.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/conftest.py
| Tool | Used by | Install |
|------|---------|---------|
| [Claude Code](https://claude.ai/code) | All `/plan`, `/impl`, `/test-feature`, `/doc-feature` skills and `claude_github_reviews.py` | `winget install Anthropic.ClaudeCode` |
| [gh CLI](https://cli.github.com) | `claude_github_reviews.py` | `winget install GitHub.cli`, then `gh auth login` |
| Codex CLI | Default agent for `.agents/` skills and `ai_agent_github_reviews.py` | Install from the current Codex CLI distribution |
Codex CLI install instruction is under-specified compared to the other tool entries
The gh and Claude Code entries each give a concrete install command, while the Codex CLI row only says "Install from the current Codex CLI distribution" — which gives the reader nothing actionable.
✏️ Suggested wording
```diff
-| Codex CLI | Default agent for `.agents/` skills and `ai_agent_github_reviews.py` | Install from the current Codex CLI distribution |
+| Codex CLI | Default agent for `.agents/` skills and `ai_agent_github_reviews.py` | `npm install -g @openai/codex` (Node 18+) |
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-| Codex CLI | Default agent for `.agents/` skills and `ai_agent_github_reviews.py` | Install from the current Codex CLI distribution |
+| Codex CLI | Default agent for `.agents/` skills and `ai_agent_github_reviews.py` | `npm install -g @openai/codex` (Node 18+) |
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~660-~660: The official name of this software platform is spelled with a capital “H”.
Context: ...Default agent for .agents/ skills and ai_agent_github_reviews.py | Install from the current ...
(GITHUB)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@README.md` at line 660, The README table row for "Codex CLI" is vague—replace
"Install from the current Codex CLI distribution" with a concrete install
command consistent with other tool entries; update the table cell next to "Codex
CLI" (the row referencing ".agents/" and "ai_agent_github_reviews.py") to show
the actual install command or commands users should run (for example a pip or
npm install line or a link to the distribution download), so readers have an
actionable instruction rather than ambiguous text.
```python
def __init__(self, labelmap: Any, log_level: int | str = logging.INFO):
    super().__init__(log_level=log_level)
```
Replace PEP 604 union syntax with Optional / Union from typing

Two changed lines use PEP 604 `|` syntax, which the project explicitly disables (ruff UP007 is suppressed, and the rejection triggers in the agent prompt call it out as a hard rule):

- Line 102: `int | str` → `Union[int, str]`
- Line 203: `Path | None` → `Optional[Path]` (a direct `X | None` violation)

`Union` is not yet imported; line 88 only has `from typing import Any, Optional`.

🛠️ Proposed fix

```diff
-from typing import Any, Optional
+from typing import Any, Optional, Union

-    def __init__(self, labelmap: Any, log_level: int | str = logging.INFO):
+    def __init__(self, labelmap: Any, log_level: Union[int, str] = logging.INFO):

-    cached_labelmap: Path | None = None
+    cached_labelmap: Optional[Path] = None
```

As per coding guidelines: "Use Optional[X] not X | None" and "Use full type hints with mypy strict mode".

Also applies to: 203-203
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tutorials/tutorial_01_heart_gated_ct_to_usd.py` around lines 102 - 103,
Replace PEP 604 union syntax with typing.Union/Optional: change the __init__
signature parameter type `int | str` to `Union[int, str]` (in the __init__
method) and replace any `Path | None` occurrence (e.g., the parameter at line
~203) with `Optional[Path]`; also add the missing typing imports by including
`from typing import Any, Optional, Union` near the top (since `Union`/`Optional`
are not yet imported) so mypy-compatible full type hints are used.
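The annotation style the guideline asks for can be seen end to end in a small sketch; `LabelmapStep` and `cached_labelmap` below are illustrative stand-ins, not the tutorial's real names:

```python
import logging
from pathlib import Path
from typing import Any, Optional, Union


class LabelmapStep:
    """Illustrative stand-in for the tutorial's private step class."""

    # Union[int, str] instead of the PEP 604 spelling `int | str`
    def __init__(self, labelmap: Any, log_level: Union[int, str] = logging.INFO):
        self.labelmap = labelmap
        self.log_level = log_level


# Optional[Path] instead of `Path | None`
cached_labelmap: Optional[Path] = None

step = LabelmapStep(labelmap=None, log_level="DEBUG")
print(step.log_level)
```

Both spellings are equivalent at runtime on modern Python; the `Union`/`Optional` form is simply the one this project's lint configuration enforces.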
```python
def __init__(self, labelmap: Any, log_level: int | str = logging.INFO):
    super().__init__(log_level=log_level)
```
Replace int | str with Union[int, str] in both private class __init__ signatures

Same PEP 604 violation as in tutorial_01: lines 94 and 110 both use `int | str` instead of `Union[int, str]`. `Union` is not imported in this file.

🛠️ Proposed fix

```diff
-import argparse
-import logging
-from pathlib import Path
-from typing import Any
+import argparse
+import logging
+from pathlib import Path
+from typing import Any, Union

-    def __init__(self, labelmap: Any, log_level: int | str = logging.INFO):
+    def __init__(self, labelmap: Any, log_level: Union[int, str] = logging.INFO):

-    log_level: int | str = logging.INFO,
+    log_level: Union[int, str] = logging.INFO,
```

As per coding guidelines: "Use full type hints with mypy strict mode; use Optional[X] not X | None".

Also applies to: 110-110
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tutorials/tutorial_02_ct_to_vtk.py` around lines 94 - 95, Replace the PEP 604
usage "int | str" with typing.Union in the two private __init__ signatures (the
__init__ methods shown in the diff and the other occurrence around line 110) and
add "from typing import Union" to the module imports; specifically change the
parameter type annotations to "Union[int, str]" in both __init__ definitions so
mypy-compatible full type hints are used.
- Pin release workflow actions and scope PyPI OIDC permissions to publish jobs
- Replace broad mypy suppressions with targeted import-stub handling
- Tighten typing across PCA, workflow, VTK-to-USD, and CLI code paths
- Fix PCA registration and workflow model transform edge cases
- Improve CLI smoke, public API, tutorial, and VTK-to-USD regression tests
- Add Tutorial 5 screenshot baseline and workflow/PCA regression coverage
- Update tutorial rendering, grouping, and timing behavior
- Update AI review helper to invoke Codex non-interactively with workspace-write sandboxing
```python
if self.pca_coefficients is not None:
    assert self.pca_registrar is not None, "PCA registrar must be set"
    if self.pca_registrar.pre_pca_transform is not None:
        transform_steps.append(
            ("PCA pre-transform", self.pca_registrar.pre_pca_transform)
```
```python
"cupy",
"cupyx",
"pxr",
"scipy",
"matplotlib",
"ants",
```
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/physiomotion4d/workflow_create_statistical_model.py (1)
146-157: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

The new `pv.DataSet` path still goes through `PolyData`-only code. When `solve_for_surface_pca=False`, `moving` and `self.reference_model` can be `pv.UnstructuredGrid`, but lines 157, 188-189, and 220-223 only `cast()` them to `pv.PolyData` before calling surface-oriented helpers. `cast()` does not convert the object at runtime, so the advertised full-mesh path still relies on surface-only behavior and can break or strip non-surface topology. This needs a dataset-aware transform/registration path before widening these attributes to `pv.DataSet`.

Based on learnings: "Flag changes at the ITK↔PyVista boundary or the RAS→Y-up coordinate transform as high-risk."
Also applies to: 188-189, 220-223
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/physiomotion4d/workflow_create_statistical_model.py` around lines 146 - 157, The code incorrectly uses cast(...) to pv.PolyData for datasets that may be pv.UnstructuredGrid (moving and self.reference_model), which doesn't convert them at runtime and can strip topology; update RegisterModelsICP/contour transform usage to be dataset-aware: before calling _extract_surface, RegisterModelsICP.register, or contour_tools.transform_contours, branch on the actual pyvista type (e.g., isinstance(moving, pv.UnstructuredGrid) or pv.DataSet) and either (a) explicitly extract a surface when a surface-based algorithm is required using pyvista's extract_surface()/triangulate() (and preserve a mapping back to cell/point ids) or (b) add/call a dataset-aware registration/transform routine that operates on pv.UnstructuredGrid (implement a new path or overload in RegisterModelsICP and contour_tools like register_dataset / transform_contours_dataset); apply the same handling for self.reference_model and anywhere you currently do cast(pv.PolyData, ...) (notably the uses in register call and the contour_tools.transform_contours call) so runtime types are respected and non-surface topology is not lost.
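The runtime-type branching the review asks for can be sketched without pyvista installed; `DataSet`, `PolyData`, and `UnstructuredGrid` below are minimal stand-ins for the pyvista classes (real code would import pyvista and rely on its `extract_surface()`):

```python
class DataSet:
    """Stand-in for pv.DataSet."""


class PolyData(DataSet):
    """Stand-in for pv.PolyData; already a surface."""

    def extract_surface(self) -> "PolyData":
        return self


class UnstructuredGrid(DataSet):
    """Stand-in for pv.UnstructuredGrid; pyvista's extract_surface() returns PolyData."""

    def extract_surface(self) -> PolyData:
        return PolyData()


def surface_for_registration(mesh: DataSet) -> PolyData:
    """Route to a surface-based algorithm by inspecting the runtime type,
    instead of a typing-only cast() that never converts the object."""
    if isinstance(mesh, PolyData):
        return mesh
    if isinstance(mesh, UnstructuredGrid):
        return mesh.extract_surface()
    raise TypeError(f"unsupported dataset type: {type(mesh).__name__}")


surface = surface_for_registration(UnstructuredGrid())
print(type(surface).__name__)
```

The key point is that `typing.cast()` is a no-op at runtime, so only an explicit `isinstance` branch (or a dataset-aware overload) actually guarantees the surface algorithm receives a surface.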
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.github/workflows/README.md:
- Around line 27-29: The README currently mixes Windows runner labels
`[self-hosted, Windows, X64, gpu]` with Linux/Ubuntu CUDA install instructions
(apt-get); pick one platform and make the doc consistent: either change the
runner labels to Linux (e.g., `[self-hosted, Linux, X64, gpu]`) and keep the
Ubuntu/apt-get CUDA steps, or keep the Windows labels and replace the
Ubuntu/apt-get install steps with Windows-specific CUDA/CuDNN installation and
any Windows runner notes; update the OS references in the GPU runner summary and
the install sections that mention "Ubuntu"/"apt-get"/"Linux" to match the chosen
platform so labels, install commands, and OS text are aligned.
In `@pyproject.toml`:
- Around line 204-206: Remove the global ignore_missing_imports = true and
restrict that setting to per-module overrides for external third-party packages
only (e.g., create overrides under tool.mypy.overrides for modules like "itk.*",
"pyvista.*", "pxr.*", "vtk.*", "simpleware.*" with ignore_missing_imports =
true); also remove ignore_missing_imports from the existing overrides that
target your own packages (physiomotion4d.* and tests.*) so they no longer
suppress unresolved-import errors while keeping any disable_error_code entries
as needed for your codebase.
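The per-module override shape the prompt describes looks like this in `pyproject.toml`; the module globs are illustrative and should match the project's actual third-party imports:

```toml
[tool.mypy]
strict = true
# no global ignore_missing_imports here

[[tool.mypy.overrides]]
module = ["itk.*", "pyvista.*", "pxr.*", "vtk.*"]
ignore_missing_imports = true
```

This keeps unresolved imports in first-party code (`physiomotion4d.*`, `tests.*`) as hard errors while silencing only stub-less external packages.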
In `@src/physiomotion4d/workflow_fit_statistical_model_to_patient.py`:
- Around line 487-492: The code is incorrectly casting pv.DataSet inputs to
pv.PolyData and calling contour-only transforms (transform_contours /
transform_pvcontour), which can strip mesh topology and cell data; update places
referencing transform_contours/transform_pvcontour (e.g., where
icp_template_model, icp_forward_point_transform, and public transform_model are
used) to first inspect the actual pyvista type (isinstance check for pv.PolyData
vs pv.UnstructuredGrid/DataSet) and route to the appropriate transformer: call
the contour-specific method only for genuine PolyData, and for
UnstructuredGrid/DataSet call a mesh-preserving transform (add or reuse a
transform_mesh/transform_dataset function that applies the point transform to
points while preserving cells/cell_data and point_data). Ensure the ICP/PCA
intermediate creation (icp_template_model, PCA transforms) and the public
transform_model API use this branching logic so non-PolyData inputs are not
lossily cast.
In `@tests/test_workflow_fit_statistical_model_to_patient.py`:
- Around line 16-19: Update the docstring for
test_auto_generate_mask_accumulates_multilabel_models to include the image shape
and axis order per guidelines; specifically edit the triple-quoted string of the
function test_auto_generate_mask_accumulates_multilabel_models to append a
concise statement like "Image shape (X, Y, Z, T) and axis order [e.g. (Z, Y, X)
or channel-last]" (replace X,Y,Z,T with the actual dimensions used in the test)
so the docstring clearly documents the image dimensions and axis ordering.
In `@utils/ai_agent_github_reviews.py`:
- Around line 589-592: The prompt uses hard-coded references to CLAUDE.md and
AGENTS.md regardless of the chosen agent, which causes incorrect guidance when
the agent is not Claude. Fix this by passing the selected 'agent' parameter into
the build_prompt() function and dynamically select the tool-specific file (e.g.,
CLAUDE.md or CODEX.md) based on that agent value instead of always using
Claude-specific references. Apply this change consistently in both the prompt
construction sections around lines 589-592 and 612-623.
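One way to implement the agent-aware selection; only `CLAUDE.md` and `AGENTS.md` are named in this PR, and the claude/codex mapping itself is an assumption:

```python
# Guidance files named in this PR; the agent-to-file mapping is an assumption.
GUIDANCE_FILES = {
    "claude": "CLAUDE.md",
    "codex": "AGENTS.md",
}


def build_prompt(agent: str, review_body: str) -> str:
    """Hypothetical build_prompt() that takes the selected agent explicitly."""
    guidance = GUIDANCE_FILES.get(agent, "AGENTS.md")
    return f"Follow the rules in {guidance} while addressing:\n{review_body}"


print(build_prompt("codex", "fix typing").splitlines()[0])
```

Passing `agent` through `build_prompt()` keeps the guidance file choice in one place, so both prompt-construction sections stay consistent.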
---
Outside diff comments:
In `@src/physiomotion4d/workflow_create_statistical_model.py`:
- Around line 146-157: The code incorrectly uses cast(...) to pv.PolyData for
datasets that may be pv.UnstructuredGrid (moving and self.reference_model),
which doesn't convert them at runtime and can strip topology; update
RegisterModelsICP/contour transform usage to be dataset-aware: before calling
_extract_surface, RegisterModelsICP.register, or
contour_tools.transform_contours, branch on the actual pyvista type (e.g.,
isinstance(moving, pv.UnstructuredGrid) or pv.DataSet) and either (a) explicitly
extract a surface when a surface-based algorithm is required using pyvista's
extract_surface()/triangulate() (and preserve a mapping back to cell/point ids)
or (b) add/call a dataset-aware registration/transform routine that operates on
pv.UnstructuredGrid (implement a new path or overload in RegisterModelsICP and
contour_tools like register_dataset / transform_contours_dataset); apply the
same handling for self.reference_model and anywhere you currently do
cast(pv.PolyData, ...) (notably the uses in register call and the
contour_tools.transform_contours call) so runtime types are respected and
non-surface topology is not lost.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ba0cd0ef-e97c-469d-9c17-ce6095b737b2
⛔ Files ignored due to path filters (1)
- `tests/baselines/tutorial_05_vtk_to_usd/usd_mesh_rendering.png` is excluded by `!**/*.png`
📒 Files selected for processing (22)
- `.github/workflows/README.md`
- `.github/workflows/release.yml`
- `docs/API_MAP.md`
- `pyproject.toml`
- `src/physiomotion4d/cli/create_statistical_model.py`
- `src/physiomotion4d/cli/fit_statistical_model_to_patient.py`
- `src/physiomotion4d/cli/visualize_pca_modes.py`
- `src/physiomotion4d/convert_vtk_to_usd.py`
- `src/physiomotion4d/register_models_pca.py`
- `src/physiomotion4d/vtk_to_usd/mesh_utils.py`
- `src/physiomotion4d/vtk_to_usd/vtk_reader.py`
- `src/physiomotion4d/workflow_create_statistical_model.py`
- `src/physiomotion4d/workflow_fit_statistical_model_to_patient.py`
- `tests/conftest.py`
- `tests/test_cli_smoke.py`
- `tests/test_convert_vtk_to_usd.py`
- `tests/test_import_public_api.py`
- `tests/test_register_models_pca.py`
- `tests/test_tutorials.py`
- `tests/test_workflow_fit_statistical_model_to_patient.py`
- `tutorials/tutorial_03_create_statistical_model.py`
- `utils/ai_agent_github_reviews.py`
✅ Files skipped from review due to trivial changes (1)
- src/physiomotion4d/vtk_to_usd/mesh_utils.py
🚧 Files skipped from review as they are similar to previous changes (10)
- tutorials/tutorial_03_create_statistical_model.py
- .github/workflows/release.yml
- tests/test_import_public_api.py
- src/physiomotion4d/cli/create_statistical_model.py
- src/physiomotion4d/cli/visualize_pca_modes.py
- src/physiomotion4d/cli/fit_statistical_model_to_patient.py
- tests/conftest.py
- tests/test_tutorials.py
- src/physiomotion4d/convert_vtk_to_usd.py
- src/physiomotion4d/register_models_pca.py
```python
def test_auto_generate_mask_accumulates_multilabel_models(
    monkeypatch: Any,
) -> None:
    """Multi-model masks accumulate label IDs instead of overwriting prior labels."""
```
Docstring missing image shape/axis order — coding guideline violation.
The docstring states the test intent but omits the image dimensions and axis ordering. Per coding guidelines, every test docstring must state the image shape and axis order.
✏️ Proposed fix

```diff
-    """Multi-model masks accumulate label IDs instead of overwriting prior labels."""
+    """Multi-model masks accumulate label IDs instead of overwriting prior labels.
+
+    Image shape (Z, Y, X) = (3, 3, 3).
+    """
```

As per coding guidelines: "State image shape and axis order in every test docstring: e.g. shape (X, Y, Z, T)."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tests/test_workflow_fit_statistical_model_to_patient.py` around lines 16 -
19, Update the docstring for
test_auto_generate_mask_accumulates_multilabel_models to include the image shape
and axis order per guidelines; specifically edit the triple-quoted string of the
function test_auto_generate_mask_accumulates_multilabel_models to append a
concise statement like "Image shape (X, Y, Z, T) and axis order [e.g. (Z, Y, X)
or channel-last]" (replace X,Y,Z,T with the actual dimensions used in the test)
so the docstring clearly documents the image dimensions and axis ordering.
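A self-contained illustration of the docstring rule, using plain nested lists in place of the test's real arrays (the shape and label values here are hypothetical):

```python
def test_mask_accumulates_labels() -> None:
    """Masks accumulate label IDs instead of overwriting prior labels.

    Image shape (Z, Y, X) = (2, 2, 2); voxel values are integer label IDs.
    """
    mask = [[[0, 0], [0, 0]], [[0, 0], [0, 0]]]
    mask[0][0][0] = 1  # first model writes label 1
    mask[1][1][1] = 2  # second model adds label 2 without clearing label 1
    assert mask[0][0][0] == 1
    assert mask[1][1][1] == 2


test_mask_accumulates_labels()
print("ok")
```

Stating the shape and axis order up front lets a reader verify index arithmetic in the test body without re-deriving the array layout.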
Rename PCA alignment handling from pre_pca_transform to post_pca_transform and apply it after PCA deformation throughout the registrar, workflow transform composition, tests, docs, and experiments. Also address related CI/review issues:

- preserve PyVista dataset topology when applying model transforms
- avoid eager TubeTK import/resampling paths that can crash CI
- tighten mypy missing-import suppression to third-party modules
- add SciPy as an explicit runtime dependency
- fix PCA visualization callback typing
- update Windows self-hosted GPU runner docs
- make GitHub review prompts use agent-specific guidance files

Regenerate docs/API_MAP.md and add focused regression coverage for PCA transform ordering and topology-preserving dataset transforms.
Pull request overview
Copilot reviewed 87 out of 104 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (1)
tutorials/tutorial_04_fit_statistical_model_to_patient.py:158

`--pca-json` defaults to a concrete path, but if the file is missing the tutorial silently skips PCA registration (`if pca_json is not None and pca_json.exists()`). This can produce a successful run with different behavior than intended and make failures harder to diagnose. Consider either (a) making the default `None` and only enabling PCA when the user opts in, or (b) raising a clear `FileNotFoundError` when `pca_json` is provided but does not exist.
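Option (b) could look like the sketch below; `resolve_pca_json` is a hypothetical helper, not the tutorial's actual code:

```python
from pathlib import Path
from typing import Optional


def resolve_pca_json(pca_json: Optional[Path]) -> Optional[Path]:
    """Return the PCA JSON path, failing loudly if a provided path is missing."""
    if pca_json is None:
        return None  # PCA registration explicitly disabled by the user
    if not pca_json.exists():
        raise FileNotFoundError(f"--pca-json file not found: {pca_json}")
    return pca_json


try:
    resolve_pca_json(Path("missing_pca.json"))
except FileNotFoundError as exc:
    print(f"raised: {exc}")
```

This separates "PCA intentionally off" (`None`) from "PCA requested but misconfigured" (missing file), so a silent skip can no longer masquerade as a successful run.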
```python
elif self.convert_to_surface:
    face_counts = np.array([], dtype=np.int32)
    face_indices = np.array([], dtype=np.int32)
```
Refresh release-facing documentation and validation for the 2026.05.07 beta.
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores