Conversation
Force-pushed from d6829d8 to 8d97d4a
Deploying infrahub-sdk-python with Cloudflare Pages

| Latest commit: | 4fd9d59 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://3a422a08.infrahub-sdk-python.pages.dev |
| Branch Preview URL: | https://pog-metadata-tests.infrahub-sdk-python.pages.dev |
Walkthrough

The PR adds integration tests for node and relationship metadata to the async and sync test suites.

🚥 Pre-merge checks: ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Codecov Report

✅ All modified and coverable lines are covered by tests.
```diff
@@           Coverage Diff            @@
##           stable     #861      +/- ##
========================================
- Coverage   80.29%   72.38%    -7.92%
========================================
  Files         116      116
  Lines        9964     9964
  Branches     1520     1520
========================================
- Hits         8001     7212     -789
- Misses       1442     2246     +804
+ Partials      521      506      -15
```
Flags with carried forward coverage won't be shown.
Force-pushed from 2295da9 to a6a2332
Actionable comments posted: 2

🧹 Nitpick comments (1)

tests/integration/test_infrahub_client_sync.py (1)

Lines 334-400: Please add the sync `all(include_metadata=True)` case as well. The async suite now exercises metadata deserialization on `client.all(...)`, but this file still only covers `get(...)`. That leaves the sync list path untested even though it is a separate client surface.

Based on learnings: "Always provide both async and sync versions for client implementations (InfrahubClient for async, InfrahubClientSync for sync)"

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tests/integration/test_infrahub_client_sync.py` around lines 334-400: add a new sync test mirroring the async suite that calls client_sync.all(include_metadata=True) and verifies metadata deserialization for nodes and attributes; locate the test file's existing get-based tests (e.g., test_node_metadata_with_get, test_attribute_metadata, test_relationship_metadata_cardinality_one/many) and add a sibling test (e.g., test_node_metadata_with_all_sync) that calls client_sync.all(kind=TESTING_CAT, include_metadata=True) (and a similar call for TESTING_PERSON with include=["animals"] / exclude=["favorite_animal"]), then iterate the returned nodes to assert that NodeMetadata/Attribute metadata and RelationshipMetadata fields (created_at/updated_at and created_by/updated_by.display_label) are present and correct.
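The shape of the suggested sync test can be sketched without a running server. This is a minimal, self-contained illustration: `StubSyncClient` is a hypothetical in-memory stand-in for the `client_sync` fixture, and the attribute names mirror those quoted in the review rather than a verified SDK surface.

```python
from types import SimpleNamespace


class StubSyncClient:
    """Hypothetical stand-in for InfrahubClientSync, for illustration only."""

    def all(self, kind, include_metadata=False):
        # Return one fake node whose metadata is only populated when requested,
        # mimicking the include_metadata=True behavior described in the review.
        meta = SimpleNamespace(
            updated_at="2024-01-01T00:00:00Z",
            updated_by=SimpleNamespace(display_label="Admin"),
        )
        node = SimpleNamespace(node_metadata=meta if include_metadata else None)
        return [node]


def check_all_metadata(client_sync, kind):
    """Shape of the suggested sync test: list nodes with metadata and assert it deserialized."""
    nodes = client_sync.all(kind=kind, include_metadata=True)
    for node in nodes:
        assert node.node_metadata is not None
        assert node.node_metadata.updated_by.display_label == "Admin"
        assert node.node_metadata.updated_at is not None
    return len(nodes)
```

In the real suite the same assertions would run against nodes returned by the actual `client_sync.all(...)` call.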
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/test_infrahub_client_sync.py`:
- Around line 351-370: The test test_attribute_metadata mutates the shared
fixture cat_luna by changing node.name.value to "Luna Updated" and calling
node.save(), which leaves persistent state modified for later tests; fix by
either using a disposable node (create a temporary node via InfrahubClientSync
create or clone an existing node and operate on that) or by capturing the
original value (e.g., original_name_value = node.name.value) and restoring it
after assertions (set node.name.value back and call node.save() in a
teardown/finally block) when using client_sync.get / node.save so the shared
cat_luna is not left modified.
In `@tests/integration/test_infrahub_client.py`:
- Around line 360-378: The test test_attribute_metadata mutates shared node name
to "Luna Updated" and doesn't restore it; wrap the mutation and save in a
try/finally (or create a disposable node) so the original state is restored:
before changing node.name.value read and store the original value (and
timestamps if necessary), perform the update and assertions, then in finally set
node.name.value back to the saved original and await node.save() (or call
client.delete(kind=TESTING_CAT, id=cat_luna.id) if you choose to create a
disposable node) to ensure subsequent tests are not affected; reference
test_attribute_metadata, node / node_after, client.get, node.save, TESTING_CAT
and cat_luna.id when locating where to add the cleanup.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 28cba89f-6a7a-40c8-b54a-52cc93bb31a7
📒 Files selected for processing (2)
tests/integration/test_infrahub_client.py
tests/integration/test_infrahub_client_sync.py
```python
async def test_attribute_metadata(self, client: InfrahubClient, base_dataset: None, cat_luna: InfrahubNode) -> None:
    node = await client.get(kind=TESTING_CAT, id=cat_luna.id, include_metadata=True)

    assert node.name.updated_by.display_label == "Admin"
    assert node.breed.updated_by.display_label == "Admin"
    original_name_updated_at = node.name.updated_at
    original_breed_updated_at = node.breed.updated_at

    node.name.value = "Luna Updated"
    await node.save()

    assert original_name_updated_at is not None

    node_after = await client.get(kind=TESTING_CAT, id=cat_luna.id, include_metadata=True)
    assert node_after.name.value == "Luna Updated"
    assert node_after.name.updated_by.display_label == "Admin"
    assert node_after.name.updated_at is not None
    assert node_after.name.updated_at > original_name_updated_at
    assert node_after.breed.updated_at == original_breed_updated_at
```
Please clean up the renamed test node before exiting.
This test mutates shared class-scoped data and leaves "Luna Updated" behind, which makes later integration tests depend on execution order. A disposable node or a restore step in cleanup would keep the suite isolated.
As per coding guidelines: "tests/integration/**/*.py: Clean up resources in integration tests"
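The guaranteed-cleanup pattern the review asks for can be sketched generically. This is a minimal, self-contained illustration using an in-memory stand-in for the client; `FakeClient` and its methods are hypothetical, and the real tests would run against an actual Infrahub instance via `InfrahubClient`/`InfrahubClientSync`.

```python
class FakeClient:
    """In-memory stand-in for an Infrahub client, for illustration only."""

    def __init__(self):
        self.store = {}
        self._next_id = 0

    def create(self, kind, name):
        self._next_id += 1
        node = {"id": self._next_id, "kind": kind, "name": name}
        self.store[node["id"]] = node
        return node

    def delete(self, kind, id):
        self.store.pop(id, None)


def run_disposable_node_test(client, fail=False):
    """Create a disposable node, run assertions, and always clean up."""
    disposable = None
    try:
        disposable = client.create(kind="TestingCat", name="MetadataTestCat")
        if fail:
            raise AssertionError("simulated mid-test failure")
        assert disposable["name"] == "MetadataTestCat"
    finally:
        # Guarded cleanup: runs on both the success and failure paths,
        # and skips deletion if creation itself failed.
        if disposable and disposable.get("id"):
            client.delete(kind="TestingCat", id=disposable["id"])
```

The `finally` block is what keeps later tests independent of this one: even when an assertion fails, the disposable node is removed before the test exits.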
Force-pushed from a6a2332 to 4fd9d59
Actionable comments posted: 3

♻️ Duplicate comments (2)

tests/integration/test_infrahub_client_sync.py (1)

Lines 354-377 (⚠️ Potential issue, 🟡 Minor): Always delete the disposable cat in a `finally` block. Line 377 only runs on the happy path. If any assertion or `save()` above fails, this leaves `MetadataTestCat` behind and can skew later integration tests.

As per coding guidelines: "tests/integration/**/*.py: Clean up resources in integration tests"

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tests/integration/test_infrahub_client_sync.py` around lines 354-377: wrap the test body that creates the disposable node in a try/finally so the created cat is always cleaned up; after calling client_sync.create(...) and disposable.save(), ensure a finally block calls disposable.delete() (guarded by a truthy check like `if disposable and getattr(disposable, "id", None)` to avoid errors if creation failed) so the resource is removed even when assertions or saves fail; update the test around the symbols disposable, client_sync.create, disposable.save, and disposable.delete accordingly.

tests/integration/test_infrahub_client.py (1)

Lines 363-387 (⚠️ Potential issue, 🟡 Minor): Always clean up the disposable node in a `finally` block. Line 387 only runs if every assertion succeeds. A failure earlier in the test leaves `MetadataTestCat` behind and can make later integration tests order-dependent.

As per coding guidelines: "tests/integration/**/*.py: Clean up resources in integration tests"

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tests/integration/test_infrahub_client.py` around lines 363-387: wrap the test actions that create and modify the disposable node in a try/finally so the created resource is always removed; specifically, enclose the sequence that calls client.create(...) -> disposable.save(), the gets/updates, and the assertions in a try, and move the cleanup into a finally block where you check for and await deletion of the resource (prefer deleting node_after if set, else node if set, else disposable); use the existing variables disposable, node, node_after and calls like `await node_after.delete()` / `await node.delete()` / `await disposable.delete()`, and optionally catch/log any delete errors to avoid masking test failures.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@node_refactoring_analysis.md`:
- Around line 9-14: The separator rows in the Markdown tables lack spaces around
the pipes and are triggering MD060; update the separator lines in the table that
lists `_generate_input_data`, `_strip_unmodified_dict`, and
`generate_query_data_node` (both async and sync entries) to use spaced
separators like "| --- | --- | --- |" (one space between pipes and dashes) and
apply the same change to the other similar table mentioned (the lower table
covering the same pattern). Ensure each separator row matches the column count
and uses consistent single-space padding so markdown-lint no longer flags MD060.
In `@tests/integration/test_infrahub_client_sync.py`:
- Around line 363-375: The strict timestamp assertion is flaky; replace the
direct assert node_after.name.updated_at > original_name_updated_at with a short
polling loop that calls client_sync.get(kind=TESTING_CAT, id=disposable.id,
include_metadata=True) until node_after.name.updated_at is different from
original_name_updated_at or a small timeout (e.g., 1–2s) elapses, then assert
the timestamp changed (or fail after timeout); keep the existing checks for
node_after.name.value and node_after.name.updated_by.display_label and reference
the existing node.save() call and original_name_updated_at variable when
implementing the poll.
In `@tests/integration/test_infrahub_client.py`:
- Around line 372-385: The test's strict assertion that
node_after.name.updated_at > original_name_updated_at is flaky because
create+update can produce identical timestamps; change the test to poll/retry
fetching with client.get(kind=TESTING_CAT, id=disposable.id,
include_metadata=True) until node_after.name.updated_at !=
original_name_updated_at (or until a short timeout) and then assert the
timestamp changed, or alternatively remove the strict ordering and assert only
that updated_at is not None; update references to node.save(),
original_name_updated_at, node_after.name.updated_at, and
node_after.name.updated_by.display_label accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6c331e7d-a65e-4722-a0b3-95e6d9e46f02
📒 Files selected for processing (3)
node_refactoring_analysis.md
tests/integration/test_infrahub_client.py
tests/integration/test_infrahub_client_sync.py
```markdown
| Method | Branches | Line |
|--------|----------|------|
| `_generate_input_data` | 21 | 289 |
| `_strip_unmodified_dict` | 13 | 378 |
| `generate_query_data_node` (async) | 17 | 971 |
| `generate_query_data_node` (sync) | 17 | 1850 |
```
Fix the table separator spacing so markdown-lint passes.
Both tables are currently tripping MD060 on their separator rows.
📝 Suggested fix

```diff
-|--------|----------|------|
+| ------ | -------- | ---- |
```

```diff
-|--------|------|--------|
+| ------ | ---- | ------ |
```

Also applies to: lines 125-132
🧰 Tools
🪛 GitHub Check: markdown-lint
[failure] 10-10: Table column style
node_refactoring_analysis.md:10:28 MD060/table-column-style Table column style [Table pipe is missing space to the left for style "compact"] https://github.com/DavidAnson/markdownlint/blob/v0.40.0/doc/md060.md
[failure] 10-10: Table column style
node_refactoring_analysis.md:10:21 MD060/table-column-style Table column style [Table pipe is missing space to the right for style "compact"] https://github.com/DavidAnson/markdownlint/blob/v0.40.0/doc/md060.md
[failure] 10-10: Table column style
node_refactoring_analysis.md:10:21 MD060/table-column-style Table column style [Table pipe is missing space to the left for style "compact"] https://github.com/DavidAnson/markdownlint/blob/v0.40.0/doc/md060.md
[failure] 10-10: Table column style
node_refactoring_analysis.md:10:10 MD060/table-column-style Table column style [Table pipe is missing space to the right for style "compact"] https://github.com/DavidAnson/markdownlint/blob/v0.40.0/doc/md060.md
[failure] 10-10: Table column style
node_refactoring_analysis.md:10:10 MD060/table-column-style Table column style [Table pipe is missing space to the left for style "compact"] https://github.com/DavidAnson/markdownlint/blob/v0.40.0/doc/md060.md
[failure] 10-10: Table column style
node_refactoring_analysis.md:10:1 MD060/table-column-style Table column style [Table pipe is missing space to the right for style "compact"] https://github.com/DavidAnson/markdownlint/blob/v0.40.0/doc/md060.md
```python
original_name_updated_at = node.name.updated_at
original_breed_updated_at = node.breed.updated_at
assert original_name_updated_at is not None

node.name.value = "MetadataTestCat Updated"
node.save()

node_after = client_sync.get(kind=TESTING_CAT, id=disposable.id, include_metadata=True)
assert node_after.name.value == "MetadataTestCat Updated"
assert node_after.name.updated_by.display_label == "Admin"
assert node_after.name.updated_at is not None
assert node_after.name.updated_at > original_name_updated_at
assert node_after.breed.updated_at == original_breed_updated_at
```
The strict `updated_at > original_name_updated_at` check can be flaky.

This update happens immediately after creation, so both values can land in the same timestamp bucket on fast containers. Consider polling until the timestamp changes, or assert on a non-time-based signal instead.
```python
original_name_updated_at = node.name.updated_at
original_breed_updated_at = node.breed.updated_at

node.name.value = "MetadataTestCat Updated"
await node.save()

assert original_name_updated_at is not None

node_after = await client.get(kind=TESTING_CAT, id=disposable.id, include_metadata=True)
assert node_after.name.value == "MetadataTestCat Updated"
assert node_after.name.updated_by.display_label == "Admin"
assert node_after.name.updated_at is not None
assert node_after.name.updated_at > original_name_updated_at
assert node_after.breed.updated_at == original_breed_updated_at
```
The immediate `updated_at` ordering assertion is prone to flakiness.

A create and update performed back-to-back can legitimately share the same timestamp resolution in the containerized test environment. Poll until the timestamp changes, or avoid asserting strict monotonicity here.
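The polling approach suggested in these comments can be sketched as a small, self-contained helper. The name `wait_for_timestamp_change` is hypothetical; in the real test, `fetch_updated_at` would re-fetch the node, e.g. `lambda: client.get(kind=TESTING_CAT, id=disposable.id, include_metadata=True).name.updated_at`.

```python
import time


def wait_for_timestamp_change(fetch_updated_at, original, timeout=2.0, interval=0.05):
    """Poll fetch_updated_at() until it differs from `original`, or time out."""
    deadline = time.monotonic() + timeout
    while True:
        current = fetch_updated_at()
        if current != original:
            # The timestamp moved: return it so the caller can assert on it.
            return current
        if time.monotonic() >= deadline:
            raise AssertionError(
                f"updated_at never changed from {original!r} within {timeout}s"
            )
        time.sleep(interval)
```

Asserting "the timestamp eventually changed" this way tolerates same-bucket timestamps on fast machines while still failing loudly if the update is never reflected.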
Why
Add integration tests for metadata
What changed
Summary by CodeRabbit