Sk/ept classifier #120
base: main
Conversation
📝 Walkthrough

This pull request introduces an end-to-end inference pipeline for the EPT attack framework. Changes include adding configuration flags for inference execution and results storage, implementing helper functions for training summarization and best attack classifier persistence, and extending the MLPClassifier with training and prediction methods. A new …

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
examples/ept_attack/run_ept_attack.py (2)
108-140: Critical: `_summarize_and_save_training_results` is defined twice.

The function `_summarize_and_save_training_results` is defined at lines 108-140 and again at lines 275-307. This is a merge/rebase artifact. The second definition (lines 275-307) will shadow the first. Remove one of the duplicate definitions.

Also applies to: 275-307
144-273: Critical: `run_attack_classifier_training` is defined twice.

The function `run_attack_classifier_training` is defined at lines 144-273 and again at lines 362-497. The static analysis tool also flags this (F811). The second definition appears to be the intended version, as it includes the call to `_train_and_save_best_attack_classifier`. Remove the first definition (lines 144-273).

Also applies to: 362-497
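For context: Python silently rebinds a module-level name when a function is defined twice, so only the last definition survives; this is exactly what Ruff's F811 flags. A minimal illustration:

```python
def run_attack_classifier_training():  # first definition
    return "first"

def run_attack_classifier_training():  # redefinition silently shadows the one above (Ruff F811)
    return "second"

print(run_attack_classifier_training())  # prints "second"
```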
🤖 Fix all issues with AI agents
In `@examples/ept_attack/run_ept_attack.py`:
- Around line 531-535: The inference currently concatenates CSVs into
df_inference_features and calls trained_model.predict directly, but training
used filter_data with best_column_types in train_attack_classifier; update
inference to apply the same filtering: ensure best_column_types is persisted
with the model or passed into run_inference, then call
filter_data(df_inference_features, best_column_types) before calling
trained_model.predict; reference trained_model.predict, train_attack_classifier,
filter_data, best_column_types and run_inference to locate and update the code
paths.
- Around line 583-586: In main(), there's a duplicated conditional block that
re-checks config.pipeline.run_inference and
config.pipeline.run_attack_classifier_training; remove the second occurrence so
each pipeline check is only executed once—locate the duplicate if
config.pipeline.run_inference: ... and if
config.pipeline.run_attack_classifier_training: ... (referencing
run_inference(config) and run_attack_classifier_training(config)) and delete
that repeated block, leaving the original checks intact.
- Around line 551-552: The comment contains a typo: change "challenege" to
"challenge" in the TODO line referencing evaluation of inference results; update
the TODO comment text that mentions evaluating inference results using the
challenge labels and ensure the referenced function call
_evaluate_inference_results(predictions, diffusion_model_name) remains correctly
spelled and uncommented/implemented when you add the evaluation logic.
- Around line 11-15: Remove duplicate imports: keep a single import for pickle,
one for "from collections import defaultdict", and one for "from datetime import
datetime"; edit the top of run_ept_attack.py to eliminate the repeated lines
referencing pickle, defaultdict, and datetime so only one import statement per
symbol remains and ensure no other code depends on the removed duplicates.
In `@tests/unit/attacks/ept_attack/test_classification.py`:
- Around line 238-241: The model fixture currently hardcodes input_dim=20 and
hidden_dim=10 in its signature instead of using the existing input_dim and
hidden_dim fixtures; update the model fixture declaration to accept input_dim
and hidden_dim as parameters (remove the default values) and construct the
MLPClassifier using those injected fixture values (e.g.,
MLPClassifier(input_size=input_dim, hidden_size=hidden_dim, ...)) so changes to
the input_dim/hidden_dim fixtures propagate to tests that depend on model.
🧹 Nitpick comments (5)
examples/ept_attack/config.yaml (1)
12-13: Minor formatting: add a blank line before the pipeline control section.

There's a missing blank line between the new `inference_results_path` and the `# Pipeline control` comment, which would improve readability and maintain consistency with the rest of the file.

Suggested fix:

```diff
 inference_results_path: ${data_paths.output_data_path}/inference_results # Path to save inference results
+
 # Pipeline control
```
122-142: Consider mini-batch training for scalability.

The current implementation passes all training data through the network as a single batch per epoch. For large datasets, this could exhaust GPU memory. Consider adding a `batch_size` parameter with mini-batch iteration for better scalability, or document the expected dataset size limitations.
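A minimal sketch of what mini-batch training could look like, using a stand-in module rather than the toolkit's actual `MLPClassifier` (the class name, layer sizes, loss, and optimizer here are illustrative assumptions):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class MiniBatchMLP(nn.Module):
    """Stand-in classifier illustrating mini-batch iteration per epoch."""

    def __init__(self, input_size: int, hidden_size: int, epochs: int = 5, batch_size: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_size, hidden_size), nn.ReLU(), nn.Linear(hidden_size, 1))
        self.epochs = epochs
        self.batch_size = batch_size
        self.criterion = nn.BCEWithLogitsLoss()
        self.optimizer = torch.optim.Adam(self.parameters())

    def fit(self, x: torch.Tensor, y: torch.Tensor) -> None:
        # Iterate over mini-batches instead of pushing the full dataset through at once,
        # bounding peak memory by batch_size rather than by dataset size.
        loader = DataLoader(TensorDataset(x, y), batch_size=self.batch_size, shuffle=True)
        self.train()
        for _ in range(self.epochs):
            for batch_x, batch_y in loader:
                self.optimizer.zero_grad()
                loss = self.criterion(self.net(batch_x), batch_y)  # y: float targets, shape (n, 1)
                loss.backward()
                self.optimizer.step()
```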
254-260: Consider consolidating with existing `test_mlp_classifier`.

This test duplicates much of the layer dimension checking already done in `test_mlp_classifier` (lines 111-145). Consider either consolidating these tests or differentiating their purposes more clearly (e.g., this one could focus on the new `epochs`/`device` parameters).
522-523: Note: security consideration with pickle deserialization.

Using `pickle.load()` on files can be a security risk if the file source is untrusted (arbitrary code execution). In this context, the model files are generated by the same pipeline, so the risk is mitigated. However, consider documenting this assumption or using a safer serialization format (e.g., `torch.save`/`torch.load` for PyTorch models, or `joblib` with explicit trust flags) for defense in depth.
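A sketch of the `torch.save`/`torch.load` alternative mentioned above, persisting only the state dict rather than pickling the whole object (`trained_model`, `model_path`, and the constructor arguments are placeholders, and `MLPClassifier` is assumed to be an `nn.Module`):

```python
import torch

# Persist only the learned weights, not an arbitrary pickled Python object
torch.save(trained_model.state_dict(), model_path)

# To load: rebuild the architecture explicitly, then restore the weights.
# weights_only=True (available in recent PyTorch) refuses to unpickle
# arbitrary objects, closing the code-execution hole plain pickle leaves open.
model = MLPClassifier(input_size=20, hidden_size=10, output_size=1)
model.load_state_dict(torch.load(model_path, weights_only=True))
model.eval()
```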
499-549: Consider adding logging for model loading.

The inference function logs when saving results but doesn't log when successfully loading the trained model. Adding a log statement after loading would help with debugging and monitoring.

Suggested addition:

```diff
 with open(model_path, "rb") as file:
     trained_model = pickle.load(file)
+
+log(INFO, f"Loaded trained attack classifier from {model_path}")
```
```python
import pickle
from collections import defaultdict
from datetime import datetime
from collections import defaultdict
from datetime import datetime
```
Critical: Duplicate imports detected.

Lines 11-15 contain duplicate imports that need to be removed:

- `pickle` (line 11)
- `from collections import defaultdict` (lines 12 and 14)
- `from datetime import datetime` (lines 13 and 15)

This appears to be a merge/rebase artifact.
Proposed fix:

```diff
 import itertools
 import json
 import pickle
 from collections import defaultdict
 from datetime import datetime
-from collections import defaultdict
-from datetime import datetime
 from logging import INFO
 from pathlib import Path
```
🧰 Tools
🪛 Ruff (0.14.13)

14-14: Redefinition of unused `defaultdict` from line 12: `defaultdict` redefined here. Remove definition: `defaultdict` (F811)

15-15: Redefinition of unused `datetime` from line 13: `datetime` redefined here. Remove definition: `datetime` (F811)
```python
challenge_feature_files = inference_features_path.glob("*.csv")

df_inference_features = pd.concat([pd.read_csv(f) for f in challenge_feature_files], ignore_index=True)

predictions = trained_model.predict(df_inference_features)
```
Potential bug: Inference may fail due to missing feature filtering.

The inference code passes the raw DataFrame directly to `trained_model.predict()`:

```python
df_inference_features = pd.concat([pd.read_csv(f) for f in challenge_feature_files], ignore_index=True)
predictions = trained_model.predict(df_inference_features)
```

However, `train_attack_classifier` uses `filter_data()` to select specific columns before training. The inference features should be filtered using the same `best_column_types` that were used during training. Without this filtering, the model may receive incorrect features or fail entirely.
Proposed fix:

You'll need to either:

- Store and load the `best_column_types` alongside the model, or
- Pass the column types to `run_inference` and use `filter_data()`:

```diff
+from midst_toolkit.attacks.ept.classification import filter_data

 # In run_inference, after loading the model:
+# Load or determine the column types used for training
+best_column_types = [...]  # Need to persist this during training
+
 df_inference_features = pd.concat([pd.read_csv(f) for f in challenge_feature_files], ignore_index=True)
-predictions = trained_model.predict(df_inference_features)
+filtered_features = filter_data(df_inference_features, best_column_types)
+predictions = trained_model.predict(filtered_features)
```
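One way to realize the first option (persisting `best_column_types` with the model) is to pickle a small bundle at training time and unpack it during inference; a sketch, reusing the names from the review rather than the toolkit's actual API:

```python
import pickle

# Training side: bundle the classifier with the column types it was trained on
with open(model_path, "wb") as file:
    pickle.dump({"model": trained_model, "column_types": best_column_types}, file)

# Inference side: unpack both and apply the same filtering before predicting
with open(model_path, "rb") as file:
    bundle = pickle.load(file)

filtered_features = filter_data(df_inference_features, bundle["column_types"])
predictions = bundle["model"].predict(filtered_features)
```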
+predictions = trained_model.predict(filtered_features)🤖 Prompt for AI Agents
In `@examples/ept_attack/run_ept_attack.py` around lines 531 - 535, The inference
currently concatenates CSVs into df_inference_features and calls
trained_model.predict directly, but training used filter_data with
best_column_types in train_attack_classifier; update inference to apply the same
filtering: ensure best_column_types is persisted with the model or passed into
run_inference, then call filter_data(df_inference_features, best_column_types)
before calling trained_model.predict; reference trained_model.predict,
train_attack_classifier, filter_data, best_column_types and run_inference to
locate and update the code paths.
```python
#TODO: Implement evaluation of inference results using the challenege labels
# _evaluate_inference_results(predictions, diffusion_model_name)
```
Typo: "challenege" should be "challenge".
Proposed fix:

```diff
-#TODO: Implement evaluation of inference results using the challenege labels
+# TODO: Implement evaluation of inference results using the challenge labels
```
```python
if config.pipeline.run_inference:
    run_inference(config)
if config.pipeline.run_attack_classifier_training:
    run_attack_classifier_training(config)
```
Critical: Duplicate conditional blocks in `main()`.

Lines 583-586 contain duplicated conditional execution:

```python
if config.pipeline.run_inference:
    run_inference(config)
if config.pipeline.run_attack_classifier_training:
    run_attack_classifier_training(config)
```

The `run_attack_classifier_training` check is already present at lines 580-581. Remove the duplicate block.
Proposed fix:

```diff
 if config.pipeline.run_attack_classifier_training:
     run_attack_classifier_training(config)
 if config.pipeline.run_inference:
     run_inference(config)
-if config.pipeline.run_attack_classifier_training:
-    run_attack_classifier_training(config)
-
```
```python
@pytest.fixture
def model(input_dim=20, hidden_dim=10):
    """Fixture to create a fresh model instance for each test."""
    return MLPClassifier(input_size=input_dim, hidden_size=hidden_dim, output_size=1, epochs=5, device=DEVICE)
```
Bug: Fixture uses hardcoded defaults instead of injected fixtures.
The `model` fixture defines default values `input_dim=20, hidden_dim=10` directly in the function signature instead of requesting the `input_dim` and `hidden_dim` fixtures. This means changes to those fixtures won't propagate to the `model` fixture, and tests like `test_initialization` that use both `model` and `input_dim`/`hidden_dim` fixtures could fail if the values diverge.
Proposed fix:

```diff
 @pytest.fixture
-def model(input_dim=20, hidden_dim=10):
+def model(input_dim, hidden_dim):
     """Fixture to create a fresh model instance for each test."""
     return MLPClassifier(input_size=input_dim, hidden_size=hidden_dim, output_size=1, epochs=5, device=DEVICE)
```
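For reference, with the fix above pytest resolves `input_dim` and `hidden_dim` by name from fixtures presumably defined elsewhere in the module, along these lines:

```python
import pytest

@pytest.fixture
def input_dim() -> int:
    return 20

@pytest.fixture
def hidden_dim() -> int:
    return 10
```

Any test that requests `model` then picks up changes to these values automatically.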
🤖 Prompt for AI Agents
In `@tests/unit/attacks/ept_attack/test_classification.py` around lines 238 - 241,
The model fixture currently hardcodes input_dim=20 and hidden_dim=10 in its
signature instead of using the existing input_dim and hidden_dim fixtures;
update the model fixture declaration to accept input_dim and hidden_dim as
parameters (remove the default values) and construct the MLPClassifier using
those injected fixture values (e.g., MLPClassifier(input_size=input_dim,
hidden_size=hidden_dim, ...)) so changes to the input_dim/hidden_dim fixtures
propagate to tests that depend on model.
PR Type
[Feature | Fix]
Short Description
Clickup Ticket(s): https://app.clickup.com/t/868g60gxr
Refactored classification pipeline and classifier classes.
Added final classifier training (somehow missed it in the previous PR, sorry).
Added basic inference to get predictions on challenge data (no comparison with ground truth yet).
Tests Added
Modified tests to sync with the refactoring.
No new tests. Will add some when the inference evaluation is complete.