[tmva][sofie] Parse generated code at test runtime#21184

Open
guitargeek wants to merge 1 commit into root-project:master from guitargeek:sofie_tests
Conversation

@guitargeek
Contributor

TMVA SOFIE development is sometimes challenging because of how the tests are structured.

The tests that cover many possible models imported from ONNX or ROOT have the issue that they include all emitted code in the compiled executables. This means the build fails on the first model that generates invalid code, and the run stops there. Therefore, it's difficult to debug what is going wrong.

This commit instead includes the generated code via the interpreter. One can then check for each individual model whether the code was valid and, if not, skip to the next test and print the emitted code that failed to compile.

It has some performance overhead, but the tests still take only about 6 seconds. The drastically improved debugging experience justifies the few extra seconds spent on testing.

This was motivated by the effort to refactor the SOFIE-emitted code to make it differentiable with Clad.

@guitargeek guitargeek self-assigned this Feb 6, 2026
@guitargeek guitargeek force-pushed the sofie_tests branch 2 times, most recently from cfba928 to a47de91 Compare February 7, 2026 00:38
@github-actions

github-actions bot commented Feb 7, 2026

Test Results

    22 files      22 suites   3d 15h 35m 27s ⏱️
 3 777 tests  3 777 ✅ 0 💤 0 ❌
76 017 runs  76 017 ✅ 0 💤 0 ❌

Results for commit 2f82ccb.

♻️ This comment has been updated with latest results.
