add manual NumPy patching to mkl_fft #224

Merged
ndgrigorian merged 9 commits into master from add-mkl-fft-patching
Mar 4, 2026

Conversation

@ndgrigorian
Collaborator

@ndgrigorian ndgrigorian commented Sep 10, 2025

This PR proposes implementing manual NumPy patching in mkl_fft, à la mkl_umath.

This patching is specifically designed with NEP 36 in mind.

@ndgrigorian ndgrigorian changed the title add manual patching to mkl_fft add manual NumPy patching to mkl_fft Sep 10, 2025
@ndgrigorian ndgrigorian force-pushed the add-mkl-fft-patching branch 3 times, most recently from b2ebfdc to d22ae8e Compare September 10, 2025 16:17
@ndgrigorian ndgrigorian force-pushed the add-mkl-fft-patching branch from 559373e to e4c64d4 Compare October 9, 2025 18:32
@ndgrigorian ndgrigorian marked this pull request as ready for review February 20, 2026 06:35
Copilot AI review requested due to automatic review settings February 20, 2026 06:35
Contributor

Copilot AI left a comment


Pull request overview

This PR introduces manual NumPy patching functionality to mkl_fft, following NEP-36 fair play guidelines. It allows users to dynamically replace NumPy's FFT functions with mkl_fft's optimized implementations at runtime, similar to the approach used in mkl_umath.

Changes:

  • Added _patch_numpy.py module with patching/restoration functions and a context manager
  • Added test file test_patch.py to verify basic patch/restore functionality
  • Exported new public API functions: patch_numpy_fft(), restore_numpy_fft(), is_patched(), and mkl_fft context manager

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 10 comments.

  • mkl_fft/_patch_numpy.py — Implements the patching mechanism using thread-local storage to track state, provides functions to patch/restore numpy.fft functions, and includes a context manager for temporary patching
  • mkl_fft/tests/test_patch.py — Basic test verifying patch and restore functionality by checking module names
  • mkl_fft/__init__.py — Exports the new patching API functions and context manager class
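The mechanism described above (thread-local state, patch/restore functions, and a context manager for temporary patching) can be sketched in a self-contained way. This is a hypothetical illustration, not mkl_fft's actual implementation: a stand-in module plays the role of numpy.fft, and `mkl_style_fft` stands in for the optimized replacement.

```python
# Hypothetical sketch of the patch/restore pattern described in the review.
# A fake module stands in for numpy.fft; the real code would save and
# replace the actual numpy.fft functions instead.
import threading
import types
from contextlib import contextmanager

target = types.ModuleType("fake_numpy_fft")  # stand-in for numpy.fft
target.fft = lambda x: ("reference", x)      # "original" implementation

def mkl_style_fft(x):
    # Stand-in for mkl_fft's optimized implementation.
    return ("mkl", x)

_state = threading.local()  # per-thread record of the saved originals

def patch_numpy_fft():
    """Replace the target module's fft with the optimized version."""
    if getattr(_state, "saved", None) is None:
        _state.saved = {"fft": target.fft}
        target.fft = mkl_style_fft

def restore_numpy_fft():
    """Put the original functions back, if this thread patched them."""
    saved = getattr(_state, "saved", None)
    if saved is not None:
        target.fft = saved["fft"]
        _state.saved = None

def is_patched():
    return getattr(_state, "saved", None) is not None

@contextmanager
def mkl_fft_ctx():  # stand-in for the exported mkl_fft context manager
    patch_numpy_fft()
    try:
        yield
    finally:
        restore_numpy_fft()
```

Used as `with mkl_fft_ctx(): ...`, the replacement is active only inside the block and restored on exit, which is why the PR discussion recommends the context manager over raw patch/restore calls.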


@ndgrigorian ndgrigorian force-pushed the add-mkl-fft-patching branch 5 times, most recently from 51ba5d3 to 83496b2 Compare February 20, 2026 09:09
@ndgrigorian ndgrigorian requested a review from Copilot February 27, 2026 10:21
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.



@jharlow-intel
Contributor

@ndgrigorian ping here; the bot has some good points about idempotency of the patch function and some test cases

@ndgrigorian
Collaborator Author

> @ndgrigorian ping here, bot has some good points about idempotency of the patch function and some test cases

Yep, I agree, and the same goes for mkl_umath.

@ndgrigorian
Collaborator Author

@antonwolfy @jharlow-intel
I've updated this PR with a reimplementation of patching. We now use a singleton class, with locks, to prevent races, and we keep an explicit global patch counter as well as a thread-local patch counter. The double bookkeeping prevents edge cases where one thread never calls patch but calls unpatch, or calls unpatch twice. Just an extra guardrail.

This whole thing is a little hacky, so I added a note saying to prefer the context manager in general.
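The double-bookkeeping idea described above can be sketched as follows. This is an assumption-laden illustration, not the PR's actual code: the class name `GlobalPatch` is borrowed from a commit message, and the method bodies only mark where patching/restoring would happen.

```python
# Hypothetical sketch of the singleton + double-counter guardrail:
# a lock-guarded global patch count plus a per-thread count, so a
# thread can never restore more times than it patched.
import threading

class GlobalPatch:  # name taken from a commit message; details assumed
    _instance = None
    _instance_lock = threading.Lock()

    def __new__(cls):
        with cls._instance_lock:
            if cls._instance is None:
                inst = super().__new__(cls)
                inst._lock = threading.Lock()
                inst._count = 0                 # global patch count
                inst._local = threading.local() # per-thread patch count
                cls._instance = inst
            return cls._instance

    def patch(self):
        with self._lock:
            self._local.count = getattr(self._local, "count", 0) + 1
            self._count += 1
            if self._count == 1:
                pass  # first patch overall: install replacements here

    def unpatch(self):
        with self._lock:
            if getattr(self._local, "count", 0) == 0:
                return  # this thread never patched: ignore the call
            self._local.count -= 1
            self._count -= 1
            if self._count == 0:
                pass  # last unpatch overall: restore originals here
```

The per-thread counter is what makes a stray `unpatch()` from a thread that never patched a no-op, instead of corrupting the global count; as the comment notes, a race where one thread restores while another is still using the patched functions remains possible, which is another reason to prefer the scoped context manager.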

@antonwolfy antonwolfy added this to the 2.2.0 release milestone Mar 3, 2026
Contributor

@jharlow-intel jharlow-intel left a comment


Left a few comments on minor points, but this PR looks good to me.

Collaborator

@antonwolfy antonwolfy left a comment


LGTM! The only comment from me is that it'd be great to update the changelog.

@ndgrigorian
Collaborator Author

@jharlow-intel @antonwolfy
Added to the changelog and pulled out the dead ContextDecorator code in the patch object (I initially almost made the patch object a ContextDecorator and the parent of the mkl_fft manager, but decided against it).

should be ready

@ndgrigorian ndgrigorian force-pushed the add-mkl-fft-patching branch from f1604dd to ac6eca8 Compare March 4, 2026 01:31
* improve thread-safety of patching
* gate repeated patch calls

there still exist problematic edge cases (race condition where one thread restores while another is using patched functions)
leftover from idea to use GlobalPatch as parent to mkl_fft context manager
@ndgrigorian ndgrigorian force-pushed the add-mkl-fft-patching branch from acb0658 to c5931a9 Compare March 4, 2026 21:27
also use from mkl_fft import interfaces to avoid redundant mkl_fft module in namespace
@ndgrigorian ndgrigorian force-pushed the add-mkl-fft-patching branch from c5931a9 to a162685 Compare March 4, 2026 21:30
Collaborator

@antonwolfy antonwolfy left a comment


LGTM! Thank you @ndgrigorian

@ndgrigorian ndgrigorian merged commit e18fcc9 into master Mar 4, 2026
100 of 109 checks passed
@ndgrigorian ndgrigorian deleted the add-mkl-fft-patching branch March 4, 2026 22:10
