Migrate qBraid Target in CUDAQ to qBraid v2 #5
Open
TheGupta2012 wants to merge 116 commits into main from
Conversation
TheGupta2012 (Member, Author):
@ryanhill1 I discarded the commit in current
Force-pushed from a72236c to 46593c0
* working implementation using openQasm
* modified and added test files (incomplete)
* fix emulate command alignment
* update polling + format
* update polling interval and make code more readable
* remove ionq fields from target-arguments
* fix formatting
* Add qBraid mock python server for testing
* Update `__init__.py`
* QbraidTester running correctly
* added documentation for qbraid

---------
Signed-off-by: Ryan Hill <ryanjh88@gmail.com>
Co-authored-by: feelerx <superfeelerxx@gmail.com>
Force-pushed from 46593c0 to 3b0a1e4
The deployments cleanup job only removes `default` environment deployments but not `ghcr-ci` ones. Every CI run creates multiple ghcr-ci deployments via dev_environment.yml, leaving "copy-pr-bot temporarily deployed to ghcr-ci — Inactive" entries cluttering PR timelines. Extend the existing cleanup loop to also delete ghcr-ci deployments. The production `ghcr-deployment` environment used by deployments.yml is not affected. Signed-off-by: mitchdz <mitch_dz@hotmail.com>
…DIA#4320) Fixes NVIDIA#4319.

The basis-driven pattern selection in `decomposition{basis=...}` failed to select decomposition chains involving `SToR1` and `TToR1` because these patterns were registered with `s(1)`/`t(1)` metadata (controlled-only) despite their implementations handling any control count. The graph lookup in `DecompositionPatternSelection.cpp` used exact hash matching on `OperatorInfo`, so an unbounded `(n)` entry could not match a concrete control count. This left `CCX` gates undecomposed when `t` was not directly in the target basis.

The fix updates the `SToR1`/`TToR1`/`R1ToU3`/`U3ToRotations` registrations to `(n)` and adds `OperatorInfo::matches()` for wildcard control-count matching in `incomingPatterns()` and `findGateDist()`.

Signed-off-by: Thomas Alexander <talexander@nvidia.com>
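The wildcard-vs-exact matching distinction can be sketched in a few lines of Python (illustrative only; the real `OperatorInfo` is a C++ struct, and `None` here stands in for the unbounded `(n)` registration):

```python
from dataclasses import dataclass
from typing import Optional

UNBOUNDED = None  # stands in for an "(n)" registration

@dataclass(frozen=True)
class OperatorInfo:
    name: str
    controls: Optional[int]  # None means "any number of controls"

    def matches(self, other: "OperatorInfo") -> bool:
        if self.name != other.name:
            return False
        # A wildcard on either side matches any concrete control count.
        return (self.controls is UNBOUNDED or other.controls is UNBOUNDED
                or self.controls == other.controls)

# A pattern registered for any control count matches a concrete lookup:
s_any = OperatorInfo("s", UNBOUNDED)
s_two_ctrl = OperatorInfo("s", 2)
assert s_any.matches(s_two_ctrl)
# Exact matching (the old hash-based lookup behaviour) misses it:
assert s_any != s_two_ctrl
```

The failure mode described above is exactly the second assertion: the graph lookup compared entries for equality, so the `(n)` entry never matched a concrete control count.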
…4332) Signed-off-by: Adam Geller <adgeller@nvidia.com>
…IA#4330) This updates the unittest so that cudaq::state objects are used to capture and pass state information (amplitude vectors) into kernels. The new API contract is that this sort of state information shall be passed into CUDA-Q kernels as state objects and not raw vectors. --------- Signed-off-by: Eric Schweitz <eschweitz@nvidia.com>
Migrating Python bindings from pybind11 to nanobind:
- Adding nanobind as a submodule
- Creating NanobindAdaptors for MLIR C-API type casters
- Keeping pybind11 only for upstream MLIR Python extensions
- Converting all `*_py.cpp` binding files, headers, CUDAQuantumExtension.cpp, pyDynamics, the interop library, and the PYSCF plugin to nanobind

---------
Signed-off-by: Sachin Pisal <spisal@nvidia.com>
I, Harshit <harshit.11235@gmail.com>, hereby add my Signed-off-by to this commit: 9cd62cf
I, Harshit <harshit.11235@gmail.com>, hereby add my Signed-off-by to this commit: 3b0a1e4
I, Harshit <harshit.11235@gmail.com>, hereby add my Signed-off-by to this commit: 1a24c66
Signed-off-by: Harshit <harshit.11235@gmail.com>
I, TheGupta2012 <harshit.11235@gmail.com>, hereby add my Signed-off-by to this commit: 925ae39
I, TheGupta2012 <harshit.11235@gmail.com>, hereby add my Signed-off-by to this commit: 41fe248
I, TheGupta2012 <harshit.11235@gmail.com>, hereby add my Signed-off-by to this commit: d74243d
Signed-off-by: TheGupta2012 <harshit.11235@gmail.com>
This is a rewrite of NVIDIA#4329, using a stateless class with static functions rather than a builder pattern. Signed-off-by: Luca Mondada <luca@mondada.net>
Fixes NVIDIA#4343. Signed-off-by: Sachin Pisal <spisal@nvidia.com>
…VIDIA#4335) When a kernel returns a vector (for `cudaq::run`), we insert `__nvqpp_vectorCopyCtor`, which performs a `malloc` + `memcpy` to copy stack data to the heap. After `AggressiveInlining` and `ReturnToOutputLog`, the heap copy becomes dead but remains in the IR. This is normally cleaned up by LLVM's optimization passes, but on code paths that emit MLIR directly (e.g., `nop` for backends that consume `quake`), these dead allocations persist and get sent to the server.

This PR adds a new MLIR pass, `eliminate-dead-heap-copy`, that redirects reads from the `malloc`'d buffer to the original `memcpy` source (the stack `alloca`), then erases the dead `malloc`, `memcpy`, and `cc.stdvec_init` ops. The pass can be added on demand via a target YAML file; the mock server test is updated to demonstrate this.

---------
Signed-off-by: Thien Nguyen <thiennguyen@nvidia.com>
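A toy model of the rewrite, on a simplified op list rather than real MLIR (the op encoding here is invented for illustration):

```python
# Illustrative-only sketch (not the real MLIR pass) of the rewrite idea:
# redirect reads of a heap buffer that is only ever a memcpy destination
# back to the memcpy source, then drop the now-dead malloc/memcpy.
def eliminate_dead_heap_copy(ops):
    # heap buffer -> stack source, discovered from memcpy ops
    copy_src = {op["dst"]: op["src"] for op in ops if op["kind"] == "memcpy"}
    out = []
    for op in ops:
        if op["kind"] in ("malloc", "memcpy") and (
                op.get("dst", op.get("result")) in copy_src):
            continue  # dead once the reads are redirected
        if op["kind"] == "load" and op["addr"] in copy_src:
            op = {**op, "addr": copy_src[op["addr"]]}  # read the alloca
        out.append(op)
    return out

ops = [
    {"kind": "alloca", "result": "stack0"},
    {"kind": "malloc", "result": "heap0"},
    {"kind": "memcpy", "dst": "heap0", "src": "stack0"},
    {"kind": "load", "addr": "heap0"},
]
assert eliminate_dead_heap_copy(ops) == [
    {"kind": "alloca", "result": "stack0"},
    {"kind": "load", "addr": "stack0"},
]
```

The real pass must additionally prove the heap buffer has no other writers before erasing it; the sketch assumes that precondition.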
Updating cuquantum version to 26.03.1 --------- Signed-off-by: Sachin Pisal <spisal@nvidia.com>
## Background
`cudaq.sample` with `set_target("braket")` fails on v0.14.0+ with:
```
RuntimeError: [line 10] cannot declare bit register. Only 1 bit register(s) is/are supported
```
Amazon Braket's OpenQASM 2.0 parser enforces exactly one classical
register per circuit. The payload CUDA-Q emits for the Bell-state
reproducer in NVIDIA#4341 contains two.
## Root cause
`addPipelineTranslateToOpenQASM` (`lib/Optimizer/CodeGen/Pipelines.cpp`)
was refactored in NVIDIA#3693 to run `ExpandMeasurements` unconditionally. For
`qasm2` backends that run `combine-measurements` in the mid pipeline
(Braket, Scaleway, Quantum Machines), the sequence becomes:
1. Mid pipeline: `combine-measurements` merges per-qubit measurements
into a single `quake.mz` on the whole `!quake.veq` - the intent being
"emit one `creg` spanning all qubits".
2. Translate pipeline: `ExpandMeasurements` re-expands the combined `mz`
into one `mz` per qubit, then loop-unrolls.
3. OpenQASM2.0 emitter: writes one `creg` declaration per `mz`.
Target-specific YAML intent is silently overridden in the translate
pipeline.
## Fix
1. `lib/Optimizer/CodeGen/Pipelines.cpp`: revert
`addPipelineTranslateToOpenQASM` to the thin cleanup it was before
NVIDIA#3693. Each backend's YAML now drives measurement expansion.
2. `infleqtion.yml` and `tii.yml`: add `jit-high-level-pipeline:
"expand-measurements"`. These targets previously depended on the
unconditional expansion to get one `creg` per measured qubit; the
explicit entry preserves that behavior.
3. `test/Translate/OpenQASM/basic.qke` and
`test/Translate/openqasm2_*.cpp`: update CHECK lines to match the
single-`creg` output for a vector `mz` (which is what the emitter
produces after the fix).
## Impact
| Backend | creg count for `mz(qvector(n))` |
|---|---|
| Braket, Scaleway, Quantum Machines | 1 (single `creg` of size n) |
| Infleqtion, TII | n (preserved via new YAML entry) |
| Quantinuum, IQM, OQC, Anyon, QCI | n (unchanged; already had `expand-measurements` in YAML) |
The change is scoped to `addPipelineTranslateToOpenQASM`, which only
runs for `codegen-emission: qasm2`. Simulators and non-OpenQASM2.0
backends are unaffected.
## Testing
- `ninja check-cudaq-mlir` passes with the updated CHECK lines.
- `cudaq.translate(kernel, format="openqasm2")` under `set_target(...)`
for Braket, Scaleway, Infleqtion, TII — creg counts match the matrix
above.
- Reproducer from NVIDIA#4341 now emits exactly the "expected" OpenQASM2.0
shown in the issue: `creg var3[2]; measure var0 -> var3;`.
- Manually tested against real servers: `test_braket.py`,
`test_Infleqtion.py`, `test_tii.py`, `test_scaleway.py`.
## Follow-up
An automated local test set up for OpenQASM payload validator will be
added in a separate PR.
Fixes NVIDIA#4341.
---------
Signed-off-by: Pradnya Khalate <pkhalate@nvidia.com>
…frastructure (NVIDIA#4349)

## Summary
Reverts PRs:
- NVIDIA#3800
- NVIDIA#4204
- NVIDIA#4208
- NVIDIA#4266
- NVIDIA#4267

Following an architecture alignment meeting (Apr 17), we are changing direction on how measurement results are represented in CUDA-Q. The `measure_result` standalone class and `!quake.measurements<N>` Quake type introduced by these PRs are being replaced by a new `measure_handle` approach with fundamentally different semantics.

This revert restores:
* `measure_result` as a typedef to bool (compiler mode)
* Multi-qubit `mz` returning `!cc.stdvec<!quake.measure>`

and removes:
* the `!quake.measurements<N>` type and the `quake.get_measure` / `quake.measurements_size` ops
* the `quake.relax_size` extension for measurements
* the `QIRResultArrayCreate` / `QIRResultArrayGetElementPtr1d` QIR intrinsics
* 8 test files added by the reverted PRs

### Forward direction (follow-up PRs)
New `measure_handle`

Signed-off-by: Pradnya Khalate <pkhalate@nvidia.com>
Skip identity terms when building the Pauli-word and coefficient lists passed to the Krylov kernel, since controlled `exp_pauli` does not handle identity terms. Their contribution is added back when assembling the Hamiltonian matrix. Fixes https://github.com/NVIDIA/cuda-quantum/actions/runs/24584888146/job/71904057326#step:5:1955

Signed-off-by: Sachin Pisal <spisal@nvidia.com>
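A minimal sketch of the split; the `(pauli_word, coeff)` term representation is invented for illustration (the real code builds cudaq spin-op data):

```python
# Identity terms are dropped from the lists handed to the kernel
# (controlled exp_pauli cannot apply them) and folded back in as c * I
# when the Hamiltonian matrix is assembled.
def split_identity(terms):
    """Return (non-identity terms, summed identity coefficient)."""
    id_coeff = sum(c for w, c in terms if set(w) <= {"I"})
    kept = [(w, c) for w, c in terms if not set(w) <= {"I"}]
    return kept, id_coeff

terms = [("II", 0.5), ("XX", 1.0), ("ZI", -0.25)]
kept, id_coeff = split_identity(terms)
assert kept == [("XX", 1.0), ("ZI", -0.25)]   # "ZI" is not pure identity
assert id_coeff == 0.5

# Contribution restored on the matrix side as id_coeff * I:
dim = 4
H_id = [[id_coeff if i == j else 0.0 for j in range(dim)] for i in range(dim)]
assert H_id[0][0] == 0.5 and H_id[0][1] == 0.0
```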
…rs (NVIDIA#4351) Fixed the `test_state_mps.py - AttributeError: 'list' object has no attribute 'dtype'` errors in https://github.com/NVIDIA/cuda-quantum/actions/runs/24624569814/job/72005503960#step:7:43857 The fix for the rest of the failure (`RuntimeError: invalid value`) will come in a separate PR. Signed-off-by: Thien Nguyen <thiennguyen@nvidia.com>
This PR removes argument synthesis by default for Python kernels run on the local simulator, instead invoking them directly with the arguments (currently, by constructing a message buffer through `.argsCreator`, which is passed to the kernel's `thunk`). This only affects entry-point kernels.

Benefits:
1. This makes it unnecessary to recompile kernels for different arguments in this setting, simplifying the `reuse_compiler_artifacts` logic.
2. It aligns the Python local-simulation path more closely with C++, where arguments are similarly not synthesized.
3. As a result of 1 and 2, it is a useful and important first step towards an inter-launch caching strategy for Python.

---------
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Luca Mondada <luca@mondada.net>
Co-authored-by: Luca Mondada <luca@mondada.net>
Signed-off-by: TheGupta2012 <harshit.11235@gmail.com>
## Summary
- Binds the runtime `Tracer` as `cudaq.util.trace` with:
  - `span(name, **kwargs)` context manager
  - `traced(name=None)` decorator (name defaults to `fn.__module__ + "." + fn.__qualname__`)
  - `TraceBackend` / `ChromeBackend` / `SpdlogBackend` first-class classes
  - `set_backend` / `get_backend` / `reset_backend`
- Backends constructed via `std::make_shared` factories (`nb::new_`). The Python wrapper and the C++ Tracer slot each hold an independent `shared_ptr`, so Python finalization tears down wrappers without the C++ slot losing its reference, and C++ static destruction runs the ChromeBackend file write cleanly without touching Python state.
- `ChromeBackend` exposes `to_json` / `to_dict` / `write_file` / `clear` for in-memory inspection with no filesystem round trip.
- Applies `@trace.traced` to every public kernel-action entry point (`sample`, `observe`, `run`, `get_state`, `get_unitary`, `estimate_resources`, `draw`, `translate`, `evolve`, `ptsbe.sample`, and async variants) and to `PyKernelDecorator.compile` / `prepare_call`, with a nested `kernel.clone_module` span around the `cudaq_runtime.cloneModule` call.

## Dependencies
Stacks on PR NVIDIA#4389. Rebase onto main after PR 1 merges and retarget the PR base.

---------
Signed-off-by: Thomas Alexander <talexander@nvidia.com>
NOTE: This is a re-post of NVIDIA#4392, which I merged into the wrong branch! It's already been reviewed, discussed and approved.

---

This PR splits out the container used in CompiledModule into its own type, so that it can be re-used by other upcoming types that look very similar, e.g. KernelArgs. I took the opportunity to switch from a `std::map` to a vector of pairs for storing the artifacts. This should be faster (most of the time there will be fewer than 5 artifacts) and lets several artifacts of different types share the same name, removing the need for a naming convention to differentiate multiple artifact types for the same kernel.

Signed-off-by: Luca Mondada <luca@mondada.net>
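The design choice can be sketched as follows (names are illustrative, not the real CompiledModule API):

```python
# Ordered list of (name, kind, payload) tuples instead of a dict: two
# artifacts of different kinds can share a name, and a linear scan is
# fast for the typical handful of entries.
class ArtifactStore:
    def __init__(self):
        self._items = []  # list of (name, kind, payload)

    def add(self, name, kind, payload):
        self._items.append((name, kind, payload))

    def get(self, name, kind):
        # Linear scan: typically fewer than ~5 artifacts.
        for n, k, p in self._items:
            if n == name and k == kind:
                return p
        raise KeyError((name, kind))

store = ArtifactStore()
store.add("bell", "quake", "...mlir...")
store.add("bell", "qir", "...llvm...")  # same name, different kind: fine
assert store.get("bell", "qir") == "...llvm..."
assert store.get("bell", "quake") == "...mlir..."
```

With a `std::map` keyed by name alone, the second `add` would have required a naming convention such as a type suffix to avoid clobbering the first.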
…IA#4404) ## Summary * Lower `!cc.measure_handle` to its `i64` payload through `--convert-to-qir-api`'s existing `TypeConverter`, completing the IR-side of the `cudaq::measure_handle` feature. * Builds on NVIDIA#4403. ## Motivation NVIDIA#4403 introduced `!cc.measure_handle` as IR vocabulary; nothing yet routes it to QIR. This PR adds the converter rule plus boundary bridging on `quake.mz` (which still calls a QIR function returning `Result*`) and `quake.discriminate` (which still consumes `Result*`), so handle-form kernels reach the QIR pipeline as `i64` payloads through the same `TypeConverter` machinery the rest of QIR conversion already uses. ## What Changed - **`QIRAPITypeConverter`** gains three `addConversion` rules: `!cc.measure_handle -> i64`, plus recursive descent through `!cc.array<...>` and `!cc.stdvec<...>` so container-shaped function signatures, allocations, and pointers see consistent post-conversion element types. The `!cc.ptr<...>` recursion was already in place. - **`MeasurementOpPattern`** detects when the original `quake.mz` produced a handle (its `measOut` is `!cc.measure_handle`) and emits a `cc.cast Result* -> i64` so downstream uses see the converted payload. The cast is materialized in the mz call's block, ahead of the optional terminator-relative insertion point used for record-output, so it dominates downstream `quake.discriminate` uses. - **`DiscriminateOpToCallRewrite`** mirrors this on the read side: when the post-conversion operand is integer-typed it emits `cc.cast i64 -> Result*` before delegating to the existing read-result lowering. In the full-QIR (`!discriminateToClassical`) branch the bridge cast and the inner double-cast fold against each other, leaving a single `cc.cast i64 -> !cc.ptr<i1>` + `cc.load`. - **`ExpandMeasurements`** accepts `!cc.measure_handle` alongside `!quake.measure` in `usesIndividualQubit`, so single-qubit handle measurements aren't rewritten as registers. 
- **Predicate rename**: the misnamed `hasQuakeType` is now `needsTypeConversion`, leaf check extended to include `!cc.measure_handle`, recursion extended to descend through `!cc.array`/`!cc.stdvec`. The old name was incorrect — it has always reported "this op carries a type the converter rewrites," not "this op carries a Quake type." - **Test**: `test/Transforms/qir_api_measure_handle.qke` covering scalar handle measurement + discriminate, function signature with handle parameter and return, `cc.alloca` of a scalar handle, static- and dynamic-size arrays of handles, `cc.stdvec<!cc.measure_handle>` in a function signature, `cc.indirect_callable<() -> !cc.measure_handle>`, and a no-handle negative. ## Risks - `cc.loop` iter-args carrying `!cc.measure_handle` are not exercised by the conversion's region-aware patterns. Low immediate risk because no current frontend or test produces such IR; flagged as a follow-up. - Container types beyond `cc.array`/`cc.stdvec` (e.g., a `cc.struct` with a handle field) are not in the converter's recursion. None of the current frontends produce these; not a regression vs. the prototype. ## Downstream Impact - CUDA-QX: none. - Public API: none. - Stack: the next PR adds C++/Python frontend bindings that produce handle-form IR, which this PR now correctly routes. --------- Signed-off-by: Pradnya Khalate <pkhalate@nvidia.com> Signed-off-by: Pradnya Khalate <148914294+khalatepradnya@users.noreply.github.com>
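The recursive conversion rule can be modeled on a toy type representation (simplified; the real code goes through MLIR's `TypeConverter`):

```python
# Toy model of the added rules: !cc.measure_handle lowers to i64, and the
# conversion recurses through array/stdvec/ptr containers so element
# types stay consistent after conversion.
def convert_type(ty):
    if ty == "measure_handle":
        return "i64"
    if isinstance(ty, tuple):        # ("array" | "stdvec" | "ptr", elem)
        kind, elem = ty
        return (kind, convert_type(elem))
    return ty                        # every other type is left unchanged

assert convert_type("measure_handle") == "i64"
assert convert_type(("stdvec", "measure_handle")) == ("stdvec", "i64")
assert convert_type(("ptr", ("array", "measure_handle"))) == ("ptr", ("array", "i64"))
assert convert_type("f64") == "f64"
```

Note the struct case called out under Risks is deliberately absent here too: only the container kinds the converter recurses through are modeled.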
Bumps [notebook](https://github.com/jupyter/notebook) from 7.5.2 to 7.5.6.

**Release notes** (sourced from [notebook's releases](https://github.com/jupyter/notebook/releases)):

v7.5.6
- Security patches: CVE-2026-42557 ([GHSA-mqcg-5x36-vfcg](https://github.com/jupyterlab/jupyterlab/security/advisories/GHSA-mqcg-5x36-vfcg)), CVE-2026-40171 ([GHSA-rch3-82jr-f9w9](https://github.com/jupyter/notebook/security/advisories/GHSA-rch3-82jr-f9w9))
- Update to JupyterLab v4.5.7 (#7902, @jtpio)
- docs: Fix broken links in troubleshooting and migration docs (#7824, @RamiNoodle733)

v7.5.5
- Update to JupyterLab v4.5.6 (#7861, @jtpio)
- [7.5.x] Drop Python 3.9 on CI (#7860, @jtpio)
- Fix check links (#7857, @jtpio)

v7.5.4 (from the changelog)
- Update to JupyterLab v4.5.5 (#7842, @jtpio)
- Fix PyO3 CI failure with Python 3.15 (#7836, @jtpio)

**Commits**
- `1ab2d2b` Publish 7.5.6
- `50e5222` Merge commit from fork
- `2e642f0` Update to JupyterLab v4.5.7 (#7902)
- `4b93f98` Backport PR #7824: docs: Fix broken links in troubleshooting and migration do...
- `9a2c88f` Publish 7.5.5
- `4f8438b` Update to JupyterLab v4.5.6 (#7861)
- `f78fcfa` Backport PR #7857: Fix check links (#7858)
- `9e4cf2a` [7.5.x] Drop Python 3.9 on CI (#7860)
- `ecc3aaf` Publish 7.5.4
- `e5d8418` Update to JupyterLab v4.5.5 (#7842)
- Additional commits viewable in the [compare view](https://github.com/jupyter/notebook/compare/@jupyter-notebook/tree@7.5.2...@jupyter-notebook/tree@7.5.6)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Signed-off-by: Eric Schweitz <eschweitz@nvidia.com>
Signed-off-by: mdzurick <mitch_dz@hotmail.com>
The test `test_mpi_mqpu.py` was hanging: `PyGILState_Ensure()` on the worker thread waits for the main thread to release the GIL, while the main thread sits in `asyncResult.get()` holding the GIL. `addPythonSignalInstrumentation` now skips installing the per-pass `PyGILState_Ensure`-based instrumentation when called from a thread that does not currently hold the GIL.

Signed-off-by: Sachin Pisal <spisal@nvidia.com>
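The guard condition can be illustrated with an ordinary `RLock` standing in for the GIL (the real check inspects GIL ownership; a non-blocking try-acquire plays that role here):

```python
import threading

# Illustration only: don't install lock-acquiring instrumentation from a
# thread that does not already hold the lock, so a worker never blocks
# behind a main thread parked in a blocking get() while holding it.
class GuardedInstrumentation:
    def __init__(self, lock):
        self._lock = lock
        self.installed = False

    def maybe_install(self):
        # Succeeds iff the lock is free or already owned by this thread;
        # owned-by-another-thread fails fast instead of deadlocking.
        if self._lock.acquire(blocking=False):
            try:
                self.installed = True
            finally:
                self._lock.release()

gil_stand_in = threading.RLock()
inst = GuardedInstrumentation(gil_stand_in)

with gil_stand_in:  # "main thread holds the GIL in asyncResult.get()"
    worker = threading.Thread(target=inst.maybe_install)
    worker.start()
    worker.join()   # returns promptly: the worker skipped installation

assert inst.installed is False  # skipped rather than deadlocked

inst.maybe_install()            # lock is free now: install proceeds
assert inst.installed is True
```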
The clone captured into the async task lambda was never erased, leaving the operation in the MLIR context. We now wrap it in a shared pointer with an erasing deleter that ties cleanup to the lambda's destruction, mirroring how we already handle this in [pyLaunchModule](https://github.com/NVIDIA/cuda-quantum/blob/main/python/runtime/cudaq/platform/py_alt_launch_kernel.cpp#L682). This eliminates the memory leak of around 15 KB per call. Fixes NVIDIA#3355

Signed-off-by: Sachin Pisal <spisal@nvidia.com>
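A Python analogue of tying cleanup to the capture's destruction, using `weakref.finalize` in place of the shared pointer's erasing deleter (`ClonedModule` and `erase_log` are invented for illustration):

```python
import gc
import weakref

class ClonedModule:
    """Stand-in for the cloned MLIR operation that must be erased."""

erase_log = []

def make_task(clone):
    # Fires when the clone becomes unreachable, i.e. once the last
    # closure capturing it is destroyed -- the "erasing deleter".
    weakref.finalize(clone, erase_log.append, "erased")
    def task():
        return clone is not None  # the async body uses the clone
    return task

task = make_task(ClonedModule())
task()
assert erase_log == []   # clone alive while the task still holds it
del task                 # task destroyed -> clone unreachable
gc.collect()
assert erase_log == ["erased"]
```

As with the shared pointer, cleanup happens exactly once, no matter when the task finishes relative to the caller.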
The end goal is to be able to have QPU headers visible from user code,
so compile-time checks can be performed based on QPU traits without
leaking headers from 3rd party libraries.
1. Importing the latest `nlohmann::json` library, which provides a
header for forward declaration of the `nlohmann::json` type.
- Using FetchContent
- Removed tpls/json
- Exporting json_fwd.hpp in the install tree
- Introducing the `cudaq_json` type to encapsulate the forward declared
`nlohmann::json`. Only needed by `KernelExecution`
- Only including the full `nlohmann::json` type in `.cpp` files.
2. Adding to the build tree the files newly needed at runtime to enable
`nvq++` to run from the build tree
(TargetConfig.h, Version.h, json_fwd.hpp)
3. Moved the Fermioniq implementation that was using `nlohmann::json` into
the `.cpp` file and merged FermioniqQPU with FermioniqBaseQPU.
4. Introduced qpu_utils.{h,cpp} to prevent header includes from bleeding
through the QPU headers. For now, this stops the inclusion of:
```
#include "cudaq/Optimizer/Builder/RuntimeNames.h"
#include "cudaq/Support/TargetConfig.h"
#include "cudaq/Support/TargetConfigYaml.h"
#include "llvm/Support/Base64.h"
```
5. Using `ModuleOp::getFromOpaquePointer`,
`ModuleOp::getAsOpaquePointer` and an MLIRContextDestructor to pass the
MLIR modules and contexts through the QPUs.
This is a stopgap until we have the unified launch interface @lmondada is
working on.
---------
Signed-off-by: Renaud Kauffmann <rkauffmann@nvidia.com>
- `x.ctrl(qubits[1:3], qubits[0])` was producing IQM JSON translation errors: `'quake.subveq' op unable to translate op to IQM Json`.
- MultiControlDecomposition defers CCX/CCZ to the CCXToCCZ decomposition pattern, but that pattern used `checkNumControls` and forwarded the original veq/subveq operand straight onto the rewritten `quake.z`. The subveq then survived into `translateToIQMJson`, which has no case for it.
- The fix switches CCXToCCZ to `checkAndExtractControls`, which is already used by `CCZToCX`, so any veq/subveq control is split into individual `quake.extract_ref`s before the new `quake.z` is built.

Fixes NVIDIA#4141

# CPP example used for testing
```cpp
#include <cudaq.h>

int main() {
  auto kernel = cudaq::make_kernel();
  auto qubit = kernel.qalloc(4);
  auto ctrl_bits = qubit.slice(1, 2);
  kernel.x<cudaq::ctrl>(ctrl_bits, qubit[0]);
  auto counts = cudaq::sample(kernel);
  counts.dump();

  auto kernel1 = cudaq::make_kernel();
  auto qubit1 = kernel1.qalloc(4);
  auto ctrl_bits1 = qubit1.slice(1, 3);
  kernel1.x<cudaq::ctrl>(ctrl_bits1, qubit1[0]);
  auto counts1 = cudaq::sample(kernel1);
  counts1.dump();
}
```

# Output of testing against IQM server
```
[2026-05-01 22:51:18.045] [info] [IQMServerHelper.cpp:270] postJobResponse:
{"counts_batch":[{"counts":{"0000":493,"1111":507},"measurement_keys":["m_QB1","m_QB2","m_QB3","m_QB4"]}],"message":null,"metadata":{"calibration_set_id":"61f8a617-7440-4521-a306-aa2096682706","request":{"circuits":[{"instructions":[{"args":{"key":"m_QB1"},"name":"measure","qubits":["QB1"]},{"args":{"key":"m_QB2"},"name":"measure","qubits":["QB2"]},{"args":{"key":"m_QB3"},"name":"measure","qubits":["QB3"]},{"args":{"key":"m_QB4"},"name":"measure","qubits":["QB4"]}],"name":"__nvqppBuilderKernel_093606261879"}],"qubit_mapping":[{"logical_name":"QB1","physical_name":"QB1"},{"logical_name":"QB2","physical_name":"QB2"},{"logical_name":"QB3","physical_name":"QB3"},{"logical_name":"QB4","physical_name":"QB4"},{"logical_name":"QB5","physical_name":"QB5"}],"shots":1000},"timestamps":{"compilation_ended":"2026-05-01T22:51:17.104693Z","compilation_started":"2026-05-01T22:51:16.919366Z","compile_end":"2026-05-01T22:51:17.104693Z","compile_start":"2026-05-01T22:51:16.919366Z","execution_end":"2026-05-01T22:51:17.312677Z","execution_ended":"2026-05-01T22:51:17.312677Z","execution_start":"2026-05-01T22:51:17.282203Z","execution_started":"2026-05-01T22:51:17.282203Z","post_processing_ended":"2026-05-01T22:51:17.316822Z","post_processing_started":"2026-05-01T22:51:17.313485Z","ready":"2026-05-01T22:51:17.317357Z","received":"2026-05-01T22:51:16.828059Z","validation_ended":"2026-05-01T22:51:16.878544Z","validation_started":"2026-05-01T22:51:16.877568Z"}},"status":"ready","warnings":null}
{ 0000:493 1111:507 }
```

Signed-off-by: Sachin Pisal <spisal@nvidia.com>
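The essence of the control-extraction change can be modeled on plain lists (qubit refs are just strings here; this is not the real rewriter API):

```python
# Flatten any register slices (subveq stand-ins) into individual qubit
# refs before building the controlled op, so later translation never
# sees a slice operand it cannot handle.
def extract_controls(controls):
    flat = []
    for c in controls:
        if isinstance(c, (list, tuple)):   # a veq/subveq slice
            flat.extend(c)
        else:                              # already an individual ref
            flat.append(c)
    return flat

# a slice qubits[1:3] used as controls alongside a plain qubit ref:
ctrls = extract_controls([("q1", "q2"), "q3"])
assert ctrls == ["q1", "q2", "q3"]
assert all(not isinstance(c, tuple) for c in ctrls)  # no slice survives
```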
NVIDIA#4442) Issue NVIDIA#4441: `test_cupy_to_state_ownership_semantics` fails on the CI's NVIDIA/CUDA stack because it asserts that the contiguous CuPy fast path aliases the source device buffer (mutating `contig[0]` should be visible through `np.array(state)`). That assertion was written from a local observation, not a contract: whether `CusvState` adopts the user-supplied device pointer or eagerly `cudaMemcpy`'s it into its own pool is a backend implementation detail that varies with the CuPy version, CUDA runtime version, and memory pool configuration.

Drop the contiguous half of the test and keep only the strided half, renaming it to `test_cupy_to_state_strided_canonicalization_is_independent` to reflect what is actually being asserted. Independence is part of the strided-import fix's contract because we explicitly canonicalize non-contiguous CuPy input through `cupy.asnumpy()`, which always returns a host copy.

Signed-off-by: huaweil <huaweil@nvidia.com>
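The independence contract described above can be sketched with NumPy's `ascontiguousarray` as a host-side stand-in for `cupy.asnumpy()` on non-contiguous device input (the names and shapes here are illustrative, not the actual test):

```python
import numpy as np

# Build a non-contiguous (strided) view: every other column.
src = np.arange(16, dtype=np.complex128).reshape(4, 4)
strided = src[:, ::2]
assert not strided.flags["C_CONTIGUOUS"]

# Canonicalizing a strided array always produces an independent copy,
# analogous to cupy.asnumpy() on non-contiguous CuPy input.
canonical = np.ascontiguousarray(strided)

# Mutating the source is NOT visible through the canonical copy.
src[0, 0] = -1.0
assert canonical[0, 0] == 0.0
```

This is why only the strided half of the test asserts a stable property: the copy is forced by the canonicalization step itself, independent of backend pointer-adoption behavior.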
Previously, we had special code handling the adjoint of projector tensors (used to construct a tensor network state from state vector data). This turned out to be a workaround for an unknown bug, which has now been fixed; hence, remove the workaround code block. Also add a guard for the cutensornet version containing the fix, which should have been updated in NVIDIA#4342. Also, extend the test to cover more cases beyond the simple Bell state. --------- Signed-off-by: Thien Nguyen <thiennguyen@nvidia.com> Co-authored-by: Anthony Cabrera <cabreraam@users.noreply.github.com>
This fixes the issue currently seen in NVIDIA#4370, where opening a PR and then immediately dispatching a `workflow_run` can cause the CI Summary to fail the per-PR status checks even though they actually pass. --------- Signed-off-by: mdzurick <mitch_dz@hotmail.com>
Make a separate CI summary for push events and merge_queue events. Signed-off-by: mdzurick <mitch_dz@hotmail.com>
## Summary Set `CUDAQ_LIBRARY_MODE` at directory scope in `unittests/CMakeLists.txt` so every unit-test binary in the tree compiles against the library-mode inline implementations of the runtime headers. ## Motivation The unit-test binaries instantiate `__qpu__` functor structs and invoke `cudaq::sample(myKernel)` directly from C++ test code, with no `nvq++` / `cudaq-quake` step in between. The AST bridge never runs, so measurement and gate calls reach the inline bodies in `runtime/cudaq/qis/qubit_qis.h` on the host. That is exactly the library-mode execution model. Today this happens to work because the historic inline `mz` / `mx` / `my` bodies execute on host. Future runtime header changes can make this implicit dependency explicit by trapping host invocation of the MLIR-mode entry points (e.g., `std::abort()` stubs); pre-emptively setting the macro keeps these test binaries on the supported execution path. --------- Signed-off-by: Pradnya Khalate <pkhalate@nvidia.com>
Summary: Migrate the CUDA-Q compiler and runtime from LLVM/MLIR 16 to LLVM/MLIR 22.1. This encompasses 154 files with ~7,300 lines of changes across the entire codebase, including op creation APIs, opaque pointer adoption, interface refactors, Python binding migration, and build system updates. Major Changes: - Op Creation API: `builder.create<Op>(loc, ...)` → `Op::create(builder, loc, ...)` - Opaque Pointers: Migrate typed LLVM pointers to opaque pointers (`!llvm.ptr`); remove element-type info from `LLVMPointerType::get()` - Rewrite Infrastructure: `PatternRewriter::updateRootInPlace` → `modifyOpInPlace`; `applyPatternsAndFoldGreedily` → `applyPatternsGreedily` - Pass Macros: Update pass definition macros from `GEN_PASS_CLASSES` to `GEN_PASS_DEF_*` - Python Bindings: Fix cross-DSO registration, return value policies, and type coercion - Runtime: Update JIT compilation infrastructure, LLVM target APIs, and argument conversion for opaque pointers - Tests: Update FileCheck patterns, QIR CHECK directives, and clang diagnostic verification for MLIR 22 output format changes - Toolchains: Replace `gcc11`, `gcc12`, and `clang16` with a bootstrapped `llvm` toolchain for CI testing. Follow-up work will re-enable the cross-compiler `gcc12` toolchain (still used for wheel builds). --------- Signed-off-by: boschmitt <7152025+boschmitt@users.noreply.github.com> Signed-off-by: Renaud Kauffmann <rkauffmann@nvidia.com> Signed-off-by: Sachin Pisal <spisal@nvidia.com> Signed-off-by: Eric Schweitz <eschweitz@nvidia.com> Signed-off-by: Adam Geller <adgeller@nvidia.com> Signed-off-by: Adam T. Geller <adgeller@nvidia.com> Co-authored-by: boschmitt <7152025+boschmitt@users.noreply.github.com> Co-authored-by: Renaud Kauffmann <rkauffmann@nvidia.com> Co-authored-by: Sachin Pisal <spisal@nvidia.com> Co-authored-by: Eric Schweitz <eschweitz@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
…VIDIA#4450) Signed-off-by: Adam Geller <adgeller@nvidia.com>
CUDA 12.6 doesn't work with clang++ 22.1. Re-enable gcc12 toolchain support to work around this. --------- Signed-off-by: Adam Geller <adgeller@nvidia.com> Signed-off-by: Mitchell <mitchdz@plasticmemories.xyz> Co-authored-by: Mitchell <mitch_dz@hotmail.com> Co-authored-by: Mitchell <mitchdz@plasticmemories.xyz>
Building flang with gcc12 kept hitting likely OOM errors. This update uses beefier runners with 64 GB of RAM and restricts ninja to 8 concurrent threads. Signed-off-by: mdzurick <mitch_dz@hotmail.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
- If the launched server exits before becoming reachable, `waitpid(WNOHANG)` breaks out of the 50s ping loop so we move to the next port immediately, instead of throwing `RuntimeError: No usable ports available` only after a few minutes. - Drop `static` from the `mt19937` so `seed_offset` is honoured on every construction, not just the first one. Fixes: ``` @pytest.fixture(scope="session", autouse=True) def startUpMockServer(): > cudaq.set_target("remote-mqpu", auto_launch=str(num_qpus)) E RuntimeError: No usable ports available tmp/tests/remote/test_remote_platform.py:71: RuntimeError ``` https://github.com/NVIDIA/cuda-quantum/actions/runs/25393104770/job/74490650292#step:7:1282 Signed-off-by: Sachin Pisal <spisal@nvidia.com>
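The early-exit logic described above can be sketched in Python, using `subprocess.Popen.poll()` as the non-blocking analogue of `waitpid(WNOHANG)`; the function and parameter names here are hypothetical, not the actual CUDA-Q implementation:

```python
import subprocess
import sys
import time

def wait_until_reachable(proc, ping, timeout=50.0, interval=0.1):
    """Poll ping() until it succeeds, but bail out early if the child
    process has already exited (poll() returns its exit code without
    blocking, like waitpid with WNOHANG)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if proc.poll() is not None:  # child died: stop waiting on this port
            return False
        if ping():
            return True
        time.sleep(interval)
    return False

# A child that exits immediately, like a server failing to bind its port.
proc = subprocess.Popen([sys.executable, "-c", "raise SystemExit(1)"])
start = time.monotonic()
ok = wait_until_reachable(proc, ping=lambda: False, timeout=50.0)
assert not ok
assert time.monotonic() - start < 5.0  # gave up early, not after 50 seconds
```

Without the dead-child check, each unusable port would burn the full timeout before the next one is tried, which is how the multi-minute `No usable ports available` failure arose.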
Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com>
…4459) - `move_artifacts` in `scripts/migrate_assets.sh` emitted an `rm` + `rmdir -p` pair per file. With LLVM 22 (~7k files) bundled into `cudaq/lib/llvm`, the generated `uninstall.sh` ballooned to a ~15k-line `if $continue; then ... fi` body, causing bash to segfault mid-uninstall in the "Additional validation (MPI and uninstall)" CI step on ubuntu/debian/fedora/redhat. - Capture top-level entries in `$1` before the move and emit one `rm -rf -- "$2/<entry>"` per entry. Trailing `rm -rf "$CUDA_QUANTUM_PATH"` is unchanged. Co-authored by: AI Signed-off-by: Sachin Pisal <spisal@nvidia.com>
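The shape of the fix above — one removal command per top-level entry instead of one per file — can be sketched in Python (function and path names are illustrative; the real implementation lives in `scripts/migrate_assets.sh` as shell):

```python
import os
import tempfile

def emit_uninstall_lines(src_dir, dest_dir):
    """Emit one `rm -rf` per top-level entry of src_dir, mirroring the
    fixed migrate_assets.sh, instead of an rm + rmdir pair per file."""
    return [f'rm -rf -- "{dest_dir}/{entry}"'
            for entry in sorted(os.listdir(src_dir))]

with tempfile.TemporaryDirectory() as d:
    # Simulate a tree with two top-level entries, one holding many files.
    os.makedirs(os.path.join(d, "lib"))
    os.makedirs(os.path.join(d, "bin"))
    for i in range(100):
        open(os.path.join(d, "lib", f"f{i}.so"), "w").close()

    lines = emit_uninstall_lines(d, "/opt/cudaq")
    # Two lines total, no matter how many files are nested inside.
    assert lines == ['rm -rf -- "/opt/cudaq/bin"',
                     'rm -rf -- "/opt/cudaq/lib"']
```

Keeping the generated `uninstall.sh` body proportional to the number of top-level entries rather than the number of installed files is what prevents the ~15k-line script that bash segfaulted on.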
…4463) Tests will soon be removed due to NVIDIA#4276 anyway. Signed-off-by: Adam Geller <adgeller@nvidia.com>
Signed-off-by: Adam Geller <adgeller@nvidia.com> Signed-off-by: Adam T. Geller <adgeller@nvidia.com>
Add the updates for migrating the cudaq qbraid target to use qBraid platform v2. Include updates for jobs, the main API update, auth updates for API-key authentication, etc.