**README.md**: 1 addition, 0 deletions
```diff
@@ -34,6 +34,7 @@ Protenix is built for high-accuracy structure prediction. It serves as an initia
 - **[Protenix-Dock](https://github.com/bytedance/Protenix-Dock)**: Our implementation of a classical protein-ligand docking framework that leverages empirical scoring functions. Without using deep neural networks, Protenix-Dock delivers competitive performance in rigid docking tasks.
 
 ## 🎉 Updates
+- 2025-11-05: [**Protenix-v0.7.0**](./assets/inference_time_vs_ntoken.png) is now open-sourced, with new options for faster diffusion inference: shared variable caching, efficient bias fusion, and TF32 acceleration.
 - 2025-07-17: **Protenix-Mini released!** Lightweight model variants with significantly reduced inference cost are now available. Users can choose from multiple configurations to balance speed and accuracy based on deployment needs. See our [paper](https://arxiv.org/abs/2507.11839) and [model configs](./configs/configs_model_type.py) for more information.
 - 2025-07-17: [***New constraint feature***](docs/infer_json_format.md#constraint) is released! Now supports **atom-level contact** and **pocket** constraints, significantly improving performance in our evaluations.
 - 2025-05-30: **Protenix-v0.5.0** is now available! You may try Protenix-v0.5.0 by accessing the [server](https://protenix-server.com), or upgrade to the latest version using pip.
```
In the second file of the diff, three new `click` options are added immediately after the existing triangle-attention kernel flag:

```diff
     help="Kernel to use for triangle attention. Options: 'triattention', 'cuequivariance', 'deepspeed', 'torch'.",
 )
```
```diff
+@click.option(
+    "--enable_cache",
+    type=bool,
+    default=True,
+    help="The diffusion module precomputes and caches pair_z, p_lm, and c_l (which are shareable across the N_sample and N_step dimensions).",
+)
```
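For intuition, here is a minimal, hypothetical sketch of the caching pattern this help text describes (toy names and shapes, not the repository's actual code): conditioning tensors that depend only on the trunk output, standing in for `pair_z`, `p_lm`, and `c_l`, are computed once and then reused across all `N_sample` diffusion samples and `N_step` denoising steps.

```python
import torch
import torch.nn as nn

class DiffusionToy(nn.Module):
    """Toy stand-in for the diffusion module; only the caching pattern
    mirrors the help text above, every layer and shape is an assumption."""

    def __init__(self, c_pair: int = 16, c_atom: int = 8):
        super().__init__()
        self.pair_proj = nn.Linear(c_pair, c_pair)  # stand-in for pair_z
        self.atom_proj = nn.Linear(c_atom, c_atom)  # stand-in for p_lm / c_l

    def precompute(self, z: torch.Tensor, a: torch.Tensor) -> dict:
        # Sample- and step-independent conditioning: compute once, reuse.
        return {"pair_z": self.pair_proj(z), "c_l": self.atom_proj(a)}

    def denoise_step(self, x: torch.Tensor, cond: dict) -> torch.Tensor:
        # A real block would attend over x using the cached conditioning.
        return x - 0.1 * (x - cond["c_l"]) + cond["pair_z"].mean()

def sample(model, z, a, n_sample=5, n_step=200, enable_cache=True):
    cached = model.precompute(z, a) if enable_cache else None
    results = []
    for _ in range(n_sample):
        x = torch.randn_like(a)
        for _ in range(n_step):
            # With the cache off, this work is redone N_sample * N_step times.
            cond = cached if enable_cache else model.precompute(z, a)
            x = model.denoise_step(x, cond)
        results.append(x)
    return results

outs = sample(DiffusionToy(), torch.randn(32, 32, 16), torch.randn(32, 8))
```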
```diff
+@click.option(
+    "--enable_fusion",
+    type=bool,
+    default=True,
+    help="The diffusion transformer consists of 24 transformer blocks, and the biases in these blocks can be pre-transformed in terms of dimensionality and normalization.",
+)
```
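The `--enable_fusion` help text points at a similar precomputation: each of the 24 transformer blocks derives an attention bias from the same pair representation through a normalization and a projection, so those per-block transforms can be applied once before the denoising loop rather than inside every block at every step. A sketch under that reading (the block count comes from the help text; all names and shapes are assumptions):

```python
import torch
import torch.nn as nn

N_BLOCKS = 24  # from the help text; everything else here is illustrative

class PairBiases(nn.Module):
    """Per-block LayerNorm + Linear over a fixed pair representation."""

    def __init__(self, c_z: int = 16, n_head: int = 4):
        super().__init__()
        self.norms = nn.ModuleList([nn.LayerNorm(c_z) for _ in range(N_BLOCKS)])
        self.projs = nn.ModuleList(
            [nn.Linear(c_z, n_head, bias=False) for _ in range(N_BLOCKS)]
        )

    def prefuse(self, z: torch.Tensor) -> list[torch.Tensor]:
        # z: [n_token, n_token, c_z] -> one [n_head, n_token, n_token] bias
        # per block, computed once here because z is fixed during sampling.
        return [
            proj(norm(z)).permute(2, 0, 1)
            for norm, proj in zip(self.norms, self.projs)
        ]

biases = PairBiases().prefuse(torch.randn(32, 32, 16))  # reused every step
```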
```diff
+@click.option(
+    "--enable_tf32",
+    type=bool,
+    default=True,
+    help="When the diffusion module uses FP32 computation, enabling enable_tf32 reduces the matrix multiplication precision from FP32 to TF32.",
+)
```