Releases: bytedance/Protenix
v0.7.3: Fix the bug in the code where ref_space_uid was mistakenly written as ref_mask in the cache computation.
What's Changed
- Fix the bug in the code where ref_space_uid was mistakenly written as ref_mask in the cache computation. commit 855973d.
Full Changelog: v0.7.2...v0.7.3
v0.7.2: Allow for the absence of pairing.a3m in inference.
What's changed
- If the directory specified by `precomputed_msa_dir` under the `msa` field in the inference file does not contain the `pairing.a3m` file, no error is thrown; instead, only the `non_pairing.a3m` file is used for inference. In previous versions, this caused an immediate error.
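The fallback can be sketched as follows (a minimal stdlib-only illustration, not the actual Protenix code; `collect_msa_files` is a hypothetical helper name):

```python
from pathlib import Path

def collect_msa_files(precomputed_msa_dir):
    """Hypothetical helper mirroring the v0.7.2 fallback behavior."""
    msa_dir = Path(precomputed_msa_dir)
    non_pairing = msa_dir / "non_pairing.a3m"
    if not non_pairing.exists():
        # non_pairing.a3m is still required for inference
        raise FileNotFoundError(f"missing MSA file: {non_pairing}")
    pairing = msa_dir / "pairing.a3m"
    # Since v0.7.2, a missing pairing.a3m no longer raises an error;
    # inference simply proceeds with the non-pairing MSA only.
    return {
        "non_pairing": non_pairing,
        "pairing": pairing if pairing.exists() else None,
    }
```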
example.json
[
  {
    "sequences": [
      {
        "proteinChain": {
          "sequence": "MGSSHHHHHHSSGLVPRGSHMSGKIQHKAVVPAPSRIPLTLSEIEDLRRKGFNQTEIAELYGVTRQAVSWHKKTYGGRLTTRQIVQQNWPWDTRKPHDKSKAFQRLRDHGEYMRVGSFRTMSEDKKKRLLSWWKMLRDNDLVLEFDPSIEPYEGMAGGGFRYVPRDISDDDLLIRVNEHTQLTAEGELLWSWPDDIEELLSEP",
          "count": 1,
          "msa": {
            "precomputed_msa_dir": "./examples/7r6r/msa/1",
            "pairing_db": "uniref100"
          }
        }
      },
      {
        "dnaSequence": {
          "sequence": "TTTCGGTGGCTGTCAAGCGGG",
          "count": 1
        }
      },
      {
        "dnaSequence": {
          "sequence": "CCCGCTTGACAGCCACCGAAA",
          "count": 1
        }
      }
    ],
    "name": "7r6r"
  }
]
v0.7.1: Enforce FP32 and torch kernels for triangle attention and triangle multiplication on V100.
What's Changed
- Added a `dtype` parameter to the Protenix CLI for inference, enabling FP32 inference via the `-d` flag.
- For inference on V100 GPUs, certain configurations are forcibly adjusted: for example, BF16 precision and unsupported optimized kernels are disabled by default.
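This kind of hardware safeguard can be sketched as follows (an illustrative approximation, not the actual Protenix logic; the config keys reuse the flag names that appear elsewhere in this changelog):

```python
def adjust_config_for_gpu(config, compute_capability):
    """Illustrative sketch: force safe settings on pre-Ampere GPUs.

    V100 GPUs report compute capability (7, 0) and support neither BF16
    nor the optimized triangle kernels, so fall back to FP32 and the
    plain torch implementations.
    """
    if compute_capability < (8, 0):
        config = dict(config)  # avoid mutating the caller's config
        config["dtype"] = "fp32"
        config["triatt_kernel"] = "torch"
        config["trimul_kernel"] = "torch"
    return config
```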
v0.7.0: add options for faster diffusion inference: shared variable caching, efficient bias fusion, and TF32 acceleration.
What's Changed
We’re excited to announce the open-source release of Protenix v0.7.0, supported by @yangyanpinghpc, featuring several performance optimizations for diffusion inference. This version introduces three new optional acceleration flags (enabled by default at the inference stage) and improved support for batched inference:
- --enable_cache: Precomputes and caches shared intermediate variables (pair_z, p_lm, c_l) across the N_sample and N_step dimensions.
- --enable_fusion: Fuses bias transformations and normalization in the 24-layer diffusion transformer blocks at compile time.
- --enable_tf32: Enables TF32 precision for matrix multiplications when using FP32 computation, trading slight numerical accuracy for speed.
- Batched Diffusion Support (N_sample > 1): Shares s_trunk and z_pair across the N_sample dimension during diffusion, reducing memory and compute overhead without affecting results.
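The effect of --enable_cache can be illustrated with a toy loop (a sketch of the general caching idea, not Protenix internals; all function names here are hypothetical):

```python
def run_diffusion(n_sample, n_step, compute_shared, step_fn, enable_cache=True):
    """Toy diffusion loop showing why caching step-invariant variables helps.

    With enable_cache=True, the shared intermediates (standing in for
    pair_z, p_lm, c_l) are computed once; otherwise they are recomputed
    for every (sample, step) pair.
    """
    shared = compute_shared() if enable_cache else None
    results = []
    for sample in range(n_sample):
        x = float(sample)  # stand-in for the initial noised coordinates
        for _ in range(n_step):
            if not enable_cache:
                shared = compute_shared()  # recomputed N_sample * N_step times
            x = step_fn(x, shared)
        results.append(x)
    return results
```

With caching enabled, the expensive shared computation runs once instead of `N_sample * N_step` times, which is exactly where the speedup comes from.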
You can run it using the following example command:
(Note: if not specified, --enable_cache, --enable_fusion, and --enable_tf32 default to true.)
protenix predict -i examples/example.json -o ./test_outputs/cmd/output_mini -s 105,106 -n "protenix_mini_default_v0.5.0" --triatt_kernel "torch" --trimul_kernel "torch" --enable_cache true --enable_fusion true --enable_tf32 true
v0.6.3: support polymer–polymer bond input at inference.
What's Changed
- Polymer–polymer bond input at inference. Inference can now read user-specified polymer–polymer covalent bonds from JSON and incorporate them into features. This supports cyclic peptides formed by head-to-tail amide linkage or disulfide bonds.
- CIF output quality. Cleaned and optimized fields in the generated CIF files for better downstream compatibility.
- msa_pairing.py: removed an assertion that used the deprecated np.string_ alias, improving NumPy compatibility.
- Updated inference README. Clarifies how to specify polymer–polymer bonds in JSON, the supported cyclic-peptide cases, and current limitations.
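As an illustration of the first item only, a head-to-tail cyclic-peptide bond might be represented with a structure like the following. All field names below are hypothetical placeholders, not the actual Protenix JSON schema; consult the inference README for the real format.

```python
# Hypothetical sketch of a polymer-polymer bond entry; the real field
# names are defined in the Protenix inference README, not here.
cyclic_peptide_bond = {
    "bond_type": "amide",  # head-to-tail amide linkage closing the ring
    "left": {"chain": "A", "residue": 1, "atom": "N"},
    "right": {"chain": "A", "residue": 12, "atom": "C"},
}

def is_intra_chain(bond):
    """A cyclic peptide closes a bond within a single chain."""
    return bond["left"]["chain"] == bond["right"]["chain"]
```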
v0.6.2: update cuequivariance to 0.6.1 and update constraint api
What's Changed
- minor modification by @OccupyMars2025 in #177
- add compatibility with colabfold mmseqs server api by @JinyuanSun in #178
- tests: Add test cases for installation and compatibility issues by @ShadNygren in #192
- fix: Resolve DeepSpeed/Pydantic compatibility issue (#182) by @ShadNygren in #193
- Fix #185: Enable consumer GPU support (RTX 3090/4090) with Triton fallback by @ShadNygren in #194
- minor modification: switch residue_index to token_index by @OccupyMars2025 in #195
- fix typo in the get_atom_permutation_list function by @mrzzmrzz in #196
- update cuequivariance to 0.6.1
- update constraint api and Protenix web server
New Contributors
- @JinyuanSun made their first contribution in #178
- @ShadNygren made their first contribution in #192
- @mrzzmrzz made their first contribution in #196
Full Changelog: v0.6.1...v0.6.2
v0.6.1: Fixed ESM model loading compatibility with PyTorch 2.6 and later versions.
- Fixed ESM model loading compatibility with PyTorch 2.6 and later versions.
Full Changelog: v0.6.0...v0.6.1
v0.6.0: Optimized kernels and upgraded dependencies for enhanced performance in PyTorch 2.4+
What's Changed
- Optimized the custom LayerNorm kernel, further boosting end-to-end inference and training speed.
- Integrated a custom Triton-based implementation of the TriangleAttention operator (triattention), improving computational efficiency.
- Integrated the cuEquivariance operator from NVIDIA/cuEquivariance to accelerate equivariant operations, with notable efficiency gains in the TriangleAttention and TriangleMultiplication modules.
- Upgraded the container image and dependencies to resolve efficiency bottlenecks in PyTorch 2.4 and later versions; added support for Biotite 1.2 and above.
Full Changelog: v0.5.5...v0.6.0
v0.5.5: Fix inference cache directory for users without root permissions.
- Fix inference cache directory for users without root permissions.
Full Changelog: v0.5.4...v0.5.5
v0.5.4: fix redundant scaling in F.scaled_dot_product_attention
What's Changed
- find all new chain starts by inspecting atom_array.hetero[c_start:c_stop] by @OccupyMars2025 in #157
- some minor fixes for pdb_to_cif() execution flow by @OccupyMars2025 in #158
- fix redundant scaling in F.scaled_dot_product_attention when use_memory_efficient_kernel is enabled
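The fix addresses a classic double-scaling pitfall: F.scaled_dot_product_attention already multiplies q·k by 1/sqrt(d) internally, so pre-scaling the query applies the factor twice, yielding 1/d instead of 1/sqrt(d). A framework-free sketch of the arithmetic (hypothetical helper, stdlib only):

```python
import math

def sdp_score(q, k):
    """Scaled dot-product score for 1-D q, k: dot(q, k) / sqrt(d)."""
    d = len(q)
    return sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)

q = [1.0, 0.0, 0.0, 0.0]
k = [1.0, 0.0, 0.0, 0.0]

# Correct: the 1/sqrt(d) scale is applied once, inside sdp_score.
once = sdp_score(q, k)  # 1 / sqrt(4) = 0.5

# Bug: pre-scaling q before calling a routine that scales again
# yields dot(q, k) / d instead of dot(q, k) / sqrt(d).
pre_scaled_q = [qi / math.sqrt(len(q)) for qi in q]
twice = sdp_score(pre_scaled_q, k)  # 1 / 4 = 0.25
```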
Full Changelog: v0.5.3...v0.5.4
