Implement On-Policy Distillation #444
Merged: pan-x-c merged 15 commits into modelscope:main from garyzhang99:dev/on-policy-distilation on Dec 19, 2025.
Changes from all commits (15 commits):

- `bc88283` [WIP]init on-policy distillation
- `0381d17` merge main and fix wrapper type
- `d081921` assert instead of padding in workflow
- `b4ec917` change default loss to ppo with clip
- `2dd56e0` actual run and add on-policy distillation example in GSM8K
- `cb89407` Merge main into dev/on-policy-distillation
- `16e6c75` change model wrapper for workflows
- `297fd8f` make sure all workflow is changed
- `549ab51` change docs too
- `6444059` fix workflow test
- `fea1666` fix docs and comments
- `4da21a5` adjust based on comments
- `da28601` fix precommit
- `8aa7a51` add readme and odcs
- `23ca0ee` add readme and docs
New file: README for the GSM8K OPD example (36 additions).

# Example: On-Policy Distillation on GSM8K dataset

This example demonstrates training with the On-Policy Distillation (OPD) algorithm on the GSM8K dataset.

On-Policy Distillation is a knowledge distillation method. In this example:
1. The **student model** (`Qwen/Qwen2.5-1.5B-Instruct`) generates trajectories together with its logprobs.
2. The **teacher model** (`Qwen/Qwen2.5-Math-7B-Instruct`) computes logprobs on the same trajectories.
3. The advantage is computed as `advantages = kl_coef * (teacher_logprobs - student_logprobs)`.
4. The student model is trained to minimize this KL divergence, effectively learning from the teacher.

## Key Configuration

- **Algorithm**: `on_policy_distill`
- **Workflow**: `on_policy_distill_workflow`
- **Student Model**: `Qwen/Qwen2.5-1.5B-Instruct`
- **Teacher Model**: `Qwen/Qwen2.5-Math-7B-Instruct` (configured as an auxiliary model)

## Running the Example

Download the model checkpoints and adjust your config file, then run:
```bash
trinity run examples/opd_gsm8k/opd_gsm8k.yaml
```

Then you are all set! The setup should be pretty simple 😄, and training should converge quickly.




## References

- https://arxiv.org/pdf/2306.13649
- https://thinkingmachines.ai/blog/on-policy-distillation/
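Step 2 in the README above, where the teacher scores the student's own trajectory, amounts to a forward pass of the teacher over the student-generated token ids followed by gathering per-token log-probabilities. The PR wires this up through Trinity's workflow and auxiliary-model machinery; the snippet below is only a generic sketch of the idea using Hugging Face transformers, not the PR's implementation, and the function name, device setup, and tensor shapes are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM

# Teacher from this example; loading a 7B model needs a suitable GPU
# (assumption: a simple single-process toy setup, not the vLLM engines Trinity uses).
teacher = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
teacher.eval()


def teacher_logprobs(prompt_ids: torch.Tensor, response_ids: torch.Tensor) -> torch.Tensor:
    """Log-probabilities the teacher assigns to the student's response tokens.

    prompt_ids:   [1, P] token ids of the prompt
    response_ids: [1, R] token ids sampled by the student
    returns:      [1, R] teacher log-probs aligned with response_ids
    """
    input_ids = torch.cat([prompt_ids, response_ids], dim=1).to(teacher.device)  # [1, P + R]
    with torch.no_grad():
        logits = teacher(input_ids).logits                                       # [1, P + R, V]
    # Logits at position t predict token t + 1, so drop the last position and shift.
    logprobs = torch.log_softmax(logits[:, :-1].float(), dim=-1)                 # [1, P + R - 1, V]
    resp_logprobs = logprobs[:, prompt_ids.size(1) - 1 :]                        # slots predicting the response
    return resp_logprobs.gather(-1, response_ids.to(teacher.device).unsqueeze(-1)).squeeze(-1)
```

The resulting tensor lines up token for token with the student's own sampling logprobs, which is exactly the pairing the advantage function later in this PR consumes as `teacher_log_probs`.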
New file: examples/opd_gsm8k/opd_gsm8k.yaml (74 additions), the training config referenced by the README's run command.

```yaml
project: "Trinity-RFT-gsm8k-opd"
name: "qwen2.5-1.5B-distill-from-math-7B-lr1e-5"
checkpoint_root_dir: ${oc.env:TRINITY_CHECKPOINT_ROOT_DIR,./checkpoints}
algorithm:
  algorithm_type: on_policy_distill
  repeat_times: 8
  optimizer:
    lr: 1e-5
  advantage_fn_args:
    kl_coef: 1.0
model:
  # Student model
  model_path: ${oc.env:TRINITY_MODEL_PATH,Qwen/Qwen2.5-1.5B-Instruct}
  max_response_tokens: 1024
  max_model_len: 2048
cluster:
  node_num: 1
  gpu_per_node: 8
buffer:
  total_epochs: 1
  batch_size: 96
  explorer_input:
    taskset:
      name: gsm8k
      storage_type: file
      path: ${oc.env:TRINITY_TASKSET_PATH,openai/gsm8k}
      subset_name: main
      split: train
      format:
        prompt_key: 'question'
        response_key: 'answer'
      rollout_args:
        temperature: 1.0
    # Use on_policy_distill_math_workflow for Qwen2.5-Math style format with accuracy reward
    default_workflow_type: 'on_policy_distill_math_workflow'
  trainer_input:
    experience_buffer:
      name: gsm8k_opd_buffer
      storage_type: queue
explorer:
  eval_interval: 50
  runner_per_model: 8
  rollout_model:
    # Student model for rollout
    engine_num: 4
    tensor_parallel_size: 1
    enable_prefix_caching: false
    enforce_eager: true
    dtype: bfloat16
    seed: 42
  auxiliary_models:
    # Teacher model for distillation
    - model_path: ${oc.env:TRINITY_MODEL_PATH,Qwen/Qwen2.5-Math-7B-Instruct}
      engine_num: 1
      tensor_parallel_size: 2
      enable_prefix_caching: false
      enforce_eager: true
      dtype: bfloat16
      seed: 42
      max_model_len: 4096
      max_prompt_tokens: 2048
      max_response_tokens: 1024
synchronizer:
  sync_method: 'nccl'
  sync_interval: 1
  sync_timeout: 1200
trainer:
  save_interval: 100
  grad_clip: 1.0
  use_dynamic_bsz: true
  max_token_len_per_gpu: 16384
  ulysses_sequence_parallel_size: 1
monitor:
  monitor_type: wandb
```
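The `${oc.env:VAR,default}` entries in this config are OmegaConf environment-variable interpolations: the variable is used if set, otherwise the default after the comma applies. As a minimal sketch of inspecting the resolved values (the file path is an assumption taken from the README's run command, and the snippet uses plain OmegaConf rather than Trinity's own config loader):

```python
from omegaconf import OmegaConf

cfg = OmegaConf.load("examples/opd_gsm8k/opd_gsm8k.yaml")  # assumed path, per the README
OmegaConf.resolve(cfg)  # fills in ${oc.env:VAR,default} from env vars, else the defaults

print(cfg.algorithm.algorithm_type)                 # on_policy_distill
print(cfg.algorithm.advantage_fn_args.kl_coef)      # 1.0
print(cfg.explorer.auxiliary_models[0].model_path)  # Qwen/Qwen2.5-Math-7B-Instruct if TRINITY_MODEL_PATH is unset
```

Note that the student and teacher defaults both hang off the same `TRINITY_MODEL_PATH` variable, so setting that variable overrides both paths at once.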
New file: trinity/algorithm/advantage_fn/on_policy_distill_advantage.py (68 additions, 0 deletions).
```python
# -*- coding: utf-8 -*-
"""On-Policy Distillation advantage computation.

Reference: Tinker library's on-policy distillation.

advantages = -(student_logprobs - teacher_logprobs)
           = teacher_logprobs - student_logprobs
"""

from typing import Dict, Tuple

from verl import DataProto

from trinity.algorithm.advantage_fn.advantage_fn import AdvantageFn


class OnPolicyDistillAdvantage(AdvantageFn):
    """Advantage function for on-policy distillation.

    Computes: advantages = kl_coef * (teacher_logprobs - student_logprobs)

    The teacher_logprobs should be stored in Experience.teacher_logprobs
    by the workflow during exploration.
    """

    def __init__(self, kl_coef: float = 1.0) -> None:
        self.kl_coef = kl_coef

    def __call__(self, exps: DataProto, **kwargs) -> Tuple[DataProto, Dict]:
        """Compute advantages from teacher and student logprobs.

        Args:
            exps: DataProto containing:
                - old_log_probs: student's sampling logprobs [batch, seq]
                - teacher_log_probs: teacher's logprobs [batch, seq]
                - response_mask: mask for response tokens [batch, seq]

        Returns:
            exps: DataProto with advantages and returns added
            metrics: Dict with kl and advantage statistics
        """
        metrics = {}

        old_log_probs = exps.batch["old_log_probs"]  # student sampling logprobs
        teacher_log_probs = exps.batch["teacher_log_probs"]
        response_mask = exps.batch["response_mask"]

        # advantages = -(student - teacher) = teacher - student
        advantages = self.kl_coef * (teacher_log_probs - old_log_probs)

        # Apply mask
        advantages = advantages * response_mask

        exps.batch["advantages"] = advantages
        exps.batch["returns"] = advantages.clone()

        # Metrics
        kl_per_token = old_log_probs - teacher_log_probs
        kl_sum = (kl_per_token * response_mask).sum(dim=-1)
        metrics["kl/mean"] = kl_sum.mean().item()
        metrics["kl/std"] = kl_sum.std().item() if kl_sum.numel() > 1 else 0.0
        metrics["advantages/mean"] = advantages.sum(dim=-1).mean().item()

        return exps, metrics

    @classmethod
    def default_args(cls) -> Dict:
        return {"kl_coef": 1.0}
```
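To sanity-check the arithmetic without assembling a full verl `DataProto`, the hedged sketch below calls the new advantage function on toy tensors through a minimal stand-in object that only exposes the `.batch` dict the function actually touches. It assumes Trinity-RFT (and its verl dependency) is installed from this branch; the stand-in class and the tensor values are illustrative assumptions, not part of the PR.

```python
import torch

from trinity.algorithm.advantage_fn.on_policy_distill_advantage import (
    OnPolicyDistillAdvantage,
)


class ToyExps:
    """Stand-in for DataProto: the advantage function only reads and writes .batch."""

    def __init__(self, batch):
        self.batch = batch


exps = ToyExps({
    "old_log_probs": torch.tensor([[-1.2, -0.5, -2.0]]),      # student sampling logprobs
    "teacher_log_probs": torch.tensor([[-0.8, -0.6, -1.0]]),  # teacher logprobs
    "response_mask": torch.ones(1, 3),
})

adv_fn = OnPolicyDistillAdvantage(**OnPolicyDistillAdvantage.default_args())
exps, metrics = adv_fn(exps)

print(exps.batch["advantages"])  # tensor([[ 0.4000, -0.1000,  1.0000]])
print(metrics["kl/mean"])        # -1.3: here the student assigns lower total logprob to these tokens than the teacher
```

Tokens where the teacher is more confident than the student receive a positive advantage and get reinforced, while tokens where the student over-commits relative to the teacher are pushed down, which is the per-token behavior the README describes.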