DG: Add optional LR scheduler support via train_params #179

Open
dagarfield wants to merge 1 commit into main from lr-scheduler

Conversation

@dagarfield
Collaborator

Description

configure_optimizers now supports an optional learning rate scheduler configured via train_params["lr_scheduler"]. When set to a dict, a scheduler is created and returned alongside the optimizer. When None (the default), behavior is unchanged.

Supported scheduler types:

  • "reduce_on_plateau"ReduceLROnPlateau with factor, patience, monitor
  • "step"StepLR with step_size, gamma
  • "cosine"CosineAnnealingLR with T_max (defaults to max_epochs)

Example usage:

train_params = {
    "task": "regression",
    "loss": "mse",
    "lr": 1e-4,
    "optimizer": "adam",
    "lr_scheduler": {
        "type": "reduce_on_plateau",
        "patience": 3,
        "factor": 0.5,
        "monitor": "val_loss",
    },
    ...
}
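
For context, here is a minimal sketch of how the configure_optimizers logic could build a scheduler from this dict. It is illustrative only: the helper name, the parameter defaults, and the Lightning-style return shape are assumptions, not necessarily what this PR merges.

import torch

def build_optimizer_and_scheduler(model, train_params, max_epochs):
    # Hypothetical helper mirroring the behavior described above.
    optimizer = torch.optim.Adam(model.parameters(), lr=train_params["lr"])
    cfg = train_params.get("lr_scheduler")  # None by default: no scheduler
    if cfg is None:
        return optimizer  # unchanged behavior
    kind = cfg["type"]
    if kind == "reduce_on_plateau":
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, factor=cfg.get("factor", 0.1), patience=cfg.get("patience", 10)
        )
        # Plateau scheduling needs a metric to watch, so return a monitor key too.
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler,
                             "monitor": cfg.get("monitor", "val_loss")},
        }
    if kind == "step":
        scheduler = torch.optim.lr_scheduler.StepLR(
            optimizer, step_size=cfg["step_size"], gamma=cfg.get("gamma", 0.1)
        )
    elif kind == "cosine":
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
            optimizer, T_max=cfg.get("T_max", max_epochs)
        )
    else:
        raise ValueError(f"Unsupported lr_scheduler type: {kind!r}")
    return {"optimizer": optimizer, "lr_scheduler": scheduler}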

Related Issue(s)

N/A

How to test

  1. Train a model without lr_scheduler in train_params — verify behavior is identical to before.
  2. Train with a scheduler dict, e.g.:
     train_params["lr_scheduler"] = {"type": "step", "step_size": 5, "gamma": 0.1}
     Verify the learning rate decays every 5 epochs by checking the logged LR or the optimizer state (see the snippet below).
  3. Pass an invalid scheduler type and verify that a descriptive error is raised.
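
One quick way to check the decay in step 2 is to read the learning rate straight off the optimizer. This is a generic PyTorch snippet (the optimizer variable name is assumed), not code from this repository:

# After 5 epochs with the step config above and lr=1e-4,
# each param group's lr should have dropped to 1e-5.
for i, group in enumerate(optimizer.param_groups):
    print(f"param group {i}: lr = {group['lr']}")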

Allow train_params['lr_scheduler'] to accept a dict specifying a
scheduler type and its parameters. Supports reduce_on_plateau, step,
and cosine schedulers. Defaults to None (no scheduler), preserving
existing behavior.

Co-authored-by: Ona <no-reply@ona.com>
@dagarfield marked this pull request as ready for review on February 27, 2026 at 01:28.