towhee.trainer.scheduler.configure_polynomial_decay_scheduler_with_warmup
- towhee.trainer.scheduler.configure_polynomial_decay_scheduler_with_warmup(optimizer, num_warmup_steps, num_training_steps, lr_end=1e-07, power=1.0, last_epoch=-1)
Return a scheduler whose learning rate increases linearly from 0 to the initial lr set in the optimizer during a warmup period, then decreases from that initial lr to the end lr defined by lr_end, following a polynomial decay.
- Parameters:
optimizer (Optimizer) – The optimizer to be scheduled.
num_warmup_steps (int) – The number of steps in the warmup phase.
num_training_steps (int) – The total number of training steps.
lr_end (float) – The final learning rate at the end of the decay.
power (float) – The power factor of the polynomial decay.
last_epoch (int) – The index of the last epoch when training is resumed.
- Return (LambdaLR):
A polynomial decay scheduler with warmup.
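For reference, both phases are expressed as a single per-step multiplier on the initial lr, which LambdaLR then applies. The sketch below illustrates that multiplier, assuming the implementation follows the common warmup-plus-polynomial-decay formulation; the poly_decay_lambda helper is hypothetical and written only to make the math concrete, it is not part of the towhee API.

def poly_decay_lambda(current_step, num_warmup_steps, num_training_steps,
                      lr_init, lr_end=1e-7, power=1.0):
    # Hypothetical illustration of the schedule's per-step multiplier.
    if current_step < num_warmup_steps:
        # Warmup: the multiplier rises linearly from 0 to 1.
        return current_step / max(1, num_warmup_steps)
    if current_step > num_training_steps:
        # After training ends, hold the final learning rate.
        return lr_end / lr_init
    # Polynomial decay from lr_init down to lr_end.
    lr_range = lr_init - lr_end
    decay_steps = num_training_steps - num_warmup_steps
    pct_remaining = 1 - (current_step - num_warmup_steps) / decay_steps
    decay = lr_range * pct_remaining ** power + lr_end
    # LambdaLR multiplies the initial lr by the returned value.
    return decay / lr_init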
Example
>>> from towhee.trainer.scheduler import configure_polynomial_decay_scheduler_with_warmup
>>> from towhee.trainer.optimization.adamw import AdamW
>>> from torch import nn
>>> def unwrap_scheduler(scheduler, num_steps=10):
...     lr_sch = []
...     for _ in range(num_steps):
...         lr_sch.append(scheduler.get_lr()[0])
...         scheduler.step()
...     return lr_sch
>>> mdl = nn.Linear(50, 50)
>>> optimizer = AdamW(mdl.parameters(), lr=10.0)
>>> num_steps = 10
>>> num_warmup_steps = 2
>>> num_training_steps = 10
>>> power = 2.0
>>> lr_end = 1e-7
>>> scheduler = configure_polynomial_decay_scheduler_with_warmup(optimizer, num_warmup_steps, num_training_steps, lr_end=lr_end, power=power)
>>> lr_sch_1 = unwrap_scheduler(scheduler, num_steps)
>>> [round(lr, 3) for lr in lr_sch_1]
[0.0, 5.0, 10.0, 7.656, 5.625, 3.906, 2.5, 1.406, 0.625, 0.156]
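The printed schedule shows both phases: the learning rate warms up linearly from 0 to the initial lr of 10.0 over the first two steps, then decays quadratically (power=2.0) toward lr_end=1e-7 over the remaining eight steps.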