towhee.trainer.scheduler.configure_cosine_with_hard_restarts_scheduler_with_warmup

towhee.trainer.scheduler.configure_cosine_with_hard_restarts_scheduler_with_warmup(optimizer: Optimizer, num_warmup_steps: int, num_training_steps: int, num_cycles: int = 1, last_epoch: int = -1)[source]

Return a scheduler whose learning rate decreases following the values of the cosine function from the initial lr set in the optimizer down to 0, with several hard restarts, after a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer.

Parameters:
  • optimizer (Optimizer) – The optimizer to be scheduled.

  • num_warmup_steps (int) – The number of steps in the warmup phase.

  • num_training_steps (int) – The total number of training steps.

  • num_cycles (int) – The number of hard restarts to use.

  • last_epoch (int) – The index of the last epoch when resuming training.

Return (LambdaLR):

A cosine with hard restarts scheduler with warmup.
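
Internally, the returned LambdaLR multiplies the initial lr set in the optimizer by a per-step factor: a linear ramp from 0 to 1 during warmup, followed by a cosine decay from 1 to 0 that restarts num_cycles times over the remaining steps. A minimal sketch of that factor, assuming the usual linear-warmup / hard-restart formulation (lr_factor is an illustrative name, not part of the towhee API):

import math

def lr_factor(current_step, num_warmup_steps, num_training_steps, num_cycles):
    # Linear warmup: the factor grows from 0 to 1 over the first num_warmup_steps steps.
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    # Fraction of the post-warmup phase already completed.
    progress = float(current_step - num_warmup_steps) / float(
        max(1, num_training_steps - num_warmup_steps))
    if progress >= 1.0:
        return 0.0
    # Cosine decay from 1 to 0, restarted num_cycles times ("hard restarts").
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((float(num_cycles) * progress) % 1.0))))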

Example

>>> from towhee.trainer.scheduler import configure_cosine_with_hard_restarts_scheduler_with_warmup
>>> from towhee.trainer.optimization.adamw import AdamW
>>> from torch import nn
>>> def unwrap_scheduler(scheduler, num_steps=10):
...     lr_sch = []
...     for _ in range(num_steps):
...         lr_sch.append(scheduler.get_lr()[0])  # record the current learning rate
...         scheduler.step()                      # advance the schedule by one step
...     return lr_sch
>>> mdl = nn.Linear(50, 50)
>>> optimizer = AdamW(mdl.parameters(), lr=10.0)
>>> num_steps = 10
>>> num_warmup_steps = 2
>>> num_training_steps = 10
>>> num_cycles = 2
>>> scheduler = configure_cosine_with_hard_restarts_scheduler_with_warmup(optimizer,
...     num_warmup_steps, num_training_steps, num_cycles)
>>> lr_sch_1 = unwrap_scheduler(scheduler, num_steps)
>>> lr_sch_1  # values shown rounded to two decimals
[0.0, 5.0, 10.0, 8.53, 5.0, 1.46, 10.0, 8.53, 5.0, 1.46]
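
In the printed schedule, the learning rate ramps linearly from 0 up to the initial value of 10.0 over the two warmup steps, then traces two cosine cycles, each restarting at 10.0 and decaying toward 0 before the next hard restart.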