trainer.optimization package

Submodules

trainer.optimization.adafactor module

class trainer.optimization.adafactor.Adafactor(params, lr=None, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=True, warmup_init=False)[source]

Bases: Optimizer

PyTorch implementation of AdaFactor, as introduced in Adafactor: Adaptive Learning Rates with Sublinear Memory Cost (https://arxiv.org/abs/1804.04235).

Parameters
  • params (Iterable[nn.parameter.Parameter]) – Iterable of parameters to optimize or dictionaries defining parameter groups.

  • lr (float, optional) – The external learning rate.

  • eps (Tuple[float, float], optional) – Regularization constants for the squared gradient and the parameter scale, respectively.

  • clip_threshold (float, optional) – Threshold of the root mean square of the final gradient update.

  • decay_rate (float, optional) – Coefficient used to compute the running averages of the squared gradient.

  • beta1 (float, optional) – Coefficient used for computing running averages of gradient.

  • weight_decay (float, optional) – Weight decay (L2 penalty).

  • scale_parameter (bool, optional) – If True, the learning rate is scaled by the root mean square of the parameter.

  • relative_step (bool, optional) – If True, a time-dependent learning rate is computed instead of using the external learning rate.

  • warmup_init (bool, optional) – If True, the time-dependent learning rate is warmed up from a small initial value instead of starting at its full magnitude.
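
Example

A minimal usage sketch, assuming the import path shown above and the paper-recommended configuration in which no external learning rate is supplied and relative_step derives the step size internally:

import torch
import torch.nn as nn

from trainer.optimization.adafactor import Adafactor  # import path assumed from this page

model = nn.Linear(16, 4)

# Paper-style setup: no external learning rate, parameter scaling on,
# and a time-dependent (relative) step size.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=False,
)

inputs = torch.randn(8, 16)
targets = torch.randn(8, 4)

loss = nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()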

step(closure=None)[source]

Performs a single optimization step.

Parameters

closure (callable, optional) – A closure that reevaluates the model and returns the loss.
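
A minimal sketch of the closure form, assuming step() follows the standard torch.optim.Optimizer contract in which the closure re-evaluates the model and returns the loss:

import torch
import torch.nn as nn

from trainer.optimization.adafactor import Adafactor  # import path assumed from this page

model = nn.Linear(16, 4)
optimizer = Adafactor(model.parameters(), lr=None, relative_step=True)
inputs, targets = torch.randn(8, 16), torch.randn(8, 4)

def closure():
    # Re-evaluate the model and return the loss so the optimizer can use it.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    return loss

loss = optimizer.step(closure)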

trainer.optimization.adamw module

class trainer.optimization.adamw.AdamW(params: Iterable[Parameter], lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-06, weight_decay: float = 0.0, correct_bias: bool = True)[source]

Bases: Optimizer

Implements the Adam algorithm with the weight decay fix, as introduced in Decoupled Weight Decay Regularization (https://arxiv.org/abs/1711.05101).

Parameters
  • params (Iterable[nn.parameter.Parameter]) – Iterable of parameters to optimize or dictionaries defining parameter groups.

  • lr (float, optional) – The learning rate to use.

  • betas (Tuple[float, float], optional) – Adam’s betas parameters (b1, b2).

  • eps (float, optional) – Adam’s epsilon for numerical stability.

  • weight_decay (float, optional) – Decoupled weight decay to apply.

  • correct_bias (bool, optional) – Whether or not to correct bias in Adam.
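
Example

A minimal usage sketch, assuming the import path shown above. Passing parameter group dictionaries (as the params description allows) is a common way to exclude biases and LayerNorm weights from weight decay; that split is a convention, not a requirement of this class:

import torch
import torch.nn as nn

from trainer.optimization.adamw import AdamW  # import path assumed from this page

model = nn.Sequential(nn.Linear(16, 16), nn.LayerNorm(16), nn.Linear(16, 4))

# Apply decoupled weight decay to weight matrices only.
decay, no_decay = [], []
for module in model.modules():
    for name, param in module.named_parameters(recurse=False):
        if name == "bias" or isinstance(module, nn.LayerNorm):
            no_decay.append(param)
        else:
            decay.append(param)

optimizer = AdamW(
    [
        {"params": decay, "weight_decay": 0.01},
        {"params": no_decay, "weight_decay": 0.0},
    ],
    lr=1e-3,
    betas=(0.9, 0.999),
    eps=1e-6,
    correct_bias=True,
)

inputs, targets = torch.randn(8, 16), torch.randn(8, 4)
loss = nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
optimizer.zero_grad()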

step(closure: Optional[Callable] = None)[source]

Performs a single optimization step.

Parameters

closure (Callable, optional) – A closure that reevaluates the model and returns the loss.

trainer.optimization.optimization module

Module contents