towhee.trainer.optimization.adamw.AdamW

class towhee.trainer.optimization.adamw.AdamW(params: Iterable[Parameter], lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-06, weight_decay: float = 0.0, correct_bias: bool = True)[source]

Bases: Optimizer

Implements the Adam algorithm with the decoupled weight decay fix introduced in Decoupled Weight Decay Regularization <https://arxiv.org/abs/1711.05101>.

Parameters:

params (Iterable[nn.parameter.Parameter]) – Iterable of parameters to optimize or dicts defining parameter groups.

lr (float, optional) – The learning rate to use.

betas (Tuple[float, float], optional) – Adam's betas parameters (b1, b2).

eps (float, optional) – Adam's epsilon for numerical stability.

weight_decay (float, optional) – Decoupled weight decay to apply.

correct_bias (bool, optional) – Whether or not to correct bias in Adam.
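A minimal usage sketch (the Linear module here is only a stand-in for a real model):

    import torch
    from towhee.trainer.optimization.adamw import AdamW

    # Any torch.nn.Module works; a single Linear layer keeps the example small.
    model = torch.nn.Linear(10, 2)

    # Decoupled weight decay is applied directly to the weights,
    # independently of the adaptive per-parameter learning rates.
    optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)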

Methods

add_param_group

Add a param group to the Optimizer's param_groups.

load_state_dict

Loads the optimizer state.

state_dict

Returns the state of the optimizer as a dict.

step

Performs a single optimization step.

zero_grad

Sets the gradients of all optimized torch.Tensor objects to zero.

__init__(params: Iterable[Parameter], lr: float = 0.001, betas: Tuple[float, float] = (0.9, 0.999), eps: float = 1e-06, weight_decay: float = 0.0, correct_bias: bool = True)[source]
__repr__()

Return repr(self).

add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses.

Parameters:

param_group (dict) – Specifies what Tensors should be optimized along with group-specific optimization options.
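A sketch of the typical fine-tuning pattern, assuming a two-part model (the backbone and head below are illustrative stand-ins):

    import torch
    from towhee.trainer.optimization.adamw import AdamW

    backbone = torch.nn.Linear(10, 10)
    head = torch.nn.Linear(10, 2)

    # Train only the head at first; the backbone stays frozen.
    for p in backbone.parameters():
        p.requires_grad = False
    optimizer = AdamW(head.parameters(), lr=1e-3)

    # Later, unfreeze the backbone and register it as its own param group,
    # optionally with group-specific options such as a smaller learning rate.
    for p in backbone.parameters():
        p.requires_grad = True
    optimizer.add_param_group({"params": backbone.parameters(), "lr": 1e-4})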

load_state_dict(state_dict)

Loads the optimizer state.

Parameters:

state_dict (dict) – Optimizer state. Should be an object returned from a call to state_dict().
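A sketch of a save-and-restore round trip (serialized to an in-memory buffer for illustration; a file path would be used in practice):

    import io
    import torch
    from towhee.trainer.optimization.adamw import AdamW

    model = torch.nn.Linear(4, 2)
    optimizer = AdamW(model.parameters(), lr=1e-3)

    # Serialize the optimizer state.
    buffer = io.BytesIO()
    torch.save(optimizer.state_dict(), buffer)

    # Restore into a freshly built optimizer with the same parameter layout.
    buffer.seek(0)
    restored = AdamW(model.parameters(), lr=1e-3)
    restored.load_state_dict(torch.load(buffer))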

state_dict()

Returns the state of the optimizer as a dict.

It contains two entries:

  • state – a dict holding current optimization state. Its content differs between optimizer classes.

  • param_groups – a list containing all parameter groups, where each parameter group is a dict.
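A short sketch inspecting the two entries (the exact contents of state depend on how many steps have been taken):

    import torch
    from towhee.trainer.optimization.adamw import AdamW

    model = torch.nn.Linear(4, 2)
    optimizer = AdamW(model.parameters(), lr=1e-3)

    state = optimizer.state_dict()
    print(list(state.keys()))      # ['state', 'param_groups']
    print(state["param_groups"])   # per-group hyperparameters (lr, betas, eps, ...)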

step(closure: Optional[Callable] = None)[source]

Performs a single optimization step.

Parameters:

closure (Callable, optional) – A closure that reevaluates the model and returns the loss.
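A sketch of a single training step with placeholder data; the closure form is optional and simply re-evaluates the loss inside step():

    import torch
    from towhee.trainer.optimization.adamw import AdamW

    model = torch.nn.Linear(10, 1)
    optimizer = AdamW(model.parameters(), lr=1e-3)
    inputs, targets = torch.randn(8, 10), torch.randn(8, 1)

    # Usual pattern: zero the grads, backpropagate, then step.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()

    # Alternative pattern: pass a closure that recomputes the loss.
    def closure():
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        return loss

    optimizer.step(closure)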

zero_grad(set_to_none: bool = False)

Sets the gradients of all optimized torch.Tensor objects to zero.

Parameters:

set_to_none (bool) – Instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:

  1. When the user tries to access a gradient and perform manual ops on it, a None attribute and a Tensor full of 0s behave differently.

  2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient.

  3. torch.optim optimizers behave differently depending on whether the gradient is 0 or None (in one case the step is taken with a gradient of 0, and in the other the step is skipped altogether).
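A short sketch contrasting the two modes (set_to_none is passed explicitly so the result does not depend on the default):

    import torch
    from towhee.trainer.optimization.adamw import AdamW

    model = torch.nn.Linear(4, 1)
    optimizer = AdamW(model.parameters(), lr=1e-3)

    model(torch.randn(2, 4)).sum().backward()
    optimizer.zero_grad(set_to_none=False)
    print(model.weight.grad)   # a tensor of zeros

    model(torch.randn(2, 4)).sum().backward()
    optimizer.zero_grad(set_to_none=True)
    print(model.weight.grad)   # None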