towhee.trainer.optimization.adafactor.Adafactor¶
- class towhee.trainer.optimization.adafactor.Adafactor(params, lr=None, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=True, warmup_init=False)[source]¶
Bases: Optimizer
AdaFactor PyTorch implementation as introduced in Adafactor: Adaptive Learning Rates with Sublinear Memory Cost (https://arxiv.org/abs/1804.04235).
- Parameters:
params (Iterable[nn.parameter.Parameter]) – Iterable of parameters to optimize or dictionaries defining parameter groups.
lr (float, optional) – The external learning rate.
eps (Tuple[float, float], optional) – Regularization constants for square gradient and parameter scale respectively.
clip_threshold (float, optional) – Threshold of root mean square of final gradient update.
decay_rate (float, optional) – Coefficient used to compute running averages of the squared gradient.
beta1 (float, optional) – Coefficient used for computing running averages of gradient.
weight_decay (float, optional) – Weight decay (L2 penalty).
scale_parameter (bool, optional) – If True, the learning rate is scaled by the root mean square of the parameter.
relative_step (bool, optional) – If True, time-dependent learning rate is computed instead of external learning rate.
warmup_init (bool, optional) – If True, the time-dependent learning rate uses a warm-up initialization (only meaningful when relative_step is True).
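Example (a minimal sketch, not part of the original docs): constructing the optimizer with its defaults, where lr=None and relative_step=True mean the learning rate is computed internally from the step count. The toy nn.Linear model and random tensors are placeholders; note that in this implementation supplying an external lr generally requires relative_step=False.

    import torch
    from torch import nn
    from towhee.trainer.optimization.adafactor import Adafactor

    # Toy model and batch, used only to illustrate the call pattern.
    model = nn.Linear(16, 2)
    inputs = torch.randn(8, 16)
    targets = torch.randint(0, 2, (8,))

    # Defaults: lr=None, relative_step=True, scale_parameter=True,
    # so the time-dependent learning rate is used.
    optimizer = Adafactor(model.parameters())

    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()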
Methods
- add_param_group – Add a param group to the Optimizer's param_groups.
- load_state_dict – Loads the optimizer state.
- profile_hook_step
- register_step_post_hook – Register an optimizer step post hook which will be called after optimizer step.
- register_step_pre_hook – Register an optimizer step pre hook which will be called before optimizer step.
- state_dict – Returns the state of the optimizer as a dict.
- step – Performs a single optimization step.
- zero_grad – Sets the gradients of all optimized torch.Tensors to zero.
- __init__(params, lr=None, eps=(1e-30, 0.001), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, scale_parameter=True, relative_step=True, warmup_init=False)[source]¶
- __repr__()¶
Return repr(self).
- add_param_group(param_group)¶
Add a param group to the Optimizer's param_groups. This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the Optimizer as training progresses (see the sketch below).
- Parameters:
param_group (dict) – Specifies what Tensors should be optimized along with group-specific optimization options.
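A hedged sketch of that fine-tuning pattern: the optimizer starts with only the head's parameters, and the backbone is handed over as a new param group once it is unfrozen. model.head and model.backbone are hypothetical attribute names.

    # Hypothetical model with .head and .backbone sub-modules.
    optimizer = Adafactor(model.head.parameters())

    # Later in training: unfreeze the backbone and add it to the optimizer.
    for p in model.backbone.parameters():
        p.requires_grad = True
    optimizer.add_param_group({"params": model.backbone.parameters()})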
- load_state_dict(state_dict)¶
Loads the optimizer state.
- Parameters:
state_dict (dict) – optimizer state. Should be an object returned from a call to state_dict().
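For illustration, a save/restore round trip pairing state_dict() with load_state_dict(); the checkpoint path is a placeholder:

    # Save model and optimizer state together.
    torch.save({"model": model.state_dict(), "optim": optimizer.state_dict()}, "ckpt.pt")

    # Restore both when resuming training.
    checkpoint = torch.load("ckpt.pt")
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optim"])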
- register_step_post_hook(hook: Callable[[...], None]) → RemovableHandle¶
Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None
The optimizer argument is the optimizer instance being used.
- Parameters:
hook (Callable) – The user defined hook to be registered.
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
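A minimal sketch of a post hook matching the signature above; reading the per-parameter "step" counter from optimizer.state is an internal detail assumed here, not part of the documented API:

    def log_step(optimizer, args, kwargs):
        # Each parameter's state dict holds a "step" counter after the first update.
        steps = [s.get("step", 0) for s in optimizer.state.values()]
        print("max step:", max(steps, default=0))

    handle = optimizer.register_step_post_hook(log_step)
    optimizer.step()
    handle.remove()  # detach the hook when it is no longer needed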
- register_step_pre_hook(hook: Callable[[...], None]) → RemovableHandle¶
Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:
hook(optimizer, args, kwargs) -> None or modified args and kwargs
The optimizer argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
- Parameters:
hook (Callable) – The user defined hook to be registered.
- Returns:
a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
torch.utils.hooks.RemovableHandle
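A minimal pre-hook sketch; returning None leaves args and kwargs unchanged, and the last_step_started attribute is an arbitrary name used only for illustration:

    import time

    def timestamp_step(optimizer, args, kwargs):
        optimizer.last_step_started = time.time()  # record when the step began

    handle = optimizer.register_step_pre_hook(timestamp_step)
    optimizer.step()
    handle.remove()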
- state_dict()¶
Returns the state of the optimizer as a dict. It contains two entries:
- state – a dict holding current optimization state. Its content differs between optimizer classes.
- param_groups – a list containing all parameter groups, where each parameter group is a dict.
- step(closure=None)[source]¶
Performs a single optimization step.
- Parameters:
closure (callable, optional) – A closure that reevaluates the model and returns the loss.
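A hedged sketch of the closure form: the optimizer calls the closure itself to recompute the loss before applying the update. model, inputs, and targets are the placeholders from the earlier example.

    def closure():
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        return loss

    loss = optimizer.step(closure)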
- zero_grad(set_to_none: bool = True)¶
Sets the gradients of all optimized torch.Tensors to zero.
- Parameters:
set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have a lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example: 1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently. 2. If the user requests zero_grad(set_to_none=True) followed by a backward pass, .grads are guaranteed to be None for params that did not receive a gradient. 3. torch.optim optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
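For illustration, the two modes side by side (assuming gradients were populated by a prior backward pass, as in the earlier example):

    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()

    optimizer.zero_grad(set_to_none=False)  # keeps .grad tensors, fills them with zeros
    optimizer.zero_grad()                   # default set_to_none=True: .grad becomes None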