Optimizers
Optimizers update the Model parameters based on the gradients.
class fairseq.optim.FP16Optimizer(args, params, fp32_optimizer, fp32_params)

    Wrap an optimizer to support FP16 (mixed precision) training.

    classmethod build_optimizer(args, params)

        Parameters:
            - args (argparse.Namespace) – fairseq args
            - params (iterable) – iterable of parameters to optimize

    optimizer

        Return a torch.optim.optimizer.Optimizer instance.

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
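A minimal sketch of this master-copy pattern in plain PyTorch (this is an illustration, not fairseq's implementation; loss scaling, which the real wrapper also handles, is omitted, and bfloat16 stands in for float16 so the example runs on CPU):

```python
import torch

# Hypothetical sketch of the FP32-master-copy pattern: the model holds
# low-precision weights, while the inner optimizer steps on an FP32 copy
# that is synced back to the model after each update.
model = torch.nn.Linear(4, 2).bfloat16()
fp32_params = [p.detach().float().requires_grad_() for p in model.parameters()]
inner_opt = torch.optim.SGD(fp32_params, lr=0.1)

loss = model(torch.randn(8, 4).bfloat16()).float().sum()
loss.backward()

before = [p.detach().clone() for p in model.parameters()]
for p16, p32 in zip(model.parameters(), fp32_params):
    p32.grad = p16.grad.float()          # upcast grads into the master copy
inner_opt.step()
with torch.no_grad():
    for p16, p32 in zip(model.parameters(), fp32_params):
        p16.copy_(p32)                   # sync FP32 result back to the model
```

Because the update is accumulated in FP32, small gradient contributions are not lost to low-precision rounding between steps.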
class fairseq.optim.MemoryEfficientFP16Optimizer(args, params, optimizer)

    Wrap an optimizer to support FP16 (mixed precision) training.

    Compared to fairseq.optim.FP16Optimizer, this version does not maintain an FP32 copy of the model. We instead expect the optimizer to convert the gradients to FP32 internally and sync the results back to the FP16 model params. This significantly reduces memory usage but slightly increases the time spent in the optimizer.

    Since this wrapper depends on specific functionality in the wrapped optimizer (i.e., on-the-fly conversion of grads to FP32), only certain optimizers can be wrapped. This is determined by the supports_memory_efficient_fp16 property.

    classmethod build_optimizer(args, params)

        Parameters:
            - args (argparse.Namespace) – fairseq args
            - params (iterable) – iterable of parameters to optimize

    optimizer

        Return a torch.optim.optimizer.Optimizer instance.

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
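The on-the-fly conversion can be sketched with a hand-rolled SGD step (a hypothetical stand-in for fairseq's wrapped optimizers): the gradient and parameter are upcast to FP32 only transiently inside the step, and no persistent FP32 copy is kept between steps.

```python
import torch

# Hypothetical sketch of the "memory-efficient" pattern: each step upcasts
# the gradient and parameter to FP32 transiently, applies the update, and
# writes the result back in low precision. (bfloat16 is used so the sketch
# runs on CPU; fairseq targets float16 on GPU.)
def memory_efficient_sgd_step(params, lr=0.1):
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            update = p.float() - lr * p.grad.float()  # transient FP32 math
            p.copy_(update.to(p.dtype))               # sync back to the model

model = torch.nn.Linear(4, 2).bfloat16()
loss = model(torch.randn(8, 4).bfloat16()).float().sum()
loss.backward()

before = [p.detach().clone() for p in model.parameters()]
memory_efficient_sgd_step(list(model.parameters()))
```

The trade-off stated above is visible here: the FP32 tensors live only for the duration of the step (less memory), at the cost of re-doing the conversion every step (more time in the optimizer).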
class fairseq.optim.FairseqOptimizer(args)

    load_state_dict(state_dict, optimizer_overrides=None)

        Load an optimizer state dict.

        In general we should prefer the configuration of the existing optimizer instance (e.g., learning rate) over that found in the state_dict. This allows us to resume training from a checkpoint using a new set of optimizer args.

    optimizer

        Return a torch.optim.optimizer.Optimizer instance.

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.

    param_groups

    params

        Return an iterable of the parameters held by the optimizer.

    supports_flat_params

        Whether the optimizer supports collapsing of the model parameters/gradients into a single contiguous Tensor.

    supports_memory_efficient_fp16

    supports_step_with_scale
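The override behavior that load_state_dict describes can be sketched with plain torch.optim; the `overrides` dict below is a hypothetical stand-in for what the optimizer_overrides argument accomplishes.

```python
import torch

# Restore optimizer state from a checkpoint, then re-apply the current run's
# hyperparameters (here the learning rate) on top of the restored groups.
params = [torch.nn.Parameter(torch.zeros(3))]
optimizer = torch.optim.SGD(params, lr=0.5)
checkpoint_state = optimizer.state_dict()         # pretend this came from disk

new_optimizer = torch.optim.SGD(params, lr=0.01)  # resumed run with a new lr
new_optimizer.load_state_dict(checkpoint_state)   # lr reverts to 0.5 ...

overrides = {"lr": 0.01}                          # ... so re-apply overrides
for group in new_optimizer.param_groups:
    group.update(overrides)
```

Without the override step, the checkpointed learning rate would silently win over the one passed on the command line of the resumed run.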
class fairseq.optim.adadelta.Adadelta(args, params)

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.

    supports_flat_params

        Whether the optimizer supports collapsing of the model parameters/gradients into a single contiguous Tensor.
class fairseq.optim.adagrad.Adagrad(args, params)

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.

    supports_flat_params

        Whether the optimizer supports collapsing of the model parameters/gradients into a single contiguous Tensor.
class fairseq.optim.adafactor.FairseqAdafactor(args, params)

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.

        Note: convergence issues have been observed empirically with fp16 enabled; finding an appropriate configuration may require some search.
class fairseq.optim.adam.FairseqAdam(args, params)

    Adam optimizer for fairseq.

    Important note: this optimizer corresponds to the “AdamW” variant of Adam in its weight decay behavior. As such, it is most closely analogous to torch.optim.AdamW from PyTorch.

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
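The distinction matters in practice: AdamW applies decoupled weight decay (shrinking the weights directly), while classic Adam folds L2 decay into the gradient before the adaptive normalization. PyTorch ships both variants, so the difference can be seen directly:

```python
import torch

# Identical starting points and hyperparameters, but Adam (L2-in-gradient)
# and AdamW (decoupled decay) take different trajectories.
w_adam = torch.nn.Parameter(torch.ones(1))
w_adamw = torch.nn.Parameter(torch.ones(1))
adam = torch.optim.Adam([w_adam], lr=0.1, weight_decay=0.1)
adamw = torch.optim.AdamW([w_adamw], lr=0.1, weight_decay=0.1)

for _ in range(10):
    for opt, w in ((adam, w_adam), (adamw, w_adamw)):
        opt.zero_grad()
        (w ** 2).sum().backward()
        opt.step()
```

Checkpoints trained with one decay convention generally should not be fine-tuned with the other, which is why the note above is flagged as important.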
class fairseq.optim.fp16_optimizer.FP16Optimizer(args, params, fp32_optimizer, fp32_params)

    Wrap an optimizer to support FP16 (mixed precision) training.

    classmethod build_optimizer(args, params)

        Parameters:
            - args (argparse.Namespace) – fairseq args
            - params (iterable) – iterable of parameters to optimize

    optimizer

        Return a torch.optim.optimizer.Optimizer instance.

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.nag.FairseqNAG(args, params)

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.sgd.SGD(args, params)

    optimizer_config

        Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.

    supports_flat_params

        Whether the optimizer supports collapsing of the model parameters/gradients into a single contiguous Tensor.
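What "collapsing into a single contiguous Tensor" means can be sketched as follows (a hypothetical illustration of the idea, not fairseq's flat-parameter machinery):

```python
import torch

# Several parameter tensors collapsed into one contiguous buffer, with views
# recovering the original shapes. Optimizers that support flat params can
# then update every parameter in a single fused operation over `flat`.
params = [torch.randn(3, 2), torch.randn(4)]
flat = torch.cat([p.reshape(-1) for p in params])   # one contiguous tensor

offset, views = 0, []
for p in params:
    views.append(flat[offset:offset + p.numel()].view_as(p))
    offset += p.numel()
```

A single in-place operation on `flat` (e.g. `flat.add_(grad_flat, alpha=-lr)`) then updates all parameters at once, which is cheaper than looping over many small tensors.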