Optimizers¶
Optimizers update the Model parameters based on the gradients.
class fairseq.optim.FP16Optimizer(args, params, fp32_optimizer, fp32_params)[source]¶
Wrap an optimizer to support FP16 (mixed precision) training.
backward(loss)[source]¶
Computes the sum of gradients of the given tensor w.r.t. graph leaves.
Compared to fairseq.optim.FairseqOptimizer.backward(), this function additionally dynamically scales the loss to avoid gradient underflow.
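The dynamic loss-scaling idea can be sketched in plain Python. This is an illustrative toy, not fairseq's actual implementation; the class name, method names, and the scaling constants are all assumptions:

```python
# Toy sketch of dynamic loss scaling (illustrative only, not fairseq's code).
# The loss is multiplied by a scale before backward() so that small FP16
# gradients do not underflow; gradients are later divided by the same scale.

class DynamicLossScaler:
    def __init__(self, init_scale=2.0 ** 15, scale_factor=2.0, scale_window=2000):
        self.scale = init_scale          # current loss scale
        self.scale_factor = scale_factor # multiply/divide step on adjust
        self.scale_window = scale_window # stable steps required before growing
        self._steps_since_overflow = 0

    def scale_loss(self, loss):
        # Scale the loss up before computing gradients.
        return loss * self.scale

    def update(self, overflow):
        # Shrink the scale when inf/nan gradients are seen; grow it back
        # after a long enough run of stable steps.
        if overflow:
            self.scale /= self.scale_factor
            self._steps_since_overflow = 0
        else:
            self._steps_since_overflow += 1
            if self._steps_since_overflow % self.scale_window == 0:
                self.scale *= self.scale_factor
```

After each step, gradients would be divided by `scale` before the wrapped optimizer applies them, and `update()` would be fed whether an overflow was detected.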
classmethod build_optimizer(args, params)[source]¶
Parameters:
- args (argparse.Namespace) – fairseq args
- params (iterable) – iterable of parameters to optimize
load_state_dict(state_dict, optimizer_overrides=None)[source]¶
Load an optimizer state dict.
In general we should prefer the configuration of the existing optimizer instance (e.g., learning rate) over that found in the state_dict. This allows us to resume training from a checkpoint using a new set of optimizer args.
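The override behavior can be sketched with a hypothetical helper (not fairseq's actual code); the state-dict layout here mirrors torch.optim's `param_groups` convention:

```python
# Illustrative sketch of applying optimizer_overrides when loading a state
# dict (not fairseq's implementation). Override values win over checkpoint
# values, so training can resume with, e.g., a new learning rate.

def load_with_overrides(state_dict, optimizer_overrides=None):
    state_dict = dict(state_dict)  # shallow copy; leave the checkpoint intact
    if optimizer_overrides:
        # Apply the overrides on top of every param group from the checkpoint.
        state_dict["param_groups"] = [
            {**group, **optimizer_overrides}
            for group in state_dict["param_groups"]
        ]
    return state_dict
```

Passing `optimizer_overrides={"lr": 0.001}` would thus replace the checkpointed learning rate while keeping all other settings from the checkpoint.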
optimizer¶
Return a torch.optim.optimizer.Optimizer instance.
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.MemoryEfficientFP16Optimizer(args, params, optimizer)[source]¶
Wrap an optimizer to support FP16 (mixed precision) training.
Compared to fairseq.optim.FP16Optimizer, this version does not maintain an FP32 copy of the model. We instead expect the optimizer to convert the gradients to FP32 internally and sync the results back to the FP16 model params. This significantly reduces memory usage but slightly increases the time spent in the optimizer.
Since this wrapper depends on specific functionality in the wrapped optimizer (i.e., on-the-fly conversion of grads to FP32), only certain optimizers can be wrapped. This is determined by the supports_memory_efficient_fp16 property.
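The gating described above can be sketched like this (a toy illustration; the optimizer class names and the `wrap_memory_efficient` helper are hypothetical):

```python
# Toy sketch of gating on supports_memory_efficient_fp16 (illustrative only).
# Only optimizers that convert grads to FP32 on the fly may be wrapped.

class ToyAdam:
    # Hypothetical optimizer that handles FP16 grads internally.
    supports_memory_efficient_fp16 = True

class ToySGD:
    # Hypothetical optimizer without on-the-fly FP32 conversion.
    supports_memory_efficient_fp16 = False

def wrap_memory_efficient(optimizer):
    # Refuse to wrap optimizers that cannot handle FP16 grads themselves.
    if not getattr(optimizer, "supports_memory_efficient_fp16", False):
        raise ValueError(
            "Unsupported optimizer: %s" % type(optimizer).__name__
        )
    return optimizer  # a real wrapper would return the wrapping instance
```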
backward(loss)[source]¶
Computes the sum of gradients of the given tensor w.r.t. graph leaves.
Compared to fairseq.optim.FairseqOptimizer.backward(), this function additionally dynamically scales the loss to avoid gradient underflow.
classmethod build_optimizer(args, params)[source]¶
Parameters:
- args (argparse.Namespace) – fairseq args
- params (iterable) – iterable of parameters to optimize
load_state_dict(state_dict, optimizer_overrides=None)[source]¶
Load an optimizer state dict.
In general we should prefer the configuration of the existing optimizer instance (e.g., learning rate) over that found in the state_dict. This allows us to resume training from a checkpoint using a new set of optimizer args.
optimizer¶
Return a torch.optim.optimizer.Optimizer instance.
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.FairseqOptimizer(args, params)[source]¶
load_state_dict(state_dict, optimizer_overrides=None)[source]¶
Load an optimizer state dict.
In general we should prefer the configuration of the existing optimizer instance (e.g., learning rate) over that found in the state_dict. This allows us to resume training from a checkpoint using a new set of optimizer args.
optimizer¶
Return a torch.optim.optimizer.Optimizer instance.
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
supports_memory_efficient_fp16¶
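The base-class contract above (an `optimizer` property plus a `supports_memory_efficient_fp16` flag) can be sketched in plain Python. This is a minimal toy, not fairseq's code; the class name and internals are assumptions:

```python
# Minimal sketch of a FairseqOptimizer-style wrapper contract (illustrative
# only). Subclasses hold a concrete optimizer and expose it via the
# `optimizer` property; the capability flag defaults to False.

class ToyFairseqOptimizer:
    def __init__(self, args, params):
        self.args = args
        self.params = list(params)
        self._optimizer = None  # subclasses set a concrete optimizer here

    @property
    def optimizer(self):
        # Return the wrapped optimizer instance.
        if self._optimizer is None:
            raise NotImplementedError("subclass must set _optimizer")
        return self._optimizer

    @property
    def supports_memory_efficient_fp16(self):
        # Conservative default: assume the wrapped optimizer cannot
        # convert FP16 grads to FP32 on the fly.
        return False
```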
class fairseq.optim.adadelta.Adadelta(args, params)[source]¶
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.adagrad.Adagrad(args, params)[source]¶
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.adafactor.FairseqAdafactor(args, params)[source]¶
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
Note: Convergence issues have been empirically observed with fp16 enabled; finding an appropriate configuration may require a hyperparameter search.
class fairseq.optim.adam.FairseqAdam(args, params)[source]¶
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.fp16_optimizer.FP16Optimizer(args, params, fp32_optimizer, fp32_params)[source]¶
Wrap an optimizer to support FP16 (mixed precision) training.
backward(loss)[source]¶
Computes the sum of gradients of the given tensor w.r.t. graph leaves.
Compared to fairseq.optim.FairseqOptimizer.backward(), this function additionally dynamically scales the loss to avoid gradient underflow.
classmethod build_optimizer(args, params)[source]¶
Parameters:
- args (argparse.Namespace) – fairseq args
- params (iterable) – iterable of parameters to optimize
load_state_dict(state_dict, optimizer_overrides=None)[source]¶
Load an optimizer state dict.
In general we should prefer the configuration of the existing optimizer instance (e.g., learning rate) over that found in the state_dict. This allows us to resume training from a checkpoint using a new set of optimizer args.
optimizer¶
Return a torch.optim.optimizer.Optimizer instance.
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.
class fairseq.optim.nag.FairseqNAG(args, params)[source]¶
optimizer_config¶
Return a kwarg dictionary that will be used to override optimizer args stored in checkpoints. This allows us to load a checkpoint and resume training using a different set of optimizer args, e.g., with a different learning rate.