# mmengine.optim

## Optimizer

- `OptimWrapper`: Optimizer wrapper that provides a common interface for updating parameters.
- `AmpOptimWrapper`: A subclass of `OptimWrapper` that supports automatic mixed precision training based on `torch.cuda.amp`.
- `OptimWrapperDict`: A dictionary container of `OptimWrapper` instances.
- `DefaultOptimWrapperConstructor`: Default constructor for optimizers.
- `build_optim_wrapper`: Build function for `OptimWrapper`.
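The wrapper classes above share one idea: the training loop calls a single update method instead of sequencing backward, step, and gradient clearing itself. The sketch below illustrates that pattern in plain Python with stub objects; `StubOptimizer`, `MiniOptimWrapper`, and `StubLoss` are hypothetical names for illustration, not the library's internals.

```python
class StubOptimizer:
    """Stand-in for a torch optimizer; records the calls made to it."""
    def __init__(self):
        self.calls = []

    def step(self):
        self.calls.append("step")

    def zero_grad(self):
        self.calls.append("zero_grad")


class MiniOptimWrapper:
    """Illustrative sketch of the OptimWrapper idea: one update_params()
    call hides the backward/step/zero_grad sequence from the training loop."""
    def __init__(self, optimizer):
        self.optimizer = optimizer

    def update_params(self, loss):
        loss.backward()             # compute gradients
        self.optimizer.step()       # apply the parameter update
        self.optimizer.zero_grad()  # clear gradients for the next iteration


class StubLoss:
    """Stand-in for a loss tensor with a backward() method."""
    def backward(self):
        pass


wrapper = MiniOptimWrapper(StubOptimizer())
wrapper.update_params(StubLoss())
print(wrapper.optimizer.calls)  # ['step', 'zero_grad']
```

With the real library, the same loop shape applies: an `OptimWrapper` built around a `torch.optim` optimizer is driven via `optim_wrapper.update_params(loss)`, and `AmpOptimWrapper` keeps that interface while handling loss scaling for mixed precision.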

## Scheduler

- `_ParamScheduler`: Base class for parameter schedulers.
- `ConstantLR`: Decays the learning rate of each parameter group by a small constant factor until the number of epochs reaches a pre-defined milestone: `end`.
- `ConstantMomentum`: Decays the momentum of each parameter group by a small constant factor until the number of epochs reaches a pre-defined milestone: `end`.
- `ConstantParamScheduler`: Decays the parameter value of each parameter group by a small constant factor until the number of epochs reaches a pre-defined milestone: `end`.
- `CosineAnnealingLR`: Sets the learning rate of each parameter group using a cosine annealing schedule, where $$\eta_{max}$$ is set to the initial value and $$T_{cur}$$ is the number of epochs since the last restart in SGDR.
- `CosineAnnealingMomentum`: Sets the momentum of each parameter group using a cosine annealing schedule, where $$\eta_{max}$$ is set to the initial value and $$T_{cur}$$ is the number of epochs since the last restart in SGDR.
- `CosineAnnealingParamScheduler`: Sets the parameter value of each parameter group using a cosine annealing schedule, where $$\eta_{max}$$ is set to the initial value and $$T_{cur}$$ is the number of epochs since the last restart in SGDR.
- `ExponentialLR`: Decays the learning rate of each parameter group by `gamma` every epoch.
- `ExponentialMomentum`: Decays the momentum of each parameter group by `gamma` every epoch.
- `ExponentialParamScheduler`: Decays the parameter value of each parameter group by `gamma` every epoch.
- `LinearLR`: Decays the learning rate of each parameter group by linearly changing a small multiplicative factor until the number of epochs reaches a pre-defined milestone: `end`.
- `LinearMomentum`: Decays the momentum of each parameter group by linearly changing a small multiplicative factor until the number of epochs reaches a pre-defined milestone: `end`.
- `LinearParamScheduler`: Decays the parameter value of each parameter group by linearly changing a small multiplicative factor until the number of epochs reaches a pre-defined milestone: `end`.
- `MultiStepLR`: Decays the learning rate of each parameter group by `gamma` once the number of epochs reaches one of the milestones.
- `MultiStepMomentum`: Decays the momentum of each parameter group by `gamma` once the number of epochs reaches one of the milestones.
- `MultiStepParamScheduler`: Decays the specified parameter of each parameter group by `gamma` once the number of epochs reaches one of the milestones.
- `OneCycleLR`: Sets the learning rate of each parameter group according to the 1cycle learning rate policy.
- `OneCycleParamScheduler`: Sets the parameter value of each parameter group according to the 1cycle learning rate policy.
- `PolyLR`: Decays the learning rate of each parameter group in a polynomial decay scheme.
- `PolyMomentum`: Decays the momentum of each parameter group in a polynomial decay scheme.
- `PolyParamScheduler`: Decays the parameter value of each parameter group in a polynomial decay scheme.
- `StepLR`: Decays the learning rate of each parameter group by `gamma` every `step_size` epochs.
- `StepMomentum`: Decays the momentum of each parameter group by `gamma` every `step_size` epochs.
- `StepParamScheduler`: Decays the parameter value of each parameter group by `gamma` every `step_size` epochs.
- `ReduceOnPlateauLR`: Reduces the learning rate of each parameter group when a metric has stopped improving.
- `ReduceOnPlateauMomentum`: Reduces the momentum of each parameter group when a metric has stopped improving.
- `ReduceOnPlateauParamScheduler`: Reduces the parameter value of each parameter group when a metric has stopped improving.
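The decay rules summarized above reduce to short closed-form expressions. The functions below are illustrative sketches of the arithmetic implied by the descriptions (parameter names such as `gamma`, `step_size`, `milestones`, `T_max`, and `eta_min` follow the summaries above), not the library's implementation:

```python
import math

base_lr = 0.1  # initial learning rate for all examples

def exponential_lr(epoch, gamma=0.9):
    # ExponentialLR: multiply by gamma once per epoch
    return base_lr * gamma ** epoch

def step_lr(epoch, step_size=3, gamma=0.1):
    # StepLR: multiply by gamma every step_size epochs
    return base_lr * gamma ** (epoch // step_size)

def multi_step_lr(epoch, milestones=(8, 11), gamma=0.1):
    # MultiStepLR: multiply by gamma at each milestone already passed
    return base_lr * gamma ** sum(1 for m in milestones if epoch >= m)

def cosine_annealing_lr(epoch, T_max=10, eta_min=0.0):
    # CosineAnnealingLR: anneal from eta_max (the initial value) toward
    # eta_min along half a cosine period over T_max epochs
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * epoch / T_max))

print(round(step_lr(4), 6))         # one decay has fired after epoch 3 -> 0.01
print(round(multi_step_lr(12), 6))  # past both milestones -> 0.001
```

These closed forms are useful for sanity-checking a configured schedule, for example verifying that a `MultiStepLR` with `milestones=[8, 11]` has applied both decays by epoch 12.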