ConstantLR
- class mmengine.optim.ConstantLR(optimizer, *args, **kwargs)[source]
Decays the learning rate value of each parameter group by a small constant factor until the number of epochs reaches a pre-defined milestone: end. Notice that such decay can happen simultaneously with other changes to the learning rate value from outside this scheduler.
- Parameters:
optimizer (Optimizer or OptimWrapper) – Wrapped optimizer.
factor (float) – The constant factor the learning rate is multiplied by until the milestone is reached. Defaults to 1./3.
begin (int) – Step at which to start updating the learning rate. Defaults to 0.
end (int) – Step at which to stop updating the learning rate. Defaults to INF.
last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.
by_epoch (bool) – Whether the scheduled learning rate is updated by epochs. Defaults to True.
verbose (bool) – Whether to print the learning rate for each update. Defaults to False.
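As a rough sketch of the schedule's behavior, the snippet below drives a ConstantLR instance manually, outside of a Runner. The factor=0.5, begin=0, and end=5 values are illustrative choices, not defaults: the learning rate stays scaled by the factor at the start of training and is restored to its base value as the epoch counter reaches the end milestone.

```python
import torch
from mmengine.optim import ConstantLR

# Toy model and optimizer with a base learning rate of 0.1.
model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Scale the learning rate to 0.1 * 0.5 = 0.05 at the beginning of training
# and restore the base value of 0.1 once the epoch counter reaches end.
# factor, begin and end are illustrative values, not defaults.
scheduler = ConstantLR(optimizer, factor=0.5, begin=0, end=5, by_epoch=True)

for epoch in range(8):
    # ... one epoch of training would go here ...
    scheduler.step()
    print(epoch, optimizer.param_groups[0]['lr'])
```

In a full MMEngine training setup the same schedule would typically be declared through the param_scheduler field of the config rather than constructed by hand, e.g. param_scheduler = dict(type='ConstantLR', factor=0.5, begin=0, end=5).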