- class mmengine.optim.ReduceOnPlateauParamScheduler(optimizer, param_name, monitor='loss', rule='less', factor=0.1, patience=10, threshold=0.0001, threshold_rule='rel', cooldown=0, min_value=0.0, eps=1e-08, begin=0, end=1000000000, last_step=-1, by_epoch=True, verbose=False)¶
Reduce the parameters of each parameter group when a metric has stopped improving. Models often benefit from reducing the parameters by a factor of 2-10 once learning stagnates. This scheduler reads a metric quantity and, if no improvement is seen for a patience number of epochs, reduces the parameters.
The implementation is motivated by PyTorch ReduceLROnPlateau.
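The core reduce-on-plateau logic can be sketched in plain Python. This is an illustrative simulation, not mmengine's actual implementation; the function name `simulate` is hypothetical, and the sketch assumes the default 'less' rule:

```python
# Illustrative simulation of reduce-on-plateau (hypothetical helper; not
# mmengine's actual implementation). Under the 'less' rule, the tracked
# value is multiplied by `factor` once the monitored metric has failed to
# improve for more than `patience` consecutive epochs.
def simulate(metrics, value=0.1, factor=0.1, patience=2):
    best = float("inf")
    num_bad_epochs = 0
    history = []
    for m in metrics:
        if m < best:                   # 'less' rule: lower is better
            best = m
            num_bad_epochs = 0
        else:
            num_bad_epochs += 1
        if num_bad_epochs > patience:  # patience exhausted: reduce
            value *= factor
            num_bad_epochs = 0
        history.append(value)
    return history

# loss improves, then plateaus; the value drops only after the plateau
# outlasts `patience`
print(simulate([1.0, 0.9, 0.9, 0.9, 0.9, 0.9]))
```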
optimizer (Optimizer or BaseOptimWrapper) – Optimizer or wrapped optimizer.
param_name (str) – Name of the parameter to be adjusted, such as lr, momentum.
monitor (str) – The name of the metric to measure whether the performance of the model is improved.
rule (str) – One of less, greater. In less rule, parameters will be reduced when the quantity monitored has stopped decreasing; in greater rule they will be reduced when the quantity monitored has stopped increasing. Defaults to ‘less’. The rule is the renaming of mode in ReduceLROnPlateau.
factor (float) – Factor by which the parameters will be reduced. new_param = param * factor. Defaults to 0.1.
patience (int) – Number of epochs with no improvement after which parameters will be reduced. For example, if patience = 2, then the first 2 epochs with no improvement are ignored, and the parameters are decreased only after the 3rd epoch if the monitored value still has not improved by then. Defaults to 10.
threshold (float) – Threshold for measuring the new optimum, to only focus on significant changes. Defaults to 1e-4.
threshold_rule (str) – One of rel, abs. In rel rule, dynamic_threshold = best * (1 + threshold) in greater rule or best * (1 - threshold) in less rule. In abs rule, dynamic_threshold = best + threshold in greater rule or best - threshold in less rule. Defaults to ‘rel’.
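The two threshold rules can be expressed directly as a small helper. This is a sketch of the formulas above; `dynamic_threshold` is a hypothetical name, not part of the mmengine API:

```python
# Sketch of the dynamic threshold described above ('rel' vs 'abs');
# `dynamic_threshold` is a hypothetical helper, not part of mmengine.
def dynamic_threshold(best, rule, threshold_rule, threshold):
    if threshold_rule == "rel":
        # relative margin around the best value seen so far
        return best * (1 + threshold) if rule == "greater" else best * (1 - threshold)
    # 'abs': fixed additive margin
    return best + threshold if rule == "greater" else best - threshold

# with rule='less' and threshold_rule='rel', a new loss counts as an
# improvement only if it falls below best * (1 - threshold)
print(dynamic_threshold(1.0, "less", "rel", 1e-4))
```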
cooldown (int) – Number of epochs to wait before resuming normal operation after parameters have been reduced. Defaults to 0.
min_value (float or list[float]) – A scalar or a list of scalars giving a lower bound on the parameters of each parameter group respectively. Defaults to 0.
eps (float) – Minimal decay applied to parameters. If the difference between the new and old parameter values is smaller than eps, the update is ignored. Defaults to 1e-8.
begin (int) – Step at which the scheduler starts monitoring the metric during validation; the interval is calculated in epochs of training. Defaults to 0.
end (int) – Step at which the scheduler stops monitoring the metric during validation; the interval is calculated in epochs of training. Defaults to INF.
last_step (int) – The index of last step. Used for resume without state dict. Defaults to -1.
by_epoch (bool) – Whether the scheduled parameters are updated by epochs. Defaults to True.
verbose (bool) – Whether to print the value for each update. Defaults to False.
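How cooldown and eps interact with a reduction can be illustrated by extending the earlier sketch. Again, this is hypothetical code assuming the 'less' rule, not mmengine's actual implementation:

```python
# Illustrative simulation of how `cooldown` and `eps` interact with a
# reduction (hypothetical code assuming the 'less' rule; not mmengine's
# actual implementation).
def simulate_with_cooldown(metrics, value=0.1, factor=0.5,
                           patience=1, cooldown=2, eps=1e-8):
    best = float("inf")
    num_bad_epochs = 0
    cooldown_counter = 0
    history = []
    for m in metrics:
        if m < best:
            best = m
            num_bad_epochs = 0
        elif cooldown_counter > 0:
            cooldown_counter -= 1        # bad epochs during cooldown are ignored
            num_bad_epochs = 0
        else:
            num_bad_epochs += 1
        if num_bad_epochs > patience:
            new_value = value * factor
            if value - new_value > eps:  # skip updates smaller than eps
                value = new_value
            num_bad_epochs = 0
            cooldown_counter = cooldown  # start the cooldown window
        history.append(value)
    return history

# a flat metric triggers one reduction, then cooldown suppresses the next
print(simulate_with_cooldown([1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
```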
- print_value(is_verbose, group, value)¶
Display the current parameter value.
- step(metrics=None)¶
Adjusts the parameter value of each parameter group based on the specified schedule.