LogProcessor

class mmengine.runner.LogProcessor(window_size=10, by_epoch=True, custom_cfg=None, num_digits=4, log_with_hierarchy=False, mean_pattern='.*(loss|time|data_time|grad_norm).*')[source]

A log processor used to format log information collected from runner.message_hub.log_scalars.

A LogProcessor instance is built by the runner and formats runner.message_hub.log_scalars into tag and log_str, which can be used directly by LoggerHook and MMLogger. Besides, the custom_cfg argument of the constructor controls the statistics method applied to the logs.

Parameters
  • window_size (int) – The default smoothing window size of log values. Defaults to 10.

  • by_epoch (bool) – Whether to format logs in epoch style. Defaults to True.

  • custom_cfg (list[dict], optional) –

    Contains multiple log config dicts, in which the key means the data source name of the log and the value means the statistic method and corresponding arguments used to count the data source. Defaults to None.

    • If custom_cfg is None, all logs are formatted via default methods, such as smoothing loss with the default window_size. If custom_cfg is defined as a list of config dicts, for example: [dict(data_src='loss', method_name='mean', log_name='global_loss', window_size='global')], the log item loss will be counted as a global mean and additionally logged as global_loss (defined by log_name). If log_name is not defined in the config dict, the original logged key will be overwritten.

    • The original log item cannot be overwritten twice. Here is an erroneous example: [dict(data_src='loss', method_name='mean', window_size='global'), dict(data_src='loss', method_name='mean', window_size='epoch')]. Neither log config dict in custom_cfg has a log_name key, so the loss item would be overwritten twice.

    • For those statistic methods with a window_size argument, if by_epoch is set to False, window_size must not be 'epoch', since log values cannot be counted by epoch in iteration-based training.

  • num_digits (int) – The number of significant digits shown in the logging message. Defaults to 4.

  • log_with_hierarchy (bool) – Whether to log with hierarchy. If it is True, the information is written to visualizer backend such as LocalVisBackend and TensorboardBackend with hierarchy. For example, loss will be saved as train/loss, and accuracy will be saved as val/accuracy. Defaults to False. New in version 0.7.0.

  • mean_pattern (str) – A regular expression used to match the logs that need to be included in the smoothing statistics. New in version 0.7.3.
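
    As an illustration, the default mean_pattern can be exercised directly with Python's re module to see which scalar keys fall under the smoothing statistics (the key names below are examples, not a fixed set):

    ```python
    import re

    # Default pattern from the constructor signature above.
    mean_pattern = re.compile('.*(loss|time|data_time|grad_norm).*')

    # Example scalar keys one might find in message_hub.log_scalars.
    keys = ['loss', 'loss_cls', 'data_time', 'grad_norm', 'lr', 'accuracy']
    smoothed = [k for k in keys if mean_pattern.match(k)]
    print(smoothed)  # ['loss', 'loss_cls', 'data_time', 'grad_norm']
    ```

    Keys such as lr and accuracy do not match and are therefore reported as-is rather than smoothed.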

Examples

>>> # `log_name` is defined, so `loss_large_window` will be an additional
>>> # record.
>>> log_processor = dict(
>>>     window_size=10,
>>>     by_epoch=True,
>>>     custom_cfg=[dict(data_src='loss',
>>>                      log_name='loss_large_window',
>>>                      method_name='mean',
>>>                      window_size=100)])
>>> # `log_name` is not defined, so `loss` will be overwritten.
>>> log_processor = dict(
>>>     window_size=10,
>>>     by_epoch=True,
>>>     custom_cfg=[dict(data_src='loss',
>>>                      method_name='mean',
>>>                      window_size=100)])
>>> # Record `loss` with different statistics methods.
>>> log_processor = dict(
>>>     window_size=10,
>>>     by_epoch=True,
>>>     custom_cfg=[dict(data_src='loss',
>>>                      log_name='loss_large_window',
>>>                      method_name='mean',
>>>                      window_size=100),
>>>                 dict(data_src='loss',
>>>                      method_name='mean',
>>>                      window_size=100)])
>>> # Overwriting the `loss` item twice raises an error.
>>> log_processor = dict(
>>>     window_size=10,
>>>     by_epoch=True,
>>>     custom_cfg=[dict(data_src='loss',
>>>                      method_name='mean',
>>>                      window_size=100),
>>>                 dict(data_src='loss',
>>>                      method_name='max',
>>>                      window_size=100)])
AssertionError
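
The effect of a numeric window_size versus 'global' can be sketched outside the runner. windowed_mean below is a hypothetical helper, not part of mmengine; it only mimics the smoothing behavior described above:

```python
from collections import deque

def windowed_mean(values, window_size):
    """Mean over the last `window_size` values, or over all values
    when `window_size` is 'global'. Hypothetical helper mimicking the
    smoothing that `custom_cfg` configures."""
    if window_size == 'global':
        return sum(values) / len(values)
    # deque with maxlen keeps only the most recent `window_size` items.
    tail = deque(values, maxlen=window_size)
    return sum(tail) / len(tail)

losses = [4.0, 3.0, 2.0, 1.0]
print(windowed_mean(losses, 2))         # mean of last 2 values -> 1.5
print(windowed_mean(losses, 'global'))  # mean of all values -> 2.5
```

With log_name set, both statistics could coexist as separate records; without it, the smoothed value replaces the original loss entry, as the examples above show.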
get_log_after_epoch(runner, batch_idx, mode, with_non_scalar=False)[source]

Format log string after validation or testing epoch.

Parameters
  • runner (Runner) – The runner of validation/testing phase.

  • batch_idx (int) – The index of the current batch in the current loop.

  • mode (str) – Current mode of runner.

  • with_non_scalar (bool) – Whether to include non-scalar infos in the returned tag. Defaults to False.

Returns

Formatted log dict/string which will be recorded by runner.message_hub and runner.visualizer.

Return type

Tuple[dict, str]

get_log_after_iter(runner, batch_idx, mode)[source]

Format log string after a training, validation or testing iteration.

Parameters
  • runner (Runner) – The runner of the training, validation or testing phase.

  • batch_idx (int) – The index of the current batch in the current loop.

  • mode (str) – Current mode of the runner: 'train', 'val' or 'test'.

Returns

Formatted log dict/string which will be recorded by runner.message_hub and runner.visualizer.

Return type

Tuple[dict, str]
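
Both methods return a (tag, log_str) pair. As a rough sketch (not the actual implementation), the log_str side of that pair could render each scalar in tag with num_digits significant digits; format_scalars below is a hypothetical helper:

```python
def format_scalars(tag, num_digits=4):
    """Hypothetical helper: render each scalar in `tag` with
    `num_digits` significant digits, as the `num_digits` argument
    described above controls for the real log string."""
    return '  '.join(f'{k}: {v:.{num_digits}g}' for k, v in tag.items())

tag = {'loss': 0.123456, 'lr': 0.01}
print(format_scalars(tag))  # loss: 0.1235  lr: 0.01
```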
