
SingleDeviceStrategy

class mmengine._strategy.SingleDeviceStrategy(*, work_dir='work_dirs', experiment_name=None, env_kwargs=None, log_kwargs=None, auto_scale_lr=None)[source]

Strategy for single device training.

Keyword Arguments
  • work_dir (str) – The working directory to save checkpoints. The logs will be saved in a subdirectory of work_dir named with the timestamp. Defaults to 'work_dirs'.

  • experiment_name (str, optional) – Name of the current experiment. If not specified, a timestamp will be used. Defaults to None.

  • env_kwargs (dict, optional) – Environment config passed to setup_env(). Defaults to None.

  • log_kwargs (dict, optional) – Logger config passed to build_logger(). Defaults to None.

  • auto_scale_lr (dict, optional) – Config to scale the learning rate automatically. It includes base_batch_size and enable. Defaults to None.
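
Example

A minimal sketch of driving the strategy by hand, assuming a toy nn.Linear model and an SGD config (both illustrative, not part of the API reference):

>>> import torch
>>> import torch.nn as nn
>>> from mmengine._strategy import SingleDeviceStrategy
>>> strategy = SingleDeviceStrategy(work_dir='work_dirs', experiment_name='toy_example')
>>> strategy.prepare(
...     nn.Linear(2, 1),
...     optim_wrapper=dict(type='OptimWrapper', optimizer=dict(type='SGD', lr=0.01)))
>>> # After prepare(), the built components are available on the strategy.
>>> loss = strategy.model(torch.rand(4, 2)).sum()
>>> strategy.optim_wrapper.update_params(loss)
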
convert_model(model)[source]

Convert the layers of the model.

Converts all SyncBatchNorm (SyncBN) and mmcv.ops.sync_bn.SyncBatchNorm (MMSyncBN) layers in the model to BatchNormXd layers.

Parameters

model (nn.Module) – Model to convert.

Return type

torch.nn.modules.module.Module
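
For example (a hedged sketch; the tiny Sequential model is illustrative, and strategy is the instance built above), a model containing SyncBatchNorm can be made runnable on a single device:

>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.SyncBatchNorm(8))
>>> net = strategy.convert_model(net)
>>> isinstance(net[1], nn.SyncBatchNorm)  # replaced by a BatchNormXd layer
False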

load_checkpoint(filename, *, map_location='cpu', strict=False, revise_keys=[('^module.', '')], callback=None)[source]

Load checkpoint from given filename.

Parameters
  • filename (str) – Accept local filepath, URL, torchvision://xxx, open-mmlab://xxx.

Keyword Arguments
  • map_location (str or callable) – A string or a callable function specifying how to remap storage locations. Defaults to 'cpu'.

  • strict (bool) – Whether to allow different params for the model and checkpoint. Defaults to False.

  • revise_keys (list) – A list of customized keywords to modify the state_dict in the checkpoint. Each item is a (pattern, replacement) pair of regular-expression operations. Defaults to stripping the prefix 'module.' via [(r'^module.', '')].

  • callback (callable, optional) – Callback function to modify the checkpoint after loading it. Defaults to None.

Return type

dict
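
Example

A sketch of a typical call (the checkpoint path is hypothetical); the revise_keys pair strips a 'module.' prefix left over from DataParallel training:

>>> ckpt = strategy.load_checkpoint(
...     'work_dirs/epoch_1.pth',
...     map_location='cpu',
...     revise_keys=[(r'^module.', '')])
>>> # The returned dict is the loaded checkpoint, typically with
>>> # 'state_dict' and 'meta' entries.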

prepare(model, *, optim_wrapper=None, param_scheduler=None, compile=False, dispatch_kwargs=None)[source]

Prepare model and some components.

Parameters
  • model (torch.nn.Module or dict) – The model to be prepared. If it is a dict, it will be used to build the model.

Keyword Arguments
  • optim_wrapper (BaseOptimWrapper or dict, optional) – Optimizer wrapper used to compute the gradients of the model parameters and update them. Defaults to None. See build_optim_wrapper() for examples.

  • param_scheduler (_ParamScheduler or dict or list, optional) – Parameter scheduler for updating optimizer parameters. If specified, optim_wrapper should also be specified. Defaults to None. See build_param_scheduler() for examples.

  • compile (bool or dict, optional) – Config to compile the model. Defaults to False. Requires PyTorch >= 2.0.

  • dispatch_kwargs (dict, optional) – Kwargs to be passed to other methods of Strategy. Defaults to None. If accumulative_counts is set in optim_wrapper, you need to provide max_iters in dispatch_kwargs.
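
Example

A sketch of a fuller call (model is any nn.Module; the configs are illustrative): because accumulative_counts is set in optim_wrapper, max_iters must be provided through dispatch_kwargs, as noted above.

>>> strategy.prepare(
...     model,
...     optim_wrapper=dict(
...         type='OptimWrapper',
...         accumulative_counts=2,
...         optimizer=dict(type='SGD', lr=0.01)),
...     param_scheduler=dict(type='LinearLR'),
...     dispatch_kwargs=dict(max_iters=1000))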

resume(filename, *, resume_optimizer=True, resume_param_scheduler=True, map_location='default', callback=None)[source]

Resume training from given filename.

Four types of states will be resumed.

  • model state

  • optimizer state

  • scheduler state

  • randomness state

Parameters
  • filename (str) – Accept local filepath, URL, torchvision://xxx, open-mmlab://xxx.

Keyword Arguments
  • resume_optimizer (bool) – Whether to resume optimizer state. Defaults to True.

  • resume_param_scheduler (bool) – Whether to resume param scheduler state. Defaults to True.

  • map_location (str or callable) – A string or a callable function specifying how to remap storage locations. Defaults to 'default'.

  • callback (callable, optional) – Callback function to modify the checkpoint after loading it. Defaults to None.

Return type

dict
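
Example

A sketch of resuming an interrupted run (the checkpoint path is hypothetical); the returned dict is the loaded checkpoint, so any extra state stored in it can be read back:

>>> ckpt = strategy.resume(
...     'work_dirs/epoch_3.pth',
...     resume_optimizer=True,
...     resume_param_scheduler=True)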

save_checkpoint(filename, *, save_optimizer=True, save_param_scheduler=True, extra_ckpt=None, callback=None)[source]

Save checkpoint to given filename.

Parameters
  • filename (str) – Filename to save the checkpoint to.

Keyword Arguments
  • save_optimizer (bool) – Whether to save the optimizer to the checkpoint. Defaults to True.

  • save_param_scheduler (bool) – Whether to save the param_scheduler to the checkpoint. Defaults to True.

  • extra_ckpt (dict, optional) – Extra checkpoint to save. Defaults to None.

  • callback (callable, optional) – Callback function to modify the checkpoint before saving it. Defaults to None.

Return type

None
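
Example

A sketch of saving with extra state (the path is hypothetical, and the layout of extra_ckpt is an assumption, not a documented schema):

>>> strategy.save_checkpoint(
...     'work_dirs/epoch_1.pth',
...     save_optimizer=True,
...     extra_ckpt=dict(meta=dict(epoch=1, iter=1000)))  # extra_ckpt layout assumed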
