SingleDeviceStrategy
- class mmengine._strategy.SingleDeviceStrategy(*, work_dir='work_dirs', experiment_name=None, env_kwargs=None, log_kwargs=None, auto_scale_lr=None)
Strategy for single device training.
- Parameters:
work_dir (str) – The working directory to save checkpoints and logs. Defaults to 'work_dirs'.
experiment_name (str, optional) – Name of the current experiment. If not specified, a timestamp will be used. Defaults to None.
env_kwargs (dict, optional) – Environment config passed to setup_env(). Defaults to None.
log_kwargs (dict, optional) – Logger config passed to build_logger(). Defaults to None.
auto_scale_lr (dict, optional) – Config to scale the learning rate automatically. It includes base_batch_size and enable. Defaults to None.
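The strategy can be constructed directly. A minimal sketch, assuming a local single-device run; the experiment name is a placeholder:

```python
from mmengine._strategy import SingleDeviceStrategy

# All constructor arguments are keyword-only; 'demo' is a placeholder name.
strategy = SingleDeviceStrategy(
    work_dir='work_dirs',
    experiment_name='demo',
)
```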
- convert_model(model)
Convert layers of the model.
Convert all SyncBatchNorm (SyncBN) and mmcv.ops.sync_bn.SyncBatchNorm (MMSyncBN) layers in the model to BatchNormXd layers.
- Parameters:
model (nn.Module) – Model to convert.
- Return type:
nn.Module
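A minimal sketch of the conversion, reusing the strategy instance from above; the toy model is hypothetical:

```python
import torch.nn as nn

# Toy model containing a SyncBatchNorm layer (hypothetical).
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.SyncBatchNorm(16))

# On a single device there is no process group, so SyncBN layers are
# replaced with their plain BatchNormXd equivalents.
model = strategy.convert_model(model)
print(type(model[1]))  # a BatchNorm2d-like layer after conversion
```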
- load_checkpoint(filename, *, map_location='cpu', strict=False, revise_keys=[('^module.', '')], callback=None)
Load checkpoint from the given filename.
- Parameters:
filename (str) – Accepts a local filepath, URL, torchvision://xxx, or open-mmlab://xxx.
- Keyword Arguments:
map_location (str or callable) – A string or a callable function specifying how to remap storage locations. Defaults to 'cpu'.
strict (bool) – Whether to allow different params for the model and checkpoint. Defaults to False.
revise_keys (list) – A list of customized keywords to modify the state_dict in the checkpoint. Each item is a (pattern, replacement) pair of regular expression operations. Defaults to stripping the prefix 'module.' by [(r'^module.', '')].
callback (callable, optional) – Callback function to modify the checkpoint after loading it. Defaults to None.
- Return type:
dict
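A sketch of loading a checkpoint, assuming the strategy has already been prepared with a model; the path and the callback are hypothetical:

```python
def drop_head(checkpoint):
    # Hypothetical callback: discard classifier weights after loading.
    checkpoint['state_dict'].pop('head.fc.weight', None)

# 'work_dirs/epoch_10.pth' is a placeholder path.
ckpt = strategy.load_checkpoint(
    'work_dirs/epoch_10.pth',
    map_location='cpu',
    strict=False,
    revise_keys=[(r'^module\.', '')],  # strip a DataParallel-style prefix
    callback=drop_head,
)
```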
- prepare(model, *, optim_wrapper=None, param_scheduler=None, compile=False, dispatch_kwargs=None)
Prepare model and some components.
- Parameters:
model (torch.nn.Module or dict) – The model to be run. It can be a dict used for building a model.
- Keyword Arguments:
optim_wrapper (BaseOptimWrapper or dict, optional) – Computes the gradients of model parameters and updates them. Defaults to None. See build_optim_wrapper() for examples.
param_scheduler (_ParamScheduler or dict or list, optional) – Parameter scheduler for updating optimizer parameters. If specified, optim_wrapper should also be specified. Defaults to None. See build_param_scheduler() for examples.
compile (dict, optional) – Config to compile the model. Defaults to False. Requires PyTorch >= 2.0.
dispatch_kwargs (dict, optional) – Kwargs to be passed to other methods of the Strategy. Defaults to None. If accumulative_counts is set in optim_wrapper, you need to provide max_iters in dispatch_kwargs, as shown in the sketch after this list.
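A sketch of preparing a model together with an optimizer wrapper. The model, SGD settings, and iteration budget are placeholders; because accumulative_counts is set, max_iters must be passed through dispatch_kwargs:

```python
import torch.nn as nn

model = nn.Linear(8, 2)  # placeholder model

strategy.prepare(
    model,
    optim_wrapper=dict(
        type='OptimWrapper',
        accumulative_counts=2,             # update weights every 2 iters
        optimizer=dict(type='SGD', lr=0.01),
    ),
    dispatch_kwargs=dict(max_iters=1000),  # required with accumulative_counts
)
# The prepared components are afterwards available as strategy.model
# and strategy.optim_wrapper.
```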
- resume(filename, *, resume_optimizer=True, resume_param_scheduler=True, map_location='default', callback=None)
Resume training from the given filename.
Four types of states will be resumed:
model state
optimizer state
scheduler state
randomness state
- Parameters:
filename (str) – Accepts a local filepath, URL, torchvision://xxx, or open-mmlab://xxx.
- Keyword Arguments:
resume_optimizer (bool) – Whether to resume optimizer state. Defaults to True.
resume_param_scheduler (bool) – Whether to resume param scheduler state. Defaults to True.
map_location (str or callable) – A string or a callable function specifying how to remap storage locations. Defaults to 'default'.
callback (callable, optional) – Callback function to modify the checkpoint after loading it. Defaults to None.
- Return type:
dict
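A sketch of resuming, assuming the model and optimizer were prepared beforehand; the path is a placeholder:

```python
# Restores model, optimizer, scheduler, and randomness states.
checkpoint = strategy.resume(
    'work_dirs/epoch_5.pth',   # placeholder path
    resume_optimizer=True,
    resume_param_scheduler=True,
)
# The returned dict holds what is left of the checkpoint (e.g. meta
# information) after the four states above have been restored.
```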
- save_checkpoint(filename, *, save_optimizer=True, save_param_scheduler=True, extra_ckpt=None, callback=None)
Save checkpoint to the given filename.
- Parameters:
filename (str) – Filename to save the checkpoint to.
- Keyword Arguments:
save_optimizer (bool) – Whether to save the optimizer to the checkpoint. Defaults to True.
save_param_scheduler (bool) – Whether to save the param_scheduler to the checkpoint. Defaults to True.
extra_ckpt (dict, optional) – Extra checkpoint to save. Defaults to None.
callback (callable, optional) – Callback function to modify the checkpoint before saving it. Defaults to None.
- Return type:
None
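A sketch of saving, with a hypothetical extra_ckpt payload and callback; the paths are placeholders:

```python
def strip_aux(checkpoint):
    # Hypothetical callback: drop auxiliary keys before the file is written.
    checkpoint['state_dict'].pop('aux.weight', None)

strategy.save_checkpoint(
    'work_dirs/epoch_1.pth',               # placeholder path
    save_optimizer=True,
    extra_ckpt=dict(meta=dict(epoch=1)),   # hypothetical extra metadata
    callback=strip_aux,
)
```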