DDPStrategy

class mmengine._strategy.DDPStrategy(*, model_wrapper=None, sync_bn=None, **kwargs)[source]

Distribution strategy for distributed data parallel training.

Parameters:
  • model_wrapper (dict) – Config dict for the model wrapper, e.g. options passed to MMDistributedDataParallel. Defaults to None.

  • sync_bn (str) – Type of synchronized batch norm to convert BatchNorm layers to. Options are ‘torch’ and ‘mmcv’. Defaults to None.

  • **kwargs – Other arguments for BaseStrategy.
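Example:

A minimal construction sketch, not a definitive recipe: the wrapper options below are illustrative, and in practice the strategy is usually driven by mmengine.runner.FlexibleRunner inside a launched distributed job.

>>> from mmengine._strategy import DDPStrategy
>>> # wrap the model with MMDistributedDataParallel and convert BatchNorm
>>> # layers to torch SyncBatchNorm
>>> strategy = DDPStrategy(
...     model_wrapper=dict(
...         type='MMDistributedDataParallel',
...         find_unused_parameters=True),
...     sync_bn='torch')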

convert_model(model)[source]

Convert all BatchNorm layers in the model to SyncBatchNorm (SyncBN) or mmcv.ops.sync_bn.SyncBatchNorm (MMSyncBN) layers.

Parameters:

model (nn.Module) – Model to be converted.

Returns:

Converted model.

Return type:

nn.Module
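Example:

A hedged sketch of the conversion on a toy model; it assumes the strategy was built with sync_bn='torch', that construction works outside a launched distributed job (distributed setup happens later), and the layers shown are illustrative only.

>>> import torch.nn as nn
>>> from mmengine._strategy import DDPStrategy
>>> strategy = DDPStrategy(sync_bn='torch')
>>> model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
>>> # the BatchNorm2d layer at index 1 becomes torch.nn.SyncBatchNorm
>>> model = strategy.convert_model(model)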

save_checkpoint(filename, *, save_optimizer=True, save_param_scheduler=True, extra_ckpt=None, callback=None)[source]

Save checkpoint to given filename.

Parameters:
  • filename (str) – Filename to save checkpoint.

Keyword Arguments:
  • save_optimizer (bool) – Whether to save the optimizer to the checkpoint. Defaults to True.

  • save_param_scheduler (bool) – Whether to save the param_scheduler to the checkpoint. Defaults to True.

  • extra_ckpt (dict, optional) – Extra checkpoint to save. Defaults to None.

  • callback (callable, optional) – Callback function to modify the checkpoint before saving it. Defaults to None.

Return type:

None
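
Example:

A hedged sketch of the keyword-only arguments; it assumes strategy has already built the model and optimizer (normally done through FlexibleRunner), and the checkpoint path, the extra_ckpt contents, and the drop_ema callback are illustrative only.

>>> def drop_ema(checkpoint):
...     # hypothetical callback: strip an EMA state dict before writing
...     checkpoint.pop('ema_state_dict', None)
...
>>> strategy.save_checkpoint(
...     'work_dirs/epoch_10.pth',
...     save_optimizer=True,
...     save_param_scheduler=False,
...     extra_ckpt=dict(meta=dict(epoch=10)),
...     callback=drop_ema)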