
DeepSpeedOptimWrapper

class mmengine.optim.DeepSpeedOptimWrapper(optimizer)[source]
backward(loss, **kwargs)[source]

Perform gradient back propagation.

Parameters

loss (torch.Tensor) – The loss tensor to back-propagate.

Return type

None
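A hypothetical sketch of the delegation pattern behind `backward`: with DeepSpeed, back propagation goes through the DeepSpeed engine (which handles loss scaling and gradient partitioning) rather than through `loss.backward()` directly. `ToyEngine` and `ToyDeepSpeedOptimWrapper` below are stand-ins for illustration, not the real mmengine or DeepSpeed classes.

```python
class ToyEngine:
    """Stand-in for a DeepSpeed engine; records backward calls."""

    def __init__(self):
        self.backward_calls = []

    def backward(self, loss):
        # A real engine would scale the loss and run autograd here.
        self.backward_calls.append(loss)


class ToyDeepSpeedOptimWrapper:
    """Illustrative wrapper that delegates backward to an engine."""

    def __init__(self, optimizer):
        self.optimizer = optimizer
        self._model = None  # attached later, once the engine exists

    def register_model(self, model):
        self._model = model

    def backward(self, loss, **kwargs):
        # Delegate to the engine instead of calling loss.backward().
        self._model.backward(loss, **kwargs)


engine = ToyEngine()
wrapper = ToyDeepSpeedOptimWrapper(optimizer=None)
wrapper.register_model(engine)
wrapper.backward(0.5)
print(engine.backward_calls)  # [0.5]
```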

load_state_dict(state_dict)[source]

A wrapper of Optimizer.load_state_dict. Load the state dict of the optimizer.

Provide unified load_state_dict interface compatible with automatic mixed precision training. Subclass can overload this method to implement the required logic. For example, the state dictionary of GradScaler should be loaded when training with torch.cuda.amp.

Parameters

state_dict (dict) – The state dictionary of optimizer.

Return type

None
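A minimal sketch of the `state_dict()`/`load_state_dict()` round trip, assuming the wrapper simply forwards both calls to the wrapped optimizer. `ToyOptimizer` and `ToyOptimWrapper` are hypothetical stand-ins used for illustration only.

```python
class ToyOptimizer:
    """Stand-in optimizer whose whole state is a learning rate."""

    def __init__(self, lr):
        self.lr = lr

    def state_dict(self):
        return {"lr": self.lr}

    def load_state_dict(self, state_dict):
        self.lr = state_dict["lr"]


class ToyOptimWrapper:
    """Forwards state_dict()/load_state_dict() to the wrapped optimizer."""

    def __init__(self, optimizer):
        self.optimizer = optimizer

    def state_dict(self):
        return self.optimizer.state_dict()

    def load_state_dict(self, state_dict):
        self.optimizer.load_state_dict(state_dict)


src = ToyOptimWrapper(ToyOptimizer(lr=0.1))
dst = ToyOptimWrapper(ToyOptimizer(lr=0.0))
dst.load_state_dict(src.state_dict())  # restore state from a checkpoint
print(dst.optimizer.lr)  # 0.1
```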

state_dict()[source]

A wrapper of Optimizer.state_dict.

Return type

dict

step(**kwargs)[source]

Call the step method of optimizer.

update_params(loss)[source]

Update parameters in the optimizer.

Parameters

loss (torch.Tensor) – The loss tensor used to compute gradients.

Return type

None

zero_grad(**kwargs)[source]

A wrapper of Optimizer.zero_grad.

Return type

None
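A hedged sketch of how a training step typically uses this interface: in mmengine's optimizer wrappers, `update_params(loss)` is roughly the convenience shorthand for calling `backward`, `step`, and `zero_grad` in sequence (the DeepSpeed engine may take over parts of this internally). `ToyWrapper` below only records the call order; it is not the real implementation.

```python
class ToyWrapper:
    """Records the call order of a simplified update_params()."""

    def __init__(self):
        self.calls = []

    def backward(self, loss, **kwargs):
        self.calls.append("backward")

    def step(self, **kwargs):
        self.calls.append("step")

    def zero_grad(self, **kwargs):
        self.calls.append("zero_grad")

    def update_params(self, loss):
        # Assumed order: back-propagate, apply the update, clear grads.
        self.backward(loss)
        self.step()
        self.zero_grad()


wrapper = ToyWrapper()
wrapper.update_params(loss=1.0)
print(wrapper.calls)  # ['backward', 'step', 'zero_grad']
```

In a training loop this means user code calls only `optim_wrapper.update_params(loss)` per iteration instead of the three methods separately.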

Read the Docs v: v0.8.3