DeepSpeedOptimWrapper

class mmengine._strategy.deepspeed.DeepSpeedOptimWrapper(optimizer)[source]
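In mmengine this wrapper is usually not instantiated by hand: a runner builds it from a config together with a DeepSpeed strategy, which also attaches the DeepSpeed engine the wrapper delegates to. A minimal sketch, assuming DeepSpeed is installed, the script is started with a distributed launcher, and ToyModel / train_loader are user-defined:

    from mmengine.runner._flexible_runner import FlexibleRunner

    strategy = dict(type='DeepSpeedStrategy',
                    zero_optimization=dict(stage=2))  # assumed ZeRO setting
    optim_wrapper = dict(type='DeepSpeedOptimWrapper',
                         optimizer=dict(type='AdamW', lr=1e-3))

    runner = FlexibleRunner(
        model=ToyModel(),               # assumed user-defined BaseModel
        work_dir='./work_dir',
        strategy=strategy,
        optim_wrapper=optim_wrapper,
        train_dataloader=train_loader,  # assumed user-defined DataLoader
        train_cfg=dict(by_epoch=True, max_epochs=1))
    runner.train()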
backward(loss, **kwargs)[source]

Perform gradient backpropagation.

Parameters:

loss (Tensor) – The loss tensor to back-propagate.

Return type:

None

load_state_dict(state_dict)[source]

A wrapper of Optimizer.load_state_dict that loads the state dict of the optimizer.

Provides a unified load_state_dict interface compatible with automatic mixed precision training. Subclasses can override this method to implement the required logic. For example, the state dictionary of GradScaler should be loaded when training with torch.cuda.amp.

Parameters:

state_dict (dict) – The state dictionary of optimizer.

Return type:

None

state_dict()[source]

A wrapper of Optimizer.state_dict.

Return type:

dict
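Together with load_state_dict() above, this supports a plain dict-based checkpoint round trip. A minimal sketch, assuming optim_wrapper is an already built DeepSpeedOptimWrapper:

    import torch

    # Save the wrapper state as part of a checkpoint.
    torch.save(dict(optimizer=optim_wrapper.state_dict()), 'optim.pth')

    # ...later, restore it into a freshly built wrapper.
    ckpt = torch.load('optim.pth')
    optim_wrapper.load_state_dict(ckpt['optimizer'])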

step(**kwargs)[source]

Call the step method of the optimizer.

update_params(loss)[source]

Update parameters in the optimizer.

Parameters:

loss (Tensor) – The loss used to compute gradients for the parameter update.

Return type:

None
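In a hand-written loop, update_params() stands in for a separate backward() followed by step(). A minimal sketch, assuming the strategy has already attached a built DeepSpeed engine to the wrapper and the model returns a dict with a loss tensor:

    for batch in train_loader:             # assumed user-defined DataLoader
        loss = model(**batch)['loss']      # assumed model returning a loss dict
        optim_wrapper.update_params(loss)  # backward + step in one call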

zero_grad(**kwargs)[source]

A wrapper of Optimizer.zero_grad.

Return type:

None
