DeepSpeedOptimWrapper

class mmengine._strategy.deepspeed.DeepSpeedOptimWrapper(optimizer)[source]
backward(loss, **kwargs)[source]

Perform gradient backpropagation.

Parameters:

loss (Tensor) – The loss tensor to backpropagate.

Return type:

None
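In DeepSpeed training, backpropagation is typically delegated to the model engine rather than invoked via loss.backward() directly, so the wrapper forwards the call. A minimal sketch of this delegation pattern, using a hypothetical stand-in engine (not the real DeepSpeed API):

```python
class FakeEngine:
    """Hypothetical stand-in for a DeepSpeed model engine (illustration only)."""

    def __init__(self):
        self.backward_calls = []

    def backward(self, loss):
        # The real engine would scale the loss and run autograd here;
        # this stand-in just records the call.
        self.backward_calls.append(loss)


class OptimWrapperSketch:
    """Sketch of the delegation pattern: backward() forwards to the engine."""

    def __init__(self, engine):
        self._engine = engine

    def backward(self, loss, **kwargs):
        self._engine.backward(loss, **kwargs)


engine = FakeEngine()
wrapper = OptimWrapperSketch(engine)
wrapper.backward(0.5)  # a float stands in for a loss Tensor
print(engine.backward_calls)  # -> [0.5]
```

The point of the indirection is that callers keep the familiar `wrapper.backward(loss)` interface while the engine stays free to apply loss scaling or other mixed-precision logic underneath.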

load_state_dict(state_dict)[source]

A wrapper around Optimizer.load_state_dict that loads the optimizer's state dict.

Provide unified load_state_dict interface compatible with automatic mixed precision training. Subclass can overload this method to implement the required logic. For example, the state dictionary of GradScaler should be loaded when training with torch.cuda.amp.

Parameters:

state_dict (dict) – The state dictionary of optimizer.

Return type:

None

state_dict()[source]

A wrapper around Optimizer.state_dict.

Return type:

dict
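Together, state_dict and load_state_dict form the checkpoint round trip: save the optimizer state from one run, then restore it into a fresh wrapper. A hedged sketch of that round trip with a minimal dict-backed optimizer stand-in (the class names here are illustrative, not the real mmengine API):

```python
class TinyOptimizer:
    """Minimal optimizer stand-in that keeps its state in a plain dict."""

    def __init__(self, lr=0.1):
        self.state = {"lr": lr, "step": 0}

    def state_dict(self):
        # Return a copy so callers cannot mutate internal state.
        return dict(self.state)

    def load_state_dict(self, state_dict):
        self.state.update(state_dict)


class OptimWrapperSketch:
    """Sketch: the wrapper forwards state_dict/load_state_dict to the optimizer."""

    def __init__(self, optimizer):
        self.optimizer = optimizer

    def state_dict(self):
        return self.optimizer.state_dict()

    def load_state_dict(self, state_dict):
        self.optimizer.load_state_dict(state_dict)


# Save from one wrapper, restore into a freshly constructed one.
saved = OptimWrapperSketch(TinyOptimizer(lr=0.01)).state_dict()
restored = OptimWrapperSketch(TinyOptimizer())
restored.load_state_dict(saved)
print(restored.optimizer.state["lr"])  # -> 0.01
```

As the docstring notes, a subclass can overload load_state_dict to restore extra pieces alongside the optimizer, such as a GradScaler when training with torch.cuda.amp.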

step(**kwargs)[source]

Call the step method of the optimizer.

update_params(loss)[source]

Update parameters in optimizer.

Return type:

None
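A convenience method like update_params usually composes the finer-grained calls above, so a training loop only needs one call per iteration. A sketch of one plausible composition (an assumption for illustration, not the library's confirmed implementation):

```python
class SketchWrapper:
    """Illustrative wrapper whose update_params composes backward and step.

    The composition below is an assumption for illustration; the real
    DeepSpeedOptimWrapper delegates this work to the DeepSpeed engine.
    """

    def __init__(self):
        self.calls = []  # records the order of operations

    def backward(self, loss, **kwargs):
        self.calls.append(("backward", loss))

    def step(self, **kwargs):
        self.calls.append(("step", None))

    def update_params(self, loss):
        # One call per iteration: backpropagate, then take an optimizer step.
        self.backward(loss)
        self.step()


w = SketchWrapper()
w.update_params(1.0)
print(w.calls)  # -> [('backward', 1.0), ('step', None)]
```

This keeps the training loop to a single `wrapper.update_params(loss)` per batch instead of separate backward/step calls at every call site.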

zero_grad(**kwargs)[source]

A wrapper around Optimizer.zero_grad.

Return type:

None
