
mmengine.dist.broadcast

mmengine.dist.broadcast(data, src=0, group=None)

Broadcast the data from the src process to the whole group.

data must have the same number of elements in all processes participating in the collective.

Note

Calling broadcast in a non-distributed environment does nothing.

Parameters:
  • data (Tensor) – Data to be sent if src is the rank of the current process, and the tensor into which the received data is written otherwise.

  • src (int) – Source rank. Defaults to 0.

  • group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Return type:

None
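
Because broadcast returns None and overwrites data in place, every rank other than src must pre-allocate a tensor whose shape and dtype match the source's payload. A minimal sketch of that pattern, assuming an already-initialized distributed job (the tensor values and shape are illustrative, not part of the API):

import torch

import mmengine.dist as dist

# Only the source rank knows the payload; the other ranks pre-allocate
# a buffer of identical shape and dtype to receive into.
if dist.get_rank() == 0:
    data = torch.tensor([1.0, 2.0, 3.0])  # payload on the source rank
else:
    data = torch.empty(3)  # placeholder; overwritten in place by broadcast
dist.broadcast(data, src=0)
# data now holds [1., 2., 3.] on every rank. In a non-distributed run,
# get_rank() returns 0 and broadcast is a no-op, so this also runs as-is.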

Examples

>>> import torch
>>> import mmengine.dist as dist
>>> # non-distributed environment
>>> data = torch.arange(2, dtype=torch.int64)
>>> data
tensor([0, 1])
>>> dist.broadcast(data)
>>> data
tensor([0, 1])
>>> # distributed environment
>>> # The job is launched with 2 processes (ranks 0 and 1).
>>> rank = dist.get_rank()
>>> data = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> data
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> dist.broadcast(data)
>>> data
tensor([1, 2]) # Rank 0
tensor([1, 2]) # Rank 1
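
The doctest above is schematic: rank exists only inside a launched job, and the two-rank output cannot be reproduced in a single interpreter. As a self-contained sketch, the script below spawns two CPU workers with torch.multiprocessing; the loopback address, port, and gloo backend are assumptions for local testing, not part of the mmengine API.

import os

import torch
import torch.distributed as torch_dist
import torch.multiprocessing as mp

import mmengine.dist as dist


def worker(rank, world_size):
    # Each spawned process joins the default process group, which
    # mmengine.dist uses when group=None.
    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    torch_dist.init_process_group('gloo', rank=rank, world_size=world_size)

    data = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
    dist.broadcast(data)  # src defaults to 0; data is overwritten in place
    print(f'rank {rank}: {data.tolist()}')

    torch_dist.destroy_process_group()


if __name__ == '__main__':
    mp.spawn(worker, args=(2,), nprocs=2)

Running it prints rank 0: [1, 2] and rank 1: [1, 2], matching the doctest: rank 1's original [3, 4] is overwritten by the buffer broadcast from rank 0.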