mmengine.dist.all_gather

mmengine.dist.all_gather(data, group=None)[source]

Gather data from the whole group in a list.

Note

Calling all_gather in a non-distributed environment performs no communication and simply returns a list containing data itself.

Note

Unlike PyTorch's torch.distributed.all_gather, all_gather() in MMEngine does not require passing in an empty gather_list; it allocates the list internally and returns it directly, which is more convenient. The difference between the two interfaces is shown below:

  • MMEngine: all_gather(data, group) -> gather_list

  • PyTorch: all_gather(gather_list, data, group) -> None
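The interface difference above is a simple wrapper pattern: allocate the output list on the caller's behalf, delegate to the fill-in-place call, and return the list. A minimal sketch in plain Python, where `_pytorch_style_all_gather` is a hypothetical stand-in (not the real torch.distributed call) that fills a caller-provided list in place and returns None, as the PyTorch interface does:

```python
def _pytorch_style_all_gather(gather_list, data, group=None):
    """Hypothetical stand-in for torch.distributed.all_gather:
    fills the caller-provided gather_list in place, returns None."""
    # In this single-process sketch the "group" holds only one rank,
    # so the gathered result is just [data].
    gather_list[:] = [data]


def all_gather(data, group=None):
    """MMEngine-style wrapper: allocates the list internally and returns it."""
    world_size = 1  # assumed single rank for this sketch
    gather_list = [None] * world_size
    _pytorch_style_all_gather(gather_list, data, group)
    return gather_list


print(all_gather(7))  # [7]
```

In a real distributed run, `world_size` would come from the process group and each slot of the returned list would hold one rank's tensor.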

Parameters:
  • data (Tensor) – Tensor to be gathered.

  • group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Returns:

A list containing data from every rank in the group if in a distributed environment; otherwise, a list containing only data itself.

Return type:

list[Tensor]

Example

>>> import torch
>>> import mmengine.dist as dist
>>> # non-distributed environment
>>> data = torch.arange(2, dtype=torch.int64)
>>> data
tensor([0, 1])
>>> output = dist.all_gather(data)
>>> output
[tensor([0, 1])]
>>> # distributed environment
>>> # We have 2 process groups, 2 ranks.
>>> data = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> data
tensor([1, 2])  # Rank 0
tensor([3, 4])  # Rank 1
>>> output = dist.all_gather(data)
>>> output
[tensor([1, 2]), tensor([3, 4])]  # Rank 0
[tensor([1, 2]), tensor([3, 4])]  # Rank 1