mmengine.dist.gather

mmengine.dist.gather(data, dst=0, group=None)[source]

Gather data from the whole group to dst process.

Note

Calling gather in a non-distributed environment does nothing and just returns a list containing data itself.

Note

NCCL backend does not support gather.
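
Since NCCL provides no gather primitive, tensors that live on GPU are typically copied to CPU and gathered with a CPU-capable backend such as Gloo. A minimal sketch, assuming the process group was already initialized with the gloo backend (e.g. via init_dist(launcher, backend='gloo')):

>>> import torch
>>> import mmengine.dist as dist
>>> loss = torch.tensor(0.5, device='cuda')
>>> # copy to CPU first; gather() does not accept CUDA tensors
>>> gathered = dist.gather(loss.cpu(), dst=0)
>>> if dist.get_rank() == 0:
...     print(gathered)  # one CPU tensor per rank on the dst process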

Note

Unlike torch.distributed.gather in PyTorch, gather() in MMEngine does not require you to pass in an empty gather_list; it returns the gathered list directly, which is more convenient. The difference between the two interfaces is listed below, followed by a short sketch:

  • MMEngine: gather(data, dst, group) -> gather_list

  • PyTorch: gather(data, gather_list, dst, group) -> None
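
A minimal side-by-side sketch of the two calls, assuming a 2-rank process group has already been initialized (e.g. via init_dist):

>>> import torch
>>> import torch.distributed as torch_dist
>>> import mmengine.dist as dist
>>> data = torch.tensor([dist.get_rank()])
>>> # MMEngine: the gathered list is returned directly
>>> mm_result = dist.gather(data, dst=0)
>>> # PyTorch: pre-allocate gather_list on the dst rank; the call
>>> # fills it in place and returns None
>>> world_size = torch_dist.get_world_size()
>>> gather_list = ([torch.empty_like(data) for _ in range(world_size)]
...                if torch_dist.get_rank() == 0 else None)
>>> torch_dist.gather(data, gather_list, dst=0)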

Parameters:
  • data (Tensor) – Tensor to be gathered. CUDA tensor is not supported.

  • dst (int) – Destination rank. Defaults to 0.

  • group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Returns:

The dst process will get a list of tensors gathered from the whole group. Other processes will get an empty list. In a non-distributed environment, a list containing data itself is returned.

Return type:

list[Tensor]

Examples

>>> import torch
>>> import mmengine.dist as dist
>>> # non-distributed environment
>>> data = torch.arange(2, dtype=torch.int64)
>>> data
tensor([0, 1])
>>> output = dist.gather(data)
>>> output
[tensor([0, 1])]
>>> # distributed environment
>>> # We have one process group with 2 ranks.
>>> rank = dist.get_rank()
>>> data = torch.arange(2, dtype=torch.int64) + 1 + 2 * rank
>>> data
tensor([1, 2]) # Rank 0
tensor([3, 4]) # Rank 1
>>> output = dist.gather(data)
>>> output
[tensor([1, 2]), tensor([3, 4])]  # Rank 0
[]  # Rank 1