mmengine.dist.all_gather_object

mmengine.dist.all_gather_object(data, group=None)[source]

Gather picklable objects from the whole group into a list. Similar to all_gather(), but Python objects can be passed in. Note that the object must be picklable in order to be gathered.

Note

Calling all_gather_object in a non-distributed environment does nothing and simply returns a list containing data itself.

Note

Unlike PyTorch's torch.distributed.all_gather_object, all_gather_object() in MMEngine does not require passing in an empty gather_list; instead, it returns the gathered list directly, which is more convenient. The difference between their interfaces is as below:

  • MMEngine: all_gather_object(data, group) -> gather_list

  • PyTorch: all_gather_object(gather_list, data, group) -> None
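For illustration, a minimal sketch of the two call patterns (assuming a default process group is already initialized; the gathered object here is only a placeholder):

>>> import torch.distributed as torch_dist
>>> import mmengine.dist as dist
>>> obj = {'rank': dist.get_rank()}
>>> # MMEngine creates and returns the gather list for you
>>> gathered = dist.all_gather_object(obj)
>>> # PyTorch requires pre-allocating the gather list yourself
>>> gather_list = [None] * torch_dist.get_world_size()
>>> torch_dist.all_gather_object(gather_list, obj)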

Parameters:
  • data (Any) – Picklable Python object to be gathered from the current process.

  • group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Returns:

A list containing data from the whole group if in a distributed environment, otherwise a list containing only data itself.

Return type:

list

Note

For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication starts. In this case, the used device is given by torch.cuda.current_device() and it is the user’s responsibility to ensure that this is correctly set so that each rank has an individual GPU, via torch.cuda.set_device().
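A minimal sketch of the device setup described above, assuming one GPU per rank and that the LOCAL_RANK environment variable is set by the launcher (e.g. torchrun):

>>> import os
>>> import torch
>>> import mmengine.dist as dist
>>> # Bind this rank to its own GPU before any NCCL-based collective
>>> torch.cuda.set_device(int(os.environ['LOCAL_RANK']))
>>> output = dist.all_gather_object({'rank': dist.get_rank()})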

Examples

>>> import torch
>>> import mmengine.dist as dist
>>> # non-distributed environment
>>> data = ['foo', 12, {1: 2}]  # any picklable object
>>> output = dist.all_gather_object(data[dist.get_rank()])
>>> output
['foo']
>>> # distributed environment
>>> # We have a process group with 3 ranks.
>>> output = dist.all_gather_object(data[dist.get_rank()])
>>> output
['foo', 12, {1: 2}]  # Rank 0
['foo', 12, {1: 2}]  # Rank 1
['foo', 12, {1: 2}]  # Rank 2