mmengine.dist.broadcast_object_list

mmengine.dist.broadcast_object_list(data, src=0, group=None)

Broadcasts picklable objects in data to the whole group. Similar to broadcast(), but Python objects can be passed in. Note that all objects in data must be picklable in order to be broadcast.

Note

Calling broadcast_object_list in a non-distributed environment does nothing.

Parameters:
  • data (List[Any]) – List of input objects to broadcast. Each object must be picklable. Only objects on the src rank will be broadcast, but each rank must provide lists of equal sizes.

  • src (int) – Source rank from which to broadcast data. Defaults to 0.

  • group (ProcessGroup, optional) – The process group to work on. If None, the default process group will be used. Defaults to None.

Return type:

None

Note

For NCCL-based process groups, internal tensor representations of objects must be moved to the GPU device before communication starts. In this case, the device used is given by torch.cuda.current_device(), and it is the user's responsibility to ensure that it is set correctly, via torch.cuda.set_device(), so that each rank uses an individual GPU.
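
As a minimal sketch of that setup (assuming one GPU per process and that the launcher exposes the local rank through the LOCAL_RANK environment variable, as torchrun does), each rank would pin its device before broadcasting:

>>> import os
>>> import torch
>>> import mmengine.dist as dist
>>> torch.cuda.set_device(int(os.environ['LOCAL_RANK']))  # one GPU per rank
>>> data = ['foo', 12, {1: 2}] if dist.get_rank() == 0 else [None, None, None]
>>> dist.broadcast_object_list(data)  # internal tensors are staged on the current CUDA device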

Examples

>>> import torch
>>> import mmengine.dist as dist
>>> # non-distributed environment
>>> data = ['foo', 12, {1: 2}]
>>> dist.broadcast_object_list(data)
>>> data
['foo', 12, {1: 2}]
>>> # distributed environment
>>> # We have 2 ranks in the default process group.
>>> if dist.get_rank() == 0:
...     data = ['foo', 12, {1: 2}]  # any picklable objects
... else:
...     data = [None, None, None]  # placeholders; each rank must pass a list of the same length
>>> dist.broadcast_object_list(data)
>>> data
['foo', 12, {1: 2}]  # Rank 0
['foo', 12, {1: 2}]  # Rank 1