Welcome to MMEngine’s documentation!¶
You can switch between Chinese and English documents in the lower-left corner of the layout.
Introduction¶
Coming soon. Please refer to the Chinese documentation.
Installation¶
Prerequisites¶
Python 3.6+
PyTorch 1.6+
CUDA 9.2+
GCC 5.4+
Prepare the Environment¶
Use conda and activate the environment:
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
Install PyTorch
Before installing MMEngine, please make sure that PyTorch has been successfully installed in the environment. You can refer to the PyTorch official installation documentation. Verify the installation with the following command:
python -c 'import torch;print(torch.__version__)'
Install MMEngine¶
Install with mim¶
mim is a package management tool for OpenMMLab projects, which can be used to install OpenMMLab projects easily.
pip install -U openmim
mim install mmengine
Install with pip¶
pip install mmengine
Use docker images¶
Build the image
docker build -t mmengine https://github.com/open-mmlab/mmengine.git#main:docker/release
More information can be found in mmengine/docker.
Run the image
docker run --gpus all --shm-size=8g -it mmengine
Build from source¶
# if cloning speed is too slow, you can switch the source to https://gitee.com/open-mmlab/mmengine.git
git clone https://github.com/open-mmlab/mmengine.git
cd mmengine
pip install -e . -v
Verify the Installation¶
To verify whether MMEngine and the necessary environment are successfully installed, we can run this command:
python -c 'import mmengine;print(mmengine.__version__)'
15 minutes to get started with MMEngine¶
In this tutorial, we’ll take training a ResNet-50 model on the CIFAR-10 dataset as an example. We will build a complete and configurable pipeline for both training and validation in only 80 lines of code with MMEngine.
The whole process includes the following steps:
Build a Model¶
First, we need to build a model. In MMEngine, the model should inherit from BaseModel. Aside from parameters representing inputs from the dataset, its forward method needs to accept an extra argument called mode:
for training, the value of mode is “loss”, and the forward method should return a dict containing the key “loss”.
for validation, the value of mode is “predict”, and the forward method should return results containing both predictions and labels.
import torch.nn.functional as F
import torchvision
from mmengine.model import BaseModel
class MMResNet50(BaseModel):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet50()

    def forward(self, imgs, labels, mode):
        x = self.resnet(imgs)
        if mode == 'loss':
            return {'loss': F.cross_entropy(x, labels)}
        elif mode == 'predict':
            return x, labels
Build a Dataset and DataLoader¶
Next, we need to create Dataset and DataLoader for training and validation. For basic training and validation, we can simply use built-in datasets supported in TorchVision.
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
norm_cfg = dict(mean=[0.491, 0.482, 0.447], std=[0.202, 0.199, 0.201])
train_dataloader = DataLoader(batch_size=32,
                              shuffle=True,
                              dataset=torchvision.datasets.CIFAR10(
                                  'data/cifar10',
                                  train=True,
                                  download=True,
                                  transform=transforms.Compose([
                                      transforms.RandomCrop(32, padding=4),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ToTensor(),
                                      transforms.Normalize(**norm_cfg)
                                  ])))

val_dataloader = DataLoader(batch_size=32,
                            shuffle=False,
                            dataset=torchvision.datasets.CIFAR10(
                                'data/cifar10',
                                train=False,
                                download=True,
                                transform=transforms.Compose([
                                    transforms.ToTensor(),
                                    transforms.Normalize(**norm_cfg)
                                ])))
Build an Evaluation Metric¶
To validate and test the model, we need to define a metric called accuracy to evaluate the model. This metric needs to inherit from BaseMetric and implement the process and compute_metrics methods. The process method accepts the output of the dataset and other outputs when mode="predict". The output data in this scenario is a batch of data. After processing this batch of data, we save the information to the self.results property.
compute_metrics accepts a results parameter. The input results of compute_metrics is all the information saved in process (in a distributed environment, results is the information collected by process in all the processes). Use this information to calculate and return a dict that holds the results of the evaluation metrics.
from mmengine.evaluator import BaseMetric
class Accuracy(BaseMetric):
    def process(self, data_batch, data_samples):
        score, gt = data_samples
        # save the intermediate result of a batch to `self.results`
        self.results.append({
            'batch_size': len(gt),
            'correct': (score.argmax(dim=1) == gt).sum().cpu(),
        })

    def compute_metrics(self, results):
        total_correct = sum(item['correct'] for item in results)
        total_size = sum(item['batch_size'] for item in results)
        # return the dict containing the eval results
        # the key is the name of the metric
        return dict(accuracy=100 * total_correct / total_size)
Build a Runner and Run the Task¶
Now we can build a Runner with the previously defined Model, DataLoader, and Metrics, and some other configs, as shown below:
from torch.optim import SGD
from mmengine.runner import Runner
runner = Runner(
    # the model used for training and validation.
    # Needs to meet specific interface requirements
    model=MMResNet50(),
    # working directory which saves training logs and weight files
    work_dir='./work_dir',
    # train dataloader needs to meet the PyTorch data loader protocol
    train_dataloader=train_dataloader,
    # optimizer wrapper for optimization with additional features like
    # AMP, gradient accumulation, etc.
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    # training configs for specifying training epochs, validation intervals, etc.
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    # validation dataloader also needs to meet the PyTorch data loader protocol
    val_dataloader=val_dataloader,
    # validation configs for specifying additional parameters required for validation
    val_cfg=dict(),
    # validation evaluator. The default one is used here
    val_evaluator=dict(type=Accuracy),
)
runner.train()
Finally, let’s put all the code above together into a complete script that uses the MMEngine Runner for training and validation:
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
from torch.optim import SGD
from torch.utils.data import DataLoader
from mmengine.evaluator import BaseMetric
from mmengine.model import BaseModel
from mmengine.runner import Runner
class MMResNet50(BaseModel):
    def __init__(self):
        super().__init__()
        self.resnet = torchvision.models.resnet50()

    def forward(self, imgs, labels, mode):
        x = self.resnet(imgs)
        if mode == 'loss':
            return {'loss': F.cross_entropy(x, labels)}
        elif mode == 'predict':
            return x, labels


class Accuracy(BaseMetric):
    def process(self, data_batch, data_samples):
        score, gt = data_samples
        self.results.append({
            'batch_size': len(gt),
            'correct': (score.argmax(dim=1) == gt).sum().cpu(),
        })

    def compute_metrics(self, results):
        total_correct = sum(item['correct'] for item in results)
        total_size = sum(item['batch_size'] for item in results)
        return dict(accuracy=100 * total_correct / total_size)


norm_cfg = dict(mean=[0.491, 0.482, 0.447], std=[0.202, 0.199, 0.201])
train_dataloader = DataLoader(batch_size=32,
                              shuffle=True,
                              dataset=torchvision.datasets.CIFAR10(
                                  'data/cifar10',
                                  train=True,
                                  download=True,
                                  transform=transforms.Compose([
                                      transforms.RandomCrop(32, padding=4),
                                      transforms.RandomHorizontalFlip(),
                                      transforms.ToTensor(),
                                      transforms.Normalize(**norm_cfg)
                                  ])))
val_dataloader = DataLoader(batch_size=32,
                            shuffle=False,
                            dataset=torchvision.datasets.CIFAR10(
                                'data/cifar10',
                                train=False,
                                download=True,
                                transform=transforms.Compose([
                                    transforms.ToTensor(),
                                    transforms.Normalize(**norm_cfg)
                                ])))

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
)
runner.train()
The training log would be similar to this:
2022/08/22 15:51:53 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]
CUDA available: True
numpy_random_seed: 1513128759
GPU 0: NVIDIA GeForce GTX 1660 SUPER
CUDA_HOME: /usr/local/cuda
...
2022/08/22 15:51:54 - mmengine - INFO - Checkpoints will be saved to /home/mazerun/work_dir by HardDiskBackend.
2022/08/22 15:51:56 - mmengine - INFO - Epoch(train) [1][10/1563] lr: 1.0000e-03 eta: 0:18:23 time: 0.1414 data_time: 0.0077 memory: 392 loss: 5.3465
2022/08/22 15:51:56 - mmengine - INFO - Epoch(train) [1][20/1563] lr: 1.0000e-03 eta: 0:11:29 time: 0.0354 data_time: 0.0077 memory: 392 loss: 2.7734
2022/08/22 15:51:56 - mmengine - INFO - Epoch(train) [1][30/1563] lr: 1.0000e-03 eta: 0:09:10 time: 0.0352 data_time: 0.0076 memory: 392 loss: 2.7789
2022/08/22 15:51:57 - mmengine - INFO - Epoch(train) [1][40/1563] lr: 1.0000e-03 eta: 0:08:00 time: 0.0353 data_time: 0.0073 memory: 392 loss: 2.5725
2022/08/22 15:51:57 - mmengine - INFO - Epoch(train) [1][50/1563] lr: 1.0000e-03 eta: 0:07:17 time: 0.0347 data_time: 0.0073 memory: 392 loss: 2.7382
2022/08/22 15:51:57 - mmengine - INFO - Epoch(train) [1][60/1563] lr: 1.0000e-03 eta: 0:06:49 time: 0.0347 data_time: 0.0072 memory: 392 loss: 2.5956
2022/08/22 15:51:58 - mmengine - INFO - Epoch(train) [1][70/1563] lr: 1.0000e-03 eta: 0:06:28 time: 0.0348 data_time: 0.0072 memory: 392 loss: 2.7351
...
2022/08/22 15:52:50 - mmengine - INFO - Saving checkpoint at 1 epochs
2022/08/22 15:52:51 - mmengine - INFO - Epoch(val) [1][10/313] eta: 0:00:03 time: 0.0122 data_time: 0.0047 memory: 392
2022/08/22 15:52:51 - mmengine - INFO - Epoch(val) [1][20/313] eta: 0:00:03 time: 0.0122 data_time: 0.0047 memory: 308
2022/08/22 15:52:51 - mmengine - INFO - Epoch(val) [1][30/313] eta: 0:00:03 time: 0.0123 data_time: 0.0047 memory: 308
...
2022/08/22 15:52:54 - mmengine - INFO - Epoch(val) [1][313/313] accuracy: 35.7000
In addition to these basic components, you can also use the Runner to easily combine and configure various training techniques, such as enabling mixed-precision training and gradient accumulation (see OptimWrapper), configuring the learning rate decay curve (see Parameter Scheduler), etc.
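For instance, the Runner above could be reconfigured as follows to enable those features. This is a hedged sketch: the AmpOptimWrapper type, the accumulative_counts value, and the MultiStepLR milestones are illustrative choices, not part of the tutorial.
from torch.optim import SGD
from mmengine.runner import Runner

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    # AmpOptimWrapper enables mixed-precision training; accumulative_counts
    # accumulates gradients over several iterations before each optimizer step
    optim_wrapper=dict(
        type='AmpOptimWrapper',
        accumulative_counts=4,
        optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    # learning rate decay curve: decay by 10x at epochs 3 and 4
    param_scheduler=dict(type='MultiStepLR', by_epoch=True, milestones=[3, 4], gamma=0.1),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
)
runner.train()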
Registry¶
Coming soon. Please refer to the Chinese documentation.
Config¶
Coming soon. Please refer to the Chinese documentation.
Runner¶
Coming soon. Please refer to the Chinese documentation.
Hook¶
Coming soon. Please refer to the Chinese documentation.
Model¶
Coming soon. Please refer to the Chinese documentation.
Evaluation¶
Coming soon. Please refer to the Chinese documentation.
OptimWrapper¶
Coming soon. Please refer to the Chinese documentation.
Parameter Scheduler¶
Coming soon. Please refer to the Chinese documentation.
Data transform¶
Coming soon. Please refer to the Chinese documentation.
BaseDataset¶
Coming soon. Please refer to the Chinese documentation.
Abstract Data Element¶
Coming soon. Please refer to the Chinese documentation.
Visualization¶
Coming soon. Please refer to the Chinese documentation.
Initialization¶
Coming soon. Please refer to the Chinese documentation.
Distribution communication¶
Coming soon. Please refer to the Chinese documentation.
Logging¶
Coming soon. Please refer to the Chinese documentation.
File IO¶
Coming soon. Please refer to the Chinese documentation.
utils¶
Coming soon. Please refer to the Chinese documentation.
Resume training¶
Coming soon. Please refer to the Chinese documentation.
Speed up training¶
Coming soon. Please refer to the Chinese documentation.
Save memory on GPU¶
Coming soon. Please refer to the Chinese documentation.
Use modules from other libraries¶
Coming soon. Please refer to the Chinese documentation.
Train a GAN¶
Coming soon. Please refer to the Chinese documentation.
Hook¶
Coming soon. Please refer to the Chinese documentation.
Runner¶
Coming soon. Please refer to the Chinese documentation.
Evaluation¶
Coming soon. Please refer to the Chinese documentation.
Visualization¶
Coming soon. Please refer to the Chinese documentation.
Logging¶
Coming soon. Please refer to the Chinese documentation.
Migrate Runner from MMCV to MMEngine¶
Coming soon. Please refer to the Chinese documentation.
Migrate Hook from MMCV to MMEngine¶
Coming soon. Please refer to the Chinese documentation.
Migrate Model from MMCV to MMEngine¶
Coming soon. Please refer to the Chinese documentation.
Migrate Parameter Scheduler from MMCV to MMEngine¶
Coming soon. Please refer to the Chinese documentation.
Migrate Transform from MMCV to MMEngine¶
Coming soon. Please refer to the Chinese documentation.
mmengine.registry¶
A registry to map strings to classes or functions.
Scope of current task used to reset the current registry, which can be accessed globally.
Build a module from config dict when it is a class configuration, or call a function from config dict when it is a function configuration.
Build a PyTorch model from config dict(s).
Build a Runner object.
Builds a
Scan all modules in MMEngine’s root and child registries and dump to json.
Traverse the whole registry tree from any given node, and collect information of all registered modules in this registry tree.
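To make the summaries above concrete, here is a minimal sketch of the registry workflow; the ACTIVATIONS registry and MyReLU class are made up for illustration:
from mmengine.registry import Registry

ACTIVATIONS = Registry('activation')  # hypothetical registry for this sketch

@ACTIVATIONS.register_module()
class MyReLU:
    def __init__(self, inplace=False):
        self.inplace = inplace

# build_from_cfg is used under the hood: the 'type' key selects the class and
# the remaining keys become constructor arguments
act = ACTIVATIONS.build(dict(type='MyReLU', inplace=True))
print(type(act).__name__)  # MyReLU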
mmengine.config¶
A facility for config and config files.
A dictionary for config which has the same interface as Python’s built-in dictionary and can be used as a normal dictionary.
argparse action to split an argument into KEY=VALUE form on the first = and append to a dictionary.
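A minimal sketch of the typical Config workflow, assuming a file named config.py exists with the content shown in the comment:
from mmengine.config import Config

# assume ./config.py contains: optimizer = dict(type='SGD', lr=0.01)
cfg = Config.fromfile('./config.py')
print(cfg.optimizer.lr)        # attribute-style access: 0.01
print(cfg['optimizer']['lr'])  # dict-style access also works

# override nested values with dot-separated keys
cfg.merge_from_dict({'optimizer.lr': 0.001})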
mmengine.runner¶
Loop¶
Base loop class.
Loop for epoch-based training.
Loop for iter-based training.
Loop for validation.
Loop for test.
Checkpoints¶
A general checkpoint loader to manage all schemes.
Find the latest checkpoint from the given path.
Returns a dictionary containing a whole state of the module.
Load checkpoint from a file or URI.
Load state_dict to a module.
Save checkpoint to file.
Copy a model state_dict to cpu.
Miscellaneous¶
A log processor used to format log information collected from
Hook priority levels.
Get priority value.
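A short sketch of the checkpoint helpers listed under Checkpoints; the file paths are illustrative:
import torchvision
from mmengine.runner import (find_latest_checkpoint, get_state_dict,
                             load_checkpoint, save_checkpoint)

model = torchvision.models.resnet18()

# save the weights in the standard {'state_dict': ...} layout
save_checkpoint(dict(state_dict=get_state_dict(model)), './work_dir/epoch_1.pth')

# load them back into the model; map_location follows the torch.load convention
load_checkpoint(model, './work_dir/epoch_1.pth', map_location='cpu')

# locate the most recent checkpoint recorded in a working directory, if any
latest = find_latest_checkpoint('./work_dir')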
mmengine.hooks¶
Base hook class.
Save checkpoints periodically.
A Hook to apply Exponential Moving Average (EMA) on the model during training.
Collect logs from different components of
Show or Write the predicted results during the process of testing.
A hook to update some hyper-parameters in optimizer, e.g., learning rate and momentum.
A hook that updates runtime information into message hub.
Data-loading sampler for distributed training.
A hook that logs the time spent during iteration.
Synchronize model buffers such as running_mean and running_var in BN at the end of each epoch.
Releases all unoccupied cached GPU memory during the process of training.
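As an illustration of the base hook class, a minimal custom hook might look like the sketch below; the hook name and interval are made up for this example:
from mmengine.hooks import Hook
from mmengine.registry import HOOKS

@HOOKS.register_module()
class SimpleInfoHook(Hook):
    """A toy hook that logs a message every few training iterations."""

    def __init__(self, interval=100):
        self.interval = interval

    def after_train_iter(self, runner, batch_idx, data_batch=None, outputs=None):
        if (batch_idx + 1) % self.interval == 0:
            runner.logger.info(f'iter {runner.iter}: output keys {list(outputs or {})}')

# it can then be passed to the Runner, e.g.
#   custom_hooks=[dict(type='SimpleInfoHook', interval=50)]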
mmengine.model¶
Module¶
Base module for all modules in openmmlab.
ModuleDict in openmmlab.
ModuleList in openmmlab.
Sequential module in openmmlab.
Model¶
Base class for all algorithmic models.
Base data pre-processor used for copying data to the target device.
Image pre-processor for normalization and bgr to rgb conversion.
Base model for inference with test-time augmentation.
EMA¶
A base class for averaging model weights.
Implements the exponential moving average (EMA) of the model.
Exponential moving average (EMA) with momentum annealing strategy.
Implements the stochastic weight averaging (SWA) of the model.
Model Wrapper¶
A distributed model wrapper used for training, testing and validation in loop.
A DistributedDataParallel wrapper for models in MMGeneration.
A wrapper for sharding Module parameters across data parallel workers.
Check if a module is a model wrapper.
Weight Initialization¶
Initialize module parameters with constant values.
Initialize module parameters with the values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K.
Initialize module parameters with the values drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\).
Initialize module by loading a pretrained model.
Initialize module parameters with the values drawn from the normal distribution \(\mathcal{N}(\text{mean}, \text{std}^2)\) with values outside \([a, b]\).
Initialize module parameters with values drawn from the uniform distribution \(\mathcal{U}(a, b)\).
Initialize module parameters with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X.
Initialize conv/fc bias value according to a given probability value.
Initialize a module.
Update the _params_init_info in the module if the value of parameters are changed.
Utils¶
Merge all dictionaries into one dictionary.
Stack multiple tensors to form a batch and pad the tensors to the max shape using the right-bottom padding mode.
Helper function to convert all SyncBatchNorm (SyncBN) and mmcv.ops.sync_bn.SyncBatchNorm (MMSyncBN) layers in the model to BatchNormXd layers.
Helper function to convert all BatchNorm layers in the model to SyncBatchNorm (SyncBN) or mmcv.ops.sync_bn.SyncBatchNorm (MMSyncBN) layers. Adapted from https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html#torch.nn.SyncBatchNorm.convert_sync_batchnorm.
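A small sketch of the functional weight-initialization helpers in use, assuming they are importable from mmengine.model (they also live in mmengine.model.weight_init); the layer shapes are arbitrary:
import torch.nn as nn
from mmengine.model import constant_init, normal_init

conv = nn.Conv2d(3, 16, kernel_size=3)
bn = nn.BatchNorm2d(16)

normal_init(conv, mean=0, std=0.01, bias=0)  # conv weights ~ N(0, 0.01^2), bias = 0
constant_init(bn, val=1, bias=0)             # BN scale = 1, shift = 0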
mmengine.optim¶
Optimizer¶
A subclass of
Optimizer wrapper provides a common interface for updating parameters.
A dictionary container of
Default constructor for optimizers.
Build function of OptimWrapper.
Scheduler¶
Base class for parameter schedulers.
Decays the learning rate value of each parameter group by a small constant factor until the number of epoch reaches a pre-defined milestone:
Decays the momentum value of each parameter group by a small constant factor until the number of epoch reaches a pre-defined milestone:
Decays the parameter value of each parameter group by a small constant factor until the number of epoch reaches a pre-defined milestone:
Set the learning rate of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial value and \(T_{cur}\) is the number of epochs since the last restart in SGDR:
Set the momentum of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial value and \(T_{cur}\) is the number of epochs since the last restart in SGDR:
Set the parameter value of each parameter group using a cosine annealing schedule, where \(\eta_{max}\) is set to the initial value and \(T_{cur}\) is the number of epochs since the last restart in SGDR:
Decays the learning rate of each parameter group by gamma every epoch.
Decays the momentum of each parameter group by gamma every epoch.
Decays the parameter value of each parameter group by gamma every epoch.
Decays the learning rate of each parameter group by linearly changing small multiplicative factor until the number of epoch reaches a pre-defined milestone:
Decays the momentum of each parameter group by linearly changing small multiplicative factor until the number of epoch reaches a pre-defined milestone:
Decays the parameter value of each parameter group by linearly changing small multiplicative factor until the number of epoch reaches a pre-defined milestone:
Decays the specified learning rate in each parameter group by gamma once the number of epoch reaches one of the milestones.
Decays the specified momentum in each parameter group by gamma once the number of epoch reaches one of the milestones.
Decays the specified parameter in each parameter group by gamma once the number of epoch reaches one of the milestones.
Sets the learning rate of each parameter group according to the 1cycle learning rate policy.
Sets the parameters of each parameter group according to the 1cycle learning rate policy.
Decays the learning rate of each parameter group in a polynomial decay scheme.
Decays the momentum of each parameter group in a polynomial decay scheme.
Decays the parameter value of each parameter group in a polynomial decay scheme.
Decays the learning rate of each parameter group by gamma every step_size epochs.
Decays the momentum of each parameter group by gamma every step_size epochs.
Decays the parameter value of each parameter group by gamma every step_size epochs.
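To make the scheduler summaries concrete, a common pattern is to chain a warmup scheduler with a decay scheduler in the Runner's param_scheduler field; the values below are illustrative:
# passed to Runner(param_scheduler=...); the schedulers are applied in sequence
param_scheduler = [
    # linear warmup over the first 500 iterations
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
    # then decay the learning rate by 10x at epochs 8 and 11
    dict(type='MultiStepLR', by_epoch=True, milestones=[8, 11], gamma=0.1),
]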
mmengine.evaluator¶
Evaluator¶
Wrapper class to compose multiple
mmengine.structures¶
A base data interface that supports Tensor-like and dict-like operations.
Data structure for instance-level annotations or predictions.
Data structure for label-level annotations or predictions.
Data structure for pixel-level annotations or predictions.
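A brief sketch of the instance-level structure in use; the field names are arbitrary examples:
import torch
from mmengine.structures import InstanceData

# all fields must share the same first dimension, so the instances can be
# indexed and sliced together
instances = InstanceData()
instances.bboxes = torch.rand(4, 4)
instances.scores = torch.rand(4)
instances.labels = torch.randint(0, 10, (4,))

top2 = instances[instances.scores.argsort(descending=True)[:2]]
print(len(top2))  # 2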
mmengine.dataset¶
Dataset¶
BaseDataset for open source projects in OpenMMLab.
Compose multiple transforms sequentially.
Dataset Wrapper¶
A wrapper of class balanced dataset.
A wrapper of concatenated dataset.
A wrapper of repeated dataset.
Sampler¶
The default data sampler for both distributed and non-distributed environments.
It’s designed for iteration-based runners and yields mini-batch indices each time.
Utils¶
Convert a list of data sampled from the dataset into a batch of data whose type is consistent with the type of each element in the input list.
Convert a list of data sampled from the dataset into a batch of data whose type is consistent with the type of each element in the input list.
This function will be called on each worker subprocess after seeding and before data loading.
mmengine.device¶
Returns the currently existing device type.
Returns the maximum GPU memory occupied by tensors in megabytes (MB) for a given device.
Returns True if cuda devices exist.
Returns True if Ascend PyTorch and npu devices exist.
Returns True if Cambricon PyTorch and mlu devices exist.
Return True if mps devices exist.
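For example, the device helpers can be used to pick a device in a backend-agnostic way (a minimal sketch):
import torch
from mmengine.device import get_device, is_cuda_available

# get_device() returns a device type string such as 'cuda', 'npu', 'mlu',
# 'mps' or 'cpu', depending on what is available in the environment
device = get_device()
print(device, is_cuda_available())
x = torch.zeros(2, 3).to(device)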
mmengine.hub¶
Get config from external package.
Get built model from external package.
mmengine.logging¶
Formatted logger used to record messages.
Message hub for component interaction.
Unified storage format for different log types.
Print a log message.
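A short sketch of the logging helpers; the logger name and message are illustrative:
import logging
from mmengine.logging import MMLogger, print_log

logger = MMLogger.get_instance('mmengine', log_level='INFO')

# print_log routes a message to a logger: a name, an instance, 'current' for
# the most recently created logger, or None to fall back to plain print
print_log('training started', logger='current', level=logging.INFO)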
mmengine.visualization¶
Visualizer¶
MMEngine provides a Visualizer class that uses the
Visualization Backend¶
Base class for visualization backend.
Local visualization backend class.
Tensorboard visualization backend class.
Wandb visualization backend class.
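A hedged sketch of logging scalars and images through the Visualizer with the local backend; the save directory and record names are illustrative:
import numpy as np
from mmengine.visualization import Visualizer

vis = Visualizer(vis_backends=[dict(type='LocalVisBackend')], save_dir='./vis_out')

vis.add_scalar('loss', 0.9, step=1)
vis.add_image('sample',
              np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8),
              step=1)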
mmengine.fileio¶
File Backend¶
Abstract class of storage backends.
A general file client to access files in different backends.
Raw hard disks storage backend.
Raw local storage backend.
HTTP and HTTPS storage backend.
Lmdb storage backend.
Memcached storage backend.
Petrel storage backend (for internal usage).
Register a backend.
File IO¶
Dump data to json/yaml/pickle strings or files.
Load data from json/yaml/pickle files.
Create a symbolic link pointing to src named dst.
Copy a file src to dst and return the destination file.
Copy a local file src to dst and return the destination file.
Copy the file src to local dst and return the destination file.
Recursively copy an entire directory tree rooted at src to a directory named dst and return the destination directory.
Recursively copy an entire directory tree rooted at src to a directory named dst and return the destination directory.
Recursively copy an entire directory tree rooted at src to a local directory named dst and return the destination directory.
Check whether a file path exists.
Generate the presigned url of video stream which can be passed to mmcv.VideoReader.
Read bytes from a given
Return a file backend based on the prefix of uri or backend_args.
Download data from
Read text from a given
Check whether a file path is a directory.
Check whether a file path is a file.
Concatenate all file paths.
Scan a directory to find the interested directories or files in arbitrary order.
Write bytes to a given
Write text to a given
Remove a file.
Recursively delete a directory tree.
Parse File¶
Load a text file and parse the content as a dict.
Load a text file and parse the content as a list of strings.
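As a quick illustration of the high-level file IO helpers, a minimal sketch (the file names are made up):
from mmengine.fileio import dump, load, list_from_file

# dump/load infer the format from the file suffix (json, yaml, pickle)
dump(dict(accuracy=35.7, epoch=1), 'result.json')
result = load('result.json')
print(result['accuracy'])

# parse a plain-text file into a list of stripped lines
# classes = list_from_file('classes.txt')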
mmengine.dist¶
dist¶
Gather data from the whole group to
Gathers picklable objects from the whole group in a single process.
Gather data from the whole group in a list.
Gather picklable objects from the whole group into a list.
Reduces the tensor data across all machines in such a way that all get the final result.
Reduces the dict across all machines in such a way that all get the final result.
All-reduce parameters.
Broadcast the data from
Synchronize a random seed to all processes.
Broadcasts picklable objects in
Collected results in distributed environments.
Collect results under cpu mode.
Collect results under gpu mode.
utils¶
Get distributed information of the given process group.
Initialize distributed environment.
Setup the local process group.
Return the backend of the given process group.
Return the number of the given process group.
Return the rank of the given process group.
Return the number of the current node.
Return the rank of current process in the current node.
Whether the current rank of the given process group is equal to 0.
Decorate those methods which should be executed in master process.
Synchronize all processes from the given process group.
Return True if distributed environment has been initialized.
Return local process group.
Return default process group.
Return the device of
Return the device for communication among groups.
Recursively convert Tensor in
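A minimal sketch combining the rank/size helpers with an all-reduce; it also runs in a non-distributed setting, where the world size is simply 1 and the reduction is a no-op:
import torch
from mmengine.dist import all_reduce, get_rank, get_world_size, is_main_process

rank, world_size = get_rank(), get_world_size()

# all_reduce works in place; with op='mean' every process ends up with the
# average value across the group
local_loss = torch.tensor(0.5 * (rank + 1))
all_reduce(local_loss, op='mean')

if is_main_process():
    print(f'world size: {world_size}, mean loss: {local_loss.item()}')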
mmengine.utils¶
Manager¶
The metaclass for global accessible class.
Path¶
Check if path is an absolute path in different backends.
Scan a directory to find the interested files.
Progress Bar¶
A progress bar which can print the progress.
Track the progress of tasks iteration or enumeration with a progress bar.
Track the progress of parallel task execution with a progress bar.
Track the progress of tasks execution with a progress bar.
Miscellaneous¶
A flexible Timer class.
Check whether it is a list of some type.
Check whether it is a tuple of some type.
Check whether it is a sequence of some type.
Whether the input is a string instance.
Cast elements of an iterable object into some type.
Cast elements of an iterable object into a list of some type.
Cast elements of an iterable object into a tuple of some type.
Concatenate a list of lists into a single list.
Slice a list into several sub lists by a list of given lengths.
A decorator factory to check if prerequisites are satisfied.
A decorator to check if some arguments are deprecated and try to replace the deprecated src_arg_name with dst_arg_name.
Marks functions as deprecated.
Check whether the object has a method.
Check if a method of the base class is overridden in the derived class.
Import modules from the given list of strings.
A decorator to check if some executable files are installed.
A decorator to check if some python packages are installed.
Add check points in a single line.
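Two of the small helpers above in action (a brief sketch):
import time
from mmengine.utils import is_list_of, track_iter_progress

print(is_list_of([1, 2, 3], int))  # True
print(is_list_of([1, 'a'], int))   # False

# wrap a sized iterable to print a console progress bar while looping
for _ in track_iter_progress(list(range(5))):
    time.sleep(0.1)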
mmengine.utils.dl_utils¶
A tool that counts the average running time of a function or a method.
Collect the information of the running environments.
Loads the Torch serialized object at the given URL.
Detect whether the model has a BatchNormalization layer.
Check if a layer is a normalization layer.
Check whether mmcv-full is installed.
Convert tensor to 3-channel images or 1-channel gray images.
A string with magic powers to compare to both Version and iterables! Prior to 1.10.0 torch.__version__ was stored as a str and so many did comparisons against torch.__version__ as if it were a str.
Set multi-processing related environment.
A wrapper of torch.meshgrid to be compatible with different PyTorch versions.
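For example, collect_env gives a quick snapshot of the running environment, similar to the header of the training log shown earlier (a minimal sketch):
from mmengine.utils.dl_utils import collect_env

# returns an ordered dict with entries such as 'sys.platform', 'Python',
# 'CUDA available' and 'PyTorch'
for name, value in collect_env().items():
    print(f'{name}: {value}')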
Changelog of v0.x¶
v0.3.0 (11/02/2022)¶
New Features & Enhancements¶
Support running on Ascend chip by @wangjiangben-hw in https://github.com/open-mmlab/mmengine/pull/572
Support torch ZeroRedundancyOptimizer by @nijkah in https://github.com/open-mmlab/mmengine/pull/551
Add non-blocking feature to BaseDataPreprocessor by @shenmishajing in https://github.com/open-mmlab/mmengine/pull/618
Add documents for clip_grad, and support clip grad by value. by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/513
Add ROCm info when collecting env by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/633
Add a function to mark the deprecated function. by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/609
Call register_all_modules in Registry.get() by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/541
Deprecate _save_to_state_dict implemented in mmengine by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/610
Add ignore_keys in ConcatDataset by @BIGWangYuDong in https://github.com/open-mmlab/mmengine/pull/556
Docs¶
Fix cannot show changelog.md in chinese documents. by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/606
Fix Chinese docs whitespaces by @C1rN09 in https://github.com/open-mmlab/mmengine/pull/521
Translate installation and 15_min by @xin-li-67 in https://github.com/open-mmlab/mmengine/pull/629
Refine chinese doc by @Tau-J in https://github.com/open-mmlab/mmengine/pull/516
Add MMYOLO link in README by @Xiangxu-0103 in https://github.com/open-mmlab/mmengine/pull/634
Add MMEngine logo in docs by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/641
Fix docstring of BaseDataset by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/656
Fix docstring and documentation used for hub.get_model by @zengyh1900 in https://github.com/open-mmlab/mmengine/pull/659
Fix typo in docs/zh_cn/advanced_tutorials/visualization.md by @MambaWong in https://github.com/open-mmlab/mmengine/pull/616
Fix typo docstring of DefaultOptimWrapperConstructor by @triple-Mu in https://github.com/open-mmlab/mmengine/pull/644
Fix typo in advanced tutorial by @cxiang26 in https://github.com/open-mmlab/mmengine/pull/650
Fix typo in Config docstring by @sanbuphy in https://github.com/open-mmlab/mmengine/pull/654
Fix typo in docs/zh_cn/tutorials/config.md by @Xiangxu-0103 in https://github.com/open-mmlab/mmengine/pull/596
Fix typo in docs/zh_cn/tutorials/model.md by @C1rN09 in https://github.com/open-mmlab/mmengine/pull/598
Bug Fixes¶
Fix error calculation of eta_min in CosineRestartParamScheduler by @Z-Fran in https://github.com/open-mmlab/mmengine/pull/639
Fix BaseDataPreprocessor.cast_data could not handle string data by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/602
Make autocast compatible with mps by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/587
Fix error format of log message by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/508
Fix error implementation of is_model_wrapper by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/640
Fix VisBackend.add_config is not called by @shenmishajing in https://github.com/open-mmlab/mmengine/pull/613
Change strict_load of EMAHook to False by default by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/642
Fix open encoding problem of Config in Windows by @sanbuphy in https://github.com/open-mmlab/mmengine/pull/648
Fix the total number of iterations in log is a float number. by @jbwang1997 in https://github.com/open-mmlab/mmengine/pull/604
Fix pip upgrade CI by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/622
New Contributors¶
@shenmishajing made their first contribution in https://github.com/open-mmlab/mmengine/pull/618
@Xiangxu-0103 made their first contribution in https://github.com/open-mmlab/mmengine/pull/596
@Tau-J made their first contribution in https://github.com/open-mmlab/mmengine/pull/516
@wangjiangben-hw made their first contribution in https://github.com/open-mmlab/mmengine/pull/572
@triple-Mu made their first contribution in https://github.com/open-mmlab/mmengine/pull/644
@sanbuphy made their first contribution in https://github.com/open-mmlab/mmengine/pull/648
@Z-Fran made their first contribution in https://github.com/open-mmlab/mmengine/pull/639
@BIGWangYuDong made their first contribution in https://github.com/open-mmlab/mmengine/pull/556
@zengyh1900 made their first contribution in https://github.com/open-mmlab/mmengine/pull/659
v0.2.0 (11/10/2022)¶
New Features & Enhancements¶
Add SMDDP backend and support running on AWS by @austinmw in https://github.com/open-mmlab/mmengine/pull/579
Refactor FileIO but without breaking bc by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/533
Add test time augmentation base model by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/538
Use torch.lerp_() to speed up EMA by @RangiLyu in https://github.com/open-mmlab/mmengine/pull/519
Support converting BN to SyncBN by config by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/506
Support defining metric name in wandb backend by @okotaku in https://github.com/open-mmlab/mmengine/pull/509
Add dockerfile by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/347
Docs¶
Fix API files of English documentation by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/525
Fix typo in instance_data.py by @Dai-Wenxun in https://github.com/open-mmlab/mmengine/pull/530
Fix the docstring of the model sub-package by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/573
Fix a spelling error in docs/zh_cn by @cxiang26 in https://github.com/open-mmlab/mmengine/pull/548
Fix typo in docstring by @MengzhangLI in https://github.com/open-mmlab/mmengine/pull/527
Update config.md by @Zhengfei-0311 in https://github.com/open-mmlab/mmengine/pull/562
Bug Fixes¶
Fix LogProcessor does not smooth loss if the name of loss doesn’t start with loss by @liuyanyi in https://github.com/open-mmlab/mmengine/pull/539
Fix failed to enable detect_anomalous_params in MMSeparateDistributedDataParallel by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/588
Fix CheckpointHook behavior unexpected if given filename_tmpl argument by @C1rN09 in https://github.com/open-mmlab/mmengine/pull/518
Fix error argument sequence in FSDP by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/520
Fix uploading image in wandb backend @okotaku in https://github.com/open-mmlab/mmengine/pull/510
Fix loading state dictionary in EMAHook by @okotaku in https://github.com/open-mmlab/mmengine/pull/507
Fix circle import in EMAHook by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/523
Fix unit test could fail caused by MultiProcessTestCase by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/535
Remove unnecessary “if statement” in Registry by @MambaWong in https://github.com/open-mmlab/mmengine/pull/536
Fix _save_to_state_dict by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/542
Support comparing NumPy array dataset meta in Runner.resume by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/511
Use get instead of pop to dump runner_type in build_runner_from_cfg by @nijkah in https://github.com/open-mmlab/mmengine/pull/549
Upgrade pre-commit hooks by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/576
Delete the error comment in registry.md by @vansin in https://github.com/open-mmlab/mmengine/pull/514
Fix Some out-of-date unit tests by @C1rN09 in https://github.com/open-mmlab/mmengine/pull/586
Fix typo in MMFullyShardedDataParallel by @yhna940 in https://github.com/open-mmlab/mmengine/pull/569
Update Github Action CI and CircleCI by @zhouzaida in https://github.com/open-mmlab/mmengine/pull/512
Fix unit test in windows by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/515
Fix merge ci & multiprocessing unit test by @HAOCHENYE in https://github.com/open-mmlab/mmengine/pull/529
New Contributors¶
@okotaku made their first contribution in https://github.com/open-mmlab/mmengine/pull/510
@MengzhangLI made their first contribution in https://github.com/open-mmlab/mmengine/pull/527
@MambaWong made their first contribution in https://github.com/open-mmlab/mmengine/pull/536
@cxiang26 made their first contribution in https://github.com/open-mmlab/mmengine/pull/548
@nijkah made their first contribution in https://github.com/open-mmlab/mmengine/pull/549
@Zhengfei-0311 made their first contribution in https://github.com/open-mmlab/mmengine/pull/562
@austinmw made their first contribution in https://github.com/open-mmlab/mmengine/pull/579
@yhna940 made their first contribution in https://github.com/open-mmlab/mmengine/pull/569
@liuyanyi made their first contribution in https://github.com/open-mmlab/mmengine/pull/539