BaseModel¶
- class mmengine.model.BaseModel(data_preprocessor=None, init_cfg=None)[source]¶
Base class for all algorithmic models.
BaseModel implements the basic functions of an algorithmic model, such as weight initialization, batch input preprocessing (see BaseDataPreprocessor for details), loss parsing, and model parameter updates. Subclasses inheriting from BaseModel only need to implement the forward method, which contains the logic to compute losses and predictions; the model can then be trained by the runner.
Examples
>>> @MODELS.register_module()
>>> class ToyModel(BaseModel):
>>>
>>>     def __init__(self):
>>>         super().__init__()
>>>         self.backbone = nn.Sequential()
>>>         self.backbone.add_module('conv1', nn.Conv2d(3, 6, 5))
>>>         self.backbone.add_module('pool', nn.MaxPool2d(2, 2))
>>>         self.backbone.add_module('conv2', nn.Conv2d(6, 16, 5))
>>>         self.backbone.add_module('fc1', nn.Linear(16 * 5 * 5, 120))
>>>         self.backbone.add_module('fc2', nn.Linear(120, 84))
>>>         self.backbone.add_module('fc3', nn.Linear(84, 10))
>>>
>>>         self.criterion = nn.CrossEntropyLoss()
>>>
>>>     def forward(self, batch_inputs, data_samples, mode='tensor'):
>>>         data_samples = torch.stack(data_samples)
>>>         if mode == 'tensor':
>>>             return self.backbone(batch_inputs)
>>>         elif mode == 'predict':
>>>             feats = self.backbone(batch_inputs)
>>>             predictions = torch.argmax(feats, 1)
>>>             return predictions
>>>         elif mode == 'loss':
>>>             feats = self.backbone(batch_inputs)
>>>             loss = self.criterion(feats, data_samples)
>>>             return dict(loss=loss)
- Parameters
data_preprocessor (dict, optional) – The pre-process config of BaseDataPreprocessor.
init_cfg (dict, optional) – The weight initialization config for BaseModule.
- data_preprocessor¶
Used for pre-processing data sampled by the dataloader into the format accepted by forward().
- Type
BaseDataPreprocessor
- cpu(*args, **kwargs)[source]¶
Overrides this method to additionally call BaseDataPreprocessor.cpu().
- Returns
The model itself.
- Return type
nn.Module
- cuda(device=None)[source]¶
Overrides this method to additionally call BaseDataPreprocessor.cuda().
- Parameters
device (Optional[Union[int, str, torch.device]]) –
- Returns
The model itself.
- Return type
nn.Module
- abstract forward(inputs, data_samples=None, mode='tensor')[source]¶
Returns losses or predictions for the training, validation, testing, and simple inference processes.
forward is an abstract method of BaseModel; subclasses must implement it. It accepts batch_inputs and data_samples processed by data_preprocessor and returns results according to the mode argument.
During non-distributed training, validation, and testing, forward is called directly by BaseModel.train_step, BaseModel.val_step, and BaseModel.test_step.
During distributed data-parallel training, MMSeparateDistributedDataParallel.train_step first calls DistributedDataParallel.forward to enable automatic gradient synchronization, and then calls forward to get the training loss.
- Parameters
inputs (torch.Tensor) – batch input tensor collated by data_preprocessor.
data_samples (list, optional) – data samples collated by data_preprocessor.
mode (str) – should be one of loss, predict, and tensor:
- loss: called by train_step; returns a dict of losses used for logging.
- predict: called by val_step and test_step; returns a list of results used for computing metrics.
- tensor: called for custom use to get Tensor-type results.
- Returns
If mode == loss, returns a dict of loss tensors used for backward and logging.
If mode == predict, returns a list of inference results.
If mode == tensor, returns a tensor, a tuple of tensors, or a dict of tensors for custom use.
- Return type
dict or list or torch.Tensor
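The three-mode contract can be exercised without a runner. The sketch below is pure Python (no torch); ToyClassifier and its integer "features" are hypothetical stand-ins used only to show which caller is expected to pass which mode:

```python
class ToyClassifier:
    """Pure-Python stand-in for a BaseModel subclass (illustration only)."""

    def _feats(self, inputs):
        # Pretend feature extraction: two integer scores per input.
        return [[x, x * 2] for x in inputs]

    def forward(self, inputs, data_samples=None, mode='tensor'):
        feats = self._feats(inputs)
        if mode == 'loss':
            # train_step path: return a dict of losses used for
            # backward and logging.
            l1 = sum(abs(f[1] - y) for f, y in zip(feats, data_samples))
            return {'loss': l1}
        elif mode == 'predict':
            # val_step/test_step path: return a list of per-sample results.
            return [0 if f[0] > f[1] else 1 for f in feats]
        # 'tensor' path: raw features for custom use.
        return feats

model = ToyClassifier()
```

In a real subclass the same three branches operate on tensors, and only the 'loss' branch is required for training to work.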
- mlu(device=None)[source]¶
Overrides this method to additionally call BaseDataPreprocessor.mlu().
- Parameters
device (Optional[Union[int, str, torch.device]]) –
- Returns
The model itself.
- Return type
nn.Module
- npu(device=None)[source]¶
Overrides this method to additionally call BaseDataPreprocessor.npu().
- Parameters
device (Optional[Union[int, str, torch.device]]) –
- Returns
The model itself.
- Return type
nn.Module
Note
This generation of NPU (Ascend 910) does not support using multiple cards in a single process, so the device index here needs to be consistent with the default device.
- parse_losses(losses)[source]¶
Parses the raw outputs (losses) of the network.
- Parameters
losses (dict) – Raw output of the network, which usually contains losses and other necessary information.
- Returns
A tuple of two elements. The first is the loss tensor passed to optim_wrapper, which may be a weighted sum of all losses; the second is log_vars, which will be sent to the logger.
- Return type
Tuple[torch.Tensor, Dict[str, torch.Tensor]]
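The parsing contract can be mimicked in plain Python, with floats standing in for tensors. This is a sketch of the behavior described above, not mmengine's implementation; in particular, the "sum every entry whose key contains 'loss'" rule is taken from the description, and tensor averaging is omitted:

```python
def parse_losses(losses):
    """Float-based sketch of the parse_losses contract (illustration only)."""
    log_vars = {}
    for name, value in losses.items():
        # Lists of per-level losses are reduced to a single number;
        # the real implementation averages tensors first.
        if isinstance(value, (list, tuple)):
            log_vars[name] = sum(value)
        else:
            log_vars[name] = value
    # The total loss handed to optim_wrapper is the sum of every entry
    # whose key contains 'loss'; other entries (e.g. accuracy) are
    # logged but not back-propagated.
    loss = sum(v for k, v in log_vars.items() if 'loss' in k)
    log_vars['loss'] = loss
    return loss, log_vars

loss, log_vars = parse_losses({'loss_cls': 1.0, 'loss_bbox': 0.5, 'acc': 0.9})
```

Here loss is 1.5 (loss_cls + loss_bbox), while acc only appears in log_vars.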
- to(*args, **kwargs)[source]¶
Overrides this method to additionally call BaseDataPreprocessor.to().
- Returns
The model itself.
- Return type
nn.Module
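The reason cpu(), cuda(), mlu(), npu(), and to() are all overridden is the same: the data_preprocessor holds device state that must track the model's parameters, so collated batches land on the same device. A device-free sketch of that override pattern (hypothetical stand-in classes, plain strings for devices; not mmengine code):

```python
class _Preprocessor:
    """Stand-in for BaseDataPreprocessor: tracks its own device."""

    def __init__(self):
        self.device = 'cpu'

    def to(self, device):
        self.device = device
        return self


class _Model:
    """Stand-in for BaseModel: `to` moves the preprocessor as well."""

    def __init__(self):
        self.device = 'cpu'
        self.data_preprocessor = _Preprocessor()

    def to(self, device):
        # Without this override, batches collated by data_preprocessor
        # would stay on the old device while parameters moved,
        # causing device-mismatch errors in forward.
        self.device = device
        self.data_preprocessor.to(device)
        return self


m = _Model().to('cuda:0')
```

After the call, both the model and its preprocessor report the new device.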
- train_step(data, optim_wrapper)[source]¶
Implements the default model training process, including preprocessing, model forward propagation, loss calculation, optimization, and back-propagation.
During non-distributed training, if subclasses do not override train_step(), EpochBasedTrainLoop or IterBasedTrainLoop will call this method to update model parameters. The default parameter update process is as follows:
1. Calls self.data_preprocessor(data, training=True) to collect batch_inputs and corresponding data_samples (labels).
2. Calls self(batch_inputs, data_samples, mode='loss') to get the raw loss dict.
3. Calls self.parse_losses to get a parsed_losses tensor used for backward and a dict of loss tensors used for logging.
4. Calls optim_wrapper.update_params(loss) to update the model.
- Parameters
data (dict or tuple or list) – Data sampled from the dataloader.
optim_wrapper (OptimWrapper) – OptimWrapper instance used to update model parameters.
- Returns
A dict of tensors for logging.
- Return type
Dict[str, torch.Tensor]
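The four steps above can be strung together without torch or a runner. Every class below is a hypothetical pure-Python stand-in (floats in place of tensors, an L1 loss chosen arbitrarily), sketching the control flow rather than mmengine's implementation:

```python
class _OptimWrapper:
    """Stand-in for OptimWrapper: records the loss it was asked to apply."""

    def __init__(self):
        self.last_loss = None

    def update_params(self, loss):
        # Real wrapper: loss.backward(); optimizer.step(); zero_grad().
        self.last_loss = loss


class _ToyModel:
    def data_preprocessor(self, data, training=True):
        # Step 1: split the raw batch into inputs and labels.
        return data['inputs'], data['data_samples']

    def __call__(self, batch_inputs, data_samples, mode='loss'):
        # Step 2: forward in 'loss' mode returns a raw loss dict.
        l1 = sum(abs(x - y) for x, y in zip(batch_inputs, data_samples))
        return {'loss_l1': l1}

    def parse_losses(self, losses):
        # Step 3: reduce the raw dict to one scalar plus a log dict.
        loss = sum(v for k, v in losses.items() if 'loss' in k)
        return loss, dict(losses, loss=loss)

    def train_step(self, data, optim_wrapper):
        batch_inputs, data_samples = self.data_preprocessor(data, training=True)
        losses = self(batch_inputs, data_samples, mode='loss')
        parsed_loss, log_vars = self.parse_losses(losses)
        # Step 4: hand the scalar loss to the optimizer wrapper.
        optim_wrapper.update_params(parsed_loss)
        return log_vars


ow = _OptimWrapper()
logs = _ToyModel().train_step(
    {'inputs': [1.0, 2.0], 'data_samples': [1.5, 2.5]}, ow)
```

The returned log_vars dict is what the loops forward to the logger, while the optimizer wrapper only ever sees the single parsed loss.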