Evaluator
- class mmengine.evaluator.Evaluator(metrics)
Wrapper class to compose multiple BaseMetric instances.
- Parameters:
metrics (dict or BaseMetric or Sequence) – The configs or instances of the metrics to compose.
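A minimal sketch of composing an Evaluator. SimpleAccuracy is a hypothetical BaseMetric subclass written here for illustration; it is not part of mmengine.

```python
from mmengine.evaluator import BaseMetric, Evaluator


class SimpleAccuracy(BaseMetric):
    """Toy metric that counts exact label matches (illustrative only)."""

    def process(self, data_batch, data_samples):
        # Each processed sample arrives as a plain dict; the field names
        # pred_label and gt_label are assumptions for this sketch.
        for sample in data_samples:
            self.results.append(int(sample['pred_label'] == sample['gt_label']))

    def compute_metrics(self, results):
        return {'accuracy': sum(results) / len(results)}


# metrics may be a single BaseMetric instance, a config dict, or a sequence of either.
evaluator = Evaluator(metrics=SimpleAccuracy())
```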
- evaluate(size)
Invoke the evaluate method of each metric and collect the metrics dictionary.
- Parameters:
size (int) – Length of the entire validation dataset. When batch size > 1, the dataloader may pad some data samples to make sure all ranks have the same length of dataset slice. The collect_results function will drop the padded data based on this size.
- Returns:
Evaluation results of all metrics. The keys are the names of the metrics, and the values are corresponding results.
- Return type:
dict
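A minimal sketch of the online evaluation pattern this method completes, assuming the evaluator built in the sketch above; model, val_dataloader, and val_dataset are placeholders for user-provided objects.

```python
# Hypothetical validation loop; model, val_dataloader, and val_dataset
# are placeholders for user-provided objects.
for data_batch in val_dataloader:
    outputs = model.val_step(data_batch)  # a list of prediction samples
    evaluator.process(data_samples=outputs, data_batch=data_batch)

# size lets the evaluator drop samples padded by the distributed sampler.
metrics = evaluator.evaluate(size=len(val_dataset))
print(metrics)  # e.g. {'accuracy': 0.93}
```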
- offline_evaluate(data_samples, data=None, chunk_size=1)
Offline evaluate the dumped predictions on the given data.
- Parameters:
data_samples (Sequence) – All predictions of the model and the ground truth of the validation set.
data (Sequence, optional) – All data of the validation set.
chunk_size (int) – The number of data samples and predictions to be processed in a batch.
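A sketch of offline evaluation on dumped predictions, reusing the evaluator from above; the pickle path and its contents are assumptions for illustration.

```python
import pickle

# Load predictions that were dumped during an earlier run (hypothetical path).
with open('work_dir/predictions.pkl', 'rb') as f:
    data_samples = pickle.load(f)  # a sequence of prediction dicts

# Predictions are fed to the metrics in chunks of chunk_size.
metrics = evaluator.offline_evaluate(data_samples, chunk_size=128)
```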
- process(data_samples, data_batch=None)
Convert BaseDataElement to dict and invoke the process method of each metric.
- Parameters:
data_samples (Sequence[BaseDataElement]) – Predictions of the model and the ground truth of the validation set.
data_batch (Any, optional) – A batch of data from the dataloader.
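A sketch showing that predictions wrapped in BaseDataElement are converted to plain dicts before each metric's process method runs; the field names are illustrative only.

```python
from mmengine.structures import BaseDataElement

# The evaluator converts BaseDataElement instances to dicts, so metrics
# can index fields such as sample['pred_label'] (names assumed here).
sample = BaseDataElement(pred_label=1, gt_label=1)
evaluator.process(data_samples=[sample], data_batch=None)
```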