Visualize Training Logs

MMEngine integrates experiment management tools such as TensorBoard, Weights & Biases (WandB), MLflow, ClearML, Neptune, DVCLive and Aim, making it easy to track and visualize metrics like loss and accuracy.

Below, we’ll show you how to configure an experiment management tool in just one line, based on the example from 15 minutes to get started with MMEngine.

TensorBoard

Configure the visualizer in the initialization parameters of the Runner, and set vis_backends to TensorboardVisBackend.

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
    visualizer=dict(type='Visualizer', vis_backends=[dict(type='TensorboardVisBackend')]),
)
runner.train()
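Once training starts, the TensorBoard event files are written under the run's `work_dir` (typically inside a timestamped `vis_data` subdirectory). TensorBoard discovers event files recursively, so pointing it at the work directory is enough:

```shell
tensorboard --logdir ./work_dir
```

Then open the printed URL (by default `http://localhost:6006`) in a browser to view the logged metrics.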

WandB

Before using WandB, you need to install the wandb dependency library and log in to WandB.

pip install wandb
wandb login

Configure the visualizer in the initialization parameters of the Runner, and set vis_backends to WandbVisBackend.

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
    visualizer=dict(type='Visualizer', vis_backends=[dict(type='WandbVisBackend')]),
)
runner.train()



You can click on WandbVisBackend API to view the configurable parameters for WandbVisBackend. For example, the init_kwargs parameter will be passed to the wandb.init method.

runner = Runner(
    ...
    visualizer=dict(
        type='Visualizer',
        vis_backends=[
            dict(
                type='WandbVisBackend',
                init_kwargs=dict(project='toy-example')
            ),
        ],
    ),
    ...
)
runner.train()
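Any keyword accepted by `wandb.init` can go in `init_kwargs`. As a sketch, you could also label and group runs; the `name` and `tags` values below are made-up example values, not required parameters:

```python
# All keys in init_kwargs are forwarded to wandb.init.
wandb_backend = dict(
    type='WandbVisBackend',
    init_kwargs=dict(
        project='toy-example',          # W&B project to log into
        name='resnet50-baseline',       # display name of this run (example value)
        tags=['resnet50', 'baseline'],  # free-form tags for filtering (example values)
    ),
)
visualizer = dict(type='Visualizer', vis_backends=[wandb_backend])
```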

MLflow (WIP)

ClearML

Before using ClearML, you need to install the clearml dependency library and refer to Connect ClearML SDK to the Server for configuration.

pip install clearml
clearml-init

Configure the visualizer in the initialization parameters of the Runner, and set vis_backends to ClearMLVisBackend.

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
    visualizer=dict(type='Visualizer', vis_backends=[dict(type='ClearMLVisBackend')]),
)
runner.train()
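`ClearMLVisBackend` also accepts an `init_kwargs` dict, which (as with the other backends) is forwarded to the underlying SDK's initializer, here `clearml.Task.init`. A minimal sketch, assuming that forwarding behavior; the project and task names are example values:

```python
# Keys in init_kwargs are passed through to clearml.Task.init.
clearml_backend = dict(
    type='ClearMLVisBackend',
    init_kwargs=dict(
        project_name='toy-example',   # ClearML project (example value)
        task_name='resnet50-train',   # task name shown in the ClearML UI (example value)
    ),
)
visualizer = dict(type='Visualizer', vis_backends=[clearml_backend])
```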


Neptune

Before using Neptune, you need to install the neptune dependency library and refer to Neptune.AI for configuration.

pip install neptune

Configure the visualizer in the initialization parameters of the Runner, and set vis_backends to NeptuneVisBackend.

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
    visualizer=dict(type='Visualizer', vis_backends=[dict(type='NeptuneVisBackend')]),
)
runner.train()


Please note: if project and api_token are not specified, Neptune falls back to offline mode and the generated files are saved to the local .neptune directory. It is recommended to specify project and api_token during initialization as shown below.

runner = Runner(
    ...
    visualizer=dict(
        type='Visualizer',
        vis_backends=[
            dict(
                type='NeptuneVisBackend',
                init_kwargs=dict(project='workspace-name/project-name',
                                 api_token='your api token')
            ),
        ],
    ),
    ...
)
runner.train()

More initialization configuration parameters are available at neptune.init_run API.
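If you prefer not to hard-code credentials in the config, Neptune also reads the standard `NEPTUNE_PROJECT` and `NEPTUNE_API_TOKEN` environment variables when `project` and `api_token` are omitted from `init_kwargs`. A sketch (the values are placeholders):

```python
import os

# Placeholder credentials; in practice, export these in your shell or CI
# environment instead of setting them in code.
os.environ.setdefault('NEPTUNE_PROJECT', 'workspace-name/project-name')
os.environ.setdefault('NEPTUNE_API_TOKEN', 'your api token')

# With the environment variables set, no init_kwargs are needed.
visualizer = dict(
    type='Visualizer',
    vis_backends=[dict(type='NeptuneVisBackend')],
)
```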

DVCLive

Before using DVCLive, you need to install the dvclive dependency library and refer to iterative.ai for configuration. Common configurations are as follows:

pip install dvclive
cd ${WORK_DIR}
git init
dvc init
git commit -m "DVC init"

Configure the visualizer in the initialization parameters of the Runner, and set vis_backends to DVCLiveVisBackend.

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir_dvc',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
    visualizer=dict(type='Visualizer', vis_backends=[dict(type='DVCLiveVisBackend')]),
)
runner.train()

Note

It is recommended not to set work_dir to work_dirs; otherwise, DVC will emit the warning WARNING:dvclive:Error in cache: bad DVC file name 'work_dirs\xxx.dvc' is git-ignored when you run experiments in an OpenMMLab repository.

Open the report.html file under work_dir_dvc, and you will see the visualization as shown in the following image.


You can also configure a VSCode extension of DVC to visualize the training process.

More initialization configuration parameters are available at DVCLive API Reference.
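For instance, assuming `DVCLiveVisBackend` forwards `init_kwargs` to the `dvclive.Live` constructor (as the other backends forward to their SDKs), you could adjust where metrics are written and how the report is generated. Treat the parameter names below as a sketch based on the DVCLive API, not a guaranteed interface:

```python
# Keys in init_kwargs would be passed through to dvclive.Live.
dvclive_backend = dict(
    type='DVCLiveVisBackend',
    init_kwargs=dict(
        dir='dvclive',   # directory DVCLive writes metrics into (example value)
        report='html',   # regenerate report.html as training progresses
    ),
)
visualizer = dict(type='Visualizer', vis_backends=[dvclive_backend])
```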

Aim

Before using Aim, you need to install the aim dependency library.

pip install aim

Configure the visualizer in the initialization parameters of the Runner, and set vis_backends to AimVisBackend.

runner = Runner(
    model=MMResNet50(),
    work_dir='./work_dir',
    train_dataloader=train_dataloader,
    optim_wrapper=dict(optimizer=dict(type=SGD, lr=0.001, momentum=0.9)),
    train_cfg=dict(by_epoch=True, max_epochs=5, val_interval=1),
    val_dataloader=val_dataloader,
    val_cfg=dict(),
    val_evaluator=dict(type=Accuracy),
    visualizer=dict(type='Visualizer', vis_backends=[dict(type='AimVisBackend')]),
)
runner.train()

In the terminal, use the following command,

aim up

or in the Jupyter Notebook, use the following command,

%load_ext aim
%aim up

to launch the Aim UI as shown below.


Initialization configuration parameters are available at Aim SDK Reference.
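For instance, assuming `AimVisBackend` forwards `init_kwargs` to `aim.Run`, you could point logging at a specific Aim repository and experiment. The values below are example values; consult the Aim SDK Reference for the authoritative parameter list:

```python
# Keys in init_kwargs would be passed through to aim.Run.
aim_backend = dict(
    type='AimVisBackend',
    init_kwargs=dict(
        repo='./aim_repo',       # Aim repository location (example value)
        experiment='resnet50',   # experiment name shown in the Aim UI (example value)
    ),
)
visualizer = dict(type='Visualizer', vis_backends=[aim_backend])
```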