Logging¶
This page describes the logging functions included with Emmental, which log training information and checkpoints.
Logging Classes¶
The following docs describe elements of Emmental’s logging utilities.
Emmental logging module.
- class emmental.logging.Checkpointer[source]¶
Bases: object
Checkpointing class to log training information.
- checkpoint(iteration, model, optimizer, lr_scheduler, metric_dict)[source]¶
Save a checkpoint.
- Parameters
  - iteration (Union[float, int]) – The current iteration.
  - model (EmmentalModel) – The model to checkpoint.
  - optimizer (Optimizer) – The optimizer used during the training process.
  - lr_scheduler (_LRScheduler) – The learning rate scheduler.
  - metric_dict (Dict[str, float]) – The metric dict.
- Return type
  None
- is_new_best(metric_dict)[source]¶
Update the best score.
- Parameters
  - metric_dict (Dict[str, float]) – The current metric dict.
- Return type
  Set[str]
- Returns
  The updated best metric set.
- load_best_model(model)[source]¶
Load the best model from the checkpoint.
- Parameters
  - model (EmmentalModel) – The current model.
- Return type
  EmmentalModel
- Returns
  The best model loaded from the checkpoint.
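The best-metric bookkeeping behind `is_new_best` can be sketched as follows. This is a minimal, self-contained illustration of min/max metric tracking, not Emmental's actual implementation; the `best_metrics` and `modes` arguments are assumptions standing in for the checkpointer's internal state and its configured `checkpoint_metric` modes.

```python
# Sketch of best-metric tracking in the spirit of Checkpointer.is_new_best
# (simplified; not Emmental's actual code).

def is_new_best(metric_dict, best_metrics, modes):
    """Return the set of metrics that improved, updating best_metrics.

    metric_dict: current metric values, e.g. {"model/train/all/loss": 0.3}
    best_metrics: best values seen so far (mutated in place)
    modes: per-metric direction, "min" or "max"
    """
    improved = set()
    for name, value in metric_dict.items():
        if name not in modes:
            continue
        best = best_metrics.get(name)
        if best is None:
            better = True
        elif modes[name] == "min":
            better = value < best
        else:  # "max"
            better = value > best
        if better:
            best_metrics[name] = value
            improved.add(name)
    return improved
```

A metric is considered improved on its first observation, or whenever it moves in its configured direction; the returned set is what drives best-model checkpointing.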
- class emmental.logging.JsonWriter[source]¶
Bases: emmental.logging.log_writer.LogWriter
A class for logging to a JSON file during the training process.
- add_scalar(name, value, step)[source]¶
Log a scalar variable.
- Parameters
  - name (str) – The name of the scalar.
  - value (Union[float, int]) – The value of the scalar.
  - step (Union[float, int]) – The current step.
- Return type
  None
- add_scalar_dict(metric_dict, step)¶
Log a dict of scalar variables.
- Parameters
  - metric_dict (Dict[str, Union[float, int]]) – The metric dict.
  - step (Union[float, int]) – The current step.
- Return type
  None
- close()¶
Close the log writer.
- Return type
None
- write_config(config_filename='config.yaml')¶
Dump the config to file.
- Parameters
  - config_filename (str) – The config filename, defaults to “config.yaml”.
- Return type
  None
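In spirit, a JSON writer accumulates each scalar under its metric name and dumps the result to disk on `write_log`. A minimal self-contained sketch (the internal layout of the log dict is an assumption, not Emmental's actual format):

```python
import json
from collections import defaultdict

# Sketch of a JSON-backed log writer, loosely mirroring the
# add_scalar / write_log interface documented above.
class SimpleJsonWriter:
    def __init__(self):
        # One list of (step, value) pairs per metric name.
        self.run_log = defaultdict(list)

    def add_scalar(self, name, value, step):
        self.run_log[name].append((step, value))

    def write_log(self, log_filename="log.json"):
        # Dump the accumulated log as JSON.
        with open(log_filename, "w") as f:
            json.dump(self.run_log, f)
```

Each call to `add_scalar` is cheap (an in-memory append); the file is only written when the log is dumped.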
- class emmental.logging.LogWriter[source]¶
Bases: object
A base class for logging during the training process.
- add_scalar(name, value, step)[source]¶
Log a scalar variable.
- Parameters
  - name (str) – The name of the scalar.
  - value (Union[float, int]) – The value of the scalar.
  - step (Union[float, int]) – The current step.
- Return type
  None
- add_scalar_dict(metric_dict, step)[source]¶
Log a dict of scalar variables.
- Parameters
  - metric_dict (Dict[str, Union[float, int]]) – The metric dict.
  - step (Union[float, int]) – The current step.
- Return type
  None
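LogWriter is the extension point: JsonWriter, TensorBoardWriter, and WandbWriter all subclass it. A hedged sketch of that pattern, in which a custom writer only needs to override `add_scalar` (the assumption that `add_scalar_dict` fans out to `add_scalar` matches the interface documented here, but is not guaranteed by the source):

```python
# Sketch of the LogWriter pattern: a base class whose add_scalar_dict
# delegates to add_scalar, plus a custom subclass. Names here are
# illustrative, not Emmental's actual classes.
class BaseLogWriter:
    def add_scalar(self, name, value, step):
        raise NotImplementedError

    def add_scalar_dict(self, metric_dict, step):
        # Fan a metric dict out to per-scalar calls.
        for name, value in metric_dict.items():
            self.add_scalar(name, value, step)

    def close(self):
        pass


class MemoryWriter(BaseLogWriter):
    """A custom writer that collects scalars in memory."""

    def __init__(self):
        self.records = []

    def add_scalar(self, name, value, step):
        self.records.append((name, value, step))
```

With this shape, a new logging backend only has to decide where a single `(name, value, step)` triple goes.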
- class emmental.logging.LoggingManager(n_batches_per_epoch, epoch_count=0, batch_count=0)[source]¶
Bases: object
A class to manage logging during training progress.
- Parameters
  - n_batches_per_epoch (int) – Total number of batches per epoch.
- checkpoint_model(model, optimizer, lr_scheduler, metric_dict)[source]¶
Checkpoint the model.
- Parameters
  - model (EmmentalModel) – The model to checkpoint.
  - optimizer (Optimizer) – The optimizer used during the training process.
  - lr_scheduler (_LRScheduler) – The learning rate scheduler.
  - metric_dict (Dict[str, float]) – The metric dict.
- Return type
  None
- close(model)[source]¶
Close the checkpointer and reload the model if necessary.
- Parameters
  - model (EmmentalModel) – The trained model.
- Return type
  EmmentalModel
- Returns
  The reloaded model if necessary.
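LoggingManager decides when evaluation and checkpointing fire, based on the `counter_unit` and `evaluation_freq` settings described in the configuration below. A self-contained sketch of that trigger arithmetic, under the assumption that epoch-based frequencies are converted to batch counts via `n_batches_per_epoch` (Emmental's actual bookkeeping differs in detail):

```python
# Simplified evaluation-trigger counter in the spirit of LoggingManager
# (illustrative only; not Emmental's actual implementation).
class TriggerCounter:
    def __init__(self, n_batches_per_epoch, counter_unit="epoch", evaluation_freq=1):
        self.n_batches_per_epoch = n_batches_per_epoch
        self.counter_unit = counter_unit  # "epoch" or "batch"
        self.evaluation_freq = evaluation_freq
        self.batch_count = 0  # batches since the last trigger

    def update(self):
        """Count one batch; return True when an evaluation is due."""
        self.batch_count += 1
        if self.counter_unit == "epoch":
            trigger_every = self.evaluation_freq * self.n_batches_per_epoch
        else:  # "batch"
            trigger_every = self.evaluation_freq
        if self.batch_count >= trigger_every:
            self.batch_count = 0
            return True
        return False
```

With `counter_unit: epoch` and `evaluation_freq: 1`, evaluation fires once per epoch; with `counter_unit: batch` and `evaluation_freq: 2`, it fires every two batches.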
- class emmental.logging.TensorBoardWriter[source]¶
Bases: emmental.logging.log_writer.LogWriter
A class for logging to TensorBoard during the training process.
- add_scalar(name, value, step)[source]¶
Log a scalar variable.
- Parameters
  - name (str) – The name of the scalar.
  - value (Union[float, int]) – The value of the scalar.
  - step (Union[float, int]) – The current step.
- Return type
  None
- add_scalar_dict(metric_dict, step)¶
Log a dict of scalar variables.
- Parameters
  - metric_dict (Dict[str, Union[float, int]]) – The metric dict.
  - step (Union[float, int]) – The current step.
- Return type
  None
- class emmental.logging.WandbWriter[source]¶
Bases: emmental.logging.log_writer.LogWriter
A class for logging to wandb during the training process.
- add_scalar(name, value, step)¶
Log a scalar variable.
- Parameters
  - name (str) – The name of the scalar.
  - value (Union[float, int]) – The value of the scalar.
  - step (Union[float, int]) – The current step.
- Return type
  None
- add_scalar_dict(metric_dict, step)[source]¶
Log a dict of scalar variables.
- Parameters
  - metric_dict (Dict[str, Union[float, int]]) – The metric dict.
  - step (Union[float, int]) – The current step.
- Return type
  None
- close()¶
Close the log writer.
- Return type
None
- write_config(config_filename='config.yaml')¶
Dump the config to file.
- Parameters
  - config_filename (str) – The config filename, defaults to “config.yaml”.
- Return type
  None
- write_log(log_filename='log.json')¶
Dump the log to file.
- Parameters
  - log_filename (str) – The log filename, defaults to “log.json”.
- Return type
  None
Configuration Settings¶
Visit the Configuring Emmental page to see how to provide configuration
parameters to Emmental via .emmental-config.yaml.
The logging parameters of Emmental are described below:
# Logging configuration
logging_config:
  counter_unit: epoch # [epoch, batch]
  evaluation_freq: 1
  writer_config:
    writer: tensorboard # [json, tensorboard, wandb]
    verbose: True
    wandb_project_name:
    wandb_run_name:
    wandb_watch_model: False
    wandb_model_watch_freq:
    write_loss_per_step: False
  checkpointing: False
  checkpointer_config:
    checkpoint_path:
    checkpoint_freq: 1
    checkpoint_metric:
      model/train/all/loss: min # metric_name: mode, where mode in [min, max]
    checkpoint_task_metrics: # task_metric_name: mode
    checkpoint_runway: 0 # checkpointing runway (no checkpointing before k unit)
    checkpoint_all: False # checkpointing all checkpoints
    clear_intermediate_checkpoints: True # whether to clear intermediate checkpoints
    clear_all_checkpoints: False # whether to clear all checkpoints
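The same settings can also be supplied programmatically as a nested dict. A hedged sketch of the corresponding Python structure, which one would pass to Emmental's config-update mechanism (e.g. `emmental.Meta.update_config`; check your Emmental version's API before relying on that name). The key names below simply mirror the YAML above:

```python
# Logging configuration as a nested dict, mirroring the YAML above.
logging_config = {
    "logging_config": {
        "counter_unit": "epoch",  # or "batch"
        "evaluation_freq": 1,
        "writer_config": {
            "writer": "tensorboard",  # "json", "tensorboard", or "wandb"
            "verbose": True,
            "write_loss_per_step": False,
        },
        "checkpointing": True,
        "checkpointer_config": {
            "checkpoint_path": None,
            "checkpoint_freq": 1,
            # metric_name: mode, where mode is "min" or "max"
            "checkpoint_metric": {"model/train/all/loss": "min"},
            "checkpoint_runway": 0,
            "checkpoint_all": False,
            "clear_intermediate_checkpoints": True,
            "clear_all_checkpoints": False,
        },
    }
}
```

Note that `checkpointing` is enabled here (unlike the default above), since the `checkpointer_config` entries only take effect when it is.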