Model

The third component of Emmental's pipeline is building a deep learning model from your tasks.
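
For illustration, a minimal sketch of this step, assuming task1 and task2 are EmmentalTask objects built in the previous pipeline step (the variable and model names below are illustrative):

from emmental.model import EmmentalModel

# task1 and task2 are assumed to be EmmentalTask objects created earlier
# in the pipeline; the model name is illustrative.
model = EmmentalModel(name="mtl_model", tasks=[task1, task2])

# A single task can also be passed instead of a list.
model = EmmentalModel(name="single_task_model", tasks=task1)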

Emmental Model

The following describes the elements used for model creation.

Emmental model.

class emmental.model.EmmentalModel(name=None, tasks=None)[source]

Bases: torch.nn.modules.module.Module

A class to build a multi-task model.

Parameters
  • name (Optional[str]) – Name of the model, defaults to None.

  • tasks (Union[EmmentalTask, List[EmmentalTask], None]) – A task or a list of tasks.

add_task(task)[source]

Add a single task into the MTL network.

Parameters

task (EmmentalTask) – A task to add.

Return type

None

add_tasks(tasks)[source]

Build the MTL network using all tasks.

Parameters

tasks (Union[EmmentalTask, List[EmmentalTask]]) – A task or a list of tasks.

Return type

None
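
As a sketch, tasks can also be added incrementally after construction (task1, task2, and task3 are assumed to be EmmentalTask objects from the previous pipeline step):

from emmental.model import EmmentalModel

# Start with an empty model and grow the MTL network task by task.
model = EmmentalModel(name="mtl_model")
model.add_task(task1)              # add a single task
model.add_tasks([task2, task3])    # or add several tasks at once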

collect_state_dict()[source]

Collect the state dict.

Return type

Dict[str, Any]

flow(X_dict, task_names)[source]

Forward based on input and task flow.

Note

We assume that all shared modules from all tasks are based on the same input.

Parameters
  • X_dict (Dict[str, Any]) – The input data.

  • task_names (List[str]) – The task names to forward.

Return type

Dict[str, Any]

Returns

The output of all forwarded modules.

forward(uids, X_dict, Y_dict, task_to_label_dict, return_loss=True, return_probs=True, return_action_outputs=False)[source]

Forward function.

Parameters
  • uids (List[str]) – The uids of input data.

  • X_dict (Dict[str, Any]) – The input data.

  • Y_dict (Dict[str, Tensor]) – The output data.

  • task_to_label_dict (Dict[str, str]) – The task to label mapping.

  • return_loss (bool) – Whether to return the loss or not, defaults to True.

  • return_probs (bool) – Whether to return the probs or not, defaults to True.

  • return_action_outputs (bool) – Whether to return the action_outputs or not, defaults to False.

Return type

Union[Tuple[Dict[str, List[str]], Dict[str, Tensor], Dict[str, Union[ndarray, List[ndarray]]], Dict[str, Union[ndarray, List[ndarray]]], Dict[str, Dict[str, Union[ndarray, List]]]], Tuple[Dict[str, List[str]], Dict[str, Tensor], Dict[str, Union[ndarray, List[ndarray]]], Dict[str, Union[ndarray, List[ndarray]]]]]

Returns

The uids, loss, probs, golds, and (optionally) action outputs for the batch across all tasks.

load(model_path, verbose=True)[source]

Load model state_dict from file and reinitialize the model weights.

Parameters
  • model_path (str) – Saved model path.

  • verbose (bool) – Whether to log the info, defaults to True.

Return type

None

load_state_dict(state_dict)[source]

Load the state dict.

Parameters

state_dict (Dict[str, Any]) – The state dict to load.

Return type

None
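
One illustrative use of collect_state_dict and load_state_dict is copying weights between two models that share the same task structure (model and other_model are assumed, illustrative names):

# Collect the state dict from one model and load it into another model
# that has the same tasks registered.
state_dict = model.collect_state_dict()
other_model.load_state_dict(state_dict)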

predict(dataloader, return_loss=True, return_probs=True, return_preds=False, return_action_outputs=False)[source]

Predict from the dataloader.

Parameters
  • dataloader (EmmentalDataLoader) – The dataloader to predict on.

  • return_loss (bool) – Whether to return the loss or not, defaults to True.

  • return_probs (bool) – Whether to return the probs or not, defaults to True.

  • return_preds (bool) – Whether to return the predictions or not, defaults to False.

  • return_action_outputs (bool) – Whether to return the action_outputs or not, defaults to False.

Return type

Dict[str, Any]

Returns

The result dict.
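
A usage sketch, assuming test_dataloader is an EmmentalDataLoader built in the data step:

# Run inference and request hard predictions in addition to probabilities.
results = model.predict(test_dataloader, return_preds=True)

# The result dict is keyed per task; typical entries include the uids,
# losses, probs, preds, and golds (exact keys depend on the flags above).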

remove_task(task_name)[source]

Remove an existing task from the MTL network.

Parameters

task_name (str) – The task name to remove.

Return type

None

save(model_path, iteration=None, metric_dict=None, verbose=True)[source]

Save model.

Parameters
  • model_path (str) – Saved model path.

  • iteration (Union[float, int, None]) – The iteration of the model, defaults to None.

  • metric_dict (Optional[Dict[str, float]]) – The metric dict, defaults to None.

  • verbose (bool) – Whether to log the info, defaults to True.

Return type

None
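
A checkpointing sketch using save and load (the path and task variables are illustrative):

# Save the current model state to disk, then restore it into a fresh model
# that has the same tasks registered.
model.save("checkpoints/best_model.pth")

new_model = EmmentalModel(name="mtl_model", tasks=[task1, task2])
new_model.load("checkpoints/best_model.pth")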

score(dataloaders, return_average=True)[source]

Score the data from the dataloaders.

Parameters
  • dataloaders (Union[EmmentalDataLoader, List[EmmentalDataLoader]]) – The dataloader or a list of dataloaders to score.

  • return_average (bool) – Whether to return the average score across tasks, defaults to True.

Return type

Dict[str, float]

Returns

Score dict.
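
A scoring sketch, assuming dev_dataloader and test_dataloader are EmmentalDataLoaders whose datasets carry gold labels:

# Score one or more dataloaders; the returned dict maps metric identifiers
# (typically combining task, split, and metric name) to float scores.
metrics = model.score([dev_dataloader, test_dataloader])
print(metrics)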

update_task(task)[source]

Update an existing task in the MTL network.

Parameters

task (EmmentalTask) – A task to update.

Return type

None

Configuration Settings

Visit the Configuring Emmental page to see how to provide configuration parameters to Emmental via .emmental-config.yaml.

The model parameters are described below:

# Model configuration
model_config:
    model_path: # path to pretrained model
    device: 0 # -1 for cpu or gpu id (e.g., 0 for cuda:0)
    dataparallel: True # whether to use dataparallel or not
    distributed_backend: nccl # what distributed backend to use for DDP [nccl, gloo]
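
These settings can also be passed programmatically when initializing Emmental; a sketch under that assumption (the log directory and values below are illustrative, not recommendations):

import emmental

# Override model settings in code instead of editing .emmental-config.yaml.
emmental.init(
    "logs",
    config={
        "model_config": {
            "model_path": None,
            "device": 0,
            "dataparallel": True,
        }
    },
)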