Model Training Output

This guide outlines the different components that are returned after running the model training command.


Once you run the model training command, you will see a summary of the current training session. Here is a sample:



First, the summary lists the datasets included in the training session.


Next, it lists the labels that the model will be trained to recognize.

Available Labels

The available labels list is generated by analyzing the label fields of the annotations. Use this field to validate the labels that you have chosen for this training session against the labels available in the dataset.
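This check can be sketched in a few lines. The snippet below is a hypothetical illustration (the `label` field name and the `validate_labels` helper are assumptions, not part of the training tool): it collects the labels present in the annotations and reports any chosen label that is missing.

```python
# Hypothetical sketch: validate the labels chosen for a training
# session against the labels present in the dataset annotations.
def validate_labels(chosen, annotations):
    """Return the chosen labels that are missing from the annotations."""
    available = {a["label"] for a in annotations}  # labels seen in the data
    return sorted(set(chosen) - available)

annotations = [{"label": "cat"}, {"label": "dog"}, {"label": "cat"}]
print(validate_labels(["cat", "bird"], annotations))  # ['bird']
```

An empty result means every chosen label appears somewhere in the dataset; anything returned would go untrained for lack of examples.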

Batch Size

This displays the batch size chosen for this training session. If you are training on your CPU, try starting with a batch size of 4 or 8 to get a feel for performance. If you are training on a GPU, you can try batch sizes of 16 or 32. Explore what works best with your hardware.
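The guidance above can be captured as a small heuristic. This is only a sketch of a sensible starting point (the `suggest_batch_sizes` helper is hypothetical, not part of the training tool); the right value still depends on your model and memory.

```python
def suggest_batch_sizes(device):
    """Return batch sizes worth trying first on the given device.

    Heuristic from the guidance above: start small on a CPU,
    larger on a GPU, then explore from there.
    """
    return [4, 8] if device == "cpu" else [16, 32]

print(suggest_batch_sizes("cpu"))  # [4, 8]
print(suggest_batch_sizes("gpu"))  # [16, 32]
```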

Training Session


After each step, the result of that step is printed to the console. The first statistic is the loss; the second is the time taken for that step.
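A minimal sketch of that per-step logging might look like the following. The `train_step` function here is a stand-in, not the real training step; the point is simply how the loss and the step time end up on the console.

```python
import random
import time

def train_step():
    """Stand-in for a real forward/backward pass; returns a fake loss."""
    time.sleep(0.01)
    return random.uniform(0.5, 2.0)

for step in range(3):
    start = time.monotonic()
    loss = train_step()
    elapsed = time.monotonic() - start
    # Loss first, then the time taken for the step, as in the console output.
    print(f"step {step}: loss={loss:.4f} time={elapsed:.2f}s")
```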



Periodically, the model's progress is tested by running it against the validation dataset. The result of this analysis is a table containing the mean average precision, the mean average recall, and a variety of specific precision statistics.
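The precision and recall figures in that table reduce, at a single threshold, to the standard definitions below. This is a simplified sketch using assumed formulas over raw detection counts; the tool's mean averages aggregate these over classes and thresholds.

```python
def precision(tp, fp):
    """Fraction of detections that were correct: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Fraction of ground-truth objects found: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# e.g. 8 correct detections, 2 false positives, 2 missed objects:
print(precision(8, 2))  # 0.8
print(recall(8, 2))     # 0.8
```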


Training with Jupyter

If you train using Jupyter, you will see a graphical representation of training and validation loss. For example, in the image below, training loss is shown per step, in blue, and validation loss is shown for the first and last steps, in orange.


For more information about using our Jupyter Notebooks, please see the documentation on Training with Jupyter Notebooks.