This article describes how you can evaluate the trained model’s performance once you have uploaded your data to the platform and generated your first prediction report. The model is trained on 80% of the data and its performance is evaluated on the remaining 20%. For this, the prediction report provides two tabs: Advanced Graphs and Tech Specs.
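The 80/20 split can be pictured with a short sketch. This is a minimal illustration in plain Python; the platform's actual splitting logic, shuffling, and seed handling are assumptions:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the rows and split them, e.g. 80% train / 20% test."""
    rng = random.Random(seed)  # fixed seed is an assumption, for repeatability
    indices = list(range(len(rows)))
    rng.shuffle(indices)
    n_test = int(len(rows) * test_fraction)
    test_idx = set(indices[:n_test])
    train = [r for i, r in enumerate(rows) if i not in test_idx]
    test = [r for i, r in enumerate(rows) if i in test_idx]
    return train, test

rows = list(range(100))          # 100 hypothetical data points
train, test = train_test_split(rows)
print(len(train), len(test))     # 80 train rows, 20 test rows
```

The model would be fitted on `train`, and every graph described below is computed on `test` only.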

Advanced Graphs gives a detailed visualization of the test set results using graphs such as decision trees, confusion matrices, line plots, and bar charts.

For classification:

The decision tree details the model’s decision-making process, showing how each feature and its value are used to arrive at a decision on the classes.

The confusion matrix gives the performance details of the trained classification model (classifier); here you can check the precision and recall values. It is shown as an m×m table: if you have 2 classes you have a 2×2 confusion matrix, if you have 3 classes (multiclass prediction) you will have a 3×3 matrix, and so on. The y-axis shows the actual labels and the x-axis the predicted labels. The higher the values on the diagonal, the better the classification model, since it means the model predicted most of the values correctly.
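How the matrix and the precision and recall values are derived can be sketched as follows. This is a minimal illustration; the class names and data are hypothetical, not taken from the platform:

```python
from collections import Counter

def confusion_matrix(actual, predicted, labels):
    """m x m matrix: rows are actual labels (y-axis), columns are predicted (x-axis)."""
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in labels] for a in labels]

def precision_recall(matrix, labels, cls):
    """Precision and recall for one class, read straight off the matrix."""
    i = labels.index(cls)
    tp = matrix[i][i]                                 # diagonal: correct predictions
    predicted_as_cls = sum(row[i] for row in matrix)  # column sum
    actually_cls = sum(matrix[i])                     # row sum
    precision = tp / predicted_as_cls if predicted_as_cls else 0.0
    recall = tp / actually_cls if actually_cls else 0.0
    return precision, recall

# Hypothetical 2-class example
labels = ["minor_damage", "major_damage"]
actual = ["minor_damage", "minor_damage", "minor_damage", "major_damage", "major_damage"]
predicted = ["minor_damage", "major_damage", "minor_damage", "major_damage", "major_damage"]

m = confusion_matrix(actual, predicted, labels)      # [[2, 1], [0, 2]]
p, r = confusion = precision_recall(m, labels, "minor_damage")
```

With these toy values the diagonal holds 2 and 2 correct predictions, giving a precision of 1.0 and a recall of 2/3 for minor_damage.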

Next, we have the Actual vs. Predicted Value bar charts, which convey the same information as the confusion matrix but can be easier to follow. Blue marks correct classifications and red marks incorrect ones. The x-axis shows the classes and the y-axis the count/frequency of those classes. For example, the minor_damage bar chart shows, out of all the values the model predicted as minor_damage, how many were classified correctly (blue) and how many incorrectly (red). The depiction is the same for all the other classes.
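The counts behind such a bar chart can be reproduced with a small sketch. The data is hypothetical and the exact counting logic the platform uses is an assumption:

```python
from collections import defaultdict

def correct_incorrect_counts(actual, predicted):
    """Per-class counts behind an Actual vs. Predicted bar chart:
    correct (blue) and incorrect (red) predictions for each predicted class."""
    counts = defaultdict(lambda: {"correct": 0, "incorrect": 0})
    for a, p in zip(actual, predicted):
        if a == p:
            counts[p]["correct"] += 1
        else:
            counts[p]["incorrect"] += 1
    return dict(counts)

# Hypothetical labels
actual = ["minor_damage", "minor_damage", "major_damage"]
predicted = ["minor_damage", "major_damage", "major_damage"]
bars = correct_incorrect_counts(actual, predicted)
# minor_damage: 1 correct, 0 incorrect; major_damage: 1 correct, 1 incorrect
```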

Finally, you have the option to download each of these graphs, and for the confusion matrix you can also use the expand feature in case it is too small to read.

For regression:

The Actual vs. Predicted Value line plot details the test set results. The x-axis shows the data points and the y-axis the target values. The blue line represents the actual values in the test set and the red line the values predicted by the model. Where the blue and red lines overlap, the actual and predicted values are the same; the greater the overlap, the better the model’s performance.

The percentage error plot is a 3D plot with the test data points on the x-axis, and the prediction column values and the percent error on the y-axes.

The Error plot histogram gives the same information as the percentage error plot but with a different visualization: the bar chart depicts the number of datapoints whose percent error falls within a particular range.
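The percent error and the histogram counts can be sketched like this. The error formula (absolute percent error) and the bin width are assumptions, not details taken from the platform:

```python
def percent_errors(actual, predicted):
    """Absolute percent error per test point: |actual - predicted| / |actual| * 100."""
    return [abs(a - p) / abs(a) * 100 for a, p in zip(actual, predicted)]

def error_histogram(errors, bin_width=5.0):
    """Count how many datapoints fall in each percent-error range,
    as depicted by the Error plot histogram's bars."""
    bins = {}
    for e in errors:
        lo = int(e // bin_width) * bin_width
        bins[(lo, lo + bin_width)] = bins.get((lo, lo + bin_width), 0) + 1
    return bins

# Hypothetical regression test-set values
actual = [100.0, 200.0, 50.0]
predicted = [110.0, 190.0, 50.0]
errs = percent_errors(actual, predicted)   # [10.0, 5.0, 0.0]
hist = error_histogram(errs)               # one datapoint per 5%-wide bin
```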