
7.14 Takeaways

This chapter explains evaluations in detail. Here is a list of key points:

  • An evaluation allows you to measure the performance of your models, ensembles, logistic regressions, deepnets, and fusions.

  • In BigML you can perform two types of evaluations: single evaluations and cross-validation evaluations.

  • You need a model and a testing dataset to create a single evaluation. (See Figure 7.115.)

  • You just need an existing dataset to create a cross-validation evaluation. (See Figure 7.116.)

  • BigML provides a range of configuration options that you can set before creating your evaluation.

  • Performance measures are different for classification and regression models.

  • The confusion matrix is a key element for evaluating the performance of classification models.

  • You can compare your evaluation measures against baseline models that predict the mean, the mode, or a random value.

  • BigML provides different visualizations for the ROC curve, the Precision-Recall curve, the Gain curve, and the Lift curve, along with their AUC, the K-S statistic, and other metrics.

  • You can compare two or more evaluations built with different configurations and algorithms to select the model with the best performance.

  • You can download your confusion matrix in Excel format.

  • You can create and use evaluations via the BigML API and bindings (see the sketch after this list).

  • You can add descriptive information to your evaluations.

  • You can move your evaluations between projects.

  • You can share your evaluations with other people using a secret link.

  • You can stop an evaluation's creation by deleting it.

  • You can permanently delete an existing evaluation.
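
As a minimal sketch of the programmatic workflow, the following uses the BigML Python bindings to evaluate an existing model against a held-out testing dataset; the resource IDs are placeholders that you would replace with your own.

    # Minimal sketch using the BigML Python bindings (pip install bigml).
    # The resource IDs below are placeholders; credentials are read from the
    # BIGML_USERNAME and BIGML_API_KEY environment variables.
    from bigml.api import BigML

    api = BigML()

    model = "model/000000000000000000000001"           # hypothetical model ID
    test_dataset = "dataset/000000000000000000000002"  # hypothetical dataset ID

    # Create the evaluation against the testing dataset and wait until
    # BigML finishes computing it.
    evaluation = api.create_evaluation(model, test_dataset)
    api.ok(evaluation)

    # The finished resource contains the performance measures described in
    # this chapter (accuracy, precision, recall, the confusion matrix, etc.).
    print(evaluation["object"]["result"])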

\includegraphics[width=6cm]{images/evaluations/evaluations-workflow}
Figure 7.115 Single evaluations workflow
\includegraphics[width=5cm]{images/evaluations/cross-val-workflow}
Figure 7.116 Cross-validation workflow