Analyze Deployed Model Performance

The Overview page in the Analysis section provides general statistics on how well the deployed models are performing in utilization across all targets. Hover over individual bars in a chart for detailed information.

From Metric, select the error metric to display.

When you change the Metric radio button selection, the Health Score Distribution and Baseline Win Margin by Group Significance charts update dynamically.

For information on other types of navigation, see Chart and Table Toolbar Buttons.

The panes on this page are as follows.

Metric: Radio buttons that select the error metric on which the charts on this page are based.

Winning Models Across Targets:

Health Score Distribution: A color-coded distribution chart visualizing the health of the best models. Health score is a scaled metric ranging from -1 to 1 that indicates improvement or degradation in a model's predictive accuracy. A positive (green) health score signals that a model's predictive accuracy has improved since initial deployment. A negative (red) health score signals that predictive accuracy has decreased since initial deployment.
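
The exact health score calculation is not documented here; the sketch below is only one plausible interpretation, scaling the relative change in an error metric since initial deployment into the -1 to 1 range. The function name `health_score` and its formula are illustrative assumptions, not the product's actual computation.

```python
def health_score(baseline_error: float, current_error: float) -> float:
    """Hypothetical health score: relative change in error since initial
    deployment, clamped to [-1, 1]. Positive means the error decreased
    (accuracy improved, shown green); negative means the error increased
    (accuracy degraded, shown red)."""
    if baseline_error == 0:
        return 0.0
    relative_improvement = (baseline_error - current_error) / baseline_error
    return max(-1.0, min(1.0, relative_improvement))

# Error dropped from 0.20 at deployment to 0.15 now -> 0.25 (green)
print(health_score(0.20, 0.15))
# Error rose from 0.20 to 0.30 -> -0.5 (red)
print(health_score(0.20, 0.30))
```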

Baseline Win Margin by Group Significance: Each bar in this chart represents an equal-sized bin of targets. The height of each bar is the percent by which the best statistical or machine learning model beat (or lost to) the best baseline model on the specified evaluation metric, across all targets within the bin.

The light blue line in the chart represents the total significance of a bin, that is, the value selected in the Model Build Data section, such as total units or dollars. Positive win margins are ideal, indicating that the machine learning and statistical models are beating the simpler baseline models on average. This chart is based on accuracy in Utilization. Like the Metric Values charts, the displayed charts are based on the Type and Metric Option selections.
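
For intuition, the sketch below shows one way such a bar and line value could be derived for a single bin. The helper names `win_margin`, the percentage formula, and the sample numbers are illustrative assumptions; only the general idea (model error compared against baseline error per target, averaged over the bin, with the bin's total significance plotted as the line) comes from the description above.

```python
from statistics import mean

def win_margin(best_model_error: float, best_baseline_error: float) -> float:
    """Hypothetical win margin: percent by which the best statistical/ML model
    beat (positive) or lost to (negative) the best baseline model on the
    chosen error metric."""
    if best_baseline_error == 0:
        return 0.0
    return 100.0 * (best_baseline_error - best_model_error) / best_baseline_error

# Targets within one bin: (best model error, best baseline error, significance)
bin_targets = [
    (0.12, 0.15, 5_000),   # model beats baseline on this target
    (0.20, 0.18, 2_000),   # model loses to baseline on this target
    (0.08, 0.10, 3_000),
]

bar_height = mean(win_margin(m, b) for m, b, _ in bin_targets)  # bar height (%)
line_value = sum(sig for _, _, sig in bin_targets)              # light blue line
print(round(bar_height, 1), line_value)  # 9.6 10000
```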