Analyze Prediction Results for Targets
Use the Prediction page in the Analysis section to analyze results from model forecasts, different builds, and different stages of the project. This page visualizes the forecasted values for each model (blue) and the backfilled actuals for each target (orange). It is similar to the Train page and the Backtest view in the Pipeline section of the Model Build phase, but this page displays in-production results.
This page includes an Accuracy view (default), an Impact view, and an Explanation view. Use the fields at the top of the page to filter the information displayed on any of the views. These fields include:
Top Models Visible: Select the number of models to reflect in the information on the page. For models outside this selection, the Leaderboard - Latest Build list in each view displays (not Visible) in the Selection Rank column.
Actuals View: The actuals to include in the line chart. Select All to see prior actuals, which can help determine why a model made a particular prediction.
Forecast View: Determines how overlapping forecasts are shown: blended into a single series or as each individual forecasted version (see the sketch below).
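To picture the difference between the two Forecast View modes, here is a minimal Python sketch. It is illustrative only, not part of the product: the dates and values are invented, and blending is shown as a simple per-date average of overlapping versions, which the product may implement differently.

from collections import defaultdict
from statistics import mean

# Hypothetical example: three forecast versions that overlap on some dates.
# Each dict maps a date to that prediction run's forecasted value.
forecast_versions = [
    {"2024-01-01": 100.0, "2024-01-02": 104.0, "2024-01-03": 99.0},
    {"2024-01-02": 101.0, "2024-01-03": 103.0, "2024-01-04": 98.0},
    {"2024-01-03": 105.0, "2024-01-04": 102.0, "2024-01-05": 97.0},
]

def blend_forecasts(versions):
    """Blend overlapping forecasts by averaging every version's value per date."""
    by_date = defaultdict(list)
    for version in versions:
        for date, value in version.items():
            by_date[date].append(value)
    return {date: mean(values) for date, values in sorted(by_date.items())}

print(blend_forecasts(forecast_versions))
# Blended mode collapses the overlap into one series (e.g., 2024-01-03
# averages 99, 103, and 105); the per-version mode keeps all three lines.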
Analyze Prediction Accuracy
Click the Accuracy button at the top left of the Prediction page to view prediction accuracy information.
Click a target in the Targets pane to see the average metric score for each of its models, based on the error metric selected in the Metric drop-down. You can further filter the Leaderboard - Latest Build table by model stage and build status.
The Accuracy view includes the following:
Metric: Use the Metric drop-down to select the type of error metric (see the worked example after this list). See Appendix 3: Error Metrics for more information on the error metrics Sensible Machine Learning uses.
For each model of the selected target, you can view a line graph of the metric scores for each of its prediction runs. Click a model name in the top pane to view the model's forecast results in the line chart, including the date of each prediction run and the metric score against actuals for each.
Leaderboard - Latest Build: Shows the selection rank for each model, the name of the model, and its prediction metric scores.
NOTE: If multiple Forecast Ranges were configured on the Configure Forecast page, the forecast ranges are overlaid on top of each other. Custom overlay models are generated and included in this leaderboard.
Forecast Results: Visualizes the predictions made over time against the actual values brought in during production. Data and forecasts from before the initial deployment cannot be viewed. The chart updates dynamically when you change options in the Model Stage, Metric, and Build Status drop-downs or when you select a different target. Hover over the lines in the chart for detailed information, and use the date sliders at the bottom of the graph to change the displayed date range.
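As a worked example of how an error metric summarizes forecasts against backfilled actuals, the following sketch computes three common metrics using their standard textbook definitions. The numbers are invented, and the product's exact formulas may differ; see Appendix 3: Error Metrics for the authoritative definitions.

import math

# Hypothetical forecasts and backfilled actuals for one model's prediction runs.
forecasts = [102.0, 98.5, 110.0, 95.0]
actuals   = [100.0, 99.0, 107.0, 97.0]

errors = [f - a for f, a in zip(forecasts, actuals)]

mae  = sum(abs(e) for e in errors) / len(errors)            # Mean Absolute Error
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))  # Root Mean Squared Error
mape = 100 * sum(abs(e) / abs(a)                            # Mean Absolute Percentage Error
                 for e, a in zip(errors, actuals)) / len(errors)

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  MAPE={mape:.2f}%")
# MAE=1.875  RMSE=2.077  MAPE=1.84%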
Analyze Prediction Impact
The Impact view shows the same information as the Accuracy view but adds feature impact scores for each model. A feature impact score shows how much influence a feature has on a given model's predictions (see the sketch below for one common way such scores are computed).
NOTE: Feature impact data is dependent on the type of model. Not all models have feature impact data.
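The documentation does not specify how feature impact scores are computed, but permutation importance is one common technique and conveys the idea: shuffle one feature's values and measure how much the model's error grows. A minimal, self-contained sketch (the model, features, and data below are stand-ins, not anything from Sensible Machine Learning):

import random

def mae(preds, actuals):
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(preds)

def permutation_impact(predict, rows, actuals, seed=0):
    """Impact of each feature = error increase after shuffling that feature's column."""
    rng = random.Random(seed)
    baseline = mae(predict(rows), actuals)
    impacts = {}
    for feature in rows[0]:
        col = [row[feature] for row in rows]
        rng.shuffle(col)
        permuted = [dict(row, **{feature: v}) for row, v in zip(rows, col)]
        impacts[feature] = mae(predict(permuted), actuals) - baseline
    return impacts

# Stand-in model: a linear rule where "trend" matters much more than "noise".
predict = lambda rows: [3.0 * r["trend"] + 0.1 * r["noise"] for r in rows]
rows = [{"trend": t, "noise": (t * 7) % 5} for t in range(20)]
actuals = [3.0 * r["trend"] for r in rows]

print(permutation_impact(predict, rows, actuals))
# "trend" gets a large impact score; "noise" stays near zero.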
Analyze Prediction Explanations
The Explanation view shows the model metrics, predictions, and prediction intervals (if configured). It also includes the values of the features used by a given model. Select a model from the Leaderboard grid to see its features and feature values for different dates over the course of the prediction. Compare the top grid with the bottom grid to see which features had a large impact on the prediction; a sketch of this comparison follows the notes below.
TIP: To zoom in on a specific date, double-click the date in the Tug-of-War plot to see a feature-by-feature view of prediction explanations for that date.
NOTE: Feature impact data is dependent on the type of model. Not all models have feature impact data.
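To make the top-grid/bottom-grid comparison concrete, the sketch below lines up each date's prediction with the feature values that fed it, which is essentially what the visual comparison does. The data layout and values are assumptions for illustration only.

# Hypothetical per-date predictions and the feature values used for each date.
predictions = {"2024-01-01": 120.0, "2024-01-02": 145.0, "2024-01-03": 118.0}
feature_values = {
    "2024-01-01": {"promo": 0, "temperature": 21.0},
    "2024-01-02": {"promo": 1, "temperature": 22.5},  # promo flips on, prediction jumps
    "2024-01-03": {"promo": 0, "temperature": 20.0},
}

# Line predictions up with their feature values, date by date, to spot which
# feature changes coincide with large swings in the prediction.
for date in sorted(predictions):
    feats = ", ".join(f"{k}={v}" for k, v in feature_values[date].items())
    print(f"{date}  prediction={predictions[date]:6.1f}  |  {feats}")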