Utilization Phase Monitor Section

The Monitor section consists of the following pages:

  • Flags: Create new flags that detect when model metrics cross specified threshold values.

  • Filters: Create new filters, based on existing flags, that narrow each target down to a single model after prediction runs.

  • Results: Analyze the results of the processed flags and filters. This page shows how the system filtered each target down to one model based on the specific flag/filter combination.

Create Flags for Targets

Flags are used to find models with unwanted metric values. For example, to find all models with a low growth rate, a user would create a flag that evaluates whether a model's growth rate is lower than a specified value.
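As a rough illustration of that logic, a flag boils down to a metric, a comparison, and a threshold. The sketch below is hypothetical (the Flag class and the growth_rate metric name are assumptions, not the product's actual implementation):

```python
from dataclasses import dataclass
import operator

# Hypothetical sketch of a flag: a metric, a comparison, and a threshold.
@dataclass
class Flag:
    metric: str        # e.g. "growth_rate" (assumed column name)
    comparison: str    # one of "<", ">", "<=", ">="
    threshold: float

    _OPS = {"<": operator.lt, ">": operator.gt,
            "<=": operator.le, ">=": operator.ge}

    def is_tripped(self, model_metrics: dict) -> bool:
        """Return True if the model's metric violates the threshold."""
        return self._OPS[self.comparison](model_metrics[self.metric], self.threshold)

# Example: flag any model whose growth rate falls below 0.02.
low_growth = Flag(metric="growth_rate", comparison="<", threshold=0.02)
print(low_growth.is_tripped({"growth_rate": 0.01}))  # True -> flagged
```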

Create Filters for Targets

Filters narrow each specific target down to one model after prediction runs. A target can have only one filter, but a filter can have multiple targets.
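That cardinality can be pictured as a simple mapping in which each target key holds exactly one filter while a filter may appear under many targets. The names below are purely illustrative:

```python
# Hypothetical sketch of the target-to-filter relationship: a dict key can
# hold only one value, mirroring the one-filter-per-target rule, while the
# same filter may serve many targets.
filter_for_target = {
    "target_a": "filter_1",
    "target_b": "filter_1",   # filter_1 covers multiple targets
    "target_c": "filter_2",
}
```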

Analyze Monitor Results for Targets

Use the Results page in the Monitor section to analyze which model was selected for each specific target. This page shows how the filtering system decides which model to pick for each target based on the created filter. Each filter has the following options at creation:

Order by: How to order the models for filtering; defaults to model rank.

Flag to analyze: Which flag to tie to the filter. This flag will be evaluated against either the best model for the target or all models, depending on the Models to evaluate setting.

Models to evaluate: Whether to evaluate the best model only or all models. If best model is selected, the filter evaluates whether the best model trips the flag tied to the filter; if it does not, that model is selected for the target. If all models is selected, the filter looks for a model that does not trip the flag, checking models in the order of the column selected in the Order by field; the “best” model that does not trip the flag is selected. If no model passes the flag, the filter proceeds to the fallback steps.

Dimension to apply: Pick which dimensions this filter will be applied to. Multiple targets can share the same filter, but a single target cannot have multiple filters.

Fallbacks: Steps to take if every model trips the flag specified for the filter.

  • Fallback steps:

    • How to “fall back” if no model is selected after evaluating the models against the flag specified on the filter. Each fallback can have more than one step. Example: “If column x > y and column d <= w.”

    • Fact to evaluate

      • Which column/metric to analyze for the fallback step.

    • Evaluation

      • Which comparison to use (greater than, less than, etc.).

    • Value

      • The value to compare against, for example Absolute Error < 10.

Ultimate Fallback Model: If no model fits any criteria after both the flag evaluation and the fallback steps, the model selected here is used as an ultimate fallback. This input defaults to model rank 1, but also allows the user to select a specific model. If that model was not run for the target being analyzed, model rank 1 will be used.
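Taken together, the options above describe a selection procedure: order the models, evaluate the flag, fall back to the step conditions, and finally fall back to a designated model. The following is a minimal sketch of that procedure, assuming models are represented as dicts of metrics and the filter as a plain dict; none of these names come from the product itself:

```python
import operator

_OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def step_passes(model, step):
    """Evaluate one fallback step: a fact (column), an evaluation, and a value."""
    return _OPS[step["evaluation"]](model[step["fact"]], step["value"])

def select_model(models, flag_tripped, filt):
    """Hypothetical sketch of how a filter might pick one model for a target.

    models       -- list of dicts of metrics; each includes the order-by
                    column and a "model_rank" key (assumed shape)
    flag_tripped -- callable(model) -> True if the filter's flag is tripped
    filt         -- assumed shape: {"order_by", "evaluate", "fallback_steps",
                    "ultimate_fallback_rank"}
    """
    ordered = sorted(models, key=lambda m: m[filt["order_by"]])

    if filt["evaluate"] == "best":
        # Best model only: keep it if it does not trip the flag.
        if not flag_tripped(ordered[0]):
            return ordered[0]
    else:
        # All models: take the first, in order-by order, that avoids the flag.
        for model in ordered:
            if not flag_tripped(model):
                return model

    # Fallback steps: first model satisfying every step, e.g. x > y and d <= w.
    for model in ordered:
        if all(step_passes(model, step) for step in filt["fallback_steps"]):
            return model

    # Ultimate fallback: the configured model if it ran for this target,
    # otherwise model rank 1.
    for model in ordered:
        if model["model_rank"] == filt.get("ultimate_fallback_rank", 1):
            return model
    return min(ordered, key=lambda m: m["model_rank"])

# Example: with the flag "growth_rate < 0.02", the rank-1 model trips the
# flag, so the rank-2 model is selected instead.
models = [
    {"model_rank": 1, "growth_rate": 0.01},
    {"model_rank": 2, "growth_rate": 0.05},
]
filt = {"order_by": "model_rank", "evaluate": "all", "fallback_steps": []}
print(select_model(models, lambda m: m["growth_rate"] < 0.02, filt))
```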

Process Flags/Filters After Creation

Once flags/filters are created, they can be processed on the Results page of the Monitor section. When users process flags/filters, the buttons to process each system are disabled until new flags or filters are created.

After filters are processed, both the Filters and Flags pages contain data showing the results of the evaluation for the two systems. If only the flags are processed, there will be no data on the Filters page. The Filters page shows which model was selected for each target after processing; clicking a row in the data table shows the actuals for that target, the original model (model rank 1), and the selected model post filtering, i.e. the Model value in the selected row.

The filter system tab will show which step led the system to land on a certain model.

Create Model Overrides

If, after filtering, the model chosen for a specific target does not satisfy the user, the filter system can be “overridden”: select a row in the table, then click the Override button in the top right corner to choose another model to represent the target at hand. The user can select any of the models run on the selected target; this view shows a table of all models run on the target along with each model's metrics. Once a model is selected in the dropdown, click the Apply button in the top right corner to apply the override to that target.
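Conceptually, an override simply replaces the filter system's selection for one target with a user-chosen model, provided that model was actually run for the target. A hypothetical sketch:

```python
# Hypothetical sketch of applying a manual override to the filter system's
# selections. All names are illustrative, not the product's API.
selections = {"target_a": "model_3"}  # result of filtering

def apply_override(selections, target, model_id, models_run_for_target):
    if model_id not in models_run_for_target:
        raise ValueError(f"{model_id} was not run for {target}")
    selections[target] = model_id  # the user's choice replaces the selection

apply_override(selections, "target_a", "model_7",
               {"model_1", "model_3", "model_7"})
print(selections)  # {'target_a': 'model_7'}
```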

The summary tab on the Filters page shows how many targets had filters tripped versus how many targets did not have filters tripped.

Analyze Flagging Results

When processing of either flags or filters is complete, users can analyze the flags that were tripped for each model/target combination.

The summary tab shows how many flags of each type were tripped for each forecast start date, letting the user know whether any forecast start dates cause more flags to be tripped than others. Each severity level has its own color, allowing users to easily distinguish how many flags of each type were tripped.
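That summary amounts to a count of tripped flags grouped by forecast start date and severity. A minimal sketch using pandas, with assumed column names:

```python
import pandas as pd

# Hypothetical sketch of the summary aggregation: count tripped flags per
# forecast start date and severity level ("forecast_start_date" and
# "severity" are assumed column names).
tripped = pd.DataFrame({
    "forecast_start_date": ["2024-01-01", "2024-01-01", "2024-02-01"],
    "severity":            ["high",       "low",        "high"],
})
summary = (tripped
           .groupby(["forecast_start_date", "severity"])
           .size()
           .unstack(fill_value=0))
print(summary)  # one row per start date, one column per severity level
```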