Build Confidence in Your Deployed Models

Model interpretability is a critical component of building confidence and trust in deployed models. The Interpretability page helps you understand how different features have influenced a particular model's predictions. Sensible Machine Learning offers this interpretability through the Leaderboard, Feature Impact, and Feature Effects.

Leaderboard: The Leaderboard lists the trained models ranked by their performance metrics, making it easy to compare candidate models side by side before examining their Feature Impact and Feature Effects.
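To give a feel for the idea, the sketch below builds a toy leaderboard by training a few candidate models and ranking them on a holdout metric. The models, dataset, and metric (mean absolute error) are assumptions chosen for illustration only; they do not reflect how Sensible Machine Learning builds its Leaderboard.

```python
# Illustrative sketch: a minimal "leaderboard" that ranks candidate models by a
# holdout metric. Models, dataset, and metric are assumptions for illustration.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "Ridge": Ridge(),
    "Random Forest": RandomForestRegressor(random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
}

# Score every candidate on the same holdout data, then sort best (lowest MAE) first.
leaderboard = sorted(
    (mean_absolute_error(y_test, model.fit(X_train, y_train).predict(X_test)), name)
    for name, model in candidates.items()
)
for rank, (score, name) in enumerate(leaderboard, start=1):
    print(f"{rank}. {name}: MAE={score:.1f}")
```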

Feature Impact: Feature Impact measures the influence that a particular feature has on a model's predictions. The larger the impact value, the more influential that feature is on the model's predictions. Visualizing feature impact for all of the features powering a model is a great way to understand how influential the features are relative to one another. Impact values are relative to the model that produced them and are not standardized from model to model.
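Feature impact of this kind is often estimated with techniques such as permutation importance: shuffle one feature at a time and measure how much the model's score degrades. The sketch below illustrates that idea with scikit-learn's permutation_importance; the model, dataset, and the normalization to the top feature are assumptions for illustration, not necessarily how Sensible Machine Learning computes Feature Impact.

```python
# Illustrative sketch: estimating feature impact via permutation importance.
# The model, dataset, and normalization step are assumptions for illustration.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the holdout score degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Normalize so the most impactful feature scores 1.0 (relative to this model only).
impact = result.importances_mean / result.importances_mean.max()
for name, value in sorted(zip(X.columns, impact), key=lambda pair: -pair[1]):
    print(f"{name}: {value:.2f}")
```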

Feature Effects: Feature Effects measures how a model's average predicted and actual values change as a feature's value changes, showing how that feature influences the model's predictions. Feature Effects is commonly displayed as scatter plots that visualize how the model's average prediction changes when the feature takes on different values.
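One common way to compute a curve like this is partial dependence: sweep a single feature across its observed range and record the model's average prediction at each value. The sketch below uses scikit-learn's partial_dependence (version 1.3 or later, which exposes the grid_values key) as an illustration; the model, dataset, and the chosen feature ("bmi") are assumptions, not the platform's actual Feature Effects computation.

```python
# Illustrative sketch: approximating a feature-effect curve with partial dependence.
# The model, dataset, and chosen feature are assumptions for illustration.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Average the model's prediction while sweeping one feature ("bmi") across its range.
effect = partial_dependence(model, X, features=["bmi"], grid_resolution=20)
for value, avg_prediction in zip(effect["grid_values"][0], effect["average"][0]):
    print(f"bmi={value:.3f} -> average prediction {avg_prediction:.1f}")
```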


NOTE: Feature Impact and Feature Effects are only calculated for certain Machine Learning models.